# Infrastructure Crossplane Provisioner
 
This repository contains the resources required to set up the Infrastructure Provisioner. The required components are the following:
 
 
* [Crossplane](https://docs.crossplane.io/latest/)
 
* [ArgoCD](https://argo-cd.readthedocs.io/en/stable/)
 
* [Gitea](https://about.gitea.com/products/gitea/)
 
* [ArgoEvents](https://argoproj.github.io/argo-events/)
 
* [ArgoWorkflows](https://argoproj.github.io/argo-workflows/)
 
 
## Installation
 
 
The installation uses [Helm](https://helm.sh/docs/intro/install/) and requires a running Kubernetes cluster.
 
 
### **Warning**
 
> Crossplane is a Kubernetes control plane that manages cluster-level resources. It is therefore standard industry practice to operate it in a dedicated cluster, together with the other auxiliary plugins required to automate it.
>
> Installation in a cluster that also orchestrates application Pods is possible, but not recommended.
>
> **DO NOT** attempt to perform multiple installations of Crossplane in the same cluster: this will cause (potentially costly!) issues with the cloud resource provisioning process and will result in a broken installation that requires manual intervention to clean up.
 
 
 
Two charts need to be installed in the cluster:
 
* `dependencies` installs and configures the required Kubernetes controllers and CRDs
 
* `resources` installs the resources that implement the provisioner logic
 
 
**Note:** Currently, the charts have to be installed as-is, with all their listed dependencies. Do not try to replace a dependency with one you already have running, such as an existing Gitea instance. Decoupling of dependencies is still a work in progress!
 
 
### Dependencies Install
 
 
To install the dependencies module, update the file `charts/dependencies/values.env.yaml` with the parameters for your environment:
 
 
- `GITEA_STORAGE_CLASS`: the Kubernetes storage class used to create the Gitea persistent volume;
 
- `ARGO_CD_CRDS_INSTALL`: whether to install the ArgoCD CRDs; disable this to avoid resource conflicts when installing into a cluster that already has an ArgoCD installation;
 
- `GITEA_USERNAME`: Gitea admin username;
 
- `GITEA_PASSWORD`: Gitea admin password;
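
A `values.env.yaml` for the dependencies chart might look like the following sketch. This assumes a flat `key: value` layout matching the parameter names above; every value shown is a placeholder, not a real default:

```yaml
# charts/dependencies/values.env.yaml -- illustrative placeholder values only
GITEA_STORAGE_CLASS: standard      # storage class for the Gitea persistent volume
ARGO_CD_CRDS_INSTALL: true         # set to false if ArgoCD CRDs already exist in the cluster
GITEA_USERNAME: gitea_admin        # placeholder admin username
GITEA_PASSWORD: changeme           # placeholder; use a strong secret in practice
```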
 
 
Execute the installation with the following commands, where `$NS` is the target namespace:
 
```cmd
helm dep up charts/dependencies
helm upgrade --install provisioner-dependencies --create-namespace -n $NS charts/dependencies -f charts/dependencies/values.yaml -f charts/dependencies/values.env.yaml
```
 
 
### Resources Install
 
 
To install the resources module, update the file `charts/resources/values.env.yaml` with the parameters for your environment:
 
 
- `KAFKA_ENDPOINT`: Kafka broker endpoint URL and port;
 
- `KAFKA_USERNAME`: Kafka username used to connect to the broker;
 
- `KAFKA_PASSWORD`: Kafka password used to connect to the broker;
 
- `IONOS_TOKEN`: IONOS API token used by the Crossplane IONOS provider to create the cloud environments;
 
- `GITEA_USERNAME`: Gitea admin username;
 
- `GITEA_PASSWORD`: Gitea admin password;
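
A `values.env.yaml` for the resources chart might look like the following sketch. As above, this assumes a flat `key: value` layout; all values are placeholders:

```yaml
# charts/resources/values.env.yaml -- illustrative placeholder values only
KAFKA_ENDPOINT: kafka.example.org:9093   # placeholder broker URL and port
KAFKA_USERNAME: provisioner              # placeholder Kafka user
KAFKA_PASSWORD: changeme                 # placeholder; use a strong secret
IONOS_TOKEN: <your-ionos-api-token>      # placeholder IONOS API token
GITEA_USERNAME: gitea_admin              # should match the dependencies install
GITEA_PASSWORD: changeme                 # should match the dependencies install
```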
 
 
Execute the installation with the following command:
 
```cmd
helm upgrade --install provisioner-resources --create-namespace -n $NS charts/resources -f charts/resources/values.yaml -f charts/resources/values.env.yaml
```
 
 
### Accessing the Services Locally
 
 
To access the ArgoCD UI for the provisioner, retrieve the initial admin password (the username is `admin` by default):
 
 
```cmd
kubectl get -n $NS secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d > argopw
```
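
The `base64 -d` step is needed because the Kubernetes API returns Secret data base64-encoded, so the jsonpath output is not directly usable. A standalone illustration (the value `hunter2` is a made-up example, not a real credential):

```shell
# Secret values returned by the Kubernetes API are base64-encoded,
# so the jsonpath output must be decoded before it can be used.
encoded=$(printf 'hunter2' | base64)    # what the API would return
printf '%s' "$encoded" | base64 -d      # prints the usable value: hunter2
```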
 
 
To access the ArgoWorkflows UI, ensure that the **cli** service account is enabled by setting `cliEnabled: true` in the resources chart `values.yaml`, then retrieve the auth token:
 
 
```cmd
echo "Bearer $(kubectl get -n $NS secret cli.service-account-token -o=jsonpath='{.data.token}' | base64 --decode)" > argowftoken
```
 
 
Port forward for each service:
 
```cmd
kubectl port-forward -n $NS svc/argocd-server 8888:443
kubectl port-forward -n $NS svc/argowf-argo-workflows-server 8777:2746
kubectl port-forward -n $NS svc/gitea-http 8333:3000
```
 
 
### Setting up an environment for local development and testing
 
Local development and testing can be done using the [kind](https://kind.sigs.k8s.io/) tool. The following commands will create a local k8s cluster. The configuration will mount a local directory into the cluster for creating persistent volumes.
 
 
```cmd
cat <<EOF > $HOME/.config/.kind
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: default
nodes:
- role: control-plane
  image: kindest/node:v1.31.1
  extraMounts:
  - hostPath: $HOME/devtools/kind/default_volume
    containerPath: /default
EOF
kind create cluster --config=$HOME/.config/.kind
```
 
 
Once the cluster has been created, you can install the components using the `setup.sh` script. A successful install looks like this:
 
```
Creating cluster "default" ...
 ✓ Ensuring node image (kindest/node:v1.31.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-default"
You can now use your cluster with:

kubectl cluster-info --context kind-default

$ bash setup.sh

namespace/infrastructure created
Release "dependencies" does not exist. Installing it now.
NAME: dependencies
LAST DEPLOYED: Mon Jan 27 15:22:21 2025
NAMESPACE: infrastructure
STATUS: deployed
REVISION: 1
pod/ionos-cloud-crossplane-provider-ionoscloud-6920f51664b9-7fg77ll condition met
Release "resources" does not exist. Installing it now.
W0127 15:24:23.858997 310697 warnings.go:70] metadata.finalizers: "resources-finalizer.argocd.argoproj.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
NAME: resources
LAST DEPLOYED: Mon Jan 27 15:24:23 2025
NAMESPACE: infrastructure
STATUS: deployed
REVISION: 1
TEST SUITE: None
pod/dependencies-argocd-server-5f549f7d8b-g2khh condition met
pod/dependencies-gitea-6d99cc7b84-j4kc5 condition met
pod/dependencies-argo-workflows-server-5cfc85fc45-jj5hw condition met
Forwarding from 127.0.0.1:8333 -> 3000
Forwarding from 127.0.0.1:8777 -> 2746
Forwarding from 127.0.0.1:8888 -> 8080
Handling connection for 8333
```
 