diff --git a/documents/README.md b/documents/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..487dbf19961a356128dea16784ec74ad99f09af7
--- /dev/null
+++ b/documents/README.md
@@ -0,0 +1,135 @@
+# Infrastructure Crossplane Provisioner
+This repository contains the resources required to set up the Infrastructure Provisioner. The required components are the following:
+
+ * [Crossplane](https://docs.crossplane.io/latest/)
+ * [ArgoCD](https://argo-cd.readthedocs.io/en/stable/)
+ * [Gitea](https://about.gitea.com/products/gitea/)
+ * [ArgoEvents](https://argoproj.github.io/argo-events/)
+ * [ArgoWorkflows](https://argoproj.github.io/workflows/)
+
+## Installation
+
+The installation uses [Helm](https://helm.sh/docs/intro/install/) and requires a running Kubernetes cluster.
+
+### **Warning**
+> Crossplane is a Kubernetes control plane that manages cluster-level resources. It is therefore standard industry practice to operate it in a dedicated cluster, together with the other auxiliary plugins required for automating it.
+<br>Installation in a cluster which also orchestrates application Pods is possible, but not recommended.
+<br><b>DO NOT</b> attempt to perform multiple installations of Crossplane in the same cluster: this causes (potentially costly!) issues with the cloud resource provisioning process and results in a broken installation that requires manual intervention to clean up.
+
+
+Two charts need to be installed in the cluster:
+ * `dependencies` installs and configures the required k8s controllers and CRDs
+ * `resources` installs the required resources that implement the provisioner logic
+
+<b>Note: Currently, the charts have to be installed as-is with all their listed dependencies. Do not try to replace a dependency with one that you already have running, such as an already existing Gitea instance.
Decoupling of dependencies is still a work in progress!</b>
+
+### Dependencies Install
+
+To install the dependencies module, update the file `charts/dependencies/values.env.yaml` with the parameters for your environment:
+
+ - GITEA_STORAGE_CLASS: the k8s storage class used to create the gitea persistence volume;
+ - ARGO_CD_CRDS_INSTALL: whether ArgoCD should install its CRDs; disable this to avoid resource conflicts when installing in a cluster that already has an ArgoCD installation;
+ - GITEA_USERNAME: gitea admin user name;
+ - GITEA_PASSWORD: gitea admin user password;
+
+Execute the installation with the following commands:
+```cmd
+helm dep up charts/dependencies
+helm upgrade --install provisioner-dependencies --create-namespace -n $NS charts/dependencies -f charts/dependencies/values.yaml -f charts/dependencies/values.env.yaml
+```
+
+### Resources Install
+
+To install the resources module, update the file `charts/resources/values.env.yaml` with the parameters for your environment:
+
+ - KAFKA_ENDPOINT: kafka broker endpoint URL and port;
+ - KAFKA_USERNAME: kafka user name used to connect to the kafka broker;
+ - KAFKA_PASSWORD: kafka user password used to connect to the kafka broker;
+ - IONOS_TOKEN: IONOS API token used by the crossplane IONOS provider to create the cloud environments;
+ - GITEA_USERNAME: gitea admin user name;
+ - GITEA_PASSWORD: gitea admin user password;
+
+Execute the installation with the following commands:
+```cmd
+helm upgrade --install provisioner-resources --create-namespace -n $NS charts/resources -f charts/resources/values.yaml -f charts/resources/values.env.yaml
+```
+
+### Accessing the Services Locally
+
+To access the ArgoCD UI for the provisioner, retrieve the initial admin password. The username is "admin" by default.
+
+```cmd
+kubectl get -n $NS secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d > argopw
+```
+
+To access the ArgoWorkflows UI, ensure that the <b>cli</b> service account is enabled by setting `cliEnabled: true` in the resources helm chart values.yaml, then retrieve the auth token.
+
+```cmd
+echo "Bearer $(kubectl get -n $NS secret cli.service-account-token -o=jsonpath='{.data.token}' | base64 --decode)" > argowftoken
+```
+
+Port forward for each service:
+```cmd
+kubectl port-forward -n $NS svc/argocd-server 8888:443
+kubectl port-forward -n $NS svc/argowf-argo-workflows-server 8777:2746
+kubectl port-forward -n $NS svc/gitea-http 8333:3000
+```
+
+### Setting up an environment for local development and testing
+Local development and testing can be done using the [kind](https://kind.sigs.k8s.io/) tool. The following commands will create a local k8s cluster. The configuration mounts a local directory into the cluster for creating persistent volumes.
+
+```cmd
+cat <<EOF > $HOME/.config/.kind
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+name: default
+nodes:
+- role: control-plane
+  image: kindest/node:v1.31.1
+  extraMounts:
+  - hostPath: $HOME/devtools/kind/default_volume
+    containerPath: /default
+EOF
+kind create cluster --config=$HOME/.config/.kind
+```
+
+Once the cluster has finished creating, you can install the components using the `setup.sh` script. A successful install looks like this:
+```
+Creating cluster "default" ...
+ ✓ Ensuring node image (kindest/node:v1.31.1) 🖼
+ ✓ Preparing nodes 📦
+ ✓ Writing configuration 📜
+ ✓ Starting control-plane 🕹️
+ ✓ Installing CNI 🔌
+ ✓ Installing StorageClass 💾
+Set kubectl context to "kind-default"
+You can now use your cluster with:
+
+kubectl cluster-info --context kind-default
+
+$ bash setup.sh
+
+namespace/infrastructure created
+Release "dependencies" does not exist. Installing it now.
+NAME: dependencies
+LAST DEPLOYED: Mon Jan 27 15:22:21 2025
+NAMESPACE: infrastructure
+STATUS: deployed
+REVISION: 1
+pod/ionos-cloud-crossplane-provider-ionoscloud-6920f51664b9-7fg77ll condition met
+Release "resources" does not exist. Installing it now.
+W0127 15:24:23.858997  310697 warnings.go:70] metadata.finalizers: "resources-finalizer.argocd.argoproj.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
+NAME: resources
+LAST DEPLOYED: Mon Jan 27 15:24:23 2025
+NAMESPACE: infrastructure
+STATUS: deployed
+REVISION: 1
+TEST SUITE: None
+pod/dependencies-argocd-server-5f549f7d8b-g2khh condition met
+pod/dependencies-gitea-6d99cc7b84-j4kc5 condition met
+pod/dependencies-argo-workflows-server-5cfc85fc45-jj5hw condition met
+Forwarding from 127.0.0.1:8333 -> 3000
+Forwarding from 127.0.0.1:8777 -> 2746
+Forwarding from 127.0.0.1:8888 -> 8080
+Handling connection for 8333
+```
\ No newline at end of file
diff --git a/documents/UserGuide.md b/documents/UserGuide.md
new file mode 100644
index 0000000000000000000000000000000000000000..7d608bb3ba01d26d5059a1787c5129402a0fe904
--- /dev/null
+++ b/documents/UserGuide.md
@@ -0,0 +1,121 @@
+## Crossplane Infrastructure Provisioner User Guide
+
+### Manual triggering
+
+If you wish to trigger the provisioning or de-provisioning process manually, you can do so without the API by accessing the internal components of the provisioner. This is useful during debugging and testing, and can be done with the following steps:
+
+#### 1. Clone the GitOps repositories used by the provisioner. Ensure the Gitea pod is accessible through a port-forward or ingress. For this step, you can also use an editor like VSCode to clone the repositories.
+
+```sh
+mkdir repos
+cd repos
+git clone http://127.0.0.1:8333/gitops_test/data-repo.git
+git clone http://127.0.0.1:8333/gitops_test/management-repo.git
+```
+1.1 In the data-repo repository, create the following path under the `claims` directory: `claim_1/claim_1.yaml`
+
+```sh
+cd data-repo
+mkdir claims/claim_1
+touch claims/claim_1/claim_1.yaml
+```
+
+1.2 Add the following contents to the `claim_1.yaml` file. Make sure to replace the placeholders according to your setup.
+
+```yaml
+apiVersion: platform.example.org/v1alpha1
+kind: ServerInstance
+metadata:
+  namespace: <use your provisioner installation namespace>
+  name: provisioned-manually
+  labels:
+    uuid: '1'
+    reference-kind: xserversinstances
+spec:
+  parameters:
+    datacenterName: provisioned-manually
+    datacenterDescription: provisioned-manually
+    datacenterLocation: de/txl
+    serverName: server
+    cores: 2
+    ram: 2048
+    cpuFamily: INTEL_ICELAKE
+    cloudConfig: I2Nsb3VkLWNvbmZpZwpob3N0bmFtZTogZGVmYXVsdC1zZXJ2ZXIKc3NoX3B3YXV0aDogdHJ1ZQpjaHBhc3N3ZDoKICBleHBpcmU6IGZhbHNlCgp1c2VyczoKICAtIG5hbWU6IGRlZmF1bHQKICAtIG5hbWU6IHVidW50dQogICAgcGFzc3dkOiAkNSRyb3VuZHM9NDA5NiRHQnI1a3kyVmpBZzFkejBWJHlvQWxabjlZdThnLkZLR2pacHpYM1E3Z1dWZjZjNVlGWEtKcWg2VGZJUEEKICAgIHNoZWxsOiAvYmluL2Jhc2gKICAgIGxvY2tfcGFzc3dkOiBmYWxzZQogICAgc3VkbzogQUxMPShBTEwpIE5PUEFTU1dEOkFMTAogICAgZ3JvdXBzOiB1c2VycywgYWRtaW4sIHN1ZG8KICAtIG5hbWU6IHVidW50dQogICAgcGFzc3dkOiAkNSRyb3VuZHM9NDA5NiRHQnI1a3kyVmpBZzFkejBWJHlvQWxabjlZdThnLkZLR2pacHpYM1E3Z1dWZjZjNVlGWEtKcWg2VGZJUEEKICAgIHNoZWxsOiAvYmluL2Jhc2gKICAgIGxvY2tfcGFzc3dkOiBmYWxzZQogICAgc3VkbzogQUxMPShBTEwpIE5PUEFTU1dEOkFMTAogICAgZ3JvdXBzOiB1c2VycywgYWRtaW4sIHN1ZG8KCnJ1bmNtZDoKICAtIGVjaG8gInJlZ2VuZXJhdGluZyBob3N0IGtleXMiCiAgLSBybSAtZiAvZXRjL3NzaC9zc2hfaG9zdF8qCiAgLSBzc2gta2V5Z2VuIC1BCiAgLSBzeXN0ZW1jdGwgcmVzdGFydCBzc2hkCgpkZWJ1ZzogdHJ1ZQpvdXRwdXQ6CiAgYWxsOiAifCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1kZWJ1Zy5sb2ciCmZpbmFsX21lc3NhZ2U6ICJEZWZhdWx0IFZNIENsb3VkaW5pdCBkb25lIgo=
+    providerConfig: <your-namespace>-ionos-pc
+```
+
+1.3 Commit and push the changes. If prompted for a git username and password, use the credentials GITEA_USERNAME and GITEA_PASSWORD set in the `values.env.yaml` file.
+
+```sh
+git add *
+git commit -m "Manual provisioning"
+git push
+```
+
+1.4 Switch to the management-repo repository and create the following path under the `applications` directory: `application_1/application_1.yaml`
+
+```sh
+cd ../management-repo
+mkdir applications/application_1
+touch applications/application_1/application_1.yaml
+```
+
+1.5 Add the following content to the `application_1.yaml` file. Make sure to replace the placeholders according to your setup.
+
+```yaml
+apiVersion: argoproj.io/v1alpha1
+kind: Application
+metadata:
+  name: crossplane-claim-1
+  namespace: <use your provisioner installation namespace>
+  finalizers:
+    - resources-finalizer.argocd.argoproj.io
+  labels:
+    track-events: claim-application
+    claim-uuid: "1"
+    claim-kind: "xserversinstances"
+spec:
+  project: default
+  source:
+    # Use your gitea server address, set via giteaUrl in the values.env.yaml file
+    repoURL: http://<replace with your own giteaUrl>/data-repo.git
+    path: claims/claim_1
+    targetRevision: master
+  destination:
+    server: https://kubernetes.default.svc
+  syncPolicy:
+    automated:
+      selfHeal: true
+      prune: true
+      allowEmpty: true
+```
+
+1.6 Commit and push the changes. If prompted for a git username and password, use the credentials GITEA_USERNAME and GITEA_PASSWORD set in the `values.env.yaml` file.
+
+```sh
+git add *
+git commit -m "Manual provisioning"
+git push
+```
+
+##### Note: To trigger manual de-provisioning, simply delete the created `claim_1` and `application_1` directories in their respective repositories and submit the changes. ArgoCD will automatically detect the changes and trigger the process.
+
+#### Optionally: Access the ArgoCD UI and refresh the `claim-manager` Application to speed up the provisioning or de-provisioning process.
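+
+The manual de-provisioning described in the note above can be sketched as the following sequence. This is an illustrative example, assuming the repositories were cloned as in step 1 and that the `claim_1` and `application_1` directories from the previous steps exist:
+
+```sh
+# In the data repository: remove the claim directory
+cd data-repo
+git rm -r claims/claim_1
+git commit -m "Manual de-provisioning"
+git push
+
+# In the management repository: remove the matching Application
+cd ../management-repo
+git rm -r applications/application_1
+git commit -m "Manual de-provisioning"
+git push
+```
+
+After both pushes, ArgoCD detects the removals and starts the de-provisioning process.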
+
+### Manual cleanup of internal resources
+
+Note: Under normal circumstances this should not be necessary, but in case of broken installations or other issues, there are several cluster-level resources which need to be cleaned up.
+
+1. First, depending on your method of installation, remove the crossplane pods or their respective deployments. This should be done in the namespace where crossplane was installed.
+
+2. Make note of the configurations and configurationrevisions and delete them. You may need to patch the finalizer for deletion to work. As these are cluster-level resources which configure the Crossplane controller, other resources will be cleaned up automatically.
+```sh
+kubectl get configurations.pkg.crossplane.io
+kubectl get configurationrevisions.pkg.crossplane.io
+```
+
+3. Make note of any crossplane managed resources that may be stuck. This only occurs if the providerConfig of the crossplane provider that created these resources is improperly removed, or if the auth secret attached to it is improperly changed or removed. You may need to patch the finalizer for deletion to work.
+```sh
+kubectl get managed
+```
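+
+The finalizer patching mentioned in steps 2 and 3 can be done with `kubectl patch`. A minimal sketch, assuming a stuck Configuration named `example-configuration` (replace it with a name reported by the `kubectl get` commands above):
+
+```sh
+# Clear the finalizers so a pending deletion can complete
+kubectl patch configurations.pkg.crossplane.io example-configuration \
+  --type merge -p '{"metadata":{"finalizers":[]}}'
+
+# Delete the resource if it is not already marked for deletion
+kubectl delete configurations.pkg.crossplane.io example-configuration
+```
+
+The same pattern applies to configurationrevisions and stuck managed resources. Be aware that clearing finalizers on managed resources skips the provider's external cleanup, so the corresponding cloud resources may have to be removed manually.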