Commit 836f6ff0 authored by Vara Bonthu, committed by GitHub

Readme updated with the new module version and features (#32)

* Readme updated with the new module version and features

* Updated readme with additional details

* Added assume role document link to readme

* Resource names updated with underscore
# aws-eks-accelerator-for-terraform
# Main Purpose
This project provides a framework for deploying best-practice multi-tenant [EKS Clusters](https://aws.amazon.com/eks) with Kubernetes Addons, provisioned via [Hashicorp Terraform](https://www.terraform.io/) and [Helm charts](https://helm.sh/) on [AWS](https://aws.amazon.com/).
# Overview
The AWS EKS Accelerator for Terraform module helps you provision [EKS Clusters](https://aws.amazon.com/eks), [Managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) with [On-Demand](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-on-demand-instances.html) and [Spot Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html), [AWS Fargate profiles](https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html), and all the necessary Kubernetes add-ons for a production-ready EKS cluster. The [Terraform Helm provider](https://github.com/hashicorp/terraform-provider-helm) is used to deploy common Kubernetes add-ons with publicly available [Helm Charts](https://artifacthub.io/). This project leverages the official [terraform-aws-eks](https://github.com/terraform-aws-modules/terraform-aws-eks) module to create EKS Clusters.
This framework is intended to help you design a config-driven solution: you can create EKS clusters for various environments and AWS accounts across multiple regions, with a **unique Terraform configuration and state file** per EKS cluster.
The top-level `deploy` folder provides an example of how you can structure your folders and files to define multiple EKS Cluster environments and consume this accelerator module. This approach is suitable for large projects, with a clearly defined subdirectory and file structure.
You can modify this structure to suit your requirements: define a unique configuration for each EKS Cluster, making this module the central source of truth. Note that the `deploy` folder can be moved to a dedicated repo that consumes this module through a `main.tf` file ([see example file here](deploy/live/preprod/eu-west-1/application/dev/dev.tfvars)).
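A minimal sketch of such a `main.tf`, assuming the module is consumed straight from the GitHub repo; the git source URL and pinned ref are assumptions, and the inputs shown are a subset of the variables used in the tfvars examples later in this README:

```hcl
# Hypothetical main.tf in a dedicated deploy repo; the source URL and ref are
# assumptions, and the inputs mirror the tfvars examples in this README.
module "eks_accelerator" {
  source = "git::https://github.com/aws-samples/aws-eks-accelerator-for-terraform.git?ref=main"

  create_eks         = true
  kubernetes_version = var.kubernetes_version

  enable_managed_nodegroups = var.enable_managed_nodegroups
  managed_node_groups       = var.managed_node_groups

  enable_fargate   = var.enable_fargate
  fargate_profiles = var.fargate_profiles
}
```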
e.g., folder/file structure for defining multiple clusters:

```
├── deploy
│   └── live
│       ├── preprod
│       │   └── eu-west-1
│       │       └── application
│       │           ├── dev
│       │           │   ├── backend.conf
│       │           │   ├── dev.tfvars
│       │           │   ├── main.tf
│       │           │   ├── variables.tf
│       │           │   └── outputs.tf
│       │           └── test
│       │               ├── backend.conf
│       │               └── test.tfvars
│       └── prod
│           └── eu-west-1
│               └── application
│                   └── prod
│                       ├── backend.conf
│                       ├── prod.tfvars
│                       ├── main.tf
│                       ├── variables.tf
│                       └── outputs.tf
```
Each folder under `live/<region>/application` represents an EKS cluster environment (e.g., dev, test, load, etc.).
This folder contains `backend.conf` and `<env>.tfvars`, used to create a unique Terraform state for each cluster environment.
The Terraform backend configuration can be updated in `backend.conf`, and common cluster configuration variables in `<env>.tfvars`.
* `eks.tf` - [EKS Cluster](https://docs.aws.amazon.com/eks/latest/userguide/clusters.html) resources and [Amazon EKS Addon](https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html) resources
* `fargate-profiles.tf` - [AWS EKS Fargate profiles](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html)
* `managed-nodegroups.tf` - [Amazon Managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) resources
* `self-managed-nodegroups.tf` - [Self-managed nodes](https://docs.aws.amazon.com/eks/latest/userguide/worker.html) resources
* `kubernetes-addons.tf` - contains resources to deploy multiple Kubernetes Addons
* `vpc.tf` - VPC and endpoints resources
* `modules` - contains all the AWS resource sub-modules used by this module
* `kubernetes-addons` - contains all the Helm charts and Kubernetes resources for deploying Kubernetes add-ons
* `examples` - contains sample template files with `<env>.tfvars` which can be used to deploy an EKS cluster with multiple node groups and Kubernetes add-ons
# EKS Cluster Deployment Options
This module provisions the following EKS resources:
## VPC resources
1. [VPC](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html)
2. [Subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html)
3. [NAT Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html)
4. [Internet Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html)
NOTE: VPC/Subnet creation can be disabled by setting `create_vpc = false` in the TFVARS file and importing the existing VPC resources.
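For instance, a hedged sketch of the relevant `<env>.tfvars` entries — `create_vpc` is the documented toggle, while the `vpc_id` and subnet-ID variable names below are illustrative placeholders (check `variables.tf` for the exact names):

```hcl
# Reuse an existing VPC instead of creating one.
create_vpc = false

# Hypothetical variable names for importing existing network resources.
vpc_id             = "vpc-0123456789abcdef0"
private_subnets_id = ["subnet-aaaa", "subnet-bbbb", "subnet-cccc"]
```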
## EKS Cluster resources
1. [EKS Cluster with multiple networking options](https://aws.amazon.com/blogs/containers/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes/)
1. [Fully Private EKS Cluster](https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html)
2. [Public + Private EKS Cluster](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html)
3. [Public Cluster](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html)
2. [Amazon EKS Addons](https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html) -
- [CoreDNS](https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html)
- [Kube-Proxy](https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html)
- [VPC-CNI](https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html)
3. [Managed Node Groups with On-Demand](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) - AWS Managed Node Groups with On-Demand Instances
4. [Managed Node Groups with Spot](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) - AWS Managed Node Groups with Spot Instances
5. [AWS Fargate Profiles](https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html) - AWS Fargate Profiles
6. [Launch Templates](https://aws.amazon.com/blogs/containers/introducing-launch-template-and-custom-ami-support-in-amazon-eks-managed-node-groups/) - Deployed through launch templates to Managed Node Groups
7. [Bottlerocket OS](https://github.com/bottlerocket-os/bottlerocket) - Managed Node Groups with Bottlerocket OS and Launch Templates
8. [Amazon Managed Service for Prometheus (AMP)](https://aws.amazon.com/prometheus/) - AMP makes it easy to monitor containerized applications at scale
9. [Self-managed Node Group with Windows support](https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html) - Ability to create a self-managed node group for Linux or Windows workloads.
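As a sketch, the Amazon EKS Addons above are gated by boolean flags; `enable_vpc_cni_addon` and `enable_kube_proxy_addon` appear verbatim in this commit's module source, while the CoreDNS flag name is assumed by analogy (whether these are surfaced as top-level tfvars may vary):

```hcl
# Toggle the Amazon EKS managed add-ons.
enable_vpc_cni_addon    = true
enable_kube_proxy_addon = true
enable_coredns_addon    = true # name assumed by analogy with the flags above
```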
## Kubernetes Addons using [Helm Charts](https://helm.sh/docs/topics/charts/)
1. [Metrics Server](https://github.com/kubernetes-sigs/metrics-server)
2. [Cluster Autoscaler](https://github.com/Kubernetes/autoscaler)
3. [AWS LB Ingress Controller](https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html)
4. [Traefik Ingress Controller](https://doc.traefik.io/traefik/providers/Kubernetes-ingress/)
5. [Nginx Ingress Controller](https://kubernetes.github.io/ingress-nginx/)
6. [FluentBit to CloudWatch for Nodes](https://github.com/aws/aws-for-fluent-bit)
7. [FluentBit to CloudWatch for Fargate Containers](https://aws.amazon.com/blogs/containers/fluent-bit-for-amazon-eks-on-aws-fargate-is-here/)
8. [Agones](https://agones.dev/site/) - Host, Run and Scale dedicated game servers on Kubernetes
9. [Prometheus](https://github.com/prometheus-community/helm-charts)
10. [Kube-state-metrics](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics)
11. [Alert-manager](https://github.com/prometheus-community/helm-charts/tree/main/charts/alertmanager)
12. [Prometheus-node-exporter](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-node-exporter)
13. [Prometheus-pushgateway](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-pushgateway)
14. [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-collector)
15. [AWS Distro for OpenTelemetry Collector (AWS OTel Collector)](https://github.com/aws-observability/aws-otel-collector)
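All of the above sit behind simple boolean toggles; a hedged `<env>.tfvars` sketch using flag names that appear elsewhere in this commit (defaults and adjacent settings may differ):

```hcl
# Pick and choose Kubernetes add-ons per cluster environment.
metrics_server_enable             = true
cluster_autoscaler_enable         = true
aws_lb_ingress_controller_enable  = true
traefik_ingress_controller_enable = false
nginx_ingress_controller_enable   = false
aws_for_fluent_bit_enable         = true
ekslog_retention_in_days          = 90
fargate_fluent_bit_enable         = true
prometheus_enable                 = false
agones_enable                     = false
aws_open_telemetry_enable         = false
opentelemetry_enable              = false
```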
# Node Group Modules
This module contains dedicated sub-modules for creating [AWS Managed Node Groups](modules/aws-eks-managed-node-groups), [Self-managed Node groups](modules/aws-eks-self-managed-node-groups) and [Fargate profiles](modules/aws-eks-fargate-profiles).
Mixed node groups with Fargate profiles can be defined simply as map variables in `<env>.tfvars`.
This approach allows you to add or remove node groups and Fargate profiles by simply adding or removing map entries in the existing `<env>.tfvars`. The AWS auth ConfigMap is handled by this module to ensure new node groups successfully join the cluster.
Please refer to `dev.tfvars` for a full example.
**Managed Node Groups Example**
```hcl
enable_managed_nodegroups = true
managed_node_groups = {
  mg_m4 = {
    # 1> Node Group configuration
    node_group_name        = "managed-ondemand"
    create_launch_template = true              # false will use the default launch template
    custom_ami_type        = "amazonlinux2eks" # amazonlinux2eks or windows or bottlerocket
    public_ip              = false             # Enables public IP for EC2 instances; only for public subnets used in launch templates
    pre_userdata           = <<-EOT
      yum install -y amazon-ssm-agent
      systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent
    EOT

    # 2> Node Group scaling configuration
    desired_size    = 3
    max_size        = 3
    min_size        = 3
    max_unavailable = 1 # or percentage = 20

    # 3> Node Group compute configuration
    ami_type       = "AL2_x86_64" # AL2_x86_64, AL2_x86_64_GPU, AL2_ARM_64, CUSTOM
    capacity_type  = "ON_DEMAND"  # ON_DEMAND or SPOT
    instance_types = ["m4.large"] # List of instances used only for SPOT type
    disk_size      = 50

    # 4> Node Group network configuration
    subnet_type = "private" # private or public
    subnet_ids  = []        # Define your private/public subnets list with comma-separated subnet_ids = ['subnet1','subnet2','subnet3']

    k8s_taints = []
    k8s_labels = {
      Environment = "preprod"
      Zone        = "dev"
      WorkerType  = "ON_DEMAND"
    }
    additional_tags = {
      ExtraTag    = "m4-on-demand"
      Name        = "m4-on-demand"
      subnet_type = "private"
    }
    create_worker_security_group = true
  },
  mg_m5 = {...}
}
```
**Fargate Profiles Example**
```hcl
enable_fargate = true

fargate_profiles = {
  default = {
    fargate_profile_name = "default"
    fargate_profile_namespaces = [{
      namespace = "default"
      k8s_labels = {
        Environment = "preprod"
        Zone        = "dev"
        env         = "fargate"
      }
    }]
    subnet_ids = [] # Provide list of private subnets
    additional_tags = {
      ExtraTag = "Fargate"
    }
  },
  finance = {...}
}
```
# Kubernetes Addons Module
The Kubernetes Addons module within this framework allows you to deploy Kubernetes add-ons using the Terraform Helm and Kubernetes providers with a simple **true/false** toggle in `<env>.tfvars`.
e.g., `<env>.tfvars` config for enabling the AWS LB Ingress Controller. Refer to the example [dev.tfvars](deploy/live/preprod/eu-west-1/application/dev/dev.tfvars) to enable other Kubernetes add-ons.
```hcl
#---------------------------------------------------------//
# ENABLE AWS LB INGRESS CONTROLLER
#---------------------------------------------------------//
aws_lb_ingress_controller_enable = true
aws_lb_image_repo_name           = "amazon/aws-load-balancer-controller"
aws_lb_image_tag                 = "v2.2.4"
aws_lb_helm_chart_version        = "1.2.7"
aws_lb_helm_repo_url             = "https://aws.github.io/eks-charts"
aws_lb_helm_helm_chart_name      = "aws-load-balancer-controller"
```
This module is currently configured to fetch Helm charts from open source repos and Docker images from Docker Hub/Public ECR repos, which requires an outbound Internet connection from your EKS Cluster. Alternatively, you can download the Docker images for each add-on, push them to your AWS ECR repo, and access them within the VPC using an ECR endpoint.
You can find a README for each add-on module with instructions on how to download the images from Docker Hub or third-party repos and upload them to your private ECR repo. This module provides the option to use internal Helm and Docker image repos via `<env>.tfvars`.
For example, see the [AWS LB Ingress Controller README](kuberenets-addons/lb-ingress-controller/README.md).
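A hedged sketch of the relevant `<env>.tfvars` entry — `private_container_repo_url` is a module variable visible later in this commit, and the ECR URL value is a placeholder:

```hcl
# Pull add-on images from a private ECR mirror instead of Docker Hub;
# the account ID and region in the value are placeholders.
private_container_repo_url = "123456789012.dkr.ecr.eu-west-1.amazonaws.com"
```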
## Ingress Controller Modules
Ingress is an API object that defines the traffic routing rules (e.g., load balancing, SSL termination, path-based routing, protocol), whereas the Ingress Controller is the component responsible for fulfilling those requests.
* [ALB Ingress Controller](kuberenets-addons/lb-ingress-controller/README.md) can be deployed by enabling the add-on in `<env>.tfvars` file.
**AWS LB Ingress Controller** triggers the creation of a load balancer and the necessary supporting AWS resources whenever a Kubernetes user declares an Ingress resource in the cluster.
[ALB Docs](https://Kubernetes-sigs.github.io/aws-load-balancer-controller/latest/)
`aws_lb_ingress_controller_enable = true`
* [Traefik Ingress Controller](kuberenets-addons/traefik-ingress/README.md) can be deployed by enabling the add-on in `<env>.tfvars` file.
**Traefik is an open source Kubernetes Ingress Controller**. The Traefik Kubernetes Ingress provider is a Kubernetes Ingress controller; that is to say, it manages access to cluster services by supporting the Ingress specification. More details about [Traefik can be found here](https://doc.traefik.io/traefik/providers/Kubernetes-ingress/).
`traefik_ingress_controller_enable = true`
* [Nginx Ingress Controller](kuberenets-addons/nginx-ingress/README.md) can be deployed by enabling the add-on in `<env>.tfvars` file.
**Nginx is an open source Kubernetes Ingress Controller**. The Nginx Kubernetes Ingress provider is a Kubernetes Ingress controller; that is to say, it manages access to cluster services by supporting the Ingress specification. More details about [Nginx can be found here](https://kubernetes.github.io/ingress-nginx/).

`nginx_ingress_controller_enable = true`
## Autoscaling Modules
The **Cluster Autoscaler** and **Metrics Server** Helm modules are deployed by default with the EKS Cluster.
* [Cluster Autoscaler](kuberenets-addons/cluster-autoscaler/README.md) can be deployed by enabling the add-on in `<env>.tfvars` file.
The Kubernetes Cluster Autoscaler automatically adjusts the number of nodes in your cluster when pods fail or are rescheduled onto other nodes. It is not deployed by default in EKS clusters.
Specifically, the AWS Cloud Provider implementation within the Kubernetes Cluster Autoscaler controls the **DesiredReplicas** field of Amazon EC2 Auto Scaling groups.
The Cluster Autoscaler is typically installed as a **Deployment** in your cluster. It uses leader election to ensure high availability, but scaling is done by a single replica at a time.
`cluster_autoscaler_enable = true`
* [Metrics Server](kuberenets-addons/metrics-server/README.md) can be deployed by enabling the add-on in `<env>.tfvars` file.
The Kubernetes Metrics Server, used to gather metrics such as cluster CPU and memory usage over time, is not deployed by default in EKS clusters.
`metrics_server_enable = true`
## Logging and Monitoring
**FluentBit** is an open source log processor and forwarder which allows you to collect data such as metrics and logs from different sources, enrich them with filters, and send them to multiple destinations.
* [aws-for-fluent-bit](kuberenets-addons/aws-for-fluent-bit/README.md) can be deployed by enabling the add-on in `<env>.tfvars` file.
AWS provides a Fluent Bit image with plugins for both CloudWatch Logs and Kinesis Data Firehose. The AWS for Fluent Bit image is available on the Amazon ECR Public Gallery.
For more details, see [aws-for-fluent-bit](https://gallery.ecr.aws/aws-observability/aws-for-fluent-bit) on the Amazon ECR Public Gallery.
`aws_for_fluent_bit_enable = true`
* [fargate-fluentbit](kuberenets-addons/fargate-fluentbit) can be deployed by enabling the add-on in `<env>.tfvars` file.
This module ships the Fargate container logs to CloudWatch.
`fargate_fluent_bit_enable = true`
## Bottlerocket OS
[Bottlerocket](https://aws.amazon.com/bottlerocket/) is an open source operating system specifically designed for running containers. The Bottlerocket build system is based on Rust. It's a container host OS with no software or package managers beyond what is needed for running containers, hence it is very lightweight and secure. Container-optimized operating systems are ideal when you need to run applications in Kubernetes with minimal setup, do not want to worry about security or updates, or want OS support from the cloud provider. Container operating systems apply updates transactionally.
Bottlerocket runs two container runtimes: a control container, **on** by default, used for AWS Systems Manager and remote API access; and an admin container, **off** by default, for deep debugging and exploration.
Bottlerocket [launch template userdata](modules/aws-eks-managed-node-groups/templates/userdata-bottlerocket.tpl) uses the TOML format with key-value pairs.
Remote API access is available via the SSM agent. You can launch a troubleshooting container via user data: `[settings.host-containers.admin] enabled = true`.
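A hedged node-group sketch for Bottlerocket, reusing keys from the managed node groups example above (`custom_ami_type = "bottlerocket"` is one of the documented options); the admin container can then be enabled through the TOML user data line quoted above:

```hcl
# Bottlerocket managed node group; keys mirror the managed_node_groups example.
managed_node_groups = {
  brkt_m5 = {
    node_group_name        = "managed-bottlerocket"
    create_launch_template = true
    custom_ami_type        = "bottlerocket" # amazonlinux2eks | windows | bottlerocket
    instance_types         = ["m5.large"]
    desired_size           = 3
    max_size               = 3
    min_size               = 3
    subnet_type            = "private"
  }
}
```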
### Features
* [Secure](https://github.com/bottlerocket-os/bottlerocket/blob/develop/SECURITY_FEATURES.md) - Opinionated, specialized and highly secured
Ensure that you have installed the following tools on your Mac or Windows laptop:
1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
2. [aws-iam-authenticator](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html)
3. [kubectl](https://Kubernetes.io/docs/tasks/tools/)
4. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
## Deployment Steps
The following steps walk you through the deployment of the example [DEV cluster](deploy/live/preprod/eu-west-1/application/dev/dev.tfvars) configuration. This config deploys a private EKS cluster with public and private subnets.
Two managed node groups with On-Demand and Spot instances, along with one Fargate profile for the default namespace, are placed in private subnets. The ALB created by the AWS LB Ingress Controller is placed in public subnets.
It also deploys a few Kubernetes apps: AWS LB Ingress Controller, Metrics Server, Cluster Autoscaler, aws-for-fluent-bit CloudWatch logging for managed node groups, FluentBit CloudWatch logging for Fargate, etc.
### Provision VPC (optional) and EKS cluster with the selected Kubernetes add-ons
#### Step1: Clone the repo using the command below
```shell script
git clone https://github.com/aws-samples/aws-eks-accelerator-for-terraform.git
```
#### Step2: Update `<env>.tfvars` file
Update the `~/aws-eks-accelerator-for-terraform/deploy/live/preprod/eu-west-1/application/dev/dev.tfvars` file with the instructions specified in the file (or use the default values).
You can choose to use an existing VPC ID and Subnet IDs, or create a new VPC and subnets by providing CIDR ranges in the `dev.tfvars` file.
#### Step3: Update Terraform backend config file
Update `~/aws-eks-accelerator-for-terraform/deploy/live/preprod/eu-west-1/application/dev/backend.conf` with your local directory path or S3 path.
The [state.tf](state.tf) file contains the backend config.
The local Terraform state backend config variables are shown below. It's highly recommended to use remote state in S3 instead of a local backend.
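A hedged sketch of what `backend.conf` might contain — `path` is the standard argument of Terraform's local backend, and `bucket`/`key`/`region` are the standard arguments of the S3 backend; all values are placeholders:

```hcl
# Local backend: state is written to a file on disk (placeholder path).
path = "local_tf_state/terraform-main.tfstate"
```

```hcl
# Remote S3 backend (recommended): placeholder bucket/key/region values.
bucket = "my-terraform-state-bucket"
key    = "eks/preprod/eu-west-1/dev/terraform.tfstate"
region = "eu-west-1"
```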
#### Step4: Assume IAM role before creating an EKS cluster
This role will become the Kubernetes Admin by default. Please see this document for [assuming a role](https://aws.amazon.com/premiumsupport/knowledge-center/iam-assume-role-cli/). For example, using the `aws-mfa` helper (retained from an earlier revision of this README):

```shell script
aws-mfa --assume-role arn:aws:iam::<ACCOUNTID>:role/<IAMROLE>
```
#### Step5: Run Terraform INIT
to initialize a working directory with configuration files
```shell script
terraform init -backend-config deploy/live/preprod/eu-west-1/application/dev/backend.conf
```
#### Step6: Run Terraform PLAN
to verify the resources created by this execution
```shell script
terraform plan -var-file deploy/live/preprod/eu-west-1/application/dev/dev.tfvars
```
#### Step7: Finally, Terraform APPLY
to create resources
```shell script
terraform apply -var-file deploy/live/preprod/eu-west-1/application/dev/<env>.tfvars
```
**Alternatively, you can use the Makefile to deploy, skipping Step5, Step6 and Step7.**
EKS Cluster details can be extracted from the terraform output or from the AWS Console.
The `examples` folder contains multiple cluster templates with pre-populated `.tfvars` files which can be used as a quick start. Reuse the templates from `examples` and follow the Deployment Steps described above.
# EKS Addons update
Amazon EKS doesn't modify any of your Kubernetes add-ons when you update a cluster to newer versions.
It's important to upgrade EKS Addons [Amazon VPC CNI](https://github.com/aws/amazon-vpc-cni-k8s), [DNS (CoreDNS)](https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html) and [KubeProxy](https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html) for each EKS release.
This [README](eks_cluster_addons_upgrade/README.md) guides you through updating the EKS Cluster and the addons to newer versions that match your EKS cluster version.
Instructions for updating an EKS cluster can be found in the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html).
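For example, a hedged sketch of pinning a managed add-on version with the `aws_eks_addon` resource this module already uses; `addon_version` is a standard argument of that resource, and the version string is a placeholder that must match your cluster version:

```hcl
# Pin the kube-proxy managed add-on to a version compatible with the cluster.
resource "aws_eks_addon" "kube_proxy_pinned" {
  cluster_name  = var.cluster_name
  addon_name    = "kube-proxy"
  addon_version = "v1.20.4-eksbuild.2" # placeholder; align with your EKS version
}
```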
# Important note
This module has been tested only with **Kubernetes v1.20**. The Kubernetes addons modules are aligned with k8s v1.20. If you want to use this code to deploy other versions of Kubernetes, ensure the Helm charts and Docker images are aligned with that k8s version.
`kubernetes_version = "1.20"` is a required variable in `<env>.tfvars`. Kubernetes is evolving rapidly, and each major version includes new features, fixes, or changes.
Always check the [Kubernetes Release Notes](https://Kubernetes.io/docs/setup/release/notes/) before updating the major version, and ensure your applications and Helm addons are updated; otherwise workloads could fail after the upgrade is complete. For actions you may need to take before upgrading, see the steps in the EKS documentation.
See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
## License
This library is licensed under the MIT-0 License. See the LICENSE file.
@@ -486,12 +486,12 @@ cluster_autoscaler_helm_version = "9.10.7"
#---------------------------------------------------------//
# ENABLE AWS LB INGRESS CONTROLLER
#---------------------------------------------------------//
aws_lb_ingress_controller_enable = false
aws_lb_image_repo_name = "amazon/aws-load-balancer-controller"
aws_lb_image_tag = "v2.2.4"
aws_lb_helm_chart_version = "1.2.7"
aws_lb_helm_repo_url = "https://aws.github.io/eks-charts"
aws_lb_helm_helm_chart_name = "aws-load-balancer-controller"
#---------------------------------------------------------//
# ENABLE PROMETHEUS
@@ -255,9 +255,9 @@ cluster_autoscaler_helm_version = "9.10.7"
#---------------------------------------------------------//
# ENABLE AWS LB INGRESS CONTROLLER
#---------------------------------------------------------//
aws_lb_ingress_controller_enable = false
aws_lb_image_tag = "v2.2.4"
aws_lb_helm_chart_version = "1.2.7"
#---------------------------------------------------------//
# ENABLE PROMETHEUS
@@ -143,7 +143,7 @@ cluster_autoscaler_enable = true
#---------------------------------------------------------//
# ENABLE ALB INGRESS CONTROLLER
#---------------------------------------------------------//
#aws_lb_ingress_controller_enable = true
#---------------------------------------------------------#
# ENABLE AWS_FLUENT-BIT
@@ -203,6 +203,6 @@ cluster_autoscaler_helm_version = "9.10.7"
//---------------------------------------------------------//
// ENABLE ALB INGRESS CONTROLLER
//---------------------------------------------------------//
aws_lb_ingress_controller_enable = true
aws_lb_image_tag = "v2.2.4"
aws_lb_helm_chart_version = "1.2.7"
@@ -214,4 +214,4 @@ cluster_autoscaler_enable = true
#---------------------------------------------------------//
# ENABLE AWS LB INGRESS CONTROLLER
#---------------------------------------------------------//
aws_lb_ingress_controller_enable = true
@@ -51,14 +51,14 @@ module "helm" {
traefik_image_repo_name = var.traefik_image_repo_name
# ------- AWS LB Controller
aws_lb_ingress_controller_enable = var.aws_lb_ingress_controller_enable
aws_lb_image_tag = var.aws_lb_image_tag
aws_lb_helm_chart_version = var.aws_lb_helm_chart_version
eks_oidc_issuer_url = module.eks.eks_cluster_oidc_issuer_url
eks_oidc_provider_arn = module.eks.oidc_provider_arn
aws_lb_helm_repo_url = var.aws_lb_helm_repo_url
aws_lb_helm_helm_chart_name = var.aws_lb_helm_helm_chart_name
aws_lb_image_repo_name = var.aws_lb_image_repo_name
# ------- Nginx Ingress Controller
nginx_ingress_controller_enable = var.nginx_ingress_controller_enable
@@ -17,8 +17,9 @@
*/
module "metrics_server" {
count = var.metrics_server_enable == true ? 1 : 0
source = "./metrics-server"
private_container_repo_url = var.private_container_repo_url
public_docker_repo = var.public_docker_repo
metric_server_helm_chart_version = var.metric_server_helm_chart_version
@@ -28,8 +29,9 @@ module "metrics_server" {
}
module "cluster_autoscaler" {
count = var.cluster_autoscaler_enable == true ? 1 : 0
source = "./cluster-autoscaler"
private_container_repo_url = var.private_container_repo_url
eks_cluster_id = var.eks_cluster_id
public_docker_repo = var.public_docker_repo
@@ -39,8 +41,9 @@ module "cluster_autoscaler" {
}
module "lb_ingress_controller" {
count = var.aws_lb_ingress_controller_enable == true ? 1 : 0
source = "./lb-ingress-controller"
private_container_repo_url = var.private_container_repo_url
clusterName = var.eks_cluster_id
eks_oidc_issuer_url = var.eks_oidc_issuer_url
@@ -54,8 +57,9 @@ module "lb_ingress_controller" {
}
module "traefik_ingress" {
count = var.traefik_ingress_controller_enable == true ? 1 : 0
source = "./traefik-ingress"
private_container_repo_url = var.private_container_repo_url
account_id = data.aws_caller_identity.current.account_id
s3_nlb_logs = var.s3_nlb_logs
@@ -67,8 +71,9 @@ module "traefik_ingress" {
}
module "nginx_ingress" {
count = var.nginx_ingress_controller_enable == true ? 1 : 0
source = "./nginx-ingress"
private_container_repo_url = var.private_container_repo_url
account_id = data.aws_caller_identity.current.account_id
public_docker_repo = var.public_docker_repo
@@ -78,8 +83,9 @@ module "nginx_ingress" {
}
module "aws-for-fluent-bit" {
count = var.aws_for_fluent_bit_enable == true ? 1 : 0
source = "./aws-for-fluent-bit"
private_container_repo_url = var.private_container_repo_url
cluster_id = var.eks_cluster_id
ekslog_retention_in_days = var.ekslog_retention_in_days
@@ -97,8 +103,9 @@ module "fargate_fluentbit" {
}
module "agones" {
count = var.agones_enable == true ? 1 : 0
source = "./agones"
public_docker_repo = var.public_docker_repo
private_container_repo_url = var.private_container_repo_url
cluster_id = var.eks_cluster_id
@@ -108,8 +115,9 @@ module "agones" {
}
module "prometheus" {
count = var.prometheus_enable == true ? 1 : 0
source = "./prometheus"
private_container_repo_url = var.private_container_repo_url
public_docker_repo = var.public_docker_repo
pushgateway_image_tag = var.pushgateway_image_tag
@@ -125,8 +133,9 @@ module "prometheus" {
}
module "cert_manager" {
count = var.cert_manager_enable == true ? 1 : 0
source = "./cert-manager"
private_container_repo_url = var.private_container_repo_url
public_docker_repo = var.public_docker_repo
cert_manager_helm_chart_version = var.cert_manager_helm_chart_version
@@ -139,8 +148,9 @@ module "cert_manager" {
}
module "windows_vpc_controllers" {
count = var.windows_vpc_controllers_enable == true ? 1 : 0
source = "./windows-vpc-controllers"
private_container_repo_url = var.private_container_repo_url
public_docker_repo = var.public_docker_repo
resource_controller_image_tag = var.windows_vpc_resource_controller_image_tag
@@ -151,8 +161,9 @@ module "windows_vpc_controllers" {
}
module "aws_opentelemetry_collector" {
count = var.aws_open_telemetry_enable == true ? 1 : 0
source = "./aws-otel-eks"
aws_open_telemetry_aws_region = var.aws_open_telemetry_aws_region == "" ? data.aws_region.current.id : var.aws_open_telemetry_aws_region
aws_open_telemetry_emitter_image = var.aws_open_telemetry_emitter_image
aws_open_telemetry_collector_image = var.aws_open_telemetry_collector_image
@@ -164,8 +175,9 @@ module "aws_opentelemetry_collector" {
}
module "opentelemetry_collector" {
count = var.opentelemetry_enable == true ? 1 : 0
source = "./opentelemetry-collector"
private_container_repo_url = var.private_container_repo_url
public_docker_repo = var.public_docker_repo
opentelemetry_command_name = var.opentelemetry_command_name
@@ -26,7 +26,7 @@ resource "kubernetes_namespace" "opentelemetry_system" {
}
}
resource "helm_release" "opentelemetry-collector" {
resource "helm_release" "opentelemetry_collector" {
name = "opentelemetry-collector"
repository = var.opentelemetry_helm_chart_url
chart = var.opentelemetry_helm_chart
@@ -33,7 +33,7 @@ variable "traefik_ingress_controller_enable" {
}
variable "lb_ingress_controller_enable" {
variable "aws_lb_ingress_controller_enable" {
type = bool
default = false
description = "Enabling LB Ingress controller on eks cluster"
@@ -17,7 +17,7 @@
*/
resource "aws_eks_addon" "vpc-cni" {
resource "aws_eks_addon" "vpc_cni" {
count = var.enable_vpc_cni_addon ? 1 : 0
cluster_name = var.cluster_name
addon_name = "vpc-cni"
@@ -41,7 +41,7 @@ resource "aws_eks_addon" "coredns" {
)
}
resource "aws_eks_addon" "kube-proxy" {
resource "aws_eks_addon" "kube_proxy" {
count = var.enable_kube_proxy_addon ? 1 : 0
cluster_name = var.cluster_name
addon_name = "kube-proxy"
resource "aws_eks_fargate_profile" "eks-fargate" {
resource "aws_eks_fargate_profile" "eks_fargate" {
cluster_name = var.eks_cluster_name
fargate_profile_name = "${var.eks_cluster_name}-${local.fargate_profiles["fargate_profile_name"]}"
pod_execution_role_arn = aws_iam_role.fargate.arn
@@ -2,7 +2,7 @@
#----------------------------------------------------------
#IAM Policy for Fargate Fluentbit
#----------------------------------------------------------
resource "aws_iam_policy" "eks-fargate-logging-policy" {
resource "aws_iam_policy" "eks_fargate_logging" {
name = "${var.eks_cluster_name}-${local.fargate_profiles["fargate_profile_name"]}"
description = "Allow fargate profiles to writ logs to CW"
@@ -37,7 +37,7 @@ resource "aws_iam_role_policy_attachment" "fargate-AmazonEKSFargatePodExecutionR
}
resource "aws_iam_role_policy_attachment" "eks-fargate-logging-policy-attach" {
policy_arn = aws_iam_policy.eks_fargate_logging.arn
role = aws_iam_role.fargate.name
}
@@ -7,5 +7,5 @@ output "eks_fargate_profile_role_name" {
output "eks_fargate_profile_id" {
description = "EKS Cluster name and EKS Fargate Profile name separated by a colon"
value = aws_eks_fargate_profile.eks_fargate.id
}
@@ -18,6 +18,7 @@ resource "aws_iam_instance_profile" "managed_ng" {
}
}
#TODO: Allow IAM policies to be passed from the tfvars file
resource "aws_iam_role_policy_attachment" "managed_ng_AmazonEKSWorkerNodePolicy" {
policy_arn = "${local.policy_arn_prefix}/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.managed_ng.name
@@ -37,12 +37,29 @@ output "configure_kubectl" {
value = var.create_eks ? "aws eks --region ${data.aws_region.current.id} update-kubeconfig --name ${module.eks.eks_cluster_id}" : "EKS Cluster not enabled"
}
output "cluster_security_group_id" {
description = "EKS Control Plane Security Group ID"
value = module.eks.eks_cluster_security_group_id
}
output "cluster_primary_security_group_id" {
description = "EKS Cluster Security group ID"
value = module.eks.eks_cluster_primary_security_group_id
}
output "worker_security_group_id" {
description = "EKS Worker Security group ID created by EKS module"
value = module.eks.eks_worker_security_group_id
}
output "amp_work_id" {
description = "AWS Managed Prometheus workspace id"
value = var.prometheus_enable ? module.aws_managed_prometheus[0].amp_workspace_id : "AMP not enabled"
}
output "amp_work_arn" {
description = "AWS Managed Prometheus workspace ARN"
value = var.prometheus_enable ? module.aws_managed_prometheus[0].service_account_amp_ingest_role_arn : "AMP not enabled"
}
output "self_managed_node_group_iam_role_arns" {
@@ -51,49 +68,46 @@ output "self_managed_node_group_iam_role_arns" {
}
output "managed_node_group_iam_role_arns" {
description = "IAM role arn's of self managed node groups"
description = "IAM role arn's of managed node groups"
value = var.create_eks && var.enable_managed_nodegroups ? values({ for nodes in sort(keys(var.managed_node_groups)) : nodes => join(",", module.managed-node-groups[nodes].manage_ng_iam_role_arn) }) : []
}
output "fargate_profiles_iam_role_arns" {
description = "IAM role arn's for Fargate Profiles"
value = var.create_eks && var.enable_fargate ? { for nodes in sort(keys(var.fargate_profiles)) : nodes => module.fargate-profiles[nodes].eks_fargate_profile_role_name } : null
}
output "managed_node_groups" {
description = "Outputs from EKS node groups "
description = "Outputs from EKS Managed node groups "
value = var.create_eks && var.enable_managed_nodegroups ? module.managed-node-groups.* : []
}
output "fargate_profiles" {
description = "Outputs from EKS node groups "
value = var.create_eks && var.enable_fargate ? module.fargate-profiles.* : []
output "self_managed_node_groups" {
description = "Outputs from EKS Self-managed node groups "
value = var.create_eks && var.enable_self_managed_nodegroups ? module.aws-eks-self-managed-node-groups.* : []
}
output "fargate_profiles_iam_role_arns" {
description = "IAM role arn's of Fargate Profiles"
value = var.create_eks && var.enable_fargate ? { for nodes in sort(keys(var.fargate_profiles)) : nodes => module.fargate-profiles[nodes].eks_fargate_profile_role_name } : null
output "fargate_profiles" {
description = "Outputs from EKS Fargate profiles groups "
value = var.create_eks && var.enable_fargate ? module.fargate-profiles.* : []
}
output "self_managed_node_group_aws_auth_config_map" {
description = "Self managed node groups AWS auth map"
value = local.self_managed_node_group_aws_auth_config_map.*
}
output "windows_node_group_aws_auth_config_map" {
description = "Windows node groups AWS auth map"
value = local.windows_node_group_aws_auth_config_map.*
}
output "managed_node_group_aws_auth_config_map" {
description = "Managed node groups AWS auth map"
value = local.managed_node_group_aws_auth_config_map.*
}
output "fargate_profiles_aws_auth_config_map" {
description = "Fargate profiles AWS auth map"
value = local.fargate_profiles_aws_auth_config_map.*
}
@@ -140,7 +140,6 @@ variable "enable_irsa" {
variable "create_eks" {
type = bool
default = false
}
variable "kubernetes_version" {
type = string
@@ -213,7 +212,6 @@ variable "fargate_profiles" {
type = any
default = {}
}
variable "enable_windows_support" {
type = string
default = false
@@ -257,8 +255,6 @@ variable "aws_auth_additional_labels" {
#----------------------------------------------------------
# HELM CHART VARIABLES
#----------------------------------------------------------
variable "private_container_repo_url" {
type = string
default = ""
@@ -284,7 +280,7 @@ variable "traefik_ingress_controller_enable" {
default = false
description = "Enabling Traefik Ingress Controller on eks cluster"
}
variable "lb_ingress_controller_enable" {
variable "aws_lb_ingress_controller_enable" {
type = bool
default = false
description = "enabling LB Ingress Controller on eks cluster"
@@ -299,19 +295,16 @@ variable "aws_for_fluent_bit_enable" {
default = false
description = "Enabling aws_fluent_bit module on eks cluster"
}
variable "fargate_fluent_bit_enable" {
type = bool
default = false
description = "Enabling fargate_fluent_bit module on eks cluster"
}
variable "ekslog_retention_in_days" {
default = 90
description = "Number of days to retain log events. Default retention - 90 days."
type = number
}
variable "agones_enable" {
type = bool
default = false
@@ -322,27 +315,22 @@ variable "expose_udp" {
default = false
description = "Enabling Agones Gaming Helm Chart"
}
variable "agones_image_repo" {
type = string
default = "gcr.io/agones-images"
}
variable "agones_image_tag" {
type = string
default = "1.15.0"
}
variable "agones_helm_chart_name" {
type = string
default = "agones"
}
variable "agones_helm_chart_url" {
type = string
default = "https://agones.dev/chart/stable"
}
variable "agones_game_server_maxport" {
type = number
default = 8000
@@ -351,102 +339,82 @@ variable "agones_game_server_minport" {
type = number
default = 7000
}
variable "aws_lb_image_repo_name" {
type = string
default = "amazon/aws-load-balancer-controller"
}
variable "aws_lb_helm_repo_url" {
type = string
default = "https://aws.github.io/eks-charts"
}
variable "aws_lb_helm_helm_chart_name" {
type = string
default = "aws-load-balancer-controller"
}
variable "aws_lb_image_tag" {
type = string
default = "v2.2.4"
}
variable "aws_lb_helm_chart_version" {
type = string
default = "1.2.7"
}
variable "metric_server_image_repo_name" {
type = string
default = "bitnami/metrics-server"
}
variable "metric_server_image_tag" {
type = string
default = "0.5.0-debian-10-r83"
}
variable "metric_server_helm_chart_version" {
type = string
default = "5.10.1"
}
variable "metric_server_helm_repo_url" {
type = string
default = "https://charts.bitnami.com/bitnami"
}
variable "metric_server_helm_chart_name" {
type = string
default = "metrics-server"
}
variable "cluster_autoscaler_helm_repo_url" {
type = string
default = "https://kubernetes.github.io/autoscaler"
}
variable "cluster_autoscaler_helm_chart_name" {
type = string
default = "cluster-autoscaler"
}
variable "cluster_autoscaler_image_repo_name" {
type = string
default = "k8s.gcr.io/autoscaling/cluster-autoscaler"
}
variable "cluster_autoscaler_image_tag" {
type = string
default = "v1.21.0"
}
variable "cluster_autoscaler_helm_version" {
type = string
default = "9.10.7"
}
variable "aws_managed_prometheus_workspace_name" {
type = string
default = "aws-managed-prometheus-workspace"
}
variable "prometheus_helm_chart_url" {
type = string
default = "https://prometheus-community.github.io/helm-charts"
}
variable "prometheus_helm_chart_name" {
type = string
default = "prometheus"
}
variable "prometheus_helm_chart_version" {
type = string
default = "14.7.0"
}
variable "prometheus_image_tag" {
type = string
default = "v2.26.0"
@@ -456,62 +424,50 @@ variable "alert_manager_image_tag" {
type = string
default = "v0.21.0"
}
variable "configmap_reload_image_tag" {
type = string
default = "v0.5.0"
}
variable "node_exporter_image_tag" {
type = string
default = "v1.1.2"
}
variable "pushgateway_image_tag" {
type = string
default = "v1.3.1"
}
variable "prometheus_enable" {
type = bool
default = false
}
variable "aws_managed_prometheus_enable" {
type = bool
default = false
}
variable "traefik_image_repo_name" {
type = string
default = "traefik"
}
variable "traefik_helm_chart_name" {
type = string
default = "traefik"
}
variable "traefik_helm_chart_url" {
type = string
default = "https://helm.traefik.io/traefik"
}
variable "traefik_helm_chart_version" {
type = string
default = "10.0.0"
}
variable "traefik_image_tag" {
type = string
default = "v2.4.9"
}
variable "nginx_image_repo_name" {
type = string
default = "ingress-nginx/controller"
}
variable "nginx_helm_chart_url" {
type = string
default = "https://kubernetes.github.io/ingress-nginx"
@@ -520,44 +476,36 @@ variable "nginx_helm_chart_name" {
type = string
default = "ingress-nginx"
}
variable "nginx_helm_chart_version" {
type = string
default = "3.33.0"
}
variable "nginx_image_tag" {
type = string
default = "v0.47.0"
}
variable "aws_for_fluent_bit_image_repo_name" {
type = string
default = "amazon/aws-for-fluent-bit"
}
variable "aws_for_fluent_bit_helm_chart_url" {
type = string
default = "https://aws.github.io/eks-charts"
}
variable "aws_for_fluent_bit_helm_chart_name" {
type = string
default = "aws-for-fluent-bit"
}
variable "aws_for_fluent_bit_image_tag" {
type = string
default = "2.13.0"
description = "Docker image tag for aws_for_fluent_bit"
}
variable "aws_for_fluent_bit_helm_chart_version" {
type = string
default = "0.1.11"
description = "Helm chart version for aws_for_fluent_bit"
}
variable "cert_manager_enable" {
type = bool
default = false
@@ -573,28 +521,23 @@ variable "cert_manager_helm_chart_version" {
default = "v1.5.3"
description = "Helm chart version for cert-manager"
}
variable "cert_manager_install_crds" {
type = bool
description = "Whether Cert Manager CRDs should be installed as part of the cert-manager Helm chart installation"
default = true
}
variable "cert_manager_helm_chart_url" {
type = string
default = "https://charts.jetstack.io"
}
variable "cert_manager_helm_chart_name" {
type = string
default = "cert-manager"
}
variable "cert_manager_image_repo_name" {
type = string
default = "jetstack/cert-manager-controller"
}
variable "windows_vpc_resource_controller_image_tag" {
type = string
default = "v0.2.7"
@@ -608,40 +551,31 @@ variable "windows_vpc_admission_webhook_image_tag" {
#-----------AWS OPEN TELEMETRY HELM CHART-------------
variable "aws_open_telemetry_enable" {}
variable "aws_open_telemetry_namespace" {
description = "WS Open telemetry namespace"
}
variable "aws_open_telemetry_emitter_otel_resource_attributes" {
description = "AWS Open telemetry emitter otel resource attributes"
}
variable "aws_open_telemetry_emitter_name" {
description = "AWS Open telemetry emitter image name"
}
variable "aws_open_telemetry_emitter_image" {
description = "AWS Open telemetry emitter image id and tag"
}
variable "aws_open_telemetry_collector_image" {
description = "AWS Open telemetry collector image id and tag"
}
variable "aws_open_telemetry_aws_region" {
description = "AWS Open telemetry region"
}
variable "aws_open_telemetry_emitter_oltp_endpoint" {
description = "AWS Open telemetry OLTP endpoint"
}
variable "aws_open_telemetry_mg_node_iam_role_arns" {
type = list(string)
default = []
}
variable "aws_open_telemetry_self_mg_node_iam_role_arns" {
type = list(string)
default = []
@@ -652,19 +586,16 @@ variable "opentelemetry_enable" {
default = false
description = "Enabling opentelemetry module on eks cluster"
}
variable "opentelemetry_enable_standalone_collector" {
type = bool
default = false
description = "Enabling the opentelemetry standalone gateway collector on eks cluster"
}
variable "opentelemetry_enable_agent_collector" {
type = bool
default = true
description = "Enabling the opentelemetry agent collector on eks cluster"
}
variable "opentelemetry_enable_autoscaling_standalone_collector" {
type = bool
default = false
@@ -675,13 +606,11 @@ variable "opentelemetry_image_tag" {
default = "0.31.0"
description = "Docker image tag for opentelemetry from open-telemetry"
}
variable "opentelemetry_image" {
type = string
default = "otel/opentelemetry-collector"
description = "Docker image for opentelemetry from open-telemetry"
}
variable "opentelemetry_helm_chart_version" {
type = string
default = "0.5.9"