EKS Cluster with External DNS

This example demonstrates how to use External DNS in concert with Ingress Nginx and the AWS Load Balancer Controller to provision multiple services with secure, custom domains that sit behind a single load balancer.

The pattern deploys the sample workloads that reside in the EKS Blueprints Workloads repo via ArgoCD. The configuration for team-riker will deploy an Ingress resource which contains configuration for both path-based routing and the custom hostname for the team-riker service. Once the pattern is deployed, you will be able to reach the team-riker sample workload via a custom domain you supply.

How to Deploy

Prerequisites:

Tools

Ensure that you have installed the following tools on your Mac or Windows laptop before working with this module and running terraform plan and apply; a quick version check follows the list.

  1. AWS CLI
  2. Kubectl
  3. Terraform
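
As a quick sanity check, confirm the tools are installed and on your PATH (the exact version output will vary):

aws --version               # AWS CLI
kubectl version --client    # kubectl client version only
terraform -version          # Terraform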

AWS Resources

This example requires the following AWS resources:

  • A Route53 Hosted Zone for a domain that you own.
  • An SSL/TLS certificate for your domain stored in AWS Certificate Manager (ACM).

For information on Route53 Hosted Zones, see the Route53 documentation. For instructions on requesting an SSL/TLS certificate for your domain, see the ACM docs.
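
You can confirm ahead of time that the Hosted Zone exists using the AWS CLI; example.com below is a placeholder for your own domain:

aws route53 list-hosted-zones-by-name --dns-name example.com   # should list your Hosted Zone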

Deployment Steps

Step 1: Clone the repo

git clone https://github.com/aws-ia/terraform-aws-eks-blueprints.git

Step 2: Terraform INIT

Initialize a working directory with configuration files

cd examples/eks-cluster-with-external-dns
terraform init

Step 3: Replace placeholder values in terraform.tfvars

Both values in terraform.tfvars must be updated.

  • eks_cluster_domain - the domain for your cluster. Value is used to look up a Route53 Hosted Zone that you own. DNS records created by ExternalDNS will be created in this Hosted Zone.
  • acm_certificate_domain - the domain for a certificate in ACM that will be leveraged by Ingress Nginx. Value is used to look up an ACM certificate that will be used to terminate HTTPS connections. This value should likely be a wildcard cert for your eks_cluster_domain.

eks_cluster_domain      = "example.com"
acm_certificate_domain  = "*.example.com"
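
To confirm that a matching certificate exists in ACM before deploying, one option is a lookup with the AWS CLI; adjust the domain and region to your own:

aws acm list-certificates --region <your-region> \
  --query "CertificateSummaryList[?DomainName=='*.example.com']"   # should return your wildcard certificate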

Step 4: Terraform PLAN

Review the resources that will be created by this execution

export AWS_REGION=<ENTER YOUR REGION>   # Select your own region
terraform plan

Step 5: Terraform APPLY

terraform apply

Enter yes when prompted to apply.
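
If you prefer to skip the interactive prompt (for example, in automation), the apply can also be run non-interactively:

terraform apply -auto-approve   # applies without asking for confirmation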

Step 6: Update local kubeconfig

The command below updates your ~/.kube/config file with the cluster details and certificate.

$ aws eks --region <enter-your-region> update-kubeconfig --name <cluster-name>
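
The fully substituted command is also exposed as the configure_kubectl Terraform output (see Outputs below):

$ terraform output configure_kubectl   # prints the ready-to-run update-kubeconfig command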

Step 7: List all the worker nodes by running the command below

$ kubectl get nodes
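
If the nodes are not yet in the Ready state, you can wait for them before proceeding (the 5-minute timeout is just an example):

$ kubectl wait --for=condition=Ready nodes --all --timeout=300s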

Step 8: List all the pods running in the kube-system namespace

$ kubectl get pods -n kube-system
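
The add-ons used by this pattern (External DNS, Ingress Nginx, AWS Load Balancer Controller, ArgoCD) typically run in their own namespaces. A quick way to spot their pods, assuming the usual default namespace and release names:

$ kubectl get pods -A | grep -E 'external-dns|ingress-nginx|aws-load-balancer|argocd'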

Step 9: Verify the Ingress resource was created for Team Riker

$ kubectl get ingress -n team-riker

Navigate to the HOST URL, which should be guestbook-ui.<eks_cluster_domain>. At this point you should be able to view the guestbook-ui application in your browser.
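
You can also verify from the command line that External DNS created the DNS record and that the application responds over HTTPS; replace example.com with your eks_cluster_domain:

$ dig +short guestbook-ui.example.com        # should resolve to the load balancer
$ curl -I https://guestbook-ui.example.com   # should return a response from the app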

How to Destroy

The following command destroys the resources created by terraform apply

terraform destroy --auto-approve
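
Note that the DNS records and the load balancer are created by in-cluster controllers rather than directly by Terraform. If any records linger in the Hosted Zone after the destroy, they can be inspected with the AWS CLI; the zone id below is a placeholder:

aws route53 list-resource-record-sets --hosted-zone-id <your-zone-id> \
  --query "ResourceRecordSets[?contains(Name, 'guestbook-ui')]"   # check for leftover records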

Requirements

Name         Version
terraform    >= 1.0.1
aws          >= 3.66.0
helm         >= 2.4.1
kubernetes   >= 2.6.1

Providers

Name   Version
aws    >= 3.66.0

Modules

Name                               Source                            Version
aws_vpc                            terraform-aws-modules/vpc/aws     3.2.0
eks-blueprints-kubernetes-addons   ../../modules/kubernetes-addons   n/a
eks_cluster                        ../..                             n/a

Resources

Name                               Type
aws_acm_certificate.issued         data source
aws_availability_zones.available   data source
aws_eks_cluster.cluster            data source
aws_eks_cluster_auth.cluster       data source
aws_region.current                 data source
aws_route53_zone.selected          data source

Inputs

Name                     Description                                                            Type     Default        Required
acm_certificate_domain   *.example.com                                                          string   n/a            yes
cluster_version          Kubernetes version                                                     string   "1.21"         no
eks_cluster_domain       Route53 domain for the cluster                                         string   "example.com"  no
environment              Environment area, e.g. prod or preprod                                 string   "preprod"      no
tenant                   Account name or unique account id, e.g. apps, management, or aws007    string   "aws001"       no
zone                     Zone, e.g. dev, qa, load, or ops                                       string   "dev"          no

Outputs

Name                Description
configure_kubectl   Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig