Commit 4025b5f0 authored by Vara Bonthu, committed by GitHub

Feature/metrics server (#36)

* Added additional pre commit config

* metrics server addon helm chart updated

* Updated metrics server helm chart to kubernetes-sigs.github.io

* Fixed github workflow by removing source folder
parent 28661c1c
Showing 112 additions and 118 deletions
......@@ -47,8 +47,6 @@ jobs:
terraform_version: ${{ steps.minMax.outputs.minVersion }}
- name: Install pre-commit dependencies
run: pip install pre-commit
- name: Change directory to SOURCE folder
run: cd source
- name: Execute pre-commit
# Run only validate pre-commit check on min version supported
if: ${{ matrix.directory != '.' }}
......@@ -98,4 +96,4 @@ jobs:
- name: Execute pre-commit
# Run all pre-commit checks on max version supported
if: ${{ matrix.version == needs.getBaseVersion.outputs.maxVersion }}
run: pre-commit run --color=always --show-diff-on-failure --all-files
......@@ -4,7 +4,7 @@ on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
tfsec:
......@@ -18,13 +18,13 @@ jobs:
steps:
- name: Clone repo
uses: actions/checkout@master
- name: Run tfsec
uses: tfsec/tfsec-sarif-action@master
with:
sarif_file: tfsec.sarif
- name: Upload SARIF file
uses: github/codeql-action/upload-sarif@v1
with:
sarif_file: tfsec.sarif
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.0.1
hooks:
- id: trailing-whitespace
- id: check-yaml
- id: end-of-file-fixer
- id: check-merge-conflict
- id: no-commit-to-branch
args: [--branch, main]
- id: detect-private-key
- id: detect-aws-credentials
args: ['--allow-missing-credentials']
- repo: git://github.com/antonbabenko/pre-commit-terraform
rev: v1.50.0
hooks:
- id: terraform_fmt
- id: terraform_docs
- id: terraform_validate
- id: terraform_tflint
default_stages: [commit, push]
......@@ -12,4 +12,3 @@ FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
......@@ -4,40 +4,40 @@
This project provides a framework for deploying best-practice multi-tenant [EKS Clusters](https://aws.amazon.com/eks) with [Kubernetes Addons](https://kubernetes.io/docs/concepts/cluster-administration/addons/), provisioned via [Hashicorp Terraform](https://www.terraform.io/) and [Helm charts](https://helm.sh/) on [AWS](https://aws.amazon.com/).
# Overview
The AWS EKS Accelerator for Terraform module helps you to provision [EKS Clusters](https://aws.amazon.com/eks), [Managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) with [On-Demand](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-on-demand-instances.html) and [Spot Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html), [AWS Fargate profiles](https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html), and all the necessary Kubernetes add-ons for a production-ready EKS cluster. The [Terraform Helm provider](https://github.com/hashicorp/terraform-provider-helm) is used to deploy common Kubernetes Addons with publicly available [Helm Charts](https://artifacthub.io/).
This project leverages the official [terraform-aws-vpc](https://github.com/terraform-aws-modules/terraform-aws-vpc) and [terraform-aws-eks](https://github.com/terraform-aws-modules/terraform-aws-eks) community modules to create the VPC and EKS Cluster.
The intention of this framework is to help you design a config-driven solution. This enables you to create EKS clusters for various environments and AWS accounts across multiple regions with a **unique Terraform configuration and state file** per EKS cluster.
The top-level `deploy` folder provides an example of how you can structure your folders and files to define multiple EKS Cluster environments and consume this accelerator module. This approach is suitable for large projects with a clearly defined subdirectory and file structure.
You can modify this layout to suit your requirements, defining a unique configuration for each EKS Cluster while keeping this module as the central source of truth. Please note that the `deploy` folder can be moved to a dedicated repo that consumes this module through a `main.tf` file ([see example file here](deploy/live/preprod/eu-west-1/application_acct/dev/dev.tfvars)).
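When the `deploy` folder lives in a separate repo, consuming the accelerator can be sketched roughly as below; the module label and unpinned `source` ref are illustrative, and in practice you would pin to a specific release tag:

```hcl-terraform
# Hypothetical main.tf in a dedicated deployment repo
module "eks_accelerator" {
  source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"

  # Cluster-specific values are supplied per environment,
  # e.g. terraform plan -var-file=dev.tfvars
}
```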
e.g., folder/file structure for defining multiple clusters

```
├── deploy
│   └── live
│       ├── preprod
│       │   └── eu-west-1
│       │       └── application
│       │           ├── dev
│       │           │   ├── backend.conf
│       │           │   ├── dev.tfvars
│       │           │   ├── main.tf
│       │           │   ├── variables.tf
│       │           │   └── outputs.tf
│       │           └── test
│       │               ├── backend.conf
│       │               └── test.tfvars
│       └── prod
│           └── eu-west-1
│               └── application
│                   └── prod
│                       ├── backend.conf
│                       ├── prod.tfvars
│                       ├── main.tf
│                       ├── variables.tf
│                       └── outputs.tf
```
Each folder under `live/<region>/application` represents an EKS cluster environment (e.g., dev, test, load).
This folder contains `backend.conf` and `<env>.tfvars`, used to create a unique Terraform state for each cluster environment.
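A minimal `backend.conf` for the dev environment above might look like the following sketch; the bucket and key names are placeholders, not values from this repo:

```hcl-terraform
# Hypothetical S3 backend settings, passed via `terraform init -backend-config=backend.conf`
bucket = "my-terraform-state-bucket"
key    = "preprod/eu-west-1/application/dev/terraform.tfstate"
region = "eu-west-1"
```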
......@@ -66,8 +66,8 @@ This module provisions the following EKS resources
3. [NAT Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html)
4. [Internet Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html)
NOTE: VPC/Subnet creation can be disabled with `create_vpc = false` in the TFVARS file so that existing VPC resources are imported instead.
The `test-vpc.tfvars` and `test-eks.tfvars` [example](deploy/live/preprod/eu-west-1/application_acct/test/) shows how to create a VPC with a unique state file and import that state, with its resources, into the EKS Cluster creation.
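When reusing an existing VPC, the `<env>.tfvars` entries can be sketched roughly as below; the variable names other than `create_vpc` are illustrative, so check them against the module's `variables.tf`:

```hcl-terraform
create_vpc = false

# Hypothetical IDs of pre-existing network resources
vpc_id             = "vpc-0123456789abcdef0"
private_subnet_ids = ["subnet-0aaa", "subnet-0bbb", "subnet-0ccc"]
```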
## EKS Cluster resources
......@@ -107,9 +107,9 @@ NOTE: VPC/Subnets creation can be disabled using `create_vpc = false` in TFVARS
# Node Group Modules
This module uses dedicated sub modules for creating [AWS Managed Node Groups](modules/aws-eks-managed-node-groups), [Self-managed Node groups](modules/aws-eks-self-managed-node-groups) and [Fargate profiles](modules/aws-eks-fargate-profiles).
Mixed node groups with Fargate profiles can be defined simply as a map variable in `<env>.tfvars`.
This approach provides the flexibility to add or remove managed/self-managed node groups and Fargate profiles by simply adding or removing map entries in the existing `<env>.tfvars`. It also allows you to define a unique node configuration for each EKS Cluster in the same account.
The AWS auth config map, handled by this module, ensures that new node groups successfully join the EKS Cluster.
Each node group can have a dedicated IAM role, security group, and launch template to improve security.
Please refer to the `dev.tfvars` for [full example](deploy/live/preprod/eu-west-1/application_acct/dev/dev.tfvars).
......@@ -133,19 +133,19 @@ Please refer to the `dev.tfvars` for [full example](deploy/live/preprod/eu-west-
max_size = 3
min_size = 3
max_unavailable = 1 # or percentage = 20
# 3> Node Group compute configuration
ami_type = "AL2_x86_64" # AL2_x86_64, AL2_x86_64_GPU, AL2_ARM_64, CUSTOM
capacity_type = "ON_DEMAND" # ON_DEMAND or SPOT
instance_types = ["m4.large"] # List of instances used only for SPOT type
disk_size = 50
# 4> Node Group network configuration
subnet_type = "private" # private or public
subnet_ids = [] # Define your private/public subnets list, comma separated: subnet_ids = ['subnet1','subnet2','subnet3']
k8s_taints = []
k8s_labels = {
Environment = "preprod"
Zone = "dev"
......@@ -164,7 +164,7 @@ Please refer to the `dev.tfvars` for [full example](deploy/live/preprod/eu-west-
**Fargate Profiles Example**
enable_fargate = true
fargate_profiles = {
default = {
fargate_profile_name = "default"
......@@ -176,16 +176,16 @@ Please refer to the `dev.tfvars` for [full example](deploy/live/preprod/eu-west-
env = "fargate"
}
}]
subnet_ids = [] # Provide list of private subnets
additional_tags = {
ExtraTag = "Fargate"
}
},
finance = {...}
}
# Kubernetes Addons Module
The Kubernetes Addons Module within this framework allows you to deploy Kubernetes add-ons using the Terraform Helm provider and Kubernetes provider with a simple **true/false** feature in `<env>.tfvars`.
......@@ -200,9 +200,9 @@ e.g., `<env>.tfvars` config for enabling AWS LB INGRESS CONTROLLER. Refer to exa
aws_lb_helm_chart_version = "1.2.7"
aws_lb_helm_repo_url = "https://aws.github.io/eks-charts"
aws_lb_helm_helm_chart_name = "aws-load-balancer-controller"
This module is currently configured to fetch the Helm Charts from open source repos and Docker images from Docker Hub/Public ECR repos, which requires an outbound Internet connection from your EKS Cluster. Alternatively, you can download the Docker images for each add-on, push them to an AWS ECR repo, and access them within the VPC using an ECR endpoint.
The README for each Helm module includes instructions on how to download the images from Docker Hub or third-party repos and upload them to your private ECR repo. This module provides the option to use internal Helm and Docker image repos via `<env>.tfvars`.
For example, see the [ALB Ingress Controller](kubernetes-addons/lb-ingress-controller/README.md) README for the AWS LB Ingress Controller module.
......@@ -219,7 +219,7 @@ Ingress is an API object that defines the traffic routing rules (e.g., load bala
* [Nginx Ingress Controller](kubernetes-addons/nginx-ingress/README.md) can be deployed by enabling the add-on in `<env>.tfvars` file.
**Nginx is an open source Kubernetes Ingress Controller.** The Nginx Kubernetes Ingress provider is an Ingress controller; that is to say, it manages access to cluster services by supporting the Ingress specification. More details about [Nginx can be found here](https://kubernetes.github.io/ingress-nginx/).
## Autoscaling Modules
The **Cluster Autoscaler** and **Metrics Server** Helm modules are deployed by default with the EKS Cluster.
* [Cluster Autoscaler](kubernetes-addons/cluster-autoscaler/README.md) can be deployed by enabling the add-on in `<env>.tfvars` file.
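Following the `<env>.tfvars` pattern used elsewhere in this framework, enabling the autoscaling add-ons is a pair of boolean flags; the exact variable names below are assumptions, so confirm them against `variables.tf`:

```hcl-terraform
# Illustrative flag names; verify against the module's variables.tf
cluster_autoscaler_enable = true
metrics_server_enable     = true
```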
......@@ -246,7 +246,7 @@ This module ships the Fargate Container logs to CloudWatch
Bottlerocket runs two host containers: the control container, **on** by default, is used for AWS Systems Manager and remote API access; the admin container, **off** by default, is used for deep debugging and exploration.
Bottlerocket [Launch templates userdata](modules/aws-eks-managed-node-groups/templates/userdata-bottlerocket.tpl) uses the TOML format with key-value pairs.
Remote API access is available via the SSM agent. You can launch the troubleshooting (admin) container via user data: `[settings.host-containers.admin] enabled = true`.
### Features
......@@ -290,12 +290,12 @@ git clone https://github.com/aws-samples/aws-eks-accelerator-for-terraform.git
#### Step2: Update <env>.tfvars file
Update `~/aws-eks-accelerator-for-terraform/live/preprod/eu-west-1/application/dev/dev.tfvars` file with the instructions specified in the file (OR use the default values).
You can choose to use an existing VPC ID and Subnet IDs or create a new VPC and subnets by providing CIDR ranges in `dev.tfvars` file
#### Step3: Update Terraform backend config file
Update `~/aws-eks-accelerator-for-terraform/live/preprod/eu-west-1/application/dev/backend.conf` with your local directory path or S3 path.
[state.tf](state.tf) contains the backend config.
Local Terraform state backend config variables:
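Assuming the S3 backend, the partial backend block in `state.tf` can be sketched as below; the actual file may differ, since backend settings are injected at init time:

```hcl-terraform
terraform {
  # Settings such as bucket/key/region come from backend.conf:
  #   terraform init -backend-config=backend.conf
  backend "s3" {}
}
```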
......
......@@ -104,7 +104,7 @@ module "aws_eks_fargate_profiles" {
# ---------------------------------------------------------------------------------------------------------------------
module "aws_eks_addon" {
count = var.create_eks && var.enable_managed_nodegroups || var.create_eks && var.enable_self_managed_nodegroups ? 1 : 0
count = var.create_eks && var.enable_managed_nodegroups || var.create_eks && var.enable_self_managed_nodegroups || var.create_eks && var.enable_fargate ? 1 : 0
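Since `&&` binds more tightly than `||` in HCL, the new condition above can be factored into an equivalent, more readable form (a sketch, not the code as committed):

```hcl-terraform
count = var.create_eks && (var.enable_managed_nodegroups || var.enable_self_managed_nodegroups || var.enable_fargate) ? 1 : 0
```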
source = "./modules/aws-eks-addon"
cluster_name = module.aws_eks.cluster_id
......
# terraform-aws-eks-accelerator-patterns
The following steps walk you through the deployment of this example.
This example deploys the following Basic EKS Cluster with VPC
- Creates a new sample VPC, 3 Private Subnets and 3 Public Subnets
- Creates Internet gateway for Public Subnets and NAT Gateway for Private Subnets
- Creates EKS Cluster Control plane with one managed node group
# How to Deploy
## Prerequisites:
......
......@@ -2,9 +2,9 @@
#### Objective:
The purpose of this document is to provide an overview of the steps for upgrading the EKS Cluster from one version to another. Please note that EKS upgrade documentation is published by AWS every year.
The version of the upgrade documentation current at the time of writing this [README](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html).
#### Pre-Requisites:
......@@ -22,13 +22,13 @@ This table shows the supported plugin versions for each EKS Kubernetes version
#### Steps to upgrade EKS cluster:
1. Change the version in Terraform to the desired version under `base.tfvars`. See the example below:
```hcl-terraform
kubernetes_version = "1.20"
```
2. Apply the changes to the cluster with Terraform. This step will upgrade the Control Plane and Data Plane to the newer version, and it will roughly take 35 mins to 1 hour
3. Once the cluster is upgraded to the desired version, update the following plugins as per the instructions below.
#### Steps to upgrade Add-ons:
......@@ -41,14 +41,14 @@ Just update the latest versions in `base.tfvars` file as shown below. EKS Addon
enable_kube_proxy_addon = true
kube_proxy_addon_version = "v1.20.4-eksbuild.2"
```
##### CoreDNS
```hcl-terraform
enable_coredns_addon = true
coredns_addon_version = "v1.8.3-eksbuild.1"
```
##### VPC CNI
```hcl-terraform
......@@ -57,6 +57,6 @@ vpc_cni_addon_version = "v1.8.0-eksbuild.1"
```
Apply the changes to the cluster with Terraform.
## Important Note
Please note that you may need to update other Kubernetes add-ons deployed through Helm Charts to match the new Kubernetes version.
......@@ -40,12 +40,8 @@ module "kubernetes_addons" {
cluster_autoscaler_image_repo_name = var.cluster_autoscaler_image_repo_name
# ------- Metric Server
metrics_server_enable = var.metrics_server_enable
metric_server_image_repo_name = var.metric_server_image_repo_name
metric_server_image_tag = var.metric_server_image_tag
metric_server_helm_chart_version = var.metric_server_helm_chart_version
metric_server_helm_repo_url = var.metric_server_helm_repo_url
metric_server_helm_chart_name = var.metric_server_helm_chart_name
metrics_server_enable = var.metrics_server_enable
metrics_server_helm_chart = var.metrics_server_helm_chart
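The change above replaces five separate `metric_server_*` variables with a single `metrics_server_helm_chart` input. Its shape is not shown in this diff; a plausible object form, with illustrative values, might be:

```hcl-terraform
# Assumed structure and values; verify against variables.tf
metrics_server_helm_chart = {
  name       = "metrics-server"
  repository = "https://kubernetes-sigs.github.io/metrics-server/"
  version    = "3.5.0"
}
```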
# ------- AWS LB Controller
aws_lb_ingress_controller_enable = var.aws_lb_ingress_controller_enable
......
......@@ -3,33 +3,33 @@
###### Instructions to upload Agones Docker image to AWS ECR
Step1: Get the latest docker image from this link
https://github.com/googleforgames/agones
Step2: Download the docker image to your local Mac/Laptop
$ docker pull gcr.io/agones-images/agones-controller:1.15.0
Step3: Retrieve an authentication token and authenticate your Docker client to your registry. Use the AWS CLI:
$ aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin <account id>.dkr.ecr.eu-west-1.amazonaws.com
Step4: Create an ECR repo for Agones if you don't have one
$ aws ecr create-repository --repository-name gcr.io/agones-images/agones-controller --image-scanning-configuration scanOnPush=true
Step5: After the download completes, tag your image so that you can push it to this repository:
$ docker tag gcr.io/agones-images/agones-controller:1.15.0 <accountid>.dkr.ecr.eu-west-1.amazonaws.com/gcr.io/agones-images/agones-controller:1.15.0
Step6: Run the following command to push this image to your newly created AWS repository:
$ docker push <accountid>.dkr.ecr.eu-west-1.amazonaws.com/gcr.io/agones-images/agones-controller:1.15.0
### Instructions to download Helm Charts
Helm Chart
https://artifacthub.io/packages/helm/agones/agones
Helm Repo Maintainers
......@@ -101,4 +101,3 @@ No modules.
No outputs.
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
......@@ -9,4 +9,4 @@ openssl req -new -x509 -sha256 -key server.key -out server.crt -days 3650
echo "caBundle:"
base64 -w 0 server.crt
echo "done"
......@@ -77,4 +77,4 @@ resource "aws_security_group_rule" "agones_sg_ingress_rule" {
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
security_group_id = data.aws_security_group.eks_security_group.id
}
......@@ -176,4 +176,4 @@ gameservers:
podPreserveUnknownFields: false
helm:
installTests: false
......@@ -3,33 +3,33 @@
###### Instructions to upload aws-for-fluent-bit Docker image to AWS ECR
Step1: Get the latest docker image from this link
https://github.com/aws/aws-for-fluent-bit
Step2: Download the docker image to your local Mac/Laptop
$ docker pull amazon/aws-for-fluent-bit:2.13.0
Step3: Retrieve an authentication token and authenticate your Docker client to your registry. Use the AWS CLI:
$ aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin <account id>.dkr.ecr.eu-west-1.amazonaws.com
Step4: Create an ECR repo for aws-for-fluent-bit if you don't have one
$ aws ecr create-repository --repository-name amazon/aws-for-fluent-bit --image-scanning-configuration scanOnPush=true
Step5: After the download completes, tag your image so that you can push it to this repository:
$ docker tag amazon/aws-for-fluent-bit:2.13.0 <accountid>.dkr.ecr.eu-west-1.amazonaws.com/amazon/aws-for-fluent-bit:2.13.0
Step6: Run the following command to push this image to your newly created AWS repository:
$ docker push <accountid>.dkr.ecr.eu-west-1.amazonaws.com/amazon/aws-for-fluent-bit:2.13.0
### Instructions to download Helm Charts
#### Helm Chart
https://artifacthub.io/packages/helm/aws/aws-for-fluent-bit
Helm Repo Maintainers
......@@ -100,4 +100,3 @@ No modules.
| <a name="output_cw_loggroup_arn"></a> [cw\_loggroup\_arn](#output\_cw\_loggroup\_arn) | EKS Cloudwatch group arn |
| <a name="output_cw_loggroup_name"></a> [cw\_loggroup\_name](#output\_cw\_loggroup\_name) | EKS Cloudwatch group Name |
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
......@@ -47,4 +47,3 @@ resource "helm_release" "aws-for-fluent-bit" {
region = data.aws_region.current.name
})]
}
......@@ -24,5 +24,3 @@ output "cw_loggroup_arn" {
description = "EKS Cloudwatch group arn"
value = aws_cloudwatch_log_group.eks_worker_logs.arn
}
......@@ -137,4 +137,4 @@ volumeMounts:
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
......@@ -56,5 +56,3 @@ variable "aws_for_fluent_bit_image_tag" {
type = string
default = "2.13.0"
}
......@@ -37,4 +37,3 @@ variable "aws_open_telemetry_self_mg_node_iam_role_arns" {
type = list(string)
default = []
}
......@@ -9,14 +9,14 @@ Cert Manager adds certificates and certificate issuers as resource types in Kube
### Instructions to use the Helm Chart
See the [cert-manager documentation](https://cert-manager.io/docs/installation/helm/).
# Docker Image for Cert Manager
The cert-manager Docker image is available in this repo:
https://quay.io/repository/jetstack/cert-manager-controller?tag=latest&tab=tags
<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
......@@ -73,7 +73,3 @@ No modules.
No outputs.
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->