You can run the latest EKS version, Kubernetes 1.17.9, with a single docker run command, without needing to know anything about Terraform. And the best thing about this is the support for auto-scaling of spot instances, which helps reduce our AWS EKS costs by up to 70% with AMD64 instances and up to 90% with ARM M6g Graviton instances!
Why not? This implementation provides an easy way to deploy an AWS EKS cluster with autoscaling support for spot instances with a single command, plus additional add-ons to use with EKS: the ingress controller along with the EFS provisioner, Vault, Prometheus, Grafana and more. You can easily extend the implementation to use on-demand instances as well by extending the node groups in the main.tf file (more about this later, and see the sketch below).
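For instance, extending the node groups might look something like this (a minimal sketch against the module's node_groups input; the group names, instance types and sizes are illustrative, not the repo's actual values):

node_groups = {
  # existing spot group (illustrative values)
  spot = {
    desired_capacity = 1
    min_capacity     = 1
    max_capacity     = 5
    instance_type    = "m5.large"
  }
  # additional on-demand group
  on_demand = {
    desired_capacity = 1
    min_capacity     = 1
    max_capacity     = 3
    instance_type    = "m5.large"
  }
}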
Build a Docker image from the Dockerfile provided in the docker/eks folder, which adds some additional tools, and then run it to deploy an EKS cluster with a single command.
You can either build your own image from the Dockerfile provided in this repo or use the kubernautslabs/eks image and run it to deploy EKS.
After running the docker run command below, you'll be asked to provide the cluster name and the AWS region.
Please note:
- The cluster name should be unique, since we use the cluster name as the S3 bucket name!
- It is recommended to use something like project-customer-stage-eks-spot as a convention for the cluster name, e.g. myproject-business-unit-dev-qa-eks-spot.
git clone https://github.com/arashkaffamanesh/terraform-aws-eks.git
cd terraform-aws-eks
docker build -t docker-eks .
docker run -it --rm -v "$HOME/.aws/:/root/.aws" -v "$PWD:/tmp" docker-eks -c "cd /tmp; ./2-deploy-eks-cluster.sh"
# Do this at your own risk, if you want to trust me :-)
git clone https://github.com/arashkaffamanesh/terraform-aws-eks.git
cd terraform-aws-eks
docker run -it --rm -v "$HOME/.aws/:/root/.aws" -v "$PWD:/tmp" kubernautslabs/eks -c "cd /tmp; ./2-deploy-eks-cluster.sh"
What you get: an EKS cluster with 1 spot instance and the cluster autoscaler installed.
After the deployment, run:
# start a shell in the image with your AWS credentials and working directory mounted
docker run -it --rm -v "$HOME/.aws/:/root/.aws" -v "$PWD:/tmp" kubernautslabs/eks
cd /tmp/<your cluster name>
# 'll' and 'k' are shorthands for ls -l and kubectl provided in the image
ll
export KUBECONFIG=kubeconfig_<your cluster name>
k get pods -A
If you see something like this, you're in good shape:
root@9aef95e6699a:/tmp/arash-docker-eks# k get pods -A
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   aws-node-4z9dm                        1/1     Running   0          6m28s
kube-system   cluster-autoscaler-5f84699764-jbb66   1/1     Running   0          6m3s
kube-system   coredns-5fdf64ff8-bxpgx               1/1     Running   0          10m
kube-system   coredns-5fdf64ff8-ncbqw               1/1     Running   0          10m
kube-system   kube-proxy-464gp                      1/1     Running   0          6m28s
kube-system   metrics-server-7578984995-6pptz       1/1     Running   0          6m20s
Please refer to the documentation and additional notes in the README.md file under the base module for testing, and for how to add additional components like the ingress controller along with the EFS provisioner, Vault, Prometheus, Grafana and more.
Please refer to the Clean-Up: delete clusters section in your cluster module folder. To tear everything down, run something like this from the cluster module folder:
# please replace <cluster_name> with your provided cluster name
make destroy
aws s3 rb s3://<cluster_name> --force
aws ec2 delete-key-pair --key-name <cluster_name>-key &>/dev/null
A Terraform module to create a managed Kubernetes cluster on AWS EKS. Available through the Terraform registry. Inspired by and adapted from this doc and its source code. Read the AWS docs on EKS to get connected to the k8s dashboard.
Assumptions:
- You want to create an EKS cluster and an autoscaling group of workers for the cluster.
- You want these resources to exist within security groups that allow communication and coordination. These can be user provided or created within the module.
- You've created a Virtual Private Cloud (VPC) and subnets where you intend to put the EKS resources. The VPC satisfies EKS requirements (a sketch of such a VPC follows this list).
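If you still need such a VPC, something along these lines satisfies the EKS requirements (a minimal sketch using the community terraform-aws-modules/vpc module; the name, CIDRs and availability zones are illustrative):

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 2.0"

  name            = "eks-vpc"
  cidr            = "10.0.0.0/16"
  azs             = ["eu-central-1a", "eu-central-1b", "eu-central-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true

  # EKS discovers subnets for the cluster through this tag.
  tags = {
    "kubernetes.io/cluster/my-cluster" = "shared"
  }
}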
The default cluster_version is now 1.16. Kubernetes 1.16 includes a number of deprecated API removals, and you need to ensure your applications and add-ons are updated, or workloads could fail after the upgrade is complete. For more information on the API removals, see the Kubernetes blog post. For action you may need to take before upgrading, see the steps in the EKS documentation. Please explicitly set your cluster_version to an older EKS version until your workloads are ready for Kubernetes 1.16, for example:
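(A minimal sketch of such a pin; the remaining module arguments are as in the full example below.)

module "my-cluster" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_version = "1.15"
  # ... remaining arguments as in the example below
}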
A full example leveraging other community modules is contained in the examples/basic directory.
data "aws_eks_cluster" "cluster" {
name = module.my-cluster.cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.my-cluster.cluster_id
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
load_config_file = false
version = "~> 1.9"
}
module "my-cluster" {
source = "terraform-aws-modules/eks/aws"
cluster_name = "my-cluster"
cluster_version = "1.16"
subnets = ["subnet-abcde012", "subnet-bcde012a", "subnet-fghi345a"]
vpc_id = "vpc-1234556abcdef"
worker_groups = [
{
instance_type = "m4.large"
asg_max_size = 5
}
]
}
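If you want the generated kubeconfig contents available outside the module, you can re-export the module's kubeconfig output (documented in the outputs table below); a minimal sketch:

output "kubeconfig" {
  description = "kubectl config file contents for the EKS cluster"
  value       = module.my-cluster.kubeconfig
}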
Sometimes you need a way to create EKS resources conditionally, but Terraform does not allow using count inside a module block, so the solution is the create_eks argument. Using this feature with manage_aws_auth = true (the default) requires setting up the kubernetes provider in a way that allows the data sources to not exist.
data "aws_eks_cluster" "cluster" {
count = var.create_eks ? 1 : 0
name = module.eks.cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
count = var.create_eks ? 1 : 0
name = module.eks.cluster_id
}
# In case of not creating the cluster, this will be an incompletely configured, unused provider, which poses no problem.
provider "kubernetes" {
host = element(concat(data.aws_eks_cluster.cluster[*].endpoint, list("")), 0)
cluster_ca_certificate = base64decode(element(concat(data.aws_eks_cluster.cluster[*].certificate_authority.0.data, list("")), 0))
token = element(concat(data.aws_eks_cluster_auth.cluster[*].token, list("")), 0)
load_config_file = false
version = "1.10"
}
# This cluster will not be created
module "eks" {
source = "terraform-aws-modules/eks/aws"
create_eks = false
# ... omitted
}
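Since the data sources above reference var.create_eks, a matching toggle variable completes the picture (a minimal sketch; in practice you would pass create_eks = var.create_eks into the module instead of a literal):

variable "create_eks" {
  description = "Controls whether the EKS cluster and related resources are created"
  type        = bool
  default     = false
}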
- Autoscaling: How to enable worker node autoscaling.
- Enable Docker Bridge Network: How to enable the Docker bridge network when using the EKS-optimized AMI, which disables it by default.
- Spot instances: How to use spot instances with this module (a short sketch follows this list).
- IAM Permissions: Minimum IAM permissions needed to set up an EKS cluster.
- FAQ: Frequently Asked Questions
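As a taste of the spot instance and autoscaling docs linked above, a worker group along these lines is a common starting point (a minimal sketch; the keys follow workers_group_defaults, while the spot price, sizes and cluster name are illustrative):

worker_groups = [
  {
    name                 = "spot-1"
    instance_type        = "m5.large"
    spot_price           = "0.06" # illustrative bid, at or below the on-demand price
    asg_min_size         = 1
    asg_max_size         = 10
    asg_desired_capacity = 1
    kubelet_extra_args   = "--node-labels=node.kubernetes.io/lifecycle=spot"
    # Tags the cluster-autoscaler's auto-discovery mode looks for.
    tags = [
      {
        key                 = "k8s.io/cluster-autoscaler/enabled"
        value               = "true"
        propagate_at_launch = true
      },
      {
        key                 = "k8s.io/cluster-autoscaler/my-cluster"
        value               = "owned"
        propagate_at_launch = true
      },
    ]
  },
]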
Code formatting and documentation for variables and outputs are generated using pre-commit-terraform hooks, which use terraform-docs. Follow these instructions to install pre-commit locally, and install terraform-docs with go get github.com/segmentio/terraform-docs or brew install terraform-docs.
Report issues/questions/feature requests in the issues section.
Full contributing guidelines are covered here.
- The changelog captures all important release notes from v11.0.0
- For older release notes, refer to changelog.pre-v11.0.0.md
Created by Brandon O'Connor - [email protected]. Maintained by Max Williams and Thierno IB. BARRY. Many thanks to the contributors listed here!
MIT Licensed. See LICENSE for full details.
Requirements:

Name | Version |
---|---|
terraform | >= 0.12.9 |
aws | >= 2.55.0 |
kubernetes | >= 1.11.1 |
local | >= 1.4 |
null | >= 2.1 |
random | >= 2.1 |
template | >= 2.1 |

Providers:

Name | Version |
---|---|
aws | >= 2.55.0 |
kubernetes | >= 1.11.1 |
local | >= 1.4 |
null | >= 2.1 |
random | >= 2.1 |
template | >= 2.1 |
Inputs:

Name | Description | Type | Default | Required |
---|---|---|---|---|
attach_worker_cni_policy | Whether to attach the Amazon managed `AmazonEKS_CNI_Policy` IAM policy to the default worker IAM role. WARNING: If set `false` the permissions must be assigned to the `aws-node` DaemonSet pods via another method or nodes will not be able to join the cluster. | `bool` | `true` | no |
cluster_create_security_group | Whether to create a security group for the cluster or attach the cluster to `cluster_security_group_id`. | `bool` | `true` | no |
cluster_create_timeout | Timeout value when creating the EKS cluster. | `string` | `"30m"` | no |
cluster_delete_timeout | Timeout value when deleting the EKS cluster. | `string` | `"15m"` | no |
cluster_enabled_log_types | A list of the desired control plane logging to enable. For more information, see Amazon EKS Control Plane Logging documentation (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) | `list(string)` | `[]` | no |
cluster_encryption_config | Configuration block with encryption configuration for the cluster. See examples/secrets_encryption/main.tf for example format | `list(object({…}))` | `[]` | no |
cluster_endpoint_private_access | Indicates whether or not the Amazon EKS private API server endpoint is enabled. | `bool` | `false` | no |
cluster_endpoint_private_access_cidrs | List of CIDR blocks which can access the Amazon EKS private API server endpoint, when public access is disabled | `list(string)` | `["0.0.0.0/0"]` | no |
cluster_endpoint_public_access | Indicates whether or not the Amazon EKS public API server endpoint is enabled. | `bool` | `true` | no |
cluster_endpoint_public_access_cidrs | List of CIDR blocks which can access the Amazon EKS public API server endpoint. | `list(string)` | `["0.0.0.0/0"]` | no |
cluster_iam_role_name | IAM role name for the cluster. Only applicable if manage_cluster_iam_resources is set to false. | `string` | `""` | no |
cluster_log_kms_key_id | If a KMS Key ARN is set, this key will be used to encrypt the corresponding log group. Please be sure that the KMS Key has an appropriate key policy (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html) | `string` | `""` | no |
cluster_log_retention_in_days | Number of days to retain log events. Default retention - 90 days. | `number` | `90` | no |
cluster_name | Name of the EKS cluster. Also used as a prefix in names of related resources. | `string` | n/a | yes |
cluster_security_group_id | If provided, the EKS cluster will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the workers | `string` | `""` | no |
cluster_version | Kubernetes version to use for the EKS cluster. | `string` | `"1.16"` | no |
config_output_path | Where to save the Kubectl config file (if `write_kubeconfig = true`). Assumed to be a directory if the value ends with a forward slash `/`. | `string` | `"./"` | no |
create_eks | Controls if EKS resources should be created (it affects almost all resources) | `bool` | `true` | no |
eks_oidc_root_ca_thumbprint | Thumbprint of Root CA for EKS OIDC, Valid until 2037 | `string` | `"9e99a48a9960b14926bb7f3b02e22da2b0ab7280"` | no |
enable_irsa | Whether to create OpenID Connect Provider for EKS to enable IRSA | `bool` | `false` | no |
iam_path | If provided, all IAM roles will be created on this path. | `string` | `"/"` | no |
kubeconfig_aws_authenticator_additional_args | Any additional arguments to pass to the authenticator such as the role to assume. e.g. ["-r", "MyEksRole"]. | `list(string)` | `[]` | no |
kubeconfig_aws_authenticator_command | Command to use to fetch AWS EKS credentials. | `string` | `"aws-iam-authenticator"` | no |
kubeconfig_aws_authenticator_command_args | Default arguments passed to the authenticator command. Defaults to [token -i $cluster_name]. | `list(string)` | `[]` | no |
kubeconfig_aws_authenticator_env_variables | Environment variables that should be used when executing the authenticator. e.g. { AWS_PROFILE = "eks"}. | `map(string)` | `{}` | no |
kubeconfig_name | Override the default name used for items kubeconfig. | `string` | `""` | no |
manage_aws_auth | Whether to apply the aws-auth configmap file. | `bool` | `true` | no |
manage_cluster_iam_resources | Whether to let the module manage cluster IAM resources. If set to false, cluster_iam_role_name must be specified. | `bool` | `true` | no |
manage_worker_iam_resources | Whether to let the module manage worker IAM resources. If set to false, iam_instance_profile_name must be specified for workers. | `bool` | `true` | no |
map_accounts | Additional AWS account numbers to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | `list(string)` | `[]` | no |
map_roles | Additional IAM roles to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | `list(object({…}))` | `[]` | no |
map_users | Additional IAM users to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | `list(object({…}))` | `[]` | no |
node_groups | Map of map of node groups to create. See `node_groups` module's documentation for more details | `any` | `{}` | no |
node_groups_defaults | Map of values to be applied to all node groups. See `node_groups` module's documentation for more details | `any` | `{}` | no |
permissions_boundary | If provided, all IAM roles will be created with this permissions boundary attached. | `string` | `null` | no |
subnets | A list of subnets to place the EKS cluster and workers within. | `list(string)` | n/a | yes |
tags | A map of tags to add to all resources. | `map(string)` | `{}` | no |
vpc_id | VPC where the cluster and workers will be deployed. | `string` | n/a | yes |
wait_for_cluster_cmd | Custom local-exec command to execute for determining if the EKS cluster is healthy. Cluster endpoint will be available as an environment variable called ENDPOINT | `string` | "for i in `seq 1 60`; do wget --no-check-certificate -O - -q $ENDPOINT/healthz >/dev/null && exit 0 \|\| true; sleep 5; done; echo TIMEOUT && exit 1" | no |
wait_for_cluster_interpreter | Custom local-exec command line interpreter for the command used to determine if the EKS cluster is healthy. | `list(string)` | `["/bin/sh", "-c"]` | no |
worker_additional_security_group_ids | A list of additional security group ids to attach to worker instances | `list(string)` | `[]` | no |
worker_ami_name_filter | Name filter for AWS EKS worker AMI. If not provided, the latest official AMI for the specified 'cluster_version' is used. | `string` | `""` | no |
worker_ami_name_filter_windows | Name filter for AWS EKS Windows worker AMI. If not provided, the latest official AMI for the specified 'cluster_version' is used. | `string` | `""` | no |
worker_ami_owner_id | The ID of the owner for the AMI to use for the AWS EKS workers. Valid values are an AWS account ID, 'self' (the current account), or an AWS owner alias (e.g. 'amazon', 'aws-marketplace', 'microsoft'). | `string` | `"602401143452"` | no |
worker_ami_owner_id_windows | The ID of the owner for the AMI to use for the AWS EKS Windows workers. Valid values are an AWS account ID, 'self' (the current account), or an AWS owner alias (e.g. 'amazon', 'aws-marketplace', 'microsoft'). | `string` | `"801119661308"` | no |
worker_create_cluster_primary_security_group_rules | Whether to create security group rules to allow communication between pods on workers and pods using the primary cluster security group. | `bool` | `false` | no |
worker_create_initial_lifecycle_hooks | Whether to create initial lifecycle hooks provided in worker groups. | `bool` | `false` | no |
worker_create_security_group | Whether to create a security group for the workers or attach the workers to `worker_security_group_id`. | `bool` | `true` | no |
worker_groups | A list of maps defining worker group configurations to be defined using AWS Launch Configurations. See workers_group_defaults for valid keys. | `any` | `[]` | no |
worker_groups_launch_template | A list of maps defining worker group configurations to be defined using AWS Launch Templates. See workers_group_defaults for valid keys. | `any` | `[]` | no |
worker_security_group_id | If provided, all workers will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the EKS cluster. | `string` | `""` | no |
worker_sg_ingress_from_port | Minimum port number from which pods will accept communication. Must be changed to a lower value if some pods in your cluster will expose a port lower than 1025 (e.g. 22, 80, or 443). | `number` | `1025` | no |
workers_additional_policies | Additional policies to be added to workers | `list(string)` | `[]` | no |
workers_group_defaults | Override default values for target groups. See workers_group_defaults_defaults in local.tf for valid keys. | `any` | `{}` | no |
workers_role_name | User defined workers role name. | `string` | `""` | no |
write_kubeconfig | Whether to write a Kubectl config file containing the cluster configuration. Saved to `config_output_path`. | `bool` | `true` | no |
Outputs:

Name | Description |
---|---|
cloudwatch_log_group_name | Name of cloudwatch log group created |
cluster_arn | The Amazon Resource Name (ARN) of the cluster. |
cluster_certificate_authority_data | Nested attribute containing certificate-authority-data for your cluster. This is the base64 encoded certificate data required to communicate with your cluster. |
cluster_endpoint | The endpoint for your EKS Kubernetes API. |
cluster_iam_role_arn | IAM role ARN of the EKS cluster. |
cluster_iam_role_name | IAM role name of the EKS cluster. |
cluster_id | The name/id of the EKS cluster. |
cluster_oidc_issuer_url | The URL on the EKS cluster OIDC Issuer |
cluster_primary_security_group_id | The cluster primary security group ID created by the EKS cluster on 1.14 or later. Referred to as 'Cluster security group' in the EKS console. |
cluster_security_group_id | Security group ID attached to the EKS cluster. On 1.14 or later, this is the 'Additional security groups' in the EKS console. |
cluster_version | The Kubernetes server version for the EKS cluster. |
config_map_aws_auth | A kubernetes configuration to authenticate to this EKS cluster. |
kubeconfig | kubectl config file contents for this EKS cluster. |
kubeconfig_filename | The filename of the generated kubectl config. |
node_groups | Outputs from EKS node groups. Map of maps, keyed by var.node_groups keys |
oidc_provider_arn | The ARN of the OIDC Provider if `enable_irsa = true`. |
security_group_rule_cluster_https_worker_ingress | Security group rule responsible for allowing pods to communicate with the EKS cluster API. |
worker_iam_instance_profile_arns | default IAM instance profile ARN for EKS worker groups |
worker_iam_instance_profile_names | default IAM instance profile name for EKS worker groups |
worker_iam_role_arn | default IAM role ARN for EKS worker groups |
worker_iam_role_name | default IAM role name for EKS worker groups |
worker_security_group_id | Security group ID attached to the EKS workers. |
workers_asg_arns | IDs of the autoscaling groups containing workers. |
workers_asg_names | Names of the autoscaling groups containing workers. |
workers_default_ami_id | ID of the default worker group AMI |
workers_launch_template_arns | ARNs of the worker launch templates. |
workers_launch_template_ids | IDs of the worker launch templates. |
workers_launch_template_latest_versions | Latest versions of the worker launch templates. |
workers_user_data | User data of worker groups |