Published October 10, 2023
Written by Praveen Kumar Patidar
A video presentation of the architectural walkthrough accompanies this post.
tf-vpc/main.tf

This file uses the VPC Terraform module to simplify VPC creation. The current solution creates a VPC with public and private subnets.
module "vpc" { source = "terraform-aws-modules/vpc/aws" name = local.workspace["vpc_name"] cidr = local.workspace["vpc_cidr"] azs = local.workspace["azs"] private_subnets = local.workspace["private_subnets"] public_subnets = local.workspace["public_subnets"] single_nat_gateway = local.workspace["single_nat_gateway"] enable_nat_gateway = local.workspace["enable_nat_gateway"] enable_vpn_gateway = local.workspace["enable_vpn_gateway"] enable_dns_hostnames = true enable_dns_support = true enable_ipv6 = true public_subnet_assign_ipv6_address_on_creation = true create_egress_only_igw = true public_subnet_ipv6_prefixes = [0, 1, 2] private_subnet_ipv6_prefixes = [3, 4, 5] enable_flow_log = true create_flow_log_cloudwatch_iam_role = true create_flow_log_cloudwatch_log_group = true public_subnet_tags = { "kubernetes.io/role/elb" = 1 } private_subnet_tags = { "kubernetes.io/role/internal-elb" = 1 } tags = { Environment = terraform.workspace } }
tf-eks/cluster.tf

The main cluster file contains most of the configuration for the EKS cluster, along with the node groups and add-ons. The module terraform-aws-modules/eks/aws is used to minimize code complexity.
module "eks" { source = "terraform-aws-modules/eks/aws" cluster_name = local.workspace.cluster_name cluster_version = local.workspace.cluster_version enable_irsa = true cluster_endpoint_public_access = true # IPV6 #cluster_ip_family = "ipv6" #create_cni_ipv6_iam_policy = true tags = { Environment = "training" } cluster_addons = { coredns = { most_recent = true } . . . .
tf-eks/irsa.tf

This file contains all the IRSA roles required for the solution, using the latest IAM Terraform module terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks, which comes with various predefined roles and policies (e.g. ALB, autoscaler, CNI).
module "vpc_cni_irsa_role" { source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks" role_name = "eks-${module.eks.cluster_name}-vpc-cni-irsa" attach_vpc_cni_policy = true vpc_cni_enable_ipv4 = true vpc_cni_enable_ipv6 = true oidc_providers = { ex = { provider_arn = module.eks.oidc_provider_arn namespace_service_accounts = ["kube-system:aws-node"] } } tags = local.tags } module "alb_role_irsa" { source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks" version = "v5.22.0" . . . .
tf-eks/karpenter.tf

This file uses the Karpenter Terraform module to create the required IRSA roles, along with the SQS queue and the policies for the node role.
module "karpenter" { source = "terraform-aws-modules/eks/aws//modules/karpenter" cluster_name = local.workspace.cluster_name iam_role_name = "eks-${local.workspace.cluster_name}-karpenter-instance-profile" iam_role_use_name_prefix = false irsa_name = "eks-${local.workspace.cluster_name}-karpenter-irsa" irsa_use_name_prefix = false irsa_oidc_provider_arn = module.eks.oidc_provider_arn irsa_namespace_service_accounts = ["platform:karpenter"] tags = { Environment = terraform.workspace } }
Deploying EKS
Run the command below from the root directory of the repository –
TERRAFORM_ROOT_MODULE=tf-eks TERRAFORM_WORKSPACE=demo make applyAuto
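Conceptually, the applyAuto target wraps the usual Terraform workflow for the given root module and workspace. A sketch of what it might expand to (the repository's actual Makefile may differ):

cd tf-eks
terraform init
terraform workspace select demo || terraform workspace new demo
terraform apply -auto-approve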
On completion, the output will look similar to the following –
Apply complete! Resources: 64 added, 0 changed, 0 destroyed.

Outputs:

cluster_endpoint = "https://23550A7D71C998764F87D62B1A11D6A1.yl4.ap-southeast-2.eks.amazonaws.com"
cluster_security_group_id = "sg-0221d4acc0f89a2b4"
Note: The two running nodes are part of the node group created by the EKS cluster (defined in tf-eks/cluster.tf). We consider them core nodes running critical workloads, and recommend using on-demand instances for them.
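To verify the cluster and see those two core nodes, update your kubeconfig and list the nodes with the standard AWS CLI and kubectl commands (substitute your actual cluster name; the region is taken from the endpoint above):

aws eks update-kubeconfig --region ap-southeast-2 --name <cluster_name>
kubectl get nodes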