Create Amazon EKS Cluster within its VPC using Terraform

Piyush Jajoo

Posted on July 16, 2023

In this blog post, we'll help you dive into the world of Kubernetes and Infrastructure as Code using Amazon Elastic Kubernetes Service (EKS) and Terraform. We'll walk you through setting up your own EKS cluster in a dedicated Virtual Private Cloud (VPC) with public and private subnets. Don't worry if you're new to Terraform and EKS; we'll explain everything step by step and provide a GitHub repository with all the code for reference. Get ready for an exciting learning journey!

In this blog, we will cover the following topics:

  1. Implementing modularization in Terraform and utilizing external modules.

  2. Creating an EKS Cluster within a fully configured VPC.

  3. Establishing a connection to the EKS Cluster and installing a Helm chart.


Prerequisites

  • Basic understanding of AWS, EKS, VPC, and Terraform

  • An AWS account with the necessary permissions to create VPCs, subnets, EKS clusters, etc.

  • aws CLI configured to point to your AWS account; you will need this to generate the kubeconfig used to connect to the cluster.

  • kubectl installed, at a version compatible with the EKS version you are installing.

  • A recent version of Terraform. I used v1.5.2 on macOS for this blog. If you want to manage multiple versions of Terraform, use tfswitch; I love it.

  • If you want to learn how to generate documentation from Terraform files, install terraform-docs.

  • helm installed, a package manager for Kubernetes manifests; we will use it to install the nginx Helm chart once the cluster is created. A quick version check for all of these tools is sketched below.
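
If you want to confirm these tools are available before you start, a quick check like the following works; the versions shown in the comments are only what I happened to use, not strict requirements:

# confirm the CLIs used in this blog are installed
terraform version         # e.g. Terraform v1.5.2
aws --version             # aws-cli/2.x
kubectl version --client  # should be compatible with your target EKS version
helm version --short      # v3.x
terraform-docs --version  # optional, only needed for generating docs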


What are we going to create?

In this blog, the Terraform modules we develop will generate the following necessary resources for a functional EKS Cluster. Additionally, we will demonstrate the functionality of the EKS Cluster by installing an nginx Helm chart.

  • VPC

  • Subnets (3 public and 3 private)

  • 1 NAT Gateway per AZ with corresponding Elastic IPs

  • Internet Gateway

  • Public and Private Route tables

  • EKS Cluster with OIDC Provider

  • EKS AddOns

    • coredns
    • vpc-cni
    • kube-proxy
  • EKS Managed node group

Here is the GitHub repository containing all the code we write and discuss in this blog. Feel free to clone it and play around.


Terraform modules directory structure

To structure your Terraform modules effectively, follow these steps; the directory and file layout is outlined below:

  1. Create a directory called my-eks-tf on your machine.

  2. Within the my-eks-tf directory, organize your modules as follows:

    • Create a modules directory, which contains the eks and vpc_and_subnets modules. These modules should have opinionated defaults set by your Core Platform team, allowing developers to only modify certain values.
    • Create a cluster directory, which includes the scaffold module. This module invokes the eks and vpc_and_subnets modules and provides additional abstraction. It can have hardcoded values, simplifying the parameters that developers need to specify.
  3. The cluster module is invoked by the code in my-eks-tf/main.tf, which is written by a developer on your team.

  4. Each module should have the following files: main.tf, variables.tf, and outputs.tf.

    • main.tf contains the actual terraform code, including provider settings and invocations of external APIs to create resources in the cloud provider.
    • variables.tf declares input variables that can be overridden when invoking the module.
    • outputs.tf declares variables that can be used by other modules. For example, the vpc module might output subnet IDs that are used by the eks module.
  5. Use .tfvars files as input files for your terraform modules. Consider having separate files for each environment, such as dev.tfvars, stage.tfvars, and prod.tfvars.

Although it's possible to split the modules discussed here across multiple repositories, for simplicity we kept everything within the same repository. However, you can consider separating the modules directory, the scaffolding modules (e.g., the cluster module) built by the Platform team member on your team, and a separate repository for the .tfvars files and the module that invokes the scaffolding modules.
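
For instance, if the modules directory were moved into its own repository, the scaffold module could reference it with a git source instead of a relative path. This is only a sketch; the repository URL and tag below are placeholders:

module "vpc_with_subnets" {
  # hypothetical remote module source; pin to a tag or commit via ?ref=
  source = "git::https://github.com/<your-org>/terraform-modules.git//vpc_and_subnets?ref=v1.0.0"

  name     = var.vpc_name
  vpc_cidr = var.vpc_cidr
}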

If you want to explore the code and its structure, you can clone it from the corresponding github repository, which follows the same structure we discussed above.

By following this structure, you can organize your terraform modules effectively and simplify the process for developers to provision infrastructure using the Platform APIs.

my-eks-tf/
.
├── README.md
├── cluster               
│   ├── README.md
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
├── modules               
│   ├── README.md
│   ├── eks
│   │   ├── README.md
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── vpc_and_subnets
│       ├── README.md
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
├── main.tf
├── outputs.tf
├── sample.tfvars
└── variables.tf

vpc and subnets module

In this section, we will discuss the vpc and subnets module. We utilize the open-source module terraform-aws-modules/vpc/aws to create a VPC with associated components such as subnets (private and public), internet gateway, NAT gateways, and route tables (private and public). For detailed information, please refer to the module documentation.

In our opinionated vpc_and_subnets module, we prompt the user for parameters like VPC name, CIDR block, and whether they want to create an internet gateway or NAT gateways. We place the module invocation in main.tf and define variables in variables.tf. The outputs.tf file contains the outputs required for other modules.

It is worth noting that we pin the module source to version 5.0.0 of the open-source module. This is a good practice as it locks down the module version, preventing unexpected behavior due to compatibility issues in newer versions. We create the VPC in the first three availability zones returned by the aws_availability_zones data source of the AWS provider. Additionally, we reference local variables using local.private_subnets and local.public_subnets within the module. In the locals block, we leverage the Terraform cidrsubnet function to generate 3 public and 3 private subnets.

Please note that the provided code snippet may have been abbreviated for brevity. You can find the complete working code for main.tf here.

data "aws_availability_zones" "available" {}

locals {
  newbits = 8
  netcount = 6
  all_subnets = [for i in range(local.netcount) : cidrsubnet(var.vpc_cidr, local.newbits, i)]
  public_subnets  = slice(local.all_subnets, 0, 3)
  private_subnets = slice(local.all_subnets, 3, 6)
}

# vpc module to create vpc, subnets, NATs, IGW etc..
module "vpc_and_subnets" {
  # invoke public vpc module
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.0.0"

  # vpc name
  name = var.name

  # availability zones
  azs = slice(data.aws_availability_zones.available.names, 0, 3)

  # vpc cidr
  cidr = var.vpc_cidr

  # public and private subnets
  private_subnets = local.private_subnets
  public_subnets  = local.public_subnets

  # create nat gateways
  enable_nat_gateway     = var.enable_nat_gateway
  single_nat_gateway     = var.single_nat_gateway
  one_nat_gateway_per_az = var.one_nat_gateway_per_az

  # enable dns hostnames and support
  enable_dns_hostnames = var.enable_dns_hostnames
  enable_dns_support   = var.enable_dns_support

  # tags for public, private subnets and vpc
  tags                = var.tags
  public_subnet_tags  = var.additional_public_subnet_tags
  private_subnet_tags = var.additional_private_subnet_tags

  # create internet gateway
  create_igw       = var.create_igw
  instance_tenancy = var.instance_tenancy

}
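If you are curious what those cidrsubnet expressions evaluate to, you can check in terraform console. Assuming a 10.0.0.0/16 VPC CIDR (the example used throughout this blog), newbits = 8 with netnums 0 through 5 yields six /24 subnets; the first three become the public subnets and the last three the private subnets:

$ terraform console
> [for i in range(6) : cidrsubnet("10.0.0.0/16", 8, i)]
[
  "10.0.0.0/24",
  "10.0.1.0/24",
  "10.0.2.0/24",
  "10.0.3.0/24",
  "10.0.4.0/24",
  "10.0.5.0/24",
]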

A sample variables.tf file looks as below; for the working code, please refer here.

variable "name" {
  type        = string
  description = "name of the vpc"
}

variable "vpc_cidr" {
  type        = string
  description = <<EOT
    vpc cidr
    e.g. 10.0.0.0/16
  EOT
}

variable "enable_nat_gateway" {
  description = "Should be true if you want to provision NAT Gateways for each of your private networks"
  type        = bool
  default     = true
}

variable "single_nat_gateway" {
  description = "Should be true if you want to provision a single shared NAT Gateway across all of your private networks"
  type        = bool
  default     = false
}

variable "one_nat_gateway_per_az" {
  description = "Should be true if you want only one NAT Gateway per availability zone."
  type        = bool
  default     = true
}

variable "enable_dns_hostnames" {
  description = "Should be true to enable DNS hostnames in the VPC"
  type        = bool
  default     = true
}

variable "enable_dns_support" {
  description = "Should be true to enable DNS support in the VPC"
  type        = bool
  default     = true
}

variable "tags" {
  description = "A mapping of tags to assign to all resources"
  type        = map(string)
  default     = {}
}

variable "additional_public_subnet_tags" {
  description = "Additional tags for the public subnets"
  type        = map(string)
  default     = {}
}

variable "additional_private_subnet_tags" {
  description = "Additional tags for the private subnets"
  type        = map(string)
  default     = {}
}

variable "create_igw" {
  description = "Controls if an Internet Gateway is created for public subnets and the related routes that connect them."
  type        = bool
  default     = true
}

variable "instance_tenancy" {
  description = "A tenancy option for instances launched into the VPC"
  type        = string
  default     = "default"
}

A sample outputs.tf file looks as below; for the working code, please refer here. Below is an example of how to declare an output variable that retrieves a value from the outputs of an invoked module. Here it retrieves outputs from the vpc_and_subnets module invocation, which wraps the terraform-aws-modules/vpc/aws module.

output "vpc_id" {
  description = "The ID of the VPC"
  value       = module.vpc_and_subnets.vpc_id
}

output "private_subnets" {
  description = "List of IDs of private subnets"
  value       = module.vpc_and_subnets.private_subnets
}

output "public_subnets" {
  description = "List of IDs of public subnets"
  value       = module.vpc_and_subnets.public_subnets
}

output "public_route_table_ids" {
  description = "List of IDs of public route tables"
  value       = module.vpc_and_subnets.public_route_table_ids
}

output "private_route_table_ids" {
  description = "List of IDs of private route tables"
  value       = module.vpc_and_subnets.private_route_table_ids
}

output "nat_ids" {
  description = "List of allocation ID of Elastic IPs created for AWS NAT Gateway"
  value       = module.vpc_and_subnets.nat_ids
}

output "nat_public_ips" {
  description = "List of public Elastic IPs created for AWS NAT Gateway"
  value       = module.vpc_and_subnets.nat_public_ips
}

output "natgw_ids" {
  description = "List of NAT Gateway IDs"
  value       = module.vpc_and_subnets.natgw_ids
}

output "igw_id" {
  description = "The ID of the Internet Gateway"
  value       = module.vpc_and_subnets.igw_id
}

eks module

In this section we will discuss the EKS cluster and EKS managed node groups. We utilize the open-source module terraform-aws-modules/eks/aws to create an EKS cluster in the given subnets along with a managed EKS node group. For detailed information, please refer to the module documentation.

In our opinionated eks module, we prompt the user for parameters like the VPC id, subnet ids for the EKS cluster, subnet ids for the EKS node groups, and the EKS cluster name. By default, the module creates an EKS cluster with k8s version 1.27, but you can override that as well. By default, this module also creates an EKS managed node group named worker, but this configuration can be overridden. We place the module invocation in main.tf and define variables in variables.tf. The outputs.tf file contains the outputs required for other modules.

It is worth noting that we pin the module source to version 19.15.3 of the open-source module. This is a good practice as it locks down the module version, preventing unexpected behavior due to compatibility issues in newer versions. We create the EKS cluster in the provided subnets and the EKS managed node groups in the provided subnets. By default, this module creates a public and private endpoint for the EKS cluster, enables IRSA by creating an OIDC provider, and installs the coredns, vpc-cni, and kube-proxy addons.

Please note that the provided code snippet may have been abbreviated for brevity. You can find the working code for main.tf here.

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.15.3"

  # eks cluster name and version
  cluster_name    = var.eks_cluster_name
  cluster_version = var.k8s_version

  # vpc id where the eks cluster security group needs to be created
  vpc_id = var.vpc_id

  # subnets where the eks cluster needs to be created
  control_plane_subnet_ids = var.control_plane_subnet_ids

  # to enable public and private access for eks cluster endpoint
  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = true

  # create an OpenID Connect Provider for EKS to enable IRSA
  enable_irsa = true

  # install eks managed addons
  # more details are here - https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html
  cluster_addons = {
    # extensible DNS server that can serve as the Kubernetes cluster DNS
    coredns = {
      preserve    = true
      most_recent = true
    }

    # maintains network rules on each Amazon EC2 node. It enables network communication to your Pods
    kube-proxy = {
      most_recent = true
    }

    # a Kubernetes container network interface (CNI) plugin that provides native VPC networking for your cluster
    vpc-cni = {
      most_recent = true
    }
  }

  # subnets where the eks node groups needs to be created
  subnet_ids = var.eks_node_groups_subnet_ids

  # eks managed node group named worker
  eks_managed_node_groups = var.workers_config
}

A sample variables.tf file looks as below; for the working code, please refer here.

variable "eks_cluster_name" {
  type        = string
  description = "eks cluster name"
}

variable "k8s_version" {
  type        = string
  description = "kubernetes version"
  default     = "1.27"
}

variable "control_plane_subnet_ids" {
  type        = list(string)
  description = "subnet ids where the eks cluster should be created"
}

variable "eks_node_groups_subnet_ids" {
  type        = list(string)
  description = "subnet ids where the eks node groups needs to be created"
}

variable "vpc_id" {
  type        = string
  description = "vpc id where the cluster security group needs to be created"
}

variable "workers_config" {
  type        = map(any)
  description = "workers config"
  default = {
    worker = {
      min_size     = 1
      max_size     = 2
      desired_size = 1

      instance_types = ["t3.large"]
      capacity_type  = "SPOT"
    }
  }
}

A sample outputs.tf file looks as below; for the working code, please refer here. Below is an example of how to declare an output variable that retrieves a value from the outputs of an invoked module. Here it retrieves outputs from the eks module invocation, which wraps the terraform-aws-modules/eks/aws module.

output "cluster_arn" {
  description = "The Amazon Resource Name (ARN) of the cluster"
  value       = module.eks.cluster_arn
}

output "cluster_certificate_authority_data" {
  description = "Base64 encoded certificate data required to communicate with the cluster"
  value       = module.eks.cluster_certificate_authority_data
}

output "cluster_endpoint" {
  description = "Endpoint for your Kubernetes API server"
  value       = module.eks.cluster_endpoint
}

output "cluster_oidc_issuer_url" {
  description = "The URL on the EKS cluster for the OpenID Connect identity provider"
  value       = module.eks.cluster_oidc_issuer_url
}

output "oidc_provider" {
  description = "The OpenID Connect identity provider (issuer URL without leading `https://`)"
  value       = module.eks.oidc_provider
}

output "oidc_provider_arn" {
  description = "The ARN of the OIDC Provider"
  value       = module.eks.oidc_provider_arn
}

Set up aws provider version

When working with Terraform modules, it is considered a best practice to specify the version of the AWS provider you are using. By explicitly defining the version, you ensure that the module is built and tested consistently, mitigating the risk of compatibility issues. If the version is not specified, Terraform will automatically retrieve the latest version, which may lead to unexpected behavior and potential compatibility problems. In this example I am pinning it to 5.6.2 (docs), the latest version at the time of writing this blog.

Copy the following code block into the main.tf of both the eks and vpc_and_subnets modules in the modules/ directory.

# setup aws terraform provider version to be used
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.6.2"
    }
  }
}

cluster module

So far we have explored how a Platform team creates the EKS and VPC module APIs. In this section we will delve into how a Platform-focused member of a developer team can leverage these opinionated Terraform modules to build a scaffold module for their team. This scaffold module simplifies the process for developers, enabling them to create the desired EKS cluster by invoking the module with just a few parameters. Although the module we build here is tailored to the specific needs of an individual team, the parameterized nature of the APIs (modules) developed by the Platform team allows them to be utilized by multiple teams.

Below is the main.tf of the cluster module, which creates an EKS cluster and an EKS node group in the private subnets of its own VPC.

You will observe that the vpc_with_subnets and eks_with_node_group modules refer to the vpc_and_subnets and eks modules we built in the sections above. You will also observe that the eks_with_node_group module uses the vpc_id and private_subnets outputs of the vpc_with_subnets module. This creates a dependency ordering, such that the EKS module invocation waits until the VPC and subnets are created.

The following code may have been abbreviated for brevity; please refer to the working code here.

# invoking vpc and subnets modules
module "vpc_with_subnets" {
  # invoke vpc_and_subnets module under modules directory
  source = "../modules/vpc_and_subnets"

  # passing the required parameters
  name     = var.vpc_name
  vpc_cidr = var.vpc_cidr
}

# invoking eks module to create eks cluster and node group
module "eks_with_node_group" {
  # invoke eks module under modules directory
  source = "../modules/eks"

  # passing the required parameters
  eks_cluster_name = var.eks_cluster_name
  k8s_version      = var.k8s_version

  # pass vpc and subnet details from vpc_with_subnets module
  vpc_id                     = module.vpc_with_subnets.vpc_id
  eks_node_groups_subnet_ids = module.vpc_with_subnets.private_subnets
  control_plane_subnet_ids   = module.vpc_with_subnets.private_subnets
}

Since this API is tailored for a specific team, the variables.tf file has fewer parameters compared to the actual modules. It either uses the default values from the modules or sets its own defaults. In our example, users of the cluster module only need to provide vpc_name, vpc_cidr, eks_cluster_name, and k8s_version. The vpc_id and private_subnets are obtained from the VPC module's outputs. Even vpc_name, vpc_cidr, and k8s_version have default values. In the next section, we will demonstrate how invoking this module is as simple as specifying the eks_cluster_name to create our EKS cluster.

The variables.tf file may have been abbreviated for brevity; please refer to the working code here.

variable "vpc_name" {
  type        = string
  description = "name of the vpc to be created"
  default     = "platformwale"
}

variable "vpc_cidr" {
  type        = string
  description = "vpc cidr block to be used"
  default     = "10.0.0.0/16"
}

variable "eks_cluster_name" {
  type        = string
  description = "eks cluster name"
}

variable "k8s_version" {
  type        = string
  description = "kubernetes version"
  default     = "1.27"
}

This module simply outputs the VPC and EKS cluster details that may be useful for the developer. The following outputs.tf may have been abbreviated for brevity; please refer to the working code here.

output "vpc_id" {
  description = "The ID of the VPC"
  value       = module.vpc_with_subnets.vpc_id
}

output "private_subnets" {
  description = "List of IDs of private subnets"
  value       = module.vpc_with_subnets.private_subnets
}

output "public_subnets" {
  description = "List of IDs of public subnets"
  value       = module.vpc_with_subnets.public_subnets
}

output "cluster_certificate_authority_data" {
  description = "Base64 encoded certificate data required to communicate with the cluster"
  value       = module.eks_with_node_group.cluster_certificate_authority_data
}

output "cluster_endpoint" {
  description = "Endpoint for your Kubernetes API server"
  value       = module.eks_with_node_group.cluster_endpoint
}

output "cluster_oidc_issuer_url" {
  description = "The URL on the EKS cluster for the OpenID Connect identity provider"
  value       = module.eks_with_node_group.cluster_oidc_issuer_url
}

Prepare to invoke cluster module

In the previous section, we demonstrated the construction of the cluster module, which establishes an EKS Cluster within its own VPC, using private subnets and an EKS Node Group. Now, all that remains is to prepare the files necessary for invoking the cluster module. This step can be accomplished by either the Platform team member or the developer themselves.

During the invocation process, we need to configure three crucial elements:

  1. The Terraform backend: This informs the Terraform CLI where to store the tfstate file generated during execution, typically utilizing an S3 backend.

  2. The AWS provider: This enables the Terraform CLI to authenticate with AWS and determine where the resources should be created.

  3. The invocation of the cluster module: This triggers the creation of the desired resources.

In the following scenario, we inform Terraform that we will use an S3 backend and specify that the AWS provider should utilize the credentials provided via the command line. This configuration will allow us to create resources within the designated AWS region. In the next section, we will explore how to configure the Terraform CLI.

The following main.tf may have been abbreviated for brevity; please refer to the working code here.

# to use s3 backend 
# s3 bucket is configured at command line
terraform {
  backend "s3" {}
}

# setup terraform aws provider to create resources
provider "aws" {
  region = var.region
}

# invoke cluster module which creates vpc, subnets and eks cluster
module "cluster" {
  source = "./cluster"

  eks_cluster_name = var.cluster_name
}

In the variables.tf file, only the EKS cluster name needs to be provided by the user. Other values can be overridden if desired. During invocation, you will observe that we only pass the EKS cluster name. This is because the cluster module only requires the cluster name as a parameter, while all other parameters have default values.

The variables.tf below may have been abbreviated for brevity; please refer to the working code here.

variable "region" {
  type        = string
  description = "aws region where the resources are being created"
}

variable "vpc_name" {
  type        = string
  description = "name of the vpc to be created"
  default     = "platformwale"
}

variable "vpc_cidr" {
  type        = string
  description = "vpc cidr block to be used"
  default     = "10.0.0.0/16"
}

variable "cluster_name" {
  type        = string
  description = "eks cluster name"
  default     = "platformwale"
}

variable "k8s_version" {
  type        = string
  description = "k8s version"
  default     = "1.27"
}

The outputs.tf only exposes the EKS details, as those are likely the only details the developer is interested in. The outputs.tf below may have been abbreviated for brevity; please refer to the working code here.

output "cluster_certificate_authority_data" {
  description = "Base64 encoded certificate data required to communicate with the cluster"
  value       = module.cluster.cluster_certificate_authority_data
}

output "cluster_endpoint" {
  description = "Endpoint for your Kubernetes API server"
  value       = module.cluster.cluster_endpoint
}

output "cluster_oidc_issuer_url" {
  description = "The URL on the EKS cluster for the OpenID Connect identity provider"
  value       = module.cluster.cluster_oidc_issuer_url
}

Finally, we must create the .tfvars file. This file can be tailored to your specific environment, such as dev.tfvars or test.tfvars, containing environment-specific configurations. In our example, we only create a sample.tfvars file where we specify the region and EKS cluster name. This is all that's required to create an EKS cluster within its own VPC, thanks to the modularization capabilities of Terraform and the creation of an API.

# aws region
region = "us-east-2"

# eks cluster name
cluster_name = "platformwale"

Deploy Terraform to create VPC, Subnets and EKS Cluster

Make sure your terminal is configured to talk to the desired AWS account where you want to create the resources. I have described a couple of ways to do this in the How to execute section of my README.
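
For example, one common approach (assuming you already have a named profile set up via aws configure) is to export the profile and region and then verify which identity your credentials resolve to:

# point the AWS CLI and the Terraform AWS provider at the desired account
export AWS_PROFILE="<your aws profile>"
export AWS_REGION="us-east-2"

# confirm the account and role your credentials resolve to
aws sts get-caller-identity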

To initialize Terraform and download the necessary provider plugins, navigate to the my-eks-tf directory containing the main.tf and .tfvars files created in the sections above. Make sure the .tfvars file is prepared; you can refer to the sample.tfvars file in the GitHub repository.

Below we create an S3 bucket to store the tfstate file for our execution, point the Terraform CLI's -backend-config flags at that bucket, and initialize the Terraform module.

Execute as below:

# tfstate s3 bucket name
tfstate_bucket_name="unique s3 bucket name"

# make sure to create the s3 bucket for tfstate file if it doesn't exist
aws s3api create-bucket --bucket "${tfstate_bucket_name}" --region "us-east-1"

# tfstate file name
tfstate_file_name="<some name e.g. eks-1111111111>"

# initialize the terraform module
terraform init -backend-config "key=${tfstate_file_name}" -backend-config "bucket=${tfstate_bucket_name}" -backend-config "region=us-east-1"

To validate the setup and see what resources Terraform will create, use:

terraform plan -var-file="path/to/your/terraform.tfvars"

# example
terraform plan -var-file="sample.tfvars"

To apply the changes and create the VPC, Subnets and EKS cluster, use:

terraform apply -var-file="path/to/your/terraform.tfvars"

# example
terraform apply -var-file="sample.tfvars"

Terraform will show you an execution plan, indicating what resources it will create. If everything looks good, type "yes" to proceed.

After ~15 minutes, your EKS cluster should be up and running!
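
Once the apply completes, you can inspect the values declared in outputs.tf with terraform output; the values below are placeholders for illustration only:

$ terraform output
cluster_certificate_authority_data = "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t..."
cluster_endpoint = "https://XXXXXXXXXXXXXXXX.gr7.us-east-2.eks.amazonaws.com"
cluster_oidc_issuer_url = "https://oidc.eks.us-east-2.amazonaws.com/id/XXXXXXXXXXXXXXXX"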


Connect to your EKS Cluster

After the EKS cluster is created, you need to update your kubeconfig to point kubectl to the newly created EKS cluster, and then you can install the nginx Helm chart.

Run the following commands:

# retrieve kubeconfig
aws eks update-kubeconfig --region "<aws region>" --name "<eks cluster name>"

This will update the existing kubeconfig at ~/.kube/config on your local machine and set the current-context to point to the new EKS cluster. Now you can interact with your EKS cluster using kubectl.

Check that the kubeconfig context is now pointing to the newly installed cluster; the example below shows kubectl pointing to the new EKS cluster:

$ kubectl config current-context
arn:aws:eks:us-east-2:xxxxxxxx:cluster/platformwale
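You can also verify that the managed node group has registered its worker node(s) with the cluster; the node name, age, and exact version below are illustrative:

$ kubectl get nodes
NAME                                       STATUS   ROLES    AGE   VERSION
ip-10-0-3-101.us-east-2.compute.internal   Ready    <none>   5m    v1.27.x-eks-xxxxxxx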

Install the nginx Helm chart as below; this will create an external load balancer:

# add bitnami helm chart repo
helm repo add bitnami https://charts.bitnami.com/bitnami

# install nginx helm chart
helm install -n default nginx bitnami/nginx

Make sure the nginx pod is running and an external LoadBalancer service has been created, as in the example below:

$ kubectl get pods -n default
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7c8ff57685-77ln4   1/1     Running   0          6m45s

$ kubectl get svc -n default nginx
NAME    TYPE           CLUSTER-IP     EXTERNAL-IP                                PORT(S)        AGE
nginx   LoadBalancer   172.20.45.97   xxxxxxxxxxxx.us-east-2.elb.amazonaws.com   80:31008/TCP   5s

This means you have a functional EKS cluster and are able to successfully deploy services to it.

You can open the EXTERNAL-IP (the load balancer's public DNS record) in your browser; if the nginx pod is running correctly, you should see the default nginx welcome page.
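
Alternatively, you can check from the terminal with curl once the load balancer's DNS record has propagated; the hostname below is a placeholder, and the response shown assumes the chart serves the stock nginx welcome page:

$ curl -s http://xxxxxxxxxxxx.us-east-2.elb.amazonaws.com | head -n 4
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>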

Refer this section of the README for complete instructions.


Cleanup

This is the most important step: make sure you destroy all the resources you created with Terraform earlier, otherwise you may see unexpected costs in your AWS account.

First, uninstall the nginx Helm chart to remove the load balancer it created:

# uninstall nginx chart
helm uninstall -n default nginx

# make sure nginx svc is gone
$ kubectl get svc -n default nginx
Error from server (NotFound): services "nginx" not found

Now destroy the infrastructure using the following Terraform command; in about ~10 minutes all the infrastructure will be destroyed:

# destroy infrastructure
terraform destroy -var-file="sample.tfvars"
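
Optionally, if you no longer need the S3 bucket you created earlier for the tfstate file, you can remove it too; note that this deletes the bucket and the state file inside it, so only do this after the destroy has completed successfully:

# remove the tfstate bucket and its contents
aws s3 rb "s3://${tfstate_bucket_name}" --force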

Refer this section of the README for complete instructions.


Formatting and Documentation

In the sections below, we introduce a few handy tools and commands for managing your Terraform code.

Keep Terraform Formatted

It's always good practice to format the Terraform manifests for better readability. As you might have seen, the Terraform files in the GitHub repository are well formatted. We used the following commands to format them.

# for recursively formatting all the files in current and sub-directories
terraform fmt -recursive

# for formatting files in a current directory only
terraform fmt

Generate Documentation

This is a bonus section introducing the terraform-docs CLI, which autogenerates markdown documentation for your Terraform. All of the sections about Terraform parameters in the READMEs you see in the GitHub repository were autogenerated by this CLI tool.

Below is an example of how we generated the docs for modules/eks:

cd my-eks-tf/modules/eks
terraform-docs markdown .

This command generates the markdown from the main.tf, variables.tf, and outputs.tf files. The CLI supports other output formats as well; feel free to explore.


Conclusion

In this blog post, we demonstrated how to use Terraform to create a VPC with private and public subnets and an Amazon EKS cluster in the private subnets. You can now deploy your applications to your EKS cluster and enjoy the scalability and reliability offered by Kubernetes.

We have witnessed the remarkable capabilities of Terraform, enabling us to swiftly construct intricate systems without writing extensive code. Additionally, we explored essential aspects such as Terraform modules, formatting, and documentation.



Disclaimer

This blog post serves as a guide to create an Amazon Elastic Kubernetes Service (EKS) cluster in a Virtual Private Cloud (VPC) using private subnets with Terraform. While we aim to provide clear and accurate instructions, you are solely responsible for any actions taken based on this guide.

Please be aware that creating and using resources within AWS, including the creation of an EKS cluster and associated resources, may incur costs. We strongly recommend that you check and understand the pricing details for Amazon EKS, EC2 instances, and other related services on the AWS Pricing page (https://aws.amazon.com/pricing/) before you proceed. Be sure to clean up the resources (by running 'terraform destroy') after use to avoid any unnecessary costs.

Always adhere to the best practices of security and infrastructure management while using AWS services. This guide assumes that you have the necessary permissions to create and manage resources in your AWS account.

Use this guide at your own risk. The author or the organization represented by the author will not be held liable for any damage, data loss, or cost incurred due to the direct or indirect use of the provided guide.

Stay safe and manage your resources wisely!


Author Notes

Feel free to reach out with any concerns or questions you have, either on the GitHub repository or directly on this blog. I will make every effort to address your inquiries and provide resolutions. Stay tuned for the upcoming blog in this series dedicated to Platformwale (Engineers who work on Infrastructure Platform teams).


Originally published at https://platformwale.blog on July 15, 2023.
