Infrastructure as Code, part 2: build Docker images and deploy to Kubernetes with Terraform

Angel Rivera

Posted on November 11, 2021

This series shows you how to get started with infrastructure as code (IaC). The goal is to help developers build a strong understanding of IaC through tutorials and code examples.

In this post, I will demonstrate how to create a Docker image for an application, then push that image to Docker Hub. I will also discuss how to deploy that Docker image to a Google Kubernetes Engine (GKE) cluster using HashiCorp's Terraform.

Here is a quick list of things we will accomplish in this post:

  1. Build a new Docker image
  2. Push the new Docker image to the Docker Hub registry
  3. Create a new GKE cluster using Terraform
  4. Create a new Terraform Kubernetes Deployment using the Terraform Kubernetes provider
  5. Destroy all the resources created using Terraform

Note: Before you can go through this part of the tutorial, make sure you have completed all the actions in the prerequisites section of part 1.

Our first task is learning how to build a Docker image based on the example Node.js application included in this code repo.

Building a Docker image

In the previous post, we used Terraform to create a new GKE cluster, but that cluster was unusable because no application or service was deployed. Because Kubernetes (K8s) is a container orchestrator, apps and services must be packaged into Docker images, which can then spawn Docker containers that execute applications or services.

Docker images are created with the docker build command, which reads a Dockerfile that specifies how to build the image. I will discuss Dockerfiles shortly, but first I want to address .dockerignore files.

What is the .dockerignore file?

The .dockerignore file excludes the files and directories that match the patterns declared in it. Using this file helps to avoid unnecessarily sending large or sensitive files and directories to the daemon, and potentially adding them to public images. In this project, the .dockerignore file excludes unnecessary files related to Terraform and Node.js local dependencies.
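As a reference point, here is a minimal sketch of what such a .dockerignore file might contain for a project like this one; the exact patterns depend on your repo, so treat these entries as illustrative:

# Terraform state and plugin directories
.terraform/
*.tfstate
*.tfstate.backup

# Node.js local dependencies and logs
node_modules/
npm-debug.log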

Understanding the Dockerfile

The Dockerfile is critical for building Docker images. It specifies how to build and configure the image, in addition to what files to import into it. Dockerfiles are flexible, so you can accomplish the same objective in many different ways. It is important that you have a solid understanding of Dockerfile capabilities so you can build functional images. Here is a breakdown of the Dockerfile contained in this project's code repo.

FROM node:12

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
COPY package*.json ./

RUN npm install --only=production

# Bundle app source
COPY . .

EXPOSE 5000
CMD ["npm", "start"]

The FROM node:12 line defines an image to inherit from. When building images, Docker inherits from a parent image. In this case it is the node:12 image, which is pulled from Docker Hub if it does not exist locally.

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
COPY package*.json ./

RUN npm install --only=production

This code block defines the WORKDIR parameter, which specifies the working directory in the Docker image. The COPY package*.json ./ line copies any package-related files into the Docker image. The RUN npm install --only=production line installs only the production dependencies listed in the package.json file, skipping development dependencies.

COPY . .

EXPOSE 5000
CMD ["npm", "start"]

This code block copies all the files into the Docker image, except for the files and directories listed in the .dockerignore file. The EXPOSE 5000 line specifies the port to expose for this Docker image. The CMD ["npm", "start"] line defines how to start this image: in this case, executing the start script specified in the project's package.json file. This CMD parameter is the default execution command.
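For context, here is a minimal sketch of what that scripts section of package.json might look like, assuming the server entry point is a file named server.js (your project's actual file may differ):

{
  "name": "learniac",
  "version": "0.0.1",
  "scripts": {
    "start": "node server.js"
  }
}

Now that you understand the Dockerfile, you can use it to build an image locally.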

Using the Docker build command

The docker build command builds a new image based on the directives defined in the Dockerfile. There are some naming conventions to keep in mind when you are building Docker images. Naming conventions are especially important if you plan on sharing the images.

Before we start building an image, I will take a moment to describe how to name one. Docker image names are composed of slash-separated name components, optionally followed by a colon and a tag. Because we will be pushing the image to Docker Hub, we need to prefix the image name with our Docker Hub username. In my case, that is ariv3ra/. I usually follow that with the name of the project, or a useful description of the image. The full name of this Docker image will be ariv3ra/learniac:0.0.1. The :0.0.1 is a version tag for the application, but you could also use it to describe other details about the image.
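Put another way, the full image name breaks down into these parts (the username and project name here are mine; substitute your own):

<docker-hub-username>/<repository-name>:<tag>
ariv3ra/learniac:0.0.1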

Once you have a good, descriptive name, you can build an image. The following command must be executed from within the root of the project repo (be sure to replace ariv3ra with your Docker Hub name):

docker build -t ariv3ra/learniac -t ariv3ra/learniac:0.0.1 .

Next, run this command to see a list of Docker images on your machine:

docker images

This was my output.

REPOSITORY TAG IMAGE ID CREATED SIZE
ariv3ra/learniac 0.0.1 ba7a22c461ee 24 seconds ago 994MB
ariv3ra/learniac latest ba7a22c461ee 24 seconds ago 994MB

The Docker push command

Now we are ready to push this image to Docker Hub and make it available publicly. Docker Hub requires authorization to access the service, so we need to use the login command to authenticate. Run this command to log in:

docker login

Enter your Docker Hub credentials in the prompts to authorize your account. You will need to log in only once per machine. Now you can push the image.
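If you ever need to log in from a script or CI job where no interactive prompt is available, docker login also accepts the password on standard input. A sketch, assuming you have exported your credentials as environment variables of your own choosing:

echo "$DOCKERHUB_PASSWORD" | docker login --username "$DOCKERHUB_USERNAME" --password-stdin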

Using the image name listed in your docker images command, run this command:

docker push ariv3ra/learniac

This was my output.

The push refers to repository [docker.io/ariv3ra/learniac]
2109cf96cc5e: Pushed 
94ce89a4d236: Pushed 
e16b71ca42ab: Pushed 
8271ac5bc1ac: Pushed 
a0dec5cb284e: Mounted from library/node 
03d91b28d371: Mounted from library/node 
4d8e964e233a: Mounted from library/node

You now have a Docker image available in Docker Hub and ready to be deployed to a GKE cluster. All the pieces are in place to deploy your application to a new Kubernetes cluster. The next step is to build the Kubernetes Deployment using Terraform.

Using Terraform to deploy Kubernetes

In part 1 of this series, we learned how to create a new Google Kubernetes Engine (GKE) cluster using Terraform. As I mentioned earlier, that cluster was not serving any applications or services because we did not deploy any to it. In this section I will describe what it takes to deploy a Kubernetes Deployment using Terraform.

Terraform has a Kubernetes Deployment resource that lets you define and execute a Kubernetes deployment to your GKE cluster. In part 1 we created a new GKE cluster using the Terraform code in the part01/iac_gke_cluster/ directory. In this post, we will use the part02/iac_gke_cluster/ and part02/iac_kubernetes_app/ directories. The iac_gke_cluster/ directory contains the same code we used in part 1; we will use it again here in conjunction with the iac_kubernetes_app/ directory.

Terraform Kubernetes provider

We previously used the Terraform Google Cloud Platform provider to create a new GKE cluster. That provider is specific to Google Cloud Platform resources, but the cluster it creates is still Kubernetes under the hood. Because GKE is essentially a Kubernetes cluster, we need the Terraform Kubernetes provider and its Kubernetes Deployment resource to configure and deploy our application to the GKE cluster.

Terraform code files

The part02/iac_kubernetes_app/ directory contains these files:

  • providers.tf
  • variables.tf
  • main.tf
  • deployments.tf
  • services.tf
  • output.tf

These files maintain all the code that we are using to define, create, and configure our application deployment to a Kubernetes cluster. Next, I will break down these files to give you a better understanding of what they do.

Breakdown: providers.tf

The providers.tf file is where we define the Terraform provider we will be using: in this case, the Terraform Kubernetes provider. Here is providers.tf:

provider "kubernetes" {

}

This code block defines the providers that will be used in this Terraform project. The { } blocks are empty because we will be handling the authentication requirements with a different process.
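For reference, the Terraform Kubernetes provider can also be configured with explicit connection details instead of an external kubeconfig. This is a sketch only, assuming you define these variables yourself and populate them from your cluster:

provider "kubernetes" {
  # Cluster API endpoint, auth token, and CA cert supplied via
  # hypothetical variables you would declare and populate yourself
  host                   = var.cluster_endpoint
  token                  = var.cluster_token
  cluster_ca_certificate = base64decode(var.cluster_ca_cert)
}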

Breakdown: variables.tf

This file should look familiar: it is similar to the variables.tf file from part 1, but specifies only the input variables that this Terraform Kubernetes project uses.

variable "cluster" {
  default = "cicd-workshops"
}
variable "app" {
  type = string
  description = "Name of application"
  default = "cicd-101"
}
variable "zone" {
  default = "us-east1-d"
}
variable "docker-image" {
  type = string
  description = "name of the docker image to deploy"
  default = "ariv3ra/learniac:latest"
}

The variables defined in this file are used throughout the Terraform project in code blocks in the project files. All of these variables have default values that can be overridden from the CLI when executing the code. These variables add much-needed flexibility to the Terraform code and allow valuable code to be reused. One thing to note here is that the variable "docker-image" default parameter is set to my Docker image name. Replace that value with the name of your Docker image.
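For example, to point the deployment at a different image without editing the file, you can override the default on the command line with the -var flag (the image name here is mine; substitute your own):

terraform plan -var="docker-image=ariv3ra/learniac:0.0.1"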

Breakdown: main.tf

The main.tf file starts with the terraform block, which specifies the type of Terraform backend. A "backend" in Terraform determines how state is loaded and how an operation such as apply is executed. This abstraction enables non-local state storage and remote execution, among other things. In this code block, we are using the remote backend. It uses Terraform Cloud and is connected to the iac_kubernetes_app workspace you created in the prerequisites section of the part 1 post.

terraform {
  required_version = "~>0.12"
  backend "remote" {
    organization = "datapunks"
    workspaces {
      name = "iac_kubernetes_app"
    }
  }
}

Breakdown: deployments.tf

Next up is a description of the syntax in the deployments.tf file. This file uses the Terraform Kubernetes Deployment resource to define, configure, and create all the Kubernetes resources required to release our application to the GKE cluster.

resource "kubernetes_deployment" "app" {
  metadata {
    name = var.app
    labels = {
      app = var.app
    }
  }
  spec {
    replicas = 3

    selector {
      match_labels = {
        app = var.app
      }
    }
    template {
      metadata {
        labels = {
          app = var.app
        }
      }
      spec {
        container {
          image = var.docker-image
          name = var.app
          port {
            name = "port-5000"
            container_port = 5000
          }
        }
      }
    }
  }
}

Time to review the code elements to gain a better understanding of what is going on.

resource "kubernetes_deployment" "app" {
  metadata {
    name = var.app
    labels = {
      app = var.app
    }
  }

This code block specifies the Terraform Kubernetes Deployment resource, which defines our deployment object for Kubernetes. The metadata block assigns the name and labels that Kubernetes uses to identify this deployment and tie it to the related Kubernetes services.

  spec {
    replicas = 3

    selector {
      match_labels = {
        app = var.app
      }
    }

    template {
      metadata {
        labels = {
          app = var.app
        }
      }

      spec {
        container {
          image = var.docker-image
          name = var.app
          port {
            name = "port-5000"
            container_port = 5000
          }
        }
      }
    }
  }

In the resource's spec{...} block, we specify that we want three Kubernetes pods running our application in the cluster. The selector{...} block defines label selectors, a core grouping primitive in Kubernetes that lets users select a set of objects.
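Once the deployment is live, you can see the same label selector at work from the command line; this assumes kubectl is configured against the cluster and the default app value of cicd-101:

kubectl get pods -l app=cicd-101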

The resource's template{...} block contains a spec{...} block, which in turn contains a container{...} block. Its parameters define and configure the container used in the deployment: the Docker image we want the pod to run, the container's name as it should appear in Kubernetes, and the port to expose on the container to allow ingress access to the running application. The values come from the variables.tf file, found in the same folder. The Terraform Kubernetes Deployment resource is capable of very robust configurations, and I encourage you and your team to experiment with some of its other properties to gain broader familiarity with the tooling.
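As one example of that experimentation, the container block can also carry resource limits and a liveness probe. This is a sketch under the provider 1.x block syntax, with illustrative values that are not part of this project's code:

container {
  image = var.docker-image
  name  = var.app

  port {
    name           = "port-5000"
    container_port = 5000
  }

  # Illustrative: cap the CPU and memory each pod may consume
  resources {
    limits {
      cpu    = "500m"
      memory = "256Mi"
    }
  }

  # Illustrative: restart the container if the app stops answering on /
  liveness_probe {
    http_get {
      path = "/"
      port = 5000
    }
    initial_delay_seconds = 10
  }
}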

Breakdown: services.tf

We have created a Terraform Kubernetes Deployment resource file and defined our Kubernetes deployment for this application. That leaves one detail to complete the deployment of our app. The application we are deploying is a basic website, and as with all websites, it needs to be accessible to be useful. At this point, our deployments.tf file specifies the directives for deploying a Kubernetes pod with our Docker image and the number of pods required. We are still missing a critical element: a Kubernetes service, an abstract way to expose an application running on a set of pods as a network service. With Kubernetes, you do not need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives pods their own IP addresses and a single DNS name for a set of pods, and can load-balance across them.

The services.tf file is where we define a Terraform Kubernetes service. It will wire up the Kubernetes elements to provide ingress access to our application running on pods in the cluster. Here is the services.tf file.

resource "kubernetes_service" "app" {
  metadata {
    name = var.app
  }
  spec {
    selector = {
      app = kubernetes_deployment.app.metadata.0.labels.app
    }
    port {
      port = 80
      target_port = 5000
    }
    type = "LoadBalancer"
  }
} 

At this point, it is helpful to describe the spec{...} block and the elements within it. The selector = { app ... } argument references the app value assigned in the labels property of the metadata block in the deployments.tf resource. This is an example of reusing a value already assigned in a related resource: it keeps the two definitions in sync and establishes a form of referential integrity for important data like this.

The port{...} block has two properties: port and target_port. These parameters define the external port that the service will listen on for requests to the application. In this example, it is port 80. The target_port is the internal port our pods are listening on, which is port 5000. This service will route all traffic from port 80 to port 5000.

The last element to review here is the type parameter, which specifies the type of service we are creating. Kubernetes offers several service types. In this example, we're using the LoadBalancer type, which exposes the service externally using a cloud provider's load balancer; the NodePort and ClusterIP services that the external load balancer routes to are created automatically. In this case, GCP will create and configure a load balancer that controls and routes traffic to our GKE cluster.

Breakdown: output.tf

Terraform uses output values to return values from a Terraform module after running terraform apply. A child module uses outputs to expose a subset of its resource attributes to a parent module, or to print certain values in the CLI output. In our output.tf, we use output values to read out the cluster name and the ingress IP address of the newly created LoadBalancer service. This address is where we can access our application hosted in pods on the GKE cluster.

  output "gke_cluster" {
    value = var.cluster
  }

  output "endpoint" {
    value = kubernetes_service.app.load_balancer_ingress.0.ip
  }

Terraform init part02/iac_gke_cluster

Now that you have a better understanding of the Terraform project and syntax, you can start provisioning the GKE cluster using Terraform. Change into the part02/iac_gke_cluster directory:

cd part02/iac_gke_cluster

While in part02/iac_gke_cluster, run this command:

terraform init

This was my output.

Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "google" (hashicorp/google) 3.31.0...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.google: version = "~> 3.31"

Terraform has been successfully initialized!

This is great! Now we can create the GKE cluster.

Terraform apply part02/iac_gke_cluster

Terraform has a command that lets you do a dry run to validate your Terraform code without actually executing anything. The command is terraform plan, and it maps out all the actions and changes that Terraform will execute against your existing infrastructure. In the terminal, run:

terraform plan

This was my output.

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
-----------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_container_cluster.primary will be created
  + resource "google_container_cluster" "primary" {
      + additional_zones = (known after apply)
      + cluster_ipv4_cidr = (known after apply)
      + default_max_pods_per_node = (known after apply)
      + enable_binary_authorization = false
  ...

Terraform will create new GCP resources for you based on the code in the main.tf file. Now you are ready to create the new infrastructure and deploy the application. Run this command in the terminal:

terraform apply

Terraform will prompt you to confirm your command. Type yes and press Enter.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

Terraform will build your new Google Kubernetes Engine cluster on GCP.

Note: It will take 3-5 minutes for the cluster build to complete. It is not an instant process because the backend systems are provisioning and bringing things online.

After my cluster was created, this was my output.

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

cluster = cicd-workshops
cluster_ca_certificate = <sensitive>
host = <sensitive>
password = <sensitive>
username = <sensitive>

The new GKE cluster has been created and the Outputs results are displayed. Notice that output values marked sensitive are masked in the results with <sensitive> tags. This masking protects sensitive data while keeping it available when needed.
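If you need to read one of the masked values back (for example, to debug cluster authentication), the terraform output command will print a named output on request; a usage sketch:

terraform output password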

Next, we will use the code in the part02/iac_kubernetes_app/ directory to create a Kubernetes deployment and accompanying LoadBalancer service.

Terraform init part02/iac_kubernetes_app/

We can now deploy our application to the GKE cluster using the code in the part02/iac_kubernetes_app/ directory. Change into it with this command:

cd part02/iac_kubernetes_app/

While in part02/iac_kubernetes_app/, run this command to initialize the Terraform project:

terraform init

This was my output.

Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "kubernetes" (hashicorp/kubernetes) 1.11.3...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.kubernetes: version = "~> 1.11"

Terraform has been successfully initialized!

GKE cluster credentials

After creating a google_container_cluster with Terraform, authentication to the cluster is required. You can use the Google Cloud CLI to configure cluster access, and generate a kubeconfig file. Execute this command:

gcloud container clusters get-credentials cicd-workshops --zone="us-east1-d"

With this command, gcloud generates a kubeconfig entry that uses gcloud as an authentication mechanism. The command uses cicd-workshops as the cluster name, which is also specified in variables.tf.
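To confirm the new kubeconfig entry works, you can list the cluster's nodes; this assumes kubectl is installed locally:

kubectl get nodes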

Terraform apply part02/iac_kubernetes_app/

Finally, we are ready to deploy our application to the GKE cluster using Terraform. Execute this command:

terraform plan

This was my output.

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
-----------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
Terraform will perform the following actions:
  # kubernetes_deployment.app will be created
  + resource "kubernetes_deployment" "app" {
      + id = (known after apply)
      + metadata {
          + generation = (known after apply)
          + labels = {
              + "app" = "cicd-101"
            }
          + name = "cicd-101"
          + namespace = "default"
          + resource_version = (known after apply)
          + self_link = (known after apply)
          + uid = (known after apply)
        }
  ...

Terraform will create new resources for you based on the code in the deployments.tf and services.tf files. Now you can create the new infrastructure and deploy the application. Run this command in the terminal:

terraform apply

Terraform will prompt you to confirm your command. Type yes and press Enter.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

Terraform will build your new Kubernetes application deployment and related LoadBalancer.

Note: It will take a few minutes for the deployment and its load balancer to come online. It is not an instant process because the backend systems are provisioning and bringing things online.

After the deployment completed, this was my output.

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

endpoint = 104.196.222.238
gke_cluster = cicd-workshops

The application has now been deployed. The endpoint value in the output is the public ingress IP address of the cluster's LoadBalancer; it is also the address where you can access the application. Open a web browser and navigate to the endpoint value. You will see a web page with the text “Welcome to CI/CD 101 using CircleCI!”.
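You can also verify from the terminal with curl, substituting your own endpoint value for mine:

curl http://104.196.222.238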

Using Terraform destroy

You have now verified that your Kubernetes deployment works and that the application deploys successfully to a GKE cluster. You can leave it up and running, but be aware that there is a cost associated with any assets running on Google Cloud Platform, and you will be liable for those costs. Google gives a generous $300 credit for its free-trial sign-up, but you could easily eat through that if you leave assets running.

Running the terraform destroy command will terminate any running assets that you created in this tutorial.

Run this command to destroy the GKE cluster.

terraform destroy

Remember that the above command only destroys the part02/iac_kubernetes_app/ deployment. You need to run the following to destroy all the resources created in this tutorial:

cd ../iac_gke_cluster/

terraform destroy

This will destroy the GKE cluster we created earlier.

Conclusion

Congratulations! You have completed part 2 of this series, and leveled up your experience by building and publishing a new Docker image, along with provisioning and deploying an application to a Kubernetes cluster using infrastructure as code and Terraform.

Continue to part 3 of the tutorial where you will learn how to automate all of this awesome knowledge into CI/CD pipelines using CircleCI.
