Infrastructure as Code: Build & Test GCP Load Balancer with Terraform


Kawi Neal

Posted on November 20, 2020


The primary goals of this post are to:

  • Describe the configuration, infrastructure build-out, and testing of a Google Cloud Platform (GCP) HTTP Load Balancer using HashiCorp Terraform, an open-source "Infrastructure as Code" (IaC) tool.

  • Provide a high-level overview of Terraform and highlight a number of key elements of the HashiCorp Configuration Language (HCL) used in configuring resources for deploying the HTTP Load Balancer.

Google Cloud (GCP) load balancing is implemented at the edge of the GCP network, distributing incoming network traffic across multiple virtual machine (VM) instances. This allows your network traffic to be distributed & load balanced across a single region or multiple regions close to your users.

Some of the features offered by GCP Load Balancing are:

  • Automatic intelligent autoscaling of your backends based on CPU utilization, load capacity & monitoring metrics.
  • Traffic routing to the closest virtual instance.
  • Global load balancing for when your applications are available across the world.
  • High availability & redundancy, which means that if a component (e.g. a virtual instance) fails, it is automatically restarted or replaced.

Prerequisites / Setup

This article assumes that you have some familiarity with cloud computing infrastructure & resources, Infrastructure as Code (IaC) and Terraform. In order to set up your environment & create components you will need a Google account, access to the Google Cloud Console, and rights within that account to create and administer projects via the console.

GCP SETUP

Setup needed within GCP:

  1. Create project
  2. Create a Service Account & associated key to allow Terraform to access the GCP project. We will grant the Service Account only the minimum permissions required for this effort.
  3. Create a storage bucket to store infrastructure state via Terraform.
  4. Add a public SSH key to GCP so that Terraform can connect to the stress test VM via remote SSH with the associated private key.

Create Project

Log into your Google account and use the URL below to create a project. For this effort we can name the project "http-loadbalancer".

https://console.cloud.google.com/projectcreate


Service Account

Before we start creating infrastructure resources via Terraform we need to create a Service Account via the Google Console. Service Accounts can be used by applications (e.g. Terraform) to make authorized API calls to create infrastructure resources. Service Accounts are not user accounts and do not have passwords associated with them. Instead, Service Accounts are associated with private/public RSA key pairs that are used for authentication to Google.
Select your project, click the IAM & Admin menu, choose the Service Accounts option and then click the + Create Service Account button.


Enter a name and description for the Service Account and click the CREATE button.


Give the newly created Service Account project permissions. Add the roles below (Compute Admin & Storage Admin) and click the CONTINUE button.


Next, generate the authentication key file (JSON) that Terraform will use to log into GCP. Click the menu in the Actions column as shown and select Create key.


Select JSON, click the CREATE button, and a JSON file is downloaded to your computer. Rename the file to "http-loadbalancer.json" and store it in a secure folder for use later in our Terraform project.
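If you prefer the command line, a roughly equivalent gcloud path is sketched below. It assumes the project ID is http-loadbalancer and that the account is named terraform-account (as it appears later in this walkthrough); your project ID may differ.

```
# create the service account (sketch - project ID assumed to be http-loadbalancer)
gcloud iam service-accounts create terraform-account --project=http-loadbalancer

# grant the two roles used in this effort
gcloud projects add-iam-policy-binding http-loadbalancer \
  --member="serviceAccount:terraform-account@http-loadbalancer.iam.gserviceaccount.com" \
  --role="roles/compute.admin"
gcloud projects add-iam-policy-binding http-loadbalancer \
  --member="serviceAccount:terraform-account@http-loadbalancer.iam.gserviceaccount.com" \
  --role="roles/storage.admin"

# download the JSON key used by Terraform
gcloud iam service-accounts keys create http-loadbalancer.json \
  --iam-account=terraform-account@http-loadbalancer.iam.gserviceaccount.com
```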


Storage Bucket

We will need to create a GCP storage bucket to support the remote state feature of Terraform backends. By default, Terraform stores infrastructure state locally in a file, terraform.tfstate. We could have used local state for this effort; however, we are using remote state (a GCP storage bucket) to highlight this Terraform feature. With remote state enabled, Terraform writes the state (infrastructure) data to a remote data store. Remote state can be shared between team members and, depending on the provider, allows for locking & versioning.


Click on the Storage menu in the Google Console or use the URL below to get to Storage, in order to create a storage bucket for the http-loadbalancer project.
https://console.cloud.google.com/storage/browser?project=http-loadbalancer


Click the CREATE BUCKET menu, enter http-loadbalancer for the bucket name and then click the CREATE button to create the storage bucket.
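Alternatively, a hedged gsutil equivalent is below; bucket names are globally unique, so yours may need to differ, and the location shown is illustrative.

```
# create the state bucket (sketch - location is illustrative)
gsutil mb -p http-loadbalancer -l us-central1 gs://http-loadbalancer/
```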


After creating the bucket, if you select the http-loadbalancer bucket and go to the Permissions tab, you should see the terraform-account service account as a member with an Admin role for this storage bucket.


In the Google Console, from the Navigation menu (top left) select Compute Engine to make sure the Compute Engine API is enabled for your project (http-loadbalancer).

SSH Key

If you don't already have an SSH key you can use the following link to generate it first. This will result in two files (e.g. id_rsa & id_rsa.pub). The contents of your xxxx.pub file need to be added to GCP, and the associated private key (id_rsa) file needs to be stored for use later with Terraform.
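If you need to generate a key pair from the command line, a minimal sketch is below; the file path and comment/user name are illustrative.

```
# generate an RSA key pair (sketch - adjust path and comment to your own user name)
ssh-keygen -t rsa -f ~/.ssh/id_rsa -C "your.username"
```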

Within GCP, go to Compute Engine → Metadata.



Select the SSH Keys tab and add the contents of your xxxx.pub (e.g. id_rsa.pub) file.




The user name associated with the key creation should be displayed in the user name column.

The above steps should leave you with two files (the service account JSON & the SSH private key) that will need to be placed into the Terraform project once it has been downloaded.

Getting Started with Terraform on GCP


Terraform Basics

The HTTP Load Balancer can be manually configured and provisioned via the Google Console. We, however, want to take advantage of the key benefits that IaC (e.g. Terraform) provides with respect to provisioning and maintaining cloud infrastructure. We are essentially applying the same principles used in developing software applications to infrastructure definition and provisioning. These benefits include:

  • Reuse & Efficiency - Reliably rebuild any piece of infrastructure, reducing risk. With IaC, once you have created code to set up one environment (e.g. DEV), it can easily be configured to replicate another environment (QA/PROD). Code once and reuse many times (e.g. Terraform modules).
  • Version Control & Collaboration - Provides a history of changes & traceability of infrastructure when your infrastructure is managed via code. Allows internal teams to share code and to apply the same policies to managing infrastructure as they would apply to code.
  • Validation - Allows for effective testing of components individually or of entire systems to support a specific workflow.
  • Documentation - Code/comments serve to document the infrastructure.

Terraform is an IaC tool for provisioning, updating and managing infrastructure via the HashiCorp Configuration Language (HCL). HCL is a declarative language where you specify (declare) the end state and Terraform executes a plan to build out that infrastructure. Using provider plug-ins, Terraform supports multiple cloud environments (AWS, Google, Azure & many more). The HCL language & core concepts apply to all providers and do not change per provider.

Introduction to Hashicorp Terraform

Below is an excellent overview of Terraform.


Introduction to HashiCorp Terraform with Armon Dadgar

The Terraform lifecycle/workflow consists of:

INIT - Terraform initializes the working directory containing the configuration files and installs all the required plug-ins that are referenced in configuration files.

PLAN - Stage where Terraform determines what needs to be created, updated, or destroyed to move from the real/current state of the infrastructure to the desired state. A plan run refreshes Terraform's view of the current infrastructure and outputs the set of changes required to reach the intended state.

APPLY - Terraform apply executes the generated plan to apply the changes needed to move infrastructure resources to the desired state.

DESTROY - Terraform destroy is used to remove/delete only Terraform managed resources.



Below are some key terms used in Terraform that we will touch upon as part of this article.

Provider: A plugin used to interact with the APIs of public cloud providers (GCP, AWS, Azure) in order to access & create Terraform-managed resources.

Variables: Also called input variables, these are key-value pairs used by Terraform modules to allow customization. Instead of using hard-coded strings in your resource definition/module, you can separate the values out into data files (vars) and reference them via variables.

State: It consists of cached information about the infrastructure managed by Terraform and the related configurations.

Modules: Reusable container for one or more resources that are used together. Modules have defined input variables which are used to create/update resources and allow for defined output variables that other resources or modules can use.

Data Source: Implemented by providers to return references to existing infrastructure resources so that Terraform can use them in a configuration.

Install Terraform

The Terraform distribution is a single binary file that you can download from the HashiCorp download page and install on your system. Find the right binary for your operating system (Windows, Mac, etc.). Extract the single binary named terraform from the zip file and add it to your system PATH.

After completing the installation, verify the install by running 'terraform -version' on the command line:

```
$ terraform -version
Terraform v0.13.4
```

You can get a list of available commands by running 'terraform' without any arguments:

```
$ terraform
Usage: terraform [-version] [-help] <command> [args]

The available commands for execution are listed below.
...
...
```

Install GIT & Clone project

If you don't already have Git installed, use this link to install Git locally in order to pull down the Terraform code for this effort. After installing Git, clone the project locally by running:

```
git clone https://github.com/KawiNeal/http-loadbalancer.git

cd http-loadbalancer/envs
```

Copy the generated service account JSON file and private key file (e.g. http-loadbalancer.json & id_rsa) into the envs folder of the project. In the envs folder, edit dev.env.tfvars to make sure that the variable assignments for gcp_auth_file and stress_vm_key match the names of those files.


../http-loadbalancer/envs/dev.env.tfvars




```
# GCP authentication file
gcp_auth_file = "http-loadbalancer.json"

# remote provisioning - private key file
stress_vm_key = "id_rsa"
```

Replace the user value with the user name you used when creating the SSH key.




```
# remote provisioning - user
user = "kawi.neal"  # <---- add your user name here
```

Edit dev.env.tf to make sure that bucket and credentials are set to the name of the bucket you created and the JSON key filename. Backend definitions in Terraform do not allow the use of variables.


```
backend "gcs" {
  bucket      = "http-loadbalancer"
  prefix      = "dev"
  credentials = "http-loadbalancer.json"
}
```

Project Structure

The diagram below shows the components that are used to build out and test your GCP HTTP Load Balancer. Having a clear picture of the components of your infrastructure & their relationships serves as a guide to defining the Terraform project code for provisioning your infrastructure.


This infrastructure can be broken down into these sets of resources :

  1. Compute Resources - Instance Group manager for creating/scaling compute resources.
  2. Network - Cloud Network and subnets
  3. Network Services - Network components for the cloud load balancing service.
  4. Stress Test VM - Virtual machine to test load balancer.

The Terraform folder structure has been defined to map to the resource grouping with each component within the group represented as a module.

   
```
├───compute
│   ├───auto_scaler
│   ├───instance_template
│   └───region_instancegroupmgr
├───envs
├───network
│   ├───firewall_rule
│   └───network_subnet
├───networkservices
│   └───load_balancer
│       ├───backend_service
│       ├───forwarding_rule
│       ├───health_check
│       ├───target_proxy
│       └───url_map
└───test
```

The envs folder is where the HTTP Load Balancer Terraform project is defined. It contains the provisioner, variables, remote backend, modules and data sources for this project. We will start with main.tf, which serves as the root module (starting point) for the Terraform configuration. The root module makes all the calls to child modules & data sources needed to create all the resources for the HTTP Load Balancer.




```
├───envs
│   │   dev.env.tf              ----> all variables needed for DEV environment
│   │   dev.env.tfvars          ----> variable assignments for DEV
│   │   http-loadbalancer.json  * copied into project (service account)
│   │   id_rsa                  * copied into project (SSH key)
│   │   main.tf                 ----> terraform, GCP provider & modules
```

The dev.env.tf file has all the variables associated with the DEV configuration, including the terraform block that defines the required version and cloud provider (GCP). I took the approach of isolating all variables for a specific environment into one file.

dev.env.tf

{% gist https://gist.github.com/KawiNeal/6f0dbe46045cfb444d66646bbe6c59fd.js %}

The terraform block sets which provider to retrieve from the Terraform Registry. Given that this is GCP infrastructure, we need to use the Google provider source ("hashicorp/google"). Within the terraform block, 'required_version' sets the version of Terraform to use when the configuration is initialized. 'required_version' takes a version constraint string, which ensures that a range of acceptable versions can be used. In our project we specify any version greater than or equal to 0.13.
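As a point of reference, a minimal sketch of such a terraform block is below; the exact contents live in dev.env.tf, so treat this as illustrative rather than a copy.

```
terraform {
  # any Terraform version >= 0.13 is acceptable
  required_version = ">= 0.13"

  # pull the Google provider from the Terraform Registry
  required_providers {
    google = {
      source = "hashicorp/google"
    }
  }
}
```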

The provider block configures the provider plug-in the configuration uses. Providers are essentially plug-ins that give the Terraform configuration access to a set of resource types for each provider. Note that multiple providers can be specified in one configuration. You can also define multiple configurations for the same provider and select which one to use within each module or per resource. Our provider block sets the version, project & GCP credentials to allow access to a specific project within GCP. The provider uses variables that are declared and defined in dev.env.tf and dev.env.tfvars.
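A minimal sketch of the provider block follows; gcp_auth_file is a variable from this project, while the project_id and region variable names (and the version constraint) are assumptions for illustration.

```
provider "google" {
  version     = "~> 3.46"               # illustrative version constraint
  project     = var.project_id          # assumed variable name
  region      = var.region              # assumed variable name
  credentials = file(var.gcp_auth_file) # service account JSON from dev.env.tfvars
}
```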

The backend block enables storing the infrastructure state in a remote data store. Remote backends are highly recommended when teams modify the same infrastructure or parts of the same infrastructure. The advantages are collaboration, security (sensitive info) and remote operations. The backend we have defined is GCP ("gcs"), using the storage bucket we created as part of the setup. Access to the storage bucket is obtained with a service account key (JSON). One thing to note: you cannot use variables within the definition of a backend; all input must be hard-coded. You can see this difference between the definition of the provider block and the backend block.


The variable blocks after the backend block declare the variables that need to be passed to all the modules.

dev.env.tfvars

{% gist https://gist.github.com/KawiNeal/73171999e47eb57246b65f438dbd4902.js %}

Hard-coding values in a Terraform configuration is not recommended. Variables ensure the configuration can be easily maintained and reused, and they also serve as parameters to Terraform modules. Variable declarations are defined in a variables TF file and their associated value assignments are put into a TFVARS file. The variables in these files represent the sets of inputs to the modules for this infrastructure.
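As an illustration of the pattern, a declaration/assignment pair might look like this; gcp_auth_file is one of this project's variables, while the description text is illustrative.

```
# dev.env.tf - declaration
variable "gcp_auth_file" {
  type        = string
  description = "Path to the GCP service account key (JSON) file"
}

# dev.env.tfvars - assignment
gcp_auth_file = "http-loadbalancer.json"
```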

For example, the VPC network and subnets input from dev.env.tfvars is defined as:

{% gist https://gist.github.com/KawiNeal/182b0a8c7b2acc88c3280d3dba362afd.js %}

The inputs (project_id, vpc, vpc_subnets) are passed to the network_subnet module within the network group folder (../network/network_subnet).

{% gist https://gist.github.com/KawiNeal/e7ddb523615e97ce01bd4e6f4f8f187d.js %}

Modules

The network_subnet module illustrates how a module can call/re-use other modules. The network_subnet module calls version 2.5.0 of an available & verified network module from the registry that creates the network/subnets using the required input parameters. The Terraform Registry hosts modules that can be used to create resources for multiple providers (AWS, GCP, Azure, etc.).

Module network_vpc

{% gist https://gist.github.com/KawiNeal/b6bcbc1969970a917fa6af39d68559aa.js %}
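For reference, a hedged sketch of what that registry call looks like; the argument names follow the public terraform-google-modules/network/google module, and the exact wiring of this project's variables is an assumption.

```
module "network_vpc" {
  source  = "terraform-google-modules/network/google"
  version = "2.5.0"

  project_id   = var.project_id   # GCP project to create the network in
  network_name = var.vpc          # VPC name from dev.env.tfvars
  subnets      = var.vpc_subnets  # list of subnet definitions from dev.env.tfvars
}
```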
Modules not only allow you to re-use configuration but also make it easier to organize your configuration into clear and logical components of your infrastructure. Proper definition and grouping of modules allows for easier navigation & understanding of larger cloud infrastructures that may span multiple cloud providers and contain hundreds of resources.

Similar to web services, modules should follow an "input-output" pattern. We want a clear contract that defines the inputs to the module and the outputs from the module. These reusable components (modules) can then be logically glued together to produce a functional infrastructure.

Example of two network services modules below:


```
...
│   ├───target_proxy
│   │     ├───input.tf
│   │     ├───output.tf
│   │     └───target_proxy.tf
│   ├───url_map
│   │     ├───input.tf
│   │     ├───output.tf
│   │     └───url_map.tf
...
```

Output values defined in output.tf are the return values of a Terraform module and can be used to pass resource attributes/references to the parent module. Other modules in the root module can use these attributes as input, creating an implicit dependency. In the example above, the target_proxy has a dependency on a URL map. The output from the url_map child module to the root module is the url_map id, which is passed as an input to the target_proxy child module.
Module url_map output


In the root/parent module, outputs from a child module are referenced as module.MODULE_NAME.OUTPUT_NAME. In the case of the url_map output, it can be referenced as module.url_map.id, as shown below from the root module in main.tf.
Module http_proxy input
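Putting those two pieces together, the pattern looks roughly like the sketch below; the resource name inside the url_map module and the target_proxy input variable name are assumptions.

```
# ../networkservices/load_balancer/url_map/output.tf (sketch)
output "id" {
  value = google_compute_url_map.url_map.id  # resource name assumed
}

# envs/main.tf (sketch) - root module wiring the url_map output into target_proxy
module "target_proxy" {
  source     = "../networkservices/load_balancer/target_proxy"
  url_map_id = module.url_map.id  # input variable name assumed
}
```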

By default, Terraform takes this implicit dependency into account when ordering resource creation. In the case of url_map and target_proxy above, the url_map will be created prior to the target_proxy. Terraform also allows for declaring explicit dependencies with the use of depends_on.

One method of testing the HTTP Load Balancer was to create a virtual instance (stress_test_vm) and drive traffic from that instance to the load balancer. The load balancer should forward traffic to the region that is closest to the virtual machine's region/location. The stress_test_vm is a stand-alone instance that has no implicit dependency on the resources/modules defined in the root module. It does, however, require that the resources associated with the HTTP Load Balancer be in place in order to forward traffic to them. The depends_on = [module.network_subnet, module.forwarding_rule] sets this explicit dependency. Before creating the test VM we want to ensure that the network/subnets and the externally exposed IP address are in place prior to generating traffic to the external IP.
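In main.tf this explicit dependency likely sits on the test module call, roughly as sketched below; the other inputs are trimmed, and module-level depends_on requires Terraform 0.13+, which this project already requires.

```
module "test" {
  source = "../test"
  # ...other inputs trimmed...

  # do not create the stress test VM until the network and frontend exist
  depends_on = [
    module.network_subnet,
    module.forwarding_rule
  ]
}
```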

Module - test

Load Balancer & Testing

Additional details for configuring the GCP Load Balancer can be found here. From a GCP perspective, per our architecture diagram, our configuration consists of:

  1. HTTP, health check, and SSH firewall rules

    To allow HTTP traffic to the backends, TCP traffic from the GCP health checker & remote SSH from Terraform to the stress test VM.

  2. Instance templates (2)

    Resources used to create VM instances and managed instance groups (MIGs). Templates define the machine type, boot disk, and other instance properties. A startup script is also executed on all instances created by the instance template to install Apache.

  3. Managed instance groups (2)

    Managed instance groups use instance templates to create a group of identical instances and offer autoscaling based on an autoscaling policy/metrics.

  4. HTTP Load Balancer (IPv4 & IPv6)

    The load balancer consists of a backend service to balance traffic between two backend managed instance groups (MIGs in the US & EU). It also includes HTTP health checks (port 80) to determine when instances will receive new connections (a sketch of such a health check resource follows this list). A forwarding rule (frontend) is created as part of the load balancer; frontends determine how traffic will be directed. For our configuration we default to HTTP port 80.

  5. Stress Test VM

    A VM is created to simulate load on the HTTP Load Balancer. As part of the VM startup, siege, an HTTP load-testing utility, is installed. Via Terraform's remote-exec we execute the siege utility to direct traffic to the HTTP Load Balancer.
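As an example of one of these building blocks, here is a hedged sketch of an HTTP health check resource on port 80; the check interval and timeout values are illustrative, while the resource and health check names match what appears later in the apply output.

```
resource "google_compute_health_check" "healthcheck" {
  name               = "http-lb-health-check"
  check_interval_sec = 5   # illustrative
  timeout_sec        = 5   # illustrative

  http_health_check {
    port = 80
  }
}
```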

Terraform data sources were used in the test module to retrieve the external IP address (frontend) of the load balancer so that siege can route traffic to it. Data sources allow Terraform to retrieve existing resource configuration information. One item to note here is that a Terraform data source can query any resource within the provider; it does not have to be a resource managed/created by Terraform.

Module - "test" (stress_test_vm.tf)




```
# get forward rule to obtain frontend IP
data "google_compute_global_forwarding_rule" "http" {
  name = var.forward_rule_name
}
```

```
inline = [
  "sleep 280",
  "siege -c 255 -t580 http://${data.google_compute_global_forwarding_rule.http.ip_address} &",
  "sleep 600"
]
```

The inline block contains the command line for executing the siege utility on the stress_test_vm. There is a pause (280 seconds) prior to running siege to allow the Load Balancer frontend IP to become available. The command generates 255 concurrent user requests, at a rate of 1-3 seconds between each request, for 580 seconds. One notable & interesting issue I ran into was that the siege command would not keep running for the full time period. After the SSH connection was made and the command line executed, the session would immediately end and terminate the siege process. The workaround was to run siege as a background process and add a sleep to delay closing the SSH session, so siege was not terminated prior to the needed execution time.
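For context, that inline list sits inside a remote-exec provisioner on the stress_test_vm. A hedged sketch of the wrapper is below; the connection attributes, in particular the nat_ip reference, are assumptions, while user and stress_vm_key are the variables set in dev.env.tfvars.

```
provisioner "remote-exec" {
  connection {
    type        = "ssh"
    host        = self.network_interface[0].access_config[0].nat_ip  # assumed external IP reference
    user        = var.user
    private_key = file(var.stress_vm_key)
  }

  # run siege in the background and hold the SSH session open (see workaround above)
  inline = [
    "sleep 280",
    "siege -c 255 -t580 http://${data.google_compute_global_forwarding_rule.http.ip_address} &",
    "sleep 600"
  ]
}
```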


Although available, provisioning an instance with Terraform over SSH (remote-exec) is not recommended by HashiCorp. The issue faced with the siege process seems to highlight their recommendation; for this effort it was convenient for testing purposes. HashiCorp provides a separate tool, HashiCorp Packer, that automates the creation of VM instance images.

INIT

We can now proceed through this Terraform project's lifecycle: INIT, PLAN, APPLY and eventually DESTROY when done.

Run 'terraform init'




```
C:\http-loadbalancer\envs>terraform init
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Using previously-installed hashicorp/google v3.46.0

Terraform has been successfully initialized!
```

PLAN

Terraform PLAN needs to be executed to set the variables from the dev.env.tfvars file. The terminal output will display all resources that will be generated and provide the count of resources at the end of the plan output.

Run 'terraform plan -var-file dev.env.tfvars'


```
C:\http-loadbalancer\envs>terraform plan -var-file dev.env.tfvars

..
..
Plan: 22 to add, 0 to change, 0 to destroy.
```

APPLY

The Terraform PLAN will indicate that 22 GCP resources will be created. Next we will run APPLY to execute the generated plan in order to move our infrastructure to the desired state. Note that when we run APPLY the stress_test_vm will be provisioned after all other resources. After a short period of time (1-2 minutes) web traffic will be directed to the load balancer.

Run 'terraform apply -var-file dev.env.tfvars -auto-approve'




```
C:\http-loadbalancer\envs>terraform apply -var-file dev.env.tfvars -auto-approve
module.network_subnet.module.network_vpc.module.vpc.google_compute_network.network: Creating...
module.healthcheck.google_compute_health_check.healthcheck: Creating...
module.healthcheck.google_compute_health_check.healthcheck: Creation complete after 3s [id=projects/http-loadbalancer/global/healthChecks/http-lb-health-check]
module.network_subnet.module.network_vpc.module.vpc.google_compute_network.network: Still creating... [10s elapsed]
module.network_subnet.module.network_vpc.module.vpc.google_compute_network.network: Creation complete after 15s [id=projects/http-loadbalancer/global/networks/http-lb]
..
..
```

After Terraform has created the GCP resources and the remote-exec process is running, you can use the GCP console to view traffic flowing to the backends. Given that the stress_test_vm is in a region closer to Europe, the majority of the traffic will be routed to the europe-west managed instance group. The managed instance group will create additional VMs to handle the uptick in web traffic to the load balancer.

From the GCP console navigation menu select:

Network Services --> Load Balancing --> Backend (tab), then select "http-lb-backend" from the list.


To view the instances created to handle traffic from the stress_test_vm, from the GCP console navigation menu select:

Compute Engine --> VM instances

After the remote-exec process completes, the number of instances will scale back down via the instance group manager. If you need to run the test again, the stress_test_vm can be marked as tainted and APPLY re-executed, which will destroy only the stress_test_vm and then re-create it.

Run 'terraform taint module.test.google_compute_instance.stress_test_vm' and then

'terraform apply -var-file dev.env.tfvars'


```
C:\http-loadbalancer\envs>terraform taint module.test.google_compute_instance.stress_test_vm
Resource instance module.test.google_compute_instance.stress_test_vm has been marked as tainted.

C:\http-loadbalancer\envs>terraform apply -var-file dev.env.tfvars
..
..
Plan: 1 to add, 0 to change, 1 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
```

DESTROY

Terraform DESTROY needs to be executed to clean up all resources. To be able to check that destroy will remove 22 resources, run DESTROY without the -auto-approve parameter. You will then be prompted to answer 'yes' to accept removal of all resources.

Run 'terraform destroy -var-file dev.env.tfvars'




```
C:\http-loadbalancer\envs>terraform destroy -var-file dev.env.tfvars
..
..
Plan: 0 to add, 0 to change, 22 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value:
```

Last but not least, the resources (storage bucket, service account) created as part of the project setup will need to be deleted when you are done.

Future Posts/Topics

For future posts, I will build upon this project to add support for additional Terraform features to address the topics below:

Terraform Workspaces - The current project implementation maps one set of infrastructure resources to a DEV environment. With Terraform workspaces you can manage multiple collections of infrastructure resources, which allows you to segregate your environments (DEV, QA, PROD) while using the same infrastructure resource definitions.
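As a hedged preview, the basic workspace workflow boils down to a few CLI commands (the workspace names are illustrative):

```
terraform workspace new dev      # create & switch to a 'dev' workspace
terraform workspace new qa       # create & switch to a 'qa' workspace
terraform workspace select dev   # switch back to 'dev'
terraform workspace list         # list available workspaces
```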

Provisioning - The stress_test_vm used in this effort was provisioned with Terraform over SSH (remote-exec). As previously stated, this is not recommended by HashiCorp. HashiCorp provides a separate tool, HashiCorp Packer, to handle building pre-provisioned images.

And that's all folks...hope this post provided some insight into Terraform.
