Automatically Provision AWS Resources with Terraform


Tambe Salome

Posted on September 20, 2023


Infrastructure as Code (IaC) provides a way of managing and provisioning infrastructure through code instead of manual processes. This improves infrastructure consistency and speeds up deployments, since the same code can be used to provision multiple deployment environments.

Terraform is an infrastructure as code tool that lets you build, change, and version infrastructure safely and efficiently. It allows you to manage and build infrastructure for multiple cloud platforms.
Terraform plugins, called providers, let you interact with cloud platforms and other services through their APIs.
A provider is a plugin that Terraform uses to create and manage resources. You can browse the full list of available providers in the Terraform Registry.

Overview

In this tutorial, we will show how to automatically deploy an Amazon RDS MySQL instance, an ElastiCache Redis cluster, and a Lambda function, all in the same VPC, with security group rules that allow the Lambda function to interact with the Redis cluster and the MySQL database. The whole deployment is done with Terraform.

Clone the Sample Repository

git clone git@github.com:giftcup/terraform.git

Then move into the lambda-serverless directory to view the sample code:

cd lambda-serverless

Prerequisites

This tutorial assumes you have an AWS account and credentials configured so the AWS provider can authenticate (for example, via the AWS CLI or environment variables). Verify that you have Terraform installed by running the following command in your terminal:

terraform version

Using the AWS Provider

The AWS provider allows you to connect to and interact with the services and resources offered by AWS.
In our configuration, we specify the provider and its version, the region, and a data source for the availability zones where our resources will be deployed:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Configure AWS Provider
provider "aws" {
  region = "us-west-2"
}

data "aws_availability_zones" "available" {}

Create a VPC (Virtual Private Cloud)

Using the terraform-aws-vpc module, we create the VPC in which all the other resources will reside:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.1.2"

  name                 = "second-vpc"
  cidr                 = "10.10.0.0/16"
  azs                  = data.aws_availability_zones.available.names
  public_subnets       = ["10.10.3.0/24", "10.10.4.0/24", "10.10.5.0/24"]
  enable_dns_hostnames = true
  enable_dns_support   = true
}

Add Security Group Rules

The security group rules here should enable us to connect to the ElastiCache Redis cluster and the RDS MySQL database from our Lambda function, all of which we will create later on.

You must specify the from_port and the to_port in both the ingress and egress rules:

resource "aws_security_group" "second-sg" {
  name   = "second-sg"
  vpc_id = module.vpc.vpc_id

  ingress {
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = ["10.10.0.0/16"]
  }

  ingress {
    from_port   = 6379
    to_port     = 6379
    protocol    = "tcp"
    cidr_blocks = ["10.10.0.0/16"]
  }

  egress {
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = ["10.10.0.0/16"]
  }

  egress {
    from_port   = 6379
    to_port     = 6379
    protocol    = "tcp"
    cidr_blocks = ["10.10.0.0/16"]
  }
}

Configure the RDS MySQL Database

First, define the subnet group that you want your RDS instance to be in:

resource "aws_db_subnet_group" "second-subnet" {
  name       = "second"
  subnet_ids = module.vpc.public_subnets

  tags = {
    Name = "Second"
  }
}

The subnets specified here are the subnets that belong to the VPC above.

The database instance is created as shown below:

resource "aws_db_instance" "firsTerraDB" {
  identifier             = "second-terra-db"
  allocated_storage      = 10
  db_name                = var.db_name
  engine                 = "mysql"
  engine_version         = "8.0"
  instance_class         = "db.t2.micro"
  username               = var.db_username
  password               = var.db_password
  parameter_group_name   = "default.mysql8.0"
  db_subnet_group_name   = aws_db_subnet_group.second-subnet.name
  vpc_security_group_ids = [aws_security_group.second-sg.id]
  publicly_accessible    = true
  skip_final_snapshot    = true
}

The publicly_accessible argument is set to true only for the sake of this tutorial. You would not want to set it to true for a database in a production environment.

Set skip_final_snapshot to true if you do not want a final snapshot of the instance to be taken when it is deleted.
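For contrast, below is a minimal sketch of what a more production-leaning version of the same instance could look like. The resource name, instance class, and snapshot identifier are illustrative assumptions, not part of this tutorial's configuration:

resource "aws_db_instance" "production_example" {
  identifier             = "production-terra-db"
  allocated_storage      = 10
  db_name                = var.db_name
  engine                 = "mysql"
  engine_version         = "8.0"
  instance_class         = "db.t3.micro"
  username               = var.db_username
  password               = var.db_password
  db_subnet_group_name   = aws_db_subnet_group.second-subnet.name
  vpc_security_group_ids = [aws_security_group.second-sg.id]

  publicly_accessible       = false                        # reachable only from inside the VPC
  storage_encrypted         = true                         # encrypt storage at rest
  deletion_protection       = true                         # block accidental destroys until disabled
  skip_final_snapshot       = false                        # take a snapshot before deletion
  final_snapshot_identifier = "production-terra-db-final"  # name of that final snapshot
}

In a real production setup you would also place the instance in private subnets rather than the public subnets used in this tutorial.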

Managing Sensitive Variables

Sensitive values like the database name, username, and password should not be written in plain text in the configuration. These values should first be declared as input variables in the variables.tf file:

variable "db_name" {
  description = "Database name"
  type        = string
  sensitive   = true
}

variable "db_username" {
  description = "Master Username"
  type        = string
  sensitive   = true
}

variable "db_password" {
  description = "Master password"
  type        = string
  sensitive   = true
}

Variables declared as sensitive are redacted from Terraform's output when commands like apply, plan, or destroy are executed. However, note that these values still appear in plain text in the Terraform state file, so make sure the state file is stored securely.
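One common way to keep the state file out of your project directory is to store it in a remote backend. Below is a minimal sketch assuming an S3 backend; the bucket name and key are placeholders you would replace with your own:

terraform {
  backend "s3" {
    bucket  = "my-terraform-state-bucket"           # pre-existing bucket that holds the state
    key     = "lambda-serverless/terraform.tfstate" # path of the state object inside the bucket
    region  = "us-west-2"
    encrypt = true                                  # encrypt the state object at rest
  }
}

With a remote backend, the state (and the sensitive values it contains) lives in the bucket rather than in plain files on every machine that runs terraform apply.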

Because the variables have no default values, each time you run terraform apply you will be prompted to enter each value, which can be time consuming and error prone. To solve this, Terraform supports setting values in a variable definitions (.tfvars) file.

Create a new file called secrets.tfvars and assign values to the variables that were declared earlier:

db_name     = "databaseName"
db_username = "username"
db_password = "insecurepassword1"


Now the variable values can be passed to terraform apply:

terraform apply -var-file=secrets.tfvars

Since these values are sensitive, share the tfvars file only with the appropriate people, and make sure you do not check it into version control.

Configure the ElastiCache Redis Cluster

First, create the ElastiCache subnet group. Here, we reuse the subnets that belong to the vpc module:

resource "aws_elasticache_subnet_group" "second-cluster-subnet" {
  name       = "second-cluster-subnet"
  subnet_ids = module.vpc.public_subnets
}

The Redis cluster is then created as shown:

resource "aws_elasticache_cluster" "second-cluster" {
  cluster_id           = "second-cluster-id"
  engine               = "redis"
  node_type            = "cache.t4g.micro"
  num_cache_nodes      = 1
  parameter_group_name = "default.redis5.0"
  engine_version       = "5.0.6"
  port                 = 6379
  security_group_ids   = [aws_security_group.second-sg.id]
  subnet_group_name    = aws_elasticache_subnet_group.second-cluster-subnet.name
}

The security group that was created earlier is used here.

Configure the Lambda Function

This is done last so that some output from the above configurations can be used as input to the Lambda function.

First, create the IAM role that the Lambda function will assume. The role needs VPC access permissions (attached below via the AWSLambdaVPCAccessExecutionRole managed policy) so that the function can create the network interfaces it uses to connect to the VPC created earlier.

data "aws_iam_policy_document" "assume_role" {
  statement {
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }

    actions = ["sts:AssumeRole"]
  }
}

resource "aws_iam_role" "iam_for_lambda" {
  name               = "iam_for_lambda"
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
}

resource "aws_iam_role_policy_attachment" "AWSLambdaVPCAccessExecutionRole" {
  role       = aws_iam_role.iam_for_lambda.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
}

Since Lambda does not install Python packages for you, a Lambda layer is created that bundles all the modules used in our Lambda function:

resource "aws_lambda_layer_version" "function_packages" {
  filename            = "./code/packages.zip"
  layer_name          = "function_packages"
  compatible_runtimes = ["python3.9"]
}

The filename should be the relative path to the zipped packages folder, and compatible_runtimes is the list of runtimes the layer can be used with. Lambda layers are also useful because the packaged dependencies can be reused across functions.

Next, create an archive for the Lambda function itself:

data "archive_file" "lambda_function" {
  type        = "zip"
  source_file = "./code/lambda_function.py"
  output_path = "deployment_payload.zip"
}
  • type can be a zip file or an s3 bucket. For larger file sizes, it is advisable to use s3 buckets.
  • source_file: path to the source code file
  • output_path: the file name were you want the zip of the function to be stored in. It does not have to be an existing file as it is created by terraform.

Finally, the configuration for the Lambda function is as follows:

resource "aws_lambda_function" "first_lambda" {
  filename      = "deployment_payload.zip"
  function_name = "first_function"
  role          = aws_iam_role.iam_for_lambda.arn
  handler       = "lambda_function.lambda_handler"
  layers        = [aws_lambda_layer_version.function_packages.arn]
  timeout       = 150

  source_code_hash = data.archive_file.lambda_function.output_base64sha256

  runtime = "python3.9"

  vpc_config {
    subnet_ids         = module.vpc.public_subnets
    security_group_ids = [aws_security_group.second-sg.id]
  }

  environment {
    variables = {
      MYSQL_HOST     = aws_db_instance.firsTerraDB.address
      MYSQL_PORT     = aws_db_instance.firsTerraDB.port
      MYSQL_USER     = aws_db_instance.firsTerraDB.username
      MYSQL_PASSWORD = aws_db_instance.firsTerraDB.password
      MYSQL_DB       = aws_db_instance.firsTerraDB.db_name

      REDIS_URL  = "${aws_elasticache_cluster.second-cluster.cache_nodes.0.address}"
      REDIS_PORT = "${aws_elasticache_cluster.second-cluster.cache_nodes.0.port}"
    }
  }
}

If your Lambda function uses environment variables, they can be passed directly to the resource within the environment block, as shown above.
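As noted earlier, larger deployment packages are better uploaded to an S3 bucket and referenced from there instead of being passed inline through filename. Below is a minimal sketch of that variant; the bucket name, object key, and resource names are placeholders, and the rest of the function configuration (VPC settings, layers, environment variables) would stay the same as above:

# Upload the zipped function to an artifact bucket
resource "aws_s3_object" "lambda_package" {
  bucket = "my-lambda-artifacts"
  key    = "lambda/deployment_payload.zip"
  source = data.archive_file.lambda_function.output_path
  etag   = data.archive_file.lambda_function.output_md5   # re-upload when the archive changes
}

# Point the function at the object in S3 instead of a local file
resource "aws_lambda_function" "from_s3_example" {
  function_name    = "first_function_from_s3"
  role             = aws_iam_role.iam_for_lambda.arn
  handler          = "lambda_function.lambda_handler"
  runtime          = "python3.9"
  s3_bucket        = aws_s3_object.lambda_package.bucket
  s3_key           = aws_s3_object.lambda_package.key
  source_code_hash = data.archive_file.lambda_function.output_base64sha256
}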

Deploying the Resources

To provision the RDS instance, the Redis cluster, the Lambda function, and the supporting resources, first initialize the Terraform configuration:

terraform init

Next, apply the configuration:

terraform apply -var-file=secrets.tfvars

Terraform will now provision your resources. This may take some time to complete; when it finishes, you will see a message like:

Apply complete

You can visit the AWS Management Console to view the various resources and test your Lambda function to confirm that the connections to the Redis cluster and the MySQL instance were established.

Output Variables

Another way to view our configuration details is to work with output variables.

In an outputs.tf file, define the values of the resources you want Terraform to show you after the configuration is applied:

output "redis_host" {
  description = "Redis Host"
  value       = aws_elasticache_cluster.second-cluster.cache_nodes.0.address
  sensitive   = false
}

output "redis_port" {
  description = "Redis port"
  value       = aws_elasticache_cluster.second-cluster.cache_nodes.0.port
  sensitive   = false
}

output "mysql_host" {
  description = "mysql host"
  value       = aws_db_instance.firsTerraDB.address
  sensitive   = false
}

output "mysql_port" {
  description = "mysql port"
  value       = aws_db_instance.firsTerraDB.port
  sensitive   = false
}

output "elasticache-sg" {
  description = "Elasticache security group name"
  value       = aws_elasticache_cluster.second-cluster.security_group_ids
}

output "database-sg" {
  description = "database sg"
  value       = aws_db_instance.firsTerraDB.vpc_security_group_ids
}

When terraform apply is executed, all these values will be displayed in the terminal. You can also retrieve them later by running terraform output.

Clean Up Infrastructure

In this tutorial, you have provisioned an RDS instance, a Redis cluster and a Lambda function using Terraform.
Clean up the infrastructure you created with:

terraform destroy -var-file=secrets.tfvars

Thank you for reading through till the end 😊. I hope it helped you in one way or the other to understand and use a particular concept. If you enjoyed reading this, do leave a like ❀️ and a comment stating how I can improve πŸ’‘
