Revathi Joshi
Posted on January 24, 2023
This is a continuation of the first article, Query data sources using state file in Terraform - 1, where we configured the VPC infrastructure.
In this article, I am going to deploy application infrastructure defined by a separate Terraform configuration and use the terraform_remote_state data source to query information about the VPC.
Finally, you will use the aws_ami data source to configure the correct AMI for the current region.
Please visit my GitHub Repository for Terraform articles on various topics, which I update on a constant basis.
Let’s get started!
Objectives:
1. Create infrastructure for the application block
2. Change to the application directory and run terraform init to initialize Terraform
3. Configure Terraform remote state
4. Scale EC2 instances
5. Configure region-specific AMIs
6. Configure EC2 subnet and security groups
7. Run terraform apply to apply the configuration
Pre-requisites:
- AWS user account with admin access, not a root account.
- Cloud9 IDE with AWS CLI.
Resources Used:
Terraform documentation for the aws_ami data source for pulling in an AMI ID.
Steps for implementation to this project:
1. Create infrastructure for application block
Let’s create the following organizational structure as shown below.
- Create a directory: terraform-data-sources-app
- Create 4 files: terraform.tf, main.tf, variables.tf, outputs.tf
- Create a terraform.tf file.
# terraform-data-sources-app/terraform.tf
# PROVIDERS BLOCK
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.23"
    }
  }
  required_version = ">= 1.2.0"
}
- Create a main.tf file.
# terraform-data-sources-app/main.tf
# Application BLOCK
provider "aws" {
  region = "us-east-1"
}

resource "random_string" "lb_id" {
  length  = 3
  special = false
}

module "elb_http" {
  source  = "terraform-aws-modules/elb/aws"
  version = "4.0.0"

  # Ensure load balancer name is unique
  name = "lb-${random_string.lb_id.result}-data-sources"

  internal = false

  security_groups = []
  subnets         = []

  number_of_instances = length(aws_instance.app)
  instances           = aws_instance.app[*].id

  listener = [{
    instance_port     = "80"
    instance_protocol = "HTTP"
    lb_port           = "80"
    lb_protocol       = "HTTP"
  }]

  health_check = {
    target              = "HTTP:80/index.html"
    interval            = 10
    healthy_threshold   = 3
    unhealthy_threshold = 10
    timeout             = 5
  }
}

resource "aws_instance" "app" {
  ami           = "ami-0b5eea76982371e91"
  instance_type = var.instance_type

  subnet_id              = ""
  vpc_security_group_ids = []

  user_data = <<-EOF
    #!/bin/bash
    sudo yum update -y
    sudo yum install httpd -y
    sudo systemctl enable httpd
    sudo systemctl start httpd
    echo "<html><body><div>Welcome to Data Sources Infrastructure!</div></body></html>" > /var/www/html/index.html
    EOF
}
- Create a variables.tf file.
# terraform-data-sources-app/variables.tf
# VARIABLES BLOCK
variable "instances_per_subnet" {
  description = "Number of EC2 instances in each private subnet"
  type        = number
  default     = 2
}

variable "instance_type" {
  description = "Type of EC2 instance to use"
  type        = string
  default     = "t2.micro"
}
- Create an outputs.tf file.
# terraform-data-sources-app/outputs.tf
# OUTPUTS BLOCK
output "lb_url" {
  description = "URL of load balancer"
  value       = "http://${module.elb_http.elb_dns_name}/"
}

output "web_instance_count" {
  description = "Number of EC2 instances"
  value       = length(aws_instance.app)
}
2. Change to the Application directory and run terraform init
cd ../terraform-data-sources-app
- Run terraform init to initialize Terraform.
3. Configure Terraform remote state
Like the VPC block, this configuration includes hard-coded values for the us-east-1 region. You can use the terraform_remote_state data source to use another Terraform workspace's output data.
Add a terraform_remote_state data source to the main.tf file inside the terraform-data-sources-app directory. This remote state block uses the local backend to load state data from the path in the config section.
# terraform-data-sources-app/main.tf
data "terraform_remote_state" "vpc" {
  backend = "local"

  config = {
    path = "../terraform-data-sources-vpc/terraform.tfstate"
  }
}
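For this lookup to work, the VPC configuration from the first article must export the outputs referenced later in this article. As a reminder, its outputs.tf should contain something like the sketch below. The exact value expressions are assumptions on my part and depend on how your VPC configuration is written; only the output names must match.

```hcl
# terraform-data-sources-vpc/outputs.tf (sketch -- value expressions are illustrative)
output "aws_region" {
  description = "AWS region the VPC is deployed in"
  value       = var.aws_region
}

output "public_subnet_ids" {
  description = "Public subnet IDs for the load balancer"
  value       = module.vpc.public_subnets
}

output "private_subnet_ids" {
  description = "Private subnet IDs for the EC2 instances"
  value       = module.vpc.private_subnets
}
```

Any output you reference as data.terraform_remote_state.vpc.outputs.NAME must be declared as a root-level output in the VPC workspace, or Terraform will report an unsupported attribute error.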
- Now, update your aws provider configuration in main.tf to use the same region as the VPC configuration instead of a hardcoded region.
# terraform-data-sources-app/main.tf
provider "aws" {
  # region = "us-east-1"
  region = data.terraform_remote_state.vpc.outputs.aws_region
}
- The VPC configuration also included outputs for subnet and security group IDs. Configure the security_groups and subnets arguments for the elb_http module with those values.
# terraform-data-sources-app/main.tf
module "elb_http" {
  ###...
  /*
  security_groups = []
  subnets         = []
  */
  security_groups = data.terraform_remote_state.vpc.outputs.lb_security_group_ids
  subnets         = data.terraform_remote_state.vpc.outputs.public_subnet_ids
  ###...
}
4. Scale EC2 instances
You can use values from data sources just like any other Terraform values, including by passing them to functions.
The configuration in main.tf only uses a single EC2 instance. Update the configuration to use the instances_per_subnet variable to provision multiple EC2 instances per subnet.
# terraform-data-sources-app/main.tf
resource "aws_instance" "app" {
  ###...
  count = var.instances_per_subnet * length(data.terraform_remote_state.vpc.outputs.private_subnet_ids)
  ami   = "ami-0b5eea76982371e91"
  ###...
}
- Now when you apply this configuration, Terraform will provision var.instances_per_subnet instances for each private subnet configured in your VPC workspace.
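To make the arithmetic concrete, here is how the count expression resolves with the default variable value and a VPC that exposes two private subnets (the subnet count is an assumption for illustration):

```hcl
# count = var.instances_per_subnet * length(private_subnet_ids)
#       = 2 * 2
#       = 4
# Terraform would create aws_instance.app[0] through aws_instance.app[3].
```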
5. Configure region-specific AMIs
The AWS instance configuration also uses a hard-coded AMI ID, which is only valid for the us-east-1 region. Use an aws_ami data source to load the correct AMI ID for the current region. Add the following to main.tf.
# terraform-data-sources-app/main.tf
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}
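If you want to check which AMI the data source resolves to in your region, you can optionally expose it as an output. This output is my own addition, not part of the original configuration:

```hcl
# terraform-data-sources-app/outputs.tf (optional addition)
output "resolved_ami_id" {
  description = "AMI ID selected by the aws_ami data source for the current region"
  value       = data.aws_ami.amazon_linux.id
}
```

After an apply, terraform output resolved_ami_id shows the region-specific AMI ID that replaced the hard-coded value.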
- Replace the hard-coded AMI ID with the one loaded from the new data source.
# terraform-data-sources-app/main.tf
resource "aws_instance" "app" {
  count = var.instances_per_subnet * length(data.terraform_remote_state.vpc.outputs.private_subnet_ids)
  /*
  ami = "ami-0b5eea76982371e91"
  */
  ami = data.aws_ami.amazon_linux.id
  ###...
}
6. Configure EC2 subnet and security groups
- Finally, update the EC2 instance configuration to use the subnet and security group configuration from the VPC block.
# terraform-data-sources-app/main.tf
resource "aws_instance" "app" {
  ###...
  /*
  subnet_id              = ""
  vpc_security_group_ids = []
  */
  subnet_id              = data.terraform_remote_state.vpc.outputs.private_subnet_ids[count.index % length(data.terraform_remote_state.vpc.outputs.private_subnet_ids)]
  vpc_security_group_ids = data.terraform_remote_state.vpc.outputs.app_security_group_ids
  ###...
}
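The modulo expression round-robins the instances across the private subnets. Assuming four instances and two private subnets (illustrative numbers, matching the defaults above), the index arithmetic works out as:

```hcl
# count.index          : 0  1  2  3
# count.index % 2      : 0  1  0  1
# => instances 0 and 2 land in private_subnet_ids[0],
#    instances 1 and 3 in private_subnet_ids[1]
```

This keeps the instance count evenly distributed no matter how many subnets the VPC workspace exposes.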
7. Run terraform apply to apply the application infrastructure
- Run terraform apply to apply the configuration and type yes when prompted.
After a few minutes, the load balancer health checks will pass and it will return a response.
- Wait 4-5 minutes for the load balancer to become active, then run:
curl $(terraform output -raw lb_url)
- Copy and paste the lb_url into a browser.
http://lb-Dju-data-sources-551760788.us-west-1.elb.amazonaws.com/
- You will see this success message.
Cleanup
You must destroy the application infrastructure before the VPC infrastructure.
Since the resources in the application infrastructure depend on those in the VPC infrastructure, the AWS API will return an error if you destroy the VPC first.
- Destroy the application infrastructure, and enter yes when prompted.
terraform destroy
- Now, change to the VPC directory.
cd ../terraform-data-sources-vpc
- Destroy the VPC infrastructure as well, entering yes when prompted.
terraform destroy -var aws_region=us-west-1
What we have done so far
We have successfully demonstrated how to use data sources to make your configuration more dynamic.
We deployed two separate configurations for the network (VPC) and application resources and used the terraform_remote_state data source to share data between them. We also replaced region-specific configuration with dynamic values from AWS provider data sources.