terraform-aws-ecs-cluster
Terraform module used to create a new AWS ECS cluster with VPC, IAM roles and networking components
Posted on July 4, 2022
At Camptocamp, we run multiple Blackbox Exporters hosted across several cloud providers and regions. We use them to monitor the availability of many websites, as well as the validity and expiration of their SSL certificates.
They were all deployed inside Linux VMs provisioned by Terraform and configured by our Puppet infrastructure. However, in order to achieve more simplicity and high availability, we wanted to replace these VMs with containers.
AWS ECS (Elastic Container Service) is a fully managed, highly scalable, Docker-compatible container orchestration service.
It is widely used to host microservice applications such as web servers, APIs or machine learning applications.
With ECS, you're free to choose between EC2 instances and Fargate to run your apps.
Fargate is a serverless compute engine which lets you focus on building and deploying your apps by taking away infrastructure deployment and maintenance. No need to worry about security or operating systems: AWS handles that.
On the other hand, EC2 is more flexible than Fargate and less expensive. Some customers may also prefer to manage security themselves.
In our case, we opted for a serverless approach using Fargate in order to take advantage of the simplicity of a managed infrastructure, since our blackboxes have no specific security constraints on the infrastructure.
To deploy an application on ECS using Fargate, you need three different components:
- an ECS Cluster;
- a Task Definition, describing the containers to run;
- an ECS Service, which runs and maintains tasks based on that definition.
At Camptocamp, we do IaC (infrastructure as code) mostly with Terraform. In order to simplify the deployment of all the resources necessary to implement these components, I created two distinct Terraform modules: one to create an ECS Cluster and one to create Services within an existing cluster.
They have been designed to be flexible and reusable, and we will take a closer look at them to find out what they do and how they work.
Firstly, I created a module aiming to deploy:
- an ECS Cluster;
- a VPC with public and private subnets and the associated networking components;
- the IAM role used to execute tasks;
- a CloudWatch Log Group for container logs.
To use this module, we must provide some input variables:
- project_name and project_environment, used to identify the resources;
- availability_zones, the zones the subnets will be spread across;
- public_subnets and private_subnets, the CIDR blocks of the subnets to create.
Link: https://github.com/camptocamp/terraform-aws-ecs-cluster (Terraform module used to create a new AWS ECS cluster with VPC, IAM roles and networking components)
Then, this second module aims to deploy a Fargate Service in an existing ECS Cluster (in this case deployed with the previous module).
It will also create everything necessary to be able to access our service, notably:
- the ECS Service itself;
- an Application Load Balancer, with its Target Group and Listener;
- a DNS record for the service, pointing to the load balancer.
Once again, this module requires some variables, but this time the list is a little bit longer, so here are just the most important ones:
- app_name and app_environment;
- dns_zone and dns_host, where the DNS record will be created;
- vpc_id, vpc_cidr_blocks, ecs_cluster_id and the subnet IDs, taken from the cluster module outputs;
- task_definition, the Task Definition resource to run;
- task_lb_container_name and task_lb_container_port, the container and port the load balancer will forward requests to.
Link: https://github.com/camptocamp/terraform-aws-ecs-service-fargate (Terraform module used to create a new Fargate Service in an existing ECS cluster with networking components: ALB, Target Group, Listener)
So, our use case is a serverless Blackbox Exporter deployed on AWS ECS using a Fargate instance in the eu-west-1 region.
Furthermore, it must be accessible only over HTTPS, with a valid SSL certificate and basic authentication.
In order to achieve that, we must add an Nginx sidecar container which handles basic auth and proxies traffic to the Blackbox Exporter for authenticated clients.
Here is a simple architecture diagram of what we will achieve:
First, we will create the ECS Cluster using the terraform-aws-ecs-cluster module, along with all its nested resources (VPC, subnets, etc.).
# versions.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = "eu-west-1"
}

# main.tf
module "ecs-cluster" {
  source = "git@github.com:camptocamp/terraform-aws-ecs-cluster.git"

  project_name        = "ecs-cluster-blackbox-exporters"
  project_environment = "prod"
  availability_zones  = ["eu-west-1a", "eu-west-1b"]
  public_subnets      = ["10.0.0.0/24", "10.0.10.0/24"]
  private_subnets     = ["10.0.20.0/24", "10.0.30.0/24"]
}
As you can see, a minimum of two availability zones is required in order to create the VPC subnets. You also need to provide at least two public and two private CIDR blocks.
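As an illustration (this is not necessarily how the module implements it), such a requirement can be enforced with a Terraform variable validation block:

# Hypothetical sketch: rejecting fewer than two availability zones at plan time
variable "availability_zones" {
  type = list(string)

  validation {
    condition     = length(var.availability_zones) >= 2
    error_message = "At least two availability zones must be provided."
  }
}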
Now that we have declared the module which will create a fresh ECS cluster with all the associated networking, we can create the Task Definition of our Blackbox application task, which we will need later to define the ECS Service.
A Task Definition is a template where we define the containers that will be executed by our ECS Service (Docker image to run, port mappings, environment values, log configuration, etc.), the resources required (CPU / memory), the network mode of the task (with Fargate we must use the awsvpc mode), and much more!
So, as we saw earlier, we will need two containers: the Blackbox Exporter itself and the Nginx sidecar handling basic authentication.
We will use the CloudWatch Log Group created by the ecs-cluster module for the logs of these two containers.
Furthermore, we will also use the IAM role created by the module for the execution and task role ARNs of our Task Definition.
# main.tf
# Task Definition with two containers: the Blackbox Exporter
# and the Nginx basic-auth sidecar
resource "aws_ecs_task_definition" "blackbox_fargate_task" {
  family                = "blackbox-exporter-task"
  container_definitions = <<DEFINITION
[
  {
    "name": "ecs-service-blackbox-prod-container",
    "image": "prom/blackbox-exporter:latest",
    "essential": true,
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "${module.ecs-cluster.cloudwatch_log_group_id}",
        "awslogs-region": "eu-west-1",
        "awslogs-stream-prefix": "ecs-service-blackbox-exporter-prod"
      }
    },
    "portMappings": [
      {
        "containerPort": 9115
      }
    ],
    "cpu": 256,
    "memory": 512
  },
  {
    "name": "ecs-service-nginx-prod-container",
    "image": "beevelop/nginx-basic-auth:v2021.04.1",
    "essential": true,
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "${module.ecs-cluster.cloudwatch_log_group_id}",
        "awslogs-region": "eu-west-1",
        "awslogs-stream-prefix": "ecs-service-nginx-exporter-prod"
      }
    },
    "environment": [
      {
        "name": "HTPASSWD",
        "value": "${var.blackbox_htpasswd}"
      },
      {
        "name": "FORWARD_HOST",
        "value": "localhost"
      },
      {
        "name": "FORWARD_PORT",
        "value": "9115"
      }
    ],
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80,
        "protocol": "tcp"
      }
    ]
  }
]
DEFINITION

  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  memory                   = "512"
  cpu                      = "256"
  execution_role_arn       = module.ecs-cluster.ecs_task_execution_role_arn
  task_role_arn            = module.ecs-cluster.ecs_task_execution_role_arn

  tags = {
    Name        = "ecs-service-blackbox-exporter-td"
    Environment = "prod"
  }
}
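As an aside, container_definitions can also be generated with Terraform's built-in jsonencode() function instead of a raw heredoc, so that JSON syntax errors are caught when the configuration is parsed. A minimal sketch with the first container only, the rest of the resource staying unchanged:

# Sketch: the same container definition built with jsonencode()
# instead of a heredoc (first container only, abbreviated)
container_definitions = jsonencode([
  {
    name      = "ecs-service-blackbox-prod-container"
    image     = "prom/blackbox-exporter:latest"
    essential = true
    cpu       = 256
    memory    = 512
    portMappings = [
      { containerPort = 9115 }
    ]
  }
])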
In this example, I get the htpasswd from a Terraform variable, var.blackbox_htpasswd. You can define it like this:
# variables.tf
variable "blackbox_htpasswd" {
  type      = string
  sensitive = true
}
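Since the variable is marked as sensitive, avoid committing its value to version control; you can pass it through the TF_VAR_blackbox_htpasswd environment variable or a git-ignored tfvars file, for instance. A hypothetical terraform.tfvars entry (the value is a user:hash pair, which can be generated with htpasswd -nb <user> <password>):

# terraform.tfvars (hypothetical placeholder value,
# keep this file out of version control)
blackbox_htpasswd = "monitoring:$apr1$<salt>$<hash>"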
Next, we will need a DNS Zone where the ECS Service module will create the record.
# dns.tf
resource "aws_route53_zone" "alb_dns_zone" {
  name              = "example.com"
  delegation_set_id = "<Delegation_set_id>"
}
Optionally, if you don't already have one, you can create a delegation set in your AWS account and reference its ID in your Route53 zone resource in order to always have the same DNS servers.
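Here is a minimal sketch of that option, reusing the zone from above (reference_name is an arbitrary label):

# Sketch: create a reusable delegation set and reference it
# from the zone instead of a hardcoded ID
resource "aws_route53_delegation_set" "main" {
  reference_name = "blackbox-exporters"
}

resource "aws_route53_zone" "alb_dns_zone" {
  name              = "example.com"
  delegation_set_id = aws_route53_delegation_set.main.id
}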
Finally, we can now create our ECS Service:
module "ecs-cluster-service-blackbox" {
source = "git@github.com:camptocamp/terraform-aws-ecs-service-fargate.git"
app_name = "ecs-service-blackbox"
app_environment = "prod"
dns_zone = "example.com"
dns_host = "blackbox.example.com"
vpc_id = module.ecs-cluster.vpc_id
vpc_cidr_blocks = module.ecs-cluster.vpc_cidr_blocks
ecs_cluster_id = module.ecs-cluster.ecs_cluster_id
task_definition = aws_ecs_task_definition.blackbox_fargate_task
task_lb_container_name = "ecs-service-nginx-prod-container"
task_lb_container_port = 80
subnet_private_ids = module.ecs-cluster.private_subnets.*.id
subnet_public_ids = module.ecs-cluster.public_subnets.*.id
generate_public_ip = true
depends_on = [
aws_route53_zone.alb_dns_zone
]
}
As you can see, you must pass the module some of the previously created resources, including the VPC ID and CIDR blocks, the ECS cluster ID, the DNS zone, the Task Definition resource and the subnets.
You must also specify which container the load balancer will forward requests to, and on which port.
Once all your resources are properly configured, you can run a terraform apply to create them.
That's it 🥳! You now have a nice serverless Blackbox accessible on blackbox.example.com with basic auth! 🎉