Getting Started with AWS and Terraform: Multi-Attaching Elastic File System (EFS) Volumes to EC2 instances using Terraform
Chinmay Tonape
Posted on February 3, 2024
In a previous post, we explored how to enhance storage resiliency by multi-attaching Elastic Block Store (EBS) volumes to multiple EC2 instances.
Continuing on the path of improving storage capabilities, in this post, we will delve into the process of multi-attaching Elastic File System (EFS) volumes to EC2 instances using Terraform.
Architecture Overview
Before we dive into the configuration, let's understand the architecture we'll be working with:
Step 1: Creating the VPC and Network Components
Create a VPC with an Internet Gateway, four public subnets in separate AZs, and a route table with associations. Please refer to my GitHub repo in the Resources section below.
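The network layout from the repo can be sketched with plain resources (the CIDRs and resource names here are illustrative assumptions; the repo wraps this in a vpc module):

```hcl
# Sketch of the network layer; CIDRs and names are illustrative assumptions.
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true # needed so EFS mount target DNS names resolve
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

data "aws_availability_zones" "available" {}

resource "aws_subnet" "public" {
  count                   = 4
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public" {
  count          = 4
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}
```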
Step 2: Creating Linux EC2 Web Server Instances in Multiple AZs
Deploy Linux EC2 instances in multiple Availability Zones (AZs). Please refer to my GitHub repo in the Resources section below.
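A minimal sketch of the web server instances (the AMI filter, instance type, and key name are assumptions; the repo's web module exposes the instance_ids and public_ip values used later):

```hcl
# Sketch of the web servers, one per subnet/AZ; values are illustrative.
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_instance" "web" {
  count                  = 2
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = "t3.micro"
  subnet_id              = module.vpc.public_subnets[count.index]
  vpc_security_group_ids = tolist(module.vpc.security_group_ec2)
  key_name               = "MyKeyPair" # placeholder key pair name

  tags = {
    Name = "web-server-${count.index}"
  }
}
```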
Step 3: Creating an EFS File System with security groups and mount targets
Create an EFS file system, EFS mount targets in the VPC subnets, and a security group for the mount targets that allows inbound NFS traffic on port 2049 only from the EC2 instances' security group.
Create the EFS file system:
####################################################
# Create EFS
####################################################
resource "aws_efs_file_system" "efs_file_system" {
  creation_token   = "efs-test"
  performance_mode = "generalPurpose"
  throughput_mode  = "bursting"

  lifecycle_policy {
    transition_to_ia = "AFTER_30_DAYS"
  }
}
Create Security Group and Mount Targets:
####################################################
# Create the security group for EFS Mount Targets
####################################################
resource "aws_security_group" "aws-sg-efs" {
  description = "Security Group for EFS mount targets"
  vpc_id      = module.vpc.vpc_id

  ingress {
    description     = "EFS"
    from_port       = 2049
    to_port         = 2049
    protocol        = "tcp"
    security_groups = tolist(module.vpc.security_group_ec2)
  }

  egress {
    from_port       = 0
    to_port         = 0
    protocol        = "-1"
    security_groups = tolist(module.vpc.security_group_ec2)
  }
}
####################################################
# Create EFS mount targets
####################################################
resource "aws_efs_mount_target" "mount_targets" {
  count           = 2
  file_system_id  = aws_efs_file_system.efs_file_system.id
  subnet_id       = module.vpc.public_subnets[count.index]
  security_groups = [aws_security_group.aws-sg-efs.id]
}
Step 4: Mounting EFS on EC2 instances
Generate a custom script to mount the EFS, then push it to and execute it on the EC2 instances.
Generate the script that mounts EFS on the EC2 instances:
####################################################
# Generate script for mounting EFS
####################################################
resource "null_resource" "generate_efs_mount_script" {
  provisioner "local-exec" {
    command = templatefile("efs_mount.tpl", {
      efs_mount_point = var.efs_mount_point
      file_system_id  = aws_efs_file_system.efs_file_system.id
    })
    interpreter = [
      "bash",
      "-c"
    ]
  }
}
efs_mount.tpl - once rendered by templatefile and run by the local-exec provisioner above, this heredoc writes efs_mount.sh. The generated script adds an entry to /etc/fstab so that EFS gets remounted after EC2 restarts, then mounts the file system. Note that the `efs` mount type requires the amazon-efs-utils package on the instance.
cat << EOF >> efs_mount.sh
#!/bin/bash
sudo mkdir -p ${efs_mount_point}
sudo su -c "echo '${file_system_id}:/ ${efs_mount_point} efs _netdev,tls 0 0' >> /etc/fstab"
# Give the EFS mount target DNS records time to propagate
sleep 120
sudo mount ${efs_mount_point}
df -k
EOF
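The fstab line the template produces can be sanity-checked locally before an apply; the file system ID and mount point below are placeholder assumptions, not values from the repo:

```shell
# Render the fstab entry locally with placeholder values to verify its shape
file_system_id="fs-0123456789abcdef0"         # placeholder, not a real EFS ID
efs_mount_point="/home/ec2-user/content/test"

entry="${file_system_id}:/ ${efs_mount_point} efs _netdev,tls 0 0"
echo "$entry"

# A valid /etc/fstab line has six whitespace-separated fields
echo "$entry" | awk '{print NF}'   # prints 6
```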
Execute the script on the running EC2 instances:
####################################################
# Execute scripts on existing running EC2 instances
####################################################
resource "null_resource" "execute_script" {
  count = 2

  # Changes to any instance of the cluster require re-provisioning
  triggers = {
    instance_id = module.web.instance_ids[count.index]
  }

  provisioner "file" {
    source      = "efs_mount.sh"
    destination = "efs_mount.sh"
  }

  connection {
    host        = module.web.public_ip[count.index]
    type        = "ssh"
    user        = "ec2-user"
    ## private_key = file(var.private_key_location) # Location of the private key
    private_key = file("D:/AWS/MyKeyPair")
    timeout     = "4m"
  }

  provisioner "remote-exec" {
    # Bootstrap script called for each node in the cluster
    inline = [
      "bash efs_mount.sh",
    ]
  }
}
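Since null_resource provisioners are generally discouraged (see the Side Notes below), the same mount could instead be baked into the instances at launch through user_data. A sketch, assuming a hypothetical efs_user_data.tpl that contains the mkdir/fstab/mount commands directly rather than the heredoc wrapper used above:

```hcl
# Sketch: mount EFS at launch via user_data instead of null_resource.
# efs_user_data.tpl is a hypothetical template holding the mount commands.
resource "aws_instance" "web" {
  count         = 2
  ami           = var.ami_id # placeholder variable
  instance_type = "t3.micro"
  subnet_id     = module.vpc.public_subnets[count.index]

  user_data = templatefile("efs_user_data.tpl", {
    efs_mount_point = var.efs_mount_point
    file_system_id  = aws_efs_file_system.efs_file_system.id
  })
}
```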
Purge the local script on destroy:
####################################################
# Cleanup existing script
####################################################
resource "null_resource" "clean_up" {
  provisioner "local-exec" {
    when    = destroy
    command = "rm -rf efs_mount.sh"
    interpreter = [
      "bash",
      "-c"
    ]
  }
}
Steps to Run Terraform
Follow these steps to execute the Terraform configuration:
terraform init
terraform plan
terraform apply -auto-approve
Upon successful completion, Terraform will provide relevant outputs.
null_resource.execute_script[0] (remote-exec): Filesystem 1K-blocks Used Available Use% Mounted on
null_resource.execute_script[0] (remote-exec): devtmpfs 488756 0 488756 0% /dev
null_resource.execute_script[0] (remote-exec): tmpfs 496748 0 496748 0% /dev/shm
null_resource.execute_script[0] (remote-exec): tmpfs 496748 508 496240 1% /run
null_resource.execute_script[0] (remote-exec): tmpfs 496748 0 496748 0% /sys/fs/cgroup
null_resource.execute_script[0] (remote-exec): /dev/xvda1 8376300 1613400 6762900 20% /
null_resource.execute_script[0] (remote-exec): tmpfs 99352 0 99352 0% /run/user/1000
null_resource.execute_script[0] (remote-exec): 127.0.0.1:/ 9007199254739968 0 9007199254739968 0% /home/ec2-user/content/test
null_resource.execute_script[1] (remote-exec): Filesystem 1K-blocks Used Available Use% Mounted on
null_resource.execute_script[1] (remote-exec): devtmpfs 488756 0 488756 0% /dev
null_resource.execute_script[1] (remote-exec): tmpfs 496748 0 496748 0% /dev/shm
null_resource.execute_script[1] (remote-exec): tmpfs 496748 512 496236 1% /run
null_resource.execute_script[1] (remote-exec): tmpfs 496748 0 496748 0% /sys/fs/cgroup
null_resource.execute_script[1] (remote-exec): /dev/xvda1 8376300 1613396 6762904 20% /
null_resource.execute_script[1] (remote-exec): tmpfs 99352 0 99352 0% /run/user/1000
null_resource.execute_script[1] (remote-exec): 127.0.0.1:/ 9007199254739968 0 9007199254739968 0% /home/ec2-user/content/test
null_resource.execute_script[0]: Creation complete after 2m15s [id=6876713525896539269]
null_resource.execute_script[1]: Creation complete after 2m16s [id=4318800104925290772]
Apply complete! Resources: 18 added, 0 changed, 0 destroyed.
Outputs:
ec2_instance_ids = [
"i-009d9725d44b9a4af",
"i-0cbafebadc3e979ab",
]
ec2_public_ips = [
"18.207.209.158",
"3.92.84.59",
]
efs_system-id = "fs-0a6a8d2a0bf361e82"
Testing the outcome
EFS created with mount targets:
Cleanup:
Remember to destroy the AWS components to avoid unexpected bills.
terraform destroy -auto-approve
Side Notes:
1. If you are using WSL on Windows and VS Code to create the bash script via local-exec, save the template file with EOL conversion set to Unix (use Edit -> EOL Conversion in Notepad++).
2. If you have Docker Desktop installed, it creates its own WSL distros; change the default as follows so that WSL works as expected:
> wsl -l
Windows Subsystem for Linux Distributions:
docker-desktop-data (Default)
docker-desktop
> wsl -s docker-desktop
> wsl -l
Windows Subsystem for Linux Distributions:
docker-desktop (Default)
docker-desktop-data
3. Generally it is advised not to use null_resource; I have used it in this exercise only to explain the concept of EFS mounting.
4. EFS automount can fail for various reasons, and the /etc/fstab entry may not always work; refer to this link to troubleshoot:
https://docs.aws.amazon.com/efs/latest/ug/troubleshooting-efs-mounting.html#automount-fails
In this exercise, we successfully created an EFS volume and attached it to multiple EC2 instances, thereby achieving a resilient and scalable storage solution.
As a next step, we will explore AWS networking concepts, focusing on Bastion Host, in an upcoming post.
Resources:
Github Link: https://github.com/chinmayto/terraform-aws-linux-webserver-ec2-EFS
Elastic File System: https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html