For AWS newcomers: How to deploy your service using AWS Fargate - Network Configuration & Deployment


Aashish Balaji

Posted on December 25, 2022


Though the AWS CDK and Fargate can make it really easy to spin up and manage the compute resources you need for deployment, manually going through the networking setup can be helpful and provide clarity. Ultimately, you'll be using ECS (and the same concepts apply to EC2-backed deployments), so understanding the networking that underlies all of this is useful, transferable knowledge.

We won't get into the fine details of each component; the intent of this guide is to familiarize you with AWS networking just enough that you have a good idea of why we need these steps and why their order matters.

Key points about our use case:

  • Our service will be containerized with Docker and deployed as an ECS task.
  • Our service will need to act as both client and server.
  • Assuming valid authorization, incoming and outgoing connections can come from or go to anywhere on the internet, including other cloud service providers.

Prerequisites:

  1. Install the AWS CLI.

  2. Upload your Docker image to a repository in Amazon ECR (a rough sketch of the commands is shown below).
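
If you haven't pushed an image yet, the ECR workflow looks roughly like this. This is only a sketch; the account ID, region, and the repository name my-service are placeholders for your own values:

```bash
# Authenticate Docker against your ECR registry (account ID and region are placeholders)
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Build, tag, and push the image to a repository named "my-service" (hypothetical name)
docker build -t my-service .
docker tag my-service:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-service:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-service:latest
```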

An important point to note is that the task/container should be on a private network, whereas public access will be enabled by a load balancer - in this case, Amazon's Application Load Balancer (ALB).

Step 1: Create a VPC (Virtual Private Cloud)

While there may already be a default VPC on your account, let's start by clicking the blue "Create VPC" button. Choose a memorable name (e.g., myFirstVPC) and specify the CIDR range as 10.0.0.0/16, since this allows the maximum number of IP addresses for your VPC (see https://www.aelius.com/njh/subnet_sheet.html). Leave all other settings as-is and click "Create."
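
If you'd rather script this step, the AWS CLI equivalent is roughly the following (the Name tag just reuses the example name above):

```bash
# Create the VPC with a /16 CIDR block and a Name tag
aws ec2 create-vpc \
  --cidr-block 10.0.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=myFirstVPC}]'
```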

You should now see it in the list of active VPCs with a state of "Available."

Step 2: Configuring subnets

We will now need both public and private subnets in our VPC. The basic idea is that the load balancer we mentioned before will require public subnets to allow public internet connections, and the task itself will run within the private subnets. For our example, we'll make two private subnets and two public subnets.

On the VPC sidebar under "Your VPCs", navigate to "Subnets" and click the blue "Create Subnet" button. Choose a name that's easy to remember (e.g., public1), and make sure you choose the VPC that you just created. Choose an availability zone that you prefer (each subnet will get a different one). For the last step, declaring the IPv4 CIDR block, we'll divide our VPC's CIDR range into 4 parts, one for each subnet.

For the first public subnet, you can type in 10.0.1.0/24 (we'll use different blocks for each subnet) and click "Create".

After a few seconds, refresh your page and you should see the newly created subnet in the "Available" state.

Repeat the steps above for the second public subnet but with the following changes:

  • A new name
  • A different availability zone (e.g., if you chose us-east-1a previously, choose us-east-1b this time)
  • A CIDR block of 10.0.2.0/24

At this point, you should have two public subnets up and running.

Now, we'll create two private subnets. Repeat the same steps as above, but name them appropriately (e.g., private1). You can choose new availability zones, or stick to the previous ones. For the CIDR blocks, specify them as 10.0.3.0/24 and 10.0.4.0/24 respectively.
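
For reference, the same four subnets could be created from the CLI along these lines. This is a sketch; the VPC ID and availability zones are placeholders, so adjust them to your own:

```bash
VPC_ID=vpc-0123456789abcdef0   # placeholder: use the ID of the VPC you created

# Two public subnets in different availability zones
aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block 10.0.2.0/24 --availability-zone us-east-1b

# Two private subnets
aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block 10.0.3.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block 10.0.4.0/24 --availability-zone us-east-1b
```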

We've got all 4 of our subnets created!

Step 3: Internet Gateway

For the public subnets to be accessible, we need to attach an Internet Gateway. On the VPC sidebar, navigate to "Internet Gateway" and click the blue "Create Internet Gateway" button. Name it and click "Create". After a refresh, you should see it with a state of "Detached." At the top next to the blue Create button, click the "Actions" drop down and select "Attach to VPC." Choose your VPC that we created earlier and click "Attach." Now, you can refresh and check that the state of the gateway is "Attached".
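
The CLI equivalent is a two-step sketch (the gateway and VPC IDs below are placeholders):

```bash
# Create the internet gateway, then attach it to the VPC
aws ec2 create-internet-gateway \
  --tag-specifications 'ResourceType=internet-gateway,Tags=[{Key=Name,Value=myIGW}]'
aws ec2 attach-internet-gateway \
  --internet-gateway-id igw-0123456789abcdef0 \
  --vpc-id vpc-0123456789abcdef0
```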

Step 4: Route Tables

In order to direct network traffic appropriately, we use route tables. This is how we'll point our public subnets, specifically, at the internet gateway. Navigate to the "Route Tables" section in the VPC sidebar and click the blue "Create route table" button. This one is for public access, so name it appropriately (e.g., publicRT1), choose the VPC that you created, and click "Create."

Select your newly created route table, and click the "Routes" tab underneath. Click "Edit Routes" so that we can add a route that points to our internet gateway. The "Destination" can be set to 0.0.0.0/0 to signify any possible route other than the local route. Set the "Target" to your newly created internet gateway and click "Save routes." It should now be visible with a state of "Active" under "Routes." Next, click the "Subnet Associations" tab and click "Edit Subnet Associations." Here, choose the two public subnets that you created earlier, and click "Save."

Let's create another route table for our private subnets with a new name. Choose your VPC and click "Create." Since we don't need a gateway for this one, click the "Subnet Associations" tab underneath. Click "Edit Subnet Associations" and choose the two private subnets we created earlier. Once they're selected, click "Save."
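
As a CLI sketch, the two route tables come together like this (the route table, gateway, and subnet identifiers are placeholders for the IDs returned by the earlier commands):

```bash
# Public route table: default route to the internet gateway, associated with the public subnets
aws ec2 create-route-table --vpc-id $VPC_ID
aws ec2 create-route --route-table-id rtb-public --destination-cidr-block 0.0.0.0/0 \
  --gateway-id igw-0123456789abcdef0
aws ec2 associate-route-table --route-table-id rtb-public --subnet-id subnet-public1
aws ec2 associate-route-table --route-table-id rtb-public --subnet-id subnet-public2

# Private route table: only the implicit local route for now, associated with the private subnets
aws ec2 create-route-table --vpc-id $VPC_ID
aws ec2 associate-route-table --route-table-id rtb-private --subnet-id subnet-private1
aws ec2 associate-route-table --route-table-id rtb-private --subnet-id subnet-private2
```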

At this point, we've created a VPC, divided its IP address range across 4 subnets, configured an internet gateway, and routed connections so that public access is handled correctly.

It's time to create our task definition!

Step 5: The ECS Cluster - Creating a Task Definition

Search for "ECS" on the AWS website and navigate to your ECS dashboard. Click the blue "Create cluster" button and choose "Networking Only - Powered by AWS Fargate." Click "Next Step" and name your cluster. We need a VPC for this cluster so check the box for "Create a new VPC" and use the default CIDR blocks. Check the box for "Enable Container Insights" at the bottom since CloudWatch is very useful and click "Create." Click "View Cluster" and you should now be on the page for your new cluster!

The next step is to create a task and a service. Navigate to "Task Definitions" on the sidebar and click "Create new Task Definition." Choose "Fargate" as the launch type and name your task. The task role specifies an IAM role with the correct permissions in case your task actively uses other AWS services; our use case doesn't, so we'll skip it. The network mode should be the default "awsvpc", and we can move on and create a new task execution role. For the task size, choose the configuration you need; we used 1 GB for the task memory and 0.5 vCPU for the task CPU.

Now, click the blue "Add container" button and give it a name. To specify the location of the Docker image, go to your repository on ECR, and copy the image URI of your Docker image. Return to the "Add container" page and paste it inside the "Image" box. Scroll down to Port mappings, where you can specify any ports that your container might need to expose. In our case, it was 9001 so that's what we used.

None of the other settings on this page are relevant to us, so go ahead and click "Add." Click "Create" and you should be able to view your new task definition.
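
For reference, a roughly equivalent task definition registered through the CLI might look like this. The family name, image URI, and execution role ARN are placeholders for your own values:

```bash
# Register a Fargate task definition: awsvpc networking, 0.5 vCPU, 1 GB memory, container port 9001
aws ecs register-task-definition \
  --family my-service-task \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 512 \
  --memory 1024 \
  --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
  --container-definitions '[{
      "name": "my-service",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-service:latest",
      "portMappings": [{"containerPort": 9001, "protocol": "tcp"}]
    }]'
```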

Step 6: The ECS Cluster - Creating a Service Definition

Navigate to the "Clusters" page and in the "Services" tab underneath, click the blue "Create" button. Choose "Fargate" as the launch type and select the correct task defintion pointing to the one you just created. Assuming you want to use the latest version of your image, the other settings can be left as default and you can name your service. The "Number of Tasks" is dependent on your use case; for us, we chose 3 because we wanted 3 tasks to be active at any given time. Assuming you're okay with a rolling update, the rest of the settings can be the default values as well.

Click "Next Step" and now, make sure you choose the VPC you created earlier. To be sure, you can navigate to your VPC dashboard and doublecheck the ID of your VPC so you can select it from the menu. Next, choose both private subnets for the "Subnets" selection since we want our tasks running in the private ones. Click "Edit" in the "Security Groups" section. Though the default setting accepts all incoming connections, we want to specifically route the connections hitting our load balancer. Since we haven't configured our load balancer yet, we can't confirm this setting but we will return to this in Step 9. Keep the security group name handy.

Click "Cancel" and scroll down to the "Load balancing" section. Our use case requires the Application Load Balancer so click that one and for the health check grace period above, you can specify a custom range. We chose "50s".

There's an important note to remember about the health check used by Amazon's Elastic Load Balancer (ELB). In order to pass these checks and not have your tasks repeatedly stopped and relaunched, be sure to expose a health check path within your service/container. It should return a 200 OK response so that all pings from the ELB are handled properly. This path will be referenced again in the Load Balancers section.
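
A quick local sanity check before deploying can save a lot of debugging later. Assuming your container listens on port 9001 and exposes a hypothetical /health path, something like this should show a 200:

```bash
# Run the container locally and confirm the health check path returns 200 OK
docker run -d -p 9001:9001 my-service:latest
curl -i http://localhost:9001/health   # expect "HTTP/1.1 200 OK"
```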

Underneath, you should see that no load balancers were found so click the link to the EC2 Console which should redirect you to the load balancers page.

Step 7: Creating the ALB (Application Load Balancer)

Here, click "Create", choose "ALB" and give it a name. The scheme should be internet-facing and the "Listeners" section can be set to default. Under "Availability Zones", select your VPC and choose the availability zones you want. For the subnets, be sure to choose only the public subnets for any availability zone you select.

Click "Next" and you should now be on the page to create "Security Groups." Name it appropriately (eg. albSG) and add the following rule: Type: HTTP, Protocol: TCP, Port Range: 80, Source: Custom - 0.0.0.0/0 and a separate rule for ::/0 so that anybody can access the load balancer. Click "Next" and you should be on the page to configure a "Target Group." Name it and make sure to set the target type to "IP". Leave the rest of the fields unchanged and for the health checks section, be sure to specify the path you created earlier.

Click "Next" until you see the blue "Create" button and click it. Return to the cluster settings page so we can continue from where we left off. Your new load balancer should be automatically selected and your container should be added to the load balancer. If not, you should see a blue button for "Add to load balancer" next to your container name, which you can click.

For the specific container settings in the "Container to Load Balance" section, select 80:HTTP as your Production listener port. Then, for the Target group name, select the target group you created earlier. Essentially, all traffic that hits the load balancer on port 80 will be redirected to the load balancer's target group. The rest of the settings including the health check path should be automatically configured.
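
If you were wiring this up by hand, the listener that does this forwarding would look roughly like the following (both ARNs are placeholders for the ones returned when you created the ALB and target group):

```bash
# Forward all HTTP traffic on port 80 to the target group
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/myALB/abc123 \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/myTG/def456
```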

Scroll down to "Service Discovery" and uncheck that box since we're not using other microservices with our use case. Click "Next" until you reach the blue "Create" button and click it.

Step 8: NAT Gateway

Now return to your Clusters page and, after a refresh, you should see that your service is up and running. On the tasks tab, however, you'll notice the tasks cycle through "Pending -> Running -> Stopped" over a period of 30-60 seconds. Since our tasks are running in private subnets, anything that requires internet access (e.g., pulling the image from ECR or fetching dependencies at startup) fails. The solution is to add a NAT gateway in one of our public subnets, so that any process in a private subnet can reach external resources - while still preventing external access to the task itself.

On the VPC Dashboard, navigate to "NAT Gateways" on the sidebar and click the blue "Create NAT Gateway" button. Choose a public subnet, since we want the gateway to have internet accessibility for our tasks. For the Elastic IP, you can click the "Create new EIP" button to attach a new one to the gateway. Once it's created, click the "Edit Route Table" button to view the public and private route tables. Select the private route table and check the "Routes" tab underneath; click "Edit Routes" and add a new route with a destination of 0.0.0.0/0 and the NAT gateway you just created as the target. Click "Save Routes" and, with this new rule, your tasks can access the internet!
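
The CLI sketch for this step (the subnet, allocation, and route table IDs are placeholders):

```bash
# Allocate an Elastic IP and create the NAT gateway in a public subnet
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-public1 --allocation-id eipalloc-0123456789abcdef0

# Point the private route table's default route at the NAT gateway
aws ec2 create-route --route-table-id rtb-private --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-0123456789abcdef0
```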

Step 9: Final configurations

Lastly, we need to return to the security group settings we skipped during Step 6. From your "Service" page in the Cluster settings, click the blue "Update" button. On the "Configure Network" page, click the security group so that it opens in a new page. We need to restrict inbound traffic so that only the load balancer's security group is allowed in; that way the service isn't directly accessible from outside, but the task can still be reached through the ALB.

To do this, open up a separate tab and navigate to your load balancer. Scroll down until you find its Security Group ID under the "Security" section. Copy that ID or open the link elsewhere. Go back to the page where you have your service's security group. Click the "Inbound Rules" tab and click the "Edit Rules" button. Add a rule with the type "All TCP" and for "Source", paste the load balancer's security group ID. Save the rule and return to your load balancer's page. After a refresh, click the "Listeners" tab and you should clearly see that all traffic hitting port 80 for HTTP is being forwarded to your service's target group. Clicking on that target group and then clicking the "Targets" tab will confirm that all your tasks are healthy.
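
The equivalent inbound rule from the CLI would be along these lines (both security group IDs are placeholders for the task's and the ALB's groups):

```bash
# Allow all TCP traffic into the task's security group, but only from the ALB's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-task \
  --protocol tcp --port 0-65535 \
  --source-group sg-alb
```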

A visual for this entire setup can be found here, courtesy of AWS and this article.

Summary:

We've successfully set up everything we need for our task to run smoothly! If you choose to simply use CDK scripts, paying attention to the code and logs will show you that all of this is happening behind the scenes. But not understanding the purpose of this networking setup can lead to frustration when there's a problem and you need to debug. I hope this guide was helpful in making the process of working with AWS more transparent.

Credits:
The guide very closely parallels this great video.

Thanks to Michael Jung, Aryan Binazir, and Jordan Swartz for valuable contributions.
