Can you use ECS without DevOps?

Jakub Wołynko (3sky)

Posted on March 16, 2023

Welcome

Use your imagination and assume that some team was told to migrate away from Azure App Service. The decision was made at a high level, so a small dev team needs to move to AWS. What's the issue, you may ask? They have no experience with Amazon Web Services, and they don't have a "DevOps" person. The app is a JAR file. App Service let them drop the file into the service and run it in production without any issues. When they started looking for an alternative on the AWS side, the only solution they found was Elastic Beanstalk; unfortunately, the workload fits better into a "Docker platform". ECS, on the other hand, doesn't provide "drop-in" functionality. And that's the case. Let's check how easily we can build the bridge!

Tools used in this episode

  • Java
  • ECS
  • CloudFormation (again!)
  • GitHub Actions

Rules

  1. As a starting point, let's assume we can use a random Java app, as writing one is not today's topic. Maybe you know it or not, but there is a great site with HelloWorld apps.
  2. We can't write Dockerfiles (our devs have no idea about them!).
  3. CloudFormation scripts - we must fit into the 3-tier architecture model.
  4. CI/CD should be very simple to set up and use, as the developers have no time for it.

Containers

As an example, I decided to use openliberty-realworld-example-app, written by OpenLiberty. The app is a standard Maven project, hasn't been updated for the last 2 years, and exposes a REST API. There is also no Dockerfile, which means I will need to write one for comparison purposes.

Next, I started looking for a "container generator" - an app or service that can easily build a secure container for us. I found a solution called Buildpacks. It can generate a Docker image with one, maybe two commands. First, we need to install pack, the project's CLI tool. Documentation can be found here.
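Installation itself is quick - a minimal sketch, assuming macOS with Homebrew (the docs cover other platforms):

# Install the pack CLI from the official Buildpacks tap
brew install buildpacks/tap/pack

# Sanity check
pack --version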
With pack installed, the first action besides pack --version was checking the available builders. What is a builder? It's a container that carries all the dependencies needed to build the final image. The funny thing is that we can even write our own builder; in the end, it's a solution similar to Red Hat's 's2i'. Which builders are available out of the box? At least a few:

pack builder suggest
Suggested builders:
        Google:                gcr.io/buildpacks/builder:v1      Ubuntu 18 base image with buildpacks for .NET, Go, Java, Node.js, and Python                                                      
        Heroku:                heroku/builder:22                 Base builder for Heroku-22 stack, based on ubuntu:22.04 base image                                                                
        Heroku:                heroku/buildpacks:20              Base builder for Heroku-20 stack, based on ubuntu:20.04 base image                                                                
        Paketo Buildpacks:     paketobuildpacks/builder:base     Ubuntu bionic base image with buildpacks for Java, .NET Core, NodeJS, Go, Python, Ruby, Apache HTTPD, NGINX and Procfile          
        Paketo Buildpacks:     paketobuildpacks/builder:full     Ubuntu bionic base image with buildpacks for Java, .NET Core, NodeJS, Go, Python, PHP, Ruby, Apache HTTPD, NGINX and Procfile     
        Paketo Buildpacks:     paketobuildpacks/builder:tiny     Tiny base image (bionic build image, distroless-like run image) with buildpacks for Java, Java Native Image and Go      

Looks like we can use paketobuildpacks/builder:tiny, as we have a Java app. That's why it's handy to use popular programming languages - they will probably be supported everywhere.
Now that we have chosen the builder, let's run the build command.

pack build openliberty-realworld-example-app --tag openliberty-realworld-example-app:pack --builder paketobuildpacks/builder:tiny

First note: it just doesn't work on a MacBook M1 - or maybe it does, but it simply gets stuck, so I switched to my x86 workstation.
After that, it looked good: the app was built in 2 minutes and the image size was acceptable at ~241MB. I was also able to run the container. In one word - success. Pack detected the artifact type and used the correct middleware for it. It's impressive!

Next, I created a "regular" Dockerfile and compared the sizes. The funny thing is that I needed to spend around one hour writing a Dockerfile for an unknown application.

# Stage 1: build the WAR with Maven
FROM maven:3.6.0-jdk-11-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean package

# Stage 2: run the WAR on Tomcat
FROM tomcat:9.0.73-jre11

COPY --from=build /home/app/target/realworld-liberty.war /usr/local/tomcat/webapps/
EXPOSE 8080
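Building the manual image and comparing both variants is then straightforward - a sketch, assuming the Dockerfile sits in the repository root:

# Build the handwritten image and list both variants side by side
docker build -t openliberty-realworld-example-app:manual .
docker images openliberty-realworld-example-app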

Below you can see the sizes of the container images. The version tagged pack is 80MB smaller - roughly 27% less. That is impressive.

|Image | Size|
|---|---|
|openliberty-realworld-example-app:manual | 291MB |
|openliberty-realworld-example-app:pack | 211MB |

The next step was running security scans with trivy.
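The scan itself is a one-liner per image, assuming trivy is installed locally:

# Scan both images for known vulnerabilities
trivy image openliberty-realworld-example-app:manual
trivy image openliberty-realworld-example-app:pack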

|Image | Low findings | Medium findings |
|---|---|---|
|openliberty-realworld-example-app:manual | 15 | 4 |
|openliberty-realworld-example-app:pack | 3 | 0 |

It seems that the image built with pack is more secure, and smaller, than the image built by me. So yes, I'm personally impressed. In my opinion, a more complex app could still surface a lot of new issues that need many hours of digging and tweaking. However, if you're using Java, Buildpacks should be able to handle your code without serious issues - at least based on my short tests.

Platform

Now let me introduce ECS - Elastic Container Service. It's Amazon's way of managing containers in the cloud. It's neither Kubernetes nor docker-compose. The best thing is that we can use Fargate (managed, pay-as-you-go compute) as the muscles of our solution, which makes the infrastructure very cost-efficient. In our case, I will try to fit the architecture into a corporate 3-tier model. The whole ~500-line template can be found here. The diagram was created with cfn-diagram. And it's awful.

Great image for showing at entry-level demos
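For the record, spinning the stack up is a single CLI call - a sketch, where the template file and stack name are placeholders:

# Deploy the ~500-line template; names below are placeholders
aws cloudformation deploy \
  --template-file template.yaml \
  --stack-name ecs-3tier-demo \
  --capabilities CAPABILITY_NAMED_IAM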

Deployment

If we wanted to deploy our application easily, the simplest way would be using CodeDeploy. However, in many organizations the CI/CD tool is an external, vendor-neutral solution. Let's focus then on an easy and generic way to update our code on QA and PROD, with some tests as well.
For example, 5 years ago I did an interview task; in those days there was a tool called silinternational/ecs-deploy. Let's see if we can skip it, as the plan is to use GitHub Actions today!
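For context, ecs-deploy was a single Bash script wrapping the AWS CLI; from memory, its invocation looked roughly like this (flags quoted from memory - check the project's README before relying on them):

# Point an ECS service at a new image (flags may have changed)
./ecs-deploy -c my-cluster -n my-service -i my-repo/my-app:new-tag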

First impression

GitHub Actions is a really nice tool. I started with the panel and the built-in, or rather pre-created, workflows.

Simple deployment example

After that, I was a bit shocked. I received an almost complete pipeline - yes, it's simple, but it's ready to use and a good starting point.

# This workflow will build and push a new container image to Amazon ECR,
# and then will deploy a new task definition to Amazon ECS, when there is a push to the "main" branch.
#
# To use this workflow, you will need to complete the following set-up steps:
#
# 1. Create an ECR repository to store your images.
#    For example: `aws ecr create-repository --repository-name my-ecr-repo --region us-east-2`.
#    Replace the value of the `ECR_REPOSITORY` environment variable in the workflow below with your repository's name.
#    Replace the value of the `AWS_REGION` environment variable in the workflow below with your repository's region.
#
# 2. Create an ECS task definition, an ECS cluster, and an ECS service.
#    For example, follow the Getting Started guide on the ECS console:
#      https://us-east-2.console.aws.amazon.com/ecs/home?region=us-east-2#/firstRun
#    Replace the value of the `ECS_SERVICE` environment variable in the workflow below with the name you set for the Amazon ECS service.
#    Replace the value of the `ECS_CLUSTER` environment variable in the workflow below with the name you set for the cluster.
#
# 3. Store your ECS task definition as a JSON file in your repository.
#    The format should follow the output of `aws ecs register-task-definition --generate-cli-skeleton`.
#    Replace the value of the `ECS_TASK_DEFINITION` environment variable in the workflow below with the path to the JSON file.
#    Replace the value of the `CONTAINER_NAME` environment variable in the workflow below with the name of the container
#    in the `containerDefinitions` section of the task definition.
#
# 4. Store an IAM user access key in GitHub Actions secrets named `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
#    See the documentation for each action used below for the recommended IAM policies for this IAM user,
#    and best practices on handling the access key credentials.

name: Deploy to Amazon ECS

on:
  push:
    branches: [ "main" ]

env:
  AWS_REGION: MY_AWS_REGION                   # set this to your preferred AWS region, e.g. us-west-1
  ECR_REPOSITORY: MY_ECR_REPOSITORY           # set this to your Amazon ECR repository name
  ECS_SERVICE: MY_ECS_SERVICE                 # set this to your Amazon ECS service name
  ECS_CLUSTER: MY_ECS_CLUSTER                 # set this to your Amazon ECS cluster name
  ECS_TASK_DEFINITION: MY_ECS_TASK_DEFINITION # set this to the path to your Amazon ECS task definition
                                               # file, e.g. .aws/task-definition.json
  CONTAINER_NAME: MY_CONTAINER_NAME           # set this to the name of the container in the
                                               # containerDefinitions section of your task definition

permissions:
  contents: read

jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    environment: production

    steps:
    - name: Checkout
      uses: actions/checkout@v3

    - name: Configure AWS credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: ${{ env.AWS_REGION }}

    - name: Login to Amazon ECR
      id: login-ecr
      uses: aws-actions/amazon-ecr-login@v1

    - name: Build, tag, and push image to Amazon ECR
      id: build-image
      env:
        ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
        IMAGE_TAG: ${{ github.sha }}
      run: |
        # Build a docker container and
        # push it to ECR so that it can
        # be deployed to ECS.
        docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
        docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
        echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT

    - name: Fill in the new image ID in the Amazon ECS task definition
      id: task-def
      uses: aws-actions/amazon-ecs-render-task-definition@v1
      with:
        task-definition: ${{ env.ECS_TASK_DEFINITION }}
        container-name: ${{ env.CONTAINER_NAME }}
        image: ${{ steps.build-image.outputs.image }}

    - name: Deploy Amazon ECS task definition
      uses: aws-actions/amazon-ecs-deploy-task-definition@v1
      with:
        task-definition: ${{ steps.task-def.outputs.task-definition }}
        service: ${{ env.ECS_SERVICE }}
        cluster: ${{ env.ECS_CLUSTER }}
        wait-for-service-stability: true

In general, it's an unexpected gift, especially if you're not that familiar with all this piping stuff.
What do we need to change? Only the build step. pack isn't preinstalled on the GitHub runner, so one option is the setup-pack action from buildpacks/github-actions, and pack's --publish flag replaces the separate docker push:

    - name: Set up pack CLI
      uses: buildpacks/github-actions/setup-pack@v5

    - name: Build, tag, and push image to Amazon ECR
      id: build-image
      env:
        ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
        IMAGE_TAG: ${{ github.sha }}
      run: |
        # --publish pushes the image straight to the registry,
        # so no separate docker push is needed
        pack build $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG --builder paketobuildpacks/builder:tiny --publish
        echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT

In the end, we can copy the whole job as a dev variant and paste it before the production job. It wouldn't be the cleanest pipeline I have ever seen, but c'mon - the solution (without CloudFormation) was created in 30 minutes, in line with rule number 4.

Summary

Phew, to be honest I spent most of the time building the CloudFormation template. During that time, I was also preparing a presentation for the AWS User Group, so in the end I became more tired than I expected. What is the overall result? Quite impressive. I'm very happy with Buildpacks, which work well. The problem is that automagic solutions are hard to debug; if you run into issues with packing, writing a Dockerfile will probably be much faster.
What can I recommend, then? Probably learning Docker. Even if it requires some time, it's the standard these days and provides almost ultimate flexibility.

What about ECS? It's easy to set up (don't look at the template, it's overcomplicated). If you decide to go with the console or a basic config, it will be a 5-minute task, especially if your application is rather simple, with one or two services. Managing large, complex, multi-microservice setups will probably be very annoying. Ah, I almost forgot: if you like open-source stuff and cutting-edge tech from KubeCon - don't expect that here. Besides that? If you have a small app or two, it's probably a much better solution than EKS or raw Kubernetes!
