Custom AWS Lambda Docker image for local development

Niilo Jaakkola (nipatiitti)

Posted on April 27, 2023

tl;dr: A custom Docker image for AWS Lambda functions that supports local TypeScript development, hot reloading and testing

The Dilemma

I recently had to make a serverless function that converts files from one format to another. Before writing a single line of code, I laid down some ground rules for the project:

  • It needs to run in AWS Lambda
  • Use CLI tools to convert the files
  • Development must be smooth and local
  • Local environment must be as close to the real deal as possible

Now there are a few problems with this set of rules:

  • The default Lambda Node.js runtime doesn't have CLI access or the CLI tools needed
  • Local Lambda development is extremely cumbersome and hard

Luckily there is one solution for both of these problems: Lambda Container Images!

The Idea and Theory

When starting this journey I quickly realized there is not much information about projects like this, as the use case is quite niche.

To run something in AWS Lambda we need two basic things: the Lambda Runtime Interface Client (aws-lambda-ric) and the Lambda Runtime Interface Emulator (aws-lambda-rie).

[Figure: the AWS Lambda execution environment. For the production image we only need to provide the Runtime + Function, but for the development image we also need to provide the Runtime API.]

AWS also generously provides base images for multiple languages, and even vanilla ones with no runtime by default. Sadly for us, these all use the CentOS-like Amazon Linux and do not offer hot reloading.

If you are fine with using EPEL packages and don't need hot reloading, I suggest you use the provided base images and turn away now.
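If you do go that route, a minimal Dockerfile on top of the AWS base image looks something like this (a sketch following AWS's documented pattern; the dist/index.js path assumes the esbuild output used later in this post):


FROM public.ecr.aws/lambda/nodejs:18

# The base image already ships the runtime interface client
# (and the emulator for local testing)
COPY dist/index.js ${LAMBDA_TASK_ROOT}/

# Handler in <file>.<export> format
CMD ["index.handler"]
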

The Execution Part 1: Getting started

Oh you are still here? Welcome aboard for the ride!

In my use case I needed some very specific Debian packages, and the project would be complex enough that hot reloading in local development would be nice. So a custom Docker image it is!

I opted to craft two different images: one for local development and testing, the other for production. The local one uses a single stage, but for production we will use a multi-stage Dockerfile to slim down the final image and make Lambda start-ups as fast as possible.

We will also have a docker-compose.yml to make local running convenient. The compose file will spin up two versions of the development image: one that runs the tests and one that runs the dev server.

So go ahead and initialize the project any way you would like. I will use a file structure like this, with yarn as the package manager:



├── function
│   └── index.ts
├── tests
│   └── index.spec.ts
├── package.json
├── tsconfig.json
├── Dockerfile
└── docker-compose.yml



During this deep dive I will also use esbuild and Nodemon. These will be used for watching, bundling and transpiling the TypeScript code, but you can replace them with whatever you want.

For testing I will use Mocha with Chai assertions, but you are once again free to use whatever you want. I will also use ts-node to run Mocha inside the container.

Some other useful dependencies might include:

  • @types/aws-lambda
  • @types/node

For the complete setup, go have a peek here, as I won't be explaining it all in this post.
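For orientation, a minimal package.json for this setup might look roughly like this (a sketch: the package name and version ranges are my assumptions, not taken from the linked repo):


{
  "name": "lambda-converter",
  "private": true,
  "devDependencies": {
    "@types/aws-lambda": "^8.10.0",
    "@types/node": "^18.0.0",
    "chai": "^4.3.0",
    "chai-http": "^4.3.0",
    "esbuild": "^0.17.0",
    "mocha": "^10.2.0",
    "nodemon": "^2.0.0",
    "ts-node": "^10.9.0",
    "typescript": "^5.0.0"
  }
}
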

You will also need to download the aws-lambda-rie binary so that we can copy it to the development image.
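For example (the release URL is the one documented in AWS's container image docs; this grabs the x86-64 build):


curl -Lo aws-lambda-rie https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie
chmod +x aws-lambda-rie
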

The Execution Part 2: Code

To see if the images work we need something to run in them. So go ahead and create a function/index.ts file with the following code:



import { APIGatewayEvent, APIGatewayProxyResult, Context } from 'aws-lambda'

export const handler = async (event: APIGatewayEvent, context: Context): Promise<APIGatewayProxyResult> => {
  console.info('EVENT\n' + JSON.stringify(event, null, 2))

  // The local aws-lambda-rie will pass data straight to the event but the real one to event.body
  const data = JSON.parse(event.body || JSON.stringify(event))
  console.info('DATA\n' + JSON.stringify(data, null, 2))

  return {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Hello from Lambda!',
    }),
  }
}




Here you can do anything you would normally do in a Lambda function, but today we will KISS and stick to hello-world!

One notable thing is that for some reason the runtime emulator AWS provides passes POST data straight to the event object, while the real runtime in the cloud puts it into event.body. This might pose problems if you use pre-made adapters, e.g. aws -> express.
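To illustrate, here is a simplified sketch of the two event shapes for the same curl -d '{"body": "test"}' request (most real API Gateway fields omitted):


// Local aws-lambda-rie: the POST payload becomes the event itself
{ "body": "test" }

// Real API Gateway proxy integration: the raw payload arrives as a string in event.body
{ "body": "{\"body\": \"test\"}", "httpMethod": "POST", "headers": {} }
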

An example of the test code could be (tests/index.spec.ts):



import chai from 'chai'
import chaiHttp from 'chai-http'
import { describe } from 'mocha'

chai.use(chaiHttp)
chai.should()

const baseUrl = 'http://localhost:8080'
const invocationPath = '/2015-03-31/functions/function/invocations'

describe('it-runs', () => {
  // Take Mocha's done callback so the test waits for the async assertion
  it('Should return 200', (done) => {
    chai
      .request(baseUrl)
      // The RIE invocation endpoint expects a POST with the event payload
      .post(invocationPath)
      .send({ body: 'test' })
      .end((err, res) => {
        res.should.have.status(200)
        done(err)
      })
  })
})




The Execution Part 3: Main Course

When building Docker images I like to start with the bigger ones and then optimize away as I go towards a production version, so let's start with the development image.

It needs to at least:

  • Run the bundled function code with the aws-lambda-ric
  • Emulate the Lambda Runtime API locally with the aws-lambda-rie
  • Rebuild and restart on code changes (hot reloading)
  • Contain the Debian packages and CLI tools the function needs

So let's get started (Final file):

First things first, let's initialize the Docker image. For this I will use the node:18-bullseye image:



FROM node:18-bullseye
WORKDIR /usr/app



After that we have to install all the C++ dependencies needed to build the aws-lambda-ric for Node.js. This is a really slow step, so it's a good idea to keep it as high up in the Dockerfile as possible to maximize the reuse of cached layers.



RUN apt-get update && \
    apt-get install -y \
    g++ \
    make \
    cmake \
    unzip \
    libcurl4-openssl-dev \
    lsof



This is also a good place to install any dependencies you might need e.g.:



RUN apt-get install -y \
  inkscape \
  imagemagick



Copy only package.json and the other files needed to install dependencies, for the same layer-caching reason:



COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile



Now we are ready to install the aws-lambda-ric:



RUN yarn add aws-lambda-ric



After this we can copy the rest of the stuff over. Just make sure you have a .dockerignore to exclude node_modules and other unwanted files.



COPY . .


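To that end, a minimal .dockerignore might look like this (a sketch; adjust to your repo):


node_modules
dist
.git
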

To bundle and transpile the code we can use the following esbuild commands (package.json):



"scripts": {
  "build:dev": "esbuild function/index.ts --platform=node --bundle --target=node14 --outfile=dist/index.js",
  "build": "esbuild function/index.ts --platform=node --bundle --minify --target=node14 --outfile=dist/index.js"
}



Redoing the build on code changes is quite easy with Nodemon, but what's not easy is restarting the RIC and RIE. They are designed to run in production environments and are harder to kill than cockroaches. They also exit with code 2, which is no bueno for Nodemon.

To make all this a bit simpler I devised an entrypoint.dev.sh, a bash script that kills everything running on port 8080, rebuilds the code, and restarts the RIC and RIE processes.



#!/bin/bash

PID=$(lsof -t -i:8080)
if [ -z "$PID" ]
then
    echo "No PID found"
else
    echo "Killing PID $PID"
    kill $PID
fi

yarn build:dev && ./aws-lambda-rie yarn aws-lambda-ric dist/index.handler || exit 1



It also makes sure the script exits with code 1 instead of 2 when the runtime goes down, which keeps Nodemon happy. Now we can just create a nodemon.json to run this:



{
  "watch": ["function"],
  "ext": "ts,json",
  "exec": "./entrypoint.dev.sh"
}



and copy the necessary files into the image and make them executable:



COPY entrypoint.dev.sh ./
RUN chmod +x ./entrypoint.dev.sh

COPY aws-lambda-rie ./aws-lambda-rie
RUN chmod +x ./aws-lambda-rie



Toss in a default start command for good measure and we are almost ready to go:



CMD ["yarn", "nodemon"]



Now the only thing left is figuring out how to pass code changes from local files to the container. Luckily this is quite easy: we can just mount our local code folder as a volume in the container.

To make this process and running the container in general easier I made the following docker-compose.yml:



services:
  development:
    build:
      context: .
      dockerfile: Development.Dockerfile
    ports:
      - 9000:8080
    volumes:
      - ./function:/usr/app/function



So now if you just run:



docker compose -f "docker-compose.yml" up -d --build development



Followed by:



curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"body": "test"}'



You should get a hello from Lambda back! Go ahead and try editing and saving the code; you should see an updated response!
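The response is simply the handler's return value serialized by the emulator, so it should look roughly like:


{"statusCode":200,"body":"{\"message\":\"Hello from Lambda!\"}"}
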

The Execution Part 4: Testing

For testing we don't need Nodemon or hot reloading. Instead we have to run the testing library while the RIE & RIC combo is running, and then exit the container with the exit code of the test run. This way we can use the testing image in e.g. a CI pipeline quite handily.

To get started let's make a new entrypoint file entrypoint.test.sh:



#!/bin/bash

PID=$(lsof -t -i:8080)
if [ -z "$PID" ]
then
    echo "No PID found"
else
    echo "Killing PID $PID"
    kill $PID
fi

yarn build
nohup ./aws-lambda-rie yarn aws-lambda-ric dist/index.handler > /dev/null 2>&1 &
yarn mocha



As with the development version, we first kill everything on port 8080, then build the code using the production configuration (the difference from development being the --minify flag). After this we need to spin up the Lambda runtimes, forget about them, and move on. This can be achieved with nohup <command> &. I also pipe their output to /dev/null so it doesn't bother us.

As the last command you can run your testing library of choice any way you want. Since it is the last command in the script, the container exits with its exit code. For me this is the yarn mocha command in combination with this .mocharc.json:



{
  "extension": ["ts"],
  "spec": "tests/**/*.spec.ts",
  "require": "ts-node/register"
}



Don't forget to add these to the Development.Dockerfile above the CMD:



COPY entrypoint.test.sh ./
RUN chmod +x ./entrypoint.test.sh



To make things easier we can also add this to the docker-compose.yml:



tests:
  build:
    context: .
    dockerfile: Development.Dockerfile
  command: ./entrypoint.test.sh
  environment:
    - NODE_ENV=test



The important part here is how we override the default CMD with our own entrypoint script.
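In a CI pipeline you can then run the tests and propagate their exit code to the job with, for example:


docker compose -f "docker-compose.yml" up --build --exit-code-from tests tests


The --exit-code-from flag makes docker compose return the exit code of the tests container, so the pipeline fails exactly when the tests do.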

The Execution Part 5: Production

To make a production-ready image we just need slight modifications to our Development.Dockerfile, so go ahead and create a Dockerfile and put this in it:



FROM node:18-bullseye as build-image

WORKDIR /usr/app

# Install aws-lambda-ric cpp dependencies
RUN apt-get update && \
    apt-get install -y \
    g++ \
    make \
    cmake \
    unzip \
    libcurl4-openssl-dev \
    lsof


# Install dependencies
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
RUN yarn add aws-lambda-ric

# Copy the source code
COPY function ./function
COPY tsconfig.json ./

# Build the minified and bundled code to /dist
RUN yarn build

# Create the final image
FROM node:18-bullseye-slim

WORKDIR /usr/app

# Copy the built code from the build image
COPY --from=build-image /usr/app/dist ./dist
COPY --from=build-image /usr/app/package.json ./package.json

# Install any system dependencies you might need, e.g.
# RUN apt-get update && apt-get install -y \
#   inkscape \
#   imagemagick

# Install the aws-lambda-ric
COPY --from=build-image /usr/app/node_modules/aws-lambda-ric ./node_modules/aws-lambda-ric

# Run the aws-lambda-ric
ENTRYPOINT [ "node", "node_modules/aws-lambda-ric/bin/index.js" ]
CMD [ "dist/index.handler" ]



The process is almost identical, but we do not need the aws-lambda-rie, as the Runtime API is provided by AWS's servers. We also use a two-stage build with node:18-bullseye-slim as the final image. This allows us to leave node_modules and other unnecessary clutter behind in the first stage.

In my testing we go from a >1 GB development image to a <350 MB production image, which is quite the saving.

To publish and use this image in Lambda we can e.g. use an AWS CDK deployment pipeline, or push it to a container registry from a custom pipeline and point Lambda at it from there.
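As a rough sketch of the CDK route (assuming aws-cdk-lib v2; the stack and construct names here are made up for illustration):


import { Stack, StackProps } from 'aws-cdk-lib'
import { DockerImageCode, DockerImageFunction } from 'aws-cdk-lib/aws-lambda'
import { Construct } from 'constructs'

export class ConverterStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props)

    // Builds the production Dockerfile in this directory and
    // pushes the image to the CDK-managed ECR asset repository
    new DockerImageFunction(this, 'ConverterFunction', {
      code: DockerImageCode.fromImageAsset('.', { file: 'Dockerfile' }),
    })
  }
}
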

If you would like to see a tutorial on that please leave a like and comment 🙌🏻
