Container Image Support in AWS Lambda Deep Dive


Eoin Shanaghy

Posted on December 1, 2020


AWS today announced support for packaging Lambda functions as container images! This post takes a look under the hood of the new feature, based on my experience during the beta period.

Lambda functions started to look a bit more like container images when Lambda Layers and Custom Runtimes were announced in 2018, albeit with a very different developer experience. Today, the arrival of Container Image support for Lambda makes it possible to use actual Docker/OCI container images up to 10GB in size as the code and runtime for a Lambda function.

But what about Fargate?! Wasn’t that supposed to be the serverless container service in AWS? While it might seem a bit confusing, support for Image Functions in Lambda makes sense and brings huge benefits that were probably never going to happen in the world of Fargate, ECS and EKS. Container Image deployment to Lambda enables Lambda’s incredibly rapid and responsive scaling as well as Lambda’s integrations, error handling, destinations, DLQs, queueing, throttling and metrics.

Of course, Lambda functions are stateless and short-lived. That means that a lot of container workloads in their current form may still suit the Fargate/ECS/EKS camp better. Having personally spent too much time optimising Fargate task scheduling in the past, I will be glad to use Lambda for bursty batch processing workloads where the cost trade-offs work for the business. (We all want Lambda performance at Fargate Spot pricing!) Fargate will remain useful for more traditional, longer-lived workloads that don’t have a need to scale quickly to hundreds or thousands of containers.

Let’s take a look at the experience of building and deploying Lambda functions based on container images. In this post, we’ll cover development, deployment, versioning and some of the pros and cons of using image functions.

Development

Container images are typically designed to run either tasks or servers. Tasks usually take parameters in through the container’s CMD arguments and exit when complete. Servers will listen for requests and stay up until they are explicitly stopped.

With Lambda functions, neither of these models applies. Instead, functions deployed from container images operate like functions packaged as ZIPs: the execution environment stays warm between invocations and handles events one at a time. To support this, a runtime client fetches events from the Lambda service and passes them to the handler function. Since this isn’t something that Docker/OCI containers support natively, images need to include the Lambda Runtime Interface Client.

Images can be built with any tools that support the Open Container Initiative (OCI) Specification v1.0 or later, or Docker Image Manifest V2, Schema 2.

There are two options to pick from in order to build a container image for use with Lambda:

  1. Take an AWS Lambda base image and add your own layers for code, modules and data.
  2. Take an existing base image and add the AWS Lambda Runtime Interface Client.

Container Image packaging options for AWS Lambda

The AWS Lambda Runtime Interface Client is an open source native binary written in C++ with bindings for the supported runtimes (.NET Core, Go, Java, Node.js, Python and Ruby). Containers can use these flavours of the runtime client or implement the Lambda Runtime API to respond to and process events. This is the same API used in Custom Lambda Runtimes.
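To make this concrete, here is a minimal sketch of what a runtime interface client does, written in Python against the documented Runtime API endpoints. The trivial handler is a placeholder; the real clients add error reporting, context objects and much more.

```python
import json
import os
import urllib.request

# The Lambda service injects this endpoint into every execution environment
RUNTIME_API = os.environ["AWS_LAMBDA_RUNTIME_API"]
BASE = f"http://{RUNTIME_API}/2018-06-01/runtime/invocation"

def handler(event, context):
    # Placeholder handler for illustration
    return {"ok": True}

while True:
    # Long-poll the Runtime API for the next event
    with urllib.request.urlopen(f"{BASE}/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = json.loads(resp.read())
    result = handler(event, None)
    # Post the handler's result back for this request ID
    urllib.request.urlopen(urllib.request.Request(
        f"{BASE}/{request_id}/response",
        data=json.dumps(result).encode(),
        method="POST",
    ))
```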

Using the AWS-provided base images, the Dockerfile for building your image is relatively straightforward:

FROM public.ecr.aws/lambda/python:3.8

# Set up the task directory where Lambda expects the function code
RUN mkdir -p /var/task
WORKDIR /var/task

# Install dependencies first so this layer is cached across code-only changes
COPY app/requirements.txt /var/task
RUN pip install -r requirements.txt

COPY app/ /var/task/app/

# The handler is configured via CMD (exec form requires quoted strings)
CMD ["app/handler.handle_event"]

To see how this works in practice, you can take a look at our example based on an AWS-provided Node.js base image. It uses Firefox, FFmpeg and Xvfb to capture a video of a webpage loading process and is available on GitHub.

To use your own base image instead of an AWS-provided image, you will need to add the runtime interface client. This is available for Python (PyPI), Node.js (NPM), Ruby (Gem), Java (Maven), Go (GitHub) and .NET (NuGet).

An example of this can be found in our PyTorch-based machine learning example.
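For Python, that can be as simple as installing awslambdaric from PyPI and pointing the container’s entrypoint at it. A sketch, where the module and handler names are hypothetical:

```python
# app.py - a hypothetical handler module in an image built from your own base.
# With awslambdaric installed, the image starts the runtime client with:
#   ENTRYPOINT ["python", "-m", "awslambdaric", "app.handler"]
def handler(event, context):
    # Respond to the event passed in by the runtime interface client
    return {"statusCode": 200, "body": "hello from a custom base image"}
```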

Deployment

Functions deployed using container images must refer to a pre-existing repository and tag in ECR (other container registries are not yet supported). Deploying a function is therefore always a three-step process: create the ECR repository, push the tagged image, then create or update the function that references it. This isn’t much different from functions packaged as a ZIP, where code is typically uploaded to S3 and referenced when the function is created or updated. It will however require some thought when planning your deployment.

The three steps may be performed automatically by serverless packaging tools, but you may also wish to deploy the ECR repository and push the container images during separate build phases. In the latter case, there is more control but also more complexity, since the ordering is strict - you cannot deploy a function before a tagged image is in place in ECR. This is a consideration for organisations who want to leverage existing container image build and deployment pipelines and handle them separately from infrastructure deployment.

Lambda function code and resources are deployed in three stages
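To make the ordering concrete, here is a rough sketch of the three steps using boto3 and the Docker CLI. The repository, image, function and role names are all placeholders:

```python
import base64
import subprocess
import boto3

ecr = boto3.client("ecr")
lambda_client = boto3.client("lambda")

# 1. Make sure the ECR repository exists
try:
    repo = ecr.create_repository(repositoryName="my-repo")["repository"]
except ecr.exceptions.RepositoryAlreadyExistsException:
    repo = ecr.describe_repositories(repositoryNames=["my-repo"])["repositories"][0]
image_uri = f"{repo['repositoryUri']}:1.0.0"

# 2. Log Docker in to ECR, then tag and push the image
auth = ecr.get_authorization_token()["authorizationData"][0]
user, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
subprocess.run(["docker", "login", "-u", user, "-p", password, auth["proxyEndpoint"]], check=True)
subprocess.run(["docker", "tag", "my-image:1.0.0", image_uri], check=True)
subprocess.run(["docker", "push", image_uri], check=True)

# 3. Only now can the function be created from the pushed tag
lambda_client.create_function(
    FunctionName="my-function",
    PackageType="Image",
    Code={"ImageUri": image_uri},
    Role="arn:aws:iam::123456789012:role/my-lambda-role",  # placeholder
)
```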

It is important to note that the image tag is resolved to the image digest at deployment time, so changes to a tag after deployment have no effect on the deployed function.
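You can see the digest a function actually resolved to with get_function; a sketch, assuming a function named my-function already exists:

```python
import boto3

code = boto3.client("lambda").get_function(FunctionName="my-function")["Code"]
print(code.get("ImageUri"))          # the tag the function was deployed with
print(code.get("ResolvedImageUri"))  # the same repository pinned by @sha256:... digest
```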

When it comes to the AWS SDK, CloudFormation and the CLI, the differences between image-packaged and ZIP-packaged functions are small.

With boto3:

lambda_client.create_function(
    FunctionName=name,
    PackageType='Image',
    Code={'ImageUri': ecr_repo_tag},
    # ... plus the usual parameters, such as Role
)

Note that you do not have to specify the handler when creating functions packaged as container images, since this can be configured in the image, most likely using the CMD configuration. The ENTRYPOINT, CMD and working directory can also be overridden when the function is created or updated.
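Those overrides map to the ImageConfig parameter in the API; a sketch with boto3, where the function name and values are placeholders:

```python
import boto3

boto3.client("lambda").update_function_configuration(
    FunctionName="my-function",
    ImageConfig={
        "EntryPoint": ["/lambda-entrypoint.sh"],  # overrides the image ENTRYPOINT
        "Command": ["app/handler.handle_event"],  # overrides the image CMD
        "WorkingDirectory": "/var/task",          # overrides the image WORKDIR
    },
)
```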

With the Node.js AWS SDK:

// Function code and configuration are updated with separate calls;
// updateFunctionCode takes the new image URI directly
await lambda.updateFunctionCode({
  FunctionName: functionName,
  ImageUri: ecrRepoUri
}).promise();

// The handler comes from the image's CMD, overridable via ImageConfig
await lambda.updateFunctionConfiguration({
  FunctionName: functionName,
  ImageConfig: {Command: ['index.handleEvent']}
}).promise();

At this time, once you create a function, it’s not possible to migrate to a different package type. This is set to change, so you will soon be able to port existing ZIP-packaged functions to container images.

Lambda Function configuration contains many properties. The code configuration references a ZIP inline or on S3 or a container image defined by a tagged ECR repository

Once a function has been deployed, it may not be available for invocation just yet! When the cold-start behaviour of Lambdas in VPCs was improved last year, you might recall that functions entered a Pending state while the VPC resources were created. You can check the status of a function and wait for it to enter the Active state.

A state machine for Lambda Functions as they are deployed, become inactive and fail

These states also apply to Lambdas using container images. Functions stay in the Pending state for a few seconds while the container image is “optimised” and cached for AWS Lambda.
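If you are scripting deployments, you can block until the function becomes Active; a minimal sketch with boto3’s built-in waiter, assuming a function named my-function:

```python
import boto3

lambda_client = boto3.client("lambda")

# Blocks until the function's State leaves Pending and becomes Active
lambda_client.get_waiter("function_active").wait(FunctionName="my-function")

state = lambda_client.get_function_configuration(FunctionName="my-function")["State"]
print(state)  # "Active"
```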

Local Development and Testing

When it comes to testing in development, I have not yet found a better experience than that provided by Docker tooling. Once you build a container image, you have an immutable artifact that you can run in development, test and production environments. You gain confidence that the runtime is consistent across all environments. When you need to make changes, you can iterate quickly, only modifying the layers that change. I would love to have similar speed and confidence in the development workflow for Lambda functions packaged as ZIPs but that has yet to materialise.

Local function testing is enabled through the AWS Lambda Runtime Interface Emulator (RIE). The emulator is included in the AWS-provided base images. To test locally, you can just run the container:

docker run -p 9000:8080 your_image

Your function can then be triggered by posting an event using an HTTP request:

curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d @test-events/event.json
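The same invocation from Python, as a small helper you might use in a test suite (assuming the container above is listening on port 9000; the sample event is made up):

```python
import json
import urllib.request

def invoke_local(event: dict) -> dict:
    # POST an event to the Runtime Interface Emulator's invocation endpoint
    req = urllib.request.Request(
        "http://localhost:9000/2015-03-31/functions/function/invocations",
        data=json.dumps(event).encode(),
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

print(invoke_local({"detail": {"greeting": "hello"}}))
```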

I found this development and testing workflow to be simple, efficient and easy to understand.

Versioning

Within Lambda, versioning support remains the same. The latest code for a function is always addressable as $LATEST. When you publish a version, it is assigned a single numeric version number that automatically increments. Developers can also create named aliases that point to specific versions. Aliases or version numbers can form part of a fully-qualified ARN to invoke specific versions as part of a deployment strategy.
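A quick sketch of that flow with boto3, where the function and alias names are placeholders:

```python
import boto3

lambda_client = boto3.client("lambda")

# Publishing creates an immutable, auto-incrementing numbered version
version = lambda_client.publish_version(FunctionName="my-function")["Version"]

# Aliases are named pointers to specific versions
lambda_client.create_alias(
    FunctionName="my-function", Name="live", FunctionVersion=version
)

# Invoke a specific version via the qualified function name
lambda_client.invoke(FunctionName="my-function:live", Payload=b"{}")
```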

A common complaint with this versioning system is that it is incompatible with the semantic versioning scheme in wide use today. With container images, we can at least apply semantic version tags to images and use those tags to point to the function’s code in Lambda. Again, bear in mind that if the tag is moved to point to a different image digest, the Lambda function version will still point to the digest referenced at deployment time.

Lambda function versions reference images by the digest resolved at deployment time

Creating a Lambda with the Serverless Framework, AWS SAM or other tools sometimes makes it feel like the “infrastructure” (resources) and code are deployed as a single unit. In reality, deployment of the code and the resources are separate. Using container images with version tags will allow developers experienced with container deployment to employ a familiar versioning scheme.

Layers vs. Layers

Let’s take a look at how container image layers differ from AWS Lambda layers. Lambda functions packaged as ZIPs can have up to five layers. The layers themselves are explicitly defined and packaged in a similar way to function code. When layers were introduced, they enabled teams to share pre-packaged libraries and modules or, more rarely, custom runtimes.

Container image layers are very different. They are more implicitly defined and you can have as many as you need (up to 127, it appears). Image layers are created as part of the image build and do not need to be individually deployed.

| Lambda Layers | Container Image Layers |
| --- | --- |
| Limited to 5 | Up to 127 |
| Explicitly defined | Implicitly defined as part of the image build |
| Single number versioning (though packaging as a SAR application allows semantic versioning) | One or more layers can be tagged as an image using any versioning scheme |
| Deployed as a Lambda Layer resource | Pushed automatically to the image registry (e.g., ECR) when an image is pushed |

The simple yet powerful relationship between a Dockerfile and the layers is one of the benefits that made Docker and containers successful in the early days. It only takes a single line to add a new layer and the layer is automatically rebuilt only if that line or any of the previous layers change. Layer caching can make the development feedback loop super fast.

Runtimes

ZIP-packaged Lambdas use AWS-provided runtimes by specifying one of the supported Node.js, Go, Java, Python, Ruby or .NET versions in the Runtime property. Custom runtimes, packaged as layers, are also possible by specifying the Runtime property value “provided”.
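For contrast, a sketch of declaring a ZIP-packaged custom runtime function with boto3 (the bucket, key and role are placeholders):

```python
import boto3

boto3.client("lambda").create_function(
    FunctionName="my-custom-runtime-fn",
    Runtime="provided",  # the bootstrap ships in the ZIP or in a layer
    Handler="handler",
    Code={"S3Bucket": "my-bucket", "S3Key": "function.zip"},
    Role="arn:aws:iam::123456789012:role/my-lambda-role",  # placeholder
)
```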

For Container Image Lambdas, the runtime is always essentially provided by the user. AWS does however provide container base images with runtimes for Java, Python, Node.js, .NET and Go. In addition, Amazon Linux base images for custom runtimes are available.

To add Lambda support to existing container images, developers are required to include the Lambda Runtime Interface Client for the language of choice (Java, Python, Node.js, .NET, Go and Ruby). The runtime interface clients are open source implementations of the AWS Lambda Runtime API. Lambda functions of all types use this API to get events and provide results. The Runtime API and AWS Lambda execution environment are nicely documented and worth reading to understand the context in which your function is invoked.

The Runtime Interface Client talks to the Runtime API to pass events and responses to and from the handler

More Heavy Lifting

You might notice that using container images gives you more control over the execution environment for a Lambda function. While there are clear benefits, something smells a bit unserverless about this! It is always worth choosing the simplest option, the one that hands control and responsibility for maintenance and patches to the cloud provider.

When you deploy a Lambda using a container image, you define the full code stack including OS, standard libraries, dependencies, runtime and application code. Even if you use an AWS-provided base image, you need a process to update the full image when that base image is patched. Make no mistake, this is extra heavy lifting that you should strive to avoid if possible.

The premise of Lambda and serverless computing in general is to let you focus on the minimal amount of code needed to deploy features that are unique to you. The responsibility of managing and maintaining all these base layers is not something that comes for free. Container Image support may be a bridge to Lambda for many applications but it doesn’t mean it’s the final destination. All applications should aim to eliminate any of this maintenance burden over time. That means creating small, single-purpose Lambda functions using a supported runtime or, better still, looking for ways to eliminate that function altogether!

Issues Encountered

There were only a few problems we encountered over the past few weeks of working with Container Image support.

Firstly, we noticed that Billed Duration was calculated differently from ZIP-packaged functions: the billed duration reported in each REPORT log appeared to be the sum of Init Duration and Duration.

Here is a log example for a ZIP-packaged function, where the billed duration was 300ms, even though the init duration was over 750ms.

REPORT RequestId: 5aa36dcc-db7b-4ce6-9132-eae75a97466f 
Duration: 292.24 ms Billed Duration: 300 ms
...
Init Duration: 758.94 ms

For one of our image-packaged functions, we were billed for 5200ms: the sum of the duration (502.81ms) and the init duration (4638.39ms) comes to 5141.20ms, which rounds up to the next 100ms:

REPORT RequestId: 679c6323-7dff-434d-9b63-d9bdb054a4ba
Duration: 502.81 ms Billed Duration: 5200 ms
...
Init Duration: 4638.39 ms

I spoke to AWS and they clarified that this billing behaviour is because we are using a custom runtime, not because we are using a function packaged as an image. This is the same behaviour as with custom runtimes packaged as a ZIP.

The second issue we encountered was with our machine learning example. We ran into a problem with PyTorch Dataset loaders, which use Python multiprocessing Queues (and thus /dev/shm) to allow parallel data fetching during model execution. Lambda does not provide /dev/shm. This is a known issue with all types of Lambda functions (see this article from AWS and StackOverflow here).

We had to work around it by setting the loader to use the main process rather than separate worker processes (see the sketch after the traceback below). With Lambda’s remit expanding to handle larger modelling workloads, particularly with multiple vCPUs, issues like this are going to become more prevalent. The traceback is included here in case it helps anyone who’s searching for this problem.

[ERROR] OSError: [Errno 38] Function not implemented
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/aws_lambda_powertools/logging/logger.py", line 247, in decorate
    return lambda_handler(event, context)
  File "/var/task/handler.py", line 12, in handle_event
    result = run_test(jobs)
  File "/src/aws_test_densenet.py", line 89, in run_test
    for data in dataloaders[split_name]:
  File "/usr/local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 279, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "/usr/local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 684, in __init__
    self._worker_result_queue = multiprocessing_context.Queue()
  File "/usr/local/lib/python3.8/multiprocessing/context.py", line 103, in Queue
    return Queue(maxsize, ctx=self.get_context())
  File "/usr/local/lib/python3.8/multiprocessing/queues.py", line 42, in __init__
    self._rlock = ctx.Lock()
  File "/usr/local/lib/python3.8/multiprocessing/context.py", line 68, in Lock
    return Lock(ctx=self.get_context())
  File "/usr/local/lib/python3.8/multiprocessing/synchronize.py", line 162, in __init__
    SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx)
  File "/usr/local/lib/python3.8/multiprocessing/synchronize.py", line 57, in __init__
    sl = self._semlock = _multiprocessing.SemLock(
[ERROR] OSError: [Errno 38] Function not implemented
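The workaround itself is small; a sketch that forces the DataLoader to fetch batches in the main process, so multiprocessing (and therefore /dev/shm) is never touched. The dataset here is placeholder data:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.zeros(64, 3, 224, 224))  # placeholder data

# num_workers=0 loads batches in the main process: no worker processes,
# no multiprocessing queues, and therefore no /dev/shm usage
loader = DataLoader(dataset, batch_size=16, num_workers=0)

for (batch,) in loader:
    pass  # run inference on each batch here
```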

Conclusion

If you are comfortable with container tooling and deployment, container image support in AWS Lambda will be a big win. If, on the other hand, you are more familiar with ZIP-packaged Lambdas and see no need for container tooling, there is no change required. This feature brings options for new use cases and new types of users with different concerns and perspectives.

It feels like a lot of thought has gone into providing support for container images in a way that doesn’t disrupt the AWS Lambda experience for existing developers. There’s not a lot to learn if you are already familiar with containers and Lambda as separate topics. The open source Runtime Interface Client and Runtime Interface Emulator are really welcome additions, as they let you get to grips with what’s going on under the hood. Even for a managed service, this kind of context can be really valuable when unexpected problems arise.

If you haven’t already, check out our high-level overview of Container Image support for AWS Lambda here.
