Continuous Engineering with the Serverless Framework
Eduardo Romero
Posted on October 5, 2017
We are working with microservices, and I’m enjoying it. Our implementation is built on AWS Lambda serverless compute. The technology launched in 2014, and even now in 2017 there are still no well-established engineering best practices for it.
We always follow best practices when we develop a product. In this post, I describe how we apply our workflow to the Function-as-a-Service paradigm.
Workflow
We use feature branches. Each story gets its own branch. When the code is ready, we create a Pull Request (PR) to have the changes merged into develop.
The team peer-reviews the code and we suggest changes when needed. We require at least two approvals before merging changes.
Before changes get merged, CI/CD runs. It checks that we stick to our coding standard and runs our tests. If everything passes, you can merge your branch.
Once a branch is merged into develop, it is automatically deployed to our development environment on AWS.
Code Structure
Working with Functions as a Service (FaaS) means writing simpler, easier-to-understand modules.
Each microservice has its own repository. Inside it, we structure the code around the subdomain it belongs to, clustering all related functions of a service together in the same folder.
We are using the Serverless Framework to manage, build, and deploy our FaaS microservices.
We create one service per subdomain, each with its own serverless.yml config file, the external packages it needs (package.json), and several JavaScript files that implement its functionality.
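A sketch of what such a repository might look like, assuming two subdomains (the folder and file names are illustrative):

```
services/
├── orders/
│   ├── serverless.yml   # service config and event definitions
│   ├── package.json     # external packages the service needs
│   ├── create.js
│   └── notify.js
└── users/
    ├── serverless.yml
    ├── package.json
    └── signup.js
```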
Events Definitions
Function as a Service is an event-driven paradigm: functions get spawned in response to events.
The event definitions go in each service’s serverless.yml config file. An event can be an HTTP request through API Gateway, a file uploaded to S3, a record updated in Aurora DB, a document inserted in DynamoDB, and so on.
We are currently using only HTTP, Kinesis stream, and SNS events for our project. The framework supports many more event sources.
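A sketch of how those three event types are wired up in a serverless.yml, using the framework’s event syntax (service, function, and resource names here are hypothetical):

```yaml
service: orders

provider:
  name: aws
  runtime: nodejs6.10

functions:
  create:
    handler: orders/create.handler
    events:
      - http:                       # HTTP request through API Gateway
          path: orders
          method: post
  process:
    handler: orders/process.handler
    events:
      - stream:                     # records from a Kinesis stream
          type: kinesis
          arn: ${env:ORDERS_STREAM_ARN}
  notify:
    handler: orders/notify.handler
    events:
      - sns: order-events           # messages published to an SNS topic
```

Each entry under `events` tells the framework what to provision in AWS and which triggers to attach to the function.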
Environment Variables
We use environment variables as much as we can: ARNs, endpoints, AWS credentials, the AWS region, etc. Serverless lets you define resources from environment variables, and it will let you know if any of the variables are not defined in your environment before it tries to deploy your Lambdas.
The team shares environment variables with the help of Torus. Our CI/CD tool reads these environment variables to deploy our lambdas automatically when we approve a PR.
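A sketch of how environment variables flow into a service via the framework’s `${env:...}` variable syntax (the variable names are illustrative):

```yaml
provider:
  name: aws
  region: ${env:AWS_REGION}            # read from the deploying shell / CI
  environment:
    ORDERS_TABLE: ${env:ORDERS_TABLE}  # exposed to every function at runtime

functions:
  notify:
    handler: orders/notify.handler
    environment:
      TOPIC_ARN: ${env:ORDERS_TOPIC_ARN}  # per-function value
```

If any referenced variable is missing from the environment, the deploy fails before anything is pushed to AWS.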
Release / Deploy
We use Continuous Integration and Deployment. Code linting and tests run before a PR can be merged, and deployments to dev and staging are automated.
Our CI/CD pipeline runs a basic Node.js Docker image, finds all our services, goes into each folder, runs yarn to install all dependencies, and deploys to AWS. Nothing particularly sophisticated.
A few bash commands get run, et voilà! A fully functional set of Lambdas is up and running.

If everything goes well, we get a message on Slack confirming that everything’s been taken care of 🎉. All services are up to date.
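The pipeline loop described above can be sketched in bash. The folder names are made up, and `echo` stands in for the real yarn/serverless commands so the script can be dry-run outside CI:

```shell
# Hypothetical sketch of the CI deploy step: for every folder that holds
# a serverless.yml, install dependencies and deploy that service.
deploy_all() {
  find "$1" -name serverless.yml | while read -r config; do
    dir=$(dirname "$config")
    # In CI this would be: (cd "$dir" && yarn && serverless deploy --stage dev)
    echo "deploying $dir"
  done
}

# Demo fixture: two fake services, so the loop has something to find.
mkdir -p /tmp/ci-demo/orders /tmp/ci-demo/users
touch /tmp/ci-demo/orders/serverless.yml /tmp/ci-demo/users/serverless.yml
deploy_all /tmp/ci-demo
```

Because each service carries its own serverless.yml, the pipeline never needs a central manifest; adding a service is just adding a folder.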
At this point we are ready to work on the next feature and continue the development cycle.
More resources 📚
- The Serverless Framework, with many examples.
- Micro, async HTTP Node microservices. A great first approach to simple self-hosted functions for building services.
- Apex, another framework for managing FaaS. It has runtimes for Go, Clojure, and Rust, so you can write functions in languages that are not supported by AWS Lambda.
- Webtasks, FaaS by the Auth0 team. It integrates really well with Slack and GitHub, and has cron-like support for scheduled tasks.
- stdlib, a slightly different approach: they strive to become a standard library for FaaS. It has a Service Directory, and functions can be invoked from regular Node.js, Python, or Ruby projects.