Building A Simple Serverless Application


Oguzhan Ozdemir

Posted on September 27, 2020


Backstory

Quite recently[1], I joined Thundra[2] as a Solutions Engineer. Although my title seems non-technical, Thundra and its customers are quite technical, and so am I. During this time, I'll also be responsible for all of Thundra's infrastructure needs. To do that, I must get used to the serverless world. Hence this post.

I haven't actively worked with serverless architectures or NodeJS so far. I know, I'm a bit late to the party. So, in my first week at Thundra, I started playing with all of that and built myself a simple serverless flow using NodeJS and AWS services.


Prerequisite

We need a couple of things as we build this flow.

  1. An AWS account, which you can open easily at aws.amazon.com.
  2. Install and configure the AWS CLI on your computer.
  3. Install NodeJS. Version 12.X will suffice.
  4. Install the Serverless framework. See serverless.com.

If all these pieces are installed and working fine on your computer, you are good to go.


The Application

Now, let's talk about what we are going to build. I didn't want my first serverless application to be difficult, but I also wanted to use AWS services other than AWS Lambda. So, I've decided to use SQS and S3 along with it.

AWS services we use in this post.

The application flow is also quite simple, and it goes something like this:

  1. Lambda #1 has a POST endpoint to take a payload.
  2. Lambda #1 then sends this payload to an SQS Queue.
  3. Whenever our queue receives a message, it then triggers Lambda #2.
  4. Once Lambda #2 is triggered, it prepares a document from the message and uploads it to an S3 Bucket.
  5. That's it. It's up to you what to do with all the documents in your bucket.

The application's flow on AWS.

As shown in the diagram above, it's not challenging. But that's ok. That's what I wanted.
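
To make the hand-offs concrete, here is roughly what the data looks like at each step. This is only a sketch: the field names follow the standard API Gateway proxy and SQS event formats that the handler code below relies on, and the IDs are made up.

// 1. The client POSTs a JSON payload. Lambda #1 receives it as `event.body` (a string).
const apiGatewayEvent = {
  body: '{"message":"Sent using curl!"}',
};

// 2. Lambda #1 forwards that string as the SQS message body. When the queue
//    triggers Lambda #2, the event carries a `Records` array that looks roughly like this.
const sqsEvent = {
  Records: [
    {
      messageId: '059f36b4-87a3-44ab-83d2-661975830a7d', // hypothetical ID
      body: '{"message":"Sent using curl!"}',
      attributes: { /* SentTimestamp, etc. */ },
    },
  ],
};

// 3. Lambda #2 parses the body and writes it to S3 as `<messageId>.json`.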


Coding

This is the fun part. As I mentioned in the prerequisites, we are going to use the Serverless framework to handle all the deployment and resources on AWS. Let's break our application into pieces.

  1. For Lambda #1, we need:
    1. A simple lambda function.
    2. An SQS queue.
    3. The necessary permissions for our function to push a message into our queue.
    4. The code.
  2. For Lambda #2, we need:
    1. Another lambda function.
    2. An S3 Bucket.
    3. Again, necessary permissions for our function to upload a document to our bucket.
    4. The code.

Lambda #1

First, we need to create a project using the Serverless framework. Let's run the following commands to do that.

$ mkdir sampleLambda
$ cd sampleLambda
$ serverless create --template aws-nodejs

This will give us the following files.

.
├── .gitignore
├── handler.js
└── serverless.yml

0 directories, 3 files

OK, that's good. But we should think a few steps ahead, so let's restructure the files like this.

.
├── .gitignore
├── api
│   └── sqsWrite.js
└── serverless.yml

1 directory, 3 files

What we did was create an api folder, move our handler.js file into it, and rename it to sqsWrite.js. At this point, I also highly suggest you use git, so just run git init and commit every now and then.

Now, it's time to update the serverless.yml file according to our needs. You'll see comments in each section of the yaml to give you an idea of what we are doing.

service: samplelambda
frameworkVersion: "2"

# Add some variables to use later on.
custom:
  stage: dev
  region: eu-west-1

# Let's use the variables above in the provider.
provider:
  name: aws
  runtime: nodejs12.x
  stage: ${self:custom.stage}
  region: ${self:custom.region}

  # Lambda #1 needs the `sqs:SendMessage` permission
  # to send data to our queue
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "sqs:SendMessage"
      Resource:
        Fn::GetAtt:
          - lambdaPayload         # This is the name I chose for our queue. See the resources section.
          - Arn

functions:
  # This is the sqsWrite function definition.
  sqsWrite:
    handler: api/sqsWrite.push    # We're going to name the function `push`.
    memorySize: 128
    description: Send the payload to the SQS Queue

    # We need the SQS URL in our code, so we set it as an environment variable.
    environment:
      SQS_URL:
        Ref: lambdaPayload

    # We said that we accept a POST request.
    events:
      - http:
          path: /sqs
          method: post

resources:
  Resources:
    # Here, we defined the SQS Queue.
    lambdaPayload:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: lambdaPayload


Before we apply all this, let's go to our sqsWrite.js file and update it so we can check that everything works properly. The code might not be the best; remember, I'm fairly new to NodeJS. However, it gets things done. It doesn't have the best error handling either, but let's move on for now.

'use strict';

const AWS = require('aws-sdk');

const sqsQueue = new AWS.SQS();
const sqsUrl = process.env['SQS_URL'];

module.exports.push = (event, context, callback) => {
  const params = {
    MessageBody: event.body,
    QueueUrl: sqsUrl,
  };

  sqsQueue.sendMessage(params, (err, data) => {
    if (err) {
      console.error(err);
      callback(new Error('Couldn\'t send the message to SQS.'));
      return;
    } else {
      console.log('Successfully sent the message to SQS.');

      callback(null, {
        statusCode: 200,
        body: JSON.stringify({
          message: 'Successfully sent the message to SQS.'
        })
      });
      return;
    }
  });
}

Let's apply all this with the following command.

# sls is short for serverless
$ sls deploy

This will take a little while, but in the end it should give us a URL similar to the following to trigger our lambda.

Service Information
service: samplelambda
stage: dev
region: eu-west-1
stack: samplelambda-dev
resources: 12
api keys:
  None
endpoints:
  POST - https://XXXXXXXXXX.execute-api.eu-west-1.amazonaws.com/dev/sqs
functions:
  sqsWrite: samplelambda-dev-sqsWrite
layers:
  None

Now, let's check things on the AWS Console. If we go to AWS Lambda and SQS respectively, we should see our resources created and ready for action.

Lambda Function.

SQS Queue.

And if we go into our lambda function by clicking on it, we should see that our permissions are all in place and our environment variable is set to our queue's URL.

It's time to test the function. You can use curl or Postman to send an HTTP request. Here's the request.

$ curl -L -X POST 'https://XXXXXXXXXX.execute-api.eu-west-1.amazonaws.com/dev/sqs' -H 'Content-Type: application/json' --data-raw '{
    "message": "Sent using curl!"
}'

You should get the following message as a response. If it's not the message you are getting, you might need to do some debugging.

{"message":"Successfully sent the message to SQS."}%

If so, hurray! You should see the number of messages go up in the AWS Console as well.

SQS Queue Message Number.
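
If you'd rather check the queue from code than from the console, here's a minimal sketch using the AWS SDK. It assumes your local AWS credentials are configured and that you replace the queue URL with the real one from your account; the file name is just a suggestion.

// checkQueue.js - an optional local helper, not part of the deployed stack.
'use strict';

const AWS = require('aws-sdk');

const sqs = new AWS.SQS({ region: 'eu-west-1' });

// Replace this with the actual queue URL from your AWS account.
const queueUrl = 'https://sqs.eu-west-1.amazonaws.com/XXXXXXXXXXXX/lambdaPayload';

sqs.getQueueAttributes(
  {
    QueueUrl: queueUrl,
    AttributeNames: ['ApproximateNumberOfMessages'],
  },
  (err, data) => {
    if (err) {
      console.error(err);
      return;
    }
    console.log('Messages waiting in the queue:', data.Attributes.ApproximateNumberOfMessages);
  }
);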


This is a good place to make a git commit :)


Lambda #2

OK, now it's time to start on the next function, which will be triggered automatically when the SQS queue receives a message and will upload the received message to S3 as a document.

Let's create a file called s3Upload.js in our api folder first. We'll fill it in right after we finish writing the new definitions in the serverless.yml file. With everything in it, the yaml file should look like this. I'll comment on the parts I added.

service: samplelambda
frameworkVersion: "2"

custom:
  stage: dev
  region: eu-west-1

  # We need a globally unique bucket name. You can name it anything you want;
  # it gets combined with the service name in the resources section below.
  s3Bucket: sample-lambda-sqs

provider:
  name: aws
  runtime: nodejs12.x
  stage: ${self:custom.stage}
  region: ${self:custom.region}

  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "sqs:SendMessage"
      Resource:
        Fn::GetAtt:
          - lambdaPayload
          - Arn

    # We need another permission, one that allows us to put objects into S3.
    - Effect: "Allow"
      Action:
        - "s3:Put*"
      Resource:
        Fn::Join:
          - ""
          - - "arn:aws:s3:::"
            - "Ref": "lambdaBucket"
            - "/*"

functions:
  sqsWrite:
    handler: api/sqsWrite.push
    memorySize: 128
    description: Send the payload to the SQS Queue
    environment:
      SQS_URL:
        Ref: lambdaPayload
    events:
      - http:
          path: /sqs
          method: post

  # This is the s3Upload function definition.
  s3Upload:
    handler: api/s3Upload.push
    memorySize: 128
    description: Upload the message to S3

    # Again, we need the S3 Bucket name in our code.
    environment:
      S3_BUCKET:
        Ref: lambdaBucket

    # This is the key part.
    # This event lets the function get triggered
    # by a new message in our queue.
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - lambdaPayload
              - Arn
          batchSize: 1
resources:
  Resources:
    lambdaPayload:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: lambdaPayload

    # Here, we defined the S3 Bucket.
    lambdaBucket:
      Type: AWS::S3::Bucket
      Properties:
        AccessControl: BucketOwnerFullControl
        BucketName: ${self:custom.s3Bucket}-${self:service}


Again, before we apply this, let's write the s3Upload function.

'use strict';

const AWS = require('aws-sdk');

const s3 = new AWS.S3();
const s3Bucket = process.env['S3_BUCKET'];

module.exports.push = (event, _, callback) => {
  const object = {
    MessageId: event.Records[0].messageId,
    Attributes: event.Records[0].attributes,
    Body: JSON.parse(event.Records[0].body),
  };

  const buffer = Buffer.from(JSON.stringify(object));

  const params = {
    Bucket: s3Bucket,
    Key: `${event.Records[0].messageId}.json`,
    Body: buffer,
    ContentType: 'application/json',
    ACL: 'public-read',
  };

  s3.putObject(params, function (err, _) {
    if (err) {
      console.log(err, err.stack);
      callback(new Error('Couldn\'t send the document to S3.'));
      return;
    } else {
      console.log('Successfully sent the document to S3.');

      callback(null, {
        statusCode: 200,
        body: JSON.stringify({
          message: 'Successfully sent the document to S3.'
        })
      });
      return;
    }
  });
}
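
As a side note, once everything is deployed you can also exercise this handler locally with a hand-built, SQS-style event, which makes debugging a bit easier than waiting on a real trigger. This is only a sketch: it assumes your local AWS credentials are configured, that the bucket already exists, and that the bucket name follows the naming pattern from serverless.yml; the file name and message ID are made up.

// localS3Upload.js - an optional local test, not part of the deployed stack.
// Run it from the project root, and only after `sls deploy`, so the bucket actually exists.
'use strict';

// Assumption: the bucket name is <custom.s3Bucket>-<service> from serverless.yml.
process.env.S3_BUCKET = 'sample-lambda-sqs-samplelambda';

const { push } = require('./api/s3Upload');

// A trimmed, hand-built SQS event containing only the fields the handler reads.
const fakeSqsEvent = {
  Records: [
    {
      messageId: 'local-test-0001', // hypothetical ID; it becomes the S3 object key
      attributes: { SentTimestamp: Date.now().toString() },
      body: JSON.stringify({ message: 'Hello from a local test!' }),
    },
  ],
};

push(fakeSqsEvent, null, (err, response) => {
  if (err) {
    console.error('Upload failed:', err);
  } else {
    console.log('Upload succeeded:', response);
  }
});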

OK, we are ready to apply this. Let's run sls deploy. Once it's done, we should see the second function and our S3 bucket on the AWS Console.

New Lambda Function.
S3 Bucket.

If we go into our new function's details, we'll see that the SQS trigger is there and ready.

Lambda 2 Design.

It looks like everything is ready to work all together, so let's test it.

$ curl -L -X POST 'https://XXXXXXXXXX.execute-api.eu-west-1.amazonaws.com/dev/sqs' -H 'Content-Type: application/json' --data-raw '{
    "message": "Really important message!"
}'

Once we get back a success message saying that our message was sent to SQS, we can check our bucket to see if the message is in there.

S3 Document.

And if we view the document, we'll see that the really important message and the few details we added in our code are all there.

S3 Document Detail.
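
And if you'd rather pull the documents back out of the bucket programmatically instead of browsing the console, a small sketch with the AWS SDK could look like this. Again, it assumes local AWS credentials and that the bucket name follows the naming pattern from serverless.yml; the file name is just a suggestion.

// readDocuments.js - an optional local helper to list and print the uploaded documents.
'use strict';

const AWS = require('aws-sdk');

const s3 = new AWS.S3({ region: 'eu-west-1' });

// Assumption: the bucket name is <custom.s3Bucket>-<service> from serverless.yml.
const bucket = 'sample-lambda-sqs-samplelambda';

s3.listObjectsV2({ Bucket: bucket }, (err, data) => {
  if (err) {
    console.error(err);
    return;
  }

  data.Contents.forEach((item) => {
    s3.getObject({ Bucket: bucket, Key: item.Key }, (getErr, object) => {
      if (getErr) {
        console.error(getErr);
        return;
      }
      console.log(item.Key, '=>', object.Body.toString('utf-8'));
    });
  });
});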

Et voilà!


This was a success in my opinion, and I do hope it was helpful. Of course, if you are experienced and already knew all this, I'd love to hear your suggestions.


Thundra Integration

For this part, I want to write another post. I mean, I've already done this part, but I'm still quite new to Thundra, so I don't yet have enough information or a scenario in mind to write a post about it. Plus, this post has already gotten too long.

However, if you want to do the integration part yourself and discover Thundra, I suggest you go to our website and play with it.


Let's wrap it up here. See you in another post!


  [1] Just this week. Really.

  [2] Thundra is an end-to-end observability and debugging service for your serverless architecture. See more at thundra.io.
