Building a serverless API with email notifications in AWS with Terraform - Part 1


Andre Lopes

Posted on January 9, 2024


Cover Image by Fotis Fotopoulos on Unsplash

Hi earthlings!

In this journey, we'll build a system using Serverless architecture, Terraform, and GitHub Actions. Buckle up as we embark on an adventure.

Our project is a serverless architecture featuring API Gateway, Lambdas, DynamoDB, SNS, and SQS. We'll use Terraform as our infrastructure-as-code tool and GitHub Actions for continuous integration and deployment, so a simple push to the main branch deploys both our infrastructure and our lambda apps.

We'll organize our HTTP methods, configure the AWS provider in Terraform, and even set up an S3 bucket for storing our precious Terraform state.

Let us begin!

Requirements

  • An AWS account
  • Any Code Editor of your choice - I use Visual Studio Code
  • NodeJS
  • GitHub account - We'll be using GitHub Actions to deploy our Terraform code

Regarding AWS Costs

Everything we'll use here is free or very low cost and won't incur charges unless your usage is very high. If you are worried about unknown charges, you can set up a $0.01 budget that alerts you as soon as anything is billed.
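If you'd like to manage that guard rail as code too, here is a minimal sketch using Terraform's aws_budgets_budget resource (the email address is a placeholder to replace with your own):

resource "aws_budgets_budget" "zero_spend_alert" {
  name         = "zero-spend-alert"
  budget_type  = "COST"
  limit_amount = "0.01"
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  # Email me when actual spend reaches 100% of the $0.01 budget
  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 100
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = ["you@example.com"] # placeholder
  }
}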

The Project

Here, we are going to be building a whole serverless architecture:

  • API Gateway: This is where the endpoints will be mapped and exposed.
  • Lambdas: They will handle API Gateway events and the SQS events.
  • DynamoDB: It will be our database.
  • SQS: Our message queue, where the email notification lambda will be notified whenever a movie is created, deleted, or updated.
  • SNS: Notification service that sends events to SQS in a fan-out pattern.
  • SES: AWS Simple Email Service, which we'll use to manage and send emails from AWS.

Project Serverless Architecture

We'll also be using:

  • Terraform: Our infrastructure-as-code tool, which will create and manage our whole AWS infrastructure.
  • GitHub Actions: Our CI/CD, which will build and deploy our infrastructure and our lambdas.

In part 1, we'll implement the following components of our project:

First part of the project

Why Serverless?

Serverless computing is a cloud computing model where you don't have to provision or manage servers. Instead, the cloud provider automatically manages the infrastructure, allowing developers to focus solely on writing code and deploying applications.

The term "serverless" doesn't mean no servers are involved. It means you don't have to worry about the underlying server infrastructure.

Some of the benefits of serverless are:

  • Cost Savings: You only pay for the computing resources your code consumes.
  • Scalability: Serverless platforms automatically scale your applications based on demand without manual intervention.
  • Zero Idle Capacity: Resources are only allocated when needed, so you won't pay for provisioned resources sitting idle.

Let's Start

Let's begin our project. We are adding our first lambda to get a movie by its ID.

Lambda Module

Create a folder for your project, and inside it, create a folder named iac. This is where we'll be adding all our infrastructure as code. Now create a new folder inside it named modules. Here, we'll be adding our reusable Terraform modules. Then add a folder named lambda for our Lambda function module.

Inside the lambda folder, create three files: main.tf, datasources.tf, and variables.tf.

main.tf:

resource "aws_iam_role" "iam_for_lambda" {
  name               = "${var.name}-lambda-role"
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
}

resource "aws_lambda_function" "lambda" {
  filename      = data.archive_file.lambda.output_path
  function_name = var.name
  role          = aws_iam_role.iam_for_lambda.arn
  handler       = var.handler
  runtime       = var.runtime
}

Here we declare a role, which all lambda functions need, and the function itself in aws_lambda_function.

Note the keywords data and var. The first is for data from the data sources, and the second is for anything passed to the module through variables.
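For example, when we later instantiate this module from our root configuration (we'll write this file in a moment), the caller supplies the variables, while the data sources are resolved by Terraform itself:

module "get_movie_lambda" {
  source  = "./modules/lambda"
  name    = "get-movie"    # available inside the module as var.name
  runtime = "nodejs20.x"   # used by datasources.tf to pick the seed file
  handler = "index.handler"
}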

Now for the variables.tf:

variable "name" {
  description = "The name of the Lambda function"
  type        = string
  nullable    = false
}

variable "handler" {
  description = "The handler function in your code for he Lambda function"
  type        = string
  default     = "index.handler"
}

And for the datasources.tf:

locals {
  filename = strcontains(var.runtime, "node") ? "index.mjs" : "main"
}

data "archive_file" "lambda" {
  type        = "zip"
  source_file = "./modules/lambda/init_code/${local.filename}"
  output_path = "${var.name}_lambda_function_payload.zip"
}

data "aws_iam_policy_document" "assume_role" {

  statement {
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }

    actions = ["sts:AssumeRole"]

  }
}

Here, we define the IAM policy for the lambda role and the file that will be added to the lambda.

This seed file is required for Terraform to create the function, even though the actual code will be deployed through a separate flow, as we'll do later with GitHub Actions.

This is also seen in the filename local variable, where we pick the seed file based on the lambda runtime. We'll add the seed code itself shortly.

We also want our lambda to be able to write logs to CloudWatch, so add the following logging policy document to datasources.tf:

data "aws_iam_policy_document" "lambda_logging" {
  statement {
    effect = "Allow"

    actions = [
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents",
    ]

    resources = ["arn:aws:logs:*:*:*"]
  }
}

And then, in the main.tf, add the policy attachment:

resource "aws_iam_policy" "lambda_logging" {
  name        = "lambda_logging_${aws_lambda_function.lambda.function_name}"
  path        = "/"
  description = "IAM policy for logging from a lambda"
  policy      = data.aws_iam_policy_document.lambda_logging.json
}

resource "aws_iam_role_policy_attachment" "lambda_logs" {
  role       = aws_iam_role.iam_for_lambda.name
  policy_arn = aws_iam_policy.lambda_logging.arn
}
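Optionally, you can also manage the function's log group in Terraform to control log retention. A small sketch (the 14-day retention period is an arbitrary choice you can adjust):

resource "aws_cloudwatch_log_group" "lambda" {
  # Lambda writes logs to /aws/lambda/<function name> by default
  name              = "/aws/lambda/${var.name}"
  retention_in_days = 14
}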

Create a folder named init_code under the lambda module folder.

For the Node.js seed code, you can create a new file index.mjs and add the following code:

// Default handler generated in AWS
export const handler = async (event) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify('Hello from Lambda!'),
  };
  return response;
};

Note that it needs to be an .mjs file because we are not adding a package.json file to define the module type. The .mjs extension tells Node.js to treat the code as an ECMAScript module.

Adding the main infra code

In the iac folder, create a lambdas.tf file with the following code:

module "get_movie_lambda" {
  source        = "./modules/lambda"
  name          = "get-movie"
  runtime       = "nodejs20.x"
  handler       = "index.handler"
}

We also need to configure Terraform to use AWS as its provider. Create a provider.tf file with the following code:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = var.region
}

Now, create two files. First, variables.tf, to declare the default variables of our IaC:

variable "region" {
  description = "Default region of your resources"
  type        = string
  default     = "eu-central-1"
}

And second, variables.tfvars, to pass variable values that are not secret but that we might want to change depending on the deployment configuration:

region="eu-central-1" // Chage here to your region here

If you'd like Terraform to keep track of changes and update resources incrementally, you need to tell it where to save and manage its state. Here, we'll be using an S3 bucket for that.

Create an S3 bucket with the name terraform-medium-api-notification (bucket names are globally unique, so you may need to pick your own; you can create it in the console or with the AWS CLI, for example aws s3 mb s3://your-bucket-name) and modify the provider.tf file with the following code:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket = "terraform-medium-api-notification"
    key    = "state"
    region = "eu-central-1" // Chage here to your region here
  }
}

# Configure the AWS Provider
provider "aws" {
  region = var.region
}

Note that you can choose the region nearest to you instead of eu-central-1; I chose it because it is the closest to me. You could also provide the provider's region through the AWS_DEFAULT_REGION or AWS_REGION environment variables. The backend block, however, cannot reference Terraform variables, which is why its region is hardcoded (you can also supply it at init time with -backend-config).

We are ready to build the workflow to deploy our infrastructure to AWS.

Deploying the infrastructure

To deploy our infrastructure, we'll be using GitHub Actions.
GitHub's CI solution lets us run scripts against our code whenever it changes. If you'd like to know more about it, check the documentation here.

To perform this step, you'll need to generate an AWS Access Key and Secret for a user that has the rights to create the resources you define in AWS.

Add them as AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY under your repository's Actions secrets in Settings:

GitHub settings

Now, in the root folder, create a .github folder and a workflows folder inside of it. Then create a file named deploy-infrastructure.yml and add the following code:

name: Deploy Infrastructure
on:
  push:
    branches:
      - main
    paths:
      - iac/**/*
      - .github/workflows/deploy-infrastructure.yml

defaults:
  run:
    working-directory: iac/

jobs:
  terraform:
    name: "Terraform"
    runs-on: ubuntu-latest
    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v3

      - name: Configure AWS Credentials Action For GitHub Actions
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-central-1 # Use your preferred region

      # Install the latest version of the Terraform CLI
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      # Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
      - name: Terraform Init
        run: terraform init

      # Checks that all Terraform configuration files adhere to a canonical format
      - name: Terraform Format
        run: terraform fmt -check

      # Generates an execution plan for Terraform
      - name: Terraform Plan
        run: terraform plan -out=plan -input=false -var-file="variables.tfvars"

        # On push to "main", build or change infrastructure according to Terraform configuration files
      - name: Terraform Apply
        run: terraform apply -auto-approve -input=false plan

Every push that changes a file inside the iac folder will trigger this action, which automatically creates the resources in AWS for you:

Deploy Infrastructure workflow run

Note that in the Terraform Plan step, Terraform outputs all the changes it will make in AWS based on the current state stored in the S3 bucket.

Now you can go to AWS on the Lambda page and see your recently created function:

AWS Lambda page with get-movie lambda

To test it, click on the function and then on the Test tab.

get-movie lambda details

There, you can give a name to the test event that will be sent to the Lambda and then click on the Test button.

Lambda Test tab

You should see a success notification with the return from the lambda:

get-movie lambda test result

Adding GET endpoint

Now, let's add a GET endpoint through API Gateway so we can call our lambda through HTTP requests.

Let's first create a module for our HTTP methods. Under the folder modules, create a folder rest-api-method. Then, create three files: main.tf, variables.tf, and outputs.tf.

For the variables.tf, add the following code:

variable "http_method" {
  description = "The HTTP method"
  type        = string
}

variable "resource_id" {
  description = "The ID of the resource this method is attached to"
  type        = string
}

variable "api_id" {
  description = "The ID of the API this method is attached to"
  type        = string
}

variable "integration_uri" {
  description = "The URI of the integration this method will call"
  type        = string
}

variable "resource_path" {
  description = "The path of the resource"
  type        = string
}

variable "lambda_function_name" {
  description = "The name of the Lambda function that will be called"
  type        = string
}

variable "region" {
  description = "The region of the REST API resources"
  type        = string
}

variable "account_id" {
  description = "The ID of the AWS account"
  type        = string
}

For the outputs.tf:

output "id" {
  value = aws_api_gateway_method.method.id
}

output "integration_id" {
  value = aws_api_gateway_integration.integration.id
}

Now for the main.tf:

resource "aws_api_gateway_method" "method" {
  authorization = "NONE"
  http_method   = var.http_method
  resource_id   = var.resource_id
  rest_api_id   = var.api_id
}

resource "aws_api_gateway_integration" "integration" {
  http_method             = aws_api_gateway_method.method.http_method
  integration_http_method = "POST" # Lambda functions can only be invoked via POST
  resource_id             = var.resource_id
  rest_api_id             = var.api_id
  type                    = "AWS_PROXY"
  uri                     = var.integration_uri
}

resource "aws_lambda_permission" "apigw_lambda" {
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = var.lambda_function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "arn:aws:execute-api:${var.region}:${var.account_id}:${var.api_id}/*/${aws_api_gateway_method.method.http_method}${var.resource_path}"
}

This will generate an HTTP method attached to your API using lambda proxy integration. We want our Lambda to be responsible for the HTTP behavior of the request and response, so API Gateway simply passes the request straight through.
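With proxy integration, API Gateway forwards the raw HTTP request to the lambda and expects back a response object in a fixed shape, where body must be a string (we'll see this again in our lambda code):

{
  "statusCode": 200,
  "headers": { "Content-Type": "application/json" },
  "body": "{\"message\":\"ok\"}"
}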
Now, in the root iac folder, create a rest-api.tf file and add the following code:

# API Gateway
resource "aws_api_gateway_rest_api" "movies_api" {
  name = "movies-api"
}

resource "aws_api_gateway_deployment" "movies_api_deployment" {
  rest_api_id = aws_api_gateway_rest_api.movies_api.id

  triggers = {
    redeployment = sha1(jsonencode([
      aws_api_gateway_resource.movies_root_resource.id,
      module.get_movie_method.id,
      module.get_movie_method.integration_id,
    ]))
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_api_gateway_stage" "live" {
  deployment_id = aws_api_gateway_deployment.movies_api_deployment.id
  rest_api_id   = aws_api_gateway_rest_api.movies_api.id
  stage_name    = "live"
}

resource "aws_api_gateway_resource" "movies_root_resource" {
  parent_id   = aws_api_gateway_rest_api.movies_api.root_resource_id
  path_part   = "movies"
  rest_api_id = aws_api_gateway_rest_api.movies_api.id
}

module "get_movie_method" {
  source               = "./modules/rest-api-method"
  api_id               = aws_api_gateway_rest_api.movies_api.id
  http_method          = "GET"
  resource_id          = aws_api_gateway_resource.movies_root_resource.id
  resource_path        = aws_api_gateway_resource.movies_root_resource.path
  integration_uri      = module.get_movie_lambda.invoke_arn
  lambda_function_name = module.get_movie_lambda.name
  region               = var.region
  account_id           = var.account_id
}

In the variables.tf, add the variable for account_id:

variable "account_id" {
  description = "The ID of the default AWS account"
  type        = string
}
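Note that the get_movie_method module call references module.get_movie_lambda.invoke_arn and module.get_movie_lambda.name, so our lambda module needs to expose those values. Add an outputs.tf file to the lambda module folder with the following outputs:

output "invoke_arn" {
  value = aws_lambda_function.lambda.invoke_arn
}

output "name" {
  value = aws_lambda_function.lambda.function_name
}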

You can either add your account ID to the variables.tfvars file, or pass it as an inline variable in the workflow's Terraform Plan step (the Apply step reuses the saved plan):

# Generates an execution plan for Terraform
- name: Terraform Plan
  run: terraform plan -out=plan -input=false -var-file="variables.tfvars" -var account_id="YOUR_ACCOUNT_ID"

# On push to "main", build or change infrastructure according to Terraform configuration files
- name: Terraform Apply
  run: terraform apply -auto-approve -input=false plan

Just remember to replace YOUR_ACCOUNT_ID with your actual account ID.

This will generate an API with the resource movies, which will be the path /movies of your API.

It will also create a stage named live. Stages are the equivalent of deployment environments. You need the API deployed to a stage to call it. So for our movies endpoint, it will be /live/movies.
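The public invoke URL follows API Gateway's standard format, so for our live stage the endpoint will look like this (the API ID is generated when the API is created):

https://{api-id}.execute-api.{region}.amazonaws.com/live/movies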

It also creates a deployment with a trigger that redeploys to the live stage whenever the method, integration, or resource changes.

Now, push it to GitHub and wait for the workflow and your API to be created. After it is finished, you can go to the API Gateway page of AWS and see your API.

movies-api in API Gateway page

And when you click on it, you can see all the details about the resources:

Resources of movies-api

To see the public URL, you can go to the Stages section:

movies-api stages

Now, if you call /movies in your browser, you should get the response from the Lambda:

GET /movies test result

We must make one adjustment to ensure we use the correct path.

We created the resource /movies and added the GET method there, but our lambda will fetch a movie by its ID, so we need to create a new resource to attach our lambda to correctly.

So, let's create a new resource by adding the following code to the root rest-api.tf file:

resource "aws_api_gateway_resource" "movie_resource" {
  parent_id   = aws_api_gateway_resource.movies_root_resource.id
  path_part   = "{movieID}"
  rest_api_id = aws_api_gateway_rest_api.movies_api.id
}

Add it to the redeployment trigger in the movies_api_deployment:

resource "aws_api_gateway_deployment" "movies_api_deployment" {
  rest_api_id = aws_api_gateway_rest_api.movies_api.id

  triggers = {
    redeployment = sha1(jsonencode([
      aws_api_gateway_resource.movies_root_resource.id,
      aws_api_gateway_resource.movie_resource.id,
      module.get_movie_method.id,
      module.get_movie_method.integration_id,
    ]))
  }

  lifecycle {
    create_before_destroy = true
  }
}

Then modify the get_movie_method module to point to the new resource:

module "get_movie_method" {
  source               = "./modules/rest-api-method"
  api_id               = aws_api_gateway_rest_api.movies_api.id
  http_method          = "GET"
  resource_id          = aws_api_gateway_resource.movie_resource.id
  resource_path        = aws_api_gateway_resource.movie_resource.path
  integration_uri      = module.get_movie_lambda.invoke_arn
  lambda_function_name = module.get_movie_lambda.name
}

Push the code to GitHub, and Terraform will modify your infrastructure.
Your API should look like this:

Updated Resources with GET /movies/{movieID}

Adding DynamoDB

Now that we have a functioning API, let's add our database, DynamoDB, seed it with some data, and hook it up to our GET endpoint.

Terraforming DynamoDB

So, let's start by adding a new file to our iac folder named dynamodb.tf with the following code:

resource "aws_dynamodb_table" "movies-table" {
  name           = "Movies"
  billing_mode   = "PROVISIONED"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "ID"

  attribute {
    name = "ID"
    type = "S"
  }
}

This will generate a minimal-capacity table named Movies, with a partition key named ID of type string.

When you push the code to GitHub and the action runs, you can go to the DynamoDB section of AWS and see the Movies table there.

DynamoDB Movies table in AWS

Let's add a few seed items. In the dynamodb.tf file, add the following code for four table items:

resource "aws_dynamodb_table_item" "the_matrix" {
  table_name = aws_dynamodb_table.movies-table.name
  hash_key   = aws_dynamodb_table.movies-table.hash_key

  item = jsonencode(
    {
      ID    = { S = "1" },
      Title = { S = "The Matrix" },
      Genres = { SS = [
        "Action",
        "Sci-Fi",
        ]
      },
      Rating = { N = "8.7" }
    }
  )
}

resource "aws_dynamodb_table_item" "scott_pilgrim" {
  table_name = aws_dynamodb_table.movies-table.name
  hash_key   = aws_dynamodb_table.movies-table.hash_key

  item = jsonencode(
    {
      ID    = { S = "2" },
      Title = { S = "Scott Pilgrim vs. the World" },
      Genres = { SS = [
        "Action",
        "Comedy",
        ]
      },
      Rating = { N = "7.5" }
    }
  )
}

resource "aws_dynamodb_table_item" "star_wars" {
  table_name = aws_dynamodb_table.movies-table.name
  hash_key   = aws_dynamodb_table.movies-table.hash_key

  item = jsonencode(
    {
      ID    = { S = "3" },
      Title = { S = "Star Wars: Episode IV - A New Hope" },
      Genres = { SS = [
        "Action",
        "Adventure",
        "Fantasy",
        "Sci-Fi",
        ]
      },
      Rating = { N = "8.6" }
    }
  )
}

resource "aws_dynamodb_table_item" "star_wars_v" {
  table_name = aws_dynamodb_table.movies-table.name
  hash_key   = aws_dynamodb_table.movies-table.hash_key

  item = jsonencode(
    {
      ID    = { S = "4" },
      Title = { S = "Star Wars: Episode V - The Empire Strikes Back" },
      Genres = { SS = [
        "Action",
        "Adventure",
        "Fantasy",
        "Sci-Fi",
        ]
      },
      Rating = { N = "8.7" }
    }
  )
}

Now push to GitHub, wait for the workflow to run, and go to the DynamoDB Table in AWS to explore the table items and see the created records:

Movies DynamoDB table seed data
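You can also verify a single item from the command line with the AWS CLI, for example:

aws dynamodb get-item --table-name Movies --key '{"ID": {"S": "1"}}'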

Updating the Lambda to fetch by ID

Now that we have our data, we need to modify our lambda to fetch our items.
First, we need to give our Lambda role the right to perform GetItem actions on the Movies table.
Add the following code to the outputs.tf file in the lambda module folder:

output "role_name" {
  value = aws_iam_role.iam_for_lambda.name
}

Now, in the iac folder, add a file named iam-policies.tf with the following code:

data "aws_iam_policy_document" "get_movie_item" {
  statement {
    effect = "Allow"

    actions = [
      "dynamodb:GetItem",
    ]

    resources = [
      aws_dynamodb_table.movies-table.arn
    ]
  }
}

resource "aws_iam_policy" "get_movie_item" {
  name        = "get_movie_item"
  path        = "/"
  description = "IAM policy allowing GET Item on Movies DynamoDB table"
  policy      = data.aws_iam_policy_document.get_movie_item.json
}

resource "aws_iam_role_policy_attachment" "allow_getitem_get_movie_lambda" {
  role       = module.get_movie_lambda.role_name
  policy_arn = aws_iam_policy.get_movie_item.arn
}

This will generate a policy that allows GetItem on the Movies table and attaches it to our lambda's IAM role.

Now, in the root folder, create a folder named apps, and then a folder get-movie.

Inside this folder, let's start an npm project with:

npm init -y

This will generate a new package.json file.

Most packages the lambda needs to connect with AWS are already included in the Lambda Node.js runtime (the AWS SDK v3 ships with the nodejs18.x and later runtimes). We are creating this project mostly to have the packages available in our local development environment and to set the module type.

In the package.json file, add the following property:

"type": "module"

Your file should look similar to:

{
  "name": "get-movie",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "type": "module",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}

Note that if a package is unavailable in the AWS environment, you must bundle the node_modules folder with your lambda function code, or create a Lambda layer that holds the node_modules and can be shared between lambdas.
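If you ever need that, a layer can also be declared in Terraform. A minimal sketch, assuming you've zipped your dependencies into a hypothetical layer.zip with the nodejs/node_modules folder structure Lambda expects:

resource "aws_lambda_layer_version" "dependencies" {
  filename            = "layer.zip" # hypothetical archive containing nodejs/node_modules
  layer_name          = "shared-node-dependencies"
  compatible_runtimes = ["nodejs20.x"]
}

# Attach it to a function through the aws_lambda_function "layers" argument:
# layers = [aws_lambda_layer_version.dependencies.arn]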

Let's install the packages we'll need with:

npm i --save aws-sdk @aws-sdk/client-dynamodb @aws-sdk/lib-dynamodb

Now, create a folder named src and add a file index.js in it.

We'll add the code to fetch a movie by its ID with:

import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand } from "@aws-sdk/lib-dynamodb";

const tableName = "Movies";

export const handler = async (event) => {
  const movieID = event.pathParameters?.movieID;

  if (!movieID) {
    return {
      statusCode: 400,
      body: JSON.stringify({
        message: "Movie ID missing",
      }),
    };
  }

  console.log("Getting movie with ID ", movieID);

  const client = new DynamoDBClient({});
  const docClient = DynamoDBDocumentClient.from(client);

  const command = new GetCommand({
    TableName: tableName,
    Key: {
      ID: movieID.toString(),
    },
  });

  try {
    const dynamoResponse = await docClient.send(command);
    if (!dynamoResponse.Item) {
      return {
        statusCode: 404,
        body: JSON.stringify({
          message: "Movie not found",
        }),
      };
    }

    const body = {
      title: dynamoResponse.Item.Title,
      rating: dynamoResponse.Item.Rating,
      id: dynamoResponse.Item.ID,
    };

    body.genres = Array.from(dynamoResponse.Item.Genres);

    const response = {
      statusCode: 200,
      body: JSON.stringify(body),
    };

    return response;
  } catch (e) {
    console.log(e);

    return {
      statusCode: 500,
      body: JSON.stringify({
        message: e.message,
      }),
    };
  }
};

This lambda gets the event sent by API Gateway and extracts the movie ID. Then we do some simple validations and get the movie from DynamoDB, transform the data to an API resource so we don't expose our data model, and return it to the API Gateway to send to the client.

You can see the documentation here if you'd like to learn more about the event from API Gateway with Lambda proxy integration.

Remember to stringify the body, or you'll face 500 errors.

Building and deploying

Lastly, we must create a quick build script to organize our code.
First, install the following package:

npm i -D copyfiles

I'm using it because it makes file-copying commands independent of the operating system.

In the package.json file, add the following build script:

{
  "name": "get-movie",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "type": "module",
  "scripts": {
    "build": "copyfiles -u 1 src/**/* build/ && copyfiles package.json build/",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@aws-sdk/client-dynamodb": "^3.468.0",
    "@aws-sdk/lib-dynamodb": "^3.468.0",
    "aws-sdk": "^2.1513.0"
  },
  "devDependencies": {
    "copyfiles": "^2.4.1"
  }
}

And now, let's add the workflow that will push our code to the get-movie lambda.

Create a deploy-get-movie-lambda.yml file in the .github/workflows folder and add the following code:

name: Deploy Get Movie Lambda
on:
  push:
    branches:
      - main
    paths:
      - apps/get-movie/**/*
      - .github/workflows/deploy-get-movie-lambda.yml

defaults:
  run:
    working-directory: apps/get-movie/

jobs:
  terraform:
    name: "Deploy GetMovie Lambda"
    runs-on: ubuntu-latest
    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v3

      - name: Setup NodeJS
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Configure AWS Credentials Action For GitHub Actions
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-central-1

      - name: Install packages
        run: npm install

      - name: Build
        run: npm run build

      - name: Zip build
        run: zip -r -j main.zip ./build

      - name: Update Lambda code
        run: aws lambda update-function-code --function-name=get-movie --zip-file=fileb://main.zip

Remember to set the correct region.

Now push the code to GitHub, wait for it to run, and then call the API Gateway endpoint /movies/1. You should receive a response similar to:

{
  "id":"1",
  "title":"The Matrix",
  "rating":8.7,
  "genres":[
    "Action",
    "Sci-Fi"
  ]
}

Amazing! We have our first endpoint completed!

Conclusion

In this first part, we built the foundation for our project, a serverless CRUD API with email notifications.

We saw how to write and deploy our infrastructure using Terraform, a powerful IaC tool that simplifies how we build and manage our infrastructure.

We wrote our first lambda using NodeJS and easily connected it to DynamoDB to fetch our data.

In the next part, we'll dive deeper and finish our CRUD API with more lambdas, but we'll build some of them using other languages, like Go.

Hope to see you there!

The code for this project can be found here.

Happy coding! 💻
