Serverless APIs on AWS with API Gateway and Lambda Functions (First Steps)

Artur Ceschin

Posted on February 29, 2024


In this article, we will explore how serverless computing works; despite its name, it still depends on servers. We will also walk through a hands-on AWS tutorial, so make sure you have an active account.

Understanding ways to deploy your application

First, let's examine the traditional method of running and hosting your project. Typically, we "rent" computing resources, such as a virtual machine, from cloud providers like AWS, Azure, or GCP, among others. Once we have access to these resources, we configure our project by installing dependencies and running it on a designated port.

On AWS, a suitable service for this traditional approach is Amazon EC2 (Elastic Compute Cloud). With EC2, you can select the operating system and customize the hardware specifications according to your project's requirements. However, keep in mind that the more hardware resources you allocate, the higher the cost will be.

EC2 keeps your application available 24/7, and the cost varies based on the instance type and allocated resources; a basic EC2 instance with 500 MB of memory typically costs around USD 4.00/month. However, this method has some characteristics that may be problematic depending on your specific needs:

  1. Manual Updates: To keep your project up-to-date on services like Amazon EC2, you'll need to update it manually. Although this gives you complete control over your machine, it also means you are responsible for ensuring that everything is up-to-date.

  2. Scaling Responsibility: When you scale your server, you can either increase the power of existing instances (vertical scaling) or create new instances (horizontal scaling). This provides you with greater flexibility but requires careful planning to ensure that your infrastructure effectively meets your needs.

  3. Always-On Billing: EC2 incurs charges per hour, even when your application is idle. You can save costs by shutting down your instance during idle periods, but then it won't be accessible in an emergency. Alternatively, you can downscale or upscale your server to match your project's computational needs during quiet hours.

  4. Infrastructure Knowledge: Using EC2 requires a working understanding of infrastructure.

Serverless

At first glance, 'Serverless' might imply no servers at all. In reality, it means using a server to host your APIs without you having to manage it directly. Instead, cloud providers like AWS take care of the server management on your behalf.

When discussing Serverless, we refer to 'FaaS' (Function As a Service). Each cloud provider offers Serverless products, with AWS offering AWS Lambda.

Serverless computing typically comes with significantly lower costs, especially for small and medium applications.

In a Serverless architecture, our application runs only when an API request arrives, which can result in a 'Cold Start' on the first request. The serverless platform checks whether a previously started container is available to handle the request quickly. If not, it creates one within milliseconds using technologies like Firecracker. Subsequent requests for the same service are handled without starting a new container, known as a 'Warm Start,' ensuring speedy response times for users.
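
One way to observe the difference yourself is to keep state at module scope: it survives 'Warm' invocations of the same container but resets on every 'Cold Start'. A minimal sketch (the counter and timestamp are purely illustrative):

// Module scope runs once per container, i.e., on every cold start.
const containerStartedAt = new Date().toISOString();
let invocationCount = 0;

export const handler = async () => {
  invocationCount++; // keeps growing while the same container stays warm

  return {
    statusCode: 200,
    body: JSON.stringify({ containerStartedAt, invocationCount }),
  };
};

If you invoke this repeatedly, invocationCount climbs until the container is recycled, at which point it drops back to 1.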

Keeping our Serverless Functions small and granular is crucial to ensure that our 'Cold Start' is as fast as possible. Moreover, the container will shut down after a period of inactivity, usually between 5 and 15 minutes. This helps us save significant costs by using resources only when necessary.

It is important to note that we do not pay for the time the container sits idle. Instead, we are billed based on how many times the function is executed and the duration of each execution.
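
To make the billing model concrete, here is a back-of-the-envelope estimate. The traffic numbers are made up, and the rates are assumptions based on the published us-east-1 prices at the time of writing, so double-check the AWS pricing page before relying on them:

// Hypothetical workload: 1 million invocations/month, 200 ms each, 128 MB memory.
const invocations = 1_000_000;
const durationSeconds = 0.2;
const memoryGb = 128 / 1024;

// Assumed us-east-1 rates (verify against the current AWS pricing page):
const pricePerRequest = 0.20 / 1_000_000; // USD per request
const pricePerGbSecond = 0.0000166667;    // USD per GB-second

const requestCost = invocations * pricePerRequest;
const computeCost = invocations * durationSeconds * memoryGb * pricePerGbSecond;

console.log(`~USD ${(requestCost + computeCost).toFixed(2)}/month`); // ~USD 0.62/month

Under these assumptions, traffic that would justify a small always-on EC2 instance costs well under a dollar per month, which is why Lambda shines for small and medium workloads.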

Hands-on Tutorial

Let's see this process in practice on AWS.
Create a function first step

I'll name my function listUser and use Node.js with the arm64 architecture. It's cost-effective and works well for our needs, since our application doesn't depend on any native, architecture-specific libraries.

For the default execution role, we'll create a new role with basic Lambda permissions. Lambda allows us to customize permissions for each function. By default, Lambda enables CloudWatch logging, ensuring all our logs are readily available. This is particularly useful for tracking errors and viewing the execution details of our applications.

For "Advanced settings", we'll keep everything as default, leaving it empty. Then, we'll move on to creating the function.

As mentioned earlier, Lambda automatically integrates with CloudWatch to log every request, which is particularly useful for error handling. To observe this in action, insert a throw new Error() statement before the main logic of the function. For example:

export const handler = async (event) => {
  // The error is thrown before the response is built, so the
  // invocation fails and the error is logged to CloudWatch.
  throw new Error('An error occurred here');

  const response = {
    statusCode: 200,
    body: JSON.stringify('Hello from Lambda!'),
  };
  return response;
};

After inserting this, click 'Deploy,' navigate to the 'Test' tab, and send a request. It doesn't matter what you send for this example; I typically send an object with an email. Then, click on 'Test'.

Next, go to CloudWatch > Logs > Log Group, and you'll find the logs there! You'll likely see something similar to this:
Error in CloudWatch

After creating the function, navigate to the Configuration page. Here, you'll find options for "Memory", "Ephemeral storage", and "Timeout". It's important to note that increasing these values can affect pricing. By default, "Memory" is set to 128 MB, but you can increase it up to 10 GB. The "Ephemeral storage" (the /tmp directory) is temporary and is not guaranteed to persist between invocations, which is why our function should remain stateless. The minimum storage value is 512 MB, and the maximum is 10 GB.

Additionally, consider the "Timeout" setting. The minimum timeout is 1 second, and the maximum is 15 minutes. Lambda functions are intended to be fast; if your process exceeds the configured timeout, it will be terminated.

Lambda Configuration
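
If you prefer scripting these settings over clicking through the console, here is a minimal sketch using the AWS SDK for JavaScript v3 (it assumes the @aws-sdk/client-lambda package is installed and your AWS credentials are configured locally):

import { LambdaClient, UpdateFunctionConfigurationCommand } from '@aws-sdk/client-lambda';

const client = new LambdaClient({});

// Bump memory to 256 MB and the timeout to 10 seconds for our listUser function.
await client.send(new UpdateFunctionConfigurationCommand({
  FunctionName: 'listUser',
  MemorySize: 256, // in MB, between 128 and 10,240
  Timeout: 10,     // in seconds, between 1 and 900
}));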

Now, how can we see this working? Lambda wasn't built just for API development; it was designed with event-driven scenarios in mind, and it works through triggers. For example, we might want to execute our Lambda function when an HTTP request arrives or when our S3 bucket changes, such as on an upload. In such cases, we configure a trigger to invoke our Lambda function.

Let's start with an HTTP trigger example; go to your Lambda > Configuration > Function URL > Create function URL
Create function

In the 'Auth type' section, I'll leave it set to 'NONE', which will generate a basic 'Policy statement'; I won't make any changes there. Next, in the 'Invoke mode' section, I'll stick with the default option, 'BUFFERED.' I'll keep the CORS configuration as it is for now and then click on Save.

Once saved, we'll get a URL. If you click on that URL, you'll probably see something like this:

Lambda URL
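
Under the hood, a Function URL invokes the handler with an HTTP-style event using payload format 2.0, the same shape API Gateway HTTP APIs use. As a rough sketch of how you could inspect it, assuming a query string like ?name=Artur:

export const handler = async (event) => {
  // Function URL events follow the API Gateway HTTP API 2.0 payload format.
  const method = event.requestContext.http.method;
  const name = event.queryStringParameters?.name ?? 'world';

  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}! You sent a ${method} request.` }),
  };
};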

Now, let's explore how to upload multiple files containing logic and libraries to AWS Lambda.

This exercise will demonstrate that Lambda functions can consist of multiple files and utilize various libraries. While the application we're building is simple, it will showcase the flexibility of Lambda in handling multiple packages and files.

I'll create a new folder and initialize a Node.js project by running yarn init -y. This will set up the basic configuration for our project.

Next, I'll create a src folder within the project directory. Inside it, I'll create an index.mjs file. Using the ECMAScript Modules syntax with import and export, this file will serve as the entry point for our Lambda function.

Now, let's create another folder named utils and add a file named response.mjs. In this file, we'll define a function called response that takes a statusCode and a body, converting the body to a string before returning it as part of the response object.

export function response(statusCode, body) {
  return {
    statusCode,
    body: JSON.stringify(body)
  };
}

Now, in our index.mjs file, let's define a 'handler' function. For demonstration purposes, we'll import randomUUID from the node:crypto module and jwt from the jsonwebtoken library, which we'll install using yarn add jsonwebtoken.

import { randomUUID } from 'node:crypto';
import jwt from 'jsonwebtoken';
import { response } from "./utils/response.mjs";

export async function handler(event) {
  const token = jwt.sign({
    sub: randomUUID(),
  }, process.env.JWT_SECRET);

  return response(200, {
    users: [
      {
        id: randomUUID(),
        name: 'Artur',
        token
      }
    ]
  });
}
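
As a quick sanity check, a token produced this way can later be verified with the same secret, for example jwt.verify(token, process.env.JWT_SECRET), which returns the decoded payload containing our sub claim.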

To set environment variables in Lambda, navigate to your Lambda configuration and locate the 'Environment variables' section below the Function URL. Here, you can add the key-value pairs for your variables; in our case, JWT_SECRET.
Lambda Environment variable
Now, our code will have access to the environment variable.

Now, let's discuss how to upload our code. We have two options: uploading a zip file directly or using the code stored in an S3 bucket. To start, click the 'Upload from' button in the top right corner and choose your preferred option; I will use the zip option for now.

Let's zip and upload the code we just created. Since jsonwebtoken is not available in the Lambda runtime by default, make sure the node_modules folder is included in the zip. Once uploaded, your Lambda function configuration should look like the following image:
Code in Lambda

But if we access the URL provided, you will probably see an 'Internal Server Error' response. That's because we haven't set the correct path to our handler. To fix it, go to the Runtime settings section, click on Edit, and change the handler to the desired path, in the format path/to/file.exportedFunction. In my case, that will be live009/src/index.handler

We've uploaded our code, but how do we see it in action? It's a bit of a manual process, like flipping a switch and hoping for the best! But fear not, there are better ways to do this; we'll dive into those in the following tutorials.

But first, let's address a couple of things. Firstly, we haven't told our Lambda function which HTTP methods it should handle, such as POST, PUT, GET, or DELETE. And have you noticed how our URLs are all over the place and don't make much sense? Yeah, we need to fix that, too.

Introducing Amazon API Gateway! This fantastic service enables us to define precisely how our HTTP routes should operate. It acts as a traffic controller for our requests, guaranteeing they always reach the correct destination. Additionally, it performs other essential tasks such as caching and configuring authentication, ensuring that only authorized personnel can access our resources!

To get started, go to API Gateway and select the HTTP API option. This option is faster and more cost-effective than the REST API option. After making your selection, click on the 'Build' button. This will take you to a screen that looks similar to the one below:

API Gateway first step screen

In the image, I selected Lambda as my integration, chose our listUser Lambda function, and gave our API a name (you can name it whatever you like).

Next, I clicked 'Next' to set the HTTP method. In the 'Configure routes' section, I set the method to GET, changed the path to '/users', and specified that requests to this route should be forwarded to the listUser function.

Then, in the Configure stages section, I kept the settings as default for now and clicked Create to complete the process.

Once created, you can access the Invoke URL and add '/users' at the end. This will trigger the Lambda function we set up earlier, and you'll see the response we defined previously.
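
If everything is wired up correctly, the response should look roughly like this (the id and token values below are placeholders; they change on every invocation):

{
  "users": [
    {
      "id": "2f0c1a9e-...-uuid",
      "name": "Artur",
      "token": "eyJhbGciOiJIUzI1NiIs..."
    }
  ]
}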

Using S3 as our Trigger

To conclude this tutorial, we will now create a function that will execute each time a new file is uploaded to an S3 bucket.

To begin, follow the same steps we did earlier to create a new Lambda function. Once the function is created, navigate to the 'Configuration' section and choose 'Triggers' in the left sidebar. Click on the 'Add trigger' button.

In the trigger configuration, select 'S3' as the trigger source, pick the bucket you created previously, and specify that the event should fire when a new object is created. If you only want to trigger the function for JSON files, set the event suffix to '.json'. Before clicking 'Add', make sure to acknowledge the 'Recursive invocation' warning.

Lambda trigger configuration

To see what data is passed in the event object to our handler function, add a console.log(event) within the function. Then, open your S3 bucket and CloudWatch. Create a simple JSON file like the one below:

[{"email": "artur.ceschin@gmail.com", "name": "Artur Ceschin"}, {"email": "Joe.doeh@gmail.com", "name": "Joe Doeh"}]

Upload this file to your S3 bucket, and check CloudWatch to see the console.log output. You should see something like this:

Records: [
    {
      eventVersion: '2.1',
      eventSource: 'aws:s3',
      awsRegion: 'us-east-1',
      eventTime: '2024-02-27T22:09:31.025Z',
      eventName: 'ObjectCreated:Put',
      userIdentity: [Object],
      requestParameters: [Object],
      responseElements: [Object],
      s3: [Object]
    }
  ]

For better readability, you can stringify the event object using console.log('EVENT=>', JSON.stringify(event, null, 2)). This will provide a nicely formatted output in your CloudWatch logs.
If you try to upload a file that is not a .json file, it will not trigger our Lambda.

Let's now read the file uploaded to our S3 bucket. First, let's modify our Lambda function to log the bucket name and key of the uploaded file:

import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3'; // used in the next step

export const handler = async (event) => {
  // S3 sends notifications in batches; grab the first record.
  const [record] = event.Records;

  const bucket = record.s3.bucket.name;
  const key = record.s3.object.key;

  console.log('FILES', { bucket, key });
};

After uploading a new file, check CloudWatch logs. You should see output like this:

FILES { bucket: 'artur.ceschin.dev', key: 'users2.json' }

Now, let's read the contents of the file. We'll use the @aws-sdk/client-s3 package, which comes pre-installed in AWS Lambda. We'll import the S3Client and GetObjectCommand methods to locate and retrieve the file from our S3 bucket. Here's the updated code:

import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';

export const handler = async (event) => {
  const [record] = event.Records;

  const bucket = record.s3.bucket.name;
  const key = record.s3.object.key;

  const s3Client = new S3Client({});
  const command = new GetObjectCommand({ Bucket: bucket, Key: key });

  const response = await s3Client.send(command);

  console.log('RESPONSE =>', response);
};

However, if you run this code and upload a file, you'll likely encounter an 'Access Denied' error. To fix this, navigate to your Lambda function's Configuration > General configuration > Edit. At the bottom of 'Basic settings,' click the link 'View the processJSON-role-x5iz5f89 role.' This will take you to the IAM role associated with your Lambda function.

Next, click Add permissions > Create inline policy. Select the service (S3) and search for 'GetObject.' Then add the ARN of your bucket as the resource; you can leave the object name as '*' so the policy covers every object in the bucket, and click Add ARN. Finally, click Next, provide a name, and click Create policy. In the two images below, you can see the two steps described:
Actions in policy

Add ARN
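
The resulting inline policy should end up equivalent to the JSON below (the bucket name is a placeholder; use your own):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
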
After completing that process, your request should succeed and appear in CloudWatch. But what if we want to read the JSON content inside our object?

In the code below, we retrieve the Body from the response, push chunks (pieces of our object) into an array, and then display the assembled content with a console.log.

import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';

export const handler = async (event) => {
  const [record] = event.Records;

  const bucket = record.s3.bucket.name;
  const key = record.s3.object.key;

  const s3Client = new S3Client({});
  const command = new GetObjectCommand({ Bucket: bucket, Key: key });

  // Body is a readable stream; collect its chunks into a single buffer.
  const { Body } = await s3Client.send(command);
  const chunks = [];
  for await (const chunk of Body) {
    chunks.push(chunk);
  }

  const buffer = Buffer.concat(chunks).toString('utf-8');

  console.log('Buffer =>', buffer);
};
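
As a side note, recent versions of @aws-sdk/client-s3 return a Body stream with a transformToString() helper, which can replace the manual chunk loop; a minimal sketch:

const { Body } = await s3Client.send(command);

// transformToString() collects the stream into a UTF-8 string in one call.
const raw = await Body.transformToString('utf-8');
const users = JSON.parse(raw);

console.log('Users =>', users);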

Wow, we've covered a lot in this tutorial! I hope you found it enjoyable and informative. If you have any doubts or suggestions, please leave them in the comments below. Before we wrap up, let's briefly discuss the pros and cons of using Lambda functions:

Pros ✅

  1. Having a deep understanding of your infrastructure is not necessary.
  2. Maintaining your infrastructure is easier than maintaining other services.
  3. Logging is configured automatically through CloudWatch.
  4. The service is highly scalable.
  5. You only pay for what you use.
  6. Your operational responsibilities are minimal.
  7. Triggers can be set up based on events.

Cons 🛑

  1. Cold starts can add latency when a request arrives and no warm container is available, for example after idle periods or during sudden traffic spikes.
  2. Lambda has some size limitations that may impact your usage. These include:
    • a maximum deployment package size of 250 MB (unzipped)
    • 10 GB of ephemeral storage
    • a timeout limit of 15 minutes
    • a maximum memory allocation of 10 GB
  3. It can get complicated, especially when one Lambda function calls another (see the sketch after this list).
  4. Cost can be a concern at high request volumes, mainly when using API Gateway; at that point, it may be more expensive than provisioning an instance that runs 24/7.
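
Regarding item 3, chaining functions is typically done with the @aws-sdk/client-lambda package. A minimal sketch, where anotherFunction is a hypothetical function name:

import { LambdaClient, InvokeCommand } from '@aws-sdk/client-lambda';

const client = new LambdaClient({});

// Synchronously invoke another function and wait for its result.
const { Payload } = await client.send(new InvokeCommand({
  FunctionName: 'anotherFunction', // hypothetical function name
  Payload: JSON.stringify({ userId: '123' }),
}));

// Payload comes back as a Uint8Array; decode it before parsing.
const result = JSON.parse(new TextDecoder().decode(Payload));

Each hop adds latency and its own billing, which is why deep chains get complicated quickly.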

Conclusion

In this article, we explored serverless computing using AWS Lambda, a cost-effective and scalable solution. We have compared it with traditional methods and highlighted its automation benefits and the simplified deployment process it offers.

Our tutorial has covered the process of setting up Lambda functions, configuring triggers, and handling events such as S3 file uploads. We have also shown some challenges, such as cold starts and size limitations, and provided solutions to overcome them.

Overall, AWS Lambda is a powerful tool for modern applications that streamlines deployment, reduces costs, and simplifies scalability.

If you have any questions or suggestions for the next articles, please comment below! 👋
