How to Secure Your AWS Serverless Application?

imohd23

Mohamed Latfalla

Posted on July 8, 2021

Security is no joke. It is one of the key things you have to revisit whenever you’re architecting a new serverless application.

A Serverless application is, like any other application, basically an idea with some logic behind it. You need computing power to execute that logic, object storage in case you need it, a place to store your records, and a way to tell your logic: go go go!

Pretty simple!

The beauty of Serverless is that we don’t care where or how our logic gets stored and run, although Information Security teams would get really mad if they read that statement. Anyway, what really matters for us as cloud developers, as I like to call us, is that our logic gets executed and customers are happy. A win-win situation!
But I wish it were that simple. We all need to care about our own application’s security. Some people would say: Security! Hack! We got exposed! Well, sometimes these things happen. But you, as a cloud developer, can help prevent this from happening.

Let’s dive into it together, shall we?

Standard Services:

Most basic Serverless applications need the following services:

1- Execution endpoint -> API GW

2- Object storage -> S3

3- Computing unit -> Lambda

4- Records storage -> DynamoDB

Of course you can have more than this, but let’s keep focusing on the main parts that shape a basic Serverless application.

All these services are managed by AWS: patching, updating or deprecating runtimes, and scaling services based on our usage. All the major operational tasks are handled by them, which is the soul of this architecture: we focus on developing and leave the operational tasks to AWS. I’m really fine with that. But they can’t do the job without us, the cloud developers.

We have to care about our applications. We must secure them, which, in fact, is pretty simple.

Securing your Serverless application starts at the beginning: with best practices.

Wait, Mohamed, are you telling me that if I follow the best practices for each service, that would secure my application? YESSS.

Let’s take a closer look at some of these steps, service by service:

API Gateway (API GW):

A- Use WAF:

AWS provides a service called WAF, a Web Application Firewall. The name is self-explanatory and tells you why to use this service: it protects against harmful usage that might affect the availability of your application and consume its resources, which would interrupt the usability of the application.

Use WAF to monitor your API GW executions and act based on specific metrics. Using pre-defined rules will save you time planning and testing techniques to prevent common online threats.
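
To make this concrete, here is a minimal sketch, using Python and boto3, of associating an existing WAFv2 web ACL with an API Gateway stage. The web ACL ARN, API ID and stage name are placeholders, not real resources.

```python
import boto3

# Associate an existing regional WAFv2 web ACL with a REST API stage
wafv2 = boto3.client("wafv2")

wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-web-acl/EXAMPLE-ID",
    # API Gateway stage ARN format: arn:aws:apigateway:{region}::/restapis/{api-id}/stages/{stage}
    ResourceArn="arn:aws:apigateway:us-east-1::/restapis/abc123/stages/prod",
)
```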

B- Configure Endpoint Throttling:

You know your application and your customers; if an endpoint is getting an unusual number of executions, something is wrong. Setting a throttling threshold stops the exhaustion of your resources, especially if the endpoint invokes Lambda, which we will visit in a minute.
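
As a rough sketch, assuming a REST API with a stage named prod, stage-wide throttling limits could be set like this with boto3. The API ID and the numbers are placeholders you would tune to your own traffic.

```python
import boto3

apigw = boto3.client("apigateway")

apigw.update_stage(
    restApiId="abc123",
    stageName="prod",
    patchOperations=[
        # Steady requests per second allowed across all methods in the stage
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "100"},
        # Short burst capacity above the steady rate
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "200"},
    ],
)
```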

C- Validate Your API GW Execution:

One of the major risks in Serverless applications is broken authentication. If your endpoints are being called and consumed by an unauthorized actor, you’re in big trouble.

Use services like Amazon Cognito to authenticate the caller. That way, you get a better understanding of your customers and can ensure that no one executes a function they’re not supposed to.
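
A minimal sketch of wiring a Cognito user pool authorizer to a REST API with boto3; the API ID and user pool ARN are placeholders.

```python
import boto3

apigw = boto3.client("apigateway")

apigw.create_authorizer(
    restApiId="abc123",
    name="cognito-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=["arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE"],
    # API GW reads the JWT from the Authorization header of each request
    identitySource="method.request.header.Authorization",
)
```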

S3:

A- Block Public Access (in some cases):

If you’re hosting sensitive data like identity cards, passports, or secret reports, you have to block public access to the bucket. This prevents any attempt to expose your files.

But this recommendation won’t work if you’re hosting publicly available data in your bucket, as that needs public access. Yes, there are other options to share files publicly while public access is blocked. But why the hassle? If you’ve reached this point, you need help with your S3 strategy.
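
For the sensitive-data case, here is a minimal sketch of enabling all four Block Public Access settings on a single bucket with boto3; the bucket name is a placeholder.

```python
import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="my-sensitive-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # reject bucket policies that allow public access
        "RestrictPublicBuckets": True,  # restrict access to AWS principals only
    },
)
```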

B- Encrypt Files At Rest And In Transit:

The 3 main operations against S3 are putting an object, storing it, and eventually retrieving it. To do these 3 operations securely, consider the following:

To upload securely into S3, use a presigned URL. The reason is that you, as a cloud developer, allow a PUT action into a specific bucket for a specific file. This way, you give the caller the authority to do a single action on a single file. Less privilege, fewer concerns.

To store the file, enable server-side encryption. This recommendation comes from AWS themselves, as they advise it to enhance security and privacy. Also, it comes at no additional cost. Cool, right?

To share the file, consider a presigned URL again, the same concept as for uploads. When you generate a presigned link for sharing a file, you get a URL that consists of the object path and a signature that is valid to retrieve only that specific file for a period of time. Once it expires, it is unusable.
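
Putting the three operations together, here is a minimal sketch with boto3; the bucket and key names are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# 1. Upload: a presigned PUT URL valid for a single object, for 5 minutes
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-bucket", "Key": "uploads/passport-scan.jpg"},
    ExpiresIn=300,
)

# 2. Store: default server-side encryption for everything written to the bucket
s3.put_bucket_encryption(
    Bucket="my-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# 3. Share: a presigned GET URL that expires after 5 minutes
download_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "uploads/passport-scan.jpg"},
    ExpiresIn=300,
)
```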

C- IAM Role Access:

Creating an IAM role is really one of the best “best practice” actions that you as a cloud developer can take. In the S3 case, creating one with specific actions limits the exposure of the bucket and its contents.

Create a role that allows only the specific action on the specific bucket. Why? Because this makes sure that even if the access is compromised, it can’t cause great harm to your data.
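
A minimal sketch of such a narrow inline policy, assuming a hypothetical role and bucket; it allows nothing except PutObject under one prefix.

```python
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-bucket/uploads/*",
        }
    ],
}

# Attach the policy inline to the role the function uses
iam.put_role_policy(
    RoleName="upload-function-role",
    PolicyName="allow-put-uploads-only",
    PolicyDocument=json.dumps(policy),
)
```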

D- Bucket Policy:

A bucket policy is a list of rules that guards the bucket from being misused, intentionally or unintentionally. I had a hard time understanding this concept, and some examples need whole articles to describe, but let’s summarize it with the next example:

You have a bucket that should accept files generated only from resources deployed in “Core-Production-VPC”. Usually we guard our bucket with an IAM role, but in this case that’s not suitable, as anyone who has an IAM role allowing actions against this bucket will be able to perform them. So we create an S3 bucket policy that accepts PutObject only from “Core-Production-VPC”.

Who wins in this case, the IAM role or the bucket policy? Well, it’s a combination of both. If your IAM role has access to the bucket, the call will pass through. But if you don’t fulfill the policy requirements, then sorry, you can’t PutObject into it.

A really cool way to make sure you don’t have misconfiguration or misuse coming from IAM roles.
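
A minimal sketch of such a bucket policy, assuming the requests from “Core-Production-VPC” reach S3 through a VPC endpoint; the bucket name and VPC ID are placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyPutFromOutsideCoreProductionVPC",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-bucket/*",
            # aws:SourceVpc is evaluated for requests arriving via a VPC endpoint
            "Condition": {"StringNotEquals": {"aws:SourceVpc": "vpc-0123456789abcdef0"}},
        }
    ],
}

s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(bucket_policy))
```

The explicit Deny wins over any IAM Allow, which is exactly the combination described above.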

Lambda:

A- One IAM Role Per Function:

Lambda has access to many services, and when you use the SDK, oh man, you have them all! So, should you be happy? NO, because the risks are countless.

So let’s focus on this, as it’s a really important point. Less privilege, fewer concerns. AWS advises that you limit each function with its own narrowly scoped IAM role. That way, you can be sure the function won’t cause unexpected harm, as its boundary is well defined and known. One of the newer security risks is event injection. Did you test your function well? Think twice.
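
A minimal sketch of a dedicated role for a single function: trusted only by Lambda and granted nothing beyond the basic CloudWatch Logs permissions. The role name is a placeholder.

```python
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# One role per function, so its boundary is well defined and known
iam.create_role(
    RoleName="orders-api-function-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Only the managed policy for writing logs; add nothing the function doesn't need
iam.attach_role_policy(
    RoleName="orders-api-function-role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)
```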

B- Temporary Credentials For AWS Access:

As we said in the previous point, Lambda can access almost all your AWS resources. Tell that to your Information Security team and run away! So, how do you set its access boundaries? In the API GW section we talked about authorization, and yes, it applies here too. Say you have an application where one of your customers stores data in DynamoDB. You generate short-lived credentials for this customer, scoped to that specific target. Once the time is up, the access is revoked. Simple and effective.
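
A minimal sketch of that flow with STS: assume a narrowly scoped role for 15 minutes and use the temporary credentials against DynamoDB. The role ARN and table name are placeholders.

```python
import boto3

sts = boto3.client("sts")

creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/customer-orders-writer",
    RoleSessionName="customer-42-session",
    DurationSeconds=900,  # credentials expire on their own after 15 minutes
)["Credentials"]

# A DynamoDB client built from the short-lived credentials
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

dynamodb.put_item(
    TableName="orders",
    Item={"customerId": {"S": "42"}, "orderId": {"S": "o-1001"}},
)
```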

C- Limit Execution Duration:

When it comes to execution duration, it comes down to what you actually pay AWS. Since Lambda is pay-as-you-go, you need to know what you’re paying for.

One concern is that if your functions are under a DDoS attack (which is not good), they will be running all the time. This has both financial and usability impacts. One way to reduce the effect of these kinds of attacks is to limit the execution duration to what your tests show. Test the function, learn how long it usually needs, and set that as a hard limit for your team. The function will time out and free up the resources. But wouldn’t this allow higher attack rates? Not necessarily, as you might face a smaller amount of load, which can reduce the number of affected customers. How?

OK, if you’re under attack, your resources are being executed at a high call rate, which keeps them busy. To avoid that, at least to a certain degree, limit the execution duration. This releases resources quickly and frees up room for “some” customer calls. It may not seem like a great option, but if the attacker has reached the functionality level, you don’t have that many options. Do you?
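
A minimal sketch of capping the timeout with boto3, assuming your tests show the function normally finishes in a few seconds; the function name is a placeholder.

```python
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="orders-api-function",
    Timeout=5,  # seconds; the invocation is stopped and resources freed once this is hit
)
```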

D- Coding Best Practice:

Your coding practices can have a great impact on your security. How? Let’s find out.

Let’s assume you have the wrong IAM role attached to your function and you’re using a 3rd-party library that has vulnerabilities. Guess what can happen? Always choose trusted 3rd-party libraries.

Say you have a gigantic function that receives an enormous payload. The payload contains an event injection script and you didn’t validate it. Man, you’re doomed. Always validate your payload.
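
A minimal sketch of what that validation could look like inside a handler; the expected fields are hypothetical, so adapt them to your own payload contract.

```python
import json

def handler(event, context):
    # Parse the API Gateway request body defensively
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": "Invalid JSON"}

    order_id = body.get("orderId")
    quantity = body.get("quantity")

    # Reject anything that doesn't match the expected shape and types
    if not isinstance(order_id, str) or not order_id.isalnum():
        return {"statusCode": 400, "body": "orderId must be alphanumeric"}
    if not isinstance(quantity, int) or not (1 <= quantity <= 100):
        return {"statusCode": 400, "body": "quantity must be between 1 and 100"}

    # Continue with the validated values only
    return {"statusCode": 200, "body": json.dumps({"orderId": order_id, "quantity": quantity})}
```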

I could keep writing about this section forever, but I believe the message is received. Best practices can save you from threats.

DynamoDB:

A- Encrypt At-Rest Data:

So, when you have important data that you need to store in your table, like sensitive information about clients or your customers’ business secrets, you definitely need to encrypt it. The reason is to protect this data from unauthorized access via your API, or even from your own developers! Yes, sometimes the risk can come from inside.

Where do you store your keys? Well, you have many options, like Systems Manager Parameter Store, Secrets Manager, or your Lambda environment variables.
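
A minimal sketch of both ideas: switching a table to a customer-managed KMS key and reading a secret at runtime from Secrets Manager instead of hard-coding it. The table name, key alias, and secret name are placeholders.

```python
import boto3

dynamodb = boto3.client("dynamodb")
secrets = boto3.client("secretsmanager")

# Encrypt the table at rest with a customer-managed KMS key
dynamodb.update_table(
    TableName="customers",
    SSESpecification={
        "Enabled": True,
        "SSEType": "KMS",
        "KMSMasterKeyId": "alias/customers-table-key",
    },
)

# Fetch the application secret at runtime rather than storing it in code
secret_value = secrets.get_secret_value(SecretId="prod/customers/field-encryption-key")["SecretString"]
```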

B- IAM Roles to Access DynamoDB:

This point is not new: you have to lock down access to your tables with an IAM role. As we described earlier, this ensures no function can do anything it isn’t supposed to do, which decreases the potential harm from functions that could fall victim to an event injection attack.
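
A minimal sketch, mirroring the earlier S3 example: an inline policy that allows only a few actions on a single table. The role and table names are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

table_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
        }
    ],
}

iam.put_role_policy(
    RoleName="orders-api-function-role",
    PolicyName="orders-table-access-only",
    PolicyDocument=json.dumps(table_policy),
)
```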

C- VPC endpoint to access DynamoDB:

If you have functions deployed within a specific VPC and only those should be able to access a specific table, you can set up a policy that allows access only from the wanted VPC. Cool! This also decreases the attack surface: if one of your functions belonging to another source gets compromised, you’re good to some extent, as not everything in DynamoDB will be exposed to the attacker.
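
A minimal sketch of a gateway VPC endpoint for DynamoDB whose policy only allows access to one table; the VPC, route table, and table identifiers are placeholders.

```python
import json
import boto3

ec2 = boto3.client("ec2")

endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "dynamodb:*",
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
        }
    ],
}

# Traffic from this VPC reaches DynamoDB only through the endpoint,
# and the endpoint policy limits what that traffic can touch
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],
    PolicyDocument=json.dumps(endpoint_policy),
)
```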

Summary

So, to summarize, following these steps will enhance your Serverless application’s security, as they cover most of the main points. As you can see, these are really not hard steps. They have a lot of documentation behind them and they will raise your team’s standards. It’s a win-win situation.

Stay Safe!
