Serve Your Assets Automatically with Bitbucket and AWS
David💻
Posted on August 25, 2024
This article details how to serve our assets as static content to be consumed on our webpages. The aim is to showcase how we can automate this process using our Bitbucket repositories and some AWS services.
What are assets in web development?
In web development, assets are the essential files that make up a website, including images, CSS, JavaScript, fonts, videos, and documents. They can define the site's appearance, functionality, and content.
What is a CDN?
A Content Delivery Network (CDN) is a network of distributed servers that deliver web content to users based on their geographic location. CDNs store copies of website assets, like images, videos, and scripts, in multiple data centers around the world, enabling faster access by serving content from the closest server to the user. This reduces load times, improves site performance, and provides better protection against traffic spikes and distributed denial-of-service (DDoS) attacks.
Requirements
- AWS account
- Bitbucket repository
- An ACM certificate and a domain in Route 53 (optional)
Walkthrough
First, let's create a Bitbucket repository. There are no special settings to worry about here; the defaults are fine.
After that, let's clone our repository:
git clone https://xxxx@bitbucket.org/my-projects/assets-micro.git
Inside our repository, let's add our assets. Say we have a bunch of .svg files we want to share.
After adding our assets, let's push them into our repository:
git add .
git commit -m "first commit"
git push origin main
Great! Now we have our assets in our Git repository, but how do we publish them to the internet?
Introducing AWS services. My first idea was to create an S3 bucket along with a Lambda function. The Lambda would act as a webhook receiver, exposed either through a Function URL or API Gateway, and would use Python with the Boto3 library to upload the repository contents to S3.
However, I found a more streamlined and easy-to-use solution for this use case.
The constraint for this use case was that only the main branch should be published to the internet; other developers work in separate branches and must open a Pull Request that gets merged before their changes go live.
So, I found a really neat solution that meets my needs and is extremely easy to set up.
But before going deeper into the architecture, let's create an S3 bucket that will serve as our assets repository in AWS.
For Object Ownership, leave the recommended option.
For Block Public Access, make sure the settings allow public bucket policies, since we will attach one in the next step.
Configure the remaining options as you see fit, then create the bucket.
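If you prefer the CLI over the console, here's a minimal sketch of the same steps. It assumes the bucket is named mf-assets and lives in us-east-1; adjust both to your setup.

# Create the bucket (us-east-1 shown; other regions also need --create-bucket-configuration)
aws s3api create-bucket --bucket mf-assets --region us-east-1

# Keep ACL-based public access blocked, but allow public bucket policies
aws s3api put-public-access-block \
  --bucket mf-assets \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=false,RestrictPublicBuckets=false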
Now let's modify our bucket and be sure to add this bucket policy.
This policy allows anyone (public access) to read or download any object stored in the S3 bucket named mf-assets. It's typically used to make the contents of a bucket publicly accessible, such as hosting public assets like images, videos, or other static files.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mf-assets/*"
        }
    ]
}
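If you save the policy above to a file (bucket-policy.json is just an assumed name), you can attach it from the CLI as well:

# Attach the public-read policy to the bucket
aws s3api put-bucket-policy --bucket mf-assets --policy file://bucket-policy.json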
Introducing Codebuild. AWS CodeBuild is a fully managed build service provided by Amazon Web Services (AWS) that automates the process of compiling source code, running tests, and producing software packages ready for deployment.
Let's start by creating a CodeBuild project for this.
Now let's set up our Bitbucket configuration as the source. For this, you will need to have enough permissions to authenticate AWS with Bitbucket. We are going to read the code only from the main branch in our repository.
Now configure a webhook. Every time we create a PR and it is merged, this CodeBuild project will be triggered.
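The console sets this up for you, but for reference, here is a sketch of creating the webhook from the CLI with a filter on the main branch. The project name assets-micro is an assumption; a merged PR into main ends up as a push to main, which is what the filter matches.

# Trigger builds only for pushes to the main branch
aws codebuild create-webhook \
  --project-name assets-micro \
  --filter-groups '[[{"type":"EVENT","pattern":"PUSH"},{"type":"HEAD_REF","pattern":"^refs/heads/main$"}]]'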
For the next step, we can use the default build image that AWS provides.
Note: for this CodeBuild project's service role, be sure to create a role with enough S3 permissions. Its policy should look something like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::mf-assets",
                "arn:aws:s3:::mf-assets/*"
            ]
        }
    ]
}
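Assuming the service role already exists (CodeBuild can create one for you), you can attach the statement above as an inline policy from the CLI. The role name, policy name, and file name below are placeholders:

# Attach the S3 permissions above to the CodeBuild service role
aws iam put-role-policy \
  --role-name codebuild-assets-micro-service-role \
  --policy-name assets-s3-sync \
  --policy-document file://codebuild-s3-policy.json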
For the buildspec, add this:
version: 0.2

phases:
  pre_build:
    commands:
      - echo "Setting up environment"
      - echo "Current branch is $CODEBUILD_WEBHOOK_HEAD_REF"
  build:
    commands:
      - echo "Starting to sync repository files to S3"
      - aws s3 sync . s3://mf-assets/ --exclude ".git/*"

artifacts:
  files:
    - '**/*'
  discard-paths: yes
And done! Now, the next time we merge our code into the main branch, our code is going to be inside our S3 bucket.
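If you want to see what the sync would do before the webhook ever fires, you can run the same command locally with --dryrun, assuming your local AWS credentials can access the bucket:

# Preview which files would be uploaded, without touching the bucket
aws s3 sync . s3://mf-assets/ --exclude ".git/*" --dryrun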
But how do we actually serve this to the internet? Introducing CloudFront. Amazon CloudFront is a content delivery network (CDN) service provided by Amazon Web Services (AWS). It securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. CloudFront caches copies of your content at edge locations worldwide, which are closer to your users, ensuring faster access to the content.
Let's create a distribution. Be sure to select the S3 bucket and to create an Origin Access Control for our bucket.
Origin Access Control (OAC) in Amazon CloudFront is a feature that enhances the security of your content delivery by restricting direct access to your origin, so requests have to go through CloudFront.
Leave the other options according to your needs. However, for this example, I didn't modify anything else except the Custom SSL certificate (optional), where I added the ACM of my domain.
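One thing to note: when you create the distribution with an OAC, CloudFront shows you a bucket policy statement to copy into the S3 bucket so the distribution is allowed to read the objects. A sketch of applying it from the CLI is below; the account ID and distribution ID are placeholders, and since put-bucket-policy replaces the whole policy, the public-read statement from earlier is included again.

aws s3api put-bucket-policy --bucket mf-assets --policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mf-assets/*"
    },
    {
      "Sid": "AllowCloudFrontOAC",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mf-assets/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::account-id:distribution/distribution-id"
        }
      }
    }
  ]
}'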
After the distribution is created, edit it and, if you have a registered domain, add an Alternate domain name (CNAME).
Once the distribution finishes deploying, we can check our assets.
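A quick way to verify is to request one of the assets through the distribution's domain. The domain and file name below are placeholders:

# Expect an HTTP 200 and an x-cache header from CloudFront in the response
curl -I https://d1234abcd.cloudfront.net/logo.svg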
Great! Now for the last step, let's have CodeBuild invalidate the CloudFront cache whenever new assets are synced. Going back to our role's policy, we need to add the following statement:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::mf-assets",
                "arn:aws:s3:::mf-assets/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "cloudfront:CreateInvalidation",
                "cloudfront:GetDistribution",
                "cloudfront:GetInvalidation",
                "cloudfront:ListInvalidations",
                "cloudfront:ListDistributions"
            ],
            "Resource": "arn:aws:cloudfront::account-id:distribution/distribution-id"
        }
    ]
}
And for the buildspec:
version: 0.2

phases:
  pre_build:
    commands:
      - echo "Setting up environment"
      - echo "Current branch is $CODEBUILD_WEBHOOK_HEAD_REF"
  build:
    commands:
      - echo "Starting to sync repository files to S3"
      - aws s3 sync . s3://mf-assets/ --exclude ".git/*"
  post_build:
    commands:
      - echo "Invalidating CloudFront distribution cache"
      - aws cloudfront create-invalidation --distribution-id your-distribution-id --paths "/*"
      - echo "Build and cache invalidation completed successfully"

artifacts:
  files:
    - '**/*'
  discard-paths: yes
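To confirm that the post_build step actually ran, you can list the invalidations on the distribution (using the same distribution ID placeholder as in the buildspec); recent ones should show up with a Completed status:

aws cloudfront list-invalidations --distribution-id your-distribution-id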
Optional
If you have Route 53 configured, just create a new CNAME record that points your subdomain to the CloudFront distribution's domain name.
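If you'd rather do this from the CLI, here is a sketch; the hosted zone ID, subdomain, and CloudFront domain are all placeholders:

# Point assets.example.com at the CloudFront distribution with a CNAME record
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789ABC \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "assets.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "d1234abcd.cloudfront.net" }]
      }
    }]
  }'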
Now you can serve your content from your own domain.
Conclusions
I initially overcomplicated such a simple problem with a Lambda function, but ended up with a much more streamlined solution built around CodeBuild. There are tons of AWS services that, with the proper knowledge, can remove a significant amount of operational overhead in just a few minutes!
I hope this article helps you a little bit! See you in the next one.