Serverless Micro-Frontends with the AWS CDK

kayis


Posted on March 28, 2021


After some extended silence, born out of fear of failure and procrastination, I'm back. This time with some technical advice.

As I already wrote before, I wanted to host the whole system on AWS because... I don't know anything else :D

Anyway, I looked into different frontend architectures, and micro-frontends seem to be a reasonable approach.

Some people said the overhead would be too big, but I saw some implementations with basic iframes, and they didn't look much more complicated than building a frontend monolith.

I think the micro-frontend approach works well if applied right from the start. This way, you can split your features into reasonable chunks of UI, each forming an encapsulated micro-frontend, and then tie them together with a container.

In the end, you should have a bunch of very simple frontends and not a single complex one.

I guess the devil lies in the details, as always, but what do you get if you don't try, right? :D

Stack

Anyway, I wanted to build the whole thing with serverless technologies because they look like they could really ease some pain points. Especially when starting and when transitioning from a simple MVP to a complex regular product.

I believe AWS has the most comprehensive serverless portfolio right now, and I'm already pretty deep into the whole eco-system, so I think going all AWS could be a good idea.

So, building on AWS serverless technology, let's start with the infrastructure required for a frontend or multiple micro-frontends, to be specific.

IaC

My infrastructure as code (IaC) tool will be the AWS CDK. I think it hits the sweet spot between CloudFormation and Amplify. It comes with more convenience than CloudFormation, but less structure than Amplify, which should lead to more flexibility, I hope.

File Hosting

I'd host the files in S3 buckets. One bucket per micro-frontend. That way, I can deploy every frontend on its own. And if I make the version part of the bucket name, I will get a new URL automatically after every deployment.

Content Delivery Network

Next, I will put all these buckets behind a CloudFront distribution. This will allow me to give every bucket a path under a single domain, and CloudFront automatically caches the files at edge locations all over the world to make access fast. Also, it's cheaper than using S3 directly, haha.

Glue Code

S3 is a pretty dumb web server, and CloudFront doesn't do much to make it smarter. That's why I use Lambda@Edge to put some glue code between them.

Lambda@Edge can intercept requests between CloudFront and S3. These "origin requests" are only fired when CloudFront has a cache miss and has to go to S3, so it shouldn't be that expensive.

This way, I can solve some tricky problems with S3.

First, index files in sub-directories. S3 doesn't have real subdirectories, so out of the box it only delivers one index.html for the bucket's root. With Lambda@Edge, I can append the index file name to the origin request URL.
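A minimal sketch of such a handler, assuming an origin-request trigger (the names here are my own, not from a real deployment):

```typescript
// Hypothetical Lambda@Edge origin-request handler: append "index.html"
// to directory-style URIs so S3 can find the actual file.
const handler = async (event: any) => {
  const { request } = event.Records[0].cf;
  if (request.uri.endsWith("/")) {
    // "/admin/" -> "/admin/index.html"
    request.uri += "index.html";
  } else if (!request.uri.split("/").pop().includes(".")) {
    // "/admin" -> "/admin/index.html"
    request.uri += "/index.html";
  }
  return request;
};
```

Requests for actual files like /app.js pass through untouched, because they contain a file extension.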

Second, path prefixes. If I add a bucket as an origin to a CloudFront distribution path, the whole path will be sent to S3.

For example, if I request https://example.com/admin/dashboard and my admin bucket is an origin with the path pattern admin/* then CloudFront would request /admin/dashboard from the bucket.

Lambda@Edge can then strip the /admin/ part and request the right file from S3.
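Sketched as a tiny handler (hypothetical, hard-coded to the admin prefix for illustration):

```typescript
// Hypothetical origin-request handler: strip the "/admin" behavior prefix
// so CloudFront asks the bucket for "/dashboard", not "/admin/dashboard".
const trimPrefix = async (event: any) => {
  const { request } = event.Records[0].cf;
  if (request.uri.startsWith("/admin/")) {
    request.uri = request.uri.slice("/admin".length);
  }
  return request;
};
```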

Lambda@Edge Request Flow

Domain Handling

Finally, the CloudFront distribution can get a domain from Route53. It might seem a bit overkill to use a whole service if only one domain is required for all the frontends, but if the backend is built with microservices, it will need many subdomains, too, so it's probably a good idea to let AWS handle the domains.

That way, everything can be configured with CloudFormation, and in turn, the CDK.

I made a small diagram to show how it should all play together.

Architecture Diagram

Example Code

Every micro-frontend can be a single stack for the S3 bucket, pulled together via a container stack that takes care of CloudFront and Route53.

Micro-Frontend Stack

I'm using three packages here: @aws-cdk/core, @aws-cdk/aws-s3, and @aws-cdk/aws-s3-deployment.

import * as cdk from "@aws-cdk/core";
import * as s3 from "@aws-cdk/aws-s3";
import * as s3deploy from "@aws-cdk/aws-s3-deployment";

const bucket = new s3.Bucket(this, "Bucket", {
  bucketName: props.name + "-" + props.version,
  publicReadAccess: true,
  websiteIndexDocument: "index.html",
  websiteErrorDocument: "error.html",
});

new s3deploy.BucketDeployment(this, "Deployment", {
  sources: [s3deploy.Source.asset(props.frontendDir)],
  destinationBucket: bucket,
});

new cdk.CfnOutput(this, "BucketUrl", {
  value: bucket.bucketWebsiteUrl,
});

The bucket needs a unique name and version, and the bucket deployment will fill the bucket on deploy with files from a directory. The output will display the bucket URL automatically generated by AWS.

If you release a new version, you can change the version string, and a new bucket with a new name, a new URL, and the new content will be created.

The CDK retains buckets instead of deleting them; this means old versions can still be found in their respective buckets and URLs. This way, a roll-back is simply a switch back to an old origin URL in CloudFront.

This is just the absolute basic. You can add code to generate names and versions automatically if needed.
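For instance, the version string could be derived from each frontend's package.json, so a release bump automatically produces a new bucket (a hypothetical helper, assuming a Node.js project layout):

```typescript
import * as fs from "fs";

// Hypothetical helper: read the version from a frontend's package.json and
// turn it into a bucket-name-safe suffix (dots in bucket names break
// virtual-hosted HTTPS access, so they are replaced with dashes).
const bucketVersion = (frontendDir: string): string => {
  const pkg = JSON.parse(
    fs.readFileSync(`${frontendDir}/package.json`, "utf8")
  );
  return pkg.version.replace(/\./g, "-"); // "1.2.3" -> "1-2-3"
};
```

The micro-frontend stack could then use `props.name + "-" + bucketVersion(props.frontendDir)` as the bucket name.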

Container Stack

In the container stack I used these packages: @aws-cdk/aws-certificatemanager, @aws-cdk/aws-cloudfront, @aws-cdk/aws-cloudfront-origins, @aws-cdk/aws-lambda, @aws-cdk/aws-route53, and @aws-cdk/aws-route53-targets.

import * as cdk from "@aws-cdk/core";
import * as acm from "@aws-cdk/aws-certificatemanager";
import * as cloudfront from "@aws-cdk/aws-cloudfront";
import * as origins from "@aws-cdk/aws-cloudfront-origins";
import * as lambda from "@aws-cdk/aws-lambda";
import * as route53 from "@aws-cdk/aws-route53";
import * as targets from "@aws-cdk/aws-route53-targets";

const certificate = acm.Certificate.fromCertificateArn(
  this, "certificate", props.certArn
);

const distribution = new cloudfront.Distribution(this, "distribution", {
  certificate,
  defaultBehavior: { origin: new origins.S3Origin(props.frontends.root) },
  domainNames: ["example.com"],
});

const originPathTrimmer = new cloudfront.experimental.EdgeFunction(
  this,
  "originPathTrimmer",
  {
    runtime: lambda.Runtime.NODEJS_12_X,
    handler: "index.handler",
    code: lambda.Code.fromInline(`
      exports.handler = async (e) => {
        const { request } = e.Records[0].cf;
        // Trim the prefix so S3 receives "/dashboard", not "/admin/dashboard".
        if (request.uri.startsWith("/admin/"))
          request.uri = request.uri.replace("/admin/", "/");
        return request;
      };
    `),
  }
);

distribution.addBehavior("/admin/*", new origins.S3Origin(props.frontends.admin), {
  edgeLambdas: [
    {
      functionVersion: originPathTrimmer.currentVersion,
      eventType: cloudfront.LambdaEdgeEventType.ORIGIN_REQUEST,
    },
  ],
});

new route53.ARecord(this, "record", {
  zone: route53.HostedZone.fromLookup(this, "zone", {
    domainName: "example.com",
  }),
  target: route53.RecordTarget.fromAlias(
    new targets.CloudFrontTarget(distribution)
  ),
});

new cdk.CfnOutput(this, "frontendUrl", {
  value: "https://example.com",
});

new cdk.CfnOutput(this, "distributionId", {
  value: distribution.distributionId,
});

Let's go through this from top to bottom.

First, we need a certificate for the domain we want to use, and we have to grab its ARN from the Certificate Manager before we can deploy. Note that certificates used with CloudFront have to be created in the us-east-1 region.

CloudFront uses the cert to check if we really own the domain.

Then we create the CloudFront distribution. It needs the domain name and the certificate as proof of ownership. We can also define a default behavior right away. All requests that don't match any path will be fetched from that default origin we give here. It's like the root frontend or the entry/landing page.

It takes a bucket object from our micro-frontend stacks.

This will couple the two stacks, which isn't optimal, even though it makes deploying much simpler.

Instead of an S3Origin, you can also use an HttpOrigin with the bucket's website domain (just the domain, without the protocol, otherwise you get an error). But this requires you to configure the bucket as a website bucket, which makes it publicly accessible.
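A sketch of that variant, assuming the bucket was created with website hosting enabled as in the micro-frontend stack above (the names are mine):

```typescript
// Hypothetical HttpOrigin variant: point CloudFront at the bucket's
// website endpoint. Note: pass the bare domain, not a full URL.
const adminOrigin = new origins.HttpOrigin(
  props.frontends.admin.bucketWebsiteDomainName,
  {
    // S3 website endpoints only speak HTTP, not HTTPS.
    protocolPolicy: cloudfront.OriginProtocolPolicy.HTTP_ONLY,
  }
);
```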

Next comes the Lambda@Edge function, which trims the prefixes from the origin requests. This way, you can put an index.html into your admin bucket and don't have it named admin/index.html to be found just because it's under the admin/* path in CloudFront.

Lambda@Edge does not (yet?) support environment variables, so I used inline code here. This way, I could later generate the code dynamically based on the micro-frontends available. Inline code only supports Node.js up to v12 right now, so don't get too crazy with the syntax here.

Then we create the first behavior for the CloudFront distribution. This tells CloudFront where to get files for a specific path. In our case, URLs that match /admin/* should be fetched from an admin S3 bucket. This behavior also gets the Lambda@Edge function to clean up the URLs.

If you have more frontends, you have to create more behaviors.

Finally, we have to fetch our domain's hosted zone from Route53 and create a new record that links it to the CloudFront distribution.

The CloudFront distribution ID is added as output here because we need it later for invalidation.

Again, all these are the basics. Your container stack could receive all micro-frontends as props and loop through them to create CloudFront behaviors.
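A sketch of that loop, assuming props.frontends maps path prefixes to buckets and the names from the listing above (this is my own variation, not the article's stack):

```typescript
// Hypothetical loop: one CloudFront behavior per micro-frontend, so adding
// a frontend only requires a new entry in props.frontends.
for (const [prefix, bucket] of Object.entries(props.frontends)) {
  if (prefix === "root") continue; // the root frontend is the default behavior
  distribution.addBehavior(`/${prefix}/*`, new origins.S3Origin(bucket as s3.IBucket), {
    edgeLambdas: [
      {
        functionVersion: originPathTrimmer.currentVersion,
        eventType: cloudfront.LambdaEdgeEventType.ORIGIN_REQUEST,
      },
    ],
  });
}
```

The Lambda@Edge function would then also need to trim each prefix, not just /admin/, which is where generating its inline code dynamically comes in.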

The Lambda@Edge function is also elementary here. You can add more code to make the whole stack behave more like a regular web server.

Cache Invalidation

If you deploy a new micro-frontend version, you need to invalidate the CloudFront cache; otherwise, it delivers outdated versions.

This can be done with the AWS CLI.

For example, if we deployed a new admin frontend (micro-frontend stack and container stack), the command looks like this:

$ aws cloudfront create-invalidation \
  --distribution-id <DISTRIBUTION_ID> \
  --paths "/admin/*"

Here we need the distribution ID from the output of the container stack.

How does it Work?

We create files for a static website. When we deploy the micro-frontend stack, a new bucket is created, and the files are uploaded into that bucket.

Then we deploy the container stack, which creates the CloudFront distribution and creates a behavior for our website bucket. It will also give the distribution a custom domain.

When we later update a micro-frontend, we will end up with a new bucket that has to be linked in the container stack.

The old buckets won't be deleted, so we can roll back the container stack to a previous version if something goes wrong.

Conclusion

I think the overhead isn't too big to start with such an approach. Both stacks are under 100 lines of code right now.

Sure, it would be nice to have some additional features in there, so I wouldn't have to modify the stack code when I add a new frontend, but even if we follow the Pareto principle here, I should end up with a rather low-code solution.

What's next?

After my interviews, I had the impression that search will be the heartbeat of my product. While I can use third-party solutions for features that aren't core to the product, I think I'll have to look deeper into serverless search solutions. It seems that besides Algolia, there isn't much out there.
