Rajan Prasad
Posted on August 9, 2020
Yaaaay !! This is my first article on Hashnode. In this first part I'll cover why and how to create thumbnails from images using AWS Lambda. Later on, possibly in another article, I'll show how to create thumbnails from videos and extract metadata like duration, resolution and size as well.
We will start by understanding why it is necessary, then move on to the how-to.
To get started, we will choose an image processing library first. In our case we'll be using JIMP, which is quite a popular npm library.
Then we'll create a Lambda layer, since otherwise the Lambda function package gets significantly large, which takes way too much time to upload and deploy and is very frustrating to debug. We will use an S3 event as the trigger for our Lambda, as we want the thumbnail generation process to be automated; the handler then reads the S3 event for details of the uploaded image and processes it.
Why:
Consider you're building some web app which shows a list of registered users with their profile pictures. To keep the website light and fast, it's not a good idea to load an entire HQ image at a small display size, since it takes way too much time and gives a very bad user experience. If you have 100 users and each image is only 1MB, the page has to load 100MB just to render, but with thumbnails of, say, 20KB each, it only has to load 2MB. That's a 50x smaller payload, making our website lightning fast. Once the thumbnail view is displayed, if the user chooses to view someone's full picture, it only has to load that 1 extra MB.
How To:
First we start by installing the required libraries. To create thumbnails just from images we only need JIMP, but for video thumbnails a few more libraries are needed. So to cover all our use cases, we will install all of them at once and create the Lambda layer from them. The libraries are as follows:
- jimp: To process image (in our case, to resize it to be a thumbnail size)
- @ffmpeg-installer/ffmpeg: npm package that ships the ffmpeg binary for video processing
- ffmpeg-extract-frames: ffmpeg wrapper to take frame/frames of defined duration
- get-video-info-url: ffmpeg wrapper to extract video metadata
So to install these libraries, run the following commands:
mkdir media-layer
cd media-layer
npm init -y
npm i jimp @ffmpeg-installer/ffmpeg ffmpeg-extract-frames get-video-info-url
Now what we have to do is create a folder containing all our node_modules, zip it, and upload it to S3 in order to create the Lambda layer. We could do this through the AWS Console, but I don't prefer that for real projects, as there will be many stages and you'd have to do it manually every time, which is such a pain in the neck. Instead we will use the Serverless Framework to automate our deployment process via CloudFormation (i.e. Infrastructure as Code).
So, assuming you have already installed the Serverless Framework and set up your AWS credentials (programmatic access), follow along. If not, you can look into the Serverless Quick Start Guide. Inside our media-layer directory, run:
serverless create -t aws-nodejs
rm handler.js
mkdir mediaLib
mkdir mediaLib/nodejs
cp -r node_modules mediaLib/nodejs
So what we did here is: we generated a serverless template, which creates serverless.yml and handler.js files. We don't really need the handler.js file, since we're creating a layer, not a function, so we removed it. Then we created a mediaLib folder and, inside it, a nodejs folder; this is the directory convention Lambda expects when creating a layer as Infrastructure as Code. Finally we copied our node_modules into that folder.
Now, let's configure our serverless.yml file to get ready for the deployment of the Lambda layer.
service: media-layer

provider:
  name: aws
  runtime: nodejs12.x
  stage: ${opt:stage, 'dev'}
  profile: default
  region: ${opt:region, 'us-east-1'}
  deploymentBucket: my-bucket # Replace with your bucket name

layers:
  medialayer:
    path: mediaLib
    name: mediaInfo
    description: "Dependencies for thumbnail generation & extracting mediadata"
    compatibleRuntimes:
      - nodejs12.x
      - nodejs10.x
    retain: false # Set to true if you want previous versions to co-exist
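A side note: if you ever define a function in the same serverless.yml as the layer, the Serverless Framework also lets you reference the layer by its generated CloudFormation logical id (the layer key title-cased plus LambdaLayer) instead of a hard-coded ARN. A sketch, assuming the layer key `medialayer` from above:

```yaml
functions:
  mediafunction:
    handler: handler.mediahandler
    layers:
      - { Ref: MedialayerLambdaLayer } # "medialayer" title-cased + "LambdaLayer"
```

In our case the layer and the function live in separate stacks, so we'll use the ARN instead.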
Now all we need to do is deploy the stack and our Lambda layer will be created. YAAAY !! We're almost there.
To deploy the stack:
sls deploy --stage test --region us-west-2
At the end of the deployment it will print our layer ARN, which we can use with our Lambda function; alternatively, you can go to the AWS Console and get the layer ARN manually. It will be in the format:
arn:aws:lambda:us-east-1:XXXXXXXXXXXX:layer:medialayer:1
Now we can finally create our Lambda function and set S3 as its trigger.
cd ..
mkdir mediaFunction
cd mediaFunction
sls create -t aws-nodejs
Now, the serverless.yml file should look like this:
service: media-function

provider:
  name: aws
  runtime: nodejs12.x
  stage: ${opt:stage, 'dev'}
  profile: default
  region: ${opt:region, 'us-east-1'}
  deploymentBucket: my-bucket # Replace with your bucket name
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:*
      Resource:
        - "*"

functions:
  mediafunction:
    handler: handler.mediahandler
    layers:
      - arn:aws:lambda:us-east-1:XXXXXXXXXXXX:layer:medialayer:1 # Put your layer ARN here
    timeout: 20
    events:
      - s3:
          bucket: mybucket # replace with the bucket in which images will be uploaded
          existing: true
          event: s3:ObjectCreated:*
          rules:
            - prefix: contents/
Now, there is one important thing I want to explain here. We're listening for the ObjectCreated event on mybucket. So in our handler we'll put the created thumbnail in a different directory, because if we put it in the same contents/ directory, it will trigger the same Lambda function again, causing a chain of triggers that keeps creating thumbnails until the function times out. I vividly remember it creating something like 100 images for a single upload, and it took a while to figure out what was wrong.
Now, let's head to our handler file. It will look something like this:
"use strict";
const fs = require("fs");
const Jimp = require("jimp");
const AWS = require("aws-sdk");
const S3 = new AWS.S3();
module.exports.mediahandler = async (event) => {
let bucket = event.Records[0].s3.bucket.name;
let key = event.Records[0].s3.object.key;
let request = key.split("/");
let mediaName = request[1];
let newKey = `${request[0]}/thumbnails/${request[1]}`
const viewUrl = await S3.getSignedUrl("getObject", {
Bucket: bucket,
key: key,
Expires: 600
}
}
const myimage = await Jimp.read(viewUrl);
const bufferData = await myphoto
.cover(250, 250)
.quality(60)
.getBufferAsync("image/" +"png");
const params = {
Bucket: bucket,
key: newKey,
Body: bufferData,
ACL: "public-read",
ContentType: "image/png"
}
const result = await S3.upload(params).promise();
So essentially what we did here was: we read the S3 event for the bucket and key, changed the destination folder so as not to chain-trigger the event, and uploaded the thumbnail once it was generated via JIMP.
Hope this article is helpful. In the next article I'll explain how to generate thumbnails from videos as well as how to extract their metadata.
Thanks For Reading !!