Serverless web scraper in Ruby - tutorial
Marcin K.
Posted on October 5, 2019
Imagine you have this awesome web app that will make you very rich someday. The app has some end-user tests, and you used Selenium to automate all the manual work that requires browser interaction.
As your app grows, those end-user tests take more and more time that could be spent on something else.
You recruit more QA engineers, and they all have to configure Selenium, ChromeDriver, and a matching browser binary. This gets cumbersome and error-prone.
Why not run those tests in parallel? Why not keep them completely separate from our app, configured and run on a separate machine?
We can do it with serverless-chrome!
It's simply a Chrome binary built to run on AWS Lambda (at the time of writing, GCP and Microsoft Azure are not yet supported).
Let's build a very simple web scraping app with it. We are going to write it in Ruby - if you prefer writing in Python, here's an article for you.
Requirements:
- AWS account (and some very basic knowledge)
- Ruby installed (version 2.5.x)
- serverless-chrome (v1.0.0-37)
- ChromeDriver (2.37)
- Ruby gems: selenium-webdriver (I used 3.142.4) and bundler (2.0.x)
Note: your Lambda function, S3 bucket, role, and user should all be created in the same region.
Create role
Create an IAM role and attach the existing AWSLambdaFullAccess policy to it. Here is a tutorial for creating roles.
Create a user with programmatic access
Create an AWS user with programmatic access and attach the same policy to it as above.
Here is a tutorial for adding new users.
Write down the access key ID and secret access key that you obtain. Set them as environment variables (link) and configure them in your AWS profiles.
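For example, on Linux or macOS (the values and the profile name below are placeholders):
# make the credentials available to the AWS CLI and SDKs in this shell
export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key
export AWS_DEFAULT_REGION=us-east-1
# ...or store them in a named profile instead
aws configure --profile scraper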
Create an S3 bucket
Create an AWS S3 bucket. Here is how to do it.
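If you prefer the command line, the AWS CLI can create one as well (the bucket name and region are placeholders):
aws s3 mb s3://my-scraper-bucket --region us-east-1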
Create a Lambda function
Now let's go to the AWS console again and create our Lambda function.
When asked to enter the basic information for your function, pick any name you like and choose Ruby 2.5 as the runtime.
Once it's created, go to "Basic settings" in the function view and set the memory to 512 MB and the timeout to 1 minute.
Have a look at the template for our function:
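It should look roughly like this (the default handler the console generates for a new Ruby function):
require 'json'

def lambda_handler(event:, context:)
  # TODO implement
  { statusCode: 200, body: JSON.generate('Hello from Lambda!') }
end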
Also, assign the role that we have created earlier as the execution role.
Install ChromeDriver and serverless-chrome
Let's grab serverless-chrome:
wget https://github.com/adieuadieu/serverless-chrome/releases/download/v1.0.0-37/stable-headless-chromium-amazonlinux-2017-03.zip
unzip stable-headless-chromium-amazonlinux-2017-03.zip -d bin/
rm stable-headless-chromium-amazonlinux-2017-03.zip
And ChromeDriver:
wget https://chromedriver.storage.googleapis.com/2.37/chromedriver_linux64.zip
unzip chromedriver_linux64.zip -d bin/
rm chromedriver_linux64.zip
Install the selenium-webdriver gem
First, we need to create a Gemfile for our project with the following content:
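A minimal Gemfile, pinning the version from the requirements list above, might look like this:
# Gemfile
source 'https://rubygems.org'

gem 'selenium-webdriver', '3.142.4'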
Once we have the Gemfile, we can install the required gem and its dependencies:
bundle install --path vendor/bundle
Note the location we are installing it to. We will need to include this folder in the package deployed to AWS S3.
Implement the scraper
The first thing we will need is to set up a selenium driver.
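Here is a sketch of that setup, assuming the code lives in lambda_function.rb next to the bin/ folder we created above (build_driver is just a helper name used in this sketch):
# lambda_function.rb
require 'selenium-webdriver'

# Tell selenium where to find the chromedriver binary we unpacked into bin/.
Selenium::WebDriver::Chrome.driver_path = File.expand_path('bin/chromedriver', __dir__)

def build_driver
  options = Selenium::WebDriver::Chrome::Options.new
  # Point Chrome at the serverless-chrome binary instead of a system install.
  options.binary = File.expand_path('bin/headless-chromium', __dir__)
  options.add_argument('--headless')
  Selenium::WebDriver.for(:chrome, options: options)
end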
Note how we are passing paths to the binaries we have just installed.
Next, let's implement the lambda function itself.
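A sketch of the handler, reusing the build_driver helper from above (the query text is arbitrary):
require 'json'

def lambda_handler(event:, context:)
  driver = build_driver
  driver.navigate.to 'https://www.google.com'

  # Type a query into the search box and submit it.
  search_box = driver.find_element(name: 'q')
  search_box.send_keys 'serverless ruby'
  search_box.submit

  # Return the page title so we can see it in the invocation output.
  { statusCode: 200, body: JSON.generate(driver.title) }
ensure
  # Always shut the browser down, even if the scrape raises.
  driver&.quit
end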
Here I am just using the selenium-webdriver API to send some input to google.com and read back the browser title. If you would like to know the details or experiment a little with it, check out the Selenium API docs. Do not forget to tell the driver to quit at the end!
I am just going to add some additional driver options to make it more efficient:
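Flags along these lines are commonly used when running headless Chromium in Lambda's constrained environment; treat the exact set as a starting point rather than a requirement:
options.add_argument('--disable-gpu')
options.add_argument('--no-sandbox')
options.add_argument('--single-process')
options.add_argument('--window-size=1280,1024')
# Lambda only allows writes under /tmp, so keep Chrome's scratch space there.
options.add_argument('--homedir=/tmp')
options.add_argument('--data-path=/tmp/data-path')
options.add_argument('--disk-cache-dir=/tmp/cache-dir')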
Run the scraper code locally
If you're using macOS or Windows, you will need to test your code with Docker. The good news is that there is an image provided by AWS that mirrors the Lambda environment, and we can use it directly. We use the --mount flag here to make /dev/shm read-only.
docker run --rm -v "$PWD":/var/task --mount type=tmpfs,target=/dev/shm,readonly=true lambci/lambda:ruby2.5 lambda_function.lambda_handler
Upload to Lambda
Run the following commands to zip our code and its dependencies, upload the archive to S3, and update our function from there.
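Something along these lines should work; the bucket and function names below are placeholders for the ones you created earlier:
zip -r function.zip lambda_function.rb Gemfile Gemfile.lock bin vendor
aws s3 cp function.zip s3://my-scraper-bucket/function.zip
aws lambda update-function-code --function-name my_scraper_function --s3-bucket my-scraper-bucket --s3-key function.zip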
We are ready to invoke our function! Do it with
aws lambda invoke --function-name your_function_name output_file
in your terminal, or use the "Test" button in the function view in the AWS console.
Happy scraping!