Valentin Prugnaud 🦊
Posted on February 20, 2020
Repository: WhatDaFox/nestjs-fastify-cloud-run-poc
Configure Google Cloud
To be able to build and deploy, you will need a Google Cloud project with a billing account set up, as well as the Google Cloud CLI (gcloud) installed.
Then you will need to create a configuration for your project:
$ gcloud config configurations create cloud-run
$ gcloud auth login # and follow the steps
$ gcloud config set project YOUR_PROJECT_ID
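If you want to double-check which configuration and project are active before moving on, you can list your configurations (an optional sanity check):
$ gcloud config configurations list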
Create the project
For this proof of concept, I will only use the default NestJS application, which contains a single endpoint / returning Hello World!:
$ npm i -g @nestjs/cli
$ nest new cloud-run
Install the Fastify driver:
$ npm i --save @nestjs/platform-fastify
We need to update main.ts to make use of the Fastify driver. Also, Cloud Run decides which port our application listens on, so we have to update the main.ts file to read the PORT environment variable, like so:
import { NestFactory } from '@nestjs/core';
import { FastifyAdapter, NestFastifyApplication } from '@nestjs/platform-fastify';
import { AppModule } from './app.module';

async function bootstrap() {
  // Use the Fastify adapter instead of the default Express adapter.
  const app = await NestFactory.create<NestFastifyApplication>(
    AppModule,
    new FastifyAdapter({ logger: true }),
  );
  app.enableCors();
  // Cloud Run injects the port to listen on via the PORT environment variable.
  await app.listen(parseInt(process.env.PORT, 10) || 3000, '0.0.0.0');
}
bootstrap();
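Before containerizing anything, you can quickly verify this change locally. Assuming the default Nest scripts, something along these lines should return the hello world response (port 8080 is just an example):
$ PORT=8080 npm run start:dev
# in a second terminal, this should print "Hello World!":
$ curl http://localhost:8080/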
Now we are ready to create the Dockerfile.
Create the Dockerfile
We need to containerize our application so it can run on Cloud Run. Create a Dockerfile at the root of your project and copy/paste the following. For better performance, I decided to build the app beforehand and run the start:prod command:
# Use the official lightweight Node.js 12 image.
# https://hub.docker.com/_/node
FROM node:12-alpine
# Create and change to the app directory.
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
# Copying this separately prevents re-running npm install on every code change.
COPY package*.json ./
# Install dependencies
RUN npm install
# Copy local code to the container image.
COPY . ./
# Build the application
RUN npm run build
# Run the web service on container startup.
CMD [ "npm", "run", "start:prod" ]
Build & Deploy
Now, we can use Cloud Build to build our Docker image. Cloud Build will automatically detect our Dockerfile, build it, and push our image to Google Container Registry:
$ gcloud builds submit --tag gcr.io/YOUR_PROJECT/helloworld
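If you want to confirm the image landed in Container Registry, you can list the images for your project (optional; YOUR_PROJECT is the same placeholder as above):
$ gcloud container images list --repository gcr.io/YOUR_PROJECT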
Once that's done, we can run the following command to deploy our new revision to Cloud Run:
$ gcloud run deploy --image gcr.io/YOUR_PROJECT/helloworld --platform managed
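The deploy command will prompt for a service name and region, then print the service URL once it finishes. If you need that URL again later, something like this should work (the helloworld service name and region are examples):
$ gcloud run services describe helloworld --platform managed --region us-central1 --format 'value(status.url)'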
Benchmark
For testing, I ran a small benchmark (to avoid crazy costs) with ApacheBench (ab).
Here is the command I ran:
$ ab -n 1000 -c 80 https://cloud-run-url/
Here are the results:
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking cloud-run-url (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software: Google
Server Hostname: cloud-run-url
Server Port: 443
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-CHACHA20-POLY1305,2048,256
Server Temp Key: ECDH X25519 253 bits
TLS Server Name: cloud-run-url
Document Path: /
Document Length: 12 bytes
Concurrency Level: 80
Time taken for tests: 9.300 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 437004 bytes
HTML transferred: 12000 bytes
Requests per second: 107.53 [#/sec] (mean)
Time per request: 743.985 [ms] (mean)
Time per request: 9.300 [ms] (mean, across all concurrent requests)
Transfer rate: 45.89 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       79  461  213.7    430   2633
Processing:    37  208  118.9    200    506
Waiting:       22  163  105.3    139    501
Total:        129  669  220.7    626   2739
Percentage of the requests served within a certain time (ms)
50% 626
66% 702
75% 768
80% 772
90% 862
95% 1161
98% 1371
99% 1576
100% 2739 (longest request)
Conclusion
Compared to my previous experiment with the default installation of NestJS, I didn't observe any improvement in response time.