How To: Deploy Next.js Apps with Docker Containers (Efficiently!)


Zack Sheppard

Posted on June 7, 2021


So let's say you've written an awesome app in Next.js and you want to deploy it to a nifty containerized platform like Digital Ocean or Fly.io. But let's say that you, like me at the start of last week, have never containerized a Node app before and need a crash course in how to do that.

Here's what I learned going through this process to deploy Tweet Sweep to fly.io - both the naive first steps for making a container work at all and then also some necessary optimizations for it.

Follow Along

If you want to follow along, you will need Docker Desktop and Yarn installed. To keep things replicable, I'm using the Next.js Blog-Starter-Typescript example in these instructions. You can set that up locally with this command:

yarn create next-app --example blog-starter-typescript blog-starter-typescript-app

As a side note, the tips and tricks in here are generic for all containerized Node apps, but the Dockerfiles themselves will only work as an untweaked copy-paste if you're using Next.js. So, if you're using a different platform you might have to tweak which files get retained in your final container.

The Basics - Just make it work

So let's start with the 101 - what Docker is and why you'd want to use it. At its core, Docker containers are tiny virtual computers serialized to disk in a standardized format. To make them, you need three ingredients:

  1. A starter image to build upon - usually this is a full operating system image with some pre-installed software from Docker Hub.

  2. New files to add - in this case the code for your app.

  3. The steps to combine those first two components. This is what is stored in a Dockerfile and a .dockerignore file.

Using these three components you can wrap up your software into a standardized container that can be run on any machine that has the Docker software installed. (Note that this has a big "in theory" caveat attached - if you are doing complex, advanced operations then you might run into the limits of Docker's capabilities. However, for a straight-forward Next.js app like the one I'm using here, it works very well.)
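Concretely, for the example app those three ingredients sit in your project something like this (just a sketch - we'll create the Dockerfile and .dockerignore files below):

blog-starter-typescript-app/
├── Dockerfile        <- ingredient 3: the build steps
├── .dockerignore     <- ingredient 3: files to leave out of the build
├── package.json      <- ingredient 2: your app's code and dependency list
├── pages/, lib/, ... <- ingredient 2: the rest of your source
└── (ingredient 1, the node base image, gets pulled from Docker Hub)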

The Naive Dockerfile

So what do these instructions look like for our Next.js application?

# Naively Simple Node Dockerfile

FROM node:14.17-alpine

RUN mkdir -p /home/app/ && chown -R node:node /home/app
WORKDIR /home/app
COPY --chown=node:node . .

USER node

RUN yarn install --frozen-lockfile
RUN yarn build

EXPOSE 3000
CMD [ "yarn", "start" ]

Put these in a file named Dockerfile in the root folder of your app.

Understanding the Dockerfile

So what does this do? Well, Docker will step through these instructions one by one and do the following:

FROM node:14.17-alpine

This tells Docker that your app is building on a container that has Alpine Linux and Node 14.17 (with npm and yarn) preinstalled.
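If you want to sanity-check what that base image gives you, you can run it directly without any of our code in it (assuming Docker Desktop is running; the exact patch version may differ):

docker run --rm node:14.17-alpine node --version
# prints something like v14.17.0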

RUN mkdir -p /home/app/ && chown -R node:node /home/app
WORKDIR /home/app
COPY --chown=node:node . .

USER node

These are our first real instructions - we make a directory called /home/app, give ownership of it to a user named node, make it the "working directory" for our container (where Docker expects our main program files to live), and copy the files from the directory where we ran docker build into the container. Remember, the container is basically a little virtual computer, so we have to copy our files in there to access them!

We then become that node user. By default, Docker runs everything as root inside the container. That's pretty dangerous, since it gives root privileges to whatever code we run - meaning a small security flaw in Node or one of our NPM dependencies could potentially hand over access to our whole server. So, to avoid that, we switch to a non-root user.
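Once you've built the image in a moment, you can double-check that the container really isn't running as root by overriding the start command (whoami is part of the Alpine base image, and it should report node rather than root):

docker run --rm some-name whoami
# node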

RUN yarn install --frozen-lockfile
RUN yarn build

We install our NPM dependencies and build our Next.js server in production mode.
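These two commands just run the scripts defined in the example's package.json, which (at the time of writing) look roughly like this - next build produces the optimized production bundle that next start then serves:

{
  "scripts": {
    "dev": "next",
    "build": "next build",
    "start": "next start"
  }
}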

EXPOSE 3000
CMD [ "yarn", "start" ]

And finally, these two commands give Docker the instructions it will use when it runs this software. The first tells Docker that this container expects connections on port 3000, so it should expose that port from the container (we'll wire it up in a moment with the -p flag). The second tells Docker that the command to start this container is yarn start.
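Note that EXPOSE is mostly documentation - the actual wiring happens with the -p HOST_PORT:CONTAINER_PORT flag at run time, and the two sides don't have to match. For example, once we've built the image below:

# Map the container's port 3000 to port 8080 on your machine
docker run -p 8080:3000 some-name
# ...then visit http://localhost:8080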

Build and Run!

Now it's time to execute those steps and make your container. Run the following command in a terminal in your project directory (you can replace some-name with a personal tag like zacks-blog-1.0):

docker build -t some-name .

Your built image, containing the virtual machine ready to run your web app, will now show up locally if you check docker image ls:

$ docker image ls
REPOSITORY    TAG       IMAGE ID       CREATED          SIZE
some-name     latest    4c73a8c8d35c   2 minutes ago    622MB

Let's start it up:

docker run -p 3000:3000 some-name

(You can add the -d flag after run to run the server in the background instead.)
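If you do run it detached, a few standard Docker commands (nothing specific to this app) are useful for checking on and stopping it:

docker ps                    # list running containers and their IDs
docker logs <container-id>   # view the same yarn start output
docker stop <container-id>   # shut the server down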

You'll see the same logs as if you'd run yarn start normally. And, thanks to the -p 3000:3000 flag, your container is now connected to your local port 3000, so if you visit http://localhost:3000 you'll see your blog template:

[Screenshot: it worked - the blog template running at http://localhost:3000]

Optimize it - Getting this production ready

Great! You have now containerized your app. But before you go deploying it to your favorite hosting platform, there are a few things we need to do.

You might have noticed above that the size of our built image is over 600MB - more than 4x the size of our project on disk outside of the container! This problem only compounds as your apps get more complex - the built versions of the Tweet Sweep frontend container were almost 5GB at this point! That's a lot of data to upload to your servers!

Almost all of this size issue is related to one particular quirk of Docker - almost every line in the Dockerfile creates a new "layer" in your final Docker image. Each layer captures the changes made to the virtual machine after that line runs. This is a powerful optimization tool because it allows Docker to reuse work it's already done - for example if you have some setup that never changes like our mkdir line, Docker can compute that layer once and reuse it for all subsequent builds. However, it can also lead to image size issues (since lots of unneeded files might wind up being stored in those layers) and security issues (since you might capture secret values in those layers that could be siphoned off by someone who gets access to your final image).
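To make that caching behavior concrete, here's a small illustrative fragment (my own sketch, not part of the naive Dockerfile above) showing a cache-friendly ordering - it's the same trick the multi-stage Dockerfile further down uses in its deps stage:

# Copy ONLY the dependency manifests first, so the expensive install
# layer gets reused until package.json or yarn.lock actually change...
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# ...and only then copy the rest of the source, which changes constantly.
# (In the naive Dockerfile, COPY . . comes first, so ANY code change
# invalidates the cache and forces a full reinstall on every build.)
COPY --chown=node:node . .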

You can see the layers and their respective sizes using this command (credit to this post where I got it from):

docker history --human --format "{{.CreatedBy}}: {{.Size}}" some-name
CMD ["yarn" "start"]: 0B
EXPOSE map[3000/tcp:{}]: 0B
RUN /bin/sh -c yarn build # buildkit: 10.6MB
RUN /bin/sh -c yarn install --frozen-lockfil…: 340MB
USER node: 0B
COPY . . # buildkit: 155MB
WORKDIR /home/app: 0B
RUN /bin/sh -c mkdir -p /home/app/ && chown …: 0B
/bin/sh -c #(nop)  CMD ["node"]: 0B
/bin/sh -c #(nop)  ENTRYPOINT ["docker-entry…: 0B
/bin/sh -c #(nop) COPY file:238737301d473041…: 116B
/bin/sh -c apk add --no-cache --virtual .bui…: 7.62MB
/bin/sh -c #(nop)  ENV YARN_VERSION=1.22.5: 0B
/bin/sh -c addgroup -g 1000 node     && addu…: 104MB
/bin/sh -c #(nop)  ENV NODE_VERSION=14.17.0: 0B
/bin/sh -c #(nop)  CMD ["/bin/sh"]: 0B
/bin/sh -c #(nop) ADD file:282b9d56236cae296…: 5.62MB

From this we can see that about 117MB of the image size comes from layers created before our first command - this is the base size of the Alpine-Node image we're building on, so there isn't much we can do about that. But let's focus on the two main optimizations we can make after that point:

Easy: Ignore Stuff

In our naive Dockerfile we run the command COPY --chown=node:node . .. This copies all the files in our current directory into the Docker container. This is almost always not what you want! For example, you might have an .env file with secrets in it that will wind up in plain-text in the final Docker image. (You should use the env secrets feature on your hosting platform instead.)

In this app's case this unnecessarily copies the node_modules folder (since we then yarn install it again) and .next folder (since we rebuild the app inside the container). We can fix this with a .dockerignore file. This file, in the root of our project, tells Docker to skip certain files and folders when running COPY.

# .dockerignore file
.DS_Store
.next
node_modules
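Depending on your project you'll probably want to ignore more than that - in particular the .env files with secrets mentioned above and your .git history. A slightly fuller example (adjust to taste; if you rely on env files at build time, keep those):

# .dockerignore - extended example
.DS_Store
.next
node_modules
.git
.env*
npm-debug.log*
Dockerfile
.dockerignore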

Advanced: Get your Container a Container

Now the galaxy-brain move here is to use containers for our container. We're going to create two intermediate containers that are used only to build the application, separate from the final one that gets uploaded to the server. This saves us from uploading layers full of files that were only used or created along the way. Here's the Dockerfile for that (with comments explaining what each block does):

(Edit: After I posted this, Vercel got in touch to point out they have their own post with a sample Dockerfile. I've now incorporated some tips from theirs into this one.)


# Double-container Dockerfile for separated build process.
# If you're just copy-pasting this, don't forget a .dockerignore!

# We're starting with the same base image, but we're declaring
# that this block outputs an image called DEPS that we
# won't be deploying - it just installs our Yarn deps
FROM node:14-alpine AS deps

# If you need libc for any of your deps, uncomment this line:
# RUN apk add --no-cache libc6-compat

# Copy over ONLY the package.json and yarn.lock
# so that this `yarn install` layer is only recomputed
# if these dependency files change. Nice speed hack!
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile

# END DEPS IMAGE

# Now we make a container to handle our Build
FROM node:14-alpine AS BUILD_IMAGE

# Set up our work directory again
WORKDIR /app

# Bring over the deps we installed and now also
# the rest of the source code to build the Next
# server for production
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN yarn build

# Remove all the development dependencies since we don't
# need them to run the actual server.
RUN rm -rf node_modules
RUN yarn install --production --frozen-lockfile --ignore-scripts --prefer-offline

# END OF BUILD_IMAGE

# This starts our application's run image - the final output of build.
FROM node:14-alpine

ENV NODE_ENV production

RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001

# Pull the built files out of BUILD_IMAGE - we need:
# 1. the package.json and yarn.lock
# 2. the Next build output and static files
# 3. the node_modules.
WORKDIR /app
COPY --from=BUILD_IMAGE --chown=nextjs:nodejs /app/package.json /app/yarn.lock ./
COPY --from=BUILD_IMAGE --chown=nextjs:nodejs /app/node_modules ./node_modules
COPY --from=BUILD_IMAGE --chown=nextjs:nodejs /app/public ./public
COPY --from=BUILD_IMAGE --chown=nextjs:nodejs /app/.next ./.next

# 4. OPTIONALLY the next.config.js, if your app has one
# COPY --from=BUILD_IMAGE --chown=nextjs:nodejs /app/next.config.js ./

USER nextjs

EXPOSE 3000

CMD [ "yarn", "start" ]

The Results

Now if you build that (again with docker build -t some-name-optimized .) and run it (docker run -p 3000:3000 some-name-optimized) you'll be able to connect to it on localhost:3000 same as before.

What has changed, then? Well, if we list our images:

$ docker image ls                      
REPOSITORY           TAG      IMAGE ID       CREATED       SIZE
some-name-optimized  latest   518ed80eae02   1 hour ago    243MB
some-name            latest   4c73a8c8d35c   2 hours ago   622MB

You can see we've reduced our final image's size by almost a factor of 3! That's a lot less data we'll need to upload to our server with every deploy! I saw similar results when I employed this strategy on Tweet Sweep's containers - it saved me gigabytes of upload bandwidth each time I shipped.
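If you're curious where the remaining ~240MB comes from, you can point the same docker history command from earlier at the new image (your output will differ, but the bulk should now be the base image plus the production-only node_modules):

docker history --human --format "{{.CreatedBy}}: {{.Size}}" some-name-optimized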

The Actual Deploy

Ok, so now that we have our app containerizing successfully, how do we actually deploy it? For this, I've been using fly.io because their Docker support is strong and their service has a generous free tier, but Heroku and Digital Ocean handle Docker deploys well too.

With Fly, I'd recommend just following their step-by-step instructions for deploying with Docker. TL;DR: you create an app on your account and a corresponding fly.toml file locally, then the command flyctl deploy will run your Dockerfile build, upload all the resulting layers to their service (this is why it's important to optimize their size!), and start them on a VM. After that, deploys really are as easy as running flyctl deploy again, thanks to the self-contained nature of containers!
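For reference, the Fly commands I used looked roughly like this (current as of mid-2021 - check Fly's docs for the latest flow):

# One-time setup: authenticate, then create the app and its fly.toml
flyctl auth login
flyctl launch

# Every deploy after that: build the Dockerfile, upload the layers,
# and start the new version on Fly's servers
flyctl deploy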

More Optimizations?

I'm still learning Docker, so these optimizations are just the first ones I've come across. If you've played around with it and know more ins and outs that one should include while containerizing a Node.js app, please let me know down in the comments.
