Frontend dockerized build artifacts with NextJS
Ernesto Freyre
Posted on December 4, 2019
When deploying frontend applications there are several routes you can take. None of them is bad; they just fit different use cases. You can dockerize the app (that is, build a Docker container with your application assets and runtime) and deploy it to any infrastructure that supports containers (Kubernetes, et al.), or you can take the simpler (and increasingly popular) route of creating a static build of your app and serving it over a CDN (Content Delivery Network), with all the benefits this entails: no servers to run, content at the edge closer to users, and therefore a faster experience.
Now, you probably want multiple runtime environments, most of the time at least three: development, staging and production. This affects your build and deploy pipelines. Let's say your latest app version is working well (tested and all) on staging and you decide to promote that version to production. Depending on how builds are created, you can end up with a broken version of your app in production just by having dependencies that are not correctly managed: your build pipeline performs another build from the production branch (or tag), some dependency has changed in the meantime, and now we have shipped broken code to our users. Not good.
Dockerizing our application definitely helps. We can create one Docker image per commit, environment agnostic, tagged and stored in our registry. We can then promote or run that image in any environment with confidence. Since NextJS is in the title of this post, let's see how to dockerize a NextJS application.
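A two-stage Dockerfile along those lines might look like this (a sketch only: the base image, script names and copied paths are assumptions, so adapt them to your project):

```dockerfile
# Stage 1: install everything, build, then drop dev dependencies
FROM node:12-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build
RUN npm prune --production

# Stage 2: copy only the build output and production dependencies
FROM node:12-alpine
WORKDIR /app
COPY --from=builder /app/package.json ./package.json
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/public ./public
EXPOSE 3000
CMD ["node_modules/.bin/next", "start"]
```

Building it with `docker build -t fe-app-image .` produces the image used in the commands below.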
The Dockerfile described has two stages. The first installs all dependencies (including development dependencies), makes a production build, and then removes the non-production dependencies. The second stage copies only the relevant files, including the build output and production dependencies, giving us a leaner, more compact image we can then run with:
$ docker run -d -p 3000:3000 fe-app-image
Since we want to run the same image across runtime environments we can also do:
# Development
$ docker run -d -p 3000:3000 \
-e API=https://dev-api.myapp.com \
fe-app-image
# Staging
$ docker run -d -p 3000:3000 \
-e API=https://staging-api.myapp.com \
fe-app-image
# Production
$ docker run -d -p 3000:3000 \
-e API=https://api.myapp.com \
fe-app-image
Or even for local development or tests:
# Local dev
$ docker run -d -p 3000:3000 \
-e API=http://192.168.1.87:5000 \
fe-app-image
Docker images are neat. But with this setup we still depend on servers in our runtime environments to deploy our app so users can access it. The other alternative we described was static deploys: build your app so the output is just a bunch of HTML, JS and CSS files we can put in a folder and serve via a CDN. The main problem with this approach is the lack of a runtime; in other words, we cannot make the static build environment agnostic. Injecting environment properties then becomes a problem we need to solve, whether via config endpoints (fetched before the app loads), environment sniffing (checking the domain the app is running on and inferring env vars from it), or injecting HTTP headers (not sure yet). All of these require extra work. (If you have solved this problem, please comment with your solutions.)
What we usually see with static deploys is: every time we want to deploy to a specific environment, we run the build with that environment's runtime vars so they get baked into the output. This approach works, and it is probably what you are using right now if you are doing static deploys at all. But it still has the problem described above: if some dependency changed or is not well managed at build time, we cannot guarantee the new build will behave the same way.
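Concretely, "baked in" means one build per environment, something like the following (the script names are assumptions about your package.json):

```shell
# Staging deploy: the env var is read at build time and baked into
# the static output, so this build only works for staging
API=https://staging-api.myapp.com npm run build && npm run export

# Production needs its own, separate build
API=https://api.myapp.com npm run build && npm run export
```

Each of these builds runs the dependency installation and compilation again, which is exactly where the non-reproducibility creeps in.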
How can we protect ourselves from this problem and still do static deploys? (Having no servers to maintain is really appealing.) Well, one approach is to still create a Docker image of your app (using the Dockerfile described above), so that build time is separated from deploy time.
At deploy time, we can pull any image (easy rollbacks FTW) and run it with a different entrypoint, so that instead of starting the app it exports its static assets. (This is possible in NextJS thanks to the next export command.)
# Deploying to production
$ docker run \
-e API=https://api.myapp.com \
-v ~/cd-folder/out:/app/out \
--entrypoint "node_modules/.bin/next" \
fe-app-image export
# Copy static assets from ~/cd-folder/out to your production CDN
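That copy step depends on your CDN. As one illustration (an assumption, not from the original setup), with an S3 bucket behind CloudFront it could look like this; the bucket name and distribution ID are placeholders:

```shell
# Sync the exported static build to the bucket backing the CDN
aws s3 sync ~/cd-folder/out s3://my-app-bucket --delete

# Invalidate cached paths so users get the new version
aws cloudfront create-invalidation \
  --distribution-id MY_DISTRIBUTION_ID \
  --paths "/*"
```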
Why?
- Builds and deploys are separated. Dependency problems are no longer an issue.
- Deploy optionality: we can now choose how to deploy our apps. Kubernetes using Docker, or a static deploy using a CDN.
- Easy rollbacks. We can build, tag and store all of our builds in a Docker registry, then choose which version we want to deploy directly from the registry.
- Easier local development experience. Any dev team member, frontend or not, can run any version of the frontend locally.
- SSR optionality. Static deploys don't support SSR completely, just partial pre-rendering of pages. But you can get SSR back by deploying your app as a Docker container again.
- Easier local automated tests. Just run your Docker container pointing to a mountebank server: http://www.mbtest.org/
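The mountebank setup in that last point can be sketched as follows (container names, the network and the imposter port are assumptions; a real test would also POST stub responses to the imposter):

```shell
# Shared network so the app container can resolve the stub server by name
docker network create test-net

# Run mountebank; 2525 is its admin API, 5000 is a port we open an imposter on
docker run -d --network test-net --name stubs \
  -p 2525:2525 -p 5000:5000 bbyars/mountebank

# Create a minimal HTTP imposter on port 5000 via the admin API
curl -s -X POST http://localhost:2525/imposters \
  -H "Content-Type: application/json" \
  -d '{"port": 5000, "protocol": "http"}'

# Point the app at the stub instead of a real API
docker run -d --network test-net -p 3000:3000 \
  -e API=http://stubs:5000 \
  fe-app-image
```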
Happy hacking!