Docker in development (with Node.js)
Akshay Gupta
Posted on September 15, 2021
This post will help you set up Docker so that you can quickly and easily use it in a Node.js development environment, without much hassle!
We will learn the basics of Docker volumes first and then move on to how to use them during the development phase.
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.
Basics Of Volumes
Creating a volume is pretty simple using the docker volume create command
$ docker volume create myvol
We can also remove the volume straight away by using the remove command
$ docker volume remove myvol
You can also verify that the volume has been created by using the list command to list the volumes on your system:
$ docker volume ls
DRIVER VOLUME NAME
local 88b0dd3439a42b08ab161dfb718b1fdcb548d776521f0e008a0e6b002ecd1ee7
local 96a6b003a662d7461c100e3bef816322f036adba8eef1483755551aa463ba7b4
local myvol
As we can see, our volume myvol has been created with the local driver. We can also get some more information about the volume with the inspect command
$ docker inspect myvol
[
{
"CreatedAt": "2021-09-13T18:20:00Z",
"Driver": "local",
"Labels": {},
"Mountpoint": "/var/lib/docker/volumes/myvol/_data",
"Name": "myvol",
"Options": {},
"Scope": "local"
}
]
Among other information, this command shows the Mountpoint for our volume data, which is /var/lib/docker/volumes/myvol/_data. We can cd into this directory and see the data for the volume. This data could be your codebase, metadata, or any other data that you store in the volume.
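To see the volume in action (this quick example is not from the original post; the alpine image and the /data mount path are just arbitrary choices), you can write a file into myvol from one throwaway container and read it back from another, showing that the data outlives both containers:
$ docker run --rm -v myvol:/data alpine sh -c 'echo "hello from myvol" > /data/hello.txt'
$ docker run --rm -v myvol:/data alpine cat /data/hello.txt
hello from myvol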
But there is a catch!!
Are you a Mac user? If not, you can skip this section, but if you are, this might be helpful. You can't directly cd into the /var/lib/docker folder; if you try, you'll get
$ cd /var/lib/docker
cd: no such file or directory: /var/lib/docker
Why is that?!
That's because Docker Desktop on macOS actually runs a VM behind the scenes, since Docker, because of the way it's built, is not directly compatible with macOS. But there are ways to access the underlying data in the VM.
- One option is to log into the VM's shell using netcat
$ nc -U ~/Library/Containers/com.docker.docker/Data/debug-shell.sock
You can then cd into the data directory
/ # cd /var/lib/docker/volumes
You can exit the shell by typing the exit command or pressing ctrl+c on your keyboard
- Another option is using nsenter in a privileged container, like below
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
This will open the shell same way as the first option.
Check out this gist by Bret Fisher to know more :)
Note: For Windows users, Docker artifacts can be found at \\wsl$\docker-desktop-data\version-pack-data\community\docker\. If this does not work, I would suggest going through related discussions on Stack Overflow and the Docker forums (example: here) to see how to access the data.
Cool! Now that we are done with the basics of volumes, let's jump into the code!
A Node.js Express API
Let's quickly set up an Express application. We won't waste much time here; we'll pull the sample "Hello World" example from the Express.js website
$ mkdir node_docker_demo
$ cd node_docker_demo
$ yarn init -y
$ yarn add express
$ touch index.js
In index.js, let's paste the following sample code
const express = require('express')
const app = express()
const port = 3000
app.get('/', (req, res) => {
res.send('Hello World!')
})
app.listen(port, () => {
console.log(`Example app listening at http://localhost:${port}`)
})
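Before containerizing anything, it's worth a quick sanity check that the app works directly on the host (curl here is just one way to hit the endpoint):
$ node index.js
Example app listening at http://localhost:3000
And in another terminal:
$ curl http://localhost:3000
Hello World!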
Now that we have an Express application running, let's write our Dockerfile!!
Dockerfile Setup
We will start by pulling the node:latest image from the registry. (The version we pull doesn't matter much in our case because it is a simple Express app, but you might want to stick to a specific version to avoid backward-compatibility issues, or upgrade Node.js and your dependencies accordingly.)
FROM node:latest
Let's also set our work directory in the image so that we don't have to mention the absolute path every time
WORKDIR /app
Next up, we will install node_modules in our image, and for that we need package.json and either yarn.lock or package-lock.json (depending on whether you used yarn or npm) in the image
COPY ["package.json", "yarn.lock", "./"]
RUN yarn install
This would copy both package.json and yarn.lock into the current working directory (specified by ./).
Note: our current working directory has been set to /app
Running yarn install after that would install all the required dependencies in node_modules
Now our directory structure inside the image looks something like this
app
|_ package.json
|_ yarn.lock
|_ node_modules
Next let's copy everything else we have in our project with
COPY . .
This will copy everything from our host's current working directory (the first .) into the image's working directory (the second .)
All that's left to do is run the server with
CMD ["node", "index.js"]
All in all our Dockerfile looks like this
FROM node:latest
# setting work dir
WORKDIR /app
## The following steps are done before copying the remaining files
## to make use of docker's caching capabilities
# copying files required to install node modules
COPY ["package.json", "yarn.lock", "./"]
# install node_modules
RUN yarn install
# copy everything else
COPY . .
# document the port the app listens on (published later with port-mapping)
EXPOSE 3000
# run server
RUN ["node", "index.js"]
Gotcha! There is a small issue here: we are installing node modules with yarn install before copying every other file, but then when we do COPY . . we would be copying node_modules into the image again. To prevent this, we will create a .dockerignore file and tell Docker to ignore node_modules while copying data into the image
.dockerignore
node_modules
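Depending on your project, you may also want to ignore a few other files the image doesn't need; this is just a suggestion, adjust it to your repo:
node_modules
.git
npm-debug.log
Dockerfile
.dockerignore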
Let's build this with the docker build command and then run it
$ docker build -t myapp .
$ docker run -it --rm -p 3000:3000 --name myapp_container myapp
Example app listening at http://localhost:3000
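With the port published, we can hit the containerized server from the host just like before (again, curl is just one way to check):
$ curl http://localhost:3000
Hello World!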
We have now successfully containerized our Node.js application, but there is one issue:
If we make any change in our codebase, as we do hundreds of thousands of times during development, we would need to rebuild the image and run the container again (hundreds of thousands of times)
That can't be a good strategy. There must be a better way to do this.
Thankfully, there is! VOLUMES!
For the purposes of this use case we will use bind mounts. Essentially, we will bind our host's current working directory to the container's working dir (/app) and attach a file watcher (e.g. nodemon), so that as soon as we save a change in development, that change gets propagated into the container (because volume!), nodemon detects it, and our Node.js server is reloaded.
We can configure bind-mount while running our container
$ docker run -it --rm \
-p 3000:3000 \
-v $(pwd):/app \
--name myapp_container \
myapp
The -v $(pwd):/app flag above mounts the current working directory to /app inside the container. Another way to do it is using the --mount flag
$ docker run -it --rm \
-p 3000:3000 \
--mount type=bind,source=$(pwd),target=/app \
--name myapp_container \
myapp
This is fine and dandy, but it's not enough! We also need to configure a file watcher, like we discussed. Another thing to keep in mind: since we are using bind mounts now, there is no need to actually COPY anything from our local host into the image!! So let's remove that, add nodemon to our image, and see how things look
FROM node:latest
# setting work dir
WORKDIR /app
# install nodemon globally
RUN npm i -g nodemon
# run the server with watcher
CMD ["nodemon", "index.js"]
That's it!! Let's build this file and run it
$ docker build -t myapp .
$ docker run -it --rm \
-p 3000:3000 \
-v $(pwd):/app \
--name myapp_container \
myapp
Now when we make a code change, the watcher will detect it and restart the node.js server automatically!
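For example (purely an illustration), if you edit index.js on the host and change the response to res.send('Hello Docker!'), nodemon restarts the server inside the container and the next request reflects the change, with no rebuild needed:
$ curl http://localhost:3000
Hello Docker!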
And that is how you can get started developing Node.js applications with Docker!