Mohammad Reza Karimi
Posted on November 14, 2022
Hi guys!
In the last part, we discussed a little about what images and containers are. In this part, we will make our own images and wrap our application in a container.
Make a Dockerfile in the root
First of all, make a file named Dockerfile, without any extension, in the root directory:
Tip: you can name this file whatever you want, but Dockerfile is the default name Docker recognizes; other names have to be passed explicitly at build time.
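For the curious, a custom-named file is passed with the -f flag (the file name here is just an example; we'll meet docker build properly later):

# Build using a file called prod.Dockerfile instead of the default
docker build -f prod.Dockerfile .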
Starting our image
As I said before:
An image is a collection of applications and commands running on a base image, and this base image itself is a slimmed-down Linux distro.
And a Dockerfile is the file that describes how our image is built.
First of all, we should specify what base image we're going to run our application and commands on, and we do it like this:
FROM base_image
The latest Alpine version as the base image:
FROM alpine
A specific Alpine version (3.15) with a specific Node.js version (18) pre-installed:
FROM node:18-alpine3.15
And you can find other base images and their tags (versions) on Docker Hub.
Tip: always use explicit version tags; latest is a moving tag, so pinning a version keeps your builds reproducible.
Command the image!
You can create different users with different permissions in your image and run commands as them, but for now, we're not going into that.
You're already familiar with the FROM instruction, which is used for pulling the base image; there are more:
RUN:
The RUN instruction is used to run commands while the image is being built.
From docker:
Executes any commands on top of the current image as a new layer and commits the results.
Commands could be, for example, npm install,
or go mod tidy,
or pip install -r requirements.txt,
or more complex commands like apt-get update && apt-get install -y curl,
or any other command you run inside Linux! A short example follows.
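Here's a minimal sketch on an Alpine-based Node image (the packages installed here are just illustrative):

FROM node:18-alpine3.15
# Each RUN executes in a new layer on top of the previous one
RUN apk add --no-cache curl
RUN npm install -g typescript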
CMD:
Provides defaults for an executing container, for example npm run start,
or running a compiled executable.
Tip: only one CMD takes effect in a Dockerfile; if you write more than one, only the last one is used.
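CMD comes in two flavors, shell form and exec form; a quick sketch of both (the exec form is generally preferred since it doesn't wrap the command in an extra shell):

# Shell form: runs via /bin/sh -c
CMD npm run start
# Exec form: runs the command directly
CMD ["npm", "run", "start"]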
COPY:
Copies files or folders from the project directory on your machine (called source) to a path in the image's filesystem (called dest).
# COPY source dest
COPY ui .
COPY . ./
# Or many files, last argument is dest
COPY go.mod go.sum ./
# Or
COPY ["go.mod", "go.sum", "./"]
WORKDIR:
The WORKDIR instruction sets the working directory for the instructions that follow it in the Dockerfile.
FROM alpine
WORKDIR /app
# Next commands will run in /app
COPY file1 file2 ./
COPY file3 file4 ./
# now, we have /app/file1 /app/file2 etc.
CMD ["./main"]
# command above will look for /app/main
ENV:
Defines environment variables.
ENV PG_PASS=12@3fl
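An ENV value baked into the image becomes the default for containers, and you can override it at run time with -e (a sketch, assuming an image named my-app):

docker run -e PG_PASS=another_pass my-app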
There are many other instructions like EXPOSE, ADD, and ENTRYPOINT, which you can read about in the Dockerfile reference.
Build images
Let's start with a node.js example, but don't worry if you're not a node.js developer.
After writing a complete Dockerfile like this:
FROM node:18-alpine3.15
COPY src .
COPY package*.json .
RUN npm install
CMD npm run start
Tip: package*.json matches all files starting with package and ending with .json => package.json and package-lock.json
We can build the image by running this command:
docker build <path>
We usually run this command inside project root directory and use a single dot . for path: docker build .
On its own, this command leaves the image unnamed (identified only by a random ID), but we can name it ourselves using tags:
docker build -t <image_name> <path>
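For example (my-app is just a placeholder name):

docker build -t my-app .
# Or with an explicit version tag
docker build -t my-app:1.0 .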
Caching and layered images
The last version will work, but it's not efficient; it's highly recommended to take advantage of Docker's caching.
Docker images are layered; simply put, every instruction in the Dockerfile creates a layer, and the Dockerfile is read line by line.
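You can inspect these layers yourself with docker history (assuming an image tagged my-app):

docker history my-app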
What do we cache in the world of Docker?
Mostly, project dependencies and packages; we also do not COPY them (node_modules) from source to destination, since they get installed during the build.
As you might know, the npm install command is quite heavy: it installs the project's dependencies, reading what it needs from the package.json file.
Look at example below:
project structure:
- index.js
- node_modules (dependencies)
- package.json
COPY index.js package.json ./
RUN npm install
CMD npm run start
It's good and working, but what happens if we change package.json (say, we add a new dependency)?
The dependencies will be installed again, and that's OK.
But what happens if we change index.js without changing the dependencies?
The dependencies will be installed again! And that's so NOT OK!
That happens because we copy package.json and index.js in the same layer and do the installation after it, so changing either file invalidates the cache for everything that follows.
We should copy all files except the dependency manifest (package.json) only after installing the dependencies:
COPY package.json ./
RUN npm install
COPY index.js ./
CMD npm run start
It's way faster!
In this case, if we change only index.js, Docker reuses the cached layers for the first two instructions and re-runs the build from line 3 onward, because that's where index.js is copied.
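Putting this together with WORKDIR from earlier, a fuller sketch of a cache-friendly Node Dockerfile could look like this (the paths and start script are assumptions about your project):

FROM node:18-alpine3.15
WORKDIR /app
# Dependency manifest first: this layer stays cached until package*.json changes
COPY package*.json ./
RUN npm install
# Source code last: editing it won't trigger a reinstall
# (in practice, a .dockerignore file keeps local node_modules out of this COPY)
COPY . ./
CMD ["npm", "run", "start"]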
Running your image (container)
Now we've learned how to make custom images, and we already know how to run a container and expose ports from the last tutorial.
Running your own container is the same as running a Redis or Postgres container.
You can also make your own Redis or Postgres image!
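Running it looks just like before (assuming you tagged the image my-app):

docker run my-app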
Exposing port
A container is an isolated environment, which means all ports are closed by default.
If you have an HTTP server running inside a container, you should open (expose) your preferred port.
There is an EXPOSE instruction, but it's essentially documentation and doesn't publish anything by itself; I personally prefer using -p <host_port>:<container_port> when running a container, which actually publishes the port whether or not EXPOSE is present.
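For example, mapping port 3000 on your machine to port 3000 inside the container (the ports and image name are placeholders):

docker run -p 3000:3000 my-app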
Resources and Final quote
I tried to help you see that Docker is not a challenge; it's a tool that makes your life easier. This part covered the basics of writing a Dockerfile and building your own custom images.
There are so many tutorials out there, but I'm gonna suggest the ones that I enjoyed the most.
As an introduction that shows how containerization helps us: Containerization explained
Docker and Kubernetes course by Stephen Grider