Docker from zero to pro
Marriane Akeyo
Posted on August 1, 2023
In this article, we take a practical tour of Docker essentials, covering the knowledge needed to get comfortable with this technology. We will dig into Docker commands with examples and finish with Docker Compose. Let's dig in!
Docker definition
Docker can be defined as an application used to automate the deployment, management and scaling of applications. It runs applications in an isolated environment, similar to a virtual machine but more lightweight. Docker works with "boxes" called containers which, as the name suggests, hold everything required to run the application, from the code to its dependencies. Multiple containers can run on the same machine without affecting each other, since each container carries its own dependencies. They do, however, all share the host's kernel (the part of the operating system that sits between the hardware and the software, managing system resources, facilitating communication between hardware and software, and keeping the system stable and secure).
Advantages of Docker
- Containers use less disk space, since the application and its dependencies live inside the container rather than being installed locally.
- They are fast to boot: once the container is set up, it starts with a single command.
- They are easy to set up compared to locally running instances of the same application, which are limited by the host computer's capabilities, and they are easy to scale when needed.
- Applications become easy to share with others, deploy and test.
The installation steps for Docker and docker-compose (discussed later in the article) depend on the operating system you are running. See the official Docker documentation for installation guidance.
To verify that Docker is correctly installed, use the following command:
docker --version
Output example depending on the version installed
Docker version 24.0.4
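As an additional quick check, you can run Docker's small test image, which pulls hello-world from Docker Hub and prints a confirmation message if everything works (a sanity-check sketch):
# pulls and runs a tiny test image end to end
docker run hello-world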
Docker Image
This is a package template used to create one or more containers, as we shall see in the examples below.
Docker hub
There are common tools used in day-to-day programming, such as languages (e.g. Python), web servers (e.g. nginx) and databases (e.g. PostgreSQL), which are used when creating various applications. These tools are stored in a registry called Docker Hub in the form of images. You can create a Docker Hub account to push your own images and make them public or private; pulling public images works without one.
Once Docker is installed, we can easily pull images from Docker Hub.
The following command pulls a Python 3.10 image from Docker Hub:
docker pull python:3.10-alpine3.18
The alpine3.18 part of the tag pulls a variant of Python 3.10 built on Alpine Linux, a minimal distribution, making it the smallest Python 3.10 image available and saving storage space. It is advisable to use an "alpine" tag when pulling or defining images.
To view all the images we can run
docker images
Output:
REPOSITORY TAG IMAGE ID CREATED SIZE
python 3.10-alpine3.18 7d34b8225fda 7 weeks ago 49.7MB
Pulling the standard Python 3.10 image with
docker pull python:3.10
and then viewing the installed images shows the difference in size between the standard image and the alpine variant:
REPOSITORY TAG IMAGE ID CREATED SIZE
python 3.10 d9122363988f 6 weeks ago 1GB
python 3.10-alpine3.18 7d34b8225fda 7 weeks ago 49.7MB
To remove an image run
docker image rm <image_name/image_id>
Example:
docker image rm python:3.10
Output:
d9122363988f
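If unused images pile up over time, a prune can clean them up in one go; a small sketch of the commands (use the -a variant with care, as it removes every image not used by a container):
# remove dangling images (layers with no tag)
docker image prune
# remove all images not used by any container
docker image prune -a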
Docker Container
Once an image is available, it can be used to create a container, since a container is a running instance of an image.
For example using the python image above:
docker run --name pytry -it python:3.10-alpine3.18
The --name flag is used to define the name that will be used to reference the container; otherwise a random name is assigned to it.
The -it flags run the container in interactive mode (-i keeps input open and -t allocates a terminal).
The above command produces a python shell which we can use to run various commands as shown.
Python 3.10.12 (main, Jun 15 2023, 03:17:49) [GCC 12.2.1 20220924] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> print("I am a python container")
I am a python container
>>>
To view all the flags that can be used with the docker run command use:
docker run --help
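A few of those flags come up constantly. As a quick sketch (the container names here are just examples):
# --rm removes the container automatically once it exits
docker run --rm -it --name pytemp python:3.10-alpine3.18
# -d runs the container in the background (detached mode)
docker run -d --name sleeper python:3.10-alpine3.18 sleep 300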
Now that the container is running, run the command below in another terminal window to view running containers.
Command:
docker ps
Output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5bcde232650b python:3.10-alpine3.18 "python3" 32 seconds ago Up 31 seconds pytry
To view all the containers present in Docker (including stopped ones), use:
docker ps -a
Move to the terminal window running the Python container and run the command exit() to return to the command line.
Notice that running docker ps
to view running containers now produces no results, since our container has exited.
To make the example above a little more involved, we can create a file called hello.py in the current directory and write some code in it. Then we run the command again, but with a different container name to avoid conflicts.
# file in the current directory
hello.py
from datetime import datetime
time = datetime.now()
print(f'I am a python file in a docker container running at {time}')
Command:
docker run --name pytryFile -w /app -v "$(pwd):/app" python:3.10-alpine3.18 python hello.py
Output:
I am a python file in a docker container running at <a_timestamp_of_the_time_right_now>
Let's break down this command:
- -v "$(pwd):/app" mounts the current directory (pwd) into the container at the /app directory, allowing access to your Python script.
- -w /app sets the working directory inside the container to /app.
- python:3.10-alpine3.18 is the image name
- python hello.py is the command that runs your Python script inside the container.
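As a small extension of this example, environment variables can be passed into the container with the -e flag and read inside the script; a sketch, where GREETING is just an illustrative variable name (hello.py could read it with os.environ.get("GREETING")):
docker run --rm --name pytryEnv -w /app -v "$(pwd):/app" -e GREETING=hello python:3.10-alpine3.18 python hello.py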
Managing Containers
To stop a running docker container use:
docker stop <container_name/container_id>
example:
docker stop pytry
Output:
pytry
To start a stopped container:
docker start <container_name/container_id>
example:
docker start pytry
Output:
pytry
To remove a docker container :
docker rm <container_name/container_id>
example:
docker rm pytry
To list all container IDs:
docker ps -aq
To get rid of all of the containers:
docker rm $(docker ps -aq)
To view other interesting flags you can use to manipulate containers:
docker ps --help
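Another handy command when managing containers is docker exec, which runs an extra command inside an already running container, for example to open a shell in it:
# open an interactive shell inside a running container
docker exec -it <container_name/container_id> sh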
Docker Volumes
They enable easy sharing of information, such as files and folders, between the host and a container, and between containers themselves, just as the hello.py example above demonstrated.
As another example, we create a new folder and, inside it, a file called hello.html containing:
<h2> Hello World </h2>
To display this content on a web browser using the nginx web server, run the nginx image as shown below:
docker run --name helloWorld -v $(pwd)/hello.html:/usr/share/nginx/html/hello.html:ro -p 8080:80 -d nginx
# View the content on the browser as shown:
http://localhost:8080/hello.html
/usr/share/nginx/html: is the folder the nginx web server serves content from.
ro: sets the permission on the shared content; in our case the container only gets read-only access to the file.
-p: this flag sets up the port mapping through which the content will be viewed. In our case the container's port 80 is exposed on the host as port 8080.
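The examples so far use bind mounts (a host path mapped into the container). Docker also supports named volumes, which Docker creates and manages itself; a minimal sketch, where mydata is just an example volume name:
# create a named volume managed by Docker
docker volume create mydata
# mount it into a container at /data and write a file to it
docker run --rm -v mydata:/data alpine sh -c "echo hello > /data/greeting.txt"
# list existing volumes
docker volume ls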
Note: we did not pull the nginx image before using it, yet running "docker images" after the nginx run command above succeeds shows an nginx image as one of the images present. This is because when executing docker run ..., Docker first checks for the image locally and, if it can't find it, pulls it from Docker Hub and then runs it.
docker images
Output:
REPOSITORY TAG IMAGE ID CREATED SIZE
python 3.10-alpine3.18 7d34b8225fda 7 weeks ago 49.7MB
nginx latest 021283c8eb95 3 weeks ago 187MB
# running containers
docker ps
Output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eefaa7cd973c nginx "/docker-entrypoint.…" 11 minutes ago Up 11 minutes 0.0.0.0:8080->80/tcp, :::8080->80/tcp helloWorld
Dockerfile
This is a file, conventionally named "Dockerfile", usually used to build and share applications that contain multiple parts.
In simple terms, a Dockerfile is used to make our own personal images.
It contains the list of steps used to bring the app to life.
A Dockerfile has its own structure, always starting with a FROM instruction; the rest of the instructions can come in different orders depending on the requirements of the application.
The FROM instruction can also appear multiple times to create multiple build stages (a multi-stage sketch follows the example below).
Examples of instructions that can come after the FROM line include:
FROM - defines the base image to pull
RUN - executes a command while the image is being built
WORKDIR - defines the directory to work from inside the image
COPY - copies files or folders from the host into the image
ADD - like COPY, but can also fetch remote URLs and unpack local archives
CMD - defines the default command the container runs when it starts
More instructions like these can be found in the Dockerfile reference.
An example of a Dockerfile:
Assuming we have a folder called Example, with a hello.js file containing
console.log('hello, world');
Create another file in the same folder and name it "Dockerfile", with the content below:
FROM node:alpine
WORKDIR /app
COPY . .
CMD node hello.js
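Since FROM can appear more than once, a Dockerfile can also describe a multi-stage build, where one stage prepares the application and a final, cleaner stage only carries what is needed to run it. A rough sketch building on the Node.js example (it assumes the project has a package.json for npm install; otherwise that line can be dropped):
# build stage: install dependencies
FROM node:alpine AS build
WORKDIR /app
COPY . .
RUN npm install
# final stage: copy only the prepared app
FROM node:alpine
WORKDIR /app
COPY --from=build /app .
CMD node hello.js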
Docker Build
Once a Dockerfile is defined, the "build" command is used to create the image. Run docker build --help
to see the flags that can be used with docker build.
The syntax is as shown:
docker build --tag <name_of_image>:<tag> <directory_with_dockerfile>
The tag represents the image version and aids in image naming. If none is given, then latest is assumed. It gives you more control over the images you create.
Building the Node.js image defined above:
docker build --tag greetings:latest .
Viewing the images as before, we notice a new image named greetings with the tag latest:
REPOSITORY TAG IMAGE ID CREATED SIZE
python 3.10-alpine3.18 7d34b8225fda 7 weeks ago 49.7MB
nginx latest 021283c8eb95 3 weeks ago 187MB
greetings latest 531620bb45c5 17 seconds ago 181MB
Running the image:
docker run greetings:latest
Output:
hello, world
.dockerignore
It is used to ignore files that the application does not need inside the image, like node_modules, requirements.txt and .git.
It is created as a file in the current directory, and the files to be ignored are listed inside it:
.dockerignore
requirements.txt
node_modules
Docker Registry
As discussed before, images are pulled from and pushed to Docker Hub. Docker Hub is an example of a Docker registry, a service that stores repositories of images.
That said, it is possible to push personal images to Docker Hub.
- First create an account in docker hub.
- Run the following command and provide your Docker Hub credentials (username and password) when prompted:
docker login
- Tag the previously built Docker image with your Docker Hub username and the desired repository name.
docker tag greetings your_username/greetings:latest
- Finally, push the tagged image to Docker Hub using the following command:
docker push your_username/greetings:latest
This will push the Docker image to your Docker Hub repository named "greetings" with the "latest" tag.
Docker Inspect
This command allows you to retrieve detailed information about Docker objects, such as containers, images, volumes, and networks. It provides a JSON representation of the specified Docker object, which includes various metadata and configuration details.
The syntax is as shown:
docker inspect <container_name/id>
Pick a container and try it out.
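Because the output is JSON, it can also be filtered with the --format flag and a Go template to pull out a single field, for example a container's IP address:
# print only the container's IP address on the default bridge network
docker inspect --format '{{.NetworkSettings.IPAddress}}' <container_name/id>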
Docker Logs
This command is used to view the output (logs) produced by a running container.
The syntax is as shown:
docker logs <container_name/id>
To follow the logs of the container as they come in, add the -f flag before the container name/id.
To add a timestamp to each log line while following, use the flags -ft before the container name/id.
For more useful flags, view
docker logs --help
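For long-running containers it is often enough to look at only the most recent lines; a small sketch:
# follow the logs, starting from the last 100 lines
docker logs -f --tail 100 <container_name/id>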
Docker Network
This command is used to place two or more containers on the same network, making it easy for them to communicate with each other. For example, a postgres container and a pgadmin container (which provides a graphical user interface for the data in a postgres database) can easily communicate if they are placed on the same network.
The syntax is as shown:
docker network create <network_name>
The two containers now have to be run with the --network flag in order to communicate, e.g.
docker run -e POSTGRES_PASSWORD=pass -p 5432:5432 --network <network_name> -d postgres:latest
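The pgadmin container from the same example would then be started on that network as well, using the same kind of environment variables as in the compose file further below (the credentials here are placeholders):
docker run -e PGADMIN_DEFAULT_EMAIL=<any@email.com> -e PGADMIN_DEFAULT_PASSWORD=<pgadmin_password> -p 8080:80 --network <network_name> -d dpage/pgadmin4
Inside the network, containers can reach each other by container name, so pgadmin can use the postgres container's name as the host when connecting.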
Docker Compose
It allows you to define and manage multiple Docker containers together instead of linking them manually with a network.
It is used to bring multiple parts of an application, e.g. frontend, backend and storage, to life with just one command.
This is especially useful for development and testing environments, as well as for deploying applications to production.
It is mainly done using a docker-compose.yaml file containing services, networks, and volumes. Each service represents a container, and you can specify their images, ports, volumes, environment variables, etc.
The YAML file should be placed in the root directory of your project.
Ensure you install docker-compose for your operating system before using it. The docker-compose version should also be compatible with your Docker version.
An example of a docker-compose file:
services:
  db:
    image: postgres:15.3-alpine3.18
    environment:
      - POSTGRES_PASSWORD=<password>
      - POSTGRES_DB=<db_name>
    volumes:
      - "./green:/var/lib/postgresql/data:rw"
    ports:
      - "5432:5432"
  admin:
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=<any@email.com>
      - PGADMIN_DEFAULT_PASSWORD=<pgadmin_password>
    ports:
      - "8080:80"
    volumes:
      - "./pgadmindata:/var/lib/pgadmin:rw"
You can include the Compose file format version you are targeting, e.g. version: "3.8", just above everything else, but it is not a requirement; without it, Docker Compose simply uses the format supported by the installed version.
Let's unpack the file above:
- A service describes one part of your application and tells Docker how to run the container (or build the image) required for that part. Each service needs a unique name that distinguishes it from the others. We use db as the service name for our postgres database image and admin to refer to the pgadmin image, together with the settings each one needs to run successfully.
To run the file :
docker-compose up
Use docker ps to check whether the containers were created successfully and are up and running.
So, in summary, our docker-compose file runs both postgres and pgadmin (a GUI for postgres) on the same network.
Opening http://localhost:8080
brings up the pgadmin interface, where you can log in using the pgadmin credentials and create a server to view the postgres database in the browser.
Move to the Connection tab and use the name of the postgres service (db) as the hostname, and fill in the postgres user and password in the fields indicated.
# open a psql shell inside the running db container (replace <db_container_name> with the name shown by docker ps)
docker exec -it <db_container_name> psql -U <postgres_user>
\c <db_name>
CREATE TABLE hello_pgadmin ();
Refresh pgadmin to see the changes.
To build the images and not run them, use
docker-compose build
To stop and remove the running containers use
docker-compose down
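A couple of extra compose flags that often come in handy (a short sketch):
# start the services in the background
docker-compose up -d
# stop the services and also remove any named volumes they created
docker-compose down --volumes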
Conclusion
In conclusion, Docker is not just a technology but a mindset that encourages continuous integration and continuous deployment. Whether you're an individual developer, part of a team, or managing large-scale infrastructures, Docker has something to offer for everyone. Embracing Docker's capabilities will undoubtedly enhance your development workflow and bring your projects to new heights of efficiency and reliability. Thank you for staying curious and happy hacking!!