Dock-umentary: A Beginner's Guide to Docker


Yash Raj Singh

Posted on January 3, 2023


Hey everyone, welcome to a new blog! In today's blog, we are going to learn about Docker. Having a thorough understanding of Docker can greatly benefit both developers and organizations in many ways. I hope this blog helps you jumpstart your Docker journey 🐳. So let's dive into it.

Introduction

In recent years, there has been a shift in the way that software is developed and deployed. In the past, many applications were built as monolithic systems, with all of the components tightly coupled and deployed together as a single unit. However, this approach has several drawbacks, including difficulty in scaling and maintaining the application, and difficulty in introducing new features or making updates.

To address these issues, many organizations turned to microservice architecture, which involves breaking an application down into smaller, independent services that can be developed, deployed, and scaled independently. However, managing and deploying these microservices can be a complex and time-consuming process.

Now, before getting into what Docker is and its other intricacies, let us first understand why the need for Docker arose. You see, prior to the release of Docker, it was often difficult for developers to ensure that their application would run consistently across different environments, due to reasons like differences in system configurations and dependencies between environments.

For example, if a developer built an application on their own machine, it might rely on certain libraries or dependencies that were installed on that machine. When the developer tried to deploy the application to a different environment, such as a staging or production server, the application might not work if the required libraries or dependencies were not present. This could be due to differences in the operating system, package manager, or other system-level configurations.

To address these issues, developers had to manually install and configure all the necessary dependencies and libraries in each environment where the application was deployed. This was time-consuming and prone to errors, and made it difficult to ensure that the application would behave consistently across different environments.

Docker provides a solution to these challenges by allowing you to package your applications and their dependencies into lightweight, portable containers that can be easily deployed and run on any platform. With Docker, you can build, ship, and run your applications in a consistent and reliable way, regardless of the underlying infrastructure.

What is Docker?

Docker is a tool that allows you to run applications in containers. These containers are isolated environments that contain all the dependencies and libraries required to run the application. This makes it easy to deploy and run applications consistently across different environments, without worrying about differences in system configurations or dependencies.

Setting Up Docker

To start using Docker, you will need to install it on your local development machine. Docker is available for a wide range of operating systems, including Windows, macOS, and various distros of Linux.

Windows

To install Docker on Windows, you can download the Docker Desktop installer from the Docker website and run it on your machine. The installer will take care of installing Docker and its dependencies, as well as setting up Docker to run as a service on your system.

Linux

To install Docker on a Linux machine, you will need to follow the instructions for your specific distribution. For example, on a Debian-based system such as Ubuntu, you can use the following command to install Docker:

sudo apt-get install docker.io

macOS

To install Docker on macOS, you can download the Docker Desktop installer from the Docker website and run it on your machine. Alternatively, you can use Homebrew to install Docker by running the following command:

brew install --cask docker

Once Docker is installed, you will need to create a Docker account to log in to Docker Hub, which is a registry of Docker images that you can use to build and run containers. To create a Docker account, visit the Docker website.

After you have installed Docker and created a Docker account, you can start using Docker to build and run containers.
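
A quick way to verify that everything is set up correctly is to run the classic hello-world image; Docker will pull it from Docker Hub and print a confirmation message:

docker run hello-world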

Understanding Docker Terminologies

To work with Docker, it is important to understand the following key concepts and terms:

  • Images: A Docker image is a lightweight, stand-alone, executable package that contains everything needed to run an application, including the application code, libraries, dependencies, and runtime. Images are created from a set of instructions called a Dockerfile, which specifies the steps needed to build the image.
  • Container: A Docker container is a running instance of a Docker image. When you start a container, Docker creates a runtime environment that is isolated from the host system and other containers. This isolation allows you to run multiple containers concurrently on a single host without interference.
  • Dockerfile: A Dockerfile is a file that is used to automate the process of building a Docker Image, making it easier to create and deploy applications in a consistent and reproducible way.
  • Registry: A Docker registry is a collection of Docker images that can be stored and shared. Docker Hub is a public registry that stores a huge number of images that can be used by everyone.
  • Volume: A Docker volume is a persistent storage location that is used to store data that needs to be preserved between container restarts. Volumes can be used to share files between containers and host systems or to persist data that an application generates.

Understanding these terms will be helpful as you start to work with Docker to build and run containers.

Using Docker Images

One of the benefits of using Docker is its ability to easily find and use pre-built images to create and run containers. For example, if you want to try out an application without installing it on your system, you can use Docker and avoid the hassle of setting up the application's dependencies locally.

To display all the Docker images currently installed on the system, run the following command:

docker images

To use a prebuilt image, you can pull an image from Docker Hub.

Docker Hub

Docker Hub is a public registry that hosts a large number of images that are available to everyone. These images can be used as a starting point to build and run containers for a wide range of applications and services.

To find and pull an image from Docker Hub, you can use the docker pull command.

For example, to pull the latest version of the ubuntu image, you can use the following command:

docker pull ubuntu

Now if you run the docker images command, you should see the ubuntu image that you just pulled from Docker Hub.

You can also specify a specific version of an image by including the image tag. For example, to pull the 18.04 version of the ubuntu image, you can use the following command:

docker pull ubuntu:18.04

As you can see, there are now two ubuntu images on the system, one with the latest tag and one with the 18.04 tag.

Once you have pulled an image from Docker Hub, you can use it to create and start a container. To create and start a container from an image, you can use the docker run command.
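
For example, to start an interactive shell inside a container created from the ubuntu:18.04 image we pulled earlier:

docker run -it ubuntu:18.04 bash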

Building Docker Images

In addition to using pre-built images from Docker Hub, you can also create your own custom images. To create a custom image, you have to create a file with a set of instructions called a Dockerfile. A Dockerfile is a text file that contains a series of commands that are used to build an image.

Let us look at an example of a Dockerfile:

FROM node:latest
RUN mkdir /app
WORKDIR /app
COPY package.json /app
RUN npm install

This Dockerfile builds an image for a Node.js application:

  • FROM node:latest: specifies the base image as the latest version of the node image
  • RUN mkdir /app: creates a new directory called /app in the image
  • WORKDIR /app: sets the working directory to /app for subsequent instructions in the Dockerfile
  • COPY package.json /app: copies the package.json file from the host into the /app directory in the image
  • RUN npm install: installs the dependencies listed in the package.json file

After this, save the file with the name Dockerfile.

(Note: the 'D' in Dockerfile should be uppercase.)

Now use the docker build command to build the image:

docker build [OPTIONS] PATH | URL | -
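
For example, assuming the Dockerfile above is saved in the current directory, you can build the image and give it a name with the -t flag (my-node-app is just an illustrative name):

docker build -t my-node-app .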

Pushing Docker Images

Once you have created a Docker image, you can also push this image to a Docker registry (such as Docker Hub). There are many benefits to this:

  • It allows you to share your images with other people.
  • It allows you to version your images, so you can roll back to previous versions if needed.
  • Having your images stored on Docker Hub makes it easier to automate the deployment of your applications (e.g., while building a CI/CD pipeline).

You can push a Docker image using the docker push command:

docker push [OPTIONS] NAME[:TAG]

Note that before you can use the docker push command, you need to be logged in to the Docker registry where you want to push the image. Use the docker login command to do so.
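
Putting it all together, a typical push flow looks like this (yourusername and my-node-app are placeholders for your own Docker Hub username and image name):

docker login
docker tag my-node-app yourusername/my-node-app:1.0
docker push yourusername/my-node-app:1.0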

Docker Image commands

Here is a list of some common Docker image commands that you can use:

  • docker image ls: lists all images
  • docker image pull <IMAGE_NAME>: pulls an image from a registry
  • docker image build -t <IMAGE_NAME> .: builds an image from a Dockerfile in the current directory
  • docker image inspect <IMAGE_NAME>: displays detailed information about an image
  • docker image tag <IMAGE_NAME> <NEW_TAG>: tags an image with a new name
  • docker image push <IMAGE_NAME>: pushes an image to a registry
  • docker image rm <IMAGE_NAME>: removes an image
  • docker image prune: removes all unused images

Managing Docker containers

After successfully building or pulling an image, you can start a container using the following command:

docker container run [OPTIONS] IMAGE [COMMAND] [ARG...]
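
For example, to start an nginx container in the background and map port 8080 on the host to port 80 inside the container (web is just an illustrative name):

docker container run -d -p 8080:80 --name web nginx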

Once you have started a Docker container, you will need to be able to manage it. This includes tasks such as starting and stopping the container, viewing and managing the container logs, and removing the container when it is no longer needed.

Docker Container commands

Here is a list of some common Docker container commands you can use:

  • docker container ls: lists all running containers
  • docker container ls -a: lists all containers, including stopped ones
  • docker container start <CONTAINER_ID>: starts a stopped container
  • docker container stop <CONTAINER_ID>: stops a running container
  • docker container rm <CONTAINER_ID>: removes a container
  • docker container kill <CONTAINER_ID>: sends a SIGKILL signal to a running container
  • docker container inspect <CONTAINER_ID>: displays detailed information about a container
  • docker container logs <CONTAINER_ID>: displays the logs for a container
  • docker container stats <CONTAINER_ID>: displays resource usage statistics for a running container
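
As a quick sketch, here is how these commands could be used to manage the web container started above:

docker container logs web     # view the container's logs
docker container stop web     # stop the container
docker container start web    # start it again
docker container rm -f web    # force-remove it, even if it is running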

Docker Storage

Docker uses a union file system to manage the file system of a container. This allows multiple layers of files to be stacked on top of each other, with the topmost layer taking precedence.

But the question still remains: how does Docker store the files of an image and a container?

To understand this better, you first need to learn about the layered architecture in Docker.

Layered Architecture of Docker Images

Docker uses a layered architecture for its images, which allows for efficient image management and distribution. A Docker image is made up of layers, each of which represents a different step in the build process for the image. When you create an image, each step in the build process is recorded as a new layer.

Image Layers

All of the image layers are created when you run the docker build command, and together they form the final Docker image. Once the image is built, you cannot modify the contents of these layers; they are read-only and can only be changed by starting a new build.

Remember: the same image layers are shared by all the containers created using the image.

Container Layer

When you run a container from this image, Docker creates the container based on these layers and adds a new writable layer on top of the image layers. The writable layer is used to store data created by the container, like log files written by the application, any temporary files created, etc.

This layer lives only as long as the container is up and running. When the container is destroyed, this layer and all of the changes stored in it are also destroyed.

You can refer to the example below for better clarity.
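
A minimal sketch of such a Dockerfile, assuming a simple C project built with make (the /app path and binary name are illustrative):

# Layer 1: the base image
FROM gcc:latest
# Layer 2: the files from the current directory
COPY . /app
# Layer 3: the results of the make command
RUN make -C /app
# Final instruction: the default command to run
CMD ["/app/app"]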

This Dockerfile creates an image with four layers: the base image, the layer containing the files from the current directory, the layer containing the results of the make command, and the layer specifying the default command to run. When you build an image using this Dockerfile, the layers will be stored in a cache on your local machine, and you can use the image to create and run containers.

Docker Volumes

Docker volumes are persistent storage locations that are used to store container data that needs to be preserved. Volumes can be used to share files between containers and host systems or to persist data that an application generates.

To create a named volume, you can use the docker volume create command. For example, to create a volume named myvolume, you can use the following command:

docker volume create myvolume

To mount a volume to a container, you can use the -v flag with the docker run command. For example, to mount the myvolume volume to the /app/data directory inside a container, you can use the following command:

docker run -it -v myvolume:/app/data ubuntu

You can also use the -v flag to mount a host directory to a container. For example, to mount the /data directory on the host system to the /app/data directory inside a container, you can use the following command:

docker run -it -v /data:/app/data ubuntu
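
To see this persistence in action, you can write a file from one container and read it back from a second container that mounts the same volume (test.txt is just an illustrative file name):

docker run --rm -v myvolume:/app/data ubuntu bash -c "echo hello > /app/data/test.txt"
docker run --rm -v myvolume:/app/data ubuntu cat /app/data/test.txt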

Here is a list of some common Docker volume commands you can use:

  • docker volume ls: lists all volumes
  • docker volume create <VOLUME_NAME>: creates a new volume
  • docker volume inspect <VOLUME_NAME>: displays detailed information about a volume
  • docker volume rm <VOLUME_NAME>: removes a volume
  • docker volume prune: removes all unused volumes

Networking in Docker

Docker creates three networks automatically when it is installed: bridge, none, and host.

Bridge

This is the default network type when you run a container. It creates a private network inside the Docker daemon and assigns a virtual IP address to the container. This network allows the container to communicate with other containers on the same network and with the host, but it is isolated from the outside world.
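
Since bridge is the default, a plain docker run uses it automatically; you can also request it explicitly:

docker run --network=bridge <image>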

None

This network type removes all networking from the container. The container will not be able to communicate with the host or any other containers. This is useful if you want to run a process in a container in complete isolation, without any networking.

docker run --network=none <image>

Host

This network type removes networking isolation between the container and the host. The container will use the host's network stack and will be able to access the host's network resources directly. This is useful if you want to run a container that needs to listen on a specific port on the host.

docker run --network=host <image>

User Defined Networks

By default, all containers created on the same Docker host are connected to a single bridge network with an address such as 172.17.0.1. If you want to create a new bridge with a different subnet (e.g., 182.18.0.0/16) on the same host, you can do so with the docker network create command:

docker network create \
    --driver bridge \
    --subnet 182.18.0.0/16 \
    user-def
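
Containers attached to a user-defined bridge can reach each other by container name, thanks to Docker's built-in DNS on user-defined networks. For example (app1 is an illustrative name):

docker run -d --name app1 --network user-def nginx
docker run --rm --network user-def busybox ping -c 2 app1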

Docker Networking commands

Here is a list of some common Docker networking commands you can use:

  • docker network ls: lists all networks
  • docker network create <NETWORK_NAME>: creates a new network
  • docker network inspect <NETWORK_NAME>: displays detailed information about a network
  • docker network connect <NETWORK_NAME> <CONTAINER_ID>: connects a container to a network
  • docker network disconnect <NETWORK_NAME> <CONTAINER_ID>: disconnects a container from a network
  • docker network rm <NETWORK_NAME>: removes a network

Docker Compose

Docker Compose is a tool that helps you manage and deploy multi-container applications. It allows you to define all the services that make up your application in a single file called a docker-compose.yml file. You can then use a single command to create and start all the services defined in the file.

To use Docker Compose, you will first need to install it on your system. You can find instructions for installing Docker Compose on the Docker Compose documentation page.

Once Docker Compose is installed, you can create a docker-compose.yml file in the directory where you want to run your application. This file should contain the definitions for the services that make up your application.

Let's look at an example docker-compose.yml file for a simple web application:

version: '3'
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
  db:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: password
  • The version field specifies the version of the Docker Compose file format. In this case, it is using version 3.
  • The services block defines the services that make up the application. In this example, there are two services: web and db.
  • The web service is based on the nginx:latest Docker image. This image will be pulled from a Docker registry, such as Docker Hub, if it is not already present on the host machine.
  • The ports field specifies that the web service should expose port 80 on the host machine and map it to port 80 inside the container.
  • The db service is based on the mysql:latest Docker image.
  • The environment field specifies environment variables that should be passed to the db container. In this case, it sets the MYSQL_ROOT_PASSWORD environment variable to password.
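
With this file in place, you can bring the whole application up and tear it down with a couple of commands (depending on your installation, the command is either docker-compose or the newer docker compose plugin):

docker-compose up -d    # create and start all services in the background
docker-compose ps       # list the running services
docker-compose down     # stop and remove the services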

Docker Architecture

The Docker architecture consists of three main components: the Docker host, the Docker client, and the Docker registry.

Docker Host

The Docker host is the machine on which the Docker daemon runs. The Docker daemon is responsible for running containers and managing their resources, such as CPU, memory, and storage. The Docker host can be a physical machine or a virtual machine, and it can run on a variety of operating systems, including Linux, macOS, and Windows.

Docker Client

The Docker client is the interface used to send commands to the Docker daemon. The client can be installed on the same machine as the daemon, or it can be installed on a separate machine and used to communicate with the daemon over a network. The Docker client and daemon communicate with each other using a REST API.
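
You can see this client/daemon split for yourself with the docker version command, which prints separate Client and Server sections:

docker version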

Docker Registry

The Docker registry is a storage and distribution system for Docker images. It allows users to upload and download images, and it acts as a central repository for sharing images with others. The Docker registry can be a public registry, such as Docker Hub, or it can be a private registry run by an organization.

Container Orchestration

Container orchestration refers to the process of managing and deploying multiple containers as a single, cohesive application. This involves automating the deployment, scaling, and management of containers across a cluster of machines.

There are several tools and platforms available for container orchestration:

  • Docker Swarm
  • Kubernetes
  • Red Hat OpenShift
  • Mesos

Since container orchestration is a big topic in itself, we'll deep dive into the world of orchestration some other day. Until then, happy building and deploying with Docker! 😄

You can catch me on my socials for more such content: Link
