Docker Networking 101: A Blueprint for Seamless Container Connectivity

David Omokhodion (nobleman97)

Posted on December 13, 2023

Docker is a platform for developing, shipping, and running applications in containers. Containers are lightweight, standalone, executable software packages that include everything needed to run a piece of software: the code, runtime, libraries, and system tools. Docker provides a consistent, reproducible environment across machines, making it easier to build, deploy, and scale applications.

To enable communication between containers, your host, and the outside world, Docker implements a networking system of its own. It is this system that bridges the gap between isolated containers, allowing them to function as well-coordinated services.

In this article, you will learn about the basics of the different network types and how to use them in development or deployment.

We will cover:

  • Docker Network Types
  • Using Docker Networks
  • Conclusion

Docker Network Types

Docker ships with several network drivers, including bridge, host, none, overlay, macvlan, and ipvlan. However, we will discuss the four configurations you will use most often:

  • None
  • Bridge (default)
  • Bridge (user-defined)
  • Host

If you already have Docker set up on your host machine and you run the command "docker network list", you should get output similar to this:



$ docker network list
NETWORK ID     NAME      DRIVER    SCOPE
6f23ed77734d   bridge    bridge    local
3bbca0d09738   host      host      local
a67ca10d6dfd   none      null      local




"Driver" in this list, tells us the network type, and by default, docker creates one bridge, host and null(none) network each.

Let's discuss each network type in more detail...

1. None

None network type
None is a Docker network type in which the container is not attached to any network. As a result, the container cannot communicate with any external network or with other containers; it is completely isolated.

You can run an nginx container in a "none" network type using the following command:



docker run -d --network none --name my_nginx nginx


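To confirm that my_nginx really has no network attachment, you can inspect the container. The output should show only the "none" network, with an empty "IPAddress" field:

# list the networks my_nginx is attached to; expect only "none" with no IP address
$ docker inspect --format '{{json .NetworkSettings.Networks}}' my_nginx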

2. Bridge (default)

Default bridge network
The bridge network mode sets up an internal private network within the host. Containers on the same bridge network can communicate with one another, but they are isolated from the host's network.

When Docker containers are created without specifying a network, they are automatically placed in the default bridge network.

For example:



# running a container in the default bridge network
$ docker run -dit --name Rock nginx


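You can verify that the container landed on the default bridge by inspecting that network. The "Containers" section of the output lists every attached container, so a short format string is enough to see Rock there (IDs and addresses will differ on your machine):

# list the names of the containers attached to the default bridge
$ docker network inspect --format '{{range .Containers}}{{.Name}} {{end}}' bridge
Rock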

3. Bridge (user-defined)

User-defined bridge network

This network type is similar to the default bridge network: it also creates an internal private network within the host, but it does not come pre-created by Docker. You have to create it yourself. Unlike the default bridge, a user-defined bridge also gives containers automatic DNS resolution of one another's names, which we will rely on later.

Here's how you create a user-defined bridge network called "demo_net":



# create a user-defined bridge network
$ docker network create -d bridge demo_net

# list all your docker networks
$ docker network list
NETWORK ID     NAME       DRIVER    SCOPE
6f23ed77734d   bridge     bridge    local
7a824036d47a   demo_net   bridge    local
3bbca0d09738   host       host      local
a67ca10d6dfd   none       null      local



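Docker also assigns the new network its own subnet, distinct from the default bridge's. You can see which range it picked with a format string over the network's IPAM config. In the example run used throughout this article it is 172.18.0.0/16, which is why the ping output later shows 172.18.0.x addresses; the exact range on your machine may differ:

# show the subnet assigned to demo_net
$ docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' demo_net
172.18.0.0/16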

Here's how to attach a container to the user-defined bridge network you just created...



# Attaching containers to bridged network
$ docker run -dit --network demo_net --name service_A nginx

# create another container and publish a port
$ docker run -dit -p 8000:80 --network demo_net --name service_B nginx



service_A and service_B containers are now attached to the 'demo_net' network. By default, all containers within the same network can communicate with one another freely but are isolated from the host network and other user-defined networks.

In order to access applications running on these containers from the host network, we have to publish ports from the containers. In the snippet above, port 80 in the service_B container is mapped to port 8000 on the host, so an application listening on port 80 inside service_B can be reached on port 8000 of the host.
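You can confirm the mapping from the host with "docker port", which prints each published container port together with the host address it is bound to (output similar to this):

# show the published ports for service_B
$ docker port service_B
80/tcp -> 0.0.0.0:8000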

4. Host

Host network

Containers in host network mode use your host's network stack directly, with no network isolation. They do not receive separate IP addresses, and any ports they listen on are exposed directly on your host's network interface. In practical terms, if a container process is configured to listen on port 80, it will bind to your host machine's IP address on port 80.

Here's an example of a container that runs nginx on the host network:



# bind container to host network
docker run -d --network host --name my_nginx nginx



If you're using the host's network for your web container (like Nginx), it means you can't run multiple web containers on the same host and port. This is because they would all share the same network settings, causing conflicts.
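For example, on a Linux host with nothing else listening on port 80, the my_nginx container started above answers directly on the host's own port 80, with no -p flag required (port publishing flags are ignored in host mode anyway):

# nginx in the host-networked container answers on the host's own port 80
$ curl -s http://localhost:80 | grep '<title>'
<title>Welcome to nginx!</title>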

Using Docker Networks

Testing Communications Within and Between Networks

In an earlier step, we attached two containers (service_A and service_B) to the demo_net network. Let's confirm that they can communicate with one another within the same network, and check whether they can reach a container in a different network (e.g. the "Rock" container in the default bridge network).



# open a shell into the service_A container
$ docker exec -it service_A /bin/bash

# inside service_A, update its apt repos and install ping
root@06dbaea8c2a2:/# apt update && apt install iputils-ping -y

# then ping service_B by container name; name resolution is automatically enabled within a user-defined network
root@06dbaea8c2a2:/# ping service_B
PING service_B (172.18.0.3) 56(84) bytes of data.
64 bytes from service_B.demo_net (172.18.0.3): icmp_seq=1 ttl=64 time=0.086 ms
64 bytes from service_B.demo_net (172.18.0.3): icmp_seq=2 ttl=64 time=0.154 ms
64 bytes from service_B.demo_net (172.18.0.3): icmp_seq=3 ttl=64 time=0.119 ms
64 bytes from service_B.demo_net (172.18.0.3): icmp_seq=4 ttl=64 time=0.121 ms
...



The results are similar when you exec into service_B and ping the service_A container. However, neither container can reach the "Rock" container, which is in the default bridge network, because they exist in different virtual networks.



# Inside service_A
root@06dbaea8c2a2:/# ping Rock
ping: Rock: Temporary failure in name resolution



Reaching Apps Running in Containers via Published Ports

When creating service_B in an earlier step, we ran the following command:



$ docker run -dit -p 8000:80 --network demo_net --name service_B nginx



This command mapped port 80 within the container to port 8000 on the host machine. Hence, when we visit localhost:8000 in our browser, we see the web server running in service_B:

localhost:8000
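The same check can be done from a terminal on the host:

# the published port answers on the host
$ curl -sI http://localhost:8000 | head -n 1
HTTP/1.1 200 OK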

Manipulating Network Connections

You can detach a running container from one network and attach it to another on the fly with the docker network disconnect and docker network connect commands. Let's move service_A onto the default bridge network and confirm it can now reach the Rock container. Because the default bridge does not provide automatic name resolution, we ping Rock by IP address rather than by name.



# Disconnect service_A
$ docker network disconnect demo_net service_A

# Reconnect service_A to the default "bridge" network
$ docker network connect bridge service_A

# Next, use `docker inspect` to get the IP of the Rock container, exec back into service_A, and try pinging it

root@06dbaea8c2a2:/# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.093 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.186 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.149 ms

# It works!


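The snippet above glosses over how to get Rock's IP address. One way is a format string on docker inspect, run from the host. The 172.17.0.2 address shown above comes from the default bridge's usual 172.17.0.0/16 range; yours may differ:

# print the IP address(es) Rock has on its attached networks
$ docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' Rock
172.17.0.2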

List Docker Networks



$ docker network ls
NETWORK ID     NAME       DRIVER    SCOPE
a87871978d59   bridge     bridge    local
747d0aff98fc   demo_net   bridge    local
3bbca0d09738   host       host      local
a67ca10d6dfd   none       null      local




Deleting Docker Networks

To delete a custom network, you must first stop or disconnect all running containers attached to the network. When that's done, proceed to delete the network by running the docker network rm command like this:
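In our example, service_B is still attached to demo_net (service_A was moved to the default bridge earlier), so detach or remove it first. Either of these will do:

# detach service_B from demo_net (the container keeps running)
$ docker network disconnect demo_net service_B

# ...or stop and remove the container entirely
$ docker rm -f service_B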



$ docker network rm demo_net

# Confirm that the demo_net network has been removed
$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
a87871978d59   bridge    bridge    local
3bbca0d09738   host      host      local
a67ca10d6dfd   none      null      local



Conclusion

Docker's networking system gives you different options for handling communication between containers, other containers, and your Docker host. Within a user-defined network, containers can reach each other by name or by IP address.

User-defined bridge networks work well when you want various containers to talk to each other on the same Docker host. On the flip side, host networks are more suitable when you don't want the network stack to be separate from the Docker host, but you still want other parts of the container to be isolated.

Docker's networking system could initially feel overwhelming, but I hope this gives you some clarity.

...

If you have any questions or comments, you can leave them below, or reach out to me on LinkedIn. Till we meet again...

Stay awesome!
~ David Omokhodion
