Production-Ready Docker Configuration With DigitalOcean Container Registry Part I

endalk200

Endalkachew Biruk

Posted on November 3, 2021


Prerequisites: This article assumes a basic understanding of Docker and Django.

Objective

  • In Part I of this series, we will set up a basic Django web app with a development-level Docker configuration using docker-compose, as well as a production-level Docker configuration. We will also discuss in detail the rationale behind our Docker configuration.

  • In Part II, we will build our Docker image, push it to the DigitalOcean container registry, and set up a CI/CD pipeline for the build with GitHub Actions. We will then deploy the built image to the DigitalOcean App Platform, with CI/CD for deployment as well.

Docker has revolutionized software development and has proven to be the nucleus of new-age development practices like CI/CD, distributed development, and collaboration.

Still, there is no popular consensus on what good Docker development principles and guidelines are. Dockerfiles written for Java or any other programming language don't translate directly to Python.

This article discusses an opinionated, production-ready Docker setup for Django applications, which can be used in docker-compose files or with Kubernetes clusters. Our requirements further extend to containers being able to scale up and down without any side effects.

Note: Even though our Docker configuration is production-ready, the Django application itself is in no way ready for production.

If you need the code without going into the reasoning, a sample Django repo with the Docker setup is available for download on GitHub, here.

So without further ado, let's start. The tech stack we are using is:

  1. Celery is used for background tasks, with Redis as the Celery backend.

  2. Celery Beat is used for cron jobs, to schedule periodic tasks.

  3. Flower is used for monitoring background tasks.

  4. We are using PostgreSQL as our database.

Both the Django server and Celery run from one Docker image, so they share the same Docker configuration. For database backup and restore purposes, we use a custom PostgreSQL image with maintenance commands and scripts. Thus we have two Dockerfiles: one for our Django server and one for PostgreSQL. We will discuss both Docker configs in detail below.

Django Docker Configuration

Let's go over our first Dockerfile, which runs the web server and Celery.

# Section 1- Basic parameters
ARG PYTHON_VERSION=3.9-slim-buster
ARG BUILD_ENVIRONMENT=production
ARG APP_HOME=/app

# Section 2- Set the python base image
FROM python:${PYTHON_VERSION}

# ARGs declared before FROM go out of scope after it,
# so re-declare them (keeping their defaults) inside the build stage
ARG BUILD_ENVIRONMENT
ARG APP_HOME

# Section 3- Python interpreter flags
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1

# Section 4- Compiler and OS libraries
RUN apt-get update && apt-get install --no-install-recommends -y \
    build-essential \
    # psycopg2 dependencies
    libpq-dev \
    # Translations dependencies
    gettext \
    # cleaning up unused files
    && apt-get purge -y --auto-remove -o \
    APT::AutoRemove::RecommendsImportant=false \
    && rm -rf /var/lib/apt/lists/*

# Section 5- Project libraries and User Creation
COPY requirements.txt /tmp/requirements.txt

RUN pip install --no-cache-dir -r /tmp/requirements.txt \
    && rm -rf /tmp/requirements.txt

RUN useradd -U app_user \
    && install -d -m 0755 -o app_user -g app_user /app/static

# Section 6- Code and User Setup
WORKDIR ${APP_HOME}

USER app_user:app_user

COPY --chown=app_user:app_user . ${APP_HOME}

RUN chmod +x ./*.sh && chmod +x ./postgresql/maintenance/*.sh && \
    chmod +x ./postgresql/maintenance/_sourced/*.sh

# Section 7- Docker Run Checks and Configurations
ENTRYPOINT [ "./entrypoint.sh" ]

CMD [ "./start.sh", "server" ]

Let’s explore each section of our Dockerfile:

Section 1- Basic Parameters

# Step 1- Set arguments used throughout the build
ARG PYTHON_VERSION=3.9-slim-buster
ARG BUILD_ENVIRONMENT=production
ARG APP_HOME=/app

In the first section of our Dockerfile, we declare the arguments, or variables, that we will use throughout the build process. This is good practice for maintenance and updates later on: if you want to update or change one value, you don't need to go through the whole Docker configuration. One caveat: ARGs declared before FROM are only in scope for the FROM instruction itself, which is why BUILD_ENVIRONMENT and APP_HOME are re-declared (without values) right after FROM.
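Because these are build arguments, they can also be overridden at build time without touching the Dockerfile. For example, with myapp as a placeholder tag:

# Build against a different base image tag for a one-off build
docker build --build-arg PYTHON_VERSION=3.9-slim-bullseye -t myapp .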

Section 2- Base Image

We have selected python:3.9-slim-buster as the base image. While choosing a base image, a key consideration is its size, as a bigger base image results in a bigger Docker image. Developers often prefer the alpine flavor due to its small size, and for languages such as Java or Scala it is, in most cases, the right way to go. Alpine is a minimal Docker image based on Alpine Linux.

But for Python applications, many requisite libraries are not supported by the alpine flavor out of the box. This means you end up building dependencies from source on alpine, which results in a bigger image, longer build times, and application incompatibilities. The slim flavor sits between alpine and the full version and hits the sweet spot in terms of size and compatibility.
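You can check the size difference yourself by pulling the different flavors and comparing them; the exact numbers vary by version, so treat them as indicative:

# Pull three flavors of the same Python version
docker pull python:3.9-alpine
docker pull python:3.9-slim-buster
docker pull python:3.9

# Compare their sizes; alpine is the smallest, the full image the largest
docker images python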

If you want to dig deep into this topic, this article can get you started.

Section 3- Python Interpreter Flags

# Section 3- Python interpreter flags
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1

We set two flags, PYTHONUNBUFFERED and PYTHONDONTWRITEBYTECODE, to non-empty values to modify the behavior of the Python interpreter.

When set to a non-empty value, PYTHONUNBUFFERED sends Python output straight to the terminal (standard output) without buffering. This helps in two ways. Firstly, it lets us see logs in real time. Secondly, in case of a container crash, it ensures that you receive the output, and hence the reason for the failure.

We also set PYTHONDONTWRITEBYTECODE to a non-empty value. This ensures that the Python interpreter doesn't generate .pyc files, which, apart from being useless in our use case, can also lead to a few hard-to-find bugs due to caching.
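For a quick intuition, these two ENV flags correspond to the interpreter's -u (unbuffered) and -B (don't write bytecode) command-line switches; baking them into the image saves passing the switches on every invocation:

# Equivalent effect without the ENV flags (for illustration only)
python -u -B manage.py runserver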

Section 4- Compiler and OS libraries

# Section 4- Compiler and OS libraries
RUN apt-get update && apt-get install --no-install-recommends -y \
    build-essential \
    # psycopg2 dependencies
    libpq-dev \
    # Translations dependencies
    gettext \
    # cleaning up unused files
    && apt-get purge -y --auto-remove -o \
    APT::AutoRemove::RecommendsImportant=false \
    && rm -rf /var/lib/apt/lists/*

Commands in this section install compilers, tools, and OS-level libraries. For example, apt-get update, as you may already know, updates the list of available packages. It doesn't upgrade the packages themselves; it just fetches their latest versions.

apt-get install -y --no-install-recommends build-essential \
libpq-dev gettext

build-essential is a meta-package that pulls in what is necessary to compile software. This includes, but is not limited to, the GNU debugger, g++/the GNU compiler collection, and a few other tools and libraries. The complete list of build-essential packages can be found here. As per the official documentation, libpq-dev contains:

Header files and static library for compiling C programs to link with the libpq library in order to communicate with a PostgreSQL database backend.

Since libpq-dev contains libraries concerning the PostgreSQL database, feel free to drop it if you are using some other database, and install the requisite packages for that database instead.

The flag --no-install-recommends skips the installation of recommended-but-not-required packages. This is done to reduce the Docker image size. Please note that dependencies that are mandatory for our packages are still installed. gettext is a Linux package that facilitates translations. If you want to know more, refer here.

apt-get purge -y --auto-remove -o \
    APT::AutoRemove::RecommendsImportant=false

In this command, we are cleaning up our package repository by removing orphaned packages we don’t need.

rm -rf /var/lib/apt/lists/*

Cleaning /var/lib/apt/lists/* can easily reduce your Docker image size by ~5%-25%. The package lists fetched by apt-get update are no longer needed once build-essential, libpq-dev, and gettext are installed. Hence, in this step, we clean out all the files it added.
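If you want to see where the image size actually comes from, docker history breaks the final image down layer by layer; myapp is a placeholder tag here:

# Show the size contributed by each layer of the built image
docker history myapp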

Section 5- Project libraries and User Creation

In this section, we install the project libraries listed in requirements.txt and create a non-root user for security purposes.

COPY requirements.txt /tmp/requirements.txt

If you notice, instead of copying the whole project, which we do eventually in Section 6, we only copy requirements.txt, and then install all the libraries listed in it. We do this because Docker works on the principle of layers: if a layer changes, all subsequent layers are re-processed. Copying only requirements.txt ensures that the installation layer is reused across Docker builds; it is invalidated only when requirements.txt itself changes. Had we copied the entire project here, as we do in Section 6, each new commit or code change would invalidate these layers and trigger a re-installation of all the libraries.
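You can observe this caching behavior directly. A sketch, assuming the image is tagged myapp (the touched file is just an example):

# First build: the pip install layer actually runs
docker build -t myapp .

# Change application code only; requirements.txt stays untouched
touch myproject/views.py

# Rebuild: the pip install layer is served from cache,
# only the project COPY and later layers re-run
docker build -t myapp .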

RUN pip install --no-cache-dir -r /tmp/requirements.txt \
    && rm -rf /tmp/requirements.txt

In this stage, we install all the project dependencies listed in requirements.txt. The --no-cache-dir flag disables caching during pip installation. By default, pip caches installation files (.whl, etc.) and source files (.tar.gz, etc.). In a Docker build we never reinstall from this cache, so disabling it reduces the image size. We then remove the requirements file we copied to the /tmp directory.

useradd -U app_user

Here, we create a non-root user, app_user, using the useradd command. By default, Docker runs container processes as root inside the container. This is bad practice, since attackers can gain root access to the Docker host if they manage to break out of the container (source). The -U flag creates a user group with the same name.

install -d -m 0755 -o app_user -g app_user /app/static

At the end of the section, we create the folder /app/static and give our user app_user ownership of it. Django will use this folder to collect all the static resources of our project when we run python manage.py collectstatic.
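Once the image is built, you can confirm that it does not run as root. A quick check, assuming the image is tagged myapp:

# Bypass the entrypoint and print the effective user
docker run --rm --entrypoint id myapp
# Output should resemble: uid=1000(app_user) gid=1000(app_user) groups=1000(app_user)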

Section 6- Code and User Setup

WORKDIR ${APP_HOME}

We start this section by setting the working directory. The WORKDIR instruction sets the working directory for subsequent commands. Since we don't want to copy our code to the root folder, we copy it to the /app folder.

USER app_user:app_user

Then we set the non-root user created at the end of Section 5 as the user for subsequent commands. As mentioned earlier, this improves our security.

COPY --chown=app_user:app_user . ${APP_HOME}

With everything set up, we copy the project into the Docker image. Any code change will only invalidate this and subsequent layers, resulting in reduced image build times. While copying, we give ownership of the content to the app_user created in Section 5.

RUN chmod +x ./*.sh && chmod +x ./postgresql/maintenance/*.sh && \
    chmod +x ./postgresql/maintenance/_sourced/*.sh

At the end of this section, we give executable permission to our two script files, entrypoint.sh and start.sh, and to all the scripts we use to maintain our database. We will go into detail about these two files after Section 7.

Section 7- Docker Run Checks and Configurations

ENTRYPOINT [ "./entrypoint.sh" ]

The ENTRYPOINT of a Dockerfile is always executed, so we use it for validations and idempotent Django commands such as collectstatic. The CMD is overridden by the command section in a docker-compose file, so the value given here serves as a default.

CMD [ "./start.sh", "server" ]
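Because ENTRYPOINT always runs and CMD is only a default, the same image can launch any of our process types. For example, assuming the image is tagged myapp and its service dependencies are reachable:

# entrypoint.sh still runs first; then a Celery worker starts instead of the server
docker run --rm myapp ./start.sh worker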

For a better understanding of what we are trying to do with ENTRYPOINT and CMD, let's look at the corresponding files, entrypoint.sh and start.sh, which they invoke.

entrypoint.sh

#!/bin/bash

set -o errexit
set -o pipefail
set -o nounset

postgres_ready() {
    python << END
import sys
from psycopg2 import connect
from psycopg2 import OperationalError
try:
    connect(
        dbname="${DJANGO_POSTGRES_DATABASE}",
        user="${DJANGO_POSTGRES_USER}",
        password="${DJANGO_POSTGRES_PASSWORD}",
        host="${DJANGO_POSTGRES_HOST}",
        port="${DJANGO_POSTGRES_PORT}",
    )
except OperationalError:
    sys.exit(-1)
END
}

redis_ready() {
    python << END
import sys
from redis import Redis
from redis import RedisError
try:
    redis = Redis.from_url("${CELERY_BROKER_URL}", db=0)
    redis.ping()
except RedisError:
    sys.exit(-1)
END
}

until postgres_ready; do
    >&2 echo "Waiting for PostgreSQL to become available..."
    sleep 5
done
>&2 echo "PostgreSQL is available"

until redis_ready; do
    >&2 echo "Waiting for Redis to become available..."
    sleep 5
done
>&2 echo "Redis is available"

python3 manage.py collectstatic --noinput

exec "$@"

Let's look at the above entrypoint.sh, though in less detail than the Dockerfile.

Docker executes shell commands with /bin/sh by default. On most systems, /bin/sh is a symbolic link; on Debian and Ubuntu it points to dash rather than bash, so assuming bash features are available can be wrong (source). Hence we explicitly set the script's shebang to #!/bin/bash.

Section 1- Bash options

set -o errexit
set -o pipefail
set -o nounset

Here, we set a few bash options. The errexit option makes the script fail on the first error instead of proceeding, which is not the default bash behavior. The pipefail option means that if any element of a pipeline fails, the pipeline as a whole fails. The nounset option forces an error whenever an unset variable is expanded.
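To see nounset in action, paste these two lines into a bash shell; the NOT_DEFINED variable is deliberately never set:

set -o nounset
echo "${NOT_DEFINED}" # bash aborts with: NOT_DEFINED: unbound variable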

Section 2: Health of dependent services

Earlier, we assumed that our application uses a PostgreSQL database and Redis as the Celery backend. In this section, we check whether both services are up, and if not, we wait for them to come up.

Similarly, you may add checks for other critical services that are necessary for the normal functioning of your application.
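As an illustration, a readiness check for some other TCP service could follow the same pattern using bash's built-in /dev/tcp redirection. The SMTP_HOST and SMTP_PORT variables below are hypothetical:

smtp_ready() {
    # Succeeds once a TCP connection to the (hypothetical) SMTP relay can be opened
    timeout 1 bash -c "</dev/tcp/${SMTP_HOST}/${SMTP_PORT}" 2>/dev/null
}

until smtp_ready; do
    >&2 echo "Waiting for SMTP to become available..."
    sleep 5
done
>&2 echo "SMTP is available"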

Section 3- Idempotent Django commands

python3 manage.py collectstatic --noinput

There are several Django management commands that we need to run before starting the Django server, such as collectstatic, which collects all static resources. Django's makemigrations and migrate commands, however, should not be run at container start-up, for the following reasons:

  1. In dev environments you typically spin up one server, but in production you're likely spinning up more than one. So now, instead of one process doing the schema migration, you have multiple processes trying to run identical schema migrations at the same time.

     Depending on your database, the migration tool you're using, and the kind of migration you're doing, parallel schema upgrades might break your database in a variety of ways. You don't want a broken database!

  2. If you always do schema upgrades as part of the application startup, you also end up mentally coupling schema migrations and code upgrades. In particular, you'll start assuming that you only ever have new code running with the latest schema.

     Why is that assumption a problem? From most to least common:

     1. Sometimes you need to roll back a broken code upgrade. If you assume you always have new code with a new schema, you can end up in a situation where your new code is broken, but you can't easily roll back to older code because you've done an irreversible schema change.
     2. To minimize downtime on upgrades, you want a brief moment where both the old and new versions of your application run in parallel. If your schema migration breaks the old code, you can't do that.
     3. To catch bugs in new code, you might want to do a canary deploy. That is, upgrade only one or two of your many processes and see if they break.

The only thing to keep in mind is that all these commands must be idempotent, i.e. multiple runs of these commands should have no side effects on the state of our application. Idempotency is required because, if Kubernetes is scaling these containers for example, multiple instances will run the same commands and could otherwise interfere with each other.

In fact, any idempotent operation can be executed here, not just Django commands.
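If you do need to migrate, run it as an explicit one-off step so that exactly one process performs it. A sketch, assuming a docker-compose service named django:

# Run migrations once, from a single short-lived container
docker-compose run --rm django python manage.py migrate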

start.sh

We use the start.sh file to leverage the same Dockerfile and run containers for the Django server, Celery workers, Celery Beat, and Flower, by passing different arguments to each.

#!/bin/bash

cd /app

if [ $# -eq 0 ]; then
    echo "Usage: start.sh [PROCESS_TYPE](server/beat/worker/flower)"
    exit 1
fi

PROCESS_TYPE=$1

if [ "$PROCESS_TYPE" = "server" ]; then
    if [ "$DJANGO_DEBUG" = "true" ]; then
        gunicorn \
            --reload \
            --bind 0.0.0.0:8000 \
            --workers 2 \
            --worker-class eventlet \
            --log-level DEBUG \
            --access-logfile "-" \
            --error-logfile "-" \
            dockerapp.wsgi
    else
        gunicorn \
            --bind 0.0.0.0:8000 \
            --workers 2 \
            --worker-class eventlet \
            --log-level DEBUG \
            --access-logfile "-" \
            --error-logfile "-" \
            dockerapp.wsgi
    fi
elif [ "$PROCESS_TYPE" = "beat" ]; then
    celery \
        --app dockerapp.celery_app \
        beat \
        --loglevel INFO \
        --scheduler django_celery_beat.schedulers:DatabaseScheduler
elif [ "$PROCESS_TYPE" = "flower" ]; then
    celery \
        --app dockerapp.celery_app \
        flower \
        --basic_auth="${CELERY_FLOWER_USER}:${CELERY_FLOWER_PASSWORD}" \
        --loglevel INFO
elif [ "$PROCESS_TYPE" = "worker" ]; then
    celery \
        --app dockerapp.celery_app \
        worker \
        --loglevel INFO --loglevel INFO -P gevent --concurrency=100

fi

In the above script, we use gunicorn to run our application server, which is the recommended approach for production. The python manage.py runserver command should only be used in a development setup.
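This is also how one image backs all four services in a docker-compose file: each service overrides the default CMD with its own process type. With plain docker run, and myapp as a placeholder tag, it would look like this:

# One image, four process types
docker run -d myapp ./start.sh server
docker run -d myapp ./start.sh worker
docker run -d myapp ./start.sh beat
docker run -d myapp ./start.sh flower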

PostgreSQL Docker Configuration

Now let's take a look at our other Dockerfile, for our database.

# Step 1- Set arguments used throughout the build
ARG POSTGRES_VERSION=13.3-alpine

# Step 2- Set the postgresql base image
FROM postgres:${POSTGRES_VERSION}

# Step 3- Copy Postgresql configuration files and maintenance scripts
COPY ./postgresql/maintenance /usr/local/bin/maintenance
RUN chmod +x /usr/local/bin/maintenance/*

RUN mv /usr/local/bin/maintenance/* /usr/local/bin \
    && rmdir /usr/local/bin/maintenance

Step 1 & Step 2 Argument Setup and Image Version

In steps 1 and 2, we set up the version argument and pull the PostgreSQL base image from Docker Hub.

Step 3 Copy Postgresql configuration files and maintenance scripts

In step 3, we copy all the config files and maintenance scripts for PostgreSQL to /usr/local/bin. This directory is where user-installed binaries live on Linux and is on the PATH, which makes it easy to invoke those scripts by name. Let's look at those scripts.

backup.sh and backups.sh

These two bash scripts hold our backup-related logic.

#!/usr/bin/env bash

set -o errexit
set -o pipefail
set -o nounset

working_dir="$(dirname ${0})"

source "${working_dir}/_sourced/constants.sh"
source "${working_dir}/_sourced/messages.sh"

message_welcome "Backing up the '${POSTGRES_DB}' database..."

if [[ "${POSTGRES_USER}" == "postgres" ]]; then
    message_error "Backing up as 'postgres' user is not supported. Assign 'POSTGRES_USER' env with another one and try again."
    exit 1
fi

export PGHOST="${POSTGRES_HOST}"
export PGPORT="${POSTGRES_PORT}"
export PGUSER="${POSTGRES_USER}"
export PGPASSWORD="${POSTGRES_PASSWORD}"
export PGDATABASE="${POSTGRES_DB}"

backup_filename="${BACKUP_FILE_PREFIX}_$(date +'%Y_%m_%dT%H_%M_%S').sql.gz"

pg_dump | gzip > "${BACKUP_DIR_PATH}/${backup_filename}"

message_success "'${POSTGRES_DB}' database backup '${backup_filename}' has been created and placed in '${BACKUP_DIR_PATH}'."

We export the environment variables needed to connect to the database and build a backup filename from the current date.

pg_dump | gzip > "${BACKUP_DIR_PATH}/${backup_filename}"

This command dumps the entire content of the database, compresses it, and writes it to a backup file in the backup directory.
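The companion backups.sh script simply lists the backups created so far. Here is a minimal sketch of it, assuming it reuses the same _sourced helpers and the BACKUP_DIR_PATH constant:

#!/usr/bin/env bash

set -o errexit
set -o pipefail
set -o nounset

working_dir="$(dirname ${0})"

source "${working_dir}/_sourced/constants.sh"
source "${working_dir}/_sourced/messages.sh"

message_welcome "These are the backups you have got:"

ls -lht "${BACKUP_DIR_PATH}"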

To run our containers, we run:

docker-compose up

The other essential commands are

# To backup the database
docker-compose exec postgres backup

# To restore created backup
docker-compose exec postgres restore backup_2021_03_13T09_05_07.sql.gz

Part I of this series is fairly long, so we will leave it there. In the next article, we will build our Docker image and push it to the DigitalOcean container registry, and set up CI/CD for the build process. After that, we will deploy the image to the DigitalOcean App Platform and set up CI/CD for deployment using GitHub Actions.

Production-Ready Docker Configuration With DigitalOcean Container Registry Part II

To get the full code and try it yourself, it is on GitHub here.

Please comment on any gaps or improvements in the above setup. Follow me for more articles like this one.

Use the following link and get 100 USD in DigitalOcean free credit.

