Rose Day
Posted on April 27, 2018
After writing my last post about starting to use Docker for Jupyter notebooks with Python, I got a recommendation to learn docker-compose to replace lengthy docker run commands. As I continued working, I started researching docker-compose to see how I could improve upon the development environment I was constructing with Docker and the Intel Python distribution.
docker-compose
While researching docker-compose, I learned that Compose is a Docker tool used to define and run multi-container applications, in which a Compose file defines the services needed by the application. By setting up an application in this manner, all services can be started with a single command. With this, applications can be created in four simple steps:
- Set up a Dockerfile to define the application environment.
- Add a requirements file to download Python packages.
- Create a docker-compose.yml file to define services that make up the application. These will run together in an isolated environment.
- Run docker-compose build to build the application and docker-compose up to start and run the application.
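Following these steps, the project directory stays small: just the three files above, next to whatever notebooks get mounted in. A minimal layout might look like this (the folder name is just an example):

project/
├── Dockerfile
├── requirements.txt
└── docker-compose.yml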
Set Up the Dockerfile and Requirements
To start, let's look at the Dockerfile I started last time. In this Dockerfile I used miniconda as the base image with Intel Python2 installed. As mentioned previously, the Intel distribution has both Python 2 and Python 3 images in Docker with core or full configurations. The core configurations contain NumPy/SciPy with dependencies while full contains everything that Intel distributes.
Since I typically use Python 2 for classes, I picked the Intel Python2 version to install on top of miniconda. With this, remember to accept the End-User License Agreement by setting the environment variable below to yes.
# Set the base image using miniconda
FROM continuumio/miniconda3:4.3.27
# Set environment variable(s)
ENV ACCEPT_INTEL_PYTHON_EULA=yes
With the initial image in place and the environment variable set, I set my working directory to /home/notebooks. As you may have seen, I used /home/notebooks as the mounted volume in the last post. Through some research, I realized I could also set this as the working directory using the WORKDIR instruction, which places me right at my files instead of a few folders above them. This just saves some clicks when I open a notebook.
# Set working directory
WORKDIR /home/notebooks
After setting the working directory, I adapted my RUN statements by adding a requirements file. This file is added to the /app/ directory of the container and then passed to pip later on. I found this to be one of the most common methods of handling installs for Python dependencies. It also keeps the RUN command shorter by allowing the Python dependencies to live in a separate file.
# Add requirements file
ADD requirements.txt /app/
# Update, clean, and install packages
RUN apt-get update \
&& apt-get clean \
&& apt-get update -qqq \
&& apt-get install -y -q g++ \
&& conda config --add channels intel \
&& conda install -y -q intelpython2_full=2018.0.1 python=2 \
&& pip install --upgrade pip \
&& pip install -r /app/requirements.txt
Here is the small list of Python dependencies I have used for the image thus far.
mapboxgl
seaborn
timestring
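pip will happily install the latest version of each of these, but for reproducible builds it is common to pin versions in requirements.txt. The version numbers below are purely illustrative, not the ones from my setup:

mapboxgl==0.5.1
seaborn==0.8.1
timestring==1.6.2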
Lastly, after setting up the requirements file, I added a line for CMD. The CMD instruction sets the default command for the container, which runs unless another command is specified at startup. Knowing this, the default command here is to run jupyter notebook.
# Run shell command for notebook on start
CMD jupyter notebook --port=8888 --no-browser --ip=0.0.0.0 --allow-root
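Because CMD only supplies a default, it can be overridden at runtime. For example, to get a shell inside the container instead of the notebook server (assuming the notebook service defined in the Compose file below):

docker-compose run notebook bash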
Define Services
After adapting the Dockerfile, the next step was to define services in a YAML file. The purpose of this file is to set up multiple services with Docker, each in its own container, alongside the environment you already specified in your Dockerfile. The YAML file configures these services, which can then be created and started with one command.
With this, the YAML file I created currently defines a single service, which sets up Jupyter Notebook for Intel Python2. This container is given the name python_notebook and exposes port 8888. The last thing set up with this container was a volume to mount all my notebooks at /home/notebooks.
version: '3'
services:
  notebook:
    container_name: python_notebook
    labels:
      description: Intel Python 2 using Jupyter Notebooks
      name: jupyter notebook
    ports:
      - "8888:8888"
    volumes:
      - ~/Documents/notebooks:/home/notebooks
    build: .
Later on, this file can be adapted to add database services, other data analytics tools, and more.
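For example, a database could be added as a second service in the same file. The sketch below is hypothetical; the image tag and credentials are placeholders, not something from my setup:

services:
  notebook:
    # ... service as defined above ...
  database:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: example
    ports:
      - "5432:5432"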
Building and Running docker-compose
After updating the Dockerfile, adding a requirements.txt file, and creating a YAML file for services, we are ready to run our Intel Python2 container again. The first command to run when using docker-compose is the build command. This command, as seen below, builds (or rebuilds) the images for the defined services. It should be run anytime you update your services so that they can be rebuilt before use. Note: This may take a few minutes the first time you run it.
docker-compose build
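If a rebuild ever picks up stale layers from Docker's cache, the build can be forced from scratch with a standard docker-compose flag:

docker-compose build --no-cache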
After building all the services, the container can be brought up. The command, as shown below, will start all services you have defined for this project. For this particular example, it will run the python_notebook container specified above.
docker-compose up
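Two standard variations are worth knowing: -d starts the services in the background (detached), and --build rebuilds the images before starting, combining both steps into one command.

docker-compose up -d
docker-compose up --build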
After running this command with the example above, you will receive a URL to open Jupyter Notebook. Copy and paste it into a browser to open the notebook server. When you have finished with your application, the command below can be used to shut down the services.
docker-compose down
This will either need to be run in another terminal window, or you can press Ctrl+C in your current terminal to exit instead.
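If you are ever unsure what is still running before shutting things down, docker-compose can list the state of the services it manages:

docker-compose ps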
References
Intel Optimized Packages for the Intel Distribution for Python
Docker
Docker Compose Version 3
Docker Cloud Stack YAML
Cover image sourced from Docker Wallpapers