Docker - Set Up a Local JS and Python Development Environment


Gbenga Kusade

Posted on December 2, 2023


Developing locally for modern systems is difficult because they typically incorporate several services, and setting up a local development environment is a common pain point for developers. One major problem is the "this works on my computer" issue, which arises when an application runs well on a developer's local machine but fails when deployed to other environments. These situations make it harder to collaborate and to deploy effectively.

If you have run into this problem, this post is for you. In this tutorial, we'll use Docker to set up a local development environment, and by the time you finish reading you will know how to create and configure one for Node.js and Python services.

Prerequisites

Install the following:

Docker and Docker Compose
Git (to clone the sample repository)
make (to run the Makefile shortcuts)

Example Architectural Design

Suppose we have a set of services with the following architecture.

(Diagram: sample architecture design)

As we can see from the diagram, we have:

Node - A NodeJS service running on port 5000
Py - A Python service running on port 8000

Setting it up

We'll establish a basic Python and Node.js service setup as described above. To follow along, clone the repository using the following commands.



git clone https://github.com/jagkt/local_dev_env.git
cd local_dev_env



Now you should have the following project structure to start with:

.
├── node
│   ├── index.js
│   └── package.json
├── py
│   ├── Dockerfile
│   ├── requirements.txt
│   └── main.py
├── LICENSE
├── Makefile
├── README.md
└── docker-compose.yml

To spin up the services, run the following commands from the project directory:



make up  
make run_con #check the services are up and running



You can access both the py and node applications from the browser:

py: http://localhost:8000 or
node: http://localhost:5000



Now, let's dive deeper into the project setup details.

Build a Docker Image for the Python Environment

Here, we'll build a Docker image for the Python environment from scratch, based on the official Python image, and run a FastAPI application in it. First, let's define the package requirements.

Create a directory named "py". Inside it, create a requirements.txt file with the dependencies below, which are needed to run the FastAPI application.



fastapi
uvicorn[standard]



Next, create a main.py with the FastAPI application code below:



from fastapi import FastAPI
app = FastAPI()

@app.get("/")
async def home():
    return {"message": "Hello from py_app!"}

@app.get("/hello/{user}")
async def greetings(user):
    return {"Hello": user}


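Before containerizing anything, you can optionally sanity-check the app directly on your host. This is a quick local run, assuming Python 3 and pip are installed on your machine; the uvicorn invocation mirrors the one used later in the compose file.

pip install -r requirements.txt
uvicorn main:app --reload --port 8000

Then visit http://localhost:8000 to confirm the greeting response, and stop the server before moving on.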

Containerize our Python environment

To generate a Docker image, we must first create a Dockerfile with the instructions needed to build it. The Docker builder then processes the Dockerfile and produces the image. Finally, a simple docker run command creates a container from that image and launches the Python service.

Dockerfile

One way to get our Python code running in a container is to pack it as a Docker image and then run a container based on it. The steps are sketched below.

(Diagram: the docker build process - credit: docker.com)

Now, in the py directory, create a file named Dockerfile with the following content:




# Pull the official Python base image
FROM python:3.11.1-slim

# Set the working directory
WORKDIR /home/py/app

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=1 \
    PIP_NO_CACHE_DIR=1

# Install dependencies
COPY ./requirements.txt .
RUN pip install --upgrade -r requirements.txt

# Copy the project source code
COPY . /home/py/app

# Default command: run the FastAPI app with uvicorn
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]



Let's take a closer look at the Dockerfile:
FROM: specifies the slim variant of the official Python 3.11 image as the base image.
WORKDIR: sets the working directory (/home/py/app) in which all subsequent instructions run.
ENV: sets variables that optimize the behavior of Python and pip inside the container:
-- PYTHONDONTWRITEBYTECODE=1 -- prevents Python from writing .pyc bytecode files
-- PYTHONUNBUFFERED=1 -- allows statements and log messages to appear immediately
-- PIP_DISABLE_PIP_VERSION_CHECK=1 -- disables the pip version check to reduce run time and log spam
-- PIP_NO_CACHE_DIR=1 -- disables the pip cache to reduce the Docker image size
COPY: copies the requirements.txt file from the host to the container's WORKDIR.
RUN: installs the package dependencies listed in requirements.txt.
COPY: copies the application source code from the host to the WORKDIR.
CMD: defines the default command that starts the FastAPI app with uvicorn when a container runs from this image.

Now, let's build the container image, run a container, and test the Python application.
To build the image, switch to the py directory and run the following command:



docker build -t python-application:0.1.0 .



Check your image



docker image ls


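If you want to confirm the environment variables baked into the image, a quick optional check (using the image tag built above) is to run the env command in a throwaway container:

docker run --rm python-application:0.1.0 env | grep -E 'PYTHON|PIP'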

Then, proceed to create a container. To access your application from your host machine, you need to do "port forwarding" to forward or proxy traffic from a specific port on the host machine to the port your application listens on inside the container. Here, we map port 4000 on the host to port 8000 inside the container (the port uvicorn listens on by default).



docker run -p 4000:8000 python-application:0.1.0



Check your container



docker container ls



Finally, access your application with a curl command, from the browser, or using Postman.



curl http://127.0.0.1:4000




curl http://127.0.0.1:4000/hello/James


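If everything is wired up correctly, the two requests should return responses along these lines (based on the handlers defined in main.py):

{"message":"Hello from py_app!"}
{"Hello":"James"}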

Build the Node Environment

First, create a new directory named "node" in the "local_dev_env" directory, and create an index.js file in it with the code below.



const express = require("express");
const app = express();
const PORT = process.env.PORT || 5000;

app.get("/", (req, res) => {
    res.send("Hello from node_app");
});

app.listen(PORT, () => {
    console.log(`Server running on port ${PORT}`);
});



Also, create a new file named package.json in the node directory with the code below:



{
  "name": "node",
  "version": "1.0.0",
  "description": "A sample nodejs application",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "author": "admin@admin.com",
  "license": "MIT",
  "engines": {
    "node": ">=10.1.0"
  },
  "dependencies": {
    "express": "^4.18.2"
  }
}


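As with the Python app, you can optionally try the Node app directly on your host before containerizing it (assuming Node.js and npm are installed locally). The two commands below are exactly what the compose service will run later inside the container:

cd node
npm install
npm start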

Now that we have our basic script to run the Node application, we'll set up its base image. This time we will not write a Dockerfile as we did for the Python environment; instead, we will pull an image directly from the Docker Hub registry.
Because we have multi-container services, it's best to orchestrate them from a single file rather than building and running each service individually, which becomes tedious as the number of services grows. Spinning up our containers with Docker Compose is handy in these situations. Note that Docker Compose does not replace the Dockerfile: a Dockerfile is still what builds a Docker image, and images are what containers run from.
Docker Compose lets us operate the Node app alongside other services (assuming we have many services we need to spin up); in our case, alongside our py service.

Docker Compose

In the "local_dev_env" directory, create a "docker-compose.yml" file with the code below:



version: '3'
services:
  py_app:
    build: ./py
    container_name: py
    command: uvicorn main:app --host 0.0.0.0 --reload
    environment:
      - FastAPI_ENV=development
      - PORT=8000
    ports:
      - '8000:8000'
    volumes:
      - ./py:/home/py/app 

  node_app:
    image: node:12.3-alpine
    container_name: node
    user: "node"
    environment:
      - NODE_ENV=development
      - PORT=5000
    command: sh -c "npm install && npm start"
    ports:
      - '5000:5000'
    working_dir: /home/node/app
    volumes:
      - ./node:/home/node/app:cached



As seen in the compose file, two services are defined: py_app and node_app (with container names py and node).

You'll notice some keys that we didn't use earlier in the Dockerfile. The py_app service builds its image from the Dockerfile in the py directory we created above via the build key, is assigned the container name "py" with the container_name key, and, on startup, runs the FastAPI web server via the command key (uvicorn main:app --host 0.0.0.0 --reload).
The volumes key mounts the ./py directory on the host to the /home/py/app directory inside the container, allowing you to modify the code on the fly without rebuilding the image. The environment key sets the FastAPI_ENV and PORT variables inside the container, signalling that the app runs in development mode on port 8000.
Finally, the ports key binds port 8000 on the host to port 8000 inside the container, the default port of the uvicorn web server.
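Because of this bind mount and uvicorn's --reload flag, edits to files under ./py are picked up without rebuilding the image. One quick way to watch this happen (using the container name "py" from the compose file) is to follow the container logs while you save a change:

docker logs -f py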

Similar to the py_app service, the node_app service uses most of the same keys, but instead of building its image from a Dockerfile, it pulls the public node:12.3-alpine image from the Docker Hub registry via the image key. The user key runs the container as the unprivileged node user, following the principle of least privilege, and the working_dir key sets the working directory inside the container to /home/node/app.
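If you later prefer to build the node image yourself rather than pulling it directly, a minimal Dockerfile sketch could look like the one below. This file is not part of the repository; it simply mirrors what the compose service does with the public image:

# node/Dockerfile (hypothetical - the compose file pulls node:12.3-alpine directly instead)
FROM node:12.3-alpine
WORKDIR /home/node/app
# Install dependencies first to benefit from layer caching
COPY --chown=node:node package.json .
RUN npm install
# Copy the application source
COPY --chown=node:node . .
# Run as the unprivileged "node" user (principle of least privilege)
USER node
CMD ["npm", "start"]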

Now, let's build and run our services.
To start the py and node containers, build and run the stack with the compose file from your project directory:



docker-compose up -d



To verify that the images have been created, run the following command. The image built for the py service and the pulled node image should be listed in the CLI output.



docker image ls 



To verify that all services are running, run the following command. This will list all existing containers in the CLI.



docker container ls --all



Let's try one more thing: open a bash shell inside the py container and list the contents of its working directory (/home/py/app).



docker exec -it py bash
ls -l  #list the directory in the py container


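You can also run the same check non-interactively with a single command, using the same container name:

docker exec py ls -l /home/py/app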

Makefile

Here, we took an additional step and created a Makefile, which simplifies working with these tools by giving us shortcuts instead of lengthy commands. Each command is defined as a target in the Makefile and invoked with the make <target> syntax.

We can spin up all the containers, execute commands from within the container, check logs, and spin down the containers as shown below.



make up
make py # execute a shell from our py container
make py_log  #output logs from py container
make node_log #output logs from node container
make down  #spin down our services, including the network created


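The actual Makefile lives in the repository and may differ, but a minimal sketch with targets matching the commands above could look like this (note that recipe lines in a Makefile must be indented with a tab character):

.PHONY: up down run_con py py_log node_log

up:
	docker-compose up -d

down:
	docker-compose down

run_con:
	docker container ls --all

py:
	docker exec -it py bash

py_log:
	docker logs -f py

node_log:
	docker logs -f node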

Conclusion

I hope this tutorial has given you a solid idea of how to set up your local development environment. In summary, we covered the following:

  • Building a Docker image and running a container from a Dockerfile.
  • Spinning up several Docker containers using Docker Compose.
  • Using a Makefile to simplify the execution of lengthy commands.

UP NEXT: let's automate our build process.


