How to Create a Local Development Environment with Docker Compose

Robert Nemet

Posted on April 3, 2023

As a developer, when you work on a service, you face a problem with the working environment. And when I say working environment, I do not mean IDEs, stacks, OS, libraries, etc. I mean the environment where our services live.

These days, our services are usually packed inside containers and deployed into some kind of distributed system. Most of the containers and other moving parts are controlled by Kubernetes, Nomad, or a similar orchestration system.

However different these orchestrators are, they all manage containerized services. They do more than just control containers, of course, but what matters here is that they own the containers.

The Problem

So, I’m developing a feature for some service. As soon as I check out the code, I realize that the service has external dependencies: other services, some storage (a database), and a messaging system (Kafka).

In an ideal world, I would not care much. I would add an API endpoint and some DTOs, and make some calls to external services. To be sure everything works, I would add some unit tests.

Later, when I finished the code, I would push it to the working environment to test the service's integration with other components. And everything would be fine.

In the real world, it isn’t like that. Bugs, wrong assumptions, and freshly discovered constraints show up as soon as the newly made feature is actually used.

The pain here is waiting for the code to be compiled and deployed. I make a series of small changes, one by one, and after each change I push the code to the working environment. I enter a cycle of commit, push, build, and test, which is very slow when it goes through CI.

Idea: Replicate the Required Dependencies

Simple. Replicate required dependencies.

But your dependencies have dependencies of their own, so you have to replicate those too, and those can have further dependencies, and so on. Does that mean copying the whole system? No, that wouldn’t be smart. The idea is to figure out the minimal set of required dependencies and replicate only that.

The good part is that the replica doesn’t need to match the working environment exactly. It can run with less memory and less CPU. For example, suppose the working environment runs Postgres 14. You’ll use the same Postgres version, but a much lighter instance (with less memory and CPU), and the data stored in the replica would be only a fragment of the original.
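For instance, a lighter replica can be capped directly in the compose file. This is only a sketch: the service name pg, the image tag, the credentials, and the limit values are my assumptions, not this article's actual configuration:

pg:
  image: postgres:14
  environment:
    POSTGRES_USER: docker
    POSTGRES_PASSWORD: password
  deploy:
    resources:
      limits:
        cpus: "0.5"
        memory: 256M

Recent versions of Docker Compose honor deploy.resources.limits when running locally; with older versions you may need the legacy mem_limit and cpus keys instead.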

How to do it?

Thanks to containers, you can quickly run almost any program without messing with installation. If you do not believe me, try to install a Postgres database on your local machine and then do the same with Docker (run a containerized Postgres). Then compare the experience, especially if you need to run several instances of the same service in different versions.
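For example, spinning up a throwaway Postgres, or even two different versions side by side, takes a couple of commands (the container names, password, and ports here are arbitrary examples):

docker run --name local-pg14 -e POSTGRES_PASSWORD=password -p 5432:5432 -d postgres:14
docker run --name local-pg15 -e POSTGRES_PASSWORD=password -p 5433:5432 -d postgres:15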

So, containers. IMHO, Docker Compose is perfect for this job.

Example

Let me describe the scenario:

  • my target service, named app
  • a service dependency named next
  • a Postgres DB as a dependency of app
  • a MariaDB as a dependency of next
  • Kafka as a dependency of both

Let me quickly describe configurations.

Services app and next

These two services share a code base. Before the containers can start, an image has to be built. The image is built with a two-stage Docker build:

# Build stage: compile the binary
FROM golang:latest AS builder

WORKDIR /app
COPY . /app/
RUN go mod tidy
RUN go build -o app

# Runtime stage: ship only the compiled binary
FROM golang:buster

WORKDIR /app
COPY --from=builder /app/app /app/
ENTRYPOINT [ "/app/app" ]

But their configurations are different. The app service reads its configuration from a file, while the next service gets its configuration from environment variables. In addition, the app service writes to and reads from Kafka, so it needs to wait for the broker service.

On the other side, the next service has only one dependency: MariaDB.

Service app:

app:
  container_name: echo
  build: .
  environment:
    KAFKA_BROKER: "broker:29092"
  ports:
  - 9999:9999
  volumes:
  - ./configs:/app/configs
  depends_on:
  - liquibase_pg
  - broker
  restart: always

The instruction:

build: .

It tells Docker Compose that the image needs to be built first, using the current directory as the build context.
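If the Dockerfile had a different name or lived in another directory, the longer form of build would let you say so. The file name Dockerfile.dev below is purely hypothetical:

build:
  context: .
  dockerfile: Dockerfile.dev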

Next, we instruct Docker to mount a volume from the local filesystem into the container's filesystem; this is how the app service gets its file-based configuration:

 volumes:
 - ./configs:/app/configs

And wait for dependencies:

depends_on:
- liquibase_pg
- broker

Here, liquibase_pg is a one-shot service that starts after the pg service and creates the DB schema with Liquibase. Only when the schema is created can the app service start.
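One caveat: the short form of depends_on shown above only waits for the dependency containers to be started, not for them to finish their work. If you want app to start strictly after the migration has completed, the long form with conditions is worth considering. This sketch assumes liquibase_pg exits with status 0 once the schema is in place:

depends_on:
  liquibase_pg:
    condition: service_completed_successfully
  broker:
    condition: service_started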

The service next:

next:
  container_name: beta
  build: .
  ports:
  - 8888:8888
  depends_on:
  - liquibase_maria
  environment:
    DB_HOST: maria
    DB_PORT: 3306
    DB_USER: docker
    DB_PASSWORD: password
    DB_NAME: docker
    DB_TYPE: MARIA
    APP_PORT: 8888
    TARGET: "http://echo:9999"
    ERROR_RATE: 10
    DELAY: 3000
  restart: always

The next service waits for Liquibase to create the schema in MariaDB.

The Other Services

The other services represent the dependencies: the Postgres DB, MariaDB, the Kafka broker, and Liquibase. The nice part is that setting up all of them is pretty easy: the official docs and their Docker Hub pages have instructions for each one.

What is essential is to set up the proper start-up order. For example, the Kafka broker should start before any service that uses it. The same goes for the databases.
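As an illustration, here is a rough sketch of what the pg and liquibase_pg pair could look like. The article does not show this part of the compose file, so the images, credentials, and changelog path below are my assumptions; check the postgres and liquibase/liquibase image documentation for the exact options:

pg:
  image: postgres:14
  environment:
    POSTGRES_USER: docker
    POSTGRES_PASSWORD: password
    POSTGRES_DB: docker
  ports:
  - 5432:5432

liquibase_pg:
  image: liquibase/liquibase
  depends_on:
  - pg
  volumes:
  - ./db/changelog:/liquibase/changelog
  command: >
    --url=jdbc:postgresql://pg:5432/docker
    --username=docker
    --password=password
    --changeLogFile=changelog/changelog.xml
    update

The liquibase_pg service runs the migration once and exits, which is exactly the one-shot behavior the app service waits for. The Kafka broker and MariaDB follow the same pattern: take the image vendor's reference compose snippet and trim it down.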

About Names

One thing to notice: service names and container names. They can be different, but it is better if they are the same; it will make your life easier. In my example they are different on purpose, to make the distinction clear.

You’ll use service names with the docker compose command, because that is how you interact with Compose. Containers, on the other hand, see only other containers, and inside the Compose network they address each other by name (the container name here; the service name also resolves as a network alias). For example, the app service runs in the container echo, and it periodically calls the next service's container, beta. To address that container, I have to use the container name:

TARGET="http://beta:8888"

The reason is that Docker Compose creates its own network for the project, and containers on that network are addressed by name. Look at the next service's environment variables:

  environment:
    DB_HOST: maria
    DB_PORT: 3306
    DB_USER: docker
    DB_PASSWORD: password
    DB_NAME: docker
    DB_TYPE: MARIA
    APP_PORT: 8888
    TARGET: "http://echo:9999"
    ERROR_RATE: 10
    DELAY: 3000

When everything is set up, you can start the whole system with one command:

docker compose up -d
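Once the stack is up, you can check what is running and see the two ways of addressing the app service in action (this assumes curl is available on the host and inside the next image):

docker compose ps

# from the host, through the published port
curl http://localhost:9999/

# from inside the next container, through the Compose network
docker compose exec next curl http://echo:9999/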

Rebuilding App, Logs, and Stopping

You set all this up, start working on your feature, and at some point you want to see what you did. Now you have to rebuild your app:

docker compose -f <compose file> up --detach --build <service name>

If your compose file uses the default name (compose.yml or docker-compose.yml), you can omit the -f <compose file> part.
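With the default file name in place, the rebuild command for the example above shrinks to:

docker compose up --detach --build app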

To get logs:

docker compose logs -f <service name>
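You can also follow several services at once by listing their names:

docker compose logs -f app next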

To stop the whole system:

docker compose down

To stop and delete all volumes:

docker compose down -v

Conclusion

I hope you like this idea. It could be better, but it is a good start. I am sure you can improve it. If you have any questions, feel free to ask.
