JaafarMehdi
Posted on March 14, 2023
Recently I got assigned to an old project. Luckily it had instructions in the README on how to set it up locally, but the number of steps was way too damn high. So instead of wasting half a day executing them, I wasted 2 days automating them (future devs will thank me… maybe).
I wanted to get as close as possible to a one-command local env setup.
To simplify the process, as you have guessed from the title of the article, I decided to set up docker and docker-compose. I started with an example config from awesome-compose.
Level 0: just get the app to run on docker
FROM ruby:2.6.5
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
COPY . /myapp
COPY config/database.yml.example /myapp/config/database.yml
CMD bundle exec rails s -p 8080 -b '0.0.0.0'
I won’t go into too much detail; the main concept is:
- first, copy just the Gemfiles and install the gems (this takes advantage of the image layer caching to speed up rebuilds)
- then copy the rest of the app code
- finally, create the database config file from a template, with the connection details passed in through ENV variables (a sketch of such a template follows below)
Since the goal this time is local development, asset compilation is not something we need to optimize for.
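The template could look roughly like this. This is only a sketch: the project's real config/database.yml.example and the exact variable names may differ.
# config/database.yml.example (sketch; variable names and defaults are assumptions)
default: &default
  adapter: postgresql
  encoding: unicode
  pool: 5
  host: <%= ENV.fetch("POSTGRES_HOST", "localhost") %>
  username: <%= ENV.fetch("POSTGRES_USER", "postgres") %>
  password: <%= ENV.fetch("POSTGRES_PASSWORD", "password") %>

development:
  <<: *default
  database: myapp_development

test:
  <<: *default
  database: myapp_test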
Next, the maestro orchestrating the local env: docker-compose
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: password
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    ports:
      - "3000:3000"
    environment:
      POSTGRES_HOST: db
      REDIS_URL: redis://redis
      REDIS_HOST: redis
    depends_on:
      - db
      - redis
  redis:
    image: 'redis:5-alpine'
    command: redis-server
Most of it is adapted from the awesome-compose examples.
What we want to achieve here:
- Run our app alongside a postgres database and a redis server
- Have the postgres database data persist
- Use env variables to pass the other services' names to the app for routing (on the compose network a service name doubles as its hostname; see the check below)
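Inside the compose network each service name resolves to its container, which is why POSTGRES_HOST can simply be db. If you want to see it for yourself, a one-off container can check the DNS entries (assuming getent is available in the image, which it is for the Debian-based ruby images):
docker-compose run web getent hosts db
docker-compose run web getent hosts redis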
As a warning note: avoid mapping any port that's not necessary, so no mapping of 3456:3456 for the postgres db. While it's not an issue for local dev, if you reuse a similar config on a production server it could expose you to brute-force attacks.
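If you do need to reach the database from the host machine (for a GUI client, say), binding the published port to localhost limits the exposure. A minimal sketch:
  db:
    image: postgres
    ports:
      - "127.0.0.1:5432:5432" # reachable from this machine only, not from the network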
The odd thing: you may have noticed that we have both a REDIS_URL and a REDIS_HOST env variable. That's because the default redis cache store expects the url param to start with the redis:// protocol, while our app also uses redis_store for the session_store, which wants just the host without the protocol. Chances are you don't use redis_store and can safely remove the env variable you don't need.
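For illustration, here is roughly how the two variables might be consumed; the file names, cache store and session store options below are assumptions for a typical Rails setup, not the project's actual config:
# config/environments/development.rb (sketch, assuming Rails 5.2+'s redis cache store)
config.cache_store = :redis_cache_store, { url: ENV["REDIS_URL"] }

# config/initializers/session_store.rb (sketch, assuming a redis_store-based session store)
Rails.application.config.session_store :redis_store,
  servers: [{ host: ENV["REDIS_HOST"], port: 6379, db: 0 }]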
Level 1: add some dev quality-of-life configs
So this config was enough for me to get rolling and contribute to the project, but it's not the most comfortable setup due to a couple of issues:
- Any change to the code required shutting down the docker-compose instances, rebuilding and starting the infrastructure again, and that adds up.
- I can't use binding.pry to debug.
So to cut down on the restarts needed and auto-update the code, we will use volumes:
  web:
    build: .
    volumes:
      - .:/myapp
Adding this volume mounts the current dir onto the app dir in the container, so any change we make to the code is automatically reflected. While this won't help with changes to config files, it handles most changes in real time (assuming we run with typical dev env configs).
To enable access to binding.pry, you need to add the following to the web service:
  web:
    build: .
    volumes:
      - .:/myapp
    tty: true # for binding.pry
    stdin_open: true # for binding.pry
But adding this won't let us access the debug console directly from the terminal running docker-compose; we need to find the id of the web container and attach to it:
docker ps
docker attach 75cde1ab8133
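A handy shortcut (assuming the compose service is called web) skips the manual id lookup:
docker attach $(docker-compose ps -q web)
When you are done, detach with Ctrl-P followed by Ctrl-Q so the container keeps running.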
Level 2: add some reusability and sidekiq
The last thing missing in my case was the ability to also run sidekiq. It wasn't necessary at the beginning, but as development went on we needed to work on some async tasks to handle long data import processes.
x-my-app: &my_app
  build: .
  volumes:
    - .:/myapp
  environment:
    POSTGRES_HOST: db
    REDIS_URL: redis://redis
    REDIS_HOST: redis
  depends_on:
    - db
    - redis
  tty: true # for binding.pry
  stdin_open: true # for binding.pry
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: password
  web:
    <<: *my_app
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    ports:
      - "3000:3000"
  sidekiq:
    <<: *my_app
    command: 'sidekiq -L ./log/sidekiq.log -C ./config/sidekiq-dev.yml -P tmp/pids/sidekiq.pid'
  redis:
    image: 'redis:5-alpine'
    command: redis-server
So as you can see, before defining the services we created a block containing all the configs shared between the rails app and the sidekiq containers. The only differences are the command and the ports, since sidekiq doesn't need any port open, and we couldn't have 2 containers trying to occupy the same port anyway.
And with this addition, I'd consider the docker config done for this project. It still requires more than one command to set up initially because of the database creation/migration/seeding, but that's a step requiring additional scripting on top of docker-compose. Maybe for another article.
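For reference, a typical first run for a compose-based Rails app boils down to something like this (the task names are assumptions; older Rails versions use the rake equivalents):
docker-compose build
docker-compose run web bundle exec rails db:create
docker-compose run web bundle exec rails db:migrate
docker-compose run web bundle exec rails db:seed
docker-compose up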
The commands toolbox
As a parting note, I'd like to list the docker commands I found myself using most often:
- docker-compose up --build // start the app
- docker-compose down // stop it if running in the background
- docker-compose ps // check the status of the currently running containers
- docker-compose run web bash // open a shell in the web container to run one-off commands
- docker images // show the list of built images
- docker image prune // delete unused images since they can take up a lot of hard drive space
- docker ps / docker kill // if docker-compose isn't doing the job
Tip: you may consider creating an alias for docker-compose to avoid typing it every time.
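For example, in your shell profile (the alias name is just a suggestion):
alias dc='docker-compose'
# then: dc up --build, dc down, dc run web bash, ...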