Marek Stocki
Posted on June 9, 2021
I want to show you how to deploy your app to production at minimal cost and make the deployment process fully automated. If you have never done it before, this post will walk you through it step by step. If you have already deployed some apps, you know that there are always problems, especially when the server is shared by multiple applications. This approach isn't anything innovative: there are many blog posts explaining how to dockerize apps, how to use GitHub Actions, and how to deploy code to a VPS, but this tutorial brings it all together.
Docker
The whole idea is based on Docker images, so the first thing to do is install Docker. You can skip this part if you already have it installed.
Install Docker
For more details check the official site.
# Update the apt package index and install packages to allow apt to use a repository over HTTPS:
$ sudo apt-get update && sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
# Add Docker’s official GPG key
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Update the apt package index and install the latest version of Docker Engine and Containerd
$ sudo apt-get update && sudo apt-get install docker-ce docker-ce-cli containerd.io
# Verify that Docker Engine is installed correctly by running the hello-world image
$ sudo docker run hello-world
Manage Docker as a non-root user
For more details check the official site.
# Create the docker group
$ sudo groupadd docker
# Add your user to the docker group
$ sudo usermod -aG docker $USER
# Activate the changes to groups (only Linux)
$ newgrp docker
# Verify that you can run docker commands without sudo
$ docker run hello-world
Dockerfile
Docker builds images from a Dockerfile - a file with all the commands that are executed during the build. I will show you the simplest version that works. Later I will improve it and shorten the build time. Create a file named Dockerfile in the main app directory.
#1 This is the official Ruby image (https://hub.docker.com/_/ruby) - a complete Linux system with Ruby installed
FROM ruby:3.0.1
#2 Install applications needed for building Rails app
RUN apt-get update && apt-get install -y \
build-essential libpq-dev nodejs zlib1g-dev liblzma-dev
#3 The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it
# If the directory doesn't exist, it will be created
WORKDIR /app
#4 Copy files from current location to image WORKDIR
COPY . .
#5 Install gems in the image
RUN bundle install
#6 Command that will be executed when you run the image
CMD bundle exec rails s -p 3000 -b '0.0.0.0'
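One optional addition before building: the COPY . . step sends the whole project directory to the Docker daemon (the build context is over 86 MB in the output below), so a .dockerignore file next to the Dockerfile keeps the context small and the build cache more stable. An example (just a suggestion, adjust to your project):
# .dockerignore (example content, adjust to your project)
.git
log/*
tmp/cache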
Now let’s test it and create an image with the name rails_app.
$ docker build -t rails_app .
Sending build context to Docker daemon 86.65MB
Step 1/6 : FROM ruby:3.0.1
3.0.1: Pulling from library/ruby
d960726af2be: Pull complete
## part omitted
Status: Downloaded newer image for ruby:3.0.1
---> 9cba361e78fe
Step 2/6 : RUN apt-get update && apt-get install -y build-essential libpq-dev nodejs zlib1g-dev liblzma-dev
---> Running in fa0bce0b6b81
Get:1 http://deb.debian.org/debian buster InRelease [121 kB]
## part omitted
Removing intermediate container 40b752bd0ef3
---> 7d09aa5c9ced
Step 3/6 : WORKDIR /app
---> Running in 427dea58acb0
Removing intermediate container 427dea58acb0
---> 8ed87d4b0643
Step 4/6 : COPY . .
---> 0b3a695a0987
Step 5/6 : RUN bundle install
---> Running in 65a2592eca90
Fetching gem metadata from https://rubygems.org/............
Fetching rake 13.0.3
Installing rake 13.0.3
## part omitted
Removing intermediate container 65a2592eca90
---> 55d9368c4b98
Step 6/6 : CMD bundle exec rails s -p 3000 -b '0.0.0.0'
---> Running in 795356f8553e
Removing intermediate container 795356f8553e
---> 2466c41ac676
Successfully built 2466c41ac676
Successfully tagged rails_app:latest
The image was built successfully. To list the available images you can use this command:
$ docker images
Now it's time to run the container with the application and check if it works.
# The -p param maps ports with the scheme HOST_PORT:CONTAINER_PORT
$ docker run -p 3001:3000 rails_app
Open the browser and go to http://localhost:3001/ - there is a little success, the Rails application is partially working:
The page shows an error, but it comes from Rails, so Rails itself is running. The problem is the database: there must be another container running Postgres and a network connection between the two containers. To achieve this I will use Docker Compose.
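For context, this is roughly what you would have to do by hand without Compose: create a network, start a Postgres container on it, and start the Rails container with the right environment variables. This is only a sketch (it assumes the database config reads the POSTGRES_* variables, which we set up in a moment); the Compose setup below replaces it:
# A manual equivalent, just to illustrate what Docker Compose automates
$ docker network create rails_app_network
$ docker run -d --name database --network rails_app_network \
    -e POSTGRES_PASSWORD=password postgres
$ docker run -p 3001:3000 --network rails_app_network \
    -e POSTGRES_HOST=database -e POSTGRES_USERNAME=postgres \
    -e POSTGRES_PASSWORD=password rails_app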
Docker Compose
Docker Compose is a tool that allows you to run multiple containers and create a network between them. The configuration is stored in a YAML file.
Install Docker Compose
For more details check the official site.
# Download the current stable release of Docker Compose
# To install a different version of Compose, substitute 1.29.2 with the version of Compose you want to use.
$ sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# Apply executable permissions to the binary
$ sudo chmod +x /usr/local/bin/docker-compose
# Test the installation
$ docker-compose --version
Compose config file
Create a file named docker-compose.yaml in the main app directory.
version: "3"
services:
database:
# Official postgres image available in https://hub.docker.com/
image: postgres
# There are many types of volumes, this is a named volume, which will store database in docker directory
# Named volumes must be listed under the top-level volumes key, as shown at bottom of the file
volumes:
- db_data:/var/lib/postgresql/data
environment:
- POSTGRES_PASSWORD=password
web:
image: rails_app
# Command will replace CMD from Dockerfile
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
# Path on the host, relative to the Compose file. 'app' is a WORKDIR name from Dockerfile
# This volume will allow you to run the Rails app with Docker Compose
# and made live changes without rebuilding the image
volumes:
- .:/app
ports:
- "3001:3000"
# 'database' is Postgres service name from the top of the file - it will allow communication between containers
depends_on:
- database
environment:
- POSTGRES_PASSWORD=password
- POSTGRES_USERNAME=postgres
- POSTGRES_HOST=database # it's Postgres service name from the top of the file
volumes:
db_data:
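Before starting anything, you can let Compose validate the file and print the resolved configuration:
# Validate and print the resolved Compose configuration
$ docker-compose config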
Now it's time to run the Rails application and the Postgres database with Docker Compose, but first you must update the Rails database config file, create the database, and run migrations.
# config/database.yml
default: &default
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  username: <%= ENV['POSTGRES_USERNAME'] %>
  password: <%= ENV['POSTGRES_PASSWORD'] %>
  host: <%= ENV['POSTGRES_HOST'] %>

development:
  <<: *default
  database: rails_app_development

test:
  <<: *default
  database: rails_app_test

production:
  <<: *default
  database: rails_app_production
After updating the database config file, the Docker image needs to be rebuilt.
$ docker build -t rails_app .
The first three steps are cached, but any change in the application directory invalidates the cache from the COPY step onward, so all the gems are installed again. I will show you later how to avoid this and make better use of the cache.
Now start the containers and, from another terminal window, run a command to create the database.
# Run command and leave it running
$ docker-compose up
# From another terminal window
$ docker-compose run web rake db:create db:migrate
Open the browser, go to http://localhost:3001/ and... you have just run the Rails app with Docker.
VPS
The next piece of the puzzle is a VPS - the place where you deploy the application. There are many companies providing cloud servers, and which one you choose is up to you. I will show you an example based on a server with Ubuntu. Just like on your local machine, first install Docker and Docker Compose on the VPS using the steps from the beginning of this post. You will also need two additional non-root users: nginx_proxy and rails_app.
$ sudo adduser nginx_proxy
$ sudo adduser rails_app
# Add new users to the docker group
$ sudo usermod -aG docker nginx_proxy
$ sudo usermod -aG docker rails_app
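It is worth checking that both users can actually talk to the Docker daemon before going further (a quick sanity check; the group change only applies to new login sessions):
# Switch to the new user and check Docker access
$ su - rails_app
$ docker ps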
HTTP server
For the HTTP server I will use NGINX with the awesome nginx-proxy application and acme-companion for automatic SSL certificate generation. Connect to the server as the nginx_proxy user and create two files: docker-compose.yaml and nginx_custom.conf.
$ cd ~ && touch docker-compose.yaml nginx_custom.conf
I will show you the basic configuration of these two applications. For more details check their documentation at the links above.
# docker-compose.yaml
version: '3.9'
services:
  nginx-proxy:
    restart: always
    image: nginxproxy/nginx-proxy
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - dhparam:/etc/nginx/dhparam
      - certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx_custom.conf:/etc/nginx/conf.d/nginx_custom.conf
    networks:
      nginx-proxy-network:
  letsencrypt:
    restart: always
    image: nginxproxy/acme-companion
    container_name: nginx-proxy-acme
    volumes_from:
      - nginx-proxy
    volumes:
      - certs:/etc/nginx/certs:rw
      - acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      nginx-proxy-network:
volumes:
  conf:
  vhost:
  html:
  dhparam:
  certs:
  acme:
networks:
  nginx-proxy-network:
    name: "nginx-proxy-network"
# nginx_custom.conf
# here you can customize NGINX
server_tokens off;
client_max_body_size 100m;
When these two files are created and filled with content, let's run the NGINX server as a daemon (the -d flag):
$ docker-compose up -d
And that's all you need to do with the HTTP server - it will handle every new Rails application on your server based on a few ENV variables that you will add to the Rails app's docker-compose file.
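Before moving on, you can confirm the proxy responds. With no application registered yet, nginx-proxy should answer any request with a 503 (this is just a sanity check):
# Both containers should be up
$ docker-compose ps
# nginx-proxy answers with 503 until an app with VIRTUAL_HOST is running
$ curl -I http://localhost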
Rails app - Production Docker Compose file
Let's connect to the server as the rails_app user. You must create two files on the server, docker-compose.yaml and .env, and copy the content below into them.
$ cd ~ && touch docker-compose.yaml .env
In the production version you must pass more ENV variables, so let's store them in a separate file. Also remember that every file created inside the container during the app's life cycle will be lost when a new version of the image is released, so files such as ActiveStorage uploads or logs need to be stored outside of the container.
version: "3"
services:
database:
# restart docker container when there will be a crash
restart: always
image: postgres
volumes:
- db_data:/var/lib/postgresql/data
# instead of environment let's use the env file
env_file: .env
web:
restart: always
image: rails_app
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
env_file: .env
environment:
- VIRTUAL_HOST=your_dns_for_rails_app.com # it will allow nginx-proxy to redirect HTTP request to your Rails app
# LETSENCRYPT variables are used by acme-companion and it will create SSL certificate for those params
- LETSENCRYPT_HOST=your_dns_for_rails_app.com
- LETSENCRYPT_EMAIL=some_user@your_dns_for_rails_app.com
volumes:
- ./storage:/app/storage # store ActiveStorage files in `storage` directory
- ./log:/app/log # store logs in `log` directory
ports:
- 3001:3000
depends_on:
- database
volumes:
db_data:
networks:
default:
external:
name: nginx-proxy-network
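One small detail worth handling up front: ./storage and ./log are bind mounts, and if the directories don't exist Docker will create them owned by root. You can create them yourself as the rails_app user first (optional, but it avoids permission surprises):
# Create the bind-mounted directories in the rails_app home directory
$ mkdir -p ~/storage ~/log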
Example env file:
POSTGRES_PASSWORD=password
POSTGRES_USERNAME=postgres
POSTGRES_HOST=database
RAILS_ENV=production
SECRET_KEY_BASE=some_secret_key
RAILS_LOG_TO_STDOUT=true
RAILS_SERVE_STATIC_FILES=true
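Don't reuse the placeholder value for SECRET_KEY_BASE; Rails can generate a proper random key for you. One way (a suggestion, using the development Compose setup from earlier) is to run the generator locally and paste the result into the .env file on the server:
# Generate a random value for SECRET_KEY_BASE (run locally, copy the output into .env)
$ docker-compose run web rails secret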
Rails app - Production Dockerfile
The main difference is the need to precompile assets for the production environment. With Rails and Webpacker this also requires Yarn. Let's update the Dockerfile to handle it and to fix gem caching.
FROM ruby:3.0.1
# add yarn to apt list
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
&& echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
# add yarn to installed apps
RUN apt-get update && apt-get install -y \
build-essential libpq-dev nodejs zlib1g-dev liblzma-dev yarn
WORKDIR /app
# Copy Gemfile and Gemfile.lock and install gems before copying the rest of the application,
# so these layers stay cached until the Gemfile changes
COPY Gemfile* ./
RUN bundle install
COPY . .
# precompile assets with temporary secret key base
RUN SECRET_KEY_BASE="assets_compile" RAILS_ENV=production bundle exec rake assets:precompile
CMD bundle exec rails s -p 3000 -b '0.0.0.0'
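With this layout the gem installation layer is reused as long as the Gemfile and Gemfile.lock don't change. You can see it by rebuilding after editing only application code; the bundle install step should be taken from the cache:
# Rebuild after changing only application code
$ docker build -t rails_app .
# The 'RUN bundle install' step should now report '---> Using cache'
# instead of fetching all the gems again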
GitHub Actions
When the production Rails app in a Docker image is fully working and the VPS is ready, it's time to build the image with GitHub Actions and store it in the GitHub Container Registry. Before I show you the config file, there are a few things to do in GitHub.
- GitHub Container Registry (GHCR) is in an experimental state, so you must first enable that feature (follow the linked tutorial).
- The second thing needed is a token that allows you to log in to GHCR - see the tutorial (select two scopes: write:packages and delete:packages).
- Create repository secrets. Go to your repository -> Settings -> Secrets, click New repository secret and create two secrets: CR_PAT with the GHCR token and VPS_PASSWORD with the password for the rails_app user.
Then log in to your server as the rails_app user and edit the .bashrc file. Add this line at the end of the file:
export CR_PAT=<your GHCR token>
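You can verify the token before relying on the workflow: reload the shell configuration on the server and try logging in to GHCR manually (the same command the deploy script will run):
# Reload .bashrc and test the GHCR login by hand
$ source ~/.bashrc
$ echo $CR_PAT | docker login ghcr.io -u <YOUR GITHUB LOGIN> --password-stdin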
In your project create a file at the path .github/workflows/deploy.yml
name: Deploy
on:
  push:
    branches:
      # Run the deploy job on every push to the master branch
      - master
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v2
      -
        name: Login to GitHub Container Registry
        run: echo ${{ secrets.CR_PAT }} | docker login ghcr.io -u <YOUR GITHUB LOGIN> --password-stdin
      -
        name: Pull image to use as a cache
        run: docker pull ghcr.io/<YOUR GITHUB LOGIN>/rails_app:latest || exit 0
      -
        name: Build Docker image
        run: docker build . --cache-from ghcr.io/<YOUR GITHUB LOGIN>/rails_app:latest --tag ghcr.io/<YOUR GITHUB LOGIN>/rails_app:latest
      -
        name: Push the image to GitHub Container Registry
        run: docker push ghcr.io/<YOUR GITHUB LOGIN>/rails_app:latest
      -
        name: VPS - pull image and run app containers
        uses: appleboy/ssh-action@master
        with:
          host: <your-server-ip>
          username: rails_app
          password: ${{ secrets.VPS_PASSWORD }}
          script: |
            echo $CR_PAT | docker login ghcr.io -u <YOUR GITHUB LOGIN> --password-stdin
            docker-compose pull web
            docker-compose up -d --no-deps
After the first successful deploy, log in to your server as rails_app and create the database with this command:
$ docker-compose run web rake db:create db:migrate
The last improvement
The final touch to make the deployment fully automated is a migration script. Create a file named docker-entrypoint.sh in your project's main directory and paste in the content below.
#!/bin/sh
set -e

if [ -f tmp/pids/server.pid ]; then
  rm tmp/pids/server.pid
fi

bundle exec rails db:migrate 2>/dev/null
exec bundle exec "$@"
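The script must be executable, otherwise Docker will refuse to start the container with a permission error. Set the executable bit before committing the file (Git preserves it):
$ chmod +x docker-entrypoint.sh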
A few changes are then needed in the Dockerfile:
FROM ruby:3.0.1
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
&& echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update && apt-get install -y \
build-essential libpq-dev nodejs zlib1g-dev liblzma-dev yarn
WORKDIR /app
COPY Gemfile* ./
RUN bundle install
COPY . .
RUN SECRET_KEY_BASE="assets_compile" RAILS_ENV=production bundle exec rake assets:precompile
# Add entrypoint script to handle migrations
ENTRYPOINT [ "./docker-entrypoint.sh" ]
EXPOSE 3000
CMD ["rails", "server", "-b", "0.0.0.0"]
You have just created a fully working continuous deployment pipeline. You no longer have to worry about errors on your local machine or problems with your internet connection. Just write your code, push a commit, and the rest is magic. Below are some useful commands that may help you.
# view logs from Postgres
$ docker-compose logs -f database
# view logs from Rails
$ docker-compose logs -f web
# run Rails console inside Docker container
$ docker-compose run web rails c
# list available images
$ docker images
# list running containers
$ docker ps
# stop containers
$ docker-compose down
# remove old images/containers
$ docker system prune