Multiple deployments and High Availability with Mina and Ruby on Rails
David Muñoz
Posted on October 29, 2024
Recently we realized that we needed a High Availability model in our infrastructure for some of our monolithic and microservice projects on Ruby on Rails.
In terms of code delivery, that means deploying the same project version to multiple machines at once.
However, this model is generic to any client-server / monolithic / microservices approach and to any language or framework. In my project I use Mina (formerly Capistrano), which means that on each deployment the script SSHes into the remote machine and performs the deployment process: git clone, git pull, rake db:migrate assets:precompile, puma:restart, etc… Before using Capistrano I was doing all this manually #sigh.
Fortunately, Mina lets you do all this just by setting up the deployment script in the deploy.rb file. The next challenge is how to deploy to many remote machines using that same deployment script (without modifying it each time or running multiple commands).
Mina or Capistrano?
Why do I use Mina instead of Capistrano? I won’t write it here since the folks at Infinum already did a great explanation and comparison:
But even so, this solution would fit a Capistrano deployment script too. See also:
Setting up Rails app in Ubuntu machine (RVM + Sidekiq + Nginx + Puma + Capistrano)
There is already a Mina multi-deploy gem
The folks at Codica solved this problem with the mina-multideploy gem, but I quickly found a problem that wouldn’t let me keep using it. So instead of creating an issue, looking around for a solution and doing a PR… 😅 I did a little scripting myself using Mina tasks and managed to do inline deployments (one at a time, one after the other) with just a few lines of Ruby.
The problem with mina-multideploy is that it uses a parallelism model, and if one deployment fails nothing happens: not even a warning or an error output. That can lead to unwanted results if you are using an HA infrastructure model, since different app versions would be running behind the same load balancer, and that is 🤯 to troubleshoot (trust me, been there).
You can try it yourself by creating a deploy.lock file (using Mina, or manually, in the remote server’s deploy path) on one of the remote machines and then running bundle exec rails multideploy:start to deploy to all the target machines; you will see the deployment succeed, even though one remote machine was deploy-locked 👎
See also:
Setting up Ruby on Rails with RVM, Puma, Mina, Nginx, Sidekiq and Redis on Amazon Linux 2
The solution
This solution is environment-friendly; in our case we will use production as our environment, but we can declare staging too.
Step 1
Let’s assume that you have your deploy.rb with generic settings; mine are:
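Something along these lines — a minimal sketch where the application name, repository, paths and Ruby version are placeholders, and where :domain is intentionally not set because it will be assigned per machine later:

```ruby
require 'mina/rails'
require 'mina/git'
require 'mina/rvm'

set :application_name, 'myapp'                        # placeholder
set :repository, 'git@github.com:youruser/myapp.git'  # placeholder
set :branch, 'main'
set :user, 'deploy'
set :deploy_to, '/home/deploy/myapp'
set :shared_dirs, fetch(:shared_dirs, []).push('log', 'tmp/pids', 'tmp/sockets', 'public/uploads')
set :shared_files, fetch(:shared_files, []).push('config/database.yml', 'config/master.key')

# :domain and :rails_env are NOT set here; they are set per stage / per machine below.

task :remote_environment do
  invoke :'rvm:use', 'ruby-3.2.2' # placeholder Ruby version
end
```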
Step 2
Now simply declare a hash of your desired environments with useful info inside each one. If you wish, you could leave just the urls array; in my case each stage needs to restart a different number of Sidekiq services and should compile assets only in production:
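For example, something like this (the IP addresses, Sidekiq counts and flags are placeholders for your own values):

```ruby
ENVIRONMENTS = {
  production: {
    urls: ['10.0.1.10', '10.0.1.11', '10.0.1.12'], # remote machines behind the load balancer
    sidekiq_services: 2,                           # sidekiq1, sidekiq2, ...
    assets_precompile: true
  },
  staging: {
    urls: ['10.0.2.10'],
    sidekiq_services: 1,
    assets_precompile: false
  }
}.freeze
```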
Step 3
Now let’s declare a task named after each of our environments. If you have more options for each stage, this is the step where you add the variables that will be used later in our deploy task:
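A sketch of those stage tasks, generated from the hash above; running mina production followed by another task name executes :production first, which loads that stage’s settings into Mina’s configuration:

```ruby
ENVIRONMENTS.each do |stage, options|
  task stage do
    set :rails_env, stage.to_s
    set :urls, options[:urls]
    set :sidekiq_services, options[:sidekiq_services]
    set :assets_precompile, options[:assets_precompile]
  end
end
```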
Step 4
Now we simply write our deploy task. Note that we only invoke the assets_precompile task if the assets_precompile variable is set to true. Same with the Sidekiq services, which I have listed on my remote machines as sidekiq1, sidekiq2, … If you don’t have any custom settings per stage, just remove those specific command lines.
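A sketch of that deploy task, following the standard Mina deploy block; the Sidekiq service-name pattern and the puma:restart invocation (e.g. from the mina-puma plugin, or your own restart task) are assumptions to adjust to your setup:

```ruby
task :deploy do
  deploy do
    invoke :'git:clone'
    invoke :'deploy:link_shared_paths'
    invoke :'bundle:install'
    invoke :'rails:db_migrate'
    invoke :'rails:assets_precompile' if fetch(:assets_precompile)
    invoke :'deploy:cleanup'

    on :launch do
      # Restart the app server (assumes a puma:restart task, e.g. from mina-puma)
      invoke :'puma:restart'

      # Restart as many Sidekiq services as this stage declares (sidekiq1, sidekiq2, ...)
      fetch(:sidekiq_services).to_i.times do |i|
        command "sudo systemctl restart sidekiq#{i + 1}"
      end
    end
  end
end
```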
And finally we just create our multideploy task:
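A sketch of the idea: loop over the stage’s machines one at a time, point :domain at each host and invoke :deploy again. The reenable call is there because Rake only runs an invoked task once per process; depending on your Mina version you may not need it.

```ruby
task :multideploy do
  fetch(:urls).each do |url|
    set :domain, url
    invoke :deploy
    # Rake marks invoked tasks as already run; re-enable them so the next
    # machine gets a full deploy as well.
    Rake.application.tasks.each(&:reenable)
  end
end
```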
Now we can just run mina production multideploy and it will:
Loop through the urls array that we declared in our stage settings.
For each URL in the production stage, set the domain and the rails_env.
Invoke the deploy task, which will also run the additional invocations according to the options we declared in the stage setup step.
Bonus
Now, if we want to connect to our console on a specific stage, we can do it by overriding the console task:
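One way to sketch that override in plain Rake, assuming your Mina setup provides the built-in console task (mina/rails does): keep its original actions, clear them from the task, and re-register them behind a step that points :domain at the first machine of the stage.

```ruby
console_actions = Rake::Task['console'].actions.dup
Rake::Task['console'].clear_actions

task :console do |t, args|
  set :domain, fetch(:urls).first
  # Run the original console actions against the machine we just picked
  console_actions.each { |action| action.call(t, args) }
end
```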
It grabs the first value of the urls array and invokes the old reliable console task. Since we are using an HA strategy, all our remote machines should be identical (supposedly). We can now call mina production console.
Putting it all together, our final deployment script is the combination of the snippets above in a single deploy.rb.
Next steps
Since we are looping through an array of remote machine IPs and doing each deployment inline, what happens if one of them fails? The answer is that the deployment will raise an exception and you will get a clear error output from the deployment script. The next step is to implement a way to safely roll back all of our remote machines to a stable version of the project. Please leave any ideas on how to achieve this in the comments.
High Availability with AWS
Now that we have our multiple-deployment strategy working with Mina (also reproducible with Capistrano), we should set up a load balancer. In my case, with AWS, it is just a few clicks away; I will not go deep into this since there is plenty of material out there on how to do it. If you are not in the AWS ecosystem, you can set up your load balancer with any other service, such as Nginx, which lets you build a simple load balancer with a few lines of configuration.
Step 1
Create a target group and register your EC2 instances (the remote machines we deploy to) as targets.
Step 2
Create a load balancer
Select the target group we created in Step 1.
Click on Create and make sure your rules are in place. Once provisioned, you should be ready to test your load balancer URL.
Step 3
Point your app’s DNS record to the load balancer address.
That’s pretty much it!
Final notes
High availability goes beyond having multiple machines balancing all of the requests, but it is a first step you should consider once you need to guarantee uptime, optimize resource usage, and plan failover strategies for your project.
Please leave any ideas, questions, or ways to improve on this topic in the comments.