Our deployment strategy for on-premise customers (in China)
Michiel Sikkes
Posted on September 10, 2020
Last year (2019), one of our customers asked if it was possible to run our full recurring commerce platform to sell product subscriptions in China. That sounded like a great challenge. Hosting something in China is not as straightforward as hosting something for the rest of the world.
In this article, I'll show you how we've adapted our deployment pipeline to support on-premise deployments for our customers. I'm taking China as an example because it comes with some unique challenges, but the same approach can be used to set up an on-premise deployment for your application on any type of infrastructure.
What's up with China?
Doing business in mainland China is fully locked down for any non-Chinese company. To do business in China, you need a domestic business license for a specific product category, and you need to deal with The Great Firewall. The Great Firewall causes your connection to be really slow or unavailable when you're hosting outside of mainland China. So getting local infrastructure in mainland China is kind of critical.
Unfortunately, hosting or deploying apps on mainland China servers is also fully locked down and only accessible for Chinese citizens and companies. So that's what we were facing.
We had to set up an on-premise environment of our platform in our client's hosting account on Aliyun (Alibaba Cloud), and in doing so, make sure our existing deployment processes and hosting tooling would still work.
Our approach
This is the deployment approach we ended up with. Below, I'll explain the configuration and setup of each important component and provide some example snippets.
On every PR merge into our main branch on GitHub, the following steps are taken:
- CircleCI runs our tests, security checks, dependency scanners, etc.
- On a successful build, CircleCI kicks off a special production Docker image build.
- CircleCI uploads the new production Docker image to a repository on Docker Hub. The Docker image is tagged with the Git commit SHA.
- A developer logs into the on-premise environment and deploys the image via Dokku's container-based deployment. The Git commit SHA is used to identify the release to deploy.
- Dokku runs its regular deployment steps, application restarts, asset compilation, migrations, etc.
- New release is live!
Server and database setup
The deployment process is pretty independent of the final server or database setup. All you need to be able to do is run container images and have your various database services available.
Here's what we used in China on Aliyun (Alibaba Cloud). Parts can easily be adapted or replaced with any other cloud infrastructure or local component via Dokku plugins.
- Aliyun Elastic Compute Service instance with Ubuntu LTS.
- Aliyun ApsaraDB RDS for PostgreSQL.
- Aliyun ApsaraDB for Redis.
- Terraform and Ansible for creating and provisioning the servers with our standard server setup.
- Dokku as on-server hosting and deployment platform.
As you can see, we used Aliyun Cloud's managed services for PostgreSQL and Redis. However, you can change this up if you're deploying to an on-premise environment on some kind of (virtual) server in a datacenter somewhere.
You can for example use Dokku's Redis and PostgreSQL plugins. These plugins let you run Redis and PostgreSQL on the same server your application container runs on. They also make sure that only your application can access these services and that, by default, they are not reachable from the public internet.
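As a rough sketch (the app and service names below are placeholders, not our actual setup), installing these plugins and wiring them up to an app looks something like this:

```bash
# Install the official Dokku plugins (run as root on the Dokku host)
sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git postgres
sudo dokku plugin:install https://github.com/dokku/dokku-redis.git redis

# Create the services and link them to the app; linking exposes the
# connection details (DATABASE_URL, REDIS_URL) to the app's environment
dokku postgres:create myapp-db
dokku postgres:link myapp-db myapp
dokku redis:create myapp-redis
dokku redis:link myapp-redis myapp
```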
Dokku with container-based deploys
We're quite big fans of Dokku for relatively simple on-premise deployments. Dokku is very well supported, easy to set up, and you have your apps running in no time.
Dokku takes care of deployment access control, deploy and migration steps, versions, scaling, webserver hosting, etc. It also has great plugins for backups, various databases, and other components you might need. You can configure ENV vars per Dokku-managed application so that you can set configuration settings and database connections at runtime.
Dokku is your own little mini app deployment Platform as a Service running on your own infrastructure. Dokku's documentation walks you through deploying your app.
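Setting that runtime configuration for a Dokku-managed app is a one-liner, for example (the app name and values below are just placeholders):

```bash
# Set runtime configuration on the app; Dokku restarts it with the new env
dokku config:set myapp RAILS_ENV=production SECRET_KEY_BASE=changeme

# Inspect the app's current environment
dokku config:show myapp
```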
With Dokku, you can either deploy your apps using the git push deployment strategy, or have it deploy your Docker containers. At first, we used the git push deployment method. Later we switched to a Docker container-based deploy method.
Why we don't use "git push" for deployment
One problem with the git push method is that every on-premise server your software runs on ends up with a slightly different build of your app, even if it's based on the same commit. This is because git push in Dokku will build a new container image on every deploy on every server. So you cannot be certain that your application image is exactly the same on each on-premise environment you manage.
In addition, the Docker container build is triggered for every deploy on every server, and deploying an app can be quite CPU-intensive for a short period of time. You then risk pulling down your production app if you do not have access to a large enough server.
For China we had an additional problem with the git push method, related to The Great Firewall. The internet connection from Europe to China is very unreliable and/or very slow. It could sometimes take hours to deploy a single commit, as our codebase would have to be pushed to the server in China. Dokku also needs to download a lot of images and dependencies during a deploy. We'd see connections stalling, being paused for hours, or simply timing out.
Switching to Docker image-based deployment
So all these problems with git push-based deployment resulted in us switching to Docker image-based deployments with Dokku. It was definitely more work to set up, but in the end it resulted in a much smoother and faster deployment process.
The Dokku documentation can tell you how to use Docker images for your deployments.
After switching, a deployment from our side basically came down to running a handful of commands on the on-premise server.
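In sketch form, with placeholder app and image names (the exact subcommand depends on your Dokku version), it looks like this:

```bash
# Pull the production image that CI pushed to Docker Hub,
# identified by its Git commit SHA
docker pull ourcompany/ourapp:abc123def

# Re-tag it into Dokku's local naming scheme and deploy that tag
docker tag ourcompany/ourapp:abc123def dokku/ourapp:abc123def
dokku tags:deploy ourapp abc123def
```

On more recent Dokku versions, dokku git:from-image replaces the tag-and-deploy steps with a single command.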
Building the production Docker image
CircleCI is our CI of choice: it runs our test suites, builds our Docker containers, and sometimes even deploys our apps straight away.
Here are a few configuration snippets showing how we set things up on CircleCI to build a Docker image and push it to our Docker Hub account.
CircleCI configuration for building and pushing the image
We have a special build step in our CircleCI workflow that builds our production image and then pushes it to Docker Hub.
For extra security we have a separate Docker Hub user for every repository so that we can easily revoke access from CircleCI in the case of a breach.
Here are the relevant parts from our circle.yml configuration. This file lives in our application codebase and is automatically picked up by CircleCI on every push to the repository on GitHub.
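The snippet below is a simplified sketch rather than our exact configuration: the job name, base image, image names, and the DOCKERHUB_USER/DOCKERHUB_PASS environment variables are placeholders.

```yaml
version: 2.1

jobs:
  build_production_image:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      # Gives this job access to a remote Docker daemon for building images
      - setup_remote_docker
      - run:
          name: Build production image
          command: docker build -f Dockerfile-production -t ourcompany/ourapp:$CIRCLE_SHA1 .
      - run:
          name: Push image to Docker Hub
          command: |
            echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USER" --password-stdin
            docker push ourcompany/ourapp:$CIRCLE_SHA1

workflows:
  build_and_deploy:
    jobs:
      # In the real workflow this job only runs after the test jobs succeed
      # (via "requires") and only on the main branch
      - build_production_image:
          filters:
            branches:
              only: main
```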
Dockerfile for production
We have a Dockerfile-production in our codebase that is used specifically for building the image to be deployed to production. It uses the officially supported Ruby base images with Alpine as the base distribution. It is also set up as a multi-stage build so that we don't leave any development/build dependencies in the final image.
You'll notice some Ruby on Rails-specific bits in here. Those can be taken out or replaced with what's needed for your framework.
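Here's a trimmed-down sketch of what such a Dockerfile-production can look like; the Ruby and Alpine versions, packages, and commands are illustrative rather than our exact file.

```dockerfile
# ---- Build stage: build tools, gems, and precompiled assets ----
FROM ruby:2.7-alpine AS builder

RUN apk add --no-cache build-base postgresql-dev git nodejs yarn tzdata

WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle config set without 'development test' && bundle install --jobs 4

COPY . .
# Rails-specific: precompile assets so the final image can serve them
RUN RAILS_ENV=production SECRET_KEY_BASE=dummy bundle exec rake assets:precompile

# ---- Final stage: only runtime dependencies and the built app ----
FROM ruby:2.7-alpine

RUN apk add --no-cache postgresql-libs tzdata

WORKDIR /app
# Copy the installed gems and the prepared application from the build stage
COPY --from=builder /usr/local/bundle /usr/local/bundle
COPY --from=builder /app /app

ENV RAILS_ENV=production RAILS_LOG_TO_STDOUT=1
EXPOSE 3000
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
```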
Docker Hub for hosting our images
We currently use Docker Hub for hosting our container images. For additional security, each application has its own Docker Hub repository and its own user account, and those are the credentials we put in CircleCI.
A pretty decent on-premise deployment mechanism
For us, this is a pretty decent on-premise deployment mechanism. We don't do many on-premise setups anymore as this is truly an exceptional enterprise customer requirement.
Our main (European) platform runs on Heroku, and we leverage all their nice features to deploy and scale our platform.
However, having the setup described in this article in place allows us to very easily add any on-premise environments if required by our customers. Since it's based on a container image it is also quite easy to make a scalable version out of this on a Kubernetes cluster.
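As an illustration (not something we run in production; the names, replica count, and image tag are placeholders), the same image could be wired into a Kubernetes Deployment like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ourapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ourapp
  template:
    metadata:
      labels:
        app: ourapp
    spec:
      containers:
        - name: ourapp
          # The same image CI pushed to Docker Hub, pinned to a commit SHA
          image: ourcompany/ourapp:abc123def
          ports:
            - containerPort: 3000
          envFrom:
            # Runtime configuration (DATABASE_URL, REDIS_URL, ...) from a Secret
            - secretRef:
                name: ourapp-env
```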
Happy to answer any of your questions about this setup!