Deploying a containerized app in production simply


Nathanaël CHERRIER

Posted on December 13, 2021


We developers, even if it's not our main job, often need to manage servers, VPSes, or other infrastructure for our projects. It's rarely the case in bigger companies, but in startups or on personal projects we frequently have to manage it all ourselves.

⚠️ Read more of my blog posts about tech and business on my personal blog! ⚠️

Honestly, I like to manage every aspect of a project! Thinking about and deploying the necessary infrastructure so everything works is, in my opinion, part of the design and development of the project.

While it's straightforward for a developer to rent a server or a VPS from a traditional provider and install a Docker environment on it, understanding how things work in the cloud can be a bit trickier. We can easily draw a parallel between a VPS and our own machine: it's just another machine somewhere else on Earth, and it works exactly like ours, no difference.

The cloud brings its own new vocabulary, new tools and new concepts. What does my project run on? Do I build my containers in a container? What's the use of Kubernetes? For developers used to traditional machines, the cloud can be a bit blurry.

This is why I'm writing this post: I'm going to try to demystify it all a little bit. We're going to use a simple example: my blog. I am going to show you how to deploy a simple Docker app to the cloud. And, to switch things up a bit, we are not going to talk about Google Cloud Platform or Amazon Web Services here. In this post, we will use Hidora, a European cloud platform whose data centers are in Switzerland.

The Docker development environment we're used to installing on our machines is quite simple. After installing Docker, we can create a new image for our project or use existing images.
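For example, a minimal docker-compose.yml that runs the official Ghost image locally could look like the sketch below; the tag, port, and URL are only illustrative:

```yaml
# Minimal local setup based on an existing image (all values are illustrative)
services:
  ghost:
    image: ghost:4                 # official Ghost image from Docker Hub
    ports:
      - "2368:2368"                # Ghost listens on port 2368 by default
    environment:
      url: http://localhost:2368   # public URL Ghost should generate links with
```

A `docker-compose up` later and the blog is running locally.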

When you want to deploy this same project in production, you have a few more possibilities. It's not a bad thing, but you've got to know where you're going with it.

Preconfigured Docker environments

My blog uses a CMS called Ghost. It's coded in JavaScript; more specifically, it runs on a Node.js environment. Here is what the cloud provider Hidora offers for this type of application (I'll talk more about the interface too, so it's easier to understand).


On the left, we can see our containers. There is already a Node.js container because it's the option that is selected in the top bar. There are also other spaces for other specific types of applications.

This way you can add the necessary containers for your project very easily. For example, to add an NGINX frontend to our Node.js app, we can click on “équilibrage” (“load balancing”) and select the NGINX version we need.


In the center section, we have all the parameters of the selected container. Hidora lets us change the vertical scalability, which is the power allocated to our container, and the horizontal scalability, which is the number of containers we want to start simultaneously.


For each container, you can: manage the storage it requires, the version of the Docker image, and the time it has to wait on reboot (in case it has to wait for another container that takes longer to start). We can also choose whether the container will have a public IP address or whether it'll have to go through the platform's Shared Load Balancer.


In the last part, on the right, we can see the price of the environment we've created. If you hover over it, details will be displayed.

Personalized Docker environments

By using the previous solution for my blog, I'd have to install Ghost in the Node.js container myself. And I don't really want to do that: it would add more constraints during maintenance.

It's not a huge problem because cloud providers offer several flexible options for our Docker architecture. Hidora does too.


We can, for example, use Kubernetes for our architecture. A couple of advantages of doing this are:

  • use a tool you're already familiar with
  • manage our cloud architecture locally with the classic kubectl or helm commands

I'm going to assume that some of you don't know Kubernetes, and because I want this post to be understandable by everyone, we're not going to go deeper into it here. In the same vein, you can point to a Docker Compose file and deploy an environment from it.
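To give you an idea, here is a hedged sketch of what such a Compose file could look like for a blog like mine; the service names, tags and domain are illustrative, not the exact file Hidora expects:

```yaml
# Sketch of the target environment: NGINX in front, Ghost behind (illustrative)
services:
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro  # config that proxies requests to http://ghost:2368
    depends_on:
      - ghost

  ghost:
    image: ghost:4
    # no "ports" entry: the container is only reachable through NGINX
    environment:
      url: https://example.com                          # placeholder domain
    volumes:
      - ghost-content:/var/lib/ghost/content            # persistent data: database, images, themes

volumes:
  ghost-content:
```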


The solution I've chosen is the one I call the “blank page”. Hidora allows us to build an environment from scratch using whatever Docker images we want.


We're facing the same interface as before, but no pre-configuration has been applied to the environment. It still looks familiar.

Installing my application

Here is a step-by-step of how I built this test environment. The goal is to build a copy of my blog using the Ghost CMS.

Proxy

I'll keep NGINX as the entry point for my environment.

It's good practice not to expose a Node.js app directly. To avoid security breaches, you'd better route the traffic through a proxy like NGINX.

NGINX won't have much to do. Even though everything will go through it, it will only forward traffic and handle decryption. It's quite optimized, so I only reserved one cloudlet. I'll still allow the system to give it two in case it needs more.


Another detail concerning the storage allocated to NGINX: I gave it the minimum Hidora allows, because this container will only hold NGINX and its config. Five gigabytes is already a bit too much. It allows us to lower the maximum cost of this environment.

I activated direct access via IPv4. It's the entry point of our environment. This is where we'll redirect our domain name later.

Application

In “App. Serveurs”, select the tag of the Ghost Docker image. Make sure it's the latest version. You will be able to change it later anyway if needed.
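In Compose terms, that's just the tag on the image line. Pinning an explicit version (the one below is only an example) keeps upgrades deliberate instead of accidental:

```yaml
services:
  ghost:
    image: ghost:4   # example: major-version tag; pin a more specific tag if you want exact control
```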


Because it's the element of our environment that will respond to users, I chose to give it two cloudlets and accept vertical scalability of up to 4 cloudlets. For horizontal scalability, I'll keep a single container, like NGINX.

Let's focus on the scalability for a minute, so it's clear for everyone.


Horizontal scalability consists of increasing the number of instances of the application server (container, server, VPS, etc.) that run the same application. It's a duplication of the application server, along with a decision on how data will be shared between the different instances.

Vertical scalability concerns the power allocated to each instance. It's expressed in cloudlets, each worth 128 MiB of RAM and 400 MHz of CPU. Depending on the power your application will need, you can reserve the corresponding number of cloudlets. I already know the Ghost CMS requires more than 128 MiB, so I reserve two cloudlets (256 MiB and 800 MHz).

To absorb occasional load peaks, you can set a higher scalability limit. As I mentioned above, I set a limit of 4 cloudlets, which means I'll pay for 2 cloudlets, but if Hidora sees my app needs more power, it will grant it up to that limit of 4 cloudlets. It lets us stay flexible depending on the situation while keeping our budget in check.

Be careful, though: reserved cloudlets aren't priced the same as flexible cloudlets. This is why I reserve two cloudlets for Ghost. I could have reserved just one and let the platform manage the rest, but reserved cloudlets are cheaper. If you know in advance how much power your app will need, it's best to reserve the corresponding number of cloudlets.

Of course, you can always change the topology of your environment later. If you see that one of your containers is often using flexible cloudlets (maybe because your app is more successful than anticipated), you can add more reserved cloudlets to lower the price. Flexible cloudlets should only be used for unplanned events.

Now that you know everything about scalability, let's move on. The last thing I wanted to mention on the application side is that I disabled the “Access via SLB” option. The SLB is Hidora's Shared Load Balancer: very useful for tests, but it gives direct access to the Ghost container, and we don't want that in production. The traffic must go through NGINX.

Storage and database

By default, Ghost in production uses MySQL as its database, but the containerized version prefers SQLite. The database is written to disk.

This leads us to the storage of our data. As the Ghost Docker image defines a volume for its data (including the database, images, themes, etc.), Hidora automatically creates a bind mount to the local file system. That means it creates a link between the container and the local file system, so the data is persistent and doesn't disappear when a container is deleted.
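On a plain Docker host, the equivalent of what Hidora sets up automatically would be a bind mount like this one; the host path is just an example:

```yaml
services:
  ghost:
    image: ghost:4
    volumes:
      # Bind mount: a host directory is mapped onto the volume declared by the image,
      # so the SQLite database, images and themes survive container recreation
      - /srv/ghost/content:/var/lib/ghost/content
```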

For a small application like mine, this default bind mount is more than enough; we're not going to do anything more. But imagine we want to replicate our data on a storage cluster or, more simply, share the data between our containers.


For this, you'd have to create a node for a Shared Storage. Then, in the configuration of every container that needs to share the data, you'd click the “volumes” button and modify the volume to make it point to the new node.
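If you're more used to plain Docker Compose, the closest analogue to that shared storage node is a named volume mounted by several services; the second service below is purely illustrative:

```yaml
services:
  ghost:
    image: ghost:4
    volumes:
      - shared-content:/var/lib/ghost/content            # same data as before, now on a shared volume

  backup:
    image: alpine:3
    command: tar czf /backup/content.tar.gz -C /data .   # illustrative consumer of the shared data
    volumes:
      - shared-content:/data:ro
      - ./backups:/backup

volumes:
  shared-content:   # a single volume both containers mount
```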


Other methods exist to make containers communicate with each other. We won't list them here because it's not the purpose of this post. I just wanted to show you one way to do it, so you know it's possible.

Same thing for the database: it's also possible to configure Ghost so that it uses one external to its container. You'd have to create a new node for it.
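For reference, Ghost can be pointed at an external MySQL database through environment variables (the double underscores map to Ghost's nested configuration keys); the service names and credentials below are placeholders:

```yaml
services:
  ghost:
    image: ghost:4
    environment:
      database__client: mysql
      database__connection__host: db               # the database node/container
      database__connection__user: ghost
      database__connection__password: change-me    # placeholder
      database__connection__database: ghost
    depends_on:
      - db

  db:
    image: mysql:8
    environment:
      MYSQL_DATABASE: ghost
      MYSQL_USER: ghost
      MYSQL_PASSWORD: change-me                    # placeholder
      MYSQL_RANDOM_ROOT_PASSWORD: "1"              # the root password is generated, not reused
```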


That's it for me. I didn't tell you everything, so you have a lot more to discover by yourselves.

Who is Hidora?

A few words about Hidora to wrap this up. I discovered this Swiss company while I was researching cloud providers and their solutions.

First interesting point: Hidora is Swiss and hosts its data in Switzerland. It means your data stays on European ground, away from the Patriot Act and what it implies. Like every company working with European customers, Hidora is GDPR-compliant.

Before writing this post, I tested the platform for a little over a month with my own projects. The interface is familiar: a lot of cloud providers use Jelastic as their PaaS cloud solution. And I have to say that Hidora's platform seems to be very stable.

Hidora was created by Mathieu ROBIN; I've been able to talk to him and two other co-founders. They're passionate people trying to bring simple solutions to developers while also thinking about all the problems we may encounter in our jobs (GAFA's omnipresence in the cloud landscape, etc.). It's a suitable alternative to GCP, AWS and the others.
