Containers Explained (By a Non-Developer)
Kaylan Stock
Posted on December 21, 2020
Happy Holidays! I wanted to squeeze in one more explained blog before the end of the year, and I picked a doozy to finish out 2020. Let’s take a look at the mysterious, the elusive, yet amazing THING that is containers (yup, calling it a thing). When I thought about containers before I did my research, my first thought was Docker and their cute blue whale. Then I thought of a Tupperware container with a few carved-out sections (think bento box), holding a different type of food in each section, and when you open that Tupperware and eat all the food, you get a cohesive meal that works together to power the person eating it. Are you ready for your lunch break now?
Containers History
So did anyone else think containers were relatively new, or was that just me? Welp, news flash, they are older than we all thought! I found an article dating them all the way back to 1979, when Unix V7 came along. Unix V7 introduced the chroot system call, which let users change the root directory of a process (and its children) to a new location in the filesystem. That was the beginning of process isolation: a simple and effective way to keep processes out of each other’s business, and a core component of what would become containerization. Examples of process isolation include preventing process A from accessing the memory used by process B, and preventing process B from opening a file that process A has opened. So Process A and B can never be together… and now I feel like I’m writing the next Romeo and Juliet story here.
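To make that a little more concrete, here’s a rough sketch of what a chroot looks like in practice. The directory and library paths are just illustrative Linux examples, not a recipe: the idea is that the shell you start at the end thinks a plain old directory is the root of the whole filesystem.

```bash
# Rough sketch (Linux paths are illustrative): build a tiny root directory
# and start a shell whose "/" is that directory, so it can't see anything else.
sudo mkdir -p /srv/jail/bin /srv/jail/lib /srv/jail/lib64
sudo cp /bin/sh /srv/jail/bin/                          # a shell to run inside the new root
sudo cp /lib/x86_64-linux-gnu/libc.so.6 /srv/jail/lib/  # plus the libraries that shell needs
sudo cp /lib64/ld-linux-x86-64.so.2 /srv/jail/lib64/
sudo chroot /srv/jail /bin/sh                           # this shell's root is now /srv/jail
```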
In 2000, the tech world was booming. It was young, but finding its footing on a pathway to greatness. Some of you may know this time as the “dot-com boom.” That year, the first implementation of containers became available in a new FreeBSD mechanism called jails. Jails had three goals: virtualization, security, and ease of delegation. Ease of delegation in this case means the ability for system admins to assign and delegate tasks that require superuser access without handing out complete control over the system. Virtualization means that each jail is a virtual environment running on the host machine with its own files, processes, and user and superuser accounts, segregating each environment so they are completely isolated from each other. “From within a jailed process, the environment is almost indistinguishable from a real system.” That last bit is a direct quote from the jails Wikipedia article.
By 2001, a version of containers was available on Linux via the VServer project. Created by Jacques Gélinas, this solution allowed users to run multiple Linux servers on a single box while maintaining security and independence. From there, containers incrementally got better and better, and a wider variety of products hopped onto the market, such as Solaris Containers, OpenVZ, Process Containers, LXC, Warden, and Let Me Contain That For You (love this name!). In 2013, the one and only little blue whale appeared on the scene, and Docker has been the dominant container company ever since. If you want to know a bit more about the listed technologies, you can check them out in this nicely written summary.
What is a Container Anyway?
So who should I turn to in order to better understand the mysterious technology that is a container? Docker, of course! My initial description of containers wasn’t completely off base. Docker calls container software a package; I personally like my bento box analogy better, but I am what some may call a “foodie,” so I understand things better when they’re described in food terms. Here is Docker’s definition:
“A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.”
Image Source: https://www.docker.com/resources/what-container

Let’s break this definition down. “Unit of software” in this case means that the container is one technology, one singular THING. Oops, I used thing again. The way I see it is that while some applications can have multiple aspects to them, containers do not. This becomes more relevant when we add “packages up code and all its dependencies.” So back to my bento box analogy. The container is a single dish. The application is built using multiple containers interacting together, just like a bento box is built from multiple dishes. The difference from my initial stab at containers is that it’s not really about the food in the bento box, it’s that when you have your food in the box, you can take it with you easily wherever you want. It’s also about making sure the food doesn’t touch… You don’t want your soup mixed in with your rice.
Containers hold everything needed to make your application run in an isolated environment so that you can take the app across different computing environments with ease. What’s faster? Taking your food, packaging it up, and then unpacking it onto a new plate at the next location… or putting your food in your bento box and leaving it organized in that container everywhere you go. Okay now my bento box analogy is starting to REACCHHH, but hey I’m rolling with it.
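If you’re curious what “packages up code and all its dependencies” actually looks like, here’s a minimal sketch. The app name, base image, and files are all hypothetical; the point is that the runtime, the dependencies, and the code all travel together in one image that runs the same way anywhere Docker runs.

```bash
# Minimal sketch (hypothetical app and file names): the Dockerfile describes the
# whole "bento box" -- runtime, dependencies, and code -- in one place.
cat > Dockerfile <<'EOF'
# The base image is the runtime the app depends on
FROM node:14-alpine
WORKDIR /app
# Dependencies get baked into the image
COPY package*.json ./
RUN npm install
# Then the application code itself
COPY . .
CMD ["node", "server.js"]
EOF

docker build -t my-bento-app .            # package it all up into one image
docker run -d -p 3000:3000 my-bento-app   # run it the same way on any machine with Docker
```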
Let me also break down what Docker’s definition means by “computing environment.” When a computer solves a problem, it uses a bunch of devices and software, arranged and configured in a particular way, that work together to solve that problem. That collection of pieces makes up your environment. You may use a personal environment on your laptop, but have a different environment on your work computer, and then maybe a different one for the client you are coding the app for. A common pair of environments that you’ll probably use in personal or work projects is a “stage” environment and a “production” or “prod” environment. The production environment is where your code runs to make your application/software/product work, whereas your stage environment is where you write and test your code to make sure it works for prod. Other types of computing environments include, but aren’t limited to: distributed computing, cloud computing, and cluster computing environments.
So in our bento box analogy, your computing environments are the places where you eat: your home, your job, your gym (if you eat food at your gym? lol), maybe night school. You can take your nice little organized bento box to any of them and eat your snacks more efficiently and easily. It’s more reliable and more secure to carry all your snacks in one box; you’re less likely to lose snacks along the way when they are all in your bento box. Do you like my bento box analogy? Lmk in the comments! I’ll leave you with a nice bulleted list of the benefits of containers:
- Agile Development - Containers mean you can be more agile, and are you even a developer if you don’t follow agile practices?
- Containers = Portability - It’s so much easier to deploy to different computing environments when you have your bento box!
- Efficiency - You can work smarter and get more done when you implement containers because you can trust that your code is going to work (hallelujah).
- Consistency and Reliability - This goes hand in hand with efficiency. Let’s break down an example of how containers provide this. A Docker container runs with identical behavior regardless of where it’s installed. For example, the HarperDB container works on Docker on Linux, Mac, and Windows. So even though you can’t install HarperDB directly on Windows, you can install HarperDB’s container on Docker for Windows and get consistent behavior.
- Less Overhead than Virtual Machines - Taking it back to my bento box one more time: you need LESS stuff if you eat from your bento box all day. Your bento box could even have a nice little sustainable fork and knife (yay being green!), so you don’t need to find a new set of silverware everywhere you go, you don’t need a new plate, etc. etc. Virtual machines would be another great explained blog at some point in the future; here is a breakdown by none other than… Docker!
How and What to Contain with Containers
So we know what containers are and the history of where they came from, but what do you use them for? Well, I briefly mentioned one of the most common use cases above when I talked about stage vs. prod environments. Why do development teams use stage vs. prod? Well, why do we write blogs in our notes/Word/Google Docs before we post them? You can think of the stage environment as the draft of your code. You don’t want to throw code into prod and break your entire product, which real live customers are using, unless you’re pulling a Jared Dunn from Silicon Valley and paying people across the globe at click farms to up your daily active users. 🙂 So stage is the place where you are free to f*** up, test your code, and make it rock.
Another use case for containers is to modernize your organization’s applications by way of the “Lift and Shift” method. You can move your applications into the cloud when you containerize them. You can also use containers to better support your microservice architectures. If you containerize your different individual microservices, you then have building blocks that allow you to deploy and scale those microservices like a pro.
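Here’s a toy sketch of that building-block idea, with made-up image and service names: each microservice gets its own container, so each piece can be deployed, replaced, or scaled without touching the others.

```bash
# Toy sketch (image and service names are made up): one container per microservice,
# all talking to each other over a shared Docker network.
docker network create shop-net
docker run -d --network shop-net --name users-service    my-org/users-service
docker run -d --network shop-net --name orders-service   my-org/orders-service
docker run -d --network shop-net --name payments-service my-org/payments-service

# Each piece can be updated or redeployed on its own, without touching the others:
docker stop orders-service && docker rm orders-service
docker run -d --network shop-net --name orders-service my-org/orders-service:v2
```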
One last example (although the possibilities for containers are endless) is to containerize your machine learning (ML) model. It’s a big challenge to deploy ML models in prod in a way that can scale, and containers help you do exactly that: you can distribute the ML application by spinning up multiple copies of its container across multiple machines! Then you can use orchestration tools like Kubernetes to easily manage all your different ML containers. Orchestration is the salt to containers’ pepper. I probably should do an “Orchestration of Containers Explained” blog in the near future because that’s a whole other beast.
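To give a flavor of what that orchestration step looks like, here’s a hedged sketch using Kubernetes. It assumes you’ve already containerized a model server into a hypothetical image called my-org/ml-model and have kubectl pointed at a cluster; the names and ports are placeholders.

```bash
# Hedged sketch (hypothetical image name and ports): run a containerized model
# server on Kubernetes, then scale it out to multiple copies across the cluster.
kubectl create deployment ml-model --image=my-org/ml-model
kubectl expose deployment ml-model --port=80 --target-port=8080
kubectl scale deployment ml-model --replicas=5
```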
Cool Container Conclusion
Yup, it’s cool, it’s about containers, and it’s a conclusion. Wow, lots to unpack when it comes to containers. This blog in no way explains everything surrounding containers, but hopefully it helped clarify some basics surrounding this amazing software. Zach recently wrote this epic blog about HarperDB’s containerization journey, and I think it’s a nice follow-up to this one if you want to keep digging and learning about containers through a real-world example. Also, I do have to plug the HarperDB Container on Docker Hub. Docker makes it easy to explore and develop on HarperDB. After Docker is installed, it’s a simple command to get up and running with HarperDB in your local environment. Docker abstracts away the process of installing and configuring HarperDB, so developers can iterate quickly with a new or existing HarperDB datastore.
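For reference, getting HarperDB running locally looks roughly like the following. The image name and port here are my best guess, so check the Docker Hub page linked above for the current run command, environment variables, and volume mounts.

```bash
# Rough sketch -- image name and port are a best guess (see Docker Hub for the
# exact run command, required environment variables, and volume mounts).
docker pull harperdb/harperdb
docker run -d --name harperdb -p 9925:9925 harperdb/harperdb
```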
Finally, Happy Holidays and Happy New Year ya filthy animals!