You need to learn Kubernetes RIGHT NOW!! 🚀

Archit Sharma

Posted on July 30, 2022


Why should I learn Kubernetes? 🤔

Because Kubernetes can deploy 100 Docker containers with just one command 🤯
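
To make that concrete: once an application has been described to Kubernetes (as a Deployment, which we'll get to), scaling it out really is one command. A minimal sketch, where the Deployment name cookies-web is just a made-up placeholder:

```bash
# Tell Kubernetes to run 100 copies of the (hypothetical) cookies-web app.
kubectl scale deployment cookies-web --replicas=100
```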

This article is for beginners who want to learn Kubernetes. It walks through the example of scaling a website to explain the concepts behind Kubernetes.

Prerequisites:

  • Computer Networking
  • Docker

What is Kubernetes? 🤔

Kubernetes (K8s) is a portable, extensible, open source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. 😵‍💫

Didn't get it? ☹️ I know!

Let me explain:

Kubernetes solves a major problem with Docker containers, or really any type of container. 🛠️

Let's look at the problem with Docker by imagining a cookie website 💻 where we'll sell cookies 🍪. We deploy it in a Docker container because of its isolated environment, better resource usage, and so on.

Problem with Docker:

NOTE: In this blog, I will not guide you through the process of creating and running a website in Docker. I am only using this scenario to demonstrate the issue.

Deployed website
So I have deployed our cookie website (cookies.com) 💻 in a Docker container, and guess what: people are purchasing our cookies 🍪 like crazy. Our website is doing so well, a bit too well in fact, that it can't take the load and is crashing. 🤯
Not only that, but the server or host 🖥️ it's running on has gone down a few times, causing outages. How do we solve this? 🤔

Let's create another host 🖥️ so that we can run a second container on it, which is quite simple with Docker. Just one command and you're done. 😋
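
To give you a feel for it, here's roughly what that one command looks like; the image name and port below are hypothetical placeholders for this example:

```bash
# On the new host: run another copy of the website container.
# (Image name "cookies-web" and port 8080 are made-up placeholders.)
docker run -d --name cookies-web -p 8080:80 cookies-web:latest
```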

But wait, we're not done yet. We need to ensure that when people visit cookies.com, their requests can go to either server 1 or server 2, which is normally done with a load balancer. So now we also need to figure out how to set up a load balancer. 😲
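
As a rough sketch, a simple load balancer could be an NGINX container sitting in front of the two hosts. The IP addresses and ports below are made-up examples:

```bash
# Write a minimal NGINX config that spreads traffic across both hosts.
# (The IPs and ports are hypothetical placeholders.)
cat > nginx.conf <<'EOF'
events {}
http {
  upstream cookies_backend {
    server 192.0.2.10:8080;   # server 1
    server 192.0.2.11:8080;   # server 2
  }
  server {
    listen 80;
    location / {
      proxy_pass http://cookies_backend;
    }
  }
}
EOF

# Run NGINX with that config as the load balancer for cookies.com.
docker run -d --name cookies-lb -p 80:80 \
  -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" nginx
```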

Two servers Load balanced
Now both servers are being used since they are load balanced, so if one goes down, the other remains running.👍

People are loving our cookies🍪, and they're coming to the website in large numbers. We're receiving a lot of views, visits, and purchases. 😲
Our servers can't take it anymore, and they're crashing again, so what do we do?
Every time we add a new server, we also have to set up a container on that server and then reconfigure the load balancer. And if our website crashes again as traffic keeps increasing,
we have to repeat the whole process... 😡

We need a solution. We need to automate this or orchestrate it in some way, perhaps using container orchestration. 🤔

Problem Solution:

This is where Kubernetes (K8s) comes into play. It will handle all of that grunt work for us.

Let's see how:

Part of the setup is already done. Since Kubernetes is a container orchestrator, we still need servers and some sort of container runtime, which in our case will be Docker.
Kubernetes will basically help us make Docker better.
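
To preview where this is heading: with Kubernetes you describe the desired state once, for example "run 3 copies of the cookies website", and the cluster keeps it that way. A minimal sketch, assuming a hypothetical cookies-web image:

```bash
# Describe the desired state: a Deployment with 3 replicas of the website.
# (The image name is a made-up placeholder.)
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cookies-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cookies-web
  template:
    metadata:
      labels:
        app: cookies-web
    spec:
      containers:
      - name: cookies-web
        image: cookies-web:latest
        ports:
        - containerPort: 80
EOF
```

No provisioning containers by hand on each server, no hand-wired load balancer; Kubernetes spreads those 3 replicas across the available servers for us.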

So where do we begin? 🤔

Kubernetes Architecture:

We'll first introduce a new server that will be in charge of our other servers; we are going to call it the Master.
The Master is the one who instructs the other servers what to do and keeps them on track.

Enters the Master Node
We do need to include our existing servers in the team, so we'll install two Kubernetes components on each of these servers: kube-proxy and the kubelet.

Kubernetes Cluster

After installing those two components along with a container runtime (in our case, Docker), these servers are now part of the team. These servers or machines are called Worker Nodes, usually just referred to as Nodes.
The two components are simply the way the Master controls them.
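
If you're curious what "joining the team" looks like in practice, with the kubeadm tool it's roughly one command per server; the IP address, token, and hash below are placeholders:

```bash
# On each new worker server (after installing a container runtime,
# kubelet and kubeadm): join it to the cluster run by the Master.
# (IP, token, and hash values are hypothetical placeholders.)
sudo kubeadm join 192.0.2.100:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:<hash>

# Back on the Master, the new Worker Node should show up:
kubectl get nodes
```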

Kubelet:

The kubelet is a Kubernetes process that makes it possible for the machines in the cluster to talk to each other, and that executes tasks on its Node, like running application containers.

Kube-proxy:

Kube-proxy is a network proxy/load balancer that implements the Service abstraction. It programs the iptables rules on the Node to redirect requests for a Service IP to one of that Service's registered backend pods.
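
As a rough sketch of the Service abstraction: you give a group of pods one stable virtual IP and port, and kube-proxy's iptables rules spread incoming traffic across those pods. The names below reuse the hypothetical cookies-web example:

```bash
# Create a Service in front of the (hypothetical) cookies-web pods.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: cookies-web
spec:
  selector:
    app: cookies-web     # matches the pods' label
  ports:
  - port: 80             # the Service's virtual IP listens here...
    targetPort: 80       # ...and forwards to this port on the pods
EOF

# On a Node, you can peek at the iptables rules kube-proxy programmed:
sudo iptables-save | grep cookies-web
```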

What is a Kubernetes cluster?

A Kubernetes cluster is a collection of Node machines, which includes the Master Node and the Worker Nodes.

The Master is a server or Node like any other, but it has certain specific components and tasks that we have assigned to it.
The Master has four jobs or components: the Kubernetes API Server, the Scheduler, the Controller Manager, and etcd.
These are simply processes running on the Master Node that are absolutely necessary to run and manage the cluster properly.

Kubernetes master explained

Kubernetes API Server:

The API Server is the entry point to the Kubernetes cluster. This is the process that the different Kubernetes clients talk to: a UI (User Interface) if you are using the Kubernetes Dashboard, an API if you are using scripts and automation tools, and a CLI (Command Line Interface) such as kubectl.
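
In fact, every kubectl command is just an HTTPS request to the API Server. You can see this yourself by raising kubectl's verbosity; the output line below is only a rough illustration:

```bash
# -v=6 makes kubectl print the REST calls it sends to the API Server.
kubectl get nodes -v=6
# ...GET https://<api-server>:6443/api/v1/nodes?limit=500 200 OK...
```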

Controller Manager:

The Controller Manager keeps an overview of what's happening in the cluster: whether something needs to be repaired, or whether a container has stopped and needs to be restarted, and so on.
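
You can watch this self-healing in action. If a Deployment says "3 replicas" and a pod dies (or you delete one on purpose), the Controller Manager notices the mismatch and a replacement gets created. A small sketch, assuming the hypothetical cookies-web Deployment from earlier (the pod name is a placeholder):

```bash
# Kill one pod on purpose...
kubectl delete pod cookies-web-7d4b9c8f6d-abcde

# ...and watch a new pod get created to bring the count back to 3.
kubectl get pods -w
```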

Scheduler:

The Scheduler is responsible for placing containers on different Nodes based on the workload and the available resources on each Node. It's an intelligent process that decides which Worker Node the next container should be scheduled on, based on the resources available on the Worker Nodes.
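
The Scheduler's decision is driven by what each container asks for. A minimal sketch: by declaring resource requests, you tell the Scheduler how much CPU and memory a container needs, and it only places the container on a Node with that much capacity free (the image name is a made-up placeholder):

```bash
# A pod that asks for half a CPU and 256Mi of memory; the Scheduler
# will only place it on a Worker Node with that much available.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cookies-web-test
spec:
  containers:
  - name: cookies-web
    image: cookies-web:latest   # hypothetical image
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
EOF

# Check which Node the Scheduler picked:
kubectl get pod cookies-web-test -o wide
```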

etcd:

etcd is a key-value store that holds the current state of the Kubernetes cluster at any point in time, so it contains all the configuration data and all the status data of each Node and of each container on that Node.
Backup and restore actually work from etcd snapshots, because you can recover the whole cluster state using an etcd snapshot.
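
For example, backing up a cluster essentially means taking an etcd snapshot with etcdctl. The endpoint and certificate paths below are typical kubeadm defaults, so they may differ on your setup:

```bash
# Take a snapshot of etcd (paths/endpoint are typical kubeadm defaults;
# adjust them for your cluster).
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Later, the cluster state can be rebuilt from that snapshot:
ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-snapshot.db
```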

Kubernetes Architecture

What is the Virtual Network?

The virtual network is also a very important component of Kubernetes: it enables the Worker Nodes and the Master Node to talk to each other. It basically turns all the Nodes inside a cluster into one powerful machine that has the sum of all the resources of the individual Nodes.
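
A quick way to see this flat network: every pod gets its own IP, and a pod on one Node can reach a pod on another Node directly by that IP. A sketch (the pod name and IP are placeholders, and it assumes the container image has wget):

```bash
# List pod IPs and the Nodes they landed on:
kubectl get pods -o wide

# From one pod, reach a pod running on a *different* Node by its pod IP.
kubectl exec cookies-web-7d4b9c8f6d-abcde -- wget -qO- http://10.244.1.17
```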

One thing to note here is that the Worker Nodes carry most of the load because they run the actual applications, so they are much bigger and have more resources. The Master Node, on the other hand, only runs a handful of master processes, so it doesn't need as many resources.

However, the Master Node is much more important than any individual Worker Node, because if you lose access to the Master Node you will not be able to manage the cluster anymore. That means you absolutely have to have a backup of your Master Node at all times.

In a production environment you would usually have at least two Master Nodes, and you can have more. That way, if one Master Node goes down, the cluster will still run smoothly because the other masters are available.

In the next article we'll take a look at the main Kubernetes components and talk about them one by one in detail.

Thank you for reading this blog; do follow me!