Kubernetes: A beginner's guide
Niharika Goulikar
Posted on August 14, 2024
Kubernetes is a container orchestration tool. It is often used alongside Docker, and it comes into play when we want to scale our containers.
Why do we need Kubernetes when we have Docker?
While Docker allows us to create and manage containers, it has certain limitations. Kubernetes addresses these limitations by providing more advanced features for orchestrating and managing containerized applications at scale. Some of them are:
- Automated Scaling: Kubernetes can automatically scale applications up or down based on traffic or resource usage. Docker alone doesn’t have native support for this level of dynamic scaling.
- Self-Healing: If any one of the containers fails, Kubernetes will restart it. Docker doesn't provide out-of-the-box support for this feature.
- Automated Rollouts and Rollbacks: Kubernetes automatically rolls back to the previous stable version if the current update causes any issues in the application.
- Persistent Storage Management: Among the many advanced storage management features it offers, one such capability is volume snapshots, which capture the state of volumes at specific points in time, making them useful for backups and data recovery.
- Load Balancing: As your application scales across multiple containers, Kubernetes automatically balances the traffic between these containers, ensuring that no single container is overwhelmed.
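Autoscaling, for example, is declared rather than scripted. As a sketch (the Deployment name `web-app` and the thresholds here are assumptions for illustration), a HorizontalPodAutoscaler might look like this:

```yaml
# Hypothetical example: scale the "web-app" Deployment between 2 and 10
# replicas, targeting 70% average CPU utilization across its Pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Kubernetes then adds or removes Pods on its own as CPU usage crosses the target, which is exactly the dynamic scaling Docker alone does not provide.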
Kubernetes is all about the "game of clusters". Orchestration is largely about how clusters are managed: how and where containers run across the cluster.
What is a cluster?
A cluster is a set of machines (physical or virtual) that work together to run your containerized applications. The cluster consists of nodes and a control plane.
There are two types of nodes:
- Master node: The master node is responsible for scaling, updating, and assigning workloads to worker nodes. It distributes the incoming workload evenly across the available worker nodes.
- Worker nodes: These are the nodes on which our application containers actually run.
Control Plane
It consists of the following components:
- API Server: The API server exposes the Kubernetes API and is the entry point for all commands and queries. We use kubectl, a command-line tool, to communicate with the API server.
- Controller Manager: Ensures that the desired state of the cluster is maintained by running controllers that manage different aspects of the cluster (e.g., replication, node status).
- Scheduler: Assigns newly created Pods to worker nodes based on resource availability and other constraints.
- etcd: A distributed key-value store that holds the cluster's state and configuration data.
At times, when we think of the server behind a highly scaled application, it is actually a Kubernetes cluster that makes the system scalable. The cluster can automatically scale applications up or down, distribute workloads evenly, ensure high availability, perform self-healing by replacing failed containers, and manage seamless updates and rollbacks. Kubectl is the command-line tool used to communicate with the master node and carry out administrative tasks.
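The update and rollback behavior described above is usually driven by a Deployment. As a minimal sketch (the name `web-app` and the nginx image are assumptions for illustration), a Deployment that keeps three replicas running and replaces them gradually during updates could look like:

```yaml
# Hypothetical example: a Deployment that keeps 3 replicas running
# and rolls out new versions one Pod at a time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod down during an update
      maxSurge: 1         # at most one extra Pod during an update
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.27
```

If a new image version misbehaves, `kubectl rollout undo deployment/web-app` returns the Deployment to its previous revision.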
Kube-proxy is a network proxy that runs on each node and maintains network rules. It can also be configured to apply a specific set of network rules on the nodes.
The container runtime is responsible for running containers. It manages their life cycle: creating them, stopping them, pulling images, and so on.
Each node can run multiple Pods, and each Pod can run one or more containers.
What is a pod?
A Pod is the smallest deployable unit in Kubernetes and is a logical group of one or more containers.
These containers share the same network namespace, IP address, and storage volumes, which allows them to communicate with each other efficiently.
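To make the shared network namespace concrete, here is a hypothetical Pod manifest (the names and images are assumptions for illustration) in which a sidecar container reaches the main container over localhost:

```yaml
# Hypothetical example: two containers in one Pod share an IP address,
# so the sidecar can reach nginx at http://localhost:80.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web-app
spec:
  containers:
    - name: web
      image: nginx:1.27
    - name: probe-sidecar
      image: busybox:1.36
      # Poll the web container over the shared network namespace.
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 60; done"]
```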
These pods are accessed via services.
Service: It acts as a consistent access point (with a stable IP address and port) that routes traffic to the correct Pods, even if the underlying Pods are dynamically created or destroyed.
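A Service selects Pods by label rather than by name, which is how it keeps working as Pods come and go. As a sketch (the label `app: web-app` and the Service name are assumptions for illustration):

```yaml
# Hypothetical example: a Service that gives all Pods labeled
# app=web-app a single stable virtual IP and routes port 80 to them.
apiVersion: v1
kind: Service
metadata:
  name: web-app-svc
spec:
  selector:
    app: web-app      # any Pod with this label receives traffic
  ports:
    - port: 80        # port the Service exposes
      targetPort: 80  # port the container listens on
```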
Some of the key components related to a node are:
- kubelet: The kubelet interacts with both the container runtime and the node. It is the process responsible for starting a Pod with a container inside.
- Container runtime: A container runtime is needed to run the application containers inside a Pod.
- Kube-proxy: It is the process responsible for forwarding requests from Services to Pods.
All requests and administrative tasks go through the control plane.
The Kubernetes API server is the central management entity in a Kubernetes cluster. It acts as the single point of communication for the entire cluster, exposing the Kubernetes API. All components, such as kubectl, other control plane components, and even the nodes, communicate with the API server to manage and control the cluster.
The API server receives requests from kubectl, CI/CD tools, and client SDKs that want to interact with the cluster.
In summary, Kubernetes is a container orchestration platform that enables the scaling of containerized applications, ensures self-healing of containers, manages rollouts and rollbacks, and provides persistent storage management, including volume snapshots. It helps build a robust and reliable system by automating these critical aspects of container management.