Introduction to Kubernetes
Mesut Oezdil
Posted on July 25, 2023
Get ready to embark on a journey into the exciting world of Kubernetes, the open-source system transforming the way we deploy, scale, and manage containerized applications. In this article we will look at how Kubernetes orchestrates your applications, streamlines their management, and enables seamless discovery, and at what makes it the captain of the container fleet!
What Does Kubernetes Entail?
Kubernetes serves as an open-source solution designed to automate the process of deploying, scaling, and overseeing containerized applications. It efficiently organizes the constituent containers of an application into coherent entities, simplifying their management and enabling seamless discovery. Frequently denoted as k8s, it derives this abbreviation from the eight characters situated between ‘k’ and ‘s.’ The term Kubernetes originates from the Greek word κυβερνήτης, signifying a helmsman or ship captain. By drawing parallels with this analogy, we can liken Kubernetes to the adept captain skillfully navigating a fleet of containers.
The Importance of Kubernetes and Its Capabilities
Containers offer an ideal method for bundling and executing applications. When operating in a production setting, it becomes essential to oversee the execution of containers hosting these applications and guarantee uninterrupted operation. For instance, in the event of a container failure, prompt replacement with a new container is imperative. Now, imagine having a program that can autonomously manage such behaviours. This is precisely where K8s comes into play! K8s presents a robust framework to ensure the reliable operation of distributed systems. Within this framework, it handles scaling and failover, offers various deployment patterns, and more to efficiently support your application’s needs. K8s supplies you with the following:
Service discovery and load balancing
K8s offers the option to expose a container through either its DNS name or its dedicated IP address. In situations where container traffic is substantial, K8s efficiently implements load balancing and evenly distributes network traffic, ensuring a stable deployment.
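As a minimal sketch of service discovery in practice, the Service manifest below (the name `web`, the label, and the port numbers are illustrative) gives a group of pods a stable DNS name and spreads incoming traffic across them:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web              # reachable in-cluster as web.<namespace>.svc
spec:
  selector:
    app: web             # traffic is balanced across all pods with this label
  ports:
    - port: 80           # port the Service listens on
      targetPort: 8080   # port the containers actually serve on
```

Clients inside the cluster simply connect to the name `web`; K8s keeps the set of backing pods up to date as they come and go.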
Storage orchestration
With K8s, you have the flexibility to effortlessly attach and utilize your preferred storage system, be it local storage, public cloud providers, or other options, as it automatically enables seamless mounting.
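A sketch of how an application requests storage without caring where it comes from: the PersistentVolumeClaim below (the name and size are illustrative) asks the cluster for a volume, and the configured storage backend, local disk, or cloud provider satisfies it:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce      # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi       # how much space the application needs
```

A pod then mounts the claim by referencing `data-claim` as a `persistentVolumeClaim` volume source; the pod never needs to know which storage system actually backs it.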
Automated rollouts and rollbacks
Using K8s, you can define the intended configuration of your deployed containers, and it will smoothly transition the current state to the desired state at a regulated pace. For instance, you can employ K8s automation to generate new containers for your deployment, eliminate existing containers, and seamlessly transfer all their resources to the new containers.
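The regulated pace described above is expressed in a Deployment's rollout strategy. In this hedged sketch (the name, labels, and image tag are illustrative), changing the image tag triggers a rolling update that replaces pods one at a time:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod may be down during the transition
      maxSurge: 1         # at most one extra pod above the desired count
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # updating this tag triggers a rolling update
```

If a rollout goes wrong, `kubectl rollout undo deployment/web` reverts to the previous revision.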
Automatic bin packing
K8s relies on the cluster of nodes you provide to execute containerized tasks. You specify the CPU and memory (RAM) requirements for each container, and K8s optimizes the placement of containers on your nodes to utilize your available resources effectively.
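The CPU and memory requirements mentioned above are declared per container, as in this sketch (the name, image, and values are illustrative); the scheduler uses the requests to pack pods onto nodes, while the limits are enforced at runtime:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
    - name: worker
      image: busybox:1.36
      command: ["sleep", "3600"]
      resources:
        requests:          # the scheduler places pods onto nodes using these
          cpu: "250m"      # a quarter of a CPU core
          memory: "64Mi"
        limits:            # hard ceilings enforced while the container runs
          cpu: "500m"
          memory: "128Mi"
```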
Self-healing
K8s automatically handles container restarts in the event of failures, performs replacements when necessary, terminates unresponsive containers based on user-defined health checks, and ensures that these containers are not exposed to clients until they are fully ready to serve.
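The user-defined health checks mentioned above are expressed as probes. In this sketch (the name, image, paths, and timings are illustrative), a failing liveness probe restarts the container, while a failing readiness probe keeps the pod out of Service endpoints until it is ready:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.25
      livenessProbe:         # repeated failures here restart the container
        httpGet:
          path: /
          port: 80
        periodSeconds: 10
      readinessProbe:        # failures here remove the pod from load balancing
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
```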
Secret and configuration management
K8s provides the capability to securely store and handle sensitive information, including passwords, OAuth tokens, and SSH keys. It enables the deployment and modification of secrets and application configuration without the need to rebuild container images, ensuring that secrets remain concealed within your stack configuration.
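A hedged sketch of this in practice (the names and the password value are purely illustrative): a Secret object holds the sensitive value, and a pod consumes it as environment variables without the value ever appearing in the container image:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                  # stored base64-encoded; stringData accepts plain input
  DB_PASSWORD: s3cr3t
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
      envFrom:
        - secretRef:
            name: db-credentials   # injected as environment variables
```

Rotating the password only requires updating the Secret, not rebuilding or redeploying the image.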
Docker Swarm vs. Kubernetes: Key Differences
Docker Swarm serves as Docker’s native, open-source container orchestration system, enabling container clustering and scheduling. In comparison to K8s, Swarm exhibits the following distinctions:
Docker Swarm offers greater convenience during setup, but its cluster may not be as robust as K8s, which requires a more complex setup but provides the benefit of a reliable cluster.
While K8s supports auto-scaling, Docker Swarm lacks this capability; however, manually scaling containers with Docker Swarm is generally faster than with K8s.
Docker Swarm lacks a graphical user interface (GUI), whereas K8s provides one through its Dashboard add-on.
Docker Swarm automatically load balances traffic between containers in a cluster, while K8s distributes such traffic through Services and Ingress resources that you configure.
For logging and monitoring, Docker Swarm relies on third-party tools such as the ELK stack, whereas K8s offers built-in tooling for the same purpose.
Docker Swarm can easily share storage volumes with any container, whereas K8s can only share storage volumes with containers in the same pod.
Docker supports rolling updates but doesn’t offer automatic rollbacks, while K8s can handle both rolling updates and automatic rollbacks.
Kubernetes components
A K8s cluster comprises a minimum of one master node and one or more worker nodes. The master node serves as the control plane, responsible for task scheduling and cluster monitoring. When deploying K8s, you automatically obtain a cluster, consisting of worker machines known as nodes, which execute containerized applications. Each cluster includes at least one worker node, hosting the Pods constituting the application workload. The control plane oversees the worker nodes and the Pods within the cluster. In production settings, the control plane often spans multiple computers, while a cluster typically incorporates multiple nodes to ensure fault tolerance and high availability.
1- Control Plane Components
The control plane’s various elements are responsible for making overarching determinations regarding the cluster, such as scheduling tasks, and promptly reacting to cluster-related incidents, such as initiating a new pod when the deployment’s replicas field is unfulfilled. These control plane components can be executed on any machine within the cluster. Nevertheless, to streamline the process, setup scripts often commence all control plane components on a single machine, while ensuring that user containers do not run on this particular machine.
kube-apiserver
The API server, a pivotal element of the K8s control plane, exposes the K8s API and functions as the front end of the control plane. The primary implementation of the K8s API server is known as kube-apiserver. Designed for horizontal scalability, kube-apiserver can expand its capacity by deploying multiple instances. Consequently, running multiple instances of kube-apiserver and distributing traffic among them is a feasible approach.
etcd
It is a reliable and fault-tolerant key-value store utilized as the underlying data store for all cluster-related information in K8s. In the event that your K8s cluster relies on etcd as its data store, it is imperative to establish a comprehensive backup strategy for safeguarding that data.
kube-scheduler
kube-scheduler, an integral control plane element, constantly monitors for recently created pods lacking assigned nodes and subsequently designates a suitable node for their execution. The scheduling process involves evaluating various factors, such as individual and collective resource demands, hardware/software/policy restrictions, affinity and anti-affinity specifications, data proximity, inter-workload interference, and deadlines, to make informed decisions.
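One of those factors, node affinity, can be sketched as follows (the pod name, image, and the `disktype=ssd` node label are illustrative assumptions); the scheduler will only place this pod on nodes carrying the matching label:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fast-io-task
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # a hard constraint
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd"]   # only nodes labeled disktype=ssd qualify
  containers:
    - name: task
      image: busybox:1.36
      command: ["sleep", "3600"]
```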
kube-controller-manager
The Control Plane encompasses a pivotal component that executes various controller processes. While each controller logically functions as an independent process, for the sake of simplicity, they are consolidated into a unified binary and run within a single process. These controllers encompass:
Node controller: Tasked with detecting and reacting to node failures.
Replication controller: Ensures the appropriate number of pods is maintained for each replication controller object in the system.
Endpoints controller: Fulfills the role of populating the endpoints object, effectively joining services and pods.
Service Account & Token controllers: Create default accounts and API access tokens for newly created namespaces.
cloud-controller-manager
The cloud controller manager is an essential element within the K8s control plane, incorporating cloud-specific control logic. By employing the cloud controller manager, you can seamlessly integrate your cluster with your cloud provider’s API, while effectively segregating components that interact with the cloud platform from those focused on cluster interactions. The cloud controller manager exclusively operates controllers designed for your specific cloud provider. In situations where K8s runs on your own premises or in a learning environment on your personal computer, the cluster will not include a cloud controller manager. Similar to the kube-controller-manager, the cloud-controller-manager consolidates logically independent control loops into a single binary, executed as a unified process. To enhance performance and increase fault tolerance, horizontal scaling allows running multiple copies of the manager. The following controllers can have cloud provider dependencies:
Node controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding.
Route controller: For setting up routes in the underlying cloud infrastructure.
Service controller: For creating, updating, and deleting cloud provider load balancers.
2- Node Components
On each node, node components manage the active pods and offer the necessary K8s runtime environment.
kubelet
kubelet acts as an agent, operating on every node within the cluster, to ensure the proper execution of containers within pods. It receives a collection of PodSpecs through various means and guarantees the continuous and healthy operation of containers defined in these PodSpecs. Notably, the kubelet solely oversees containers created by K8s and does not manage those outside of its purview.
kube-proxy
kube-proxy is a network proxy that operates on every node within your cluster, playing a crucial role in the implementation of K8s Services. By managing network rules on nodes, kube-proxy facilitates seamless network communication to your pods from both internal and external network sessions. When available, kube-proxy utilizes the operating system packet filtering layer; otherwise, it independently forwards the traffic to ensure efficient networking.
Container runtime
The container runtime serves as the software responsible for executing containers. K8s provides support for multiple container runtimes, including Docker, containerd, CRI-O (CRI-O is meant to provide an integration path between OCI conformant runtimes and the kubelet), and any implementation of the K8s CRI (Container Runtime Interface).
Conclusion
I appreciate you being kind enough to read this far, and congratulations on your patience! This article provided an introduction to K8s, emphasizing its role as an open-source solution for automating application deployment, scaling, and management. The significance of K8s’ capabilities, such as service discovery, automated rollouts, and self-healing, was highlighted, along with a comparison between Docker Swarm and K8s. Additionally, the essential components of a K8s cluster, including the control plane and node components, were explored.
If you want to stay up to date on K8s, I highly recommend you follow Daniele Polencic and subscribe to Learn Kubernetes weekly.