574n13y

Vivesh

Posted on November 11, 2024

K8s Basic

Kubernetes is an open-source platform for automating deployment, scaling, and managing containerized applications. Created by Google, Kubernetes is now maintained by the Cloud Native Computing Foundation (CNCF) and has become the go-to orchestration tool for containerized applications. Here’s a breakdown of the main components and concepts in Kubernetes:


1. Core Concepts

  • Containers and Pods: Containers are packaged environments that include an application and its dependencies. In Kubernetes, a Pod is the smallest deployable unit and can contain one or more tightly coupled containers that share storage and network resources (a minimal Pod manifest follows this list).
  • Nodes and Clusters:
    • A Node is a worker machine (virtual or physical) where Pods run. It includes the Kubernetes software needed to manage these Pods.
    • A Cluster is a collection of nodes managed by the control plane (historically called the master node), which is responsible for maintaining the desired state of the cluster.
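
To make the Pod idea concrete, here is a minimal Pod manifest (a sketch; the name and image are placeholders chosen for this example) that runs a single NGINX container:
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo-pod              # hypothetical name for illustration
  spec:
    containers:
      - name: web
        image: nginx            # one container; a Pod may hold several tightly coupled ones
        ports:
          - containerPort: 80
In practice you rarely create bare Pods like this; higher-level resources such as Deployments (covered below) create and manage them for you.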

2. Kubernetes Architecture

  • Control Plane (Master) Components: These manage the cluster and ensure Pods run as expected; you can inspect them on a live cluster with the commands shown after this section.

    • API Server: The core interface to the cluster, handling requests from users and the various components.
    • Controller Manager: Runs the controllers that reconcile the cluster's actual state with the desired state, for example by making sure the configured number of Pods are running.
    • Scheduler: Decides on which nodes new Pods should be placed based on resource requirements and availability.
    • etcd: A distributed key-value store that keeps all cluster data, serving as Kubernetes' single source of truth.
  • Node Components: These run on every Node, managing the operation of the containers.

    • Kubelet: An agent that communicates with the API server to ensure containers in Pods are running as expected.
    • Kube-Proxy: Manages network rules to allow communication with Pods across nodes.
    • Container Runtime: Runs the actual containers, typically containerd or CRI-O (Docker Engine can still be used through the cri-dockerd adapter).
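
On a running cluster you can see many of these components for yourself; most of the control-plane pieces run as Pods in the kube-system namespace, and the nodes report their container runtime:
  # Control-plane and add-on components (API server, etcd, scheduler, kube-proxy, ...)
  kubectl get pods -n kube-system
  # Nodes in the cluster, with their roles and container runtime versions
  kubectl get nodes -o wide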

3. Kubernetes Resources

  • Deployments: Control how many replicas of an application are running, allowing for easy scaling and rollback.
  • Services: Provide stable network addresses for Pods and manage load balancing between them.
  • ConfigMaps and Secrets: Used to pass configuration data or sensitive information to applications (a small ConfigMap sketch follows this list).
  • Persistent Volumes (PV) and Persistent Volume Claims (PVC): Provide and claim storage that needs to persist beyond the lifecycle of individual Pods.
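
As a small illustration of the ConfigMap idea, the sketch below defines a hypothetical ConfigMap (the name and keys are invented for this example) whose values a Pod can consume as environment variables or mounted files:
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: app-config            # hypothetical name
  data:
    LOG_LEVEL: "info"           # plain key/value pairs
    FEATURE_FLAG: "true"
Secrets look very similar but hold sensitive values encoded in base64, and Kubernetes treats them with extra care (for example, they can be encrypted at rest).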

4. Key Kubernetes Features

  • Self-healing: Restarts failed containers, replaces or reschedules them on other nodes, and kills containers that don’t respond.
  • Horizontal Scaling: Allows you to scale applications up or down based on demand (see the example after this list).
  • Automated Rollouts and Rollbacks: Kubernetes can roll out updates to applications, ensure the updates are healthy, and roll back if issues arise.
  • Service Discovery and Load Balancing: Provides built-in service discovery and load balancing.
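
As an example of horizontal scaling, the commands below (using a hypothetical deployment named my-app) change the replica count manually, or hand that decision to the Horizontal Pod Autoscaler:
  # Scale the hypothetical "my-app" deployment to 5 replicas
  kubectl scale deployment my-app --replicas=5
  # Or let Kubernetes autoscale between 2 and 10 replicas at ~50% average CPU
  kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=50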

5. How to Interact with Kubernetes

  • kubectl: The command-line tool for interacting with a Kubernetes cluster. With kubectl, you can create, update, delete, and troubleshoot resources in the cluster.
  • YAML Configuration Files: Kubernetes resources are often defined using YAML files. These configurations define the desired state of resources, which Kubernetes works to maintain.
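
For example, the hello-world Deployment used later in this post could be written as a YAML file instead of created imperatively; the manifest below is a minimal sketch (the replica count and labels are illustrative):
  # deployment.yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: hello-world
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: hello-world
    template:
      metadata:
        labels:
          app: hello-world
      spec:
        containers:
          - name: nginx
            image: nginx
            ports:
              - containerPort: 80
Running kubectl apply -f deployment.yaml submits this desired state, and Kubernetes then works to make the cluster match it.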

6. Getting Started with Kubernetes

  • Local Setup Options:
    • Minikube is a local Kubernetes cluster that runs on a single node. It's great for development and learning.
    • KIND (Kubernetes in Docker) is another lightweight option for running a Kubernetes cluster in Docker.
  • Cloud Providers: Major cloud providers like AWS, Google Cloud, and Azure offer managed Kubernetes services (EKS, GKE, and AKS, respectively), which simplify deployment by handling the cluster management.

7. Kubernetes Use Cases

  • Microservices architecture, where services need to be independently deployed, scaled, and managed.
  • Automated scaling and deployment of web applications.
  • Infrastructure-as-Code (IaC) environments for DevOps.

TASK: Install Minikube

To get started with Kubernetes locally, installing Minikube is an excellent choice. Minikube provides a single-node Kubernetes cluster that runs in a virtual machine or container on your local machine, making it perfect for testing and development.

Here's a step-by-step guide to install Minikube and run your first Kubernetes cluster:

Step 1: Install Minikube

1.1 Install Prerequisites

  • kubectl: Kubernetes’ command-line tool is needed to manage and interact with the cluster.
  # Install kubectl (Linux or MacOS)
  curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/$(uname | tr '[:upper:]' '[:lower:]')/amd64/kubectl"
  chmod +x ./kubectl
  sudo mv ./kubectl /usr/local/bin/kubectl
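If the install succeeded, a quick client-only version check should confirm it (output varies by release):
  kubectl version --client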
  • Virtualization Support: Minikube requires virtualization to run Kubernetes. Ensure you have one of the following:
    • Docker (recommended for Linux, Windows, and macOS)
    • Hyperkit (macOS)
    • KVM (Linux)
    • Hyper-V (Windows)

1.2 Install Minikube

Download and install Minikube using a package manager or direct download.

  • On macOS (Homebrew):
  brew install minikube
  • On Linux:
  curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
  sudo install minikube-linux-amd64 /usr/local/bin/minikube
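Whichever install route you use, you can confirm the binary is on your PATH:
  minikube version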

Step 2: Start Minikube

With Minikube installed, you can start a local Kubernetes cluster.

  1. Open a terminal or command prompt.
  2. Run the following command to start Minikube:
   minikube start

Minikube will automatically detect and use the appropriate VM or container runtime.
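
If you prefer to pin a specific driver instead of relying on auto-detection (for example Docker, assuming it is installed and running), you can pass it explicitly:
   minikube start --driver=docker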

  3. Verify that Minikube started successfully:
   kubectl cluster-info

Step 3: Run Your First Kubernetes Application

To deploy a simple application on your Kubernetes cluster, follow these steps:

  1. Create a Deployment: Use kubectl to create a deployment that runs a sample NGINX container.
   kubectl create deployment hello-world --image=nginx

This command tells Kubernetes to create a deployment named "hello-world" using the NGINX container image.
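
Before exposing it, you can check that the Deployment has rolled out successfully:
   kubectl get deployments
   kubectl rollout status deployment/hello-world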

  2. Expose the Deployment as a Service:
   kubectl expose deployment hello-world --type=NodePort --port=80

This creates a Service, which provides an external URL for accessing the application.
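
To see which NodePort was assigned, inspect the Service:
   kubectl get service hello-world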

  3. Access the Application:

    • Get the URL to access the service:
     minikube service hello-world --url
    
  • Open the URL in your browser to see the NGINX welcome page.
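
If you prefer the terminal and the node IP is reachable from your host (typical for Linux setups; on macOS or Windows the browser route above is simpler), you can curl the NodePort directly:
     curl "http://$(minikube ip):$(kubectl get service hello-world -o jsonpath='{.spec.ports[0].nodePort}')"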

Step 4: Verify and Explore

  • List Pods: View running Pods:
  kubectl get pods
  • View Cluster Status:
  kubectl get nodes
  • View Minikube Dashboard:
  minikube dashboard

This command opens the Minikube Kubernetes dashboard in a browser, providing a graphical interface for monitoring the cluster.
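
If you only want the dashboard's URL (for example, to open it in a browser of your choice) rather than having one launched automatically, the --url flag prints it instead:
  minikube dashboard --url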

Step 5: Stop and Delete Minikube (Optional)

To stop the Minikube cluster when you’re done:

minikube stop

To completely delete the Minikube cluster:

minikube delete

Summary

Now you have a local Kubernetes cluster up and running! Minikube simplifies learning and experimenting with Kubernetes, enabling you to practice deployment, scaling, and management of containerized applications.

Happy Learning ...
