Getting Started with Amazon EKS: Basics and guide to create your first POD
erozedguy
Posted on August 17, 2021
1. What is Amazon EKS?
Amazon EKS is a managed service for running Kubernetes on AWS without needing to install, operate, or maintain your own Kubernetes control plane infrastructure.
2. Important Concepts
2.1 EKS Control Plane
- Consists of control plane nodes that run Kubernetes software, such as etcd and the Kubernetes API server
- Control plane infrastructure is not shared across clusters or AWS accounts
- The control plane exposes a public endpoint so that clients and nodes can communicate with the cluster
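For an existing cluster, you can look up that public API server endpoint with the AWS CLI (the cluster name below is a placeholder):

```shell
# Print the Kubernetes API server endpoint for a cluster
# Replace my-cluster with your cluster's name
aws eks describe-cluster --name my-cluster \
  --query "cluster.endpoint" --output text
```

This is the same endpoint that kubectl talks to once your kubeconfig is set up.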
2.2 Worker Nodes & Node Groups
- Worker nodes are EC2 instances
- They run the application workloads
- Node groups are groups of EC2 instances provisioned as part of an EC2 Auto Scaling group
- Nodes connect to the control plane via the API server endpoint
Considerations:
- All EC2 instances in a node group must be the same instance type
- All instances must run the same AMI
- All instances must use the same EKS worker node IAM role
Reference: https://docs.aws.amazon.com/eks/latest/userguide/eks-compute.html
2.3 VPC
- Control plane components cannot view or receive communications from other clusters or other AWS accounts, except as authorized with Kubernetes RBAC policies
- VPCs and subnets must be tagged appropriately so that Kubernetes knows it can use them for deploying resources, such as load balancers
- Amazon EKS creates and manages network interfaces in your account
- Fargate PODS are deployed to private subnets only
Reference: https://docs.aws.amazon.com/eks/latest/userguide/eks-networking.html
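As a sketch of the tagging requirement above, the following AWS CLI command tags a public subnet so that Kubernetes can place internet-facing load balancers in it (the subnet ID is a placeholder; private subnets use the kubernetes.io/role/internal-elb tag instead):

```shell
# Tag a public subnet for internet-facing load balancers
# Replace subnet-0abc123 with a real subnet ID from your VPC
aws ec2 create-tags --resources subnet-0abc123 \
  --tags Key=kubernetes.io/role/elb,Value=1
```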
2.4 Fargate Profiles
- A Fargate profile lets an administrator declare which PODS run on Fargate
- Each POD running on Fargate is isolated from other resources and does not share the underlying kernel, CPU resources, memory, or elastic network interface with any other POD
- AWS built a Fargate controller that recognizes the PODS that belong to Fargate and schedules them on Fargate
Reference: https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html
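A Fargate profile can be created with eksctl. As a sketch (the cluster, profile, and namespace names are placeholders), PODS created in the selected namespace are then scheduled onto Fargate:

```shell
# Create a Fargate profile that matches PODS in the "dev" namespace
eksctl create fargateprofile --cluster my-cluster \
  --name my-fargate-profile \
  --namespace dev
```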
3. Creating an EKS cluster
PREREQUISITES
Install and configure AWS CLI
Install eksctl
Reference: https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html
Install kubectl
Reference: https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
3.1 Create EKS Cluster with eksctl
eksctl create cluster --name <my-cluster> \
--version <1.21> \
--region <region> \
--zones <az1,az2,...> \
--without-nodegroup
- Check the cluster in our AWS account
eksctl get cluster
3.2 Create and associate IAM OIDC Provider for EKS Cluster
Associating an IAM OIDC provider with the cluster enables IAM roles for service accounts, so that Kubernetes service accounts in the cluster can assume IAM roles and access AWS services.
eksctl utils associate-iam-oidc-provider --region <region-code> \
--cluster <cluster-name> \
--approve
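To confirm the association worked, you can list the OIDC providers in the account; the output should include an entry containing your cluster's OIDC issuer ID:

```shell
# List IAM OIDC providers registered in the AWS account
aws iam list-open-id-connect-providers
```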
3.3 Create an EC2 key pair with the AWS Management Console
- For this step you must use the AWS Management Console in your web browser
NOTE: When we use eksctl to create an EKS cluster, the public and private subnets, NAT gateways, cluster service role, security groups, and other resources required for our EKS cluster are created automatically.
4. Creating a NODE GROUP
4.1 Create Node Group with additional Add-Ons in Public Subnets
eksctl create nodegroup --cluster <CLUSTER_NAME> \
--region <REGION> \
--name <NAME_NODE_GROUP> \
--node-type <TYPE_EC2_INSTANCE> \
--nodes <#_EC2_INSTANCES> \
--nodes-min <MIN_NODES> \
--nodes-max <MAX_NODES> \
--node-volume-size <GB> \
--ssh-access \
--ssh-public-key <PUB_KEY> \
--managed \
--asg-access \
--external-dns-access \
--full-ecr-access \
--appmesh-access \
--alb-ingress-access
4.2 Check the nodes
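The node group and its status can be listed with eksctl (the cluster name is a placeholder):

```shell
# List the node groups attached to the cluster
eksctl get nodegroup --cluster my-cluster
```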
4.3 Use kubectl to manage the EKS cluster
- Check nodes
kubectl get nodes
- Check Services
kubectl get services
5. Creating the first POD
- Write a .yaml manifest to create a POD
- Apply the .yaml file to create the POD
kubectl apply -f <manifest_name>.yaml
- Check the PODS in our EKS cluster with kubectl
kubectl get po
- View the POD on its node through the AWS Management Console
6. DELETE THE EKS CLUSTER
- Delete the NODE GROUP
eksctl delete nodegroup --cluster <clusterName> \
--name <nodegroupName>
- Delete the EKS Cluster
eksctl delete cluster --name <clusterName>
RESOURCES: https://github.com/stacksimplify/aws-eks-kubernetes-masterclass