Provisioning a Kubernetes Cluster Using Rancher in AWS EC2

cpiyush151

Piyush Chaudhari

Posted on March 16, 2023


In this blog, I will demonstrate how to provision a Kubernetes cluster on AWS EC2 using Rancher (RKE).

Rancher is an enterprise open source container management platform that works with orchestrators such as Kubernetes and Swarm, and it is very simple to use.

We will be performing two steps:

  1. Installing Rancher
  2. Creating a k8s cluster using Rancher

Installing Rancher

We have two options available to install Rancher:

  1. Single node installation
  2. High availability installation

Single Node Installation: Installs Rancher on a single Linux node; this is for development and testing purposes. See the official Rancher documentation for more details on single node installation.

High Availability Installation: Installs and configures Rancher on a Kubernetes cluster; this is what Rancher recommends for production. See the official Rancher documentation for more details on high availability installation.

Here in this article, we will perform a single node installation.

Launching an EC2 Instance and Installing Docker

Here, I have already launched an EC2 instance based on the Ubuntu Server 22.04 LTS (HVM) AMI.
I have created a security group for this Rancher EC2 instance and opened the necessary ports according to the official Rancher documentation.
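For reference, a single node Rancher installation typically needs inbound TCP 80 and 443 (plus 22 for SSH access). Here is a minimal sketch of opening those ports with the AWS CLI, assuming a hypothetical security group ID:


# Hypothetical security group ID; replace with your own
SG_ID=sg-0123456789abcdef0
for PORT in 22 80 443; do
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port "$PORT" --cidr 0.0.0.0/0
done
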

Now, I will start installing Docker on this instance.

Step 1: Update system repositories



sudo apt update




sudo apt upgrade



Step 2: Install required dependencies
After updating the system packages, the next step is to install the required dependencies for Docker:



sudo apt install lsb-release ca-certificates apt-transport-https software-properties-common -y



Step 3: Adding Docker repository to system sources
Adding the Docker repository to the system sources makes the Docker installation easier and provides faster updates.
First, import the Docker GPG key required for verifying packages from the Docker repository:



curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg



Then, execute the following command for adding the Docker repository to your Ubuntu 22.04 system sources list:



echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null



Step 4: Update system packages
After adding the Docker repository to the system sources, update the system packages again:



sudo apt update



Step 5: Install Docker on Ubuntu 22.04
If you have followed the previous steps carefully, your Ubuntu 22.04 system is now ready for the Docker installation:



sudo apt install docker-ce



Note that we are installing the "docker-ce" package rather than "docker.io", as docker-ce is the one shipped by the official Docker repository.

Enter "y" to permit the Docker installation to continue.

An error-free output indicates that Docker has been successfully installed on our Ubuntu 22.04 system.

Step 6: Verify Docker status
Now, execute the below "systemctl" command to verify whether Docker is currently active on your system:



sudo systemctl status docker


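If Docker is not already set to start on boot, you can enable and start it in one step (a standard systemd command, included here for completeness):


sudo systemctl enable --now docker
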

Executing the Docker Command Without Sudo (Optional)

By default, the docker command can only be run by the root user or by a user in the docker group, which is automatically created during Docker’s installation process.
If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:



sudo usermod -aG docker ${USER}



Here, I will add my default user ubuntu to the docker group.
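Note that the group change only applies to new login sessions; either log out and back in, or activate it in the current shell, for example:


# Apply the new docker group membership in the current shell
newgrp docker

# Verify that docker now works without sudo
docker ps
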

Bingo!! Our Docker installation is complete.

Installing Rancher on a Single Node Using Docker

Rancher can be installed by running a single Docker container.
In this installation scenario, we already installed Docker on a single Linux host EC2 instance, and now we will deploy Rancher on our host using a single Docker container.
When the Rancher server is deployed in the Docker container, a local Kubernetes cluster is installed within the container for Rancher to use. Because many features of Rancher run as deployments, and privileged mode is required to run containers within containers, you will need to install Rancher with the --privileged option.
Let's install Rancher using the self-signed certificate that it generates. This installation option omits the hassle of generating a certificate yourself.



docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -v /opt/rancher:/var/lib/rancher --privileged rancher/rancher:latest



Note: In the above command, we add the -v option to persist Rancher's data under /opt/rancher on the host.
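If you prefer a reproducible setup, you may want to pin a specific Rancher release instead of latest; the tag below is purely illustrative, so pick one from Rancher's releases page:


# Illustrative version tag; choose a release that fits your environment
docker run -d --restart=unless-stopped -p 80:80 -p 443:443 \
  -v /opt/rancher:/var/lib/rancher --privileged rancher/rancher:v2.7.1
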

Let's make sure the container is running with the command below:



docker ps -a




After the container is up and running, you can access the UI over HTTPS using the instance's public IP or DNS name; the first screen will ask you to set a password.

Let's perform the steps to retrieve the bootstrap password to get started.
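The login screen itself shows the command to run on the Docker host; it pulls the generated bootstrap password out of the container logs (the container ID below is a placeholder):


# Find the Rancher container ID
docker ps

# Extract the generated bootstrap password from the container logs
docker logs <container-id> 2>&1 | grep "Bootstrap Password:"
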

We can now set the new password. The default user is "admin".

After setting the password, you will see the Rancher home page.

Bingo!! our Rancher setup is ready..!!!

Creating a k8s cluster using Rancher

Once Rancher is up and running, it makes the deployment and management of Kubernetes clusters quite easy.

Before you start with this, make sure that you meet these requirements:

  • The host on which you run Rancher needs to communicate with all instances you deploy on EC2, in both directions. If you run Rancher locally, this will only work if the EC2 instances can reach your local Rancher installation.
  • You need to set up the correct IAM policies and roles. If you don't get this right, you will not be able to deploy the cluster.

Because this is the most important point, let's start with the IAM user and policies. I created a new IAM user named "piyush" which I will be using for deploying the cluster through Rancher.
Additionally, I have generated the Access Key and Secret Key that will be used to create the instances.
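If you prefer the CLI over the console, here is a hedged sketch of generating those keys (it assumes the AWS CLI is already configured with sufficient IAM permissions):


# Create an Access Key / Secret Key pair for the IAM user
aws iam create-access-key --user-name piyush
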

I’ve created three IAM policies:

  • piyush-rancher-controlplane-policy: This is the policy that will be used for the control plane
  • piyush-rancher-etcd-worker-policy: This is the policy that will be used for the etcd and worker nodes
  • piyush-rancher-passrole-policy: This is the policy that will be attached to the AWS user that will be registered in Rancher with the cloud credentials

Here is the piyush-rancher-controlplane-policy (replace [YOUR_AWS_ACCOUNT_ID] with your AWS account ID):



{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ec2:AttachVolume",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:DescribeInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "ec2:DescribeRegions",
                "elasticloadbalancing:DescribeLoadBalancerPolicyTypes",
                "elasticloadbalancing:SetWebAcl",
                "elasticloadbalancing:DescribeLoadBalancers",
                "ec2:DeleteVolume",
                "elasticloadbalancing:DescribeListeners",
                "autoscaling:DescribeAutoScalingGroups",
                "ec2:CreateRoute",
                "ec2:CreateSecurityGroup",
                "ec2:DescribeVolumes",
                "elasticloadbalancing:DescribeLoadBalancerPolicies",
                "kms:DescribeKey",
                "elasticloadbalancing:DescribeListenerCertificates",
                "elasticloadbalancing:DescribeInstanceHealth",
                "ec2:ModifyInstanceAttribute",
                "ec2:DescribeRouteTables",
                "elasticloadbalancing:DescribeSSLPolicies",
                "ec2:DetachVolume",
                "ec2:ModifyVolume",
                "ec2:CreateTags",
                "autoscaling:DescribeTags",
                "ec2:DeleteRoute",
                "elasticloadbalancing:*",
                "ec2:DescribeSecurityGroups",
                "ec2:CreateVolume",
                "elasticloadbalancing:DescribeLoadBalancerAttributes",
                "ec2:RevokeSecurityGroupIngress",
                "iam:CreateServiceLinkedRole",
                "elasticloadbalancing:DescribeTargetGroupAttributes",
                "ec2:DescribeVpcs",
                "elasticloadbalancing:DescribeAccountLimits",
                "ec2:DeleteSecurityGroup",
                "elasticloadbalancing:DescribeTargetHealth",
                "elasticloadbalancing:DescribeTargetGroups",
                "elasticloadbalancing:DescribeRules",
                "ec2:DescribeSubnets"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "elasticloadbalancing:*",
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "elasticloadbalancing:*",
            "Resource": "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:loadbalancer/*"
        },
        {
            "Sid": "VisualEditor3",
            "Effect": "Allow",
            "Action": "elasticloadbalancing:*",
            "Resource": [
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:targetgroup/*/*",
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:listener-rule/app/*/*/*/*",
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:listener-rule/net/*/*/*/*",
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:listener/net/*/*/*",
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:listener/app/*/*/*",
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:loadbalancer/net/*/*",
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:loadbalancer/app/*/*"
            ]
        },
        {
            "Sid": "VisualEditor4",
            "Effect": "Allow",
            "Action": "elasticloadbalancing:*",
            "Resource": [
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:targetgroup/*/*",
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:listener-rule/app/*/*/*/*",
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:listener-rule/net/*/*/*/*",
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:listener/net/*/*/*",
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:listener/app/*/*/*",
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:loadbalancer/net/*/*",
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:loadbalancer/app/*/*"
            ]
        },
        {
            "Sid": "VisualEditor5",
            "Effect": "Allow",
            "Action": "elasticloadbalancing:*",
            "Resource": [
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:loadbalancer/app/*/*",
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:loadbalancer/net/*/*",
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:targetgroup/*/*",
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:listener-rule/app/*/*/*/*",
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:listener-rule/net/*/*/*/*",
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:listener/net/*/*/*",
                "arn:aws:elasticloadbalancing:*:[YOUR_AWS_ACCOUNT_ID]:listener/app/*/*/*"
            ]
        }
    ]
}


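As a side note, the same policy can be registered from the CLI; a minimal sketch, assuming the JSON above is saved locally as controlplane-policy.json:


aws iam create-policy \
  --policy-name piyush-rancher-controlplane-policy \
  --policy-document file://controlplane-policy.json
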

Here is the piyush-rancher-etcd-worker-policy (replace [YOUR_AWS_ACCOUNT_ID] with your AWS account ID):



{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "ec2:*",
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "secretsmanager:*",
            "Resource": "arn:aws:secretsmanager:*:[YOUR_AWS_ACCOUNT_ID]:secret:*"
        }
    ]
}



Finally, here is the content of piyush-rancher-passrole-policy (here you need to reference the two IAM roles that we will create in a moment):



{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ec2:ModifyInstanceMetadataOptions",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:Describe*",
                "ec2:ImportKeyPair",
                "ec2:CreateKeyPair",
                "ec2:CreateSecurityGroup",
                "ec2:CreateTags",
                "eks:*",
                "ec2:DeleteKeyPair"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "ec2:RunInstances",
            "Resource": [
                "arn:aws:ec2:eu-central-1::image/ami-*",
                "arn:aws:ec2:eu-central-1:[YOUR_AWS_ACCOUNT_ID]:security-group/*",
                "arn:aws:ec2:eu-central-1:[YOUR_AWS_ACCOUNT_ID]:subnet/*",
                "arn:aws:ec2:eu-central-1:[YOUR_AWS_ACCOUNT_ID]:network-interface/*",
                "arn:aws:iam::[YOUR_AWS_ACCOUNT_ID]:role/piyush-rancher-controlpane-role",
                "arn:aws:iam::[YOUR_AWS_ACCOUNT_ID]:role/piyush-rancher-etcd-worker-role",
                "arn:aws:ec2:eu-central-1:[YOUR_AWS_ACCOUNT_ID]:instance/*",
                "arn:aws:ec2:eu-central-1:[YOUR_AWS_ACCOUNT_ID]:volume/*",
                "arn:aws:ec2:eu-central-1:[YOUR_AWS_ACCOUNT_ID]:placement-group/*",
                "arn:aws:ec2:eu-central-1:[YOUR_AWS_ACCOUNT_ID]:key-pair/*"
            ]
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": [
                "ec2:RebootInstances",
                "ec2:TerminateInstances",
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": "arn:aws:ec2:eu-central-1:[YOUR_AWS_ACCOUNT_ID]:instance/*"
        },
        {
            "Sid": "VisualEditor3",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": [
                "arn:aws:iam::[YOUR_AWS_ACCOUNT_ID]:role/piyush-rancher-controlpane-role",
                "arn:aws:iam::[YOUR_AWS_ACCOUNT_ID]:role/piyush-rancher-etcd-worker-role"
            ]
        }
    ]
}



Once you have that ready, create two IAM roles and attach the matching policies to each (the role names below are the ones referenced in the passrole policy). This is required because you need to specify these roles later when you set up the node templates in Rancher (a CLI sketch follows the list):
piyush-rancher-controlplane-role (attach piyush-rancher-controlplane-policy)
piyush-rancher-etcd-worker-role (attach piyush-rancher-etcd-worker-policy)
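For reference, here is a hedged CLI sketch of wiring up the control plane role (the same pattern applies to the etcd/worker role). The node template's "IAM Instance Profile Name" field expects an instance profile; the AWS console creates one automatically when you create an EC2 role, but the CLI does not, hence the extra two commands. The trust policy file name is an assumption:


# ec2-trust.json lets EC2 instances assume the role:
# {"Version":"2012-10-17","Statement":[{"Effect":"Allow",
#   "Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}
aws iam create-role --role-name piyush-rancher-controlplane-role \
  --assume-role-policy-document file://ec2-trust.json

# Attach the matching policy created earlier
aws iam attach-role-policy --role-name piyush-rancher-controlplane-role \
  --policy-arn arn:aws:iam::[YOUR_AWS_ACCOUNT_ID]:policy/piyush-rancher-controlplane-policy

# Expose the role as an instance profile of the same name
aws iam create-instance-profile --instance-profile-name piyush-rancher-controlplane-role
aws iam add-role-to-instance-profile \
  --instance-profile-name piyush-rancher-controlplane-role \
  --role-name piyush-rancher-controlplane-role
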


The final step for the permissions in AWS is to attach the last policy (piyush-rancher-passrole-policy) to the AWS IAM user "piyush" which I will be using for deploying the cluster.
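Again, the console click-through can be replaced with a single CLI call (the policy ARN is assumed from the name above):


aws iam attach-user-policy --user-name piyush \
  --policy-arn arn:aws:iam::[YOUR_AWS_ACCOUNT_ID]:policy/piyush-rancher-passrole-policy
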

First, you will set up your EC2 cloud credentials in Rancher. Then you will use your cloud credentials to create a node template, which Rancher will use to provision new nodes in EC2.

Then you will create an EC2 cluster in Rancher, and when configuring the new cluster, you will define node pools for it. Each node pool will have a Kubernetes role of etcd, controlplane, or worker. Rancher will install RKE Kubernetes on the new nodes, and it will set up each node with the Kubernetes role defined by the node pool.

The steps to create a cluster differ based on your Rancher version.

  1. Create your cloud credentials
  2. Create a node template with your cloud credentials and information from EC2
  3. Create a cluster with node pools using the node template

1. Create your cloud credentials:
In the Rancher UI, the left panel contains Cluster Management.
Go to Cluster Management and click Cloud Credentials.

Click Create Cloud Credential and select Amazon.

Enter a name for the cloud credential.
In the Region field, select the AWS region where your cluster nodes will be located.
Enter your AWS EC2 Access Key and Secret Key.
Click Create.

2. Create a node template with your cloud credentials and information from EC2

Creating a node template for EC2 will allow Rancher to provision new nodes in EC2. Node templates can be reused for other clusters.

In the Rancher UI, under Cluster Management, click Node Templates.
Click Add Template.

Fill out a node template for EC2. For Account Access, select the AWS region and the cloud credentials added previously. Click Next: Authenticate and configure nodes.

Select the appropriate AZ and VPC. Click Next: Select a security group.

Let's choose Standard, which will automatically create a security group for this demo. Click Next: Set Instance options.

This is the most important section. The AMI ID you see is the latest Ubuntu 20.04 AMI, and the SSH user for that AMI is "ubuntu". If you want to go with a Debian, CentOS, or other AMI, you need to adjust those values (the user for Debian would be "admin", for CentOS "centos"). The "IAM Instance Profile Name" is the role you created above, and this is important. Here you see "piyush-rancher-controlplane-role" because this will be the node template for the control plane.

For the rest, keep all defaults and click Create. This will create a control plane node template in Rancher.

Following the same steps, create two more node templates: one for the etcd nodes and one for the worker nodes (each with its corresponding IAM instance profile). I have already created those.

Now we are ready to deploy a brand new Kubernetes cluster on top of EC2:
On the Clusters page, click Create and select EC2.

Here you reference the node templates. Make sure you use the control plane template for the control plane, and the other templates for the etcd and worker nodes.

Go with the defaults and select "AWS" as the cloud provider.

Before you press “Create”, it is a good idea to log into your Rancher host and tail the logs of the Rancher container. If anything goes wrong it shows up there.
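A simple way to do that from the Rancher host (the container ID is a placeholder):


# Follow the Rancher container logs while the cluster is provisioning
docker logs -f <container-id>
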

Once you have started the cluster creation, you can also monitor the EC2 console and watch the EC2 instances coming up.

Check the AWS console for the progress of the launching EC2 instances.

After a short while, the cluster state turns "Active".

The cluster is fully ready, and you can drill into the cluster's Explore section.

It is also possible to get a kubectl shell via the Rancher UI. Let's open the kubectl shell and run a quick sanity check:



kubectl get all



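A couple of additional checks worth running from the same shell, for example:


# List the cluster nodes with their roles and IPs
kubectl get nodes -o wide

# Confirm all system pods are healthy
kubectl get pods -A
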

Bingo!!!! We are now able to manage our k8s cluster based on EC2 instances provisioned from Rancher UI!!!!!!!!!!! :-)

OK, folks, that’s it for this post. Have a nice day guys…… Stay tuned…..!!!!!

Don’t forget to like & share this post on social networks!!! I will keep on updating this blog. Please do follow me on "LinkedIn" & my other blogs -
cpiyush151 - Wordpress
cpiyush151 - Hashnode
