Deploying Tanzu Kubernetes Grid Workload Cluster to Microsoft Azure
Dean
Posted on May 10, 2021
Following on from my previous blog post;
We will now continue and deploy our first Workload (Guest) Cluster into Azure, ready for our developers to deploy their applications into.
For this technical walkthrough, I am assuming you have followed the previous blog post and have the Tanzu CLI and Kubectl CLI installed, and a working management cluster.
As a reminder of the terminology;
- Tanzu Kubernetes Workload Clusters
Once you have deployed your management cluster, you can deploy additional CNCF-conformant Kubernetes clusters and manage their full lifecycle. These clusters are designed to run your application workloads and are managed via your management cluster. They can run different Kubernetes versions as required and use Antrea networking by default.
These types of clusters are also referred to as “workload” clusters, or “guest” clusters, with the latter typically referring to the Tanzu Kubernetes Grid Service running in vSphere.
Deploying a Guest Cluster
Log in to your Tanzu environment Management Cluster with the following:
tanzu login
First we need to create a cluster configuration YAML file. You can find a template here for Azure, or view the full available variables here.
Alternatively, we can use the existing YAML file in our ~/.tanzu/tkg/clusterconfigs folder used for the management cluster deployment and change a few settings to make it ready for our workload guest cluster.
This was my preferred method as it contained all my Azure settings already.
#Find existing cluster config file
ls -lh ~/.tanzu/tkg/clusterconfigs/
#Copy file to a new config
cp ~/.tanzu/tkg/clusterconfigs/6x4hl1wy8o.yaml tanzu-veducate-guest-azure.yaml
# Edit the file and change CLUSTER_NAME to a new, unique name for the workload cluster
# Workload cluster names must be 42 characters or less.
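To illustrate, these are the kinds of values I changed in the copied file; the settings below are example values rather than my exact configuration, and the remaining Azure variables (credentials, location, SSH key and so on) were left exactly as the management cluster deployment wrote them.
# Example edits in tanzu-veducate-guest-azure.yaml (illustrative values)
CLUSTER_NAME: tanzu-veducate-guest-azure
CLUSTER_PLAN: dev
CONTROL_PLANE_MACHINE_COUNT: 1
WORKER_MACHINE_COUNT: 1
AZURE_CONTROL_PLANE_MACHINE_TYPE: Standard_D2s_v3
AZURE_NODE_MACHINE_TYPE: Standard_D2s_v3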
Once we have our file, we can run the cluster create command, and sit back and wait for the cluster to be made available.
#Create cluster
tanzu cluster create --file tanzu-veducate-guest-azure.yaml
# Alternatively, you can leave the file unedited and just specify a new cluster name as part of the tanzu cluster create command
tanzu cluster create {new_cluster_name} --file {file_location}
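For example, using the copied file from earlier and a hypothetical cluster name (the name supplied on the command line overrides CLUSTER_NAME in the file):
# Example only - substitute your own cluster name and file path
tanzu cluster create tanzu-veducate-guest-azure-02 --file tanzu-veducate-guest-azure.yaml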
Below is the output of my cluster create command. It took 6 minutes and 50 seconds to create my basic guest workload cluster.
Next, we can validate and view the Workload Cluster details:
# Get available clusters once deployed
tanzu cluster list
# To include the management cluster in our list
tanzu cluster list --include-management-cluster
# To get more information about our specific cluster deployment
tanzu cluster get {name}
And here are the resources deployed to Microsoft Azure.
Getting your Workload (Guest) Cluster credentials
Now that we have successfully deployed our Workload Cluster, we need to be able to connect to it.
By default, this cluster's details will not be added to our kubeconfig contexts file.
But it is easy to rectify this by running the following commands:
# Running the below command adds the cluster context to your kubeconfig file locally
# Using the argument "--admin" ensures the administrator access context is added
tanzu cluster kubeconfig get --admin {cluster_name}
# View your kubeconfig file contexts
kubectl config get-contexts
# Change kubectl context to run commands on your workload cluster
kubectl config use-context {context_name}
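Putting that together, here is a rough example sequence assuming a workload cluster named tanzu-veducate-guest-azure; the admin context that gets added typically follows the {cluster_name}-admin@{cluster_name} naming pattern.
# Illustrative example - substitute your own cluster name
tanzu cluster kubeconfig get --admin tanzu-veducate-guest-azure
kubectl config use-context tanzu-veducate-guest-azure-admin@tanzu-veducate-guest-azure
# Quick check that we are now talking to the workload cluster
kubectl get nodes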
Finally, we can also export the kubeconfig context to a standalone file, for example, so your developers can access the cluster using their own authentication details.
tanzu cluster kubeconfig get {cluster_name} --export-file {file_name}
Alternatively, you can add the "--admin" argument so that the administrator credentials are embedded into the file and no external authentication provider is needed. This would be considered less secure.
And here is my file itself.
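For reference, both variations look something like the below; the cluster and file names here are placeholders of my own choosing.
# Export a kubeconfig that relies on your external authentication provider
tanzu cluster kubeconfig get tanzu-veducate-guest-azure --export-file tanzu-veducate-guest-azure-kubeconfig
# Export a standalone kubeconfig with the admin credentials embedded (less secure)
tanzu cluster kubeconfig get tanzu-veducate-guest-azure --admin --export-file tanzu-veducate-guest-azure-admin-kubeconfig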
You can find more details about connecting to clusters here.
These are the default locations where the Tanzu cluster context files are saved on your bootstrap machine.
- Management cluster contexts:
- ~/.kube-tkg/config
- Workload cluster contexts:
- ~/.kube/config
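If you want to inspect the management cluster contexts without switching files around, you can point kubectl at that separate file directly; a minimal example using the standard --kubeconfig flag:
# List the contexts stored in the management cluster kubeconfig file
kubectl config get-contexts --kubeconfig ~/.kube-tkg/config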
Scaling your Workload Cluster
Scaling your workload cluster is simple and quick, once again using the Tanzu CLI.
tanzu cluster scale {workload_cluster_name} --controlplane-machine-count {number} --worker-machine-count {number}
If you deployed a development (dev plan) cluster, as I did in this blog, scaling the control plane to 3 or more nodes will automatically initiate the high availability configuration.
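For example, to scale my cluster to a highly available control plane with additional workers (illustrative values and cluster name):
# Scale to 3 control plane nodes and 5 worker nodes
tanzu cluster scale tanzu-veducate-guest-azure --controlplane-machine-count 3 --worker-machine-count 5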
To view the scaling up or down progress, run:
tanzu cluster list
tanzu cluster get {name}
Summary
In these two blog posts we have deployed a new Tanzu Kubernetes Grid Management Cluster and Workload Cluster into Microsoft Azure using the Tanzu CLI (previously TKG CLI).
For the next steps and ideas where to head next, I recommend reading the official TKG documentation and looking at these articles:
- Deploy Tanzu Kubernetes Clusters with Different Kubernetes Versions
- Deploy a Cluster with a Non-Default CNI
- Create Persistent Volumes with Storage Classes
- Configure Tanzu Kubernetes Cluster Plans
- Tanzu Kubernetes Grid Logs and Troubleshooting
Regards