Understanding application routing in Istio
Scott Coulton
Posted on March 1, 2019
In this blog, we will look at how to deploy two versions of the same application and route traffic between them by weight. This comes in super handy for testing a new version of your application on a small percentage of users, or more generally for full blue/green deployments.
First things first, we need a Kubernetes cluster. To build one, follow the docs or use the following commands. If you don't have an Azure account you can get a free trial here. Make sure you have run the az login command before any other commands.
az group create --name k8s --location eastus
az aks create --resource-group k8s \
--name k8s \
--generate-ssh-keys \
--kubernetes-version 1.12.5 \
--enable-rbac \
--node-vm-size Standard_DS2_v2
This will create our resource group and then our Kubernetes cluster. If you already have kubectl installed, skip the next step; if not, install the binary with the following command.
az aks install-cli
Now we set up kubectl with our cluster credentials using the following command.
az aks get-credentials --resource-group k8s --name k8s
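Before going further, it is worth a quick sanity check that kubectl is now pointing at the new cluster (az aks get-credentials names the context after the cluster, so it should be k8s):

```shell
# Sanity check: is kubectl talking to the new AKS cluster?
if command -v kubectl >/dev/null 2>&1; then
  kubectl config current-context   # should print "k8s"
  kubectl get nodes                # every node should report Ready
else
  echo "kubectl not found - install it first (see the step above)"
fi
```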
Now that we have our cluster up and running, we will deploy Istio via Helm. In this blog, we are not going to dive into Helm itself; if you want to learn more, please click the link above. To get you up and running quickly, I have created a script that installs Helm and Istio.
#!/bin/bash
if [[ "$OSTYPE" == "linux-gnu" ]]; then
    OS="linux"
    ARCH="linux-amd64"
elif [[ "$OSTYPE" == "darwin"* ]]; then
    OS="osx"
    ARCH="darwin-amd64"
fi
ISTIO_VERSION=1.0.4
HELM_VERSION=2.11.0
check_tiller () {
    POD=$(kubectl get pods --all-namespaces | grep tiller | awk '{print $2}' | head -n 1)
    # returns 0 only when the tiller pod reports a Running phase
    kubectl get pods -n kube-system "$POD" -o jsonpath="{.status.phase}" 2>/dev/null | grep -q Running
}
pre_reqs () {
    curl -sL "https://github.com/istio/istio/releases/download/$ISTIO_VERSION/istio-$ISTIO_VERSION-$OS.tar.gz" | tar xz
    if [ ! -f /usr/local/bin/istioctl ]; then
        echo "Installing istioctl binary"
        chmod +x ./istio-$ISTIO_VERSION/bin/istioctl
        sudo mv ./istio-$ISTIO_VERSION/bin/istioctl /usr/local/bin/istioctl
    fi
    if [ ! -f /usr/local/bin/helm ]; then
        echo "Installing helm binary"
        curl -sL "https://storage.googleapis.com/kubernetes-helm/helm-v$HELM_VERSION-$ARCH.tar.gz" | tar xz
        chmod +x $ARCH/helm
        sudo mv $ARCH/helm /usr/local/bin/
    fi
}
install_tiller () {
    echo "Checking if tiller is running"
    if check_tiller; then
        echo "Tiller is installed and running"
    else
        echo "Deploying tiller to the cluster"
        cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
EOF
        helm init --service-account tiller
    fi
    until check_tiller; do
        echo "Waiting for tiller to be ready"
        sleep 30
    done
}
install () {
    echo "Deploying istio"
    helm install istio-$ISTIO_VERSION/install/kubernetes/helm/istio --name istio --namespace istio-system \
        --set global.controlPlaneSecurityEnabled=true \
        --set grafana.enabled=true \
        --set tracing.enabled=true \
        --set kiali.enabled=true
    if [ -d istio-$ISTIO_VERSION ]; then
        rm -rf istio-$ISTIO_VERSION
    fi
}
pre_reqs
install_tiller
install
Make sure all the Istio pods are running with kubectl get pods --all-namespaces:
NAMESPACE NAME READY STATUS RESTARTS AGE
istio-system grafana-546d9997bb-9mmmn 1/1 Running 0 4m32s
istio-system istio-citadel-5c9544c886-hplv6 1/1 Running 0 4m31s
istio-system istio-egressgateway-6f9db5ff8d-9lgsd 1/1 Running 0 4m32s
istio-system istio-galley-8dcbb5f99-gf44n 1/1 Running 0 4m32s
istio-system istio-ingressgateway-6c6b9f9c55-mm82k 1/1 Running 0 4m32s
istio-system istio-pilot-74984d9cf5-49kj9 2/2 Running 0 4m31s
istio-system istio-policy-6dd4496b8c-p9s2h 2/2 Running 0 4m31s
istio-system istio-sidecar-injector-6bd4d9487c-hhwqb 1/1 Running 0 4m31s
istio-system istio-telemetry-7bb4ffcd9d-5f2bf 2/2 Running 0 4m31s
istio-system istio-tracing-6445d6dbbf-65mwt 1/1 Running 0 4m31s
istio-system kiali-ddf8fbbb-sjklt 1/1 Running 0 4m31s
istio-system prometheus-65d6f6b6c-8bgzm 1/1 Running 0 4m31s
kube-system coredns-754f947b4-2r565 1/1 Running 0 14m
kube-system coredns-754f947b4-d5pdf 1/1 Running 0 18m
kube-system coredns-autoscaler-6fcdb7d64-q245b 1/1 Running 0 18m
kube-system heapster-5fb7488d97-v45pc 2/2 Running 0 18m
kube-system kube-proxy-gpxvg 1/1 Running 0 14m
kube-system kube-proxy-rdrxl 1/1 Running 0 14m
kube-system kube-proxy-sc9q6 1/1 Running 0 14m
kube-system kube-svc-redirect-5t75d 2/2 Running 0 14m
kube-system kube-svc-redirect-6bzz8 2/2 Running 0 14m
kube-system kube-svc-redirect-jntkv 2/2 Running 0 14m
kube-system kubernetes-dashboard-847bb4ddc6-6vxn4 1/1 Running 1 18m
kube-system metrics-server-7b97f9cd9-x59p2 1/1 Running 0 18m
kube-system tiller-deploy-6f6fd74b68-rf2lf 1/1 Running 0 5m20s
kube-system tunnelfront-8576f7d885-tzhnw 1/1 Running 0 18m
Now we are ready to deploy our application: a simple one-page web app with two versions, v1 and v2. For this post, we are going to route the traffic equally between both. In a production environment you might instead canary only 5% of the traffic to the new version, to see how users like it.
The first thing we are going to do is mark the default namespace to have Istio automatically inject the envoy proxy.
kubectl label namespace default istio-injection=enabled
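You can confirm the label took effect with kubectl's -L flag, which prints the label as an extra column:

```shell
# The ISTIO-INJECTION column should read "enabled" for the default namespace
if command -v kubectl >/dev/null 2>&1; then
  kubectl get namespace default -L istio-injection
else
  echo "run against your cluster: kubectl get namespace default -L istio-injection"
fi
```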
In my opinion, if this were a production environment, I would create a dedicated namespace for the application and have the proxy auto-inject there.
The next thing to do is deploy our application. We will do this using a standard Kubernetes Deployment.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  ports:
  - port: 3000
    name: http
  selector:
    app: webapp
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapp
        version: v1
    spec:
      containers:
      - name: webapp
        image: scottyc/webapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapp
        version: v2
    spec:
      containers:
      - name: webapp
        image: scottyc/webapp:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
EOF
You will notice within this deployment we are deploying v1 and v2 of our application.
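Because the default namespace now has auto-injection enabled, each webapp pod should come up with two containers: the app itself plus the injected Envoy sidecar. A quick way to check:

```shell
# READY should show 2/2 for every webapp pod (app container + Envoy sidecar)
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -l app=webapp
else
  echo "run against your cluster: kubectl get pods -l app=webapp"
fi
```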
Once our deployment is up and running, we have to add a destination rule so Istio knows about our application. Istio addresses the application by its in-cluster DNS name, which is made up of the service name and namespace with .svc.cluster.local appended, in our case webapp.default.svc.cluster.local.
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: webapp
spec:
  host: webapp
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF
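The host: webapp above is shorthand for the standard Kubernetes service DNS name, which is what Istio resolves internally. A quick sketch of how that name is composed:

```shell
# Kubernetes service DNS: <service>.<namespace>.svc.cluster.local
SERVICE=webapp
NAMESPACE=default
echo "$SERVICE.$NAMESPACE.svc.cluster.local"   # webapp.default.svc.cluster.local
```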
Now we will create the Istio gateway. This defines the inbound port the application will listen on and the hosts we will route for; in our case port 80, with a * to match any host. We are also going to tie our gateway to the default Istio ingress gateway.
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: webapp-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
EOF
Lastly, we will create our Istio virtual service. This defines how the traffic is routed by weight. As mentioned before, we are splitting traffic 50/50, but have a play with the numbers here and change them around; it will give you a really good grasp of the mechanics under the hood.
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: webapp
spec:
  hosts:
  - "*"
  gateways:
  - webapp-gateway
  http:
  - route:
    - destination:
        host: webapp
        subset: v1
      weight: 50
    - destination:
        host: webapp
        subset: v2
      weight: 50
EOF
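For the 5% canary mentioned earlier, only the weights change. As a sketch, the http section of the virtual service would become (the weights must sum to 100):

```yaml
  http:
  - route:
    - destination:
        host: webapp
        subset: v1
      weight: 95
    - destination:
        host: webapp
        subset: v2
      weight: 5
```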
Now we have everything deployed and our application is accessible from the internet. To get the public IP of the Istio ingress gateway, run the following.
kubectl get svc istio-ingressgateway -n istio-system -o jsonpath="{.status.loadBalancer.ingress[0].ip}"
Now use that public IP in your browser and you should get one version of the application. Then open an incognito window with the same public IP address and you should get the other version. If you get the same version, just close the incognito window and try again.
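A less fiddly way to see the split is to hit the gateway in a loop and tally the responses. This sketch assumes the variable GATEWAY_IP holds the public IP from the previous command, and that each page contains the string v1 or v2 (true for this demo app, but check your own markup):

```shell
# Tally which version answered over 20 requests; with a 50/50 weight
# each version should win roughly half the time.
GATEWAY_IP="${GATEWAY_IP:-}"
if [ -n "$GATEWAY_IP" ]; then
  for i in $(seq 1 20); do
    curl -s "http://$GATEWAY_IP/"
  done | grep -o 'v[12]' | sort | uniq -c
else
  echo "set GATEWAY_IP to the ingress gateway's public IP first"
fi
```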
This was a basic example of what Istio can do. If you want to read more about Istio and its traffic routing configuration, the official docs are here.