Step by step guide to chaos testing using Litmus Chaos toolkit

Posted on November 4, 2021

by Sunit Parekh & Prashanth Ramakrishnan

In this article, we describe how to perform chaos testing using Litmus (a popular chaos testing tool).

There are 4 major steps for running any chaos test.

  1. The first step is defining a steady state, i.e. what a healthy system looks like. For a web application this might mean the home page returns a success response; for a web service, that its health endpoint reports healthy.
  2. The second step is actually introducing chaos, simulating a failure such as a network bottleneck, a disk fill, etc.
  3. The third step is verifying the steady state, i.e. checking whether the system is still working as expected.
  4. The fourth step, and the most important one (especially if you are running in production), is rolling back the chaos we caused. A minimal sketch of this flow follows the diagram below.

chaos testing as 4 steps
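To make the flow concrete, here is a minimal sketch of the four steps as a shell script, assuming the Bookinfo setup and the network-delay-engine.yaml used later in this article (the check_steady_state helper is illustrative):

# illustrative sketch of the 4-step chaos flow
check_steady_state() {
  # steady state: product page returns HTTP 200
  code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:9080/productpage)
  [ "$code" = "200" ]
}

check_steady_state || exit 1                # 1. confirm steady state first
kubectl apply -f network-delay-engine.yaml  # 2. introduce chaos
check_steady_state                          # 3. verify steady state under chaos
# 4. roll back: this experiment auto-reverts after TOTAL_CHAOS_DURATION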


Step 0: Kubernetes Cluster with Application running & Monitoring in place

To learn about chaos testing we first need an application under test. For this demo we deploy the BookInfo application on a single-node Kubernetes cluster, along with the Istio service mesh and Prometheus, Grafana, Jaeger & Kiali for monitoring.

0.1) Setup Kubernetes Cluster: Get your Kubernetes cluster up and running with Docker as the container runtime. To keep it simple, install Docker for Desktop and enable the Kubernetes cluster that ships with it. Alternatively, you can use Minikube, k3d or kind to set up a local k8s cluster.
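For example, any of the following should give you a local single-node cluster (cluster names here are just placeholders):

minikube start
# or
kind create cluster --name chaos-demo
# or
k3d cluster create chaos-demo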

0.2) Setup Monitoring: Next, set up Istio along with all the monitoring tools: Prometheus, Grafana, Jaeger & Kiali.

Install Istio:

istioctl install --set profile=demo -y

Install monitoring:

# set ISTIO_RELEASE_URL to specific istio release version 
export ISTIO_RELEASE_URL=https://raw.githubusercontent.com/istio/istio/release-1.11/

kubectl apply -f $ISTIO_RELEASE_URL/samples/addons/jaeger.yaml
kubectl apply -f $ISTIO_RELEASE_URL/samples/addons/prometheus.yaml
kubectl apply -f $ISTIO_RELEASE_URL/samples/addons/grafana.yaml
kubectl apply -f $ISTIO_RELEASE_URL/samples/addons/kiali.yaml

Istio and monitoring pods installed in the istio-system namespace of the k8s cluster
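A quick way to confirm everything came up:

kubectl get pods -n istio-system
# expect istiod and the ingress/egress gateways, plus grafana, jaeger, kiali and prometheus pods in Running state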

0.3) Install the Bookinfo application with the Istio service mesh enabled and the envoy sidecar injected.

Bookinfo application overview with 4 microservices and sidecar proxy injected

Download the BookInfo yaml from the Istio website: https://raw.githubusercontent.com/istio/istio/release-1.11/samples/bookinfo/platform/kube/bookinfo.yaml

Install Bookinfo with the envoy proxy injected as a sidecar container into the default namespace:

istioctl kube-inject -f bookinfo.yaml | kubectl apply -f -

Bookinfo application pods running in default namespace
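As an aside, instead of istioctl kube-inject you can let Istio inject the sidecar automatically by labelling the namespace before deploying; the result is equivalent:

kubectl label namespace default istio-injection=enabled
kubectl apply -f bookinfo.yaml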

0.4) Verify the application is running and deployed with the envoy proxy as a sidecar.
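Each bookinfo pod should report two ready containers, the application and the envoy sidecar:

kubectl get pods
# READY column should show 2/2 for every bookinfo pod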

Port-forward the productpage service and check in your browser.

kubectl port-forward service/productpage 9080:9080

If you open localhost:9080/productpage in your web browser, you should see something like this:

Product page of Bookinfo application

We now have the application up and running inside the k8s cluster.


Step 1: Define steady state

The steady state for the bookinfo application is that the product page keeps rendering without any issues, i.e. http://localhost:9080/productpage?u=normal returns a 200 http status code under continuous load.
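A simple way to keep an eye on this condition from a terminal (a hypothetical watcher, run alongside the load generated below) is:

# print the productpage status code every 2 seconds
while true; do
  curl -s -o /dev/null -w '%{http_code}\n' 'http://localhost:9080/productpage?u=normal'
  sleep 2
done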

To check the steady state condition, let's first generate continuous load on the bookinfo application using a command line tool called hey, and monitor it.

hey -c 2 -z 200s http://localhost:9080/productpage 

The above command generates continuous load on the product page for 200 seconds with 2 concurrent workers.
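If you don't have hey installed, it is available via Homebrew or go install:

brew install hey
# or, with Go 1.17+
go install github.com/rakyll/hey@latest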

Here is a quick view of the Kiali dashboard showing all pods healthy, with the reviews service responding within 100-200 ms; it internally calls the ratings service, which responds in 50-60 ms on average.

Bookinfo application — Kiali Dashboard


Step 2: Introduce chaos

All set, now it's time to introduce chaos into the system. Let's first understand Litmus core concepts before we jump into execution.

2.1) Install Litmus: The first step is to install the Litmus operator into the Kubernetes cluster where we want to introduce chaos. The Litmus operator adds 3 custom resource definitions (CRDs) related to Litmus chaos to the k8s cluster. You can also use Helm charts to install the Litmus operator and its web UI; however, for simplicity we go with the plain yaml and install only the operator.
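For reference, the Helm route looks roughly like this (chart and repo names are from the Litmus Helm charts project; the namespace is a placeholder):

helm repo add litmuschaos https://litmuschaos.github.io/litmus-helm/
helm install chaos litmuschaos/litmus --namespace litmus --create-namespace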

kubectl apply -f https://litmuschaos.github.io/litmus/litmus-operator-v2.2.0.yaml

List of all CRDs added as part of installing Litmus operator
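You can list them with:

kubectl get crds | grep litmuschaos.io
# expect chaosengines.litmuschaos.io, chaosexperiments.litmuschaos.io and chaosresults.litmuschaos.io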

2.2) Setup Experiment: Next we need to add the specific experiment to the namespace where we want to introduce chaos. All available chaos experiments are listed here. Let's add the pod-network-latency chaos experiment to the default namespace where our bookinfo application is installed.

kubectl apply -f "https://hub.litmuschaos.io/api/chaos/2.2.0?file=charts/generic/pod-network-latency/experiment.yaml"

pod-network-latency experiment added in default namespace
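Verify the experiment landed in the namespace:

kubectl get chaosexperiments -n default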

2.3) Apply Permissions: Now we need to grant RBAC permissions to allow the chaos experiment to run.

kubectl apply -f "https://hub.litmuschaos.io/api/chaos/2.2.0?file=charts/generic/pod-network-latency/rbac.yaml"

pod-network-latency-sa added in default namespace
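The rbac.yaml creates the pod-network-latency-sa ServiceAccount along with a Role and RoleBinding in the default namespace; you can confirm with:

kubectl get serviceaccount,role,rolebinding -n default | grep pod-network-latency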

2.4) Run Chaos: Using the ChaosEngine custom resource definition, we inject the network delay chaos. The following yaml, network-delay-engine.yaml, of kind ChaosEngine introduces a network delay of 2 seconds on the ratings deployment for about 100 seconds, affecting all pods in the deployment. A delay in the ratings service response indirectly delays the reviews service, which in turn delays the product page.

network-delay-engine.yaml

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
 name: bookinfo-network-delay
 namespace: default
spec:  
 jobCleanUpPolicy: 'retain'  # It can be delete/retain
 annotationCheck: 'false'
 engineState: 'active'
 monitoring: false
 appinfo:
   appns: 'default'
   applabel: 'app=ratings'   # application label matching
   appkind: 'deployment'     # k8s object type
 chaosServiceAccount: pod-network-latency-sa
 experiments:
   - name: pod-network-latency
     spec:
       components:
         env:
           - name: NETWORK_INTERFACE
             value: 'eth0'   # default interface used by pod   
           - name: NETWORK_LATENCY
             value: '2000'   # delay in milliseconds
           - name: TOTAL_CHAOS_DURATION
             value: '100'    # chaos duration in seconds
           - name: PODS_AFFECTED_PERC
              value: '100'    # percentage of pods to affect

Please check the comments in the above yaml to learn more about the different configuration options. Details about each option can be found in the documentation provided by the Litmus toolkit here.

kubectl apply -f network-delay-engine.yaml

Here is a pod watch of the default namespace; notice the bookinfo-network-delay-runner, pod-network-latency-rp2aly-vg4xt and pod-network-latency-helper-hhpofr pods doing the job of introducing the network delay for the ratings service.
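The watch itself is plain kubectl, run in a separate terminal:

kubectl get pods -n default -w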

pods status during chaos testing from start to end

2.5) Observe Result: Use the Kubernetes describe command to see the output of the chaos run from the previous steps. First, notice the increased response time of the reviews service on Kiali.

Kiali dashboard showing 2s+ response time for Reviews service

Now let's describe the ChaosEngine and ChaosResult custom objects to see the result.
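With the names used by the engine above, that is:

kubectl describe chaosengine bookinfo-network-delay -n default
kubectl describe chaosresult bookinfo-network-delay-pod-network-latency -n default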

Chaos custom resource lookups

Observe events using describe on chaosengine custom resource bookinfo-network-delay

Events stream using describe from chaosengine custom resource bookinfo-network-delay

Observe events using describe on chaosresult custom resource bookinfo-network-delay-pod-network-latency

Events stream using describe from chaosresult custom resource bookinfo-network-delay-pod-network-latency

Repeat the chaos test with the delay increased to 6 seconds (6000 ms) by changing NETWORK_LATENCY to '6000' in network-delay-engine.yaml and repeating steps 2.4 and 2.5.
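Since a completed ChaosEngine does not re-run on its own, a simple pattern (which may vary slightly by Litmus version) is to delete and re-create it:

kubectl delete chaosengine bookinfo-network-delay -n default
# edit network-delay-engine.yaml: NETWORK_LATENCY '2000' -> '6000'
kubectl apply -f network-delay-engine.yaml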

Reviews service turning red when the 6s delay is introduced by the ratings service

Product page loading with the error handled gracefully, not showing ratings information


Step 3: Verify steady state

During the chaos test we continuously accessed the system and observed 200 responses from the product page.

hey command output showing all productpage responses returning status code 200

We also observed the 2 sec delay in the reviews service response time on the Kiali dashboard.

Kiali dashboard with 2s delay on Reviews service


Step 4: Rollback chaos

In our case of network delay, the chaos duration was set to 100 sec, so the chaos stopped automatically after 100 seconds and there is nothing to roll back manually. Just observe that the system is back to normal.
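For longer-running experiments, Litmus also lets you abort a run early by flipping the engine state to stop:

kubectl patch chaosengine bookinfo-network-delay -n default --type merge -p '{"spec":{"engineState":"stop"}}'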

On the Kiali dashboard we see everything returning to normal, with the reviews response time back under 100 ms and the ratings response time in the 50-60 ms range.

Kiali dashboard with Reviews service responses in double digit ms times (all back to normal)


Q&A

Can I use the Litmus tool with any other container runtime, like containerd?

Yes. The steps in this article assume Docker as the container runtime; if you use another runtime such as containerd, please read the configuration notes on the Litmus website for the different settings needed to run chaos experiments.
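For example, the pod-network-latency experiment takes runtime-related env vars; for containerd they would look something like this (the socket path can differ per distribution):

           - name: CONTAINER_RUNTIME
             value: 'containerd'
           - name: SOCKET_PATH
             value: '/run/containerd/containerd.sock'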

Where can I find a list of all chaos experiments available?

Litmus has predefined chaos experiments available, which can be found here, but that does not limit us: we can define our own experiments and run them in our own environments.

How do I debug issues, if any?

While running chaos using the ChaosEngine CRD, set jobCleanUpPolicy: 'retain' to keep the pods in Completed state (rather than having them deleted after the chaos run), which makes it possible to look at the pods' logs.
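With the pods retained, you can pull their logs directly (pod names below are from the run earlier in this article; yours will differ):

kubectl get pods -n default | grep pod-network-latency
kubectl logs bookinfo-network-delay-runner -n default
kubectl logs pod-network-latency-rp2aly-vg4xt -n default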


References

The above commands and code are checked into a public repository on GitHub.

https://github.com/sunitparekh/chaos-engg-litmus

Watch all of the above in action in the XConf 2021 online conference talk:

https://www.youtube.com/watch?v=6Lz_0uNaVMA&list=PL8f-F_Zx8XA-kMENPeMMXT9KKo-x4F_NO&index=3

Learn more with the hands-on tutorial on the Litmus site:

https://docs.litmuschaos.io/tutorials/

Online hands-on with the Litmus tool on Katacoda:

https://katacoda.com/litmusbot/scenarios/getting-started-with-litmus
