Simple load testing with Locust and Kubernetes


Vinicius Carvalho

Posted on October 17, 2019


The code for the manifest script can be found here.

Every single time I need to run some load testing, I'm faced with analysis paralysis.

So after some searching I bumped into locust.io, a Python-based distributed load testing framework.

I'm not a Python developer, but Locust is simple enough that it made me consider dumping Gatling and giving it a try. FWIW: Gatling is extremely powerful, but also quite a bit more complex to work with.
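To give a sense of how little code a test needs, here is a minimal locustfile sketch, the kind of file you would keep in the test repository. The endpoints are hypothetical, and it uses the pre-1.0 Locust API that was current at the time (newer releases use HttpUser instead of HttpLocust):

# locustfile.py -- a hypothetical test hitting two endpoints of the target host
from locust import HttpLocust, TaskSet, task


class UserBehavior(TaskSet):
    @task(3)
    def index(self):
        # relative paths are resolved against the target host passed to Locust
        self.client.get("/")

    @task(1)
    def about(self):
        self.client.get("/about")


class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 1000  # wait 1-2 seconds between tasks
    max_wait = 2000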

OK, time to wrap this in a container and run it on Kubernetes. Luckily, someone already did that: Distributed load testing using Kubernetes

But I was not happy with that solution, mostly because it meant building a new container image for every test. The flow I had in mind was more like this:

  1. Tests are saved to GitHub
  2. Images are immutable
  3. Users create clusters and point to a specific test

The good thing is that achieving this is relatively easy: we can add an init container to the pod spec that pulls the tests from GitHub:



initContainers:
- name: gitbox
  image: viniciusccarvalho/alpine-git
  # clone the test repository into the shared "workdir" volume, under /data/tests
  command:
  - git
  - clone
  - "https://github.com/user/test-repository"
  - "/data/tests"
  volumeMounts:
  - name: workdir
    mountPath: "/data/"



So when the Locust container starts (it mounts the same workdir volume), all the test resources are available under the /data/ folder.

Automate all the things

To deploy a Locust cluster we will need four Kubernetes resources:

  1. A master deployment
  2. A worker deployment (with N replicas, pointing to the master service)
  3. A master service (ClusterIP, so workers can reach the master)
  4. A LoadBalancer service for the master (optional), for accessing the web endpoint from outside the cluster

All those files differ only in a few places (the repo, the test file, the target host, and the name), so I decided to create a script that generates them automatically.

You can pull it here.

If you run the script with -h, you get all the parameters needed to create the artifacts:



python locust-deploy.py -h
usage: locust-deploy.py [-h] -n NAME -t TARGET_HOST -r REPO -f TEST_FILE
                        [-s SIZE] [-o OUTPUT]

Locust cluster manifest generator

optional arguments:
  -h, --help            show this help message and exit
  -n NAME, --name NAME  Name of the test cluster artifacts
  -t TARGET_HOST, --target_host TARGET_HOST
                        Target host to run tests against
  -r REPO, --repo REPO  Github repository containing the tests
  -f TEST_FILE, --test_file TEST_FILE
                        The locust file to be used, relative to the github
                        root path
  -s SIZE, --size SIZE  Number of workers pods to be created. Default to 1
  -o OUTPUT, --output OUTPUT
                        Manifest files output


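Under the hood the script is just argparse plus simple template substitution into the four manifests. Here's a rough sketch of that approach (not the actual script; the templates/ directory layout and placeholder names are assumptions):

# locust-deploy.py (sketch) -- renders the four manifests from templates.
# Assumes a templates/ directory with master.yaml, worker.yaml,
# master-service.yaml and master-service-lb.yaml containing $name,
# $target_host, $repo, $test_file and $size placeholders.
import argparse
import os
from string import Template

TEMPLATES = ["master.yaml", "worker.yaml", "master-service.yaml", "master-service-lb.yaml"]


def main():
    parser = argparse.ArgumentParser(description="Locust cluster manifest generator")
    parser.add_argument("-n", "--name", required=True, help="Name of the test cluster artifacts")
    parser.add_argument("-t", "--target_host", required=True, help="Target host to run tests against")
    parser.add_argument("-r", "--repo", required=True, help="Github repository containing the tests")
    parser.add_argument("-f", "--test_file", required=True, help="Locust file, relative to the repo root")
    parser.add_argument("-s", "--size", type=int, default=1, help="Number of worker pods to be created")
    parser.add_argument("-o", "--output", default=".", help="Manifest files output directory")
    args = parser.parse_args()

    os.makedirs(args.output, exist_ok=True)
    for tmpl_name in TEMPLATES:
        with open(os.path.join("templates", tmpl_name)) as f:
            rendered = Template(f.read()).substitute(vars(args))
        # e.g. appengine-locust-master.yaml, appengine-locust-worker.yaml, ...
        out_path = os.path.join(args.output, "%s-locust-%s" % (args.name, tmpl_name))
        with open(out_path, "w") as f:
            f.write(rendered)


if __name__ == "__main__":
    main()

Using string.Template keeps the manifests readable as plain YAML with a handful of $placeholders, instead of generating them programmatically.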

And if you run it pointing to an output location, say ~/tmp/locust-deployments, you end up with:



 ls -la ~/tmp/locust-deployments
total 32
drwxr-xr-x   6 vinnyc  primarygroup   192 Oct 16 17:45 .
drwxr-xr-x  16 vinnyc  primarygroup   512 Oct 16 11:35 ..
-rw-r--r--   1 vinnyc  primarygroup   246 Oct 16 17:45 appengine-locust-master-service-lb.yaml
-rw-r--r--   1 vinnyc  primarygroup   352 Oct 16 17:45 appengine-locust-master-service.yaml
-rw-r--r--   1 vinnyc  primarygroup  1415 Oct 16 17:45 appengine-locust-master.yaml
-rw-r--r--   1 vinnyc  primarygroup  1285 Oct 16 17:45 appengine-locust-worker.yaml



After deploying those artifacts you will have a new Locust cluster, ready to be accessed on the LoadBalancer's public endpoint:
Locust Cluster

There you have it: a very simple, easy-to-deploy 20-node Locust swarm for you to use.

Happy coding
