Solution for running load tests in Kubernetes, in order to reduce the complexity of launching a load test in Kubernetes and to avoid the costs associated with the usual cloud load-testing tools. The following frameworks / platforms are currently used in this project:
- Locust (https://locust.io/)
- Azure Kubernetes Service (AKS)
- PowerShell Core (Azure CLI)
- Docker >= 18 (Kubernetes enabled for local debug)
- Azure subscription
- PowerShell / PowerShell Core (cross-platform) >= 5
- Azure CLI >= 2.16
- Kubectl >= 1.19
- Locust >= 1.3.2
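To verify that the prerequisites are met, you can check the installed versions (a quick sanity check; these standard commands assume each tool is on your PATH):

$ pwsh --version
$ az --version
$ kubectl version --client
$ locust --version
$ docker --version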
git clone https://github.com/clement-joye/Kubernetes-Load-Testing.git
Regardless of the environment, the following parameters must be substituted accordingly in the config file:
In template section:
- image
- replicas
In server section (see Locust documentation for more info):
- host
- expect-workers -> must be set to the same value as replicas
- users
- spawn-rate
- run-time
In client section:
- host -> must be set to the same value as host in the server section
In script section:
- script -> path to your Locust Python script
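For orientation, these sections might be laid out roughly as in the sketch below. This is a hypothetical structure using the parameter names from the list above; the section names (e.g. "TemplateParameters", "ServerParameters") are assumptions, so refer to the actual config file in the repository for the exact layout:

{
  "TemplateParameters":
  {
    "image": "clementjoye/base-locust",
    "replicas": 4
  },
  "ServerParameters":
  {
    "host": "http://my-system-under-test",
    "expect-workers": 4,
    "users": 100,
    "spawn-rate": 10,
    "run-time": "5m"
  }
}

Note that expect-workers (4) matches replicas (4), as required above.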
For DEBUG purposes:
- Make sure that your kubectl context points to your local kubernetes.
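For example (the context name docker-desktop is an assumption; yours may differ depending on your local Kubernetes distribution):

$ kubectl config current-context
$ kubectl config use-context docker-desktop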
For running in AKS:
- Open config/config.TEST.json and replace accordingly with your own values:
- resourceGroup
- name
- nodeCount
- vmSize
- imageName
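If you are unsure which values are valid, the Azure CLI can list the available options (standard az commands, independent of this project's scripts; replace westeurope with your own region):

$ az group list --output table
$ az vm list-sizes --location westeurope --output table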
Make sure to use a Docker image that runs as the root user. A non-root user will result in the server not being able to write the report files to the persistent volume (pv) storage.
For example:
FROM locustio/locust
USER root
ENV PYTHONUNBUFFERED=1
ENTRYPOINT ["locust"]
Alternatively, feel free to use clementjoye/base-locust or to create your own Docker image with any other 3rd-party libraries.
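For example, building and publishing a custom image could look like this (my-registry/my-locust is a placeholder name):

$ docker build -t my-registry/my-locust:latest .
$ docker push my-registry/my-locust:latest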
$ cd ./powershell
$ cd ./Invoke-K8sTests.ps1 -Mode "All" -Environment "DEBUG"
This will run all 3 main stages of the PowerShell script: Create, Run, and Dispose, with the config.DEBUG.json configuration file.
For running a specific stage:
$ ./Invoke-K8sTests.ps1 -Mode "Create" -Environment "DEBUG"
or
$ ./Invoke-K8sTests.ps1 -Mode "Run" -Environment "DEBUG"
or
$ ./Invoke-K8sTests.ps1 -Mode "Dispose" -Environment "DEBUG"
As mentioned above:
The config.DEBUG.json is meant to be used for your local Kubernetes installation, and therefore requires no resourceGroup, name, nodeCount or vmSize.
The config.TEST.json is meant to be used with your cloud Kubernetes service (AKS in this project), and therefore the resourceGroup, name, nodeCount and vmSize values must be provided.
isLocal specifies whether the PowerShell script is used against your local environment or in AKS.
wait specifies whether the Create and Dispose stages are meant to be run asynchronously or not. This is useful for configuring the way the script will be run in your build or release pipeline to only run specific stages.
The rest of the JSON file is not meant to be edited, and is only there to leave it open for future development.
Example:
{
  "ClusterParameters":
  {
    "resourceGroup": "my-resourcegroup-name",
    "name": "my-aks-cluster-name",
    "nodeCount": 1,
    "vmSize": "Standard_DS2_v2",
    "wait": true,
    "isLocal": false
  }
}
When running the PowerShell script, three different stages will be executed:
Create stage:
- Retrieves the configuration,
- Instantiates data in memory,
- Creates the necessary YAML templates and Locust configurations to be deployed for our server and clients,
- Creates the cluster, or uses an existing one if already created (see the sketch below).
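With an AKS configuration, the cluster-creation step is roughly equivalent to the following Azure CLI call (an illustration only, using the values from the example configuration above; the actual script may pass additional flags):

$ az aks create --resource-group my-resourcegroup-name --name my-aks-cluster-name --node-count 1 --node-vm-size Standard_DS2_v2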
Run stage:
- Gets an existing cluster,
- Deploys the configmaps needed, the pv/pvc for storing the reports, the services, the server (pod) and the clients (deployment),
- Monitors the load test until it ends (pod status Completed),
- Exports the reports generated by the server back to the local host into the /reports folder (see the example commands below).
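The same signals can be inspected manually with kubectl (illustrative commands; the pod name locust-server and the /reports path inside the pod are assumptions):

$ kubectl get pods --watch
$ kubectl cp locust-server:/reports ./reports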
Dispose stage:
- Clears all resources in the cluster,
- Disposes of the cluster (if an AKS cluster is used), as sketched below.
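For an AKS cluster, the disposal corresponds roughly to the following call (an illustration; resource names taken from the example configuration above):

$ az aks delete --resource-group my-resourcegroup-name --name my-aks-cluster-name --yes --no-wait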
- A publication about the project is available on Medium.
Bug reports and pull requests are welcome on GitHub. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the code of conduct.
The code within this repository is available as open source under the terms of the GNU GPL v3 License.