- AKS Cluster, or you can have it created as well. To have it created, uncomment the line
  az aks create -g $1 -n $2
  in the file.
- Create AZURE_CREDENTIALS in GitHub secrets.
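The AZURE_CREDENTIALS secret usually holds the JSON produced when creating a service principal; a minimal sketch, assuming a Contributor-scoped principal (the principal name and subscription placeholder below are illustrative, not values from this repo):

```bash
# Create a service principal and print the JSON to store as the
# AZURE_CREDENTIALS GitHub secret (name and scope are placeholders).
az ad sp create-for-rbac \
  --name aks-perf-tests-gh \
  --role Contributor \
  --scopes /subscriptions/<subscription-id> \
  --sdk-auth
```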
To run the CPU and Storage tests, update and commit run-test-for in the format below:
<Resource-Group-Name> <AKS-Cluster-Name> <Kubernetes-Namespace-Name> <Nodepool-VM-SKU>
e.g. AKS kluster m128r Standard_M128
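Committing the example line above might look like this (a sketch; the branch name and commit message are illustrative):

```bash
# Write the four space-separated parameters and commit to trigger the workflow.
echo "AKS kluster m128r Standard_M128" > run-test-for
git add run-test-for
git commit -m "Run CPU and storage tests on Standard_M128"
git push origin main
```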
Once changes to this file are committed, the GitHub Actions workflow starts; it creates the nodepool and namespace, executes the tests, and finally deletes the nodepool and Kubernetes resources created for the tests.
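Conceptually, that lifecycle corresponds to commands along these lines (a hedged sketch; the nodepool name perftest and the exact flags are assumptions, not taken from the workflow file):

```bash
# $1=resource group, $2=AKS cluster, $3=namespace, $4=nodepool VM SKU
az aks nodepool add -g "$1" --cluster-name "$2" -n perftest -s "$4" -c 1
kubectl create namespace "$3"
# ...the CPU and storage tests run here...
kubectl delete namespace "$3"
az aks nodepool delete -g "$1" --cluster-name "$2" -n perftest
```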
Tests Included (a sketch of the fio targets follows the list):
- CPU testing using sysbench
- Sequential reads on the node's OS disk using fio
- Sequential writes on the node's OS disk using fio
- Sequential reads on the node's temp disk using fio
- Sequential writes on the node's temp disk using fio
- Sequential reads in a Debian pod on Azure Files Standard using fio
- Sequential writes in a Debian pod on Azure Files Standard using fio
- Sequential reads in a Debian pod on Azure Files Premium using fio
- Sequential writes in a Debian pod on Azure Files Premium using fio
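The storage tests above differ mainly in the path fio targets; a minimal sketch, assuming illustrative mount points (the real OS-disk, temp-disk, and Azure Files mount paths come from this repo's manifests):

```bash
# Illustrative targets only; substitute the mount points actually used by the pods.
for target in /perf-osdisk /mnt/perf-tempdisk /mnt/azurefiles-standard /mnt/azurefiles-premium; do
  fio --name=seqread  --directory="$target" --rw=read  --direct=1 --ioengine=libaio \
      --bs=8k --numjobs=8 --size=1G --runtime=600 --group_reporting
  fio --name=seqwrite --directory="$target" --rw=write --direct=1 --ioengine=libaio \
      --bs=8k --numjobs=8 --size=1G --runtime=600 --group_reporting
done
```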
Results: The results for each run can be found under the Artifacts section of each workflow run.
Configs:
To increase the Azure Files Standard and Premium storage capacity, update the section below in the Azure-Files-PVC file:
  resources:
    requests:
      storage: 128Gi
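For context, a complete Azure Files PVC typically looks like the sketch below; the PVC name, namespace variable, and storageClassName are assumptions rather than values from this repo:

```bash
# Apply a resized Azure Files Premium PVC (names and storage class are illustrative).
kubectl apply -n "$NAMESPACE" -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-files-premium-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi-premium
  resources:
    requests:
      storage: 256Gi
EOF
```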
To increase the OS disk size on the nodes, update the node pool parameter below:
--node-osdisk-size 128
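That parameter is applied when the node pool is created, for example (a sketch; the variable names, nodepool name, and node count are illustrative):

```bash
az aks nodepool add -g "$RESOURCE_GROUP" --cluster-name "$CLUSTER" -n perftest \
  -s Standard_M128 -c 1 --node-osdisk-size 256
```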
Note: Updates to the sections below require a Docker image build/push and corresponding updates to stat.yaml.
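A sketch of that rebuild step, assuming an Azure Container Registry; the registry, image name, and tag are placeholders, and stat.yaml would then need to reference the new tag:

```bash
# Rebuild the test image and push it to a registry the cluster can pull from.
docker build -t <registry>.azurecr.io/aks-perf-tests:v2 .
docker push <registry>.azurecr.io/aks-perf-tests:v2
# Then update the image: field in stat.yaml to point at the new tag.
```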
To execute the same test multiple times, update the loop below in the Tests file:
for i in {1..1}
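For example, to run each test five times and keep per-iteration output (a sketch; the log file naming is illustrative):

```bash
for i in {1..5}; do
  sysbench --test=cpu --cpu-max-prime=20000 run | tee "sysbench-cpu-run-$i.log"
done
```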
Below are the defaults that run as part of the tests.
Sysbench
sysbench --test=cpu --cpu-max-prime=20000 run
Fio
fio --name=seqread --rw=read --direct=1 --ioengine=libaio --bs=8k --numjobs=8 --size=1G --runtime=600 --group_reporting
To make --size dynamic based on the available node memory, update the --size parameter as shown below; e.g. here --size is twice the total memory on the node:
--size=$(($(grep MemTotal /proc/meminfo|awk '{print $2}') * 2))K
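As a worked example, assume a node whose /proc/meminfo reports MemTotal as 134217728 kB (128 GiB); the expression then expands to --size=268435456K, i.e. about 256 GiB of test data, and with --numjobs=8 that size applies to each job:

```bash
# Assumed node: MemTotal=134217728 kB (128 GiB), so --size becomes 268435456K (~256 GiB) per job.
mem_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
echo "fio will use --size=$((mem_kb * 2))K per job"
fio --name=seqread --rw=read --direct=1 --ioengine=libaio --bs=8k --numjobs=8 \
    --size=$((mem_kb * 2))K --runtime=600 --group_reporting
```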