AKS-stats-gathering

Run automated CPU (sysbench) and storage (OS disk, temp disk, Azure Files Standard and Premium, using fio) performance tests on AKS nodes and pods.


Prerequisites

  1. An existing AKS cluster. Alternatively, the workflow can create one for you: uncomment the az aks create -g $1 -n $3 line in the file.
  2. Create an AZURE_CREDENTIALS secret in the repository's GitHub secrets (see the sketch below).
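
A minimal sketch for step 2, assuming a placeholder service principal name and subscription ID; the JSON emitted by the command is what goes into the AZURE_CREDENTIALS secret:

# Placeholder name and subscription ID below; paste the full JSON output
# into a repository secret named AZURE_CREDENTIALS.
az ad sp create-for-rbac --name aks-stats-sp \
  --role contributor \
  --scopes /subscriptions/<subscription-id> \
  --sdk-auth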

CPU and Storage Tests on AKS

To run the CPU and storage tests, update and commit the run-test-for file in the format below:

<Resource-Group-Name> <AKS-Cluster-Name> <Kubernetes-Namespace-Name> <Nodepool-VM-SKU>

e.g. AKS kluster m128r Standard_M128
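
For instance, using the sample values above, kicking off a run could look like:

# Resource group AKS, cluster kluster, namespace m128r, VM SKU Standard_M128
echo "AKS kluster m128r Standard_M128" > run-test-for
git add run-test-for
git commit -m "Run perf tests on Standard_M128"
git push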

Once changes to this file are committed, the GitHub Actions workflow starts: it creates the nodepool and namespace and executes the tests. Finally, it deletes the nodepool and the Kubernetes resources created for the tests.

Tests Included:

  1. CPU test using sysbench
  2. Sequential reads on the node's OS disk using fio (see the sketch after this list)
  3. Sequential writes on the node's OS disk using fio
  4. Sequential reads on the node's temp disk using fio
  5. Sequential writes on the node's temp disk using fio
  6. Sequential reads from a Debian pod against Azure Files Standard using fio
  7. Sequential writes from a Debian pod against Azure Files Standard using fio
  8. Sequential reads from a Debian pod against Azure Files Premium using fio
  9. Sequential writes from a Debian pod against Azure Files Premium using fio
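
As a rough sketch of what the node disk tests look like (the target paths here are assumptions, not taken from the repo: a directory on the root filesystem stands in for the OS disk, and on Ubuntu-based AKS nodes the temporary disk is typically mounted at /mnt):

# Assumed paths: /var/fio-test on the OS disk, /mnt for the temp disk
mkdir -p /var/fio-test
fio --name=osread --directory=/var/fio-test --rw=read --direct=1 --ioengine=libaio \
    --bs=8k --numjobs=8 --size=1G --runtime=600 --group_reporting
fio --name=tempread --directory=/mnt --rw=read --direct=1 --ioengine=libaio \
    --bs=8k --numjobs=8 --size=1G --runtime=600 --group_reporting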

Results: The results for each run can be found under the Artifacts section of the corresponding GitHub Actions run.

Configs:

To increase the Azure Files Standard and Premium storage capacity, update the section below in Azure-Files-PVC:

 resources:
    requests:
      storage: 128Gi
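
Once the workflow has provisioned the volumes, the new capacity can be verified with kubectl (the namespace here is the sample one from above):

# m128r is the sample namespace from the run-test-for example
kubectl get pvc -n m128r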

To increase the OS disk size on the nodes, update the node pool parameter below:

--node-osdisk-size 128 
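
For context, this parameter belongs to the az aks nodepool add call that creates the test nodepool; a minimal sketch with placeholder values (the pool name and count are illustrative, not the repo's actual script):

# Placeholder values; --node-osdisk-size is the flag to bump
az aks nodepool add \
  --resource-group <resource-group> \
  --cluster-name <aks-cluster> \
  --name perfpool \
  --node-vm-size Standard_M128 \
  --node-count 1 \
  --node-osdisk-size 256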

Note: Updates to the sections below require a Docker image build/push and corresponding updates to stat.yaml.

To execute the same test multiple times, update the loop below in the Tests file:

for i in {1..1}
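
For example, changing the range to {1..5} repeats each test five times; a sketch, with the default sysbench command standing in for the loop body:

# Repeats the CPU test five times; any of the test commands can go in the body
for i in {1..5}
do
  sysbench --test=cpu --cpu-max-prime=20000 run
done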

Below are the defaults that run as part of the tests.

Sysbench

sysbench --test=cpu --cpu-max-prime=20000 run
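
Note that --test= is the legacy 0.x syntax; on sysbench 1.0 and later it is deprecated in favor of a positional test name:

# Equivalent invocation on sysbench >= 1.0
sysbench cpu --cpu-max-prime=20000 run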

Fio

fio --name=seqread --rw=read --direct=1 --ioengine=libaio --bs=8k --numjobs=8 --size=1G --runtime=600 --group_reporting

To make --size dynamic based on the available node memory, update the --size parameter as below. E.g., here --size is twice the total memory on the node (MemTotal in /proc/meminfo is reported in KiB, hence the trailing K):

--size=$(($(grep MemTotal /proc/meminfo|awk '{print $2}') * 2))K
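
Putting it together, the full default fio job with the dynamic size substituted looks like this:

# Same default job, sized to twice the node's total memory (MemTotal is in KiB)
fio --name=seqread --rw=read --direct=1 --ioengine=libaio --bs=8k --numjobs=8 \
    --size=$(($(grep MemTotal /proc/meminfo | awk '{print $2}') * 2))K \
    --runtime=600 --group_reporting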