This repository installs and configures a (local) Kubernetes cluster from scratch and deploys a Python/Flask workload onto n nodes, providing a way to create and measure CPU load on them.
The purpose of this rather academic project is to get into K8s and to learn how to communicate with the Pods/Deployment.
Since I had to start somewhere, I'm assuming a regular Linux user with sudo/root access, and the group names "kube" and "docker" for the respective services. I used Arch Linux/Pacman on my test machine; a more realistic scenario would be its own instance running Alpine Linux.
I got stuck in the routing and proxying realm of K8s, so the LoadBalancer deployment doesn't work yet, and the fun part only begins there: running statistics over different scenarios and, ultimately, providing fancy graphics!
Usually one would use the stress/stress-ng command-line tool for this kind of scenario, but I resorted to some simple floating-point arithmetic for now.
I also had to considerably upgrade my host on AWS: Kubernetes (even Minikube) just doesn't run nicely on a single vCore with never-enough RAM, and it wouldn't provide useful data anyway. On the bright side, vertically scaling MyLittleEC2 instance was really easy.
The stress method blocks and returns only after $STRESSTIME seconds, which is braindead. I'd solve that with fork/SIGALRM IPC if I had the time.
The Flask daemon should return plain JSON.
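The float-arithmetic stress loop could look roughly like this. This is a minimal sketch, not the actual `main.py`; the function name, the default of 60 seconds, and the JSON keys (`hostname`, `seconds`) are assumptions based on the endpoint table further down:

```python
import json
import os
import socket
import time

# Analogous to the $STRESSTIME variable mentioned above (default assumed: 60 s).
STRESSTIME = float(os.environ.get("STRESSTIME", "60"))

def stress(seconds: float = STRESSTIME) -> dict:
    """Busy-loop on floating-point arithmetic for `seconds` seconds.

    Blocks the caller for the whole duration -- exactly the limitation
    described above that fork/SIGALRM would fix.
    """
    deadline = time.monotonic() + seconds
    x = 1.0001
    while time.monotonic() < deadline:
        x = (x * x) % 1e9 + 1.0001  # pointless float math to burn CPU
    return {"hostname": socket.gethostname(), "seconds": seconds}

# The Flask handler would then just json.dumps() this dict, e.g.:
#   json.dumps(stress(0.1))
```

Because the loop spins on `time.monotonic()`, the handler occupies one core for the full duration, which is what drives the load average measured by `/cpu`.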
sudo pacman -Sy docker ethtool wget unzip containerd
sudo pacman -Sy minikube
sudo pacman -Sy python-pip
Install etcd from AUR (https://aur.archlinux.org/etcd.git)
sudo pacman -Sy go
mkdir pkg && cd pkg
git clone https://aur.archlinux.org/etcd.git
cd etcd
makepkg
sudo pacman -U etcd-*.pkg.tar.zst
cd ../..
sudo pip install -r requirements.txt
Running pip with sudo installs the necessary Python modules system-wide. (Only needed for testing main.py locally.)
This was taken from https://github.com/JasonHaley/hello-python.git, and extended by the CPU stressing and Load Average functions.
.
├── app
│   ├── main.py
│   └── requirements.txt
├── docker
│   └── Dockerfile
├── kubernetes
│   └── deployment.yaml
├── LICENSE
└── README.md
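For orientation, `kubernetes/deployment.yaml` is essentially a standard apps/v1 Deployment. The following is a hedged sketch, not the repository's exact manifest; the replica count, labels, and port are assumptions (the service name and image tag follow the commands below):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: load-and-stress
spec:
  replicas: 3                      # "n nodes" worth of pods; adjust as needed
  selector:
    matchLabels:
      app: load-and-stress
  template:
    metadata:
      labels:
        app: load-and-stress
    spec:
      containers:
        - name: load-and-stress
          image: load-and-stress:latest
          imagePullPolicy: Never   # use the image built inside minikube's docker-env
          ports:
            - containerPort: 8080
```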
| URL | Description | Output |
|---|---|---|
| /stress | Create CPU load for 60 seconds. | hostname, seconds |
| /cpu | Return the load average (`psutil.getloadavg()[0]`). | hostname, load |
| /insight | Environment of the current pod/process. | environment |
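Since the endpoints return plain JSON, they can be queried with nothing but the standard library. A sketch under assumptions: the base URL is whatever `kubectl get svc` reports, and the JSON key names (`hostname`, `load`) are inferred from the table above:

```python
import json
from urllib.request import urlopen

def parse_cpu_payload(raw: bytes) -> tuple[str, float]:
    """Split a /cpu JSON payload into (hostname, load)."""
    data = json.loads(raw)
    # Key names are assumptions based on the endpoint table above.
    return data["hostname"], float(data["load"])

def get_load(base_url: str) -> tuple[str, float]:
    """Query the /cpu endpoint of one pod behind `base_url`."""
    with urlopen(f"{base_url}/cpu", timeout=10) as resp:
        return parse_cpu_payload(resp.read())

if __name__ == "__main__":
    # The address is illustrative; use whatever `kubectl get svc` reports.
    print(get_load("http://192.168.49.2:8080"))
```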
docker image rm load-and-stress
docker build -f Dockerfile -t load-and-stress:latest .
minikube start
eval $(minikube docker-env)
kubectl delete deployment --all
kubectl apply -f deployment.yaml
kubectl replace --force -f deployment.yaml
# kubectl expose deployment load-and-stress --type=LoadBalancer --port=8080
minikube tunnel > /dev/null 2>&1 &
minikube proxy >/dev/null 2>&1 &
kubectl get svc
url=$(kubectl get svc | awk '/load-and-stress/ && /8080/ {split($5, p, ":"); print $4 ":" p[1]}')
echo "http://${url}/"
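The "do statistics over different scenarios" part could start as simply as sampling `/cpu` repeatedly and summarizing the series. A sketch using only the standard library; the URL and the `load` JSON key are assumptions, and the sampling parameters are arbitrary:

```python
import json
import statistics
import time
from urllib.request import urlopen

def summarize(loads: list[float]) -> dict:
    """Reduce a series of load-average samples to a few summary statistics."""
    return {
        "n": len(loads),
        "min": min(loads),
        "mean": statistics.fmean(loads),
        "max": max(loads),
    }

def sample_loads(base_url: str, samples: int = 10, interval: float = 5.0) -> list[float]:
    """Poll the /cpu endpoint `samples` times, `interval` seconds apart."""
    loads = []
    for _ in range(samples):
        with urlopen(f"{base_url}/cpu", timeout=10) as resp:
            loads.append(float(json.loads(resp.read())["load"]))  # key assumed
        time.sleep(interval)
    return loads

if __name__ == "__main__":
    url = "http://192.168.49.2:8080"  # whatever the echo above printed
    print(summarize(sample_loads(url)))
```

Hitting `/stress` between sampling runs, then comparing the summaries, would give the different scenarios worth graphing.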