# Kubernetes Cluster Setup on Raspberry Pi 4B
| Item | Information |
|---|---|
| Number of Pi Servers | 2 |
| Pi1 Hostname | jgte - static IP assigned by router (192.168.1.200) |
| Pi2 Hostname | khbr - static IP assigned by router (192.168.1.201) |
| CPU Architecture | ARM64 (not AMD64) |
| Processor | 1.5 GHz quad-core |
| RAM | 8GB |
| Disk | SanDisk MicroSD 32GB |
| OS | Linux jgte 5.4.0-1041-raspi #45-Ubuntu SMP PREEMPT Thu Jul 15 01:17:56 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux |
- RaspberryPi-Kubernetes
  - Preparation
  - Installation
  - Add user in group
  - Setup
  - Create Remote User - `alok`
  - Create CSR for user `alok` and copy to Kubernetes master node
  - Sign User CSR on master node
  - Copy Signed User Certificate to local server
  - Copy CA Certificate to local server
  - Create User Credentials - `alok`
  - Create Cluster - `home-cluster`
  - Bind User `alok` Context to Cluster `home-cluster` - `alok-home`
  - Use the context - `alok-home`
  - Setup Worker Node (not control plane HA)
## Ubuntu Server Boot Setup
Use the Raspberry Pi Imager (see the tutorial link below):
- Choose Pi version/OS/storage
- EDIT SETTINGS
  - GENERAL
    - Set wireless LAN settings
    - Set locale settings
  - Enable SSH and user: `aloksingh`
https://ubuntu.com/tutorials/how-to-install-ubuntu-on-your-raspberry-pi#2-prepare-the-sd-card
Add the entries below in `/etc/hosts`:

```
192.168.1.200 jgte kubernetes
192.168.1.201 khbr
```

```bash
vim /etc/hosts
```

SSH into the Pi. `xxx` is to be replaced by looking at the dynamic IP allocated to the Wi-Fi interface:

```bash
ssh aloksingh@192.168.1.xxx
```

Add the config below in `/etc/netplan/50-cloud-init.yaml` to configure a static IP for the eth interface. You may leave the Wi-Fi config as is.
```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      addresses:
        - 192.168.1.200/24
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
      routes:
        - to: default
          via: 192.168.1.1
```

Add the config below in `/etc/netplan/50-cloud-init.yaml` to configure a static IP for the Wi-Fi interface.
```yaml
network:
  version: 2
  wifis:
    renderer: networkd
    wlan0:
      access-points:
        Alok_5GH:
          password: b885a9eea2d5fcfa6672ebca7bc92efcd64a2f5e51773f88c0fefd97b15682ea
      dhcp4: false
      addresses:
        - 192.168.1.201/24
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
      routes:
        - to: default
          via: 192.168.1.1
```

Apply the netplan config:

```bash
sudo netplan apply
```

## User Setup

Set the hostname and create the user `alok`:

```bash
ssh aloksingh@jgte sudo -S hostnamectl set-hostname jgte
ssh aloksingh@jgte sudo -S groupadd -g 600 singh
ssh aloksingh@jgte sudo -S useradd -u 601 -g 600 -s /usr/bin/bash alok
ssh aloksingh@jgte sudo -S mkdir /home/alok
ssh aloksingh@jgte sudo -S chown -R alok:singh /home/alok/
ssh aloksingh@jgte sudo -S passwd alok
ssh aloksingh@jgte sudo -S usermod -aG sudo alok
```

Generate a key pair for passwordless SSH (skip keygen if you want to reuse an existing key pair):

```bash
ssh-keygen
```
Copy the public key to the Pi and install base packages:

```bash
cat ~/.ssh/id_rsa.pub | ssh alok@jgte "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
ssh alok@jgte sudo -S apt install net-tools
ssh alok@jgte sudo -S apt install snapd
```

Enable IP forwarding by adding the line below in `/etc/sysctl.conf`:

```
net.ipv4.ip_forward=1
```

```bash
ssh alok@jgte sudo -S vim /etc/sysctl.conf
```
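The change can be picked up without a reboot; a quick check, assuming the entry above is in place:

```bash
ssh alok@jgte sudo -S sysctl -p                   # reload /etc/sysctl.conf
ssh alok@jgte cat /proc/sys/net/ipv4/ip_forward   # should print 1
```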
Set the timezone:

```bash
ssh alok@jgte sudo timedatectl set-timezone Asia/Kolkata
```

Install Docker:

```bash
ssh alok@jgte curl -fsSL https://get.docker.com -o get-docker.sh
ssh alok@jgte sh get-docker.sh
ssh alok@jgte sudo -S groupadd docker
ssh alok@jgte sudo -S usermod -a -G docker alok
```

Add the below in `/etc/docker/daemon.json`:
```json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
```
```bash
ssh alok@jgte sudo nano /etc/docker/daemon.json
```
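After editing, restarting Docker and confirming the active cgroup driver is a quick sanity check:

```bash
ssh alok@jgte sudo -S systemctl restart docker
ssh alok@jgte "docker info | grep -i cgroup"   # expect: Cgroup Driver: systemd
```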
Add the line below in `/boot/firmware/cmdline.txt` (append it on the same single line, from the start), then reboot for the cgroup change to take effect:

```
cgroup_enable=memory cgroup_memory=1
```

```bash
ssh alok@jgte sudo -S nano /boot/firmware/cmdline.txt
```

## Microk8s Installation

```bash
ssh alok@jgte sudo -S snap install microk8s --channel=1.25/stable --classic
```

By adding the user to the `microk8s` group, the user will have full access to the cluster:
```bash
ssh alok@jgte sudo -S usermod -a -G microk8s alok
ssh alok@jgte sudo -S chown -f -R alok ~/.kube
ssh alok@jgte sudo -S snap alias microk8s.kubectl kubectl
ssh alok@jgte microk8s.start
ssh alok@jgte microk8s enable dns
```

Enable the Nginx Ingress Controller. This will deploy a DaemonSet `nginx-ingress-microk8s-controller`:
```bash
ssh alok@jgte microk8s enable ingress
ssh alok@jgte microk8s enable rbac
```

## Create CSR for user `alok`

On the local machine:

```bash
cd ~/cert/k8s
```
```bash
openssl genrsa -out alok.key 2048
openssl req -new -key alok.key -out alok-csr.pem -subj "/CN=alok/O=home-stack/O=ingress"
```
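Optionally, double-check the CSR subject before copying it over:

```bash
# expect CN=alok with the home-stack and ingress groups
openssl req -in alok-csr.pem -noout -subject
```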
Copy the CSR to the Kubernetes master node:

```bash
ssh alok@jgte mkdir cert
scp alok-csr.pem alok@jgte:cert/
```

Sign the user CSR on the master node:

```bash
ssh alok@jgte "openssl x509 -req -in ~/cert/alok-csr.pem -CA /var/snap/microk8s/current/certs/ca.crt -CAkey /var/snap/microk8s/current/certs/ca.key -CAcreateserial -out ~/cert/alok-crt.pem -days 365"
```

Copy the signed user certificate and the CA certificate to the local server:

```bash
scp alok@jgte:cert/alok-crt.pem ~/cert/k8s
scp alok@jgte:/var/snap/microk8s/current/certs/ca.crt ~/cert/k8s
```

Create user credentials (`alok`), create the cluster (`home-cluster`), bind the user context `alok-home` to the cluster, and use the context:

```bash
kubectl config set-credentials alok --client-certificate=/Users/aloksingh/cert/k8s/alok-crt.pem --client-key=/Users/aloksingh/cert/k8s/alok.key --embed-certs=true
kubectl config set-cluster home-cluster --server=https://kubernetes:16443 --certificate-authority=/Users/aloksingh/cert/k8s/ca.crt --embed-certs=true
kubectl config set-context alok-home --cluster=home-cluster --namespace=home-stack --user alok
kubectl config use-context alok-home
```
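A quick way to confirm the new context works end to end (assuming the `home-stack` namespace exists on the cluster):

```bash
kubectl config get-contexts              # alok-home should be marked current
kubectl get pods --namespace home-stack  # should authenticate as user alok
```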
## Setup Worker Node (not control plane HA)

Repeat these steps on the worker node:

- OS Setup
- User Setup
- Microk8s installation
- Start Microk8s (do not enable DNS)
Add the worker to the cluster. From the master node:

```bash
sudo microk8s.add-node
```

From the worker node:

```bash
microk8s join 192.168.1.200:25000/01fd669b595c650e243ac70c02eb3b54/d2301359744a --worker
```

To leave the cluster, from the worker node:

```bash
sudo microk8s.leave
```

If Calico (CNI) is not able to create IP routes, you may have to upgrade kernel modules:
```bash
sudo apt install linux-modules-extra-raspi
```

Note: you will get "permission denied" here because the role binding has not yet been done for the user `alok`:

```bash
kubectl get nodes -o jsonpath='{}'
```
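One way to clear the permission error is to grant `alok` a role via RBAC; a minimal sketch run on the master node (where kubectl has admin rights), with illustrative binding names:

```bash
# namespaced access for day-to-day work in home-stack
ssh alok@jgte kubectl create rolebinding alok-home-stack-admin \
  --clusterrole=admin --user=alok --namespace=home-stack
# cluster-scoped reads such as 'kubectl get nodes' need a ClusterRoleBinding
ssh alok@jgte kubectl create clusterrolebinding alok-cluster-admin \
  --clusterrole=cluster-admin --user=alok
```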
Check if all pod IPs (across nodes) are reachable:

```bash
kubectl get pod --namespace home-stack -o json | jq .items[].status.podIP -r | fping
```

## Git Setup

```bash
ssh alok@jgte "git config --global user.email alok.ku.singh@gmail.com"
ssh alok@jgte "git config --global user.name alokkusingh"
ssh alok@jgte "ssh-keygen -t rsa -b 4096 -C alok.ku.singh@gmail.com"
```

Add the public key to GitHub under SSH and GPG keys.

```bash
ssh alok@jgte "mkdir data/git"
ssh alok@jgte "cd data/git; git clone git@github.com:alokkusingh/BankStatements.git"
```

## Handy MicroK8s Commands

| Command Description | Command |
|---|---|
| Start Kubernetes services | microk8s.start |
| Inspect services and generate a diagnostics report | microk8s.inspect |
| Stop all Kubernetes services | microk8s.stop |
| Status of the cluster | microk8s.kubectl cluster-info |
| Set up DNS | microk8s enable dns |
| Command Description | Command |
|---|---|
| Add Master Node | sudo microk8s.add-node |
| Add a Worker Node | microk8s join 192.168.1.200:25000/201afbfc67544696d01eed22a56d5030/4496beb91a5d |
| Node List | kubectl get nodes |
| Remove Node by Master | sudo microk8s remove-node <node name> |
| Leave Node by Worker | sudo microk8s.leave |
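Before removing a node, it is common to drain it first so workloads reschedule cleanly; a sketch:

```bash
kubectl drain <node name> --ignore-daemonsets --delete-emptydir-data
kubectl get nodes   # the node should show SchedulingDisabled
```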
## Kafka Cluster Deployment

Deploy the namespace, network policy, ZooKeeper, and Kafka (use raw.githubusercontent.com URLs so kubectl fetches the YAML itself rather than the GitHub HTML page):

```bash
kubectl apply -f https://raw.githubusercontent.com/alokkusingh/kafka-experimental/master/yaml/namespace-kafka.yaml
kubectl apply -f https://raw.githubusercontent.com/alokkusingh/kafka-experimental/master/yaml/kafka-networkpolicy.yaml
kubectl apply --validate=true --dry-run=client --filename=https://raw.githubusercontent.com/alokkusingh/kafka-experimental/master/yaml/zookeeper-cluster.yaml
kubectl apply -f https://raw.githubusercontent.com/alokkusingh/kafka-experimental/master/yaml/zookeeper-cluster.yaml --namespace=kafka-cluster
kubectl delete -f https://raw.githubusercontent.com/alokkusingh/kafka-experimental/master/yaml/zookeeper-cluster.yaml --namespace=kafka-cluster
kubectl apply --validate=true --dry-run=client --filename=https://raw.githubusercontent.com/alokkusingh/kafka-experimental/master/yaml/kafka-cluster.yaml
kubectl apply -f https://raw.githubusercontent.com/alokkusingh/kafka-experimental/master/yaml/kafka-cluster.yaml --namespace=kafka-cluster
kubectl delete -f https://raw.githubusercontent.com/alokkusingh/kafka-experimental/master/yaml/kafka-cluster.yaml --namespace=kafka-cluster
```
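To confirm the deployments settle, rollout status can be watched; the deployment names here are inferred from the pod names used later in this doc:

```bash
kubectl rollout status deployment/zookeeper-deployment --namespace=kafka-cluster
kubectl rollout status deployment/kafka-deployment --namespace=kafka-cluster
```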
## Kubernetes Dashboard

```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
kubectl get svc -n kubernetes-dashboard
kubectl edit svc -n kubernetes-dashboard kubernetes-dashboard
```

```yaml
spec:
  type: LoadBalancer # this has to be changed to LoadBalancer to access the dashboard externally
```
Kubernetes does not offer an implementation of network load balancers (Services of type LoadBalancer) for bare-metal clusters. If you’re not running Kubernetes on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.
Bare-metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services.
For the LoadBalancer service type we are left with two choices:
1. Set `externalIPs` manually:

```bash
kubectl edit svc -n kubernetes-dashboard kubernetes-dashboard
```

```yaml
spec:
  type: LoadBalancer # this has to be changed to LoadBalancer to access the dashboard externally
  externalIPs:
    - 192.168.1.200 # needed because a public IP can't be assigned automatically
```
2. MetalLB (https://metallb.universe.tf) aims to redress this imbalance by offering a network load balancer implementation that integrates with standard network equipment, so that external services on bare-metal clusters also “just work” as much as possible.
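On MicroK8s, MetalLB is available as an addon; a minimal sketch, assuming the 192.168.1.240-250 range is unused on the LAN:

```bash
# the address range is an assumption; pick IPs your router won't hand out
ssh alok@jgte microk8s enable metallb:192.168.1.240-192.168.1.250
```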
Create a service account and fetch its token for dashboard login (the `dashboard-token-bspqw` name is generated; pick yours from the `get secrets` output):

```bash
kubectl create serviceaccount dashboard -n kafka-cluster
kubectl create clusterrolebinding dashboard-admin -n kafka-cluster --clusterrole=cluster-admin --serviceaccount=kafka-cluster:dashboard
kubectl get secrets -n kafka-cluster
kubectl get secret dashboard-token-bspqw -n kafka-cluster -o jsonpath="{.data.token}" | base64 --decode
```
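On Kubernetes 1.24+ service-account token secrets are no longer created automatically; if `get secrets` shows none, a short-lived token can be requested instead:

```bash
kubectl create token dashboard -n kafka-cluster
```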
Port forwarding reference: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#port-forward
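For quick local access without changing the Service type, port forwarding also works; a sketch (the local port 8443 is arbitrary):

```bash
kubectl port-forward -n kubernetes-dashboard svc/kubernetes-dashboard 8443:443
# then open https://localhost:8443 and log in with the token above
```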
## Handy kubectl Commands

```bash
kubectl api-resources
kubectl explain pods
kubectl describe secret
sudo microk8s inspect
kubectl get all --namespace=kafka-cluster
kubectl get namespaces
kubectl get networkpolicy --namespace=kafka-cluster
kubectl top node
kubectl top pod --namespace=kafka-cluster
```
### ZooKeeper

```bash
kubectl apply --validate=true --dry-run=client --filename=yaml/zookeeper-cluster.yaml
kubectl apply -f yaml/zookeeper-cluster.yaml --namespace=kafka-cluster
kubectl delete -f yaml/zookeeper-cluster.yaml --namespace=kafka-cluster
kubectl get all --namespace=kafka-cluster
kubectl get events --namespace=kafka-cluster
kubectl describe pod/zookeeper-deployment-65487b964b-ls6cg --namespace=kafka-cluster
kubectl get pod/zookeeper-deployment-65487b964b-ls6cg -o yaml --namespace=kafka-cluster
kubectl logs pod/zookeeper-deployment-7549748b46-x65fp zookeeper --namespace=kafka-cluster
kubectl logs --previous pod/zookeeper-deployment-7549748b46-jvvqk zookeeper --namespace=kafka-cluster
kubectl describe pod/zookeeper-deployment-65cb748c5c-fv545 --namespace=kafka-cluster
kubectl exec -it pod/zookeeper-deployment-7549748b46-9n9kb --namespace=kafka-cluster -- bash
```

Inside the pod:

```bash
apt-get update
apt-get install iputils-ping
apt-get install net-tools
```
### Kafka

```bash
kubectl apply --validate=true --dry-run=client --filename=yaml/kafka-cluster.yaml
kubectl apply -f yaml/kafka-cluster.yaml --namespace=kafka-cluster
kubectl delete -f yaml/kafka-cluster.yaml --namespace=kafka-cluster
kubectl logs pod/kafka-b8bdd7bc8-w2qq4 kafka --namespace=kafka-cluster
kubectl exec -it pod/kafka-deployment-86559574cc-jpxwq --namespace=kafka-cluster -- bash
kubectl cluster-info dump --namespace=kafka-cluster
```

Inside the pod:

```bash
apt-get update
```
### Nginx

```bash
kubectl apply --validate=true --dry-run=client --filename=yaml/nginx-cluster.yaml
kubectl apply -f yaml/nginx-cluster.yaml --namespace=kafka-cluster
kubectl delete -f yaml/nginx-cluster.yaml --namespace=kafka-cluster
kubectl logs pod/nginx-deployment-6d9d878b78-tst2b nginx --namespace=kafka-cluster
kubectl exec -it pod/nginx-6d9d878b78-kqlfb --namespace=kafka-cluster -- bash
```

Inside the pod, check ZooKeeper connectivity:

```bash
nc -vz zookeeper-service 2181 -w 10
```

```bash
kubectl get ep nginx-service --namespace=kafka-cluster
kubectl describe svc nginx-service --namespace=kafka-cluster
kubectl get pods --show-labels --namespace=kafka-cluster
```
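Following on from `--show-labels`, pods can also be filtered by a label selector; a sketch (the `app=nginx` label is an assumption about the manifest):

```bash
kubectl get pods -l app=nginx --namespace=kafka-cluster
```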
