Set up a Multi-Master Kubernetes Cluster using the Kubeadm utility

Node Requirements

Here I am setting up a basic cluster with 2 master nodes, 1 worker node, and 1 load balancer node that load balances between the two masters to create a highly available cluster.

  • 1 LoadBalancer Node using HAProxy
  • 2 Master Nodes
  • 1 Worker Node

Setting up the load balancer node

Update the system and install HAProxy

sudo yum update && sudo yum upgrade -y

sudo yum install haproxy

Now edit the HAProxy config

sudo vi /etc/haproxy/haproxy.cfg

We have to create frontend and backend sections for HAProxy. The frontend listens on the load balancer's IP, so that is the address we will point kubectl at, and HAProxy will transparently load balance between the IPs of our master nodes defined in the backend. In the backend we provide the IPs of our master01 and master02 nodes along with the default kube-apiserver port, i.e. 6443, so that kubectl traffic reaches the master nodes through the load balancer.

For Frontend

frontend frontend-apiserver
      bind 0.0.0.0:6443
      mode tcp
      option tcplog
      default_backend backend_apiserver

For Backend

backend backend_apiserver
     mode tcp
     option tcp-check
     balance roundrobin
     default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
        server <master node hostname> <master node ip:6443> check
        server <master node 02 hostname> <master node 02 ip:6443> check
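
For reference, a filled-in backend might look like this; the hostnames and 10.0.0.x addresses are hypothetical, substitute your own master nodes:

    backend backend_apiserver
         mode tcp
         option tcp-check
         balance roundrobin
         default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
            server master01 10.0.0.11:6443 check
            server master02 10.0.0.12:6443 check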


Now add entries for master01 and master02 in the /etc/hosts file

sudo vi /etc/hosts
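
Each entry maps a master's hostname to its IP, for example (hypothetical 10.0.0.x addresses):

    10.0.0.11   master01
    10.0.0.12   master02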


Now restart and verify HAProxy

sudo systemctl restart haproxy

sudo systemctl status haproxy

If HAProxy fails to start or cannot bind because SELinux blocks it, allow it to connect to any port with this command

sudo setsebool -P haproxy_connect_any=1


Install netcat

sudo yum install -y nc

Run the following to confirm that HAProxy is listening on port 6443 on the load balancer

nc -v localhost 6443


Add a rule to firewalld, otherwise the cluster won't be able to communicate with the load balancer node

sudo firewall-cmd --zone=public --permanent --add-port=<exposed lb port>/tcp

sudo systemctl restart firewalld

sudo systemctl daemon-reload
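
For example, assuming the load balancer exposes the default API server port 6443 (the same port used in the HAProxy frontend above):

    sudo firewall-cmd --zone=public --permanent --add-port=6443/tcp
    sudo systemctl restart firewalld

(sudo firewall-cmd --reload also works to apply a permanent rule without restarting the service.)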

Check if the kube-apiserver is reachable through the load balancer

On any of the master nodes, run curl https://<loadbalancer ip>:<exposed port>/version -k. If there is no output, or you get a "no route to host" error, debug the firewall config. If successful you will get output similar to the example below.
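
The /version endpoint returns a small JSON document; the exact values will differ on your cluster, but the shape is roughly:

    {
      "major": "1",
      "minor": "19",
      "gitVersion": "v1.19.0",
      "platform": "linux/amd64"
    }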


PRE-REQUISITES

  • Kubeadm, kubelet, kubectl and docker must be installed on all the nodes, i.e. master01, master02 and worker
  • Swap must be off: use sudo swapoff -a and edit /etc/fstab for persistence
  • If you have initialized a cluster before, run sudo kubeadm reset -f and flush iptables
  • I am using the Calico network plugin; for a different network plugin you are free to use any manifest provided by Kubernetes
  • Disable the firewalld service: sudo systemctl stop firewalld and sudo systemctl disable firewalld
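
A minimal sketch of these preparation steps as commands, assuming CentOS hosts; the sed line is just one way to comment out the swap entry in /etc/fstab:

    # run on master01, master02 and worker
    sudo swapoff -a                              # turn swap off for the current boot
    sudo sed -i '/ swap / s/^/#/' /etc/fstab     # comment out the swap line so it stays off after reboot
    sudo kubeadm reset -f                        # only if this node was part of a previous cluster
    sudo iptables -F                             # flush leftover iptables rules
    sudo systemctl stop firewalld
    sudo systemctl disable firewalld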

BOOTSTRAPPING THE K8S CLUSTER

On any of the master nodes, initialize the cluster

kubeadm init --control-plane-endpoint=<LOADBALANCER_IP:PORT> --upload-certs --pod-network-cidr=<Calico Network Range>
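
For example, assuming the load balancer listens on the hypothetical address 10.0.0.10:6443 and Calico's default pod CIDR of 192.168.0.0/16:

    kubeadm init --control-plane-endpoint=10.0.0.10:6443 --upload-certs --pod-network-cidr=192.168.0.0/16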

After successful execution kubeadm will print output containing the join commands for additional master and worker nodes.


Please keep a note of the join commands for the master and worker nodes.

Copy the join command for the master and run it on master02.
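
The control-plane join command printed by kubeadm init looks roughly like this; the token, hash and certificate key are generated for your cluster:

    kubeadm join <LOADBALANCER_IP:PORT> --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash> \
        --control-plane --certificate-key <certificate key>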

Master02 has now also joined the cluster.


Execute the three commands provided on both the masters
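
For reference, these are the standard kubeconfig setup commands that kubeadm prints after a successful init:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config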


ON LOADBALANCER NODE

Follow this procedure

  • Create the kube directory: mkdir -p $HOME/.kube
  • If not root, on master01 give the user permission for the file so that it can be copied using scp: sudo chown centos /etc/kubernetes/admin.conf
  • Use the IP of master01 and scp to copy admin.conf to the load balancer node: scp centos@master01:/etc/kubernetes/admin.conf $HOME/.kube/config
  • Finally, give ownership of the copied file to the current user: sudo chown $(id -u):$(id -g) $HOME/.kube/config

Now the load balancer is almost set up; you need to install the kubectl CLI so that we can communicate with the master nodes. Here I already have it set up.

With this, the multi-master setup is done and I can see multiple control-plane nodes.
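
Running kubectl get nodes should now list both masters and the worker. The output below is illustrative; node names, ages and versions will differ on your cluster:

    kubectl get nodes

    NAME       STATUS     ROLES    AGE   VERSION
    master01   NotReady   master   10m   v1.19.0
    master02   NotReady   master   6m    v1.19.0
    worker01   NotReady   <none>   4m    v1.19.0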


IMPORTANT: The nodes show NotReady because I have not set up any CNI network plugin for the internal pod traffic; you can set up any network plugin supported by Kubernetes and then they will move to the Ready state.
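
Since this guide uses Calico, one way to bring the nodes to Ready is to apply the Calico manifest; check the Calico documentation for the current manifest URL and version, as it changes between releases:

    kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml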