- Create a node with the following configuration
- 1 GB RAM / 1 CPU droplet, 30 GB SSD disk, 2 TB transfer
- Select an appropriate region
- CoreOS 766.4.0 (stable release)
- Turn on private networking
- Select User Data option
- In the User Data textbox, provide the information specified in this file
- Use the URL https://discovery.etcd.io/new?size=3 to generate a discovery token. The 3 at the end specifies the number of nodes expected when initiating the cluster; at least that many nodes must be present in the cluster for it to be connected to through fleet.
- Replace the token on line 7 of the cloud-config file with the one generated above.
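As a sketch, the token fetch and substitution can be scripted. The file name `cloud-config.yaml`, the placeholder line, and the example token below are assumptions for illustration, not part of this repo (and the real token lives on a different line of the real file):

```shell
# Work in a scratch directory so nothing real is touched.
cd "$(mktemp -d)"

# Stand-in cloud-config with a placeholder discovery URL.
cat > cloud-config.yaml <<'EOF'
#cloud-config
coreos:
  etcd:
    discovery: https://discovery.etcd.io/REPLACE_ME
EOF

# On a machine with network access you would fetch a real URL:
#   discovery_url=$(curl -s 'https://discovery.etcd.io/new?size=3')
discovery_url="https://discovery.etcd.io/6a28e078895c5ec737174db2419bb2f3"  # example token

# Substitute it into the file.
sed -i "s|https://discovery.etcd.io/REPLACE_ME|$discovery_url|" cloud-config.yaml
grep discovery: cloud-config.yaml
```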
- CoreOS instances can only be accessed via SSH, so a public key needs to be set up in the DigitalOcean dashboard. Use this link to learn how to set up SSH keys with DigitalOcean.
- In a similar manner create 2 more nodes.
- Once the nodes are created you can securely connect to them using SSH:
```
ssh -A core@<public-ip-address>
```
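Because `-A` forwards your SSH agent, the key registered with DigitalOcean must be loaded into a running agent first. A minimal sketch, assuming the key lives at the default `~/.ssh/id_rsa` path:

```shell
# Start an agent and export its environment variables into this shell.
eval "$(ssh-agent -s)"

# Load the key registered with DigitalOcean; adjust the path if yours differs.
ssh-add ~/.ssh/id_rsa 2>/dev/null || true

# With the agent running, forwarding lets you hop between nodes:
#   ssh -A core@<public-ip-address>
echo "agent socket: $SSH_AUTH_SOCK"
```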
- Once you are connected to the node you can clone this repository:
```
git clone https://github.com/jiteshmohan/kubernetes-do.git
```
- Create a new directory `/opt/bin/` and enter the cloned repository:
```
sudo mkdir -p /opt/bin/
cd kubernetes-do
```
- Copy all files from the `executables` folder into the `/opt/bin/` directory:
```
sudo cp executables/* /opt/bin/
```
- Next you need to copy the service files to `/etc/systemd/system/`. Note that the files differ for the master and the minions.
- On the master:
```
sudo cp service-files/master/*.service /etc/systemd/system/
```
- On the minions:
```
sudo cp service-files/minion/*.service /etc/systemd/system/
```
- DigitalOcean creates a file `/etc/environment` when spawning the instance. Its contents are as follows:
```
$ cat /etc/environment
COREOS_PRIVATE_IPV4=<ip_addr>
COREOS_PUBLIC_IPV4=<ip_addr>
```
- On the minions, add two additional lines to this file pointing at the master:
```
MASTER_PUBLIC_IPV4=<ip_addr>
MASTER_PRIVATE_IPV4=<ip_addr>
```
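The minion-side edit can be scripted. The IP addresses below are placeholders, and a copy in `/tmp` stands in for the real file; on an actual node you would append with `sudo tee -a /etc/environment`:

```shell
env_file=/tmp/environment.example

# Simulate what DigitalOcean writes at boot (example addresses).
cat > "$env_file" <<'EOF'
COREOS_PRIVATE_IPV4=10.132.0.5
COREOS_PUBLIC_IPV4=203.0.113.20
EOF

# Append the master's addresses on each minion.
cat >> "$env_file" <<'EOF'
MASTER_PUBLIC_IPV4=203.0.113.10
MASTER_PRIVATE_IPV4=10.132.0.4
EOF

cat "$env_file"
```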
- Ensure that all the executables copied into `/opt/bin` have execute permissions.
- Enable the services that we created:
```
cd /etc/systemd/system
sudo systemctl enable *.service
```
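The execute-permission check above can be handled in one command with `chmod +x`. The sketch below uses a scratch directory in place of `/opt/bin` so it is safe to run anywhere; on a node the equivalent would be `sudo chmod +x /opt/bin/*`:

```shell
# Scratch directory standing in for /opt/bin.
demo=/tmp/optbin.demo
mkdir -p "$demo"
touch "$demo/kubectl" "$demo/kubelet"

# Grant execute permission to every file in the directory.
chmod +x "$demo"/*

ls -l "$demo"
```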
- On the master start the following services:
```
sudo systemctl restart flanneld
sudo systemctl restart docker
sudo systemctl restart kube-apiserver
sudo systemctl restart kube-controller-manager
sudo systemctl restart kube-scheduler
sudo systemctl restart kube-proxy
sudo systemctl restart kubelet
```
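The master restart sequence can also be written as a loop. Order matters here: flanneld must be up before docker, and kube-apiserver before the components that talk to it. The `echo` below only prints the commands so the sketch is safe to run anywhere; drop it to actually restart the services:

```shell
# Restart order on the master: network overlay first, then docker, then the
# API server, then everything that talks to the API server.
services="flanneld docker kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet"
for s in $services; do
  echo "sudo systemctl restart $s"   # drop the echo to actually restart
done
```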
- On the minions start the following services:
```
sudo systemctl restart flanneld
sudo systemctl restart docker
sudo systemctl restart kube-scheduler
sudo systemctl restart kube-proxy
sudo systemctl restart kubelet
```
- After starting all the above services, use the following to check the status of each service:
```
systemctl status <service-name>
```
- On the master run the following command to get the list of machines in the Kubernetes cluster:
```
kubectl get nodes
```
- To access the Kubernetes dashboard perform the following steps:
```
cd
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
kubectl create -f cluster/addons/kube-ui/kube-ui-rc.yaml --namespace=kube-system
kubectl create -f cluster/addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system
```
- To access the UI, open https://&lt;master-public-ip&gt;:6443/ui/ in a browser.