This repository hosts a concrete implementation of a DigitalOcean provider for the cluster-api project.
This project is currently work-in-progress and in Alpha, so it may not be production ready. There is no backwards-compatibility guarantee at this point. For more details on the roadmap and upcoming features, check out the project's issue tracker on GitHub.
In order to create a cluster using clusterctl, you need the following tools installed on your local machine:
- kubectl, which can be installed by following this tutorial,
- minikube and the appropriate minikube driver. We recommend the kvm2 driver for Linux and virtualbox for macOS,
- a DigitalOcean API Access Token generated and set as the DIGITALOCEAN_ACCESS_TOKEN environment variable (see the example below),
- the Go toolchain installed and configured, needed in order to compile the clusterctl binary,
- the cluster-api-provider-digitalocean repository cloned:
git clone https://github.com/kubernetes-sigs/cluster-api-provider-digitalocean $(go env GOPATH)/src/sigs.k8s.io/cluster-api-provider-digitalocean
The clusterctl tool is used to bootstrap a Kubernetes cluster from zero. Currently, we have not released binaries, so you need to compile it manually.
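As a quick, optional check of the prerequisites above, you can export the API token and confirm the tools are available. The token value below is a placeholder for your own token:
# Set the DigitalOcean API token used by the provider (placeholder value).
export DIGITALOCEAN_ACCESS_TOKEN=<your-digitalocean-api-token>
# Confirm the required tools are installed.
kubectl version --client
minikube version
go version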
Compiling is done by invoking the compile Make target:
make compile
This command generates three binaries: clusterctl, machine-controller and cluster-controller, in the ./bin directory. In order to bootstrap the cluster, you only need the clusterctl binary.
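As an optional sanity check that the build succeeded, you can ask the freshly compiled binary for its usage text:
# Print clusterctl usage to verify the binary was built correctly.
./bin/clusterctl --help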
The clusterctl binary can also be compiled manually, for example:
cd $(go env GOPATH)/src/sigs.k8s.io/cluster-api-provider-digitalocean/cmd/clusterctl
go install
To create your first cluster using cluster-api-provider-digitalocean, you need to use clusterctl. It takes the following four manifests as input:
- cluster.yaml - defines Cluster properties, such as the Pod and Services CIDR, Services Domain, etc.
- machines.yaml - defines Machine properties, such as machine size, image, tags, SSH keys, enabled features, as well as what Kubernetes version will be used for each machine.
- provider-components.yaml - contains the deployment manifest for controllers, the userdata used to bootstrap machines, a secret with the SSH key for the machine-controller, and a secret with the DigitalOcean API Access Token.
- [Optional] addons.yaml - used to deploy additional components once the cluster is bootstrapped, such as the DigitalOcean Cloud Controller Manager and the DigitalOcean CSI plugin.
The manifests can be generated automatically by using the generate-yaml.sh script, located in the cmd/clusterctl/examples/digitalocean directory:
cd cmd/clusterctl/examples/digitalocean
./generate-yaml.sh
cd ../../../..
The result of the script is an out directory with the generated manifests and a generated SSH key to be used by the machine-controller. More details about how it generates manifests and how to customize them can be found in the README file in the cmd/clusterctl/examples/digitalocean directory.
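Optionally, you can list the generated files before moving on; you should see the four manifests described above along with the generated SSH key (the exact key file names may differ):
# Inspect the generated manifests and SSH key.
ls cmd/clusterctl/examples/digitalocean/out/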
Once you have the manifests generated, you can create a cluster using the following command. Make sure to replace the value of the vm-driver flag with the name of your actual minikube driver.
./bin/clusterctl create cluster \
--provider digitalocean \
--vm-driver kvm2 \
-c ./cmd/clusterctl/examples/digitalocean/out/cluster.yaml \
-m ./cmd/clusterctl/examples/digitalocean/out/machines.yaml \
-p ./cmd/clusterctl/examples/digitalocean/out/provider-components.yaml \
-a ./cmd/clusterctl/examples/digitalocean/out/addons.yaml
More details about the create cluster command can be found by invoking help:
./bin/clusterctl create cluster --help
clusterctl's workflow is:
- Create a Minikube bootstrap cluster,
- Deploy the cluster-api-controller, digitalocean-machine-controller and digitalocean-cluster-controller on the bootstrap cluster,
- Create a Master, download the kubeconfig file, and deploy controllers on the Master,
- Create other specified machines (nodes),
- Deploy addon components (digitalocean-cloud-controller-manager and csi-digitalocean),
- Remove the local Minikube cluster.
To learn more about the process and how each component works, check out the diagram in the cluster-api repository.
clusterctl automatically downloads the kubeconfig file from the cluster to your current directory. You can use it with kubectl to interact with your cluster:
kubectl --kubeconfig kubeconfig get nodes
kubectl --kubeconfig kubeconfig get all --all-namespaces
Upgrading the Master automatically (by updating the Machine object) is currently not possible, as the Update method is not fully implemented. More details can be found in issue #32.
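You can also inspect the Cluster API objects themselves; the Machine names shown here are the ones you will reference when upgrading or deleting machines below:
# List the Cluster and Machine objects managed by the Cluster API controllers.
kubectl --kubeconfig kubeconfig get clusters
kubectl --kubeconfig kubeconfig get machines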
Workers can be upgraded by updating the appropriate Machine object for that node. Upgrades work by replacing nodes: first the old node is removed, and then a new one with the new properties is created.
To ensure non-disruptive maintenance, we recommend having at least two worker nodes at the time of upgrading, so another node can take over the workload from the node being upgraded. The node that is going to be upgraded should be marked unschedulable and drained, so no pods are running or scheduled on it.
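Optionally, before cordoning and draining, you can check which pods are currently running on the node you are about to upgrade (the node name placeholder matches the one used in the commands below):
# Show pods scheduled on the node that will be drained.
kubectl --kubeconfig kubeconfig get pods --all-namespaces --field-selector spec.nodeName=<node-name>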
# Make node unschedulable.
kubectl --kubeconfig kubeconfig cordon <node-name>
# Drain all pods from the node.
kubectl --kubeconfig kubeconfig drain <node-name>
Now that you have prepared the node for upgrading, you can proceed with editing the Machine object:
kubectl --kubeconfig kubeconfig edit machine <node-name>
This opens the Machine manifest, such as the following one, in your default text editor. You can choose the editor by setting the EDITOR environment variable.
There you can change machine properties, including the Kubernetes (kubelet) version.
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  creationTimestamp: 2018-09-14T11:02:16Z
  finalizers:
  - machine.cluster.k8s.io
  generateName: digitalocean-fra1-node-
  generation: 3
  labels:
    set: node
  name: digitalocean-fra1-node-tzzgm
  namespace: default
  resourceVersion: "5"
  selfLink: /apis/cluster.k8s.io/v1alpha1/namespaces/default/machines/digitalocean-fra1-node-tzzgm
  uid: a41f83ad-b80d-11e8-aeef-0242ac110003
spec:
  metadata:
    creationTimestamp: null
  providerConfig:
    ValueFrom: null
    value:
      backups: false
      image: ubuntu-18-04-x64
      ipv6: false
      monitoring: true
      private_networking: true
      region: fra1
      size: s-2vcpu-2gb
      sshPublicKeys:
      - ssh-rsa AAAA
      tags:
      - machine-2
  versions:
    kubelet: 1.11.3
status:
  lastUpdated: null
  providerStatus: null
Saving changes to the Machine object deletes the old machine and then creates a new one. After some time, the new machine will be part of your Kubernetes cluster. You can track the progress by watching the list of nodes. Once the new node appears and is Ready, the upgrade has finished.
watch -n1 kubectl --kubeconfig kubeconfig get nodes
To delete the Master and confirm that all relevant resources are deleted from the cloud, we're going to use doctl, the DigitalOcean CLI. You can also use the DigitalOcean Cloud Control Panel or the API instead of doctl.
First, save the Droplet ID of the Master, as we'll use it later to delete the control plane machine:
export MASTER_ID=$(kubectl --kubeconfig=kubeconfig get machines -l set=master -o jsonpath='{.items[0].metadata.annotations.droplet-id}')
Now, delete all Workers in the cluster by removing all Machine objects with the label set=node:
kubectl --kubeconfig=kubeconfig delete machines -l set=node
You can confirm that the nodes are deleted by checking the list of nodes. After some time, only the Master should be present:
kubectl --kubeconfig=kubeconfig get nodes
Then, delete all Services and PersistentVolumeClaims, so all Load Balancers and Volumes in the cloud are deleted:
kubectl --kubeconfig=kubeconfig delete svc --all
kubectl --kubeconfig=kubeconfig delete pvc --all
Finally, we can delete the Master using doctl and the $MASTER_ID environment variable we set earlier:
doctl compute droplet delete $MASTER_ID
You can use doctl to confirm that the Droplets, Load Balancers and Volumes relevant to the cluster are deleted:
doctl compute droplet list
doctl compute load-balancer list
doctl compute volume list
More about development and contributing practices can be found in CONTRIBUTING.md.