Kubernetes Cluster API Provider Kubernetes

The Cluster API brings declarative, Kubernetes-style APIs to cluster creation, configuration and management.

This project is a Cluster API Infrastructure Provider implementation that uses Kubernetes itself to provide the infrastructure. Pods running the kindest/node image built for kind are created and configured to serve as the Nodes of a cluster.

The primary use cases for this project are testing and experimentation.

Quickstart

We will deploy a Kubernetes cluster to provide the infrastructure, install the Cluster API controllers and configure an example Kubernetes cluster using the Cluster API and the Kubernetes infrastructure provider. We will refer to the infrastructure cluster as the outer cluster and the Cluster API cluster as the inner cluster.

Infrastructure

Any recent Kubernetes cluster (1.16+) should be suitable for the outer cluster.

We are going to use Calico as an overlay implementation for the inner cluster with IP-in-IP encapsulation enabled so that our outer cluster does not need to know about the inner cluster's Pod IP range. To make this work we need to ensure that the ipip kernel module is loadable and that IPv4 encapsulated packets are forwarded by the kernel.
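Before provisioning, these prerequisites can be sanity-checked on an outer cluster Node. This is a minimal sketch assuming a Linux host with /proc mounted, not a definitive test; note that Calico will load the ipip module itself if it is loadable.

```shell
# Check whether the ipip kernel module is currently loaded
if grep -q '^ipip' /proc/modules 2>/dev/null; then
  ipip_status="loaded"
else
  # Not fatal: Calico attempts to load the module on demand
  ipip_status="not currently loaded"
fi
echo "ipip module: $ipip_status"

# Check whether the kernel is configured to forward IPv4 packets
ip_forward=$(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null || echo "unknown")
echo "net.ipv4.ip_forward=$ip_forward"
```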

On GKE this can be accomplished as follows:

# The GKE Ubuntu image includes the ipip kernel module
# Calico handles loading the module if necessary
# https://github.com/projectcalico/felix/blob/9469e77e0fa530523be915dfaa69cc42d30b8317/dataplane/linux/ipip_mgr.go#L107-L110
MANAGEMENT_CLUSTER_NAME="management"
gcloud container clusters create $MANAGEMENT_CLUSTER_NAME \
  --image-type=UBUNTU \
  --machine-type=n1-standard-2

# Allow IP-in-IP traffic between outer cluster Nodes from inner cluster Pods
CLUSTER_CIDR=`gcloud container clusters describe $MANAGEMENT_CLUSTER_NAME --format="value(clusterIpv4Cidr)"`
gcloud compute firewall-rules create allow-$MANAGEMENT_CLUSTER_NAME-cluster-pods-ipip \
  --source-ranges=$CLUSTER_CIDR \
  --allow=ipip

# Forward IPv4 encapsulated packets
kubectl apply -f hack/forward-ipencap.yaml
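The manifest above is not reproduced here. As a rough illustration only, a manifest with this purpose could take the shape of a privileged, host-network DaemonSet that accepts forwarded IP-in-IP (IP protocol 4, ipencap) packets on every Node; all names, images and the exact rule below are assumptions, not the contents of hack/forward-ipencap.yaml:

```yaml
# Illustrative sketch only; not the actual hack/forward-ipencap.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: forward-ipencap
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: forward-ipencap
  template:
    metadata:
      labels:
        name: forward-ipencap
    spec:
      hostNetwork: true
      containers:
      - name: forward-ipencap
        image: alpine:3.12
        securityContext:
          privileged: true
        command:
        - sh
        - -c
        # Accept forwarded ipencap (IP protocol 4) packets, then idle
        - |
          apk add --no-cache iptables
          iptables -C FORWARD -p 4 -j ACCEPT 2>/dev/null || \
            iptables -A FORWARD -p 4 -j ACCEPT
          sleep infinity
```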

Installation

# Install clusterctl
# https://cluster-api.sigs.k8s.io/user/quick-start.html#install-clusterctl
CLUSTER_API_VERSION=v0.3.15
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/$CLUSTER_API_VERSION/clusterctl-`uname -s  | tr '[:upper:]' '[:lower:]'`-amd64 -o clusterctl
chmod +x ./clusterctl
sudo mv ./clusterctl /usr/local/bin/clusterctl

# Configure the Kubernetes infrastructure provider
mkdir -p $HOME/.cluster-api
cat > $HOME/.cluster-api/clusterctl.yaml <<EOF
providers:
- name: kubernetes
  url: https://github.com/dippynark/cluster-api-provider-kubernetes/releases/latest/infrastructure-components.yaml
  type: InfrastructureProvider
EOF

# Initialise
clusterctl init --infrastructure kubernetes

Configuration

CLUSTER_NAME="example"

# Set to ClusterIP for outer clusters that do not support Services of type LoadBalancer
export KUBERNETES_CONTROL_PLANE_SERVICE_TYPE="LoadBalancer"
export KUBERNETES_CONTROLLER_MACHINE_CPU_REQUEST="500m"
export KUBERNETES_CONTROLLER_MACHINE_MEMORY_REQUEST="1Gi"
export KUBERNETES_WORKER_MACHINE_CPU_REQUEST="200m"
export KUBERNETES_WORKER_MACHINE_MEMORY_REQUEST="512Mi"
# See kind releases for other available image versions of kindest/node
# https://github.com/kubernetes-sigs/kind/releases
clusterctl config cluster $CLUSTER_NAME \
  --infrastructure kubernetes \
  --kubernetes-version 1.17.0 \
  --control-plane-machine-count 1 \
  --worker-machine-count 1 \
  | kubectl apply -f -

# Retrieve kubeconfig
until [ -n "`kubectl get secret $CLUSTER_NAME-kubeconfig -o jsonpath='{.data.value}' 2>/dev/null`" ] ; do
  sleep 1
done
kubectl get secret $CLUSTER_NAME-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > $CLUSTER_NAME-kubeconfig

# Switch to the new Kubernetes cluster. If the cluster's API server endpoint is not reachable
# from your local machine, you can instead exec into a controller Node (Pod) and run
# `export KUBECONFIG=/etc/kubernetes/admin.conf` there
export KUBECONFIG=$CLUSTER_NAME-kubeconfig

# Wait for the API Server to come up
until kubectl get nodes &>/dev/null; do
  sleep 1
done

# Install Calico. This could also be done using a ClusterResourceSet
# https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-resource-set.html
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
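As a hedged sketch of the ClusterResourceSet alternative mentioned above: applied to the outer cluster, it might look like the following, where the ConfigMap would wrap the Calico manifest and the cni label on the Cluster object is an illustrative assumption (the feature is experimental and must be enabled when initialising Cluster API):

```yaml
# Illustrative ClusterResourceSet for a v1alpha3 (v0.3.x) management cluster
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: calico
spec:
  clusterSelector:
    matchLabels:
      cni: calico          # assumed label set on the Cluster object
  resources:
  - name: calico-manifest  # assumed ConfigMap wrapping calico.yaml
    kind: ConfigMap
```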

# Interact with your new cluster!
kubectl get nodes

Clean up

unset KUBECONFIG
rm -f $CLUSTER_NAME-kubeconfig
kubectl delete cluster $CLUSTER_NAME
# If using the GKE example above
yes | gcloud compute firewall-rules delete allow-$MANAGEMENT_CLUSTER_NAME-cluster-pods-ipip
yes | gcloud container clusters delete $MANAGEMENT_CLUSTER_NAME --async

TODO