This repository hosts a concrete implementation of a Google Cloud Platform provider for the cluster-api project.
Learn how to engage with the Kubernetes community on the community page.
- Join our Cluster API working group sessions
  - Weekly on Wednesdays @ 10:00 PT on Zoom
  - Previous meetings: [ notes | recordings ]
You can reach the maintainers of this project at:
- Slack: #cluster-api
- Mailing List
Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.
- Install `kubectl` (see here).
- Install minikube.
- Install a driver for minikube. For Linux, we recommend kvm2. For MacOS, we recommend VirtualBox.
- Install `kustomize` (see here).
- Build the `clusterctl` tool:

  ```bash
  git clone https://github.com/kubernetes-sigs/cluster-api-provider-gcp $GOPATH/src/sigs.k8s.io/cluster-api-provider-gcp
  cd $GOPATH/src/sigs.k8s.io/cluster-api-provider-gcp
  make clusterctl
  ```
- Create the `cluster.yaml`, `machines.yaml`, `provider-components.yaml`, and `addons.yaml` files, and create the GCP service accounts:

  ```bash
  cd cmd/clusterctl/examples/google
  ./generate-yaml.sh
  cd ../../../..
  kustomize build config/default/ > cmd/clusterctl/examples/google/out/provider-components.yaml
  # The "---" document separator keeps the provider components and the
  # cluster-api components as distinct YAML documents in the combined file.
  echo "---" >> cmd/clusterctl/examples/google/out/provider-components.yaml
  kustomize build vendor/sigs.k8s.io/cluster-api/config/default/ >> cmd/clusterctl/examples/google/out/provider-components.yaml
  ```
- Create a cluster. Set the generated service account as a local environment variable so that the `clusterctl` process uses the same Google credentials as the processes running in minikube and in the final cluster:

  ```bash
  export GOOGLE_APPLICATION_CREDENTIALS=cmd/clusterctl/examples/google/out/machine-controller-serviceaccount.json
  ./bin/clusterctl create cluster --provider google \
    -c cmd/clusterctl/examples/google/out/cluster.yaml \
    -m cmd/clusterctl/examples/google/out/machines.yaml \
    -p cmd/clusterctl/examples/google/out/provider-components.yaml \
    -a cmd/clusterctl/examples/google/out/addons.yaml \
    --minikube="kubernetes-version=v1.12.0"
  ```
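Once `clusterctl` completes, the rest of this guide assumes a kubeconfig file named `kubeconfig` in the current directory. A quick sanity check, as a sketch:

```bash
# Confirm the new cluster answers through the generated kubeconfig.
kubectl --kubeconfig=kubeconfig cluster-info
kubectl --kubeconfig=kubeconfig get nodes
```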
To choose a specific minikube driver, use the `--vm-driver` command line parameter. For example, to use the kvm2 driver with clusterctl you would add `--vm-driver kvm2`, as in the sketch below.
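A full invocation with the driver pinned might look like this (a sketch; every flag other than `--vm-driver` is unchanged from the create command above):

```bash
# Same create command as above, with the minikube driver pinned to kvm2.
./bin/clusterctl create cluster --provider google \
  -c cmd/clusterctl/examples/google/out/cluster.yaml \
  -m cmd/clusterctl/examples/google/out/machines.yaml \
  -p cmd/clusterctl/examples/google/out/provider-components.yaml \
  -a cmd/clusterctl/examples/google/out/addons.yaml \
  --minikube="kubernetes-version=v1.12.0" \
  --vm-driver kvm2
```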
Adding `--minikube="kubernetes-version=v1.12.0"` forces the bootstrap cluster onto a Kubernetes version that supports subresources in CustomResourceDefinitions, which this code relies on; Kubernetes versions before v1.12 do not support them out of the box.
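To illustrate what that requirement means, a CRD opts into subresources via a `subresources` stanza. A minimal sketch follows; the `widgets.example.com` CRD is purely hypothetical and not part of this project:

```bash
# Hypothetical CRD showing the `status` subresource that the bootstrap
# cluster must support; applying it to a v1.12+ cluster should succeed.
cat <<EOF | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Widget
    plural: widgets
  subresources:
    status: {}   # serves .../widgets/<name>/status as a separate endpoint
EOF
```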
Additional advanced flags can be found via help:

```bash
./bin/clusterctl create cluster --help
```
Once you have created a cluster, you can interact with the cluster and machine resources using kubectl:

```bash
kubectl --kubeconfig=kubeconfig get clusters
kubectl --kubeconfig=kubeconfig get machines
kubectl --kubeconfig=kubeconfig get machines -o yaml
```
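The Machine objects carry a `set` label distinguishing the control plane from the workers (the deletion steps below rely on it), so you can also filter by role:

```bash
# List control plane and worker machines separately via the "set" label.
kubectl --kubeconfig=kubeconfig get machines -l set=master
kubectl --kubeconfig=kubeconfig get machines -l set=node
```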
This guide explains how to delete all resources that were created as part of your GCP Cluster API Kubernetes cluster.
- Remember the service accounts that were created for your cluster:

  ```bash
  export MASTER_SERVICE_ACCOUNT=$(kubectl --kubeconfig=kubeconfig get cluster -o=jsonpath='{.items[0].metadata.annotations.gce\.clusterapi\.k8s\.io\/service-account-k8s-master}')
  export WORKER_SERVICE_ACCOUNT=$(kubectl --kubeconfig=kubeconfig get cluster -o=jsonpath='{.items[0].metadata.annotations.gce\.clusterapi\.k8s\.io\/service-account-k8s-worker}')
  export INGRESS_CONTROLLER_SERVICE_ACCOUNT=$(kubectl --kubeconfig=kubeconfig get cluster -o=jsonpath='{.items[0].metadata.annotations.gce\.clusterapi\.k8s\.io\/service-account-k8s-ingress-controller}')
  export MACHINE_CONTROLLER_SERVICE_ACCOUNT=$(kubectl --kubeconfig=kubeconfig get cluster -o=jsonpath='{.items[0].metadata.annotations.gce\.clusterapi\.k8s\.io\/service-account-k8s-machine-controller}')
  ```
- Remember the name and zone of the master VM and the name of the cluster:

  ```bash
  export CLUSTER_NAME=$(kubectl --kubeconfig=kubeconfig get cluster -o=jsonpath='{.items[0].metadata.name}')
  export MASTER_VM_NAME=$(kubectl --kubeconfig=kubeconfig get machines -l set=master | awk '{print $1}' | tail -n +2)
  export MASTER_VM_ZONE=$(kubectl --kubeconfig=kubeconfig get machines -l set=master -o=jsonpath='{.items[0].metadata.annotations.gcp-zone}')
  ```
- Delete all of the node Machines in the cluster. Make sure to wait for the corresponding Nodes to be deleted before moving on to the next step; after this step, the master node will be the only remaining node.

  ```bash
  kubectl --kubeconfig=kubeconfig delete machines -l set=node
  kubectl --kubeconfig=kubeconfig get nodes
  ```
- Delete any Kubernetes objects that may have created GCE resources on your behalf. Make sure to run these commands for each namespace that you created:

  ```bash
  # See the ingress controller docs for information about resources created
  # for ingress objects: https://github.com/kubernetes/ingress-gce
  kubectl --kubeconfig=kubeconfig delete ingress --all

  # Services create a GCE load balancer if the type of the service is
  # LoadBalancer. Additionally, both the LoadBalancer and NodePort types
  # create a firewall rule in your project.
  kubectl --kubeconfig=kubeconfig delete svc --all

  # Persistent volume claims create a GCE disk if the type of the PVC
  # is gcePersistentDisk.
  kubectl --kubeconfig=kubeconfig delete pvc --all
  ```
- Delete the VM that is running your cluster's control plane:

  ```bash
  gcloud compute instances delete --zone=$MASTER_VM_ZONE $MASTER_VM_NAME
  ```
- Delete the roles and service accounts that were created for your cluster:

  ```bash
  ./scripts/delete-service-accounts.sh
  ```
- Delete the firewall rules that were created for the cluster:

  ```bash
  gcloud compute firewall-rules delete $CLUSTER_NAME-allow-cluster-internal
  gcloud compute firewall-rules delete $CLUSTER_NAME-allow-api-public
  ```
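As an optional final check, you can confirm that nothing named after the cluster remains. A sketch, using gcloud's `~` filter operator to match resource names by regex:

```bash
# Both lists should come back empty once cleanup has finished.
gcloud compute instances list --filter="name~$CLUSTER_NAME"
gcloud compute firewall-rules list --filter="name~$CLUSTER_NAME"
```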