This repository implements a simple controller for watching Foo resources as defined with a CustomResourceDefinition (CRD).
This particular example demonstrates how to perform basic operations such as:
- How to register a new custom resource (custom resource type) of type `Foo` using a CustomResourceDefinition.
- How to create/get/list instances of your new resource type `Foo`.
- How to set up a controller that handles create/update/delete events for the resource.
It makes use of the generators in `k8s.io/code-generator` to generate a typed client, informers, listers, and deep-copy functions. You can do this yourself using the `./hack/update-codegen.sh` script.
The `update-codegen` script will automatically generate the following files and directories:

- `pkg/apis/samplecontroller/v1alpha1/zz_generated.deepcopy.go`
- `pkg/client/`
Changes should not be made to these files manually. When creating your own controller based on this implementation, do not copy these files; instead, run the `update-codegen` script to generate your own.
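For orientation, here is a condensed sketch of the API types that drive the generators, following this repository's `pkg/apis/samplecontroller/v1alpha1/types.go` (comments abbreviated):

```go
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// The marker comments below are what update-codegen reads: +genclient
// produces the typed client, and the deepcopy-gen tag produces
// zz_generated.deepcopy.go.

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// Foo is a specification for a Foo resource.
type Foo struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   FooSpec   `json:"spec"`
	Status FooStatus `json:"status"`
}

// FooSpec is the spec for a Foo resource.
type FooSpec struct {
	DeploymentName string `json:"deploymentName"`
	Replicas       *int32 `json:"replicas"`
}

// FooStatus is the status for a Foo resource.
type FooStatus struct {
	AvailableReplicas int32 `json:"availableReplicas"`
}
```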
This is an example of how to build a kube-like controller with a single type.

Prerequisite: since the sample-controller uses `apps/v1` deployments, the Kubernetes cluster version should be 1.9 or later.
```sh
# assumes you have a working kubeconfig, not required if operating in-cluster
$ go run *.go -kubeconfig=$HOME/.kube/config

# create a CustomResourceDefinition
$ kubectl create -f artifacts/examples/crd.yaml

# create a custom resource of type Foo
$ kubectl create -f artifacts/examples/example-foo.yaml

# check deployments created through the custom resource
$ kubectl get deployments
```
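Under the hood, the `-kubeconfig` flag is handed to client-go's `clientcmd` package to build a client configuration. A minimal sketch of that wiring (the clientset import path follows this repository's generated code; error handling shortened):

```go
package main

import (
	"flag"

	"k8s.io/client-go/tools/clientcmd"

	clientset "k8s.io/sample-controller/pkg/client/clientset/versioned"
)

func main() {
	kubeconfig := flag.String("kubeconfig", "", "Path to a kubeconfig. Only required if out-of-cluster.")
	flag.Parse()

	// Build a rest.Config from the kubeconfig file; when running
	// in-cluster, rest.InClusterConfig() would be used instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err)
	}

	// The typed clientset generated by update-codegen, used to
	// create/get/list Foo resources.
	fooClient, err := clientset.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = fooClient // hand this to the controller
}
```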
CustomResourceDefinitions can be used to implement custom resource types for your Kubernetes cluster. These act like most other Resources in Kubernetes, and may be `kubectl apply`'d, etc.
Some example use cases:
- Provisioning/management of external datastores/databases (e.g. CloudSQL/RDS instances)
- Higher-level abstractions around Kubernetes primitives (e.g. a single Resource to define an etcd cluster, backed by a Service and a ReplicationController)
Each instance of your custom resource has an attached Spec, which should be defined via a `struct{}` to provide data format validation. In practice, this Spec is arbitrary key-value data that specifies the configuration/behavior of your Resource.

For example, if you were implementing a custom resource for a Database, you might provide a `DatabaseSpec` like the following:
```go
type DatabaseSpec struct {
	Databases []string `json:"databases"`
	Users     []User   `json:"users"`
	Version   string   `json:"version"`
}

type User struct {
	Name     string `json:"name"`
	Password string `json:"password"`
}
```
To validate custom resources, use the `CustomResourceValidation` feature. This feature is beta and enabled by default in v1.9.
The schema in `crd-validation.yaml` applies the following validation on the custom resource: `spec.replicas` must be an integer with a minimum value of 1 and a maximum value of 10.
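The repository applies this schema via the YAML manifest; purely as an illustration, the same constraint could be expressed in Go with the apiextensions API types (a sketch, assuming the v1beta1 apiextensions package of this era):

```go
package main

import (
	"fmt"

	apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
)

func float64Ptr(f float64) *float64 { return &f }

func main() {
	// Mirrors the openAPIV3Schema in crd-validation.yaml:
	// spec.replicas must be an integer in the range [1, 10].
	validation := &apiextensionsv1beta1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1beta1.JSONSchemaProps{
			Properties: map[string]apiextensionsv1beta1.JSONSchemaProps{
				"spec": {
					Properties: map[string]apiextensionsv1beta1.JSONSchemaProps{
						"replicas": {
							Type:    "integer",
							Minimum: float64Ptr(1),
							Maximum: float64Ptr(10),
						},
					},
				},
			},
		},
	}
	fmt.Printf("%+v\n", validation)
}
```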
In the above steps, use `crd-validation.yaml` to create the CRD:
```sh
# create a CustomResourceDefinition supporting validation
$ kubectl create -f artifacts/examples/crd-validation.yaml
```
Custom Resources support `/status` and `/scale` subresources as an alpha feature in v1.10. Enable this feature using the `CustomResourceSubresources` feature gate on the kube-apiserver:

```sh
--feature-gates=CustomResourceSubresources=true
```
The CRD in `crd-status-subresource.yaml` enables the `/status` subresource for custom resources. This means that `UpdateStatus` can be used by the controller to update only the status part of the custom resource. To understand why only the status part of the custom resource should be updated, please refer to the Kubernetes API conventions.
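As a sketch of how a controller might use it (import paths follow this repository's generated clientset; note that newer client-go versions also take a `context.Context` and `metav1.UpdateOptions`):

```go
package main

import (
	samplev1alpha1 "k8s.io/sample-controller/pkg/apis/samplecontroller/v1alpha1"
	clientset "k8s.io/sample-controller/pkg/client/clientset/versioned"
)

// updateFooStatus mutates only .Status on a deep copy and writes it back
// through the /status subresource; changes to .Spec are ignored by this call.
func updateFooStatus(client clientset.Interface, foo *samplev1alpha1.Foo, available int32) error {
	fooCopy := foo.DeepCopy()
	fooCopy.Status.AvailableReplicas = available
	// UpdateStatus exists on the generated client because the CRD enables
	// the /status subresource; without it, a plain Update would be required.
	_, err := client.SamplecontrollerV1alpha1().Foos(foo.Namespace).UpdateStatus(fooCopy)
	return err
}
```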
In the above steps, use `crd-status-subresource.yaml` to create the CRD:
```sh
# create a CustomResourceDefinition supporting the status subresource
$ kubectl create -f artifacts/examples/crd-status-subresource.yaml
```
You can clean up the created CustomResourceDefinition with:
```sh
$ kubectl delete crd foos.samplecontroller.k8s.io
```
HEAD of this repository will match HEAD of k8s.io/apimachinery and k8s.io/client-go.

`sample-controller` is synced from https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/sample-controller. Code changes are made in that location, merged into k8s.io/kubernetes, and later synced here.