This is a new project that is under development. The architecture, interfaces, and code layout are all subject to change. The policy syntax and ConstraintTemplate spec schema should be stable enough for alpha. For information on constraints and constraint templates, see the How to Use Gatekeeper section.
If you need OPA-style admission control right now, we recommend using the OPA Kubernetes Admission Control tutorial.
Join us to help define the direction and implementation of this project!
- Join the #kubernetes-policy channel on OPA Slack.
- Join weekly meetings to discuss development, issues, use cases, etc.
- Use GitHub Issues to file bugs, request features, or ask questions asynchronously.
Every organization has policies. Some are essential to meet governance and legal requirements. Others help ensure adherence to best practices and institutional conventions. Attempting to ensure compliance manually would be error-prone and frustrating. Automating policy enforcement ensures consistency, lowers development latency through immediate feedback, and helps with agility by allowing developers to operate independently without sacrificing compliance.
Kubernetes allows decoupling policy decisions from the inner workings of the API Server by means of admission controller webhooks, which are executed whenever a resource is created, updated or deleted. Gatekeeper is a validating (mutating TBA) webhook that enforces CRD-based policies executed by Open Policy Agent, a policy engine for Cloud Native environments hosted by CNCF as an incubation-level project.
In addition to the admission scenario, Gatekeeper's audit functionality allows administrators to see what resources are currently violating any given policy.
Finally, Gatekeeper's engine is designed to be portable, allowing administrators to detect and reject non-compliant commits to an infrastructure-as-code system's source-of-truth, further strengthening compliance efforts and preventing bad state from slowing down the organization.
WARNING: It is not recommended to install Gatekeeper on a production cluster. The project is in alpha and may or may not uninstall cleanly.
For either installation method, make sure you have cluster admin permissions:
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user <YOUR USER NAME>
If you want to deploy a released version of Gatekeeper in your cluster with a prebuilt image, then you can run the following command:
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper-constraint.yaml
Currently the most reliable way of installing Gatekeeper is to build and install from HEAD:
- Make sure Kubebuilder and Kustomize are installed.
- Clone the Gatekeeper repo to your local system.
- Make sure you have a container registry you can write to that is readable by the target cluster.
- cd to the repository directory.
- Run make docker-build REPOSITORY=<YOUR DESIRED DESTINATION DOCKER IMAGE>
- Run make docker-push-release REPOSITORY=<YOUR DESIRED DESTINATION DOCKER IMAGE>
- Make sure your kubectl context is set to the desired installation cluster.
- Run make deploy
Before uninstalling Gatekeeper, be sure to clean up old Constraints, ConstraintTemplates, Constraint CRDs, and Config CRDs.
Currently the uninstall mechanism only removes the Gatekeeper system; it does not remove any ConstraintTemplate, Constraint, or Config resources that have been created by the user, nor does it remove their accompanying CRDs.
When Gatekeeper is running it is possible to remove unwanted constraints by:
- Deleting all instances of the constraint resource
- Deleting the ConstraintTemplate resource, which should automatically clean up the CRD
- Deleting the Config resource, which removes finalizers on synced resources
If you used a prebuilt image to deploy Gatekeeper, then you can delete all the Gatekeeper components with the following command:
kubectl delete -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper-constraint.yaml
If you used make to deploy, then run the following to uninstall Gatekeeper:
- cd to the repository directory
- Run make uninstall
If Gatekeeper is no longer running and there are extra constraints in the cluster, then the finalizers, CRDs and other artifacts must be removed manually:
- Delete all instances of the constraint resource
- Execute kubectl patch crd constrainttemplates.templates.gatekeeper.sh -p '{"metadata":{"finalizers":[]}}' --type=merge. Note that this will remove all finalizers on every CRD. If this is not something you want to do, the finalizers must be removed individually.
- Delete the CRD and ConstraintTemplate resources associated with the unwanted constraint.
Gatekeeper uses the OPA Constraint Framework to describe and enforce policy. Look there for more detailed information on their semantics and advanced usage.
Before you can define a constraint, you must first define a ConstraintTemplate, which describes both the Rego that enforces the constraint and the schema of the constraint. The schema of the constraint allows an admin to fine-tune the behavior of a constraint, much like arguments to a function.
Here is an example constraint template that requires all labels described by the constraint to be present:
apiVersion: templates.gatekeeper.sh/v1alpha1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
        listKind: K8sRequiredLabelsList
        plural: k8srequiredlabels
        singular: k8srequiredlabels
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        deny[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.constraint.spec.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
You can install this ConstraintTemplate with the following command:
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/demo/basic/templates/k8srequiredlabels_template.yaml
Constraints are then used to inform Gatekeeper that the admin wants a ConstraintTemplate to be enforced, and how. This constraint uses the K8sRequiredLabels constraint template above to make sure the gatekeeper label is defined on all namespaces:
apiVersion: constraints.gatekeeper.sh/v1alpha1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-gk
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["gatekeeper"]
You can install this Constraint with the following command:
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/demo/basic/constraints/all_ns_must_have_gatekeeper.yaml
Note the match field, which defines the scope of objects to which a given constraint will be applied. It supports the following matchers:
- kinds accepts a list of objects with apiGroups and kinds fields that list the groups/kinds of objects to which the constraint will apply. If multiple groups/kinds objects are specified, only one match is needed for the resource to be in scope.
- namespaces is a list of namespace names. If defined, a constraint will only apply to resources in a listed namespace.
- labelSelector is a standard Kubernetes label selector.
Note that if multiple matchers are specified, a resource must satisfy each top-level matcher (kinds, namespaces, etc.) to be in scope. Each top-level matcher has its own semantics for what qualifies as a match. An empty matcher is deemed to be inclusive (matches everything).
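To make the combined semantics concrete, here is an illustrative constraint (the namespace names, label selector, and required label are placeholder values written for this document, not part of the demos) that only evaluates Pods that are in one of the listed namespaces and that carry the team: payments label:
apiVersion: constraints.gatekeeper.sh/v1alpha1
kind: K8sRequiredLabels
metadata:
  name: payments-pods-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    # Placeholder namespace names -- only resources in these namespaces are in scope
    namespaces: ["payments-prod", "payments-staging"]
    # Only objects that already carry this label are evaluated
    labelSelector:
      matchLabels:
        team: payments
  parameters:
    labels: ["owner"]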
Some constraints are impossible to write without access to more state than just the object under test. For example, it is impossible to know if an ingress's hostname is unique among all ingresses unless a rule has access to all other ingresses. To make such rules possible, we enable syncing of data into OPA.
The audit feature also requires replication. Because we rely on OPA as the source-of-truth for audit queries, an object must first be cached before it can be audited for constraint violations.
Kubernetes data can be replicated into OPA via the sync config resource. Currently, resources defined in syncOnly will be synced into OPA. Updating syncOnly should dynamically update what objects are synced. Below is an example:
apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: "gatekeeper-system"
spec:
  sync:
    syncOnly:
      - group: ""
        version: "v1"
        kind: "Namespace"
      - group: ""
        version: "v1"
        kind: "Pod"
You can install this config with the following command:
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/demo/basic/sync.yaml
Once data is synced into OPA, rules can access the cached data under the data.inventory document.
The data.inventory document has the following format:
- For cluster-scoped objects: data.inventory.cluster[<groupVersion>][<kind>][<name>]
  - Example referencing the Gatekeeper namespace: data.inventory.cluster["v1"].Namespace["gatekeeper"]
- For namespace-scoped objects: data.inventory.namespace[<namespace>][<groupVersion>][<kind>][<name>]
  - Example referencing the Gatekeeper pod: data.inventory.namespace["gatekeeper"]["v1"]["Pod"]["gatekeeper-controller-manager-0"]
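The unique-ingress-hostname rule mentioned earlier is a natural use of this cache. The following ConstraintTemplate is an illustrative sketch written for this document (the template name and Rego are examples, and it assumes Ingress objects have been added to syncOnly); it denies an ingress whose host already appears on a different replicated ingress:
apiVersion: templates.gatekeeper.sh/v1alpha1
kind: ConstraintTemplate
metadata:
  name: k8suniqueingresshost
spec:
  crd:
    spec:
      names:
        kind: K8sUniqueIngressHost
        listKind: K8sUniqueIngressHostList
        plural: k8suniqueingresshosts
        singular: k8suniqueingresshost
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8suniqueingresshost

        # True when the replicated object is the same ingress that is under review
        identical(obj, review) {
          obj.metadata.namespace == review.namespace
          obj.metadata.name == review.name
        }

        deny[{"msg": msg}] {
          host := input.review.object.spec.rules[_].host
          # Walk every replicated ingress in every namespace via data.inventory
          other := data.inventory.namespace[namespace][apiversion]["Ingress"][name]
          not identical(other, input.review)
          other.spec.rules[_].host == host
          msg := sprintf("ingress host conflicts with an existing ingress: %v", [host])
        }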
The audit functionality enables periodic evaluations of replicated resources against the policies enforced in the cluster to detect pre-existing misconfigurations. Audit results are stored as violations listed in the status field of the failed constraint.
apiVersion: constraints.gatekeeper.sh/v1alpha1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-gk
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["gatekeeper"]
status:
  auditTimestamp: "2019-05-11T01:46:13Z"
  enforced: true
  violations:
    - kind: Namespace
      message: 'you must provide labels: {"gatekeeper"}'
      name: default
    - kind: Namespace
      message: 'you must provide labels: {"gatekeeper"}'
      name: gatekeeper-system
    - kind: Namespace
      message: 'you must provide labels: {"gatekeeper"}'
      name: kube-public
    - kind: Namespace
      message: 'you must provide labels: {"gatekeeper"}'
      name: kube-system
NOTE: Audit requires replication of Kubernetes resources into OPA before they can be evaluated against the enforced policies. Refer to the Replicating data section for more information.
To configure audit frequency, update the auditInterval flag, which defaults to every 60 seconds. To configure the limit on how many audit violations are shown per constraint, update the constraintViolationsLimit flag, which defaults to 20.
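How these flags are set depends on how Gatekeeper was installed. As a hedged sketch, assuming they are passed as plain command-line arguments on the Gatekeeper controller-manager container (verify the exact flag names and format against your deployed manifest), the container spec could look roughly like this:
# Illustrative excerpt of a controller-manager container spec; the flag
# spelling and values below are assumptions -- check your installed Deployment.
containers:
  - name: manager
    image: <YOUR GATEKEEPER IMAGE>
    args:
      - "--auditInterval=120"             # audit every 120 seconds instead of 60
      - "--constraintViolationsLimit=50"  # report up to 50 violations per constraint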
The demo/basic directory contains the above examples of simple constraints, templates and configs to play with. The demo/agilebank directory contains more complex examples based on a slightly more realistic scenario. Both folders have a handy demo script to step you through the demos.