This repository contains rules for interacting with Kubernetes configurations / clusters.

Add the following to your `WORKSPACE` file to add the necessary external dependencies:
```python
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# If you also use rules_docker, see its setup instructions:
# https://github.com/bazelbuild/rules_docker/#setup
# http_archive("io_bazel_rules_docker", ...)

http_archive(
    name = "io_bazel_rules_k8s",
    strip_prefix = "rules_k8s-0.5",
    urls = ["https://github.com/bazelbuild/rules_k8s/archive/v0.5.tar.gz"],
    sha256 = "773aa45f2421a66c8aa651b8cecb8ea51db91799a405bd7b913d77052ac7261a",
)

load("@io_bazel_rules_k8s//k8s:k8s.bzl", "k8s_repositories")

k8s_repositories()

load("@io_bazel_rules_k8s//k8s:k8s_go_deps.bzl", k8s_go_deps = "deps")

k8s_go_deps()
```
As is somewhat standard for Bazel, the expectation is that the `kubectl` toolchain is preconfigured to authenticate with any clusters you might interact with. For more information on how to configure `kubectl` authentication, see the Kubernetes documentation.
NOTE: We are currently experimenting with toolchain features in these rules, so there will be upcoming changes to how this configuration is performed.
For Google Container Engine (GKE), the `gcloud` CLI provides a simple command for setting up authentication:

```shell
gcloud container clusters get-credentials <CLUSTER NAME>
```
New: Starting with [this commit](https://github.com/bazelbuild/rules_k8s/commit/ff2cbf09ae1f0a9c7ebdfc1fa337044158a7f57b), these rules can either use a pre-installed `kubectl` tool (the default) or build the `kubectl` tool from source. The `kubectl` tool is used when executing the `run` action from Bazel, and it is configured via a toolchain rule; read more about the `kubectl` toolchain here. If GKE is used, the `gcloud` SDK also needs to be installed.
```python
load("@io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")

k8s_object(
    name = "dev",
    kind = "deployment",
    # A template of a Kubernetes Deployment object yaml.
    template = ":deployment.yaml",
    # An optional collection of docker_build images to publish
    # when this target is bazel run. The digest of the published
    # image is substituted as a part of the resolution process.
    images = {
        "gcr.io/rules_k8s/server:dev": "//server:image",
    },
)
```
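For reference, a minimal `deployment.yaml` template that such a rule could resolve might look like the following sketch. The object name and labels are illustrative; the important part is that the container `image` reference matches a key in `images = {}`:

```yaml
# Hypothetical template; names and labels are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
      - name: server
        # Matches the key in images = {}; rewritten to a digest on resolution.
        image: gcr.io/rules_k8s/server:dev
```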
In your `WORKSPACE` you can set up aliases for a more readable short-hand:
```python
load("@io_bazel_rules_k8s//k8s:k8s.bzl", "k8s_defaults")

k8s_defaults(
    # This becomes the name of the @repository and the rule
    # you will import in your BUILD files.
    name = "k8s_deploy",
    kind = "deployment",
    # This is the name of the cluster as it appears in:
    #   kubectl config view --minify -o=jsonpath='{.contexts[0].context.cluster}'
    cluster = "my-gke-cluster",
)
```
Then, in place of the above, you can use the following in your `BUILD` file:
```python
load("@k8s_deploy//:defaults.bzl", "k8s_deploy")

k8s_deploy(
    name = "dev",
    template = ":deployment.yaml",
    images = {
        "gcr.io/rules_k8s/server:dev": "//server:image",
    },
)
```
Note that in `load("@k8s_deploy//:defaults.bzl", "k8s_deploy")`, both occurrences of `k8s_deploy` are references to the `name` parameter passed to `k8s_defaults`. If you change `name = "k8s_deploy"` to something else, you will need to change the `load` statement in both places.
It is common practice in the Kubernetes world to have multiple objects that comprise an application. There are two main ways that we support interacting with these kinds of objects.
The first is to simply use a template file that contains your N objects delimited with `---`, omitting `kind = "..."`.
The second is through the use of `k8s_objects`, which aggregates N `k8s_object` rules:
```python
# Note the plurality of "objects" here.
load("@io_bazel_rules_k8s//k8s:objects.bzl", "k8s_objects")

k8s_objects(
    name = "deployments",
    objects = [
        ":foo-deployment",
        ":bar-deployment",
        ":baz-deployment",
    ],
)

k8s_objects(
    name = "services",
    objects = [
        ":foo-service",
        ":bar-service",
        ":baz-service",
    ],
)

# These rules can be nested.
k8s_objects(
    name = "everything",
    objects = [
        ":deployments",
        ":services",
        ":configmaps",
        ":ingress",
    ],
)
```
This can be useful when you want to be able to stand up a full environment, which includes resources that are expensive to recreate (e.g. LoadBalancer), but still want to be able to quickly iterate on parts of your application.
A common practice to avoid clobbering other users is to do your development against an isolated environment. Two practices are fairly commonplace:
- Individual development clusters
- Development "namespaces"
To support these scenarios, the rules support using "stamping" variables to customize these arguments to `k8s_defaults` or `k8s_object`.
For per-developer clusters, you might use:
```python
k8s_defaults(
    name = "k8s_dev_deploy",
    kind = "deployment",
    cluster = "gke_dev-proj_us-central5-z_{BUILD_USER}",
)
```
For per-developer namespaces, you might use:
```python
k8s_defaults(
    name = "k8s_dev_deploy",
    kind = "deployment",
    cluster = "shared-cluster",
    namespace = "{BUILD_USER}",
)
```
You can customize the stamp variables that are available at a repository level by leveraging `--workspace_status_command`. One pattern for this is to check in the following:
```shell
$ cat .bazelrc
build --workspace_status_command="bash ./print-workspace-status.sh"

$ cat print-workspace-status.sh
cat <<EOF
VAR1 value1
# This can be overridden by users if they "export VAR2_OVERRIDE"
VAR2 ${VAR2_OVERRIDE:-default-value2}
EOF
```
For more information on "stamping", see also the `rules_docker` documentation on stamping.
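To sketch what Bazel expects from such a script: the status command must print `KEY value` pairs, one per line, on stdout, and each pair becomes a stamp variable. The following stand-alone check is hypothetical (the `/tmp` path and variable names are illustrative, and the override comment is moved outside the heredoc so it does not appear in the output):

```shell
# Write a status script of the same shape as print-workspace-status.sh above.
cat > /tmp/print-workspace-status.sh <<'SCRIPT'
# VAR2 can be overridden by exporting VAR2_OVERRIDE before the build.
cat <<EOF
VAR1 value1
VAR2 ${VAR2_OVERRIDE:-default-value2}
EOF
SCRIPT

# Bazel would parse each "KEY value" line of this output into a stamp variable.
VAR2_OVERRIDE=custom bash /tmp/print-workspace-status.sh
# prints:
#   VAR1 value1
#   VAR2 custom
```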
Another ugly problem remains, which is that image references are still shared across developers, and while our resolution to digests avoids races, we may not want them trampling on the same tag, or on production tags if shared templates are being used.
Moreover, developers may not have access to push to the images referenced in a particular template, or the development cluster to which they are deploying may not be able to pull them (e.g. clusters in different GCP projects).
To resolve this, we enable developers to "chroot" the image references, publishing them instead to that reference under another repository.
Consider the following, where developers use GCP projects named `company-{BUILD_USER}`:
```python
k8s_defaults(
    name = "k8s_dev_deploy",
    kind = "deployment",
    cluster = "gke_company-{BUILD_USER}_us-central5-z_da-cluster",
    image_chroot = "us.gcr.io/company-{BUILD_USER}/dev",
)
```
In this example, the `k8s_dev_deploy` rules will target the developer's cluster in their project, and images will all be published under the `image_chroot`.
For example, if the BUILD file contains:
```python
k8s_deploy(
    name = "dev",
    template = ":deployment.yaml",
    images = {
        "gcr.io/rules_k8s/server:dev": "//server:image",
    },
)
```
Then the references to `gcr.io/rules_k8s/server:dev` will be replaced with one to `us.gcr.io/company-{BUILD_USER}/dev/gcr.io/rules_k8s/server@sha256:...`.
Sometimes, you need to replace additional runtime parameters in the YAML file. While you can use `expand_template` for parameters known to the build system, you'll need a custom resolver if the parameter is determined at deploy time. A common example is Google Cloud Endpoints service versions, which are determined by the server.
You can pass a custom resolver executable as the `resolver` argument of all rules:
```python
sh_binary(
    name = "my_script",
    ...
)

k8s_deploy(
    name = "dev",
    template = ":deployment.yaml",
    images = {
        "gcr.io/rules_k8s/server:dev": "//server:image",
    },
    resolver = ":my_script",
)
```
This script may need to invoke the default resolver (`//k8s/go/cmd/resolver`) with all of its arguments. It may capture the default resolver's output and apply additional modifications to the YAML, printing the final YAML to stdout.
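As a minimal sketch of the post-processing half of such a resolver: the invocation of the default resolver is only indicated in a comment, already-resolved YAML is read from stdin in its place, and the `%SERVICE_VERSION%` placeholder, its value, and the script path are all hypothetical names:

```shell
# Hypothetical custom resolver. A real one would first invoke the default
# resolver (//k8s/go/cmd/resolver) with "$@" and capture its output; here we
# stand in already-resolved YAML on stdin instead.
cat > /tmp/my-resolver.sh <<'SCRIPT'
#!/bin/bash
resolved="$(cat)"                 # stand-in for the default resolver's output
service_version="v-20200101"      # in practice, determined at deploy time
# Substitute the hypothetical placeholder; print the final YAML to stdout.
echo "$resolved" | sed "s/%SERVICE_VERSION%/${service_version}/g"
SCRIPT

echo "serviceVersion: %SERVICE_VERSION%" | bash /tmp/my-resolver.sh
# prints: serviceVersion: v-20200101
```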
The `k8s_object[s]` rules expose a collection of actions. We will follow the `:dev` target from the example above.
Building `:dev` builds all of the constituent elements and makes the template available as `{name}.yaml`. If `template` is a generated input, it will be built. Likewise, any `docker_build` images referenced from the `images = {}` attribute will be built.
```shell
bazel build :dev
```
Deploying with tags, especially in production, is a bad practice because they are mutable. If a tag changes, it can lead to inconsistent versions of your app running after auto-scaling or auto-healing events. Thankfully in v2 of the Docker Registry, digests were introduced. Deploying by digest provides cryptographic guarantees of consistency across the replicas of a deployment.
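To illustrate the difference (the project name and digest below are made up):

```yaml
# By tag: mutable; what actually runs can change after the fact.
image: gcr.io/my-project/server:v1
# By digest: immutable; the sha256 pins the exact image contents.
image: gcr.io/my-project/server@sha256:77af4d6b9913e693e8d0b4b294fa62ade6054e6b2f1ffb617ac955dd63fb0182
```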
You can "resolve" your resource `template` by running:
```shell
bazel run :dev
```
The resolved `template` will be printed to `STDOUT`.
This command will publish any `images = {}` present in your rule, substituting those exact digests into the yaml template, and for other images resolving the tags to digests by reaching out to the appropriate registry. Any images that cannot be found or accessed are left unresolved.
This process only supports fully-qualified tag names. This means you must always specify tag and registry domain names (no implicit `:latest`).
Users can create an environment by running:
```shell
bazel run :dev.create
```
This deploys the resolved template, which includes publishing images.
Users can update (replace) their environment by running:
```shell
bazel run :dev.replace
```
Like `.create`, this deploys the resolved template, which includes republishing images. This action is intended to be the workhorse of fast-iteration development (rebuilding / republishing / redeploying).
Users can "apply" a configuration by running:
```shell
bazel run :dev.apply
```
`:dev.apply` maps to `kubectl apply`, which will create or replace an existing configuration. For more information see the `kubectl` documentation.
This applies the resolved template, which includes republishing images. This action is intended to be the workhorse of fast-iteration development (rebuilding / republishing / redeploying).
Users can tear down their environment by running:
```shell
bazel run :dev.delete
```
It is notable that despite deleting the deployment, this will NOT delete any services currently load balancing over the deployment; this is intentional as creating load balancers can be slow.
Users can "describe" their environment by running:
```shell
bazel run :dev.describe
```
Users can "diff" a configuration by running:
```shell
bazel run :dev.diff
```
`:dev.diff` maps to `kubectl diff`, which diffs the live configuration against the version that would be applied. For more information see the `kubectl` documentation.
This diffs the resolved template, but does not include republishing images.
`k8s_object(name, kind, template)`

A rule for interacting with Kubernetes objects.
| Attribute | Description |
|---|---|
| `name` | Unique name for this rule. |
| `kind` | The kind of the Kubernetes object in the yaml. If this is omitted, the |
| `cluster` | The name of the cluster to which the actions are applied. If this is omitted, the |
| `context` | The name of a kubeconfig context to use. Subject to "Make" variable substitution. If this is omitted, the current context will be used. |
| `namespace` | The namespace on the cluster within which the actions are performed. Subject to "Make" variable substitution. If this is omitted, it will default to the value specified in the template or, if also unspecified there, to the value |
| `user` | The user to authenticate to the cluster as configured with kubectl. Subject to "Make" variable substitution. If this is omitted, kubectl will authenticate as the user from the current context. |
| `kubeconfig` | The kubeconfig file to pass to the `kubectl` tool via the `--kubeconfig` option. Can be useful if the `kubeconfig` is generated by another target. |
| `substitutions` | Substitutions to make when expanding the template. Follows the same rules as `expand_template`. Values are "make variable substituted." You can also use the Bazel command line option. Any stamp variables are also replaced with their values; this is done after make variable substitution. |
| `template` | The yaml or json for a Kubernetes object. |
| `images` | When this target is `bazel run`, the published digests of these images will be substituted directly, so as to avoid a race in the resolution process. Subject to "Make" variable substitution. |
| `image_chroot` | The repository under which to actually publish Docker images. |
| `resolver` | A build target for the binary that's called to resolve references inside the Kubernetes YAML files. |
| `args` | Additional arguments to pass to the kubectl command at execution. NOTE: You can also pass args via the CLI by running something like: NOTE: Not all options are available for all kubectl commands. To view the list of global options run: |
| `resolver_args` | Additional arguments to pass to the resolver directly. NOTE: This option is to pass specific arguments to the resolver directly, such as |
`k8s_objects(name, objects)`

A rule for interacting with multiple Kubernetes objects.
| Attribute | Description |
|---|---|
| `name` | Unique name for this rule. |
| `objects` | The list of objects on which actions are taken. If a dict is provided, it will be converted to a select statement. |
`k8s_defaults(name, kind)`

A repository rule that allows users to alias `k8s_object` with default values.
| Attribute | Description |
|---|---|
| `name` | The name of the repository that this rule will create. Also the name of the rule imported from |
| `kind` | The kind of objects the alias of |
| `cluster` | The name of the cluster to which the actions are applied. This should match the cluster name as it would appear in |
| `context` | The name of a kubeconfig context to use. |
| `namespace` | The namespace on the cluster within which the actions are performed. |
| `user` | The user to authenticate to the cluster as configured with kubectl. |
| `image_chroot` | The repository under which to actually publish Docker images. |
| `resolver` | A build target for the binary that's called to resolve references inside the Kubernetes YAML files. |
To test rules_k8s, you can run the provided e2e tests locally on Linux by following these instructions.
Users can find help on Stack Overflow, Slack, and the Google Group mailing list.
Stack Overflow is a great place for developers to help each other. Search through existing questions to see if someone else has had the same issue as you. If you have a new question, please ask the Stack Overflow community. Include `rules_k8s` in the title and add the `[bazel]` and `[kubernetes]` tags.
The general Bazel support options page links to the official bazel-discuss Google Group mailing list.
Slack and IRC are great places for developers to chat with each other. There is a `#bazel` channel in the Kubernetes Slack; visit the Kubernetes community page to find the slack.k8s.io invitation link. There is also a `#bazel` channel on Freenode IRC, although we have found the Slack channel more engaging.
Here's a (non-exhaustive) list of companies that use `rules_k8s` in production. Don't see yours? You can add it in a PR!