The `kubit` operator is a Kubernetes controller that can render and apply jsonnet templates based on the `kubecfg` jsonnet tooling/framework.
`kubit` aims to decouple the persona who builds a package from the persona who installs it.
In the current landscape, the choice of templating engine is heavily influenced by whichever tool your users are most comfortable with.
For example, if you think your users are going to prefer using `helm` to install the package, you're likely to pick `helm` as your templating language.
But it doesn't have to be this way. What if the tool used to install the package were decoupled from the choice of the tool used to build it?
By using `kubit` as the package installation method, the choice of `helm`, `kustomize`, or anything else becomes irrelevant: `kubit` installs packages from generic OCI bundles, simple tarballs containing the manifests that describe how to install the package.
This means the installation experience is decoupled from the language chosen to package the application; the package is simply handed to `kubit`, which abstracts the details away and performs the necessary installation steps.
The Kubernetes controller is the main way to use kubit. Install it with:

kubectl apply -k 'https://github.com/kubecfg/kubit//kustomize/global?ref=v0.0.19'

The CLI is an optional tool that provides helpers and alternative ways to install and inspect packages. Install it with Homebrew (popular on macOS, but also available on Linux):

brew install kubecfg/kubit/kubit

or directly from sources:

cargo install --git https://github.com/kubecfg/kubit/ --tag v0.0.19

Basic usage:
- Install the kubit operator once
- Apply a CR that references a package OCI artifact
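Concretely, using the install and apply commands that appear throughout this README, those two steps are:

```shell
# 1. Install the kubit operator (cluster-wide flavor).
kubectl apply -k 'https://github.com/kubecfg/kubit//kustomize/global?ref=v0.0.19'

# 2. Apply an AppInstance CR, such as the foo.yaml example below.
kubectl apply -f foo.yaml
```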
Example `foo.yaml` CR:
apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
metadata:
  name: foo
  namespace: myns
spec:
  package:
    image: ghcr.io/kubecfg/demo:v0.1.0
    apiVersion: demo/v1alpha1
    spec:
      bar: baz
Such a CR can be applied using standard Kubernetes tooling such as `kubectl` or ArgoCD:
kubectl apply -f foo.yaml
The controller will continuously attempt to reconcile the desired state of the application instance and record the outcome of the reconciliation in the `status` field of the `AppInstance` custom resource.
You can observe the `status` field of the `AppInstance` resource using standard Kubernetes tooling such as:
kubectl get -f foo.yaml -o json | jq .status
TIP: render the logs in a more readable format with:
kubectl get -f foo.yaml -o json | jq -r '.status.lastLogs|to_entries[] | "\(.key): \(.value)"'
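You can also follow the reconciliation as it happens; the sketch below uses only generic `kubectl` watching, no kubit-specific flags:

```shell
# Watch the AppInstance created from foo.yaml while the controller reconciles it.
kubectl get -f foo.yaml -w
```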
The `kubecfg pack` command can be used to take a jsonnet file and all its dependencies and push them all together as a bundle into an OCI artifact.
kubecfg pack ghcr.io/kubecfg/demo:v0.1.0 demo.jsonnet
You can run the same logic that the `kubit` controller does when rendering and applying a template by running the `kubit` CLI tool from your laptop:
kubit local apply foo.yaml
`kubit` is just a relatively thin wrapper on top of `kubecfg`.
For increased compatibility, it uses the `kubectl apply` operation to apply the manifests with more standard tooling, rather than the `kubecfg` integrated Kubernetes API.
You can preview the actual commands that `kubit` will run with:
kubit local apply foo.yaml --dry-run=script
Other interesting options are `--dry-run=render` and `--dry-run=diff`, which respectively just render the YAML without applying it, and render and diff the manifests against the running application. This can be useful to preview the effects of changes in the spec or between versions of a package.
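For example, reusing the `foo.yaml` from above with the two modes just described:

```shell
# Only render the manifests, without applying anything to the cluster.
kubit local apply foo.yaml --dry-run=render

# Render the manifests and diff them against the running application.
kubit local apply foo.yaml --dry-run=diff
```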
If you do not wish to install later versions of `kubectl` and `kubecfg` onto your system, you can specify the `--docker` flag to have the dependencies run as Docker containers instead.
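For example:

```shell
# Run the kubectl/kubecfg dependencies as Docker containers instead of local binaries.
kubit local apply foo.yaml --docker
```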
Sometimes you'd like to try out some jsonnet code before you package it up and publish it to your OCI registry:
kubit local apply foo.yaml --dry-run=diff --package-image file://$HOME/my-project/my-main.jsonnet
By default, `kubit` runs in its own `kubit` namespace. This is not always desired, so `kubit` also supports running in a specified namespace.
This has a few advantages:
- `kubit` only requires a `Role` and `RoleBinding` when running in a single namespace and does not require the `CRD`
- In companies/environments where namespaces are limited, `kubit` can run alongside the app without needing a second namespace
To use `kubit` in single namespace mode, install the `single-namespace` flavor of the `kustomize` package into a specific namespace:
kubectl apply -k 'https://github.com/kubecfg/kubit//kustomize/single-namespace?ref=v0.0.19' -n <my-application-namespace>
This instance of `kubit` is then configured by creating a `ConfigMap` named `app-instance`, with the `data` field containing a key `app-instance` that holds the YAML version of the `AppInstance`.
Example of how to create this `ConfigMap`:
kubectl create configmap -n mycoolapp app-instance --from-file=app-instance=example-kubit-testing.yaml
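For reference, the `ConfigMap` produced by that command has roughly the following shape. This sketch recreates it with a heredoc piped to `kubectl apply`, reusing the example `AppInstance` from earlier; the embedded names and spec values are illustrative:

```shell
# A ConfigMap named app-instance whose "app-instance" data key holds the
# AppInstance manifest as YAML (equivalent to the create-configmap command above).
cat <<'EOF' | kubectl apply -n mycoolapp -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-instance
data:
  app-instance: |
    apiVersion: kubecfg.dev/v1alpha1
    kind: AppInstance
    metadata:
      name: foo
      namespace: mycoolapp
    spec:
      package:
        image: ghcr.io/kubecfg/demo:v0.1.0
        apiVersion: demo/v1alpha1
        spec:
          bar: baz
EOF
```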
To develop kubit itself, first create the Kubernetes resources:
kubectl apply -k ./kustomize/local
The manifests in `./kustomize/local` are like `./kustomize/global` but don't spawn the kubit controller.
Build and run the controller locally:
cargo run -- --as system:serviceaccount:kubit:kubit
If you have already installed kubit (e.g. with `kubectl apply -k ./kustomize/global`) in your test cluster but still want to quickly run the locally built kubit controller without uninstalling the in-cluster controller, you can pause an `AppInstance` and run the local controller with `--only-paused`:
kubectl patch -f foo.yaml --patch '{"spec":{"pause": true}}' --type merge
Then you can run the controller locally and have it process only the resource you paused:
cargo run -- --as system:serviceaccount:kubit:kubit --only-paused
To unpause the resource:
kubectl patch -f foo.yaml --patch '{"spec":{"pause": false}}' --type merge