Prerequisite | Installation | Quickstart | Documentation | Troubleshooting
kubectl-opslevel is a command line tool that enables you to import & reconcile services with OpsLevel from your Kubernetes clusters. You can also run this tool inside your Kubernetes cluster as a job to reconcile the data with OpsLevel periodically using our Helm Chart.
brew install opslevel/tap/kubectl
The Docker container is hosted on AWS Public ECR.
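As a minimal pull sketch, you can fetch the image directly; the repository path below is an assumption for illustration, so confirm the actual name in the OpsLevel gallery on AWS Public ECR:

# Hypothetical image path - check AWS Public ECR for the real repository name
docker pull public.ecr.aws/opslevel/kubectl-opslevel:latest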
# Generate a config file
kubectl opslevel config sample > ./opslevel-k8s.yaml
# Like Terraform, generate a preview of data from your Kubernetes cluster
# NOTE: this step does not validate any of the data with OpsLevel
kubectl opslevel service preview
# Import (and reconcile) the found data with your OpsLevel account
OPSLEVEL_API_TOKEN=XXXX kubectl opslevel service import
version: "1.1.0"
service:
  import:
    - selector: # This limits what data we look at in Kubernetes
        apiVersion: apps/v1 # only supports resources found in 'kubectl api-resources --verbs="get,list"'
        kind: Deployment
        excludes: # filters out resources if any expression returns truthy
          - .metadata.namespace == "kube-system"
          - .metadata.annotations."opslevel.com/ignore"
      opslevel: # This is how you map your kubernetes data to opslevel service
        name: .metadata.name
        description: .metadata.annotations."opslevel.com/description"
        owner: .metadata.annotations."opslevel.com/owner"
        lifecycle: .metadata.annotations."opslevel.com/lifecycle"
        tier: .metadata.annotations."opslevel.com/tier"
        product: .metadata.annotations."opslevel.com/product"
        language: .metadata.annotations."opslevel.com/language"
        framework: .metadata.annotations."opslevel.com/framework"
        aliases: # These are how we identify the services again during reconciliation - please make sure they are unique
          - '"k8s:\(.metadata.name)-\(.metadata.namespace)"'
        tags:
          assign: # tags with the same key name but a different value will be updated on the service
            - '{"imported": "kubectl-opslevel"}'
            # find annotations with format: opslevel.com/tags.<key name>: <value>
            - '.metadata.annotations | to_entries | map(select(.key | startswith("opslevel.com/tags"))) | map({(.key | split(".")[2]): .value})'
            - .metadata.labels
          create: # tags with the same key name but a different value will be added to the service
            - '{"environment": .spec.template.metadata.labels.environment}'
        tools:
          - '{"category": "other", "displayName": "my-cool-tool", "url": .metadata.annotations."example.com/my-cool-tool"} | if .url then . else empty end'
          # find annotations with format: opslevel.com/tools.<category>.<displayname>: <url>
          - '.metadata.annotations | to_entries | map(select(.key | startswith("opslevel.com/tools"))) | map({"category": .key | split(".")[2], "displayName": .key | split(".")[3], "url": .value})'
        repositories: # attach repositories to the service using the opslevel repo alias - IE github.com:hashicorp/vault
          - '{"name": "My Cool Repo", "directory": "/", "repo": .metadata.annotations.repo} | if .repo then . else empty end'
          # if just the alias is returned as a single string we'll build the name for you and set the directory to "/"
          - .metadata.annotations.repo
          # find annotations with format: opslevel.com/repos.<displayname>.<repo.subpath.dots.turned.to.forwardslash>: <opslevel repo alias>
          - '.metadata.annotations | to_entries | map(select(.key | startswith("opslevel.com/repos"))) | map({"name": .key | split(".")[2], "directory": .key | split(".")[3:] | join("/"), "repo": .value})'
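To make the annotation formats above concrete, here is a sketch of a Deployment that exercises them; the service name, owner alias, tool URL, and repo alias are assumptions for illustration only:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: shopping-cart                              # becomes the service name via .metadata.name
  namespace: default
  annotations:
    opslevel.com/description: "Handles customer shopping carts"
    opslevel.com/owner: order-team                 # hypothetical team alias
    opslevel.com/tags.env: production              # -> tag env:production
    opslevel.com/tools.logs.Splunk: "https://splunk.example.com"               # -> tool "Splunk" in category "logs"
    opslevel.com/repos.Backend.src.backend: "github.com:example/shopping-cart" # -> repo "Backend" at directory "src/backend"
spec:
  selector:
    matchLabels:
      app: shopping-cart
  template:
    metadata:
      labels:
        app: shopping-cart
        environment: production                    # picked up by the tags create example above
    spec:
      containers:
        - name: app
          image: example/shopping-cart:latest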
The tool can generate shell autocompletion scripts for bash, zsh, fish, and powershell. To generate the completion script for macOS zsh:
kubectl opslevel completion zsh > /usr/local/share/zsh/site-functions/_kubectl-opslevel
Make sure you have zsh completion turned on by having the following as one of the first few lines in your .zshrc file:
echo "autoload -U compinit; compinit" >> ~/.zshrc
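The same subcommand works for the other shells. For bash, for example, the output path below assumes the bash-completion package's default directory and may differ on your system:

kubectl opslevel completion bash > /usr/local/etc/bash_completion.d/kubectl-opslevel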
The tool can also output a JSON-Schema file for use in IDEs when editing the configuration file. You can read more about adding JSON-Schema validation to VS Code.
kubectl opslevel config schema > ~/.opslevel-k8s-schema.json
Then add the following to your VS Code user settings:
"yaml.schemas": {
"~/.opslevel-k8s-schema.json": ["opslevel-k8s.yaml"],
}
This can happen for a number of reasons:
- Kubernetes RBAC permissions do not allow for listing namespaces
- Configuration file exclude rules exclude all found resources
Generally speaking, if any other kubectl command works, e.g. kubectl get deployment, then any kubectl opslevel command should work too. If this is not the case, there is likely a special authentication mechanism in place that we are not handling properly. This should be reported as a bug.
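If you suspect a permissions problem instead, a quick sanity check with standard kubectl (not specific to this tool) is:

# Confirm the current context can list the resources your selectors target
kubectl auth can-i list namespaces
kubectl auth can-i list deployments --all-namespaces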
For the most part jq filter failures are bubbled up, but in certain edge cases they can fail silently. The best way to test a jq expression in isolation is to emit the Kubernetes resource as JSON, e.g. kubectl get deployment <name> -o json, and then play around with the expression in jqplay.
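You can also iterate locally with the jq binary; the Deployment name web and the file name below are placeholders:

# Dump the resource once, then refine the filter against the saved JSON
kubectl get deployment web -o json > web.json
jq '.metadata.annotations | to_entries | map(select(.key | startswith("opslevel.com/tags")))' web.json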
Generally speaking, if we detect a JSON null value we do not build any data for that field. There is a special edge case with string interpolation and null values that we cannot handle, which is documented here.
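As a quick sketch (the Deployment name web is a placeholder), a missing annotation simply evaluates to null and the field is skipped:

kubectl get deployment web -o json | jq '.metadata.annotations."opslevel.com/description"'
# prints null when the annotation is absent, so no description is built for the service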
Sometimes, in clusters with tight permissions, listing all Namespaces is not allowed. The tool currently tries to list all Namespaces in a cluster to use as a batching mechanism. This API call can be skipped by providing an explicit namespaces list in the selector:
service:
  import:
    - selector: # This limits what data we look at in Kubernetes
        apiVersion: apps/v1 # only supports resources found in 'kubectl api-resources --verbs="get,list"'
        kind: Deployment
        namespaces:
          - default
          - kube-system