# bingo is not git ops
This is for TKGs supervisor clusters. It's some light and dumb automation to manage some basics, namely guest clusters and base workloads on them. It runs a vSphere pod which polls a git repo and applies the repo's content. It is absolutely not supported, just a PoC, and you should not use it.
I want some light automation to manage the guest clusters in git without the need to apply things manually, and with no external dependencies (e.g. running Concourse or Argo somewhere). Ideally, I'd have kapp-controller or something similar running in the supervisor cluster. I think that will come eventually, but it's not here yet.

Because I am not allowed to deploy CRDs into the supervisor cluster, I can't deploy kapp-controller or something similar. Thus, I opted for the next best thing any experienced engineer would opt for: a bunch of shell scripts!
The vSphere pod runs two containers:
- git-sync to pull in a git repo
- bingo to apply the whole world when a change in the git repo is detected
The git-sync & bingo containers share the git repo and a fifo. Once the
git-sync container discovers a change in the git repo, it notifies the other
container via that fifo. The bingo container kicks in and `kapp deploy`s stuff in the following order:
1. `${BASE}/${NS}/${CLUSTER}/cluster.yml`

   All those files will be collected and deployed as a `kapp` app. Thus, if you remove a cluster by deleting such a file, `kapp` will make sure to delete the cluster from the supervisor.

2. `${BASE}/${NS}/${CLUSTER}/*workload*.yml`

   A separate `kapp` app will be deployed on each workload cluster with the files matching the above glob. Here too, when you remove a file or any object in any of those files, `kapp` will delete those objects from the respective workload cluster.
where

- `$BASE` is a subdirectory inside the git repo
- `$NS` is the workload namespace in vSphere, i.e. a namespace in the supervisor cluster. These namespaces need to be created and configured (VM classes, content library, ...) up front / out of band; bingo won't handle that. However, bingo needs to be configured to be allowed to run against each of those namespaces. Have a look at `./bingo.yml`; you need to configure all namespaces there.
- `$CLUSTER` is the guest cluster's name, i.e. the `metadata.name` of a `TanzuKubernetesCluster`
`$NS` & `$CLUSTER` are especially important, because those will be used to pull the kubeconfig of a guest cluster from secrets in the supervisor cluster. Thus you need to ensure the directories in the git repo are named correctly, exactly the same as you've named the workload namespaces and your clusters.
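For illustration, pulling a guest cluster's kubeconfig from the supervisor could look like the snippet below. The secret name scheme `<cluster>-kubeconfig` with the config base64-encoded under the `value` key follows the usual TKGS convention; the namespace and cluster names here are placeholders:

```shell
# Hypothetical example: fetch the kubeconfig of guest cluster "cl01" living
# in workload namespace "ns01" from the supervisor cluster.
NS='ns01'
CLUSTER='cl01'

kubectl get secret "${CLUSTER}-kubeconfig" -n "${NS}" \
  -o jsonpath='{.data.value}' \
  | base64 -d > "kubeconfig-${CLUSTER}"
```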
All files, the `cluster.yml` and the `*workload*.yml` ones, will be run through `ytt` before they get `kapp`lied. In those files you have access to the following variables, which are derived from the directory path in the git repo (i.e. `$NS` & `$CLUSTER`):
- in `cluster.yml`:
  - `ns`: the vSphere namespace the cluster is (about to be) deployed into
  - `cluster`: the name of the cluster that is (about to be) deployed
- in `*workload*.yml`, if you load `ytt`'s `data` module:
  - `data.values.clusterNS`: the vSphere namespace of the cluster
  - `data.values.cluster`: the name of the cluster
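A rough sketch of what this templating step could look like for one cluster's workload files; the paths, app name, and the way data values are injected are assumptions for illustration, not the actual bingo scripts (`--data-value`, `-a`, `-f-`, and `--yes` are standard ytt/kapp flags):

```shell
# Hypothetical sketch of the "ytt | kapp" step for one cluster's workload
# files; repo layout and app name are assumptions.
NS='ns01'        # workload namespace (directory name in the repo)
CLUSTER='cl01'   # guest cluster name (directory name in the repo)

ytt -f "repo/${NS}/${CLUSTER}/workload.yml" \
    --data-value "clusterNS=${NS}" \
    --data-value "cluster=${CLUSTER}" \
  | kapp deploy --kubeconfig "kubeconfig-${CLUSTER}" \
      -a "workloads-${CLUSTER}" -f- --yes
```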
It runs everything in serial, and if there is an error on `apply` or `delete` it will just 🤷 and try again on the next update of the repo.
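The handshake between the two containers (git-sync writing into the shared fifo, bingo blocking on it) can be sketched roughly like this toy version; the pipe location and message are made up for illustration:

```shell
#!/usr/bin/env bash
# Toy sketch of the fifo handshake between git-sync and bingo.
# The real containers share a pipe on a common volume; here we use a temp dir.
set -euo pipefail

pipe="$(mktemp -d)/pipe"
mkfifo "$pipe"

# git-sync side: announce one change, then we're done.
( echo 'changed' > "$pipe" ) &

# bingo side: block until something arrives, then (pretend to) deploy.
read -r msg < "$pipe"
echo "notified (${msg}) - would run ytt | kapp deploy now"
```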
- As said, if there are errors in a `kapp deploy` run, bingo won't care. It will tell you in the logs, but won't do much about it. However, it will periodically reapply, by default every 5min.
- Runs everything in serial:
  - first runs all `cluster.yml`s
  - then runs all `*workload*.yml`s, one cluster after the other
- Needs open firewalls, i.e. the supervisor cluster needs to be able to reach the git repo and the guest clusters
- if any of the scripts change, you need to manually restart bingo
- ... and a lot more ...
Alright, you really still want to test-drive this thing? Be my guest, but don't shout at me if it breaks and messes up your lovely guest clusters!
- prepare your git repo with the directory/file structure laid out above (you can find an example in `./example/`)
- set up all your workload namespaces
- configure bingo by setting env vars:
  - `export BINGO_namespaces='[ "ns01", "ns02" ]'`
    to add all vSphere namespaces bingo should be able to deploy/manage clusters in
  - `export BINGO_repo='{"url": "git@github.com:hoegaarden/bingo", "dir": "example", "priv-key": "-----BEGIN OPENSSH PRIVATE KEY-----...."}'`
    to specify which repo holds the cluster / workload configs, the subdirectory these configs are in, and which key to use to pull it
- deploy to the supervisor cluster:
  `make install`
- check if it actually works, e.g.:
  `kubectl tail -n bingo`
An example `.envrc` to set up all variables to deploy bingo could look something like:

```shell
export BINGO_namespaces='[ "ns01" ]'
export BINGO_repo='
url: git@github.com:hoegaarden/bingo
dir: example
priv-key: |
  -----BEGIN OPENSSH PRIVATE KEY-----
  b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
  nope nope nope
  Nonha/zPwQDL8AAAALaG9ybGhAYmx1cHA=
  -----END OPENSSH PRIVATE KEY-----
'
```
- you can force a re-run by pushing some random commit to the repo, or, way nicer, init?
  `kubectl exec deploy/bingo -c bingo -- bash -c 'echo > /shared/pipe'`
- after you've fixed some major bugs in the scripts, you can reload things by either running
  `kubectl rollout restart deploy bingo`
  or 🤯🥷
  `kubectl exec deploy/bingo -c bingo -- bash -c 'echo -n reload > /shared/pipe'`
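Presumably the bingo loop just distinguishes the literal `reload` message from a plain ping; a toy version of such a dispatch could look like the following (the function and behavior are assumptions, not the actual scripts):

```shell
#!/usr/bin/env bash
# Toy dispatch on the message read from the pipe: "reload" re-reads the
# scripts, anything else just triggers a redeploy. Purely illustrative.
handle() {
  case "$1" in
    reload) echo 'reloading scripts' ;;
    *)      echo 'redeploying' ;;
  esac
}

handle ''        # plain ping:   echo > /shared/pipe
handle 'reload'  # reload:       echo -n reload > /shared/pipe
```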
First off, don't wait for any improvements. Secondly, there is quite some stuff that could be done to make this thing a bit better:

- actually test this thing
- publish container images with everything baked in, so we don't have to maintain the scripts in a config map
- implement it in a proper language