# bank-vaults-repro

Reproduction repo for PRs/issues with Banzai Cloud's bank-vaults.

## Prerequisites
- make
- k3d
- kubectl
  - Alternatively, you can use any other Kubernetes API interaction tool like k9s.
- helm
- helmfile
- vault
- Have the bank-vaults repo (or a fork of it, e.g. patoarvizu/bank-vaults) cloned locally.
## Reproducing feature in PR 1651

This repo deploys a `Vault` object in a local Kubernetes instance that leverages the feature introduced in bank-vaults PR 1651 to use placeholders for mount accessor ids in templated policies.
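For context, the placeholder is declared in the `Vault` custom resource itself. The following is a minimal, hypothetical sketch of what such a policy entry could look like; the layout follows the bank-vaults `externalConfig` convention, and `${ accessor ... }` is left as an elided stand-in for the exact placeholder syntax introduced in the PR:

```yaml
# Hypothetical sketch, not copied verbatim from this repo's manifests.
apiVersion: vault.banzaicloud.com/v1alpha1
kind: Vault
metadata:
  name: vault
spec:
  externalConfig:
    policies:
      # At reconcile time the operator replaces the placeholder with the
      # real mount accessor id, e.g. auth_kubernetes_abcd1234.
      - name: templated
        rules: |
          path "secret/data/{{identity.entity.aliases.${ accessor ... }.metadata.service_account_namespace}}" {
            capabilities = ["read"]
          }
```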
- Go to the directory where the patoarvizu/bank-vaults clone is, and check out the `parameterize-mount-accessor` branch.
- Run `DOCKER_REGISTRY=patoarvizu DOCKER_TAG=latest make docker`.
- Set `KUBECONFIG` to a k3d-specific file, to avoid having your default configuration or default Kubernetes cluster edited by accident, e.g. `export KUBECONFIG=~/.k3d/k3s-default-config`.
- Run `make start` and wait for all charts to finish installing. This installs the following:
  - The bank-vaults operator.
  - A `Vault` instance with pre-configured roles, a templated policy, and startup secrets.
  - A cert-manager instance.
  - The `vault-secrets-webhook`.
  - Demo cronjobs running workloads in different namespaces that echo secrets injected by the `vault-secrets-webhook`, fetched from prefixes that they should/shouldn't have access to.
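  One way to confirm everything settled before continuing (a generic check, not part of the repo's Makefile):

  ```sh
  # Generic sanity checks: all releases deployed, all pods running.
  helm list --all-namespaces
  kubectl get pods --all-namespaces
  ```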
- Run `export VAULT_ADDR=http://localhost:8200`.
- Run `export VAULT_TOKEN=$(make get-root-token)`.
- Run `vault policy read templated`. It should display something like the following, with the correct interpolation of the `${ accessor ... }` placeholder defined in the `Vault` object:
path "secret/data/{{identity.entity.aliases.auth_kubernetes_abcd1234.metadata.service_account_namespace}}" {
capabilities = ["read"]
}
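  To cross-check the interpolation, you can compare it against the live mount accessor. This assumes the Kubernetes auth method is mounted at the default `kubernetes/` path and that `jq` is installed:

  ```sh
  # Should print the same accessor id that appears in the policy above,
  # e.g. auth_kubernetes_abcd1234.
  vault auth list -format=json | jq -r '."kubernetes/".accessor'
  ```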
- Inspect the workloads in the `repro-ns1` namespace, i.e. `kubectl -n repro-ns1 get pods`. If you don't see any pods yet, wait up to one minute until Kubernetes schedules the next cronjob run.
- You'll see one pod called something like `echo-secret-found-01234567--1-abcde` in `Completed` status, and another one called `echo-secret-not-found-76543210--1-edcba` in either `Error` or `CrashLoopBackOff` status, as illustrated below.
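  Illustrative output (names, ages, and restart counts will differ):

  ```
  NAME                                      READY   STATUS      RESTARTS   AGE
  echo-secret-found-01234567--1-abcde       0/1     Completed   0          45s
  echo-secret-not-found-76543210--1-edcba   0/1     Error       1          45s
  ```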
- Inspect the logs of the `echo-secret-found` pod with `kubectl -n repro-ns1 logs -l secret=found`. You should see a line that says `Found secret foo at secret/data/repro-ns1`.
- Inspect the logs of the `echo-secret-not-found` pod with `kubectl -n repro-ns1 logs -l secret=not-found`. You should see a line with an error that says `failed to inject secrets from vault: failed to read secret from path: secret/data/repro-ns2: Error making API request.\n\nURL: GET http://vault.vault:8200/v1/secret/data/repro-ns2?version=-1\nCode: 403. Errors:\n\n* 1 error occurred:\n\t* permission denied\n\n"`.
- Conversely, inspect the analogous workloads in the `repro-ns2` namespace, and you'll see that the pattern is the same: the `secret-found` pods could fetch secret `bar` from `secret/data/repro-ns2`, but the `secret-not-found` pods fail when trying to fetch secrets from `secret/data/repro-ns1` (see the sketch below).
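The same checks can be scripted for the second namespace. A small sketch, assuming the demo workloads in `repro-ns2` carry the same `secret=found`/`secret=not-found` labels as those in `repro-ns1`:

```sh
# Mirror of the repro-ns1 checks, pointed at repro-ns2.
kubectl -n repro-ns2 get pods
# Expect: "Found secret bar at secret/data/repro-ns2"
kubectl -n repro-ns2 logs -l secret=found
# Expect a 403 "permission denied" error for secret/data/repro-ns1
kubectl -n repro-ns2 logs -l secret=not-found
```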