kudobuilder/kuttl

`unknown flag: --kubeconfig` but documented to be present

Closed this issue · 4 comments

What happened:
Unable to run kuttl tests from a pod against another Kubernetes cluster. The setup involves cluster A (the CI cluster) and cluster B (the target cluster). I expect to trigger the tests from cluster A and have them run against cluster B. Even though I sign in and update the kubeconfig inside the pod on cluster A, the tests still run against cluster A.

According to the documentation, I have tried the following:

  • Using the --kubeconfig flag in the test command, which fails with:
╰─ kubectl-kuttl --kubeconfig=/my/.kube/config
Error: unknown flag: --kubeconfig

  • Setting the $KUBECONFIG environment variable.
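
For illustration, the environment-variable attempt boils down to something like this (the path and test directory are placeholders, not my exact setup):

export KUBECONFIG=/my/.kube/config   # kubeconfig that should point at cluster B
kubectl-kuttl test ./tests           # example test directory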

What you expected to happen:
I expect my tests to be executed in cluster B.

How to reproduce it (as minimally and precisely as possible):
Run any test from pod A using kubeconfig for cluster B.

Anything else we need to know?:
This has already been mentioned in previous issues; however, I am unclear on how it was solved, since the documented instructions do not seem to work for me.

Environment:

  • Kubernetes version (use kubectl version): 1.29
  • KUTTL version (use kubectl kuttl version): KUTTL Version: version.Info{GitVersion:"0.18.0", GitCommit:"3cfbf0c", BuildDate:"2024-07-08T07:14:28Z", GoVersion:"go1.22.1", Compiler:"gc", Platform:"darwin/amd64"}
  • Cloud provider or hardware configuration: EKS
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

I'm pretty sure setting $KUBECONFIG is enough to make this work; we do this all the time for the https://github.com/stackrox/stackrox end-to-end tests 🤔

Can you please prepend env | sort; to your kuttl command and post the results (after eliding any secrets)? Maybe this will reveal something.
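
For example, something along these lines (the test path is illustrative):

env | sort; kubectl-kuttl test ./tests   # dump the environment first, then run kuttl as usual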

Otherwise, if you could come up with a minimal reproducer using e.g. a couple of kind clusters, that would be great.
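
A rough sketch of such a reproducer (cluster names and file names are just placeholders):

kind create cluster --name ci         # stands in for cluster A
kind create cluster --name target     # stands in for cluster B
kind get kubeconfig --name target --internal > kubeconfig-target.yaml
# --internal keeps a server address that is reachable from the docker network rather than 127.0.0.1;
# mount kubeconfig-target.yaml into a pod on the "ci" cluster and point KUBECONFIG at it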

I seem to recall that mistyping the path to the kubeconfig file can have exactly this effect: the client library behaves as if no kubeconfig were set and, when running in a pod, falls back to the in-cluster configuration.
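
A quick way to rule that out, assuming the path is passed via KUBECONFIG:

test -r "$KUBECONFIG" || echo "KUBECONFIG does not point at a readable file: $KUBECONFIG"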

I just verified that kuttl does obey KUBECONFIG even when running in the cluster. Here is what I did:

  • copied a kubeconfig for a different cluster into a configmap: kubectl create configmap kc --from-file=kubeconfig.txt=kubeconfig-ocp.txt
  • granted some permissions: kubectl create clusterrolebinding def-admin --clusterrole=cluster-admin --serviceaccount=default:default (not sure if this step is really necessary; perhaps I was confused in the early iterations of this procedure, but I am including it here for completeness just in case)
  • created a test suite CM: kubectl create configmap t1 --from-file=1-assert.yaml=1-assert.yaml

that file is:

apiVersion: v1
kind: ConfigMap
metadata:
  name: foo
data:
  foo: bar

  • ran kuttl with:
apiVersion: v1
kind: Pod
metadata:
  name: p
spec:
  containers:
  - name: k
    image: quay.io/mowsiany/kuttl:latest
    args:
    - /kubectl-kuttl
    - test
    - /tmp/test
    env:
    - name: KUBECONFIG
      value: /tmp/kc/kubeconfig.txt
    volumeMounts:
    - name: t1
      mountPath: /tmp/test/t1
    - name: kc
      mountPath: /tmp/kc
  volumes:
  - name: t1
    configMap:
      name: t1
  - name: kc
    configMap:
      name: kc
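
(Applied with something along these lines; the file name is just a placeholder for the pod spec above.)

kubectl apply -f kuttl-pod.yaml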

Watched the pod with kubectl logs --follow p

Output:

2024/09/03 10:36:58 running without a 'kuttl-test.yaml' configuration
2024/09/03 10:36:58 kutt-test config testdirs is overridden with args: [ /tmp/test ]
=== RUN   kuttl
    harness.go:464: starting setup
    harness.go:255: running tests using configured kubeconfig.
    harness.go:278: Successful connection to cluster at: https://api.mo-09-03-ocp.ocp.infra.rox.systems:6443
    harness.go:363: running tests
    harness.go:75: going to run test suite with timeout of 30 seconds for each step
    harness.go:375: testsuite: /tmp/test has 1 tests
=== RUN   kuttl/harness
=== RUN   kuttl/harness/t1
=== PAUSE kuttl/harness/t1
=== CONT  kuttl/harness/t1
    logger.go:42: 10:36:59 | t1 | Ignoring ..2024_09_03_10_36_57.3847788774 as it does not match file name regexp: ^(\d+)-(?:[^\.]+)(?:\.yaml)?$
    logger.go:42: 10:36:59 | t1 | Ignoring ..data as it does not match file name regexp: ^(\d+)-(?:[^\.]+)(?:\.yaml)?$
    logger.go:42: 10:36:59 | t1 | Creating namespace: kuttl-test-relaxing-osprey
    logger.go:42: 10:36:59 | t1/1- | starting test step 1-

  • then created the expected configmap on the other cluster with KUBECONFIG=kubeconfig-ocp.txt kubectl -n kuttl-test-relaxing-osprey apply -f 1-a.yaml
  • noticed the test resume and complete OK:
    logger.go:42: 10:37:08 | t1/1- | test step completed 1-
    logger.go:42: 10:37:08 | t1 | t1 events from ns kuttl-test-relaxing-osprey:
    logger.go:42: 10:37:08 | t1 | Deleting namespace: kuttl-test-relaxing-osprey
=== NAME  kuttl
    harness.go:407: run tests finished
    harness.go:515: cleaning up
    harness.go:572: removing temp folder: ""
--- PASS: kuttl (16.57s)
    --- PASS: kuttl/harness (0.00s)
        --- PASS: kuttl/harness/t1 (15.76s)
PASS

Regarding Error: unknown flag: --kubeconfig - it looks like this either never worked or was removed by mistake at some point. Not sure whether it's better to add the flag or to remove the mention from the docs at this point 🤔
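
Until that is sorted out, the environment variable is the way to point kuttl at another cluster, e.g. (path and test directory are illustrative):

KUBECONFIG=/path/to/cluster-b-kubeconfig kubectl-kuttl test ./tests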