The UI does not show additional clusters when specifying `customResources` at cluster level
rafaeltuelho opened this issue · 5 comments
Describe the bug
The UI does not show additional clusters when `customResources` is specified at the cluster level in the Kubernetes plugin configuration, as described in the upstream docs: https://backstage.io/docs/features/kubernetes/configuration/#clusterscustomresources-optional
Expected Behavior
The UI should present the list of clusters in the Topology and Kubernetes view.
What are the steps to reproduce this bug?
- With the kubernetes plugin enabled, add a second cluster to the cluster list. For instance, consider this config snippet:
```yaml
kubernetes:
  clusterLocatorMethods:
    - clusters:
        - authProvider: serviceAccount
          name: ${K8S_CLUSTER_NAME}
          serviceAccountToken: ${K8S_CLUSTER_TOKEN}
          url: ${K8S_CLUSTER_URL}
          skipTLSVerify: true
          customResources:
            - group: 'tekton.dev'
              apiVersion: 'v1beta1'
              plural: 'pipelines'
            - group: 'tekton.dev'
              apiVersion: 'v1beta1'
              plural: 'pipelineruns'
            - group: 'tekton.dev'
              apiVersion: 'v1beta1'
              plural: 'taskruns'
            - group: 'org.eclipse.che'
              apiVersion: 'v2'
              plural: 'checlusters'
            - group: 'route.openshift.io'
              apiVersion: 'v1'
              plural: 'routes'
        - authProvider: serviceAccount
          name: ${PREPROD_K8S_CLUSTER_NAME}
          serviceAccountToken: ${PREPROD_K8S_CLUSTER_TOKEN}
          url: ${PREPROD_K8S_CLUSTER_URL}
          skipTLSVerify: true
          customResources:
            - group: 'route.openshift.io'
              apiVersion: 'v1'
              plural: 'routes'
      type: config
  serviceLocatorMethod:
    type: multiTenant
```
- Restart the Developer Hub pod and open the Topology or Kubernetes view in the UI
- Only the first cluster shows up
Versions of software used and environment
Developer Hub 1.0.0
Hi @rafaeltuelho! Do you recall adding k8s resources to your 2nd cluster? Clusters won't appear unless there is at least one resource present. Could you please verify this on your end and let me know? Thanks!
@debsmita1, what kind of resource are you referring to? An app Deployment with Backstage annotations, or any resource? This issue only happens if I nest `customResources:` under each cluster. It works the other way.
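To be explicit, by "the other way" I mean the top-level placement from the upstream docs, which does work for me. A minimal sketch (env vars and values are placeholders):

```yaml
kubernetes:
  serviceLocatorMethod:
    type: multiTenant
  clusterLocatorMethods:
    - type: config
      clusters:
        - authProvider: serviceAccount
          name: ${K8S_CLUSTER_NAME}
          serviceAccountToken: ${K8S_CLUSTER_TOKEN}
          url: ${K8S_CLUSTER_URL}
          skipTLSVerify: true
  # customResources declared once at the top level, applying to all clusters
  customResources:
    - group: 'tekton.dev'
      apiVersion: 'v1beta1'
      plural: 'pipelineruns'
```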
@rafaeltuelho yup, an app Deployment with Backstage annotations.
My k8s configuration in the app-config:
```yaml
kubernetes:
  serviceLocatorMethod:
    type: 'multiTenant'
  clusterLocatorMethods:
    - type: 'config'
      clusters:
        - url: <cluster1-url>
          name: openshift
          authProvider: 'serviceAccount'
          skipTLSVerify: true
          skipMetricsLookup: true
          serviceAccountToken: <token>
          customResources:
            - group: 'tekton.dev'
              apiVersion: 'v1'
              plural: 'pipelines'
            - group: 'tekton.dev'
              apiVersion: 'v1'
              plural: 'pipelineruns'
            - group: 'tekton.dev'
              apiVersion: 'v1'
              plural: 'taskruns'
            - group: 'route.openshift.io'
              apiVersion: 'v1'
              plural: 'routes'
        - url: <cluster2-url>
          name: minikube
          authProvider: 'serviceAccount'
          skipTLSVerify: true
          skipMetricsLookup: true
          serviceAccountToken: <token>
          customResources:
            - group: 'route.openshift.io'
              apiVersion: 'v1'
              plural: 'routes'
```
And I can see both the clusters in the dropdown:
Screen.Recording.2024-01-22.at.9.20.02.PM.mov
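For a cluster to show up, at least one workload on it has to be linked to a catalog entity. A minimal sketch of that linkage (the component name `my-app` is a placeholder): the catalog entity carries the `backstage.io/kubernetes-id` annotation, and the Deployment on the cluster carries the matching label:

```yaml
# catalog-info.yaml (entity side)
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: my-app
  annotations:
    backstage.io/kubernetes-id: my-app
spec:
  type: service
  lifecycle: production
  owner: team-a
---
# Deployment on the cluster, labeled so the plugin can find it
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    backstage.io/kubernetes-id: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        backstage.io/kubernetes-id: my-app
    spec:
      containers:
        - name: my-app
          image: example/my-app:latest
```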
Interesting. I haven't tried again since I raised this issue.
I can try again later and will let you know.
But I believe the behavior in the UI should be consistent no matter where you put `customResources:` in your config.
Hi @rafaeltuelho! Please feel free to reopen this issue if it doesn't work for you.