Enabling cross-namespace volume data source in v3.6.3 fails
Closed this issue · 11 comments
How do I enable the cross-namespace volume data source feature in v3.6.3? I added the feature gate to the csi-provisioner container:
containers:
  - args:
      - --csi-address=$(ADDRESS)
      - --v=3
      - --timeout=150s
      - --retry-interval-start=500ms
      - --retry-interval-max=5s
      - --leader-election=true
      - --leader-election-namespace=rook-ceph
      - --default-fstype=ext4
+     - --feature-gates=CrossNamespaceVolumeDataSource=true
    env:
      - name: ADDRESS
        value: unix:///csi/csi-provisioner.sock
    image: r-veen.volces.com/infras/edge-ceph/csi-provisioner:v3.6.3
    imagePullPolicy: IfNotPresent
    name: csi-provisioner
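For context, this gate is what lets a PVC's `dataSourceRef` point at an object in another namespace. A hedged sketch of such a PVC, assuming a VolumeSnapshot named `snap-1` in a namespace `prod` and a storage class `rook-ceph-block` (all of these names are illustrative assumptions):

```yaml
# Hedged sketch: a PVC in namespace "dev" restoring from a VolumeSnapshot
# in namespace "prod". Names, namespaces, and the storage class are
# assumptions; a matching ReferenceGrant in "prod" is also required.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
  namespace: dev
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 1Gi
  dataSourceRef:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: snap-1
    namespace: prod   # the cross-namespace reference this gate enables
```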
kubectl logs csi-rbdplugin-provisioner-5864859554-hhm55 -n rook-ceph -c csi-provisioner
I0314 03:17:23.481828 1 feature_gate.go:249] feature gates: &{map[CrossNamespaceVolumeDataSource:true]}
The feature gate is enabled successfully, but the provisioner then reports a failure:
# kubectl logs csi-rbdplugin-provisioner-67db94cfdc-rc4px -n rook-ceph -c csi-provisioner
I0314 07:22:00.670224 1 feature_gate.go:249] feature gates: &{map[CrossNamespaceVolumeDataSource:true]}
I0314 07:22:00.670299 1 csi-provisioner.go:154] Version: v3.6.3
I0314 07:22:00.670304 1 csi-provisioner.go:177] Building kube configs for running in cluster...
I0314 07:22:01.671665 1 common.go:138] Probing CSI driver for readiness
I0314 07:22:01.675299 1 csi-provisioner.go:230] Detected CSI driver rook-ceph.rbd.csi.ceph.com
I0314 07:22:01.676537 1 csi-provisioner.go:302] CSI driver does not support PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments
I0314 07:22:01.677147 1 controller.go:732] Using saving PVs to API server in background
I0314 07:22:01.677615 1 leaderelection.go:250] attempting to acquire leader lease rook-ceph/rook-ceph-rbd-csi-ceph-com...
I0314 07:22:01.682795 1 leader_election.go:185] new leader detected, current leader: 1710399451026-5806-rook-ceph-rbd-csi-ceph-com
I0314 07:22:19.836673 1 leaderelection.go:260] successfully acquired lease rook-ceph/rook-ceph-rbd-csi-ceph-com
I0314 07:22:19.836730 1 leader_election.go:185] new leader detected, current leader: 1710400921676-512-rook-ceph-rbd-csi-ceph-com
I0314 07:22:19.836745 1 leader_election.go:178] became leader, starting
I0314 07:22:19.836835 1 reflector.go:289] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:150
I0314 07:22:19.836844 1 reflector.go:325] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:150
I0314 07:22:19.836835 1 reflector.go:289] Starting reflector *v1.StorageClass (1h0m0s) from k8s.io/client-go/informers/factory.go:150
I0314 07:22:19.836896 1 reflector.go:325] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:150
I0314 07:22:19.936901 1 reflector.go:289] Starting reflector *v1beta1.ReferenceGrant (1h0m0s) from sigs.k8s.io/gateway-api/pkg/client/informers/externalversions/factory.go:132
I0314 07:22:19.936914 1 reflector.go:325] Listing and watching *v1beta1.ReferenceGrant from sigs.k8s.io/gateway-api/pkg/client/informers/externalversions/factory.go:132
W0314 07:22:19.937615 1 reflector.go:535] sigs.k8s.io/gateway-api/pkg/client/informers/externalversions/factory.go:132: failed to list *v1beta1.ReferenceGrant: referencegrants.gateway.networking.k8s.io is forbidden: User "system:serviceaccount:rook-ceph:rook-csi-rbd-provisioner-sa" cannot list resource "referencegrants" in API group "gateway.networking.k8s.io" at the cluster scope
E0314 07:22:19.937636 1 reflector.go:147] sigs.k8s.io/gateway-api/pkg/client/informers/externalversions/factory.go:132: Failed to watch *v1beta1.ReferenceGrant: failed to list *v1beta1.ReferenceGrant: referencegrants.gateway.networking.k8s.io is forbidden: User "system:serviceaccount:rook-ceph:rook-csi-rbd-provisioner-sa" cannot list resource "referencegrants" in API group "gateway.networking.k8s.io" at the cluster scope
I added the referencegrants resource to the rbd-external-provisioner-runner ClusterRole as below:
kubectl edit clusterroles.rbac.authorization.k8s.io rbd-external-provisioner-runner
- apiGroups:
    - gateway.networking.k8s.io
  resources:
    - referencegrants
  verbs:
    - list
    - watch
    - get
but it then reports a different failure:
I0314 09:25:39.133002 1 reflector.go:325] Listing and watching *v1beta1.ReferenceGrant from sigs.k8s.io/gateway-api/pkg/client/informers/externalversions/factory.go:132
W0314 09:25:39.133749 1 reflector.go:535] sigs.k8s.io/gateway-api/pkg/client/informers/externalversions/factory.go:132: failed to list *v1beta1.ReferenceGrant: the server could not find the requested resource (get referencegrants.gateway.networking.k8s.io)
E0314 09:25:39.133787 1 reflector.go:147] sigs.k8s.io/gateway-api/pkg/client/informers/externalversions/factory.go:132: Failed to watch *v1beta1.ReferenceGrant: failed to list *v1beta1.ReferenceGrant: the server could not find the requested resource (get referencegrants.gateway.networking.k8s.io)
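This second error ("the server could not find the requested resource") typically means the Gateway API ReferenceGrant CRD itself is not installed in the cluster, so the RBAC rule alone is not enough; the CRD ships with the gateway-api release manifests. Once the CRD exists, a grant allowing the consuming namespace to reference the source might look like this hedged sketch (the namespaces and name are illustrative assumptions):

```yaml
# Hedged sketch: a ReferenceGrant letting PVCs in namespace "dev"
# reference VolumeSnapshots in namespace "prod". All names and
# namespaces here are assumptions for illustration.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-prod-snapshots
  namespace: prod          # the namespace that owns the source object
spec:
  from:
    - group: ""
      kind: PersistentVolumeClaim
      namespace: dev       # the namespace that wants to reference it
  to:
    - group: snapshot.storage.k8s.io
      kind: VolumeSnapshot
```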
Hi @ttakahashi21, could you help me review this problem?
I will check tonight.
https://kubernetes.io/blog/2023/01/02/cross-namespace-data-sources-alpha/
Did you refer to this blog for implementation?
Thanks @ttakahashi21, this document is very detailed. But my k8s cluster is v1.18, so I cannot enable the AnyVolumeDataSource and CrossNamespaceVolumeDataSource feature gates on the kube-apiserver and kube-controller-manager. I need time to apply for a v1.26 test cluster. :) Thanks again.
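On a v1.26 control plane, the gates the blog describes would be enabled roughly like this. This is a sketch assuming kubeadm-style static pod manifests; paths and layout may differ in other setups:

```yaml
# Hedged sketch: /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm
# layout assumed). The same --feature-gates flag also goes on the
# kube-controller-manager command in its static pod manifest.
spec:
  containers:
    - command:
        - kube-apiserver
        - --feature-gates=AnyVolumeDataSource=true,CrossNamespaceVolumeDataSource=true
```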
@YiteGu No problem!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.