ceph-provisioner - unexpected error getting claim reference: selfLink was empty, can't make reference
wyllys66 opened this issue · 5 comments
This is on Kubernetes 1.17.0.
I'm getting this unusual error from the cephfs-provisioner pod when deploying it in Kubernetes and trying to create a volume:
controller.go:1004] provision "mydataclaim" class "cephfs": unexpected error getting claim reference: selfLink was empty, can't make reference
Any ideas?
Here is a slightly more detailed error that occurs when trying to use a Ceph PVC:
E0224 15:49:06.098645 1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ceph.com-cephfs", GenerateName:"", Namespace:"ki", SelfLink:"", UID:"a7ec4adb-4e9d-443c-951d-932971e8301b", ResourceVersion:"838255", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717906417, loc:(*time.Location)(0x19b4b00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"cephfs-provisioner-6f9b59d479-wb844_25bcb305-571d-11ea-bf3c-6e7cf70a2af5\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2020-02-24T15:49:06Z\",\"renewTime\":\"2020-02-24T15:49:06Z\",\"leaderTransitions\":2}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'cephfs-provisioner-6f9b59d479-wb844_25bcb305-571d-11ea-bf3c-6e7cf70a2af5 became leader'
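For what it's worth, both messages appear to come from the same code path: the cephfs-provisioner is built on the old external-storage library, and the client-go it vendors builds object references with reference.GetReference. As far as I can tell, when an object's Kind/APIVersion are empty, that older code tries to recover the API version by parsing metadata.selfLink and returns "selfLink was empty, can't make reference" when the field is blank. The sketch below is a simplified paraphrase of that version-resolution step, not the vendored source itself; the helper name apiVersionFor is made up for illustration.

package main

import (
	"errors"
	"fmt"
	"net/url"
	"strings"
)

// ErrNoSelfLink matches the error string seen in the provisioner logs above.
var ErrNoSelfLink = errors.New("selfLink was empty, can't make reference")

// apiVersionFor is a simplified paraphrase (hypothetical helper, for illustration
// only) of how older client-go resolves an object's APIVersion when building an
// ObjectReference: if TypeMeta is empty, it falls back to parsing metadata.selfLink,
// and it fails outright when the API server left selfLink blank.
func apiVersionFor(typeMetaAPIVersion, selfLink string) (string, error) {
	if typeMetaAPIVersion != "" {
		return typeMetaAPIVersion, nil
	}
	if selfLink == "" {
		return "", ErrNoSelfLink
	}
	u, err := url.Parse(selfLink)
	if err != nil {
		return "", err
	}
	// selfLink paths look like /api/<version>/... or /apis/<group>/<version>/...
	parts := strings.Split(u.Path, "/")
	if len(parts) < 4 {
		return "", fmt.Errorf("unexpected self link format: %q", selfLink)
	}
	if parts[1] == "api" {
		return parts[2], nil
	}
	return parts[3], nil
}

func main() {
	// The PVC and Endpoints objects in the logs arrive with empty Kind/APIVersion
	// and an empty SelfLink, so no reference can be built.
	if _, err := apiVersionFor("", ""); err != nil {
		fmt.Println("unexpected error getting claim reference:", err)
	}
}

Newer client-go releases fall back to the registered scheme instead of selfLink at this point, which, as far as I know, is why provisioners rebuilt against current libraries no longer hit this.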
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.