Changing the NFSServer name keeps the PVC in Pending state
Escaflow opened this issue · 2 comments
Is this a bug report or feature request?
- Bug Report
Deviation from expected behavior:
The PersistentVolumeClaim stays Pending if the NFSServer name is not rook-nfs.
Expected behavior:
The PersistentVolumeClaim should get Bound even if the NFSServer name is not rook-nfs.
How to reproduce it (minimal and precise):
Create all resources from the documentation up to the PersistentVolumeClaim:

```console
kubectl apply -f common.yaml
kubectl apply -f operator.yaml
kubectl apply -f webhook.yaml
```
Create a PVC for the NFSServer:

```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-demo-claim
  namespace: rook-nfs
spec:
  storageClassName: csi-cinder-classic
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
```
Create the PVC for the pod that is using the NFSServer:

```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rook-nfs-pv-claim
spec:
  storageClassName: "rook-nfs-demo"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```
Now create the NFSServer and StorageClass; this works:

```yaml
---
apiVersion: nfs.rook.io/v1alpha1
kind: NFSServer
metadata:
  name: rook-nfs
  namespace: rook-nfs
spec:
  replicas: 1
  exports:
    - name: demo
      server:
        accessMode: ReadWrite
        squash: "none"
      persistentVolumeClaim:
        claimName: nfs-demo-claim
  annotations:
    rook: nfs
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    app: rook-nfs
  name: rook-nfs-demo
parameters:
  exportName: demo
  nfsServerName: rook-nfs
  nfsServerNamespace: rook-nfs
provisioner: nfs.rook.io/rook-nfs-provisioner
reclaimPolicy: Delete # Retain
volumeBindingMode: Immediate
```
This won't work:

```yaml
---
apiVersion: nfs.rook.io/v1alpha1
kind: NFSServer
metadata:
  name: foo-bar
  namespace: rook-nfs
spec:
  replicas: 1
  exports:
    - name: demo
      server:
        accessMode: ReadWrite
        squash: "none"
      persistentVolumeClaim:
        claimName: nfs-demo-claim
  annotations:
    rook: nfs
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    app: rook-nfs
  name: rook-nfs-demo
parameters:
  exportName: demo
  nfsServerName: foo-bar
  nfsServerNamespace: rook-nfs
provisioner: nfs.rook.io/rook-nfs-provisioner
reclaimPolicy: Delete # Retain
volumeBindingMode: Immediate
```
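A hypothetical sketch (not taken from Rook's actual source) of the suspected failure mode: the provisioner should resolve the backing NFSServer from the StorageClass's `nfsServerName`/`nfsServerNamespace` parameters, but if it instead assumes the fixed name `rook-nfs`, a server named `foo-bar` is never found and the claim stays Pending. All function and variable names below are illustrative only.

```python
# Hypothetical illustration of the suspected bug; this is NOT Rook's code.
# An NFSServer is modeled as a (namespace, name) tuple.

def select_server(params, servers):
    """Correct behavior: pick the NFSServer named in the StorageClass parameters."""
    wanted = (params.get("nfsServerNamespace"), params.get("nfsServerName"))
    return next((s for s in servers if s == wanted), None)

def select_server_hardcoded(params, servers):
    """Buggy variant: ignores nfsServerName and always expects 'rook-nfs'."""
    wanted = (params.get("nfsServerNamespace"), "rook-nfs")
    return next((s for s in servers if s == wanted), None)

# The reproduction case: one NFSServer named foo-bar in namespace rook-nfs.
servers = [("rook-nfs", "foo-bar")]
params = {"nfsServerName": "foo-bar", "nfsServerNamespace": "rook-nfs"}

print(select_server(params, servers))            # ('rook-nfs', 'foo-bar')
print(select_server_hardcoded(params, servers))  # None -> PVC stays Pending
```

If the lookup really is pinned to one name like this, it would also explain why only a single NFSServer per namespace ever works.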
To get logs, use `kubectl -n <namespace> logs <pod name>`.
rook-nfs-operator:
2021-07-19 17:24:54.978595 I | nfs-operator: Initialize status state
2021-07-19 17:24:55.022047 I | nfs-operator: Reconciling NFSServer ConfigMapOperation.Result created
2021-07-19 17:24:55.038120 I | nfs-operator: Reconciling NFSServer ServiceOperation.Result created
2021-07-19 17:24:55.054868 I | nfs-operator: Reconciling NFSServer StatefulSetOperation.Result created
2021-07-19 17:24:55.063750 I | nfs-operator: Reconciling NFSServer ConfigMapOperation.Result unchanged
2021-07-19 17:24:55.063921 I | nfs-operator: Reconciling NFSServer ServiceOperation.Result unchanged
2021-07-19 17:24:55.070239 I | nfs-operator: Reconciling NFSServer StatefulSetOperation.Result unchanged
2021-07-19 17:24:55.079261 I | nfs-operator: Reconciling NFSServer ConfigMapOperation.Result unchanged
2021-07-19 17:24:55.079437 I | nfs-operator: Reconciling NFSServer ServiceOperation.Result unchanged
2021-07-19 17:24:55.088250 I | nfs-operator: Reconciling NFSServer StatefulSetOperation.Result unchanged
2021-07-19 17:25:05.064736 I | nfs-operator: Reconciling NFSServer ConfigMapOperation.Result unchanged
2021-07-19 17:25:05.064909 I | nfs-operator: Reconciling NFSServer ServiceOperation.Result unchanged
2021-07-19 17:25:05.072739 I | nfs-operator: Reconciling NFSServer StatefulSetOperation.Result unchanged
2021-07-19 17:25:15.074214 I | nfs-operator: Reconciling NFSServer ConfigMapOperation.Result unchanged
2021-07-19 17:25:15.074405 I | nfs-operator: Reconciling NFSServer ServiceOperation.Result unchanged
2021-07-19 17:25:15.080917 I | nfs-operator: Reconciling NFSServer StatefulSetOperation.Result unchanged
2021-07-19 17:25:15.114188 I | nfs-operator: Reconciling NFSServer ConfigMapOperation.Result unchanged
2021-07-19 17:25:15.114430 I | nfs-operator: Reconciling NFSServer ServiceOperation.Result unchanged
2021-07-19 17:25:15.121743 I | nfs-operator: Reconciling NFSServer StatefulSetOperation.Result unchanged
2021-07-19 17:27:35.943748 I | nfs-operator: Deleting NFSServer foo-bar in rook-nfs namespace
rook-nfs-webhook:
2021-07-19 17:24:54.967696 I | nfs-webhook: validate createnamefoo-bar
2021-07-19 17:24:54.984059 I | nfs-webhook: validate updatenamefoo-bar
2021-07-19 17:27:35.962977 I | nfs-webhook: validate updatenamefoo-bar
2021-07-19 17:27:35.981433 I | nfs-webhook: validate updatenamefoo-bar
Environment:
- OS: Ubuntu 18.04.5
- Kernel (e.g. `uname -a`): 4.15.0-147-generic
- Cloud provider or hardware configuration: OVH
- Rook version: v1.6.7
- Kubernetes version (use `kubectl version`): v1.20.2
- Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): OpenShift
I was having the exact same issue. I started with a server name other than "rook-nfs" and was scratching my head over the problem until I found this issue.
This effectively makes it impossible to deploy more than one NFSServer.
I have the exact same issue; I reproduced it twice, and I'm pretty sure this issue is legitimate.