Cannot get NFS to create PV
chrisdmacrae opened this issue · 2 comments
chrisdmacrae commented
Running ganesha in k8s.
I'm getting the following error when trying to create a PV:
```
I0406 00:26:56.616749 1 main.go:65] Provisioner example.com/nfs specified
I0406 00:26:56.617163 1 main.go:89] Setting up NFS server!
I0406 00:26:57.573661 1 server.go:149] starting RLIMIT_NOFILE rlimit.Cur 1048576, rlimit.Max 1048576
I0406 00:26:57.573762 1 server.go:160] ending RLIMIT_NOFILE rlimit.Cur 1048576, rlimit.Max 1048576
I0406 00:26:57.575869 1 server.go:134] Running NFS server!
E0406 00:27:02.889823 1 controller.go:908] error syncing claim "2937d4b7-2215-4d84-be2d-5552790e7cd4": failed to provision volume with StorageClass "example-nfs": error getting NFS server IP for volume: service SERVICE_NAME=nfs-provisioner is not valid; check that it has for ports map[{111 TCP}:true {111 UDP}:true {662 TCP}:true {662 UDP}:true {875 TCP}:true {875 UDP}:true {2049 TCP}:true {2049 UDP}:true {20048 TCP}:true {20048 UDP}:true {32803 TCP}:true {32803 UDP}:true] exactly one endpoint, this pod's IP POD_IP=10.42.2.26
```
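The error lists the exact (port, protocol) pairs the provisioner expects the Service to expose, and also requires the Service's endpoints to resolve to exactly this pod's IP. A rough sketch of the port half of that validation (illustrative only, not the provisioner's actual code; the port numbers are taken from the error message above):

```python
# Required (port, protocol) pairs, copied from the map in the error message.
REQUIRED = {
    (111, "TCP"), (111, "UDP"),
    (662, "TCP"), (662, "UDP"),
    (875, "TCP"), (875, "UDP"),
    (2049, "TCP"), (2049, "UDP"),
    (20048, "TCP"), (20048, "UDP"),
    (32803, "TCP"), (32803, "UDP"),
}

def missing_ports(service_ports):
    """Return the required (port, protocol) pairs the Service does not expose."""
    # Kubernetes defaults a ServicePort's protocol to TCP when it is omitted.
    present = {(p["port"], p.get("protocol", "TCP")) for p in service_ports}
    return REQUIRED - present

# A Service exposing every required pair passes the check:
ok = [{"port": port, "protocol": proto} for port, proto in REQUIRED]
assert missing_ports(ok) == set()

# Dropping any one pair (here 662/UDP) is enough to fail it:
broken = [p for p in ok if p != {"port": 662, "protocol": "UDP"}]
assert missing_ports(broken) == {(662, "UDP")}
```

Comparing the Service's `ports:` list against this map, pair by pair, is a quick way to rule out the port half of the error before looking at the endpoints.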
Here is my full config:
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: ganesha
  name: nfs-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      nodeSelector:
        purpose: nas
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-provisioner
          image: k8s.gcr.io/sig-storage/nfs-provisioner:v3.0.0
          ports:
            - name: nfs
              containerPort: 2049
            - name: nfs-udp
              containerPort: 2049
              protocol: UDP
            - name: nlockmgr
              containerPort: 32803
            - name: nlockmgr-udp
              containerPort: 32803
              protocol: UDP
            - name: mountd
              containerPort: 20048
            - name: mountd-udp
              containerPort: 20048
              protocol: UDP
            - name: rquotad
              containerPort: 875
            - name: rquotad-udp
              containerPort: 875
              protocol: UDP
            - name: rpcbind
              containerPort: 111
            - name: rpcbind-udp
              containerPort: 111
              protocol: UDP
            - name: statd
              containerPort: 662
            - name: statd-udp
              containerPort: 662
              protocol: UDP
          securityContext:
            capabilities:
              add:
                - DAC_READ_SEARCH
                - SYS_RESOURCE
          args:
            - "-provisioner=example.com/nfs"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SERVICE_NAME
              value: nfs-provisioner
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: export-volume
              mountPath: /export
      volumes:
        - name: export-volume
          hostPath:
            path: /mnt/data
---
apiVersion: v1
kind: Namespace
metadata:
  name: ganesha
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: ganesha
  name: nfs
spec:
  storageClassName: example-nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: ganesha
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: ganesha
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    # replace with namespace where provisioner is deployed
    namespace: ganesha
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: ganesha
roleRef:
  kind: Role
  name: leader-locking-nfs-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: ganesha
  name: nfs-provisioner
kind: Service
apiVersion: v1
metadata:
  namespace: ganesha
  name: nfs-provisioner
  labels:
    app: nfs-provisioner
spec:
  ports:
    - name: nfs
      port: 2049
      targetPort: 2049
      protocol: TCP
    - name: nfs-udp
      port: 2049
      targetPort: 2049
      protocol: UDP
    - name: nlockmgr
      port: 32803
      targetPort: 32803
      protocol: TCP
    - name: nlockmgr-udp
      port: 32803
      targetPort: 32803
      protocol: UDP
    - name: mountd
      port: 20048
      targetPort: 20048
      protocol: TCP
    - name: mountd-udp
      port: 20048
      targetPort: 20048
      protocol: UDP
    - name: rquotad
      port: 875
      targetPort: 875
      protocol: TCP
    - name: rquotad-udp
      port: 875
      targetPort: 875
      protocol: UDP
    - name: rpcbind
      port: 111
      targetPort: 111
      protocol: TCP
    - name: rpcbind-udp
      port: 111
      targetPort: 111
      protocol: UDP
    - name: statd
      port: 662
      targetPort: 662
      protocol: TCP
    - name: statd-udp
      port: 663
      targetPort: 663
      protocol: UDP
  selector:
    app: nfs-provisioner
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  namespace: ganesha
  name: example-nfs
provisioner: example.com/nfs
mountOptions:
  - vers=4.1
---
apiVersion: v1
kind: Pod
metadata:
  namespace: ganesha
  name: write-pod
spec:
  containers:
    - name: write-pod
      image: gcr.io/google_containers/busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: nfs
```
kvaps commented
Hi, you have a missing `---` separator in your example between `kind: ServiceAccount` and `kind: Service`.
Could you provide the output of the following command?
```
kubectl get ep nfs-provisioner -o yaml
```
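Without the separator, YAML parses the two manifests as a single document, so the ServiceAccount and Service definitions get merged rather than created as two objects. The boundary should read:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: ganesha
  name: nfs-provisioner
---
kind: Service
apiVersion: v1
metadata:
  namespace: ganesha
  name: nfs-provisioner
```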
chrisdmacrae commented
@kvaps I just added these manually from a dump; I reset my cluster when I set up a separate NAS box, so I have to re-apply this configuration.