Error when starting NFS service
hulu1522 opened this issue · 11 comments
I am trying to run this container in Kubernetes, but an error is logged when the NFS service starts. It also looks like the port is not listening.
Error:
Starting NFS in the background...
rpc.nfsd: Unable to access /proc/fs/nfsd errno 2 (No such file or directory).
Please try, as root, 'mount -t nfsd nfsd /proc/fs/nfsd' and then restart rpc.nfsd to correct the problem
Complete startup logs:
Starting Confd population of files...
confd 0.14.0 (Git SHA: 9fab9634, Go Version: go1.9.1)
2018-02-22T20:59:41Z indiatts-cutaudio-67fb8d88d4-n8n78 /usr/bin/confd[14]: INFO Backend set to env
2018-02-22T20:59:41Z indiatts-cutaudio-67fb8d88d4-n8n78 /usr/bin/confd[14]: INFO Starting confd
2018-02-22T20:59:41Z indiatts-cutaudio-67fb8d88d4-n8n78 /usr/bin/confd[14]: INFO Backend source(s) set to
2018-02-22T20:59:41Z indiatts-cutaudio-67fb8d88d4-n8n78 /usr/bin/confd[14]: INFO /etc/exports has md5sum 4f1bb7b2412ce5952ecb5ec22d8ed99d should be e00bc1ed62ce760dcaedf40a45211f66
2018-02-22T20:59:41Z indiatts-cutaudio-67fb8d88d4-n8n78 /usr/bin/confd[14]: INFO Target config /etc/exports out of sync
2018-02-22T20:59:41Z indiatts-cutaudio-67fb8d88d4-n8n78 /usr/bin/confd[14]: INFO Target config /etc/exports has been updated
Displaying /etc/exports contents...
/data/cutaudio *(rw,fsid=0,async,no_subtree_check,no_auth_nlm,insecure,no_root_squash)
Starting rpcbind...
Displaying rpcbind status...
program version netid address service owner
100000 4 tcp6 ::.0.111 - superuser
100000 3 tcp6 ::.0.111 - superuser
100000 4 udp6 ::.0.111 - superuser
100000 3 udp6 ::.0.111 - superuser
100000 4 tcp 0.0.0.0.0.111 - superuser
100000 3 tcp 0.0.0.0.0.111 - superuser
100000 2 tcp 0.0.0.0.0.111 - superuser
100000 4 udp 0.0.0.0.0.111 - superuser
100000 3 udp 0.0.0.0.0.111 - superuser
100000 2 udp 0.0.0.0.0.111 - superuser
100000 4 local /var/run/rpcbind.sock - superuser
100000 3 local /var/run/rpcbind.sock - superuser
Starting NFS in the background...
rpc.nfsd: Unable to access /proc/fs/nfsd errno 2 (No such file or directory).
Please try, as root, 'mount -t nfsd nfsd /proc/fs/nfsd' and then restart rpc.nfsd to correct the problem
Exporting File System...
exporting *:/data/cutaudio
Starting Mountd in the background...
Had the same issue.
You should use the privileged: true option in your Kubernetes config.
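For reference, a minimal sketch of what that looks like as a Pod spec; the pod/container names are placeholders and the itsthenetwork/nfs-server-alpine image name is assumed:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-server
spec:
  containers:
    - name: nfs-server
      image: itsthenetwork/nfs-server-alpine:latest  # assumed image; use the tag you actually deploy
      securityContext:
        privileged: true  # the setting suggested in this comment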
Given that https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-capabilities-for-a-container shows that Kubernetes can now define pod capabilities in a more fine-grained way than just privileged: true, which capabilities are needed for nfs-server-alpine?
My understanding is that CAP_SYS_ADMIN is all that is required. I can confirm later today.
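For anyone wanting to test the capability-only approach being discussed, a sketch of the container-level securityContext would be the following (later comments in this thread report this alone was not enough on some hosts):

      securityContext:
        capabilities:
          add: ["SYS_ADMIN"]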
No, doesn't work without --privileged I'm afraid.
Okay. Thank you for the information!
FYI, I ran into the same issue with Rancher Server running on CentOS, in case you want to add it to the documentation.
Updated the readme with lots more information I'm sure people will find useful, thanks for the suggestion.
I encountered a situation today:
Server A: Debian, kernel version 4.9
Server B: Ubuntu, kernel version 4.15
For Server A, both
privileged: true
and
cap_add:
  - SYS_ADMIN
  - SETPCAP
work.
But for Server B,
cap_add:
  - SYS_ADMIN
  - SETPCAP
does not work.
Recording this here in case it helps someone.
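For context, the cap_add syntax above is docker-compose style; a minimal sketch of a compose service using it, with an assumed image name, would be:

services:
  nfs:
    image: itsthenetwork/nfs-server-alpine:latest  # assumed image
    cap_add:
      - SYS_ADMIN
      - SETPCAP
    # On hosts where capabilities alone were not enough (Server B above), fall back to:
    # privileged: true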
In addition, for OpenShift this helped me solve the issue:
...
spec:
  securityContext:
    fsGroup: 0
  containers:
    - ...
      securityContext:
        capabilities:
          add: ["NET_ADMIN", "SYS_TIME", "SYS_ADMIN"]
        privileged: true
...
In OpenShift, I still get this error even after enabling those three capabilities.
My pod's capsh shows this:
/ # capsh --print
Current: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_admin,cap_sys_chroot,cap_sys_admin,cap_sys_time=eip
Bounding set =cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_admin,cap_sys_chroot,cap_sys_admin,cap_sys_time
Ambient set =
Current IAB: cap_chown,cap_dac_override,!cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,!cap_linux_immutable,cap_net_bind_service,!cap_net_broadcast,cap_net_admin,!cap_net_raw,!cap_ipc_lock,!cap_ipc_owner,!cap_sys_module,!cap_sys_rawio,cap_sys_chroot,!cap_sys_ptrace,!cap_sys_pacct,cap_sys_admin,!cap_sys_boot,!cap_sys_nice,!cap_sys_resource,cap_sys_time,!cap_sys_tty_config,!cap_mknod,!cap_lease,!cap_audit_write,!cap_audit_control,!cap_setfcap,!cap_mac_override,!cap_mac_admin,!cap_syslog,!cap_wake_alarm,!cap_block_suspend,!cap_audit_read,!cap_perfmon,!cap_bpf
Securebits: 00/0x0/1'b0 (no-new-privs=0)
secure-noroot: no (unlocked)
secure-no-suid-fixup: no (unlocked)
secure-keep-caps: no (unlocked)
secure-no-ambient-raise: no (unlocked)
uid=0(root) euid=0(root)
gid=0(root)
groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
Any suggestions, guys?