kubernetes-sigs/nfs-ganesha-server-and-external-provisioner

Pods sometimes stuck in Terminating status

dev-ago opened this issue · 5 comments

We are using chart nfs-server-provisioner-1.7.0 (app version 4.0.8), deployed on 2023-08-09 13:19:13 +0200 CEST, and we have the problem that Pods frequently get stuck in "Terminating". After removing them by force with --force, these pods are deleted. It always happens with Pods that use a PVC of the NFS storage class.

journalctl -u kubelet -r --since "2023-08-24 00:10:00" --until "today" -g "f10d06e7-d5b0-4677-a882-7f3a9d3e6c0e"

# Example from the original point in time when the pod started terminating
Aug 24 00:16:23 node-pool-development-7fwcb6j kubelet[1028]: E0824 00:16:23.980371 1028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/nfs/f10d06e7->
Aug 24 00:16:23 node-pool-development-7fwcb6j kubelet[1028]: I0824 00:16:23.978783 1028 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"pvc-91>
Aug 24 00:14:21 node-pool-development-7fwcb6j kubelet[1028]: Output: umount: /var/lib/kubelet/pods/f10d06e7-d5b0-4677-a882-7f3a9d3e6c0e/volumes/kubernetes.io~nfs/pvc-91510590-a1>
Aug 24 00:14:21 node-pool-development-7fwcb6j kubelet[1028]: Unmounting arguments: /var/lib/kubelet/pods/f10d06e7-d5b0-4677-a882-7f3a9d3e6c0e/volumes/kubernetes.io~nfs/pvc-91510>
Aug 24 00:14:21 node-pool-development-7fwcb6j kubelet[1028]: E0824 00:14:21.970481 1028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/nfs/f10d06e7->
Aug 24 00:14:21 node-pool-development-7fwcb6j kubelet[1028]: I0824 00:14:21.968677 1028 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"pvc-91>
Aug 24 00:12:19 node-pool-development-7fwcb6j kubelet[1028]: Output: umount: /var/lib/kubelet/pods/f10d06e7-d5b0-4677-a882-7f3a9d3e6c0e/volumes/kubernetes.io~nfs/pvc-91510590-a1>
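
For anyone hitting the same symptom, here is a rough troubleshooting sketch of what we do; the pod name, namespace, pod UID, and PVC name below are placeholders, not values from this issue:

# Find pods stuck in Terminating
kubectl get pods -A | grep Terminating

# Last-resort force removal, as described above (skips graceful cleanup)
kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force

# On the affected node, look for NFS mounts the kubelet could not unmount
mount | grep 'kubernetes.io~nfs'

# A lazy unmount detaches a stale mount so the kubelet can finish volume teardown
umount -l /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~nfs/<pvc-name>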

Does anyone have any idea what could be causing this problem?

I may be seeing this from a slightly different angle. I have this running on a home lab setup, and while I see lots of containers shutting down/unmounting, some things are hanging around. This is k3s, which ships a k3s-killall.sh script that attempts to tear things down at the namespace/network interface layer. I'm seeing a lot of hangs with what look like PVC-related resources, and when my system fails to reboot, the kernel spams messages about failing to connect to NFS servers. It's not entirely clear why it's attempting NFS connections on shutdown anyway, but it seems to be.

I can't say definitively that they're the same issue, or that ganesha is the cause, but something NFS/PVC-related seems to be hanging around and I'm not clear why.

I had to debug this over the weekend in my home lab, and having gotten things working, I'm not too inclined to push it further right now. :) If someone else doesn't find a solution first, I'll debug it further the next time I need to reboot and can actually plan in some debugging time.
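
If anyone wants to check this before their next reboot, here's the minimal check I have in mind (standard util-linux tools, nothing k3s-specific; the mountpoint is a placeholder):

# List NFS mounts still attached on the node before shutting it down
findmnt -t nfs,nfs4 -o TARGET,SOURCE,OPTIONS

# If the NFS server is already gone, a lazy unmount detaches the mountpoint
# so shutdown doesn't block on NFS RPC retries
umount -l <stale-mountpoint>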

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.