Failed to resolve server nfs-server.default.svc.cluster.local: Name or service not known
Azbesciak opened this issue · 10 comments
Hello, I tried to use the NFS server example, with the only change being that in `nfs-server-rc.yml` I replaced the `persistentVolumeClaim` volume with a `hostPath` one (`path: /path/on/my/machine`).
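The relevant part of my `nfs-server-rc.yml` now looks roughly like this (the volume name is illustrative, and the path is just a placeholder for my real directory):

```yaml
# volumes section of the NFS server replication controller,
# with the persistentVolumeClaim swapped for a hostPath volume
volumes:
  - name: nfs-export              # illustrative name
    hostPath:
      path: /path/on/my/machine   # placeholder for my real host directory
```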
I applied all the NFS samples in the given order (rc, service, pv, pvc), but when I try to consume the PVC in my job I receive the error below:
```
MountVolume.SetUp failed for volume "nfs" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs nfs-server.default.svc.cluster.local:/ /var/lib/kubelet/pods/5f4d4cce-ab59-44cb-9311-ef5fef696902/volumes/kubernetes.io~nfs/nfs
Output: mount.nfs: Failed to resolve server nfs-server.default.svc.cluster.local: Name or service not known
```
Do you have any idea how to solve this? I saw in kubernetes/minikube#3417 that I am not the only one facing this issue, but I suppose that if you published this as an example, it should work. I am using Docker for Windows with the following setup:
- docker v20.10.5
- kubernetes v1.19.7
- WSL 2, based on ubuntu 20.04
Also, I think I can, but I'm not sure: can I consume a single PV with multiple PVCs, assuming `ReadWriteMany`? It's worth mentioning that I aim to run jobs in different namespaces. I assume that each namespace must have at least its own dedicated PVC (I cannot have one global one)? A sketch of what I have in mind follows below. In general, I am looking for some substitute for `hostPath` (ideally without NFS), but `hostPath` does not work with a multi-node cluster, so...
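To make the multi-namespace idea concrete, here is roughly what I have in mind (all names are placeholders; the `server:` value is the Service FQDN from the example, which is exactly the part that fails to resolve for me):

```yaml
# My understanding: a PV binds to exactly one PVC, but several PVs
# may point at the same NFS export, so: one PV/PVC pair per namespace.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-team-a                # placeholder
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.default.svc.cluster.local  # from the example; fails to resolve
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-team-a                # placeholder
  namespace: team-a               # placeholder namespace
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  volumeName: nfs-team-a          # bind this claim to the PV above
```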
Thanks!
Hello, could you please take a look at this?
@Azbesciak I have the same issue. You can't use the Service's DNS name; use the Service's Cluster-IP instead. Change the `server:` value in `nfs-pv.yaml`, as sketched below.
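Roughly like this, keeping the rest of the manifest as in the example (the IP below is made up; take the real one from `kubectl get svc nfs-server`):

```yaml
# nfs-pv.yaml with the Service FQDN replaced by its Cluster-IP
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # was: server: nfs-server.default.svc.cluster.local
    server: 10.96.0.123   # example value; use your Service's Cluster-IP
    path: "/"
```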
Yup, take a look at kubernetes/minikube#3417 (comment): the mount is performed by the kubelet on the node itself, outside the pod network, so the cluster-internal Service name does not resolve there. But I'd rather have expected some solution from the k8s team.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I think it would be good to revert this commit, as it misleads the reader into believing that using the FQDN of the NFS Service should work (even though it doesn't).
From my understanding, there are currently only two somewhat reasonable solutions to this problem:
- Use the Service's IP instead of the FQDN
- Manually update the node's hosts file (`/etc/hosts`) to make the FQDN work
/reopen
@HusseinKabbout: You can't reopen an issue/PR unless you authored it or you are a collaborator.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.