nfs-provisioner v2.3.0: deployment.yaml does not contain all needed port for v2.3.0 quay image
radtriste opened this issue · 16 comments
related to #1262
The image has been updated on Quay to v2.3.0, but deployment.yaml on the nfs-provisioner-v2.3.0 tag does not include the latest port(s).
The service and deployment are missing some TCP/UDP ports.
Using current master works, but the tag should contain the correct deployment.yaml.
Is this still pending?
This is still pending. The fix mentioned in this issue is only in our internal repository, not in this one.
Nothing has changed on your side.
Do we have an update for this?
Are you planning a 2.3.1 or something?
For now we use 2.2.2 as a workaround.
I am using 2.3.0 and it is working, using stable/nfs-server-provisioner.
What is stable/nfs-server-provisioner?
When you deploy deployment.yaml in Kubernetes, the pod starts fine.
But when you request a new volume, the nfs-provisioner logs an error that some ports are unavailable.
This is because the Quay image now needs more open ports on the deployment/service than before: compare nfs-provisioner-v2.3.0...master
(look at nfs/deploy/kubernetes/deployment.yaml).
The latest ports from master also need to be in deployment.yaml on the nfs-provisioner-v2.3.0 tag; otherwise it does not work.
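For reference, here is a sketch of what the container ports section on master looks like (port numbers based on the usual NFS daemons: nfsd, nlockmgr, mountd, rquotad, rpcbind, statd — verify against the actual deployment.yaml on master before relying on this list, and remember the matching Service ports are needed too):

```yaml
# Hypothetical excerpt of the nfs-provisioner container spec;
# each port is exposed over both TCP and UDP.
ports:
  - name: nfs
    containerPort: 2049
  - name: nfs-udp
    containerPort: 2049
    protocol: UDP
  - name: nlockmgr
    containerPort: 32803
  - name: nlockmgr-udp
    containerPort: 32803
    protocol: UDP
  - name: mountd
    containerPort: 20048
  - name: mountd-udp
    containerPort: 20048
    protocol: UDP
  - name: rquotad
    containerPort: 875
  - name: rquotad-udp
    containerPort: 875
    protocol: UDP
  - name: rpcbind
    containerPort: 111
  - name: rpcbind-udp
    containerPort: 111
    protocol: UDP
  - name: statd
    containerPort: 662
  - name: statd-udp
    containerPort: 662
    protocol: UDP
```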
It's a Helm chart for setting up an NFS server provisioner for dynamic PV provisioning with PVCs. I think they have updated the ports properly in their template.
Could we correct the deployment.yaml as well?
We are not using Helm on our side.
@radtriste, I agree with you; it seems the ports section in deployment.yaml on the nfs-provisioner-v2.3.0 tag should be updated to match master.
I don't want to retroactively move the git tag, though that is an option. Alternatively, I plan to make a patch that removes the port checking altogether and release 2.3.1, since technically this was a breaking change and it should not have gone out in a release.
A 2.3.1 works for me.
I had the same problem, but the combination of the image quay.io/kubernetes_incubator/nfs-provisioner with tag v2.2.2 in the values file and the stable/nfs-server-provisioner chart worked for me.
The combination of v2.2.2 and the stable chart downloaded from github/helm/charts/stable didn't work.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now, please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Thanks for reporting the issue!
This repo is no longer being maintained and we are in the process of archiving this repo. Please see kubernetes/org#1563 for more details.
If your issue relates to nfs provisioners, please create a new issue in https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner or https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.
Going to close this issue in order to archive this repo. Apologies for the churn and thanks for your patience! 🙏