kube-proxy cluster-cidr is omitted, breaking external service access, when multiple cluster CIDRs are provided
matthewdupre opened this issue · 7 comments
This line here: https://github.com/openshift/cluster-network-operator/blob/master/pkg/network/kube_proxy.go#L45 doesn't pass a cluster-cidr to kube-proxy unless exactly one ClusterNetwork is configured. Omitting this flag means services can't be accessed from outside the cluster.
The ClusterNetworks are immutable, so if the user configures two and later discovers that they want to access services from outside the cluster, they won't be able to (and the cause isn't easy to debug). It's sometimes better to set too small a cluster CIDR (leading to some unwanted NAT) than not to set one at all - but probably not always.
When using Calico, for example, pods can be given additional IP ranges beyond those configured here, so this ends up being a trap. I feel like a validation failure when len() > 1 would be more helpful? I'd also like to see the field made mutable (although I understand that perhaps OpenShift SDN can't support that yet?).
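The suggested validation could be as simple as rejecting the config up front rather than silently dropping the flag. A hypothetical sketch (this check does not exist in CNO; the function name is made up):

```go
package main

import (
	"errors"
	"fmt"
)

// validateClusterNetworks is a hypothetical fail-fast check: instead of
// silently omitting --cluster-cidr, reject configs that kube-proxy's
// single cluster-cidr flag cannot represent.
func validateClusterNetworks(cidrs []string) error {
	if len(cidrs) > 1 {
		return errors.New("multiple ClusterNetworks cannot be expressed as a single kube-proxy --cluster-cidr")
	}
	return nil
}

func main() {
	fmt.Println(validateClusterNetworks([]string{"10.128.0.0/14"}))
	fmt.Println(validateClusterNetworks([]string{"10.128.0.0/14", "10.132.0.0/14"}))
}
```

Failing at validation time surfaces the limitation while the config is still changeable, rather than after pods are running on an immutable network.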
Now that kube-proxy has the local-detector, perhaps we should be configuring local detection differently anyway. What would you suggest, @matthewdupre?
Yes, for openshift-sdn we should probably pick a better local-detector. In the general case, we might want to expose that configuration a little bit better so that third-party plugins can make use of it correctly.
/assign @tssurya
I randomly noticed this issue lying around. Once kube-proxy implements the other local-detector modes, we should make sure CNO lets you override --detect-local-mode usefully in the KubeProxyConfig when using standalone kube-proxy.
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.