Update Ingress status with Service LoadBalancer IP/Hostname
rickhlx opened this issue · 9 comments
Is your feature request related to a problem? Please describe.
When using tools such as external-dns, the IP address or hostname used to create the DNS record is optionally obtained from the Ingress status. Since skipper does not update the Ingress status, external-dns is not able to determine which IP address to use for the record.
Current status:

```yaml
status:
  loadBalancer: {}
```
Describe the solution you would like
Have skipper update the Ingress resource status with the IP address or hostname for the loadbalancer.
Desired status:

```yaml
status:
  loadBalancer:
    ingress:
    - ip: 1.1.1.1
```
Describe alternatives you've considered (optional)
N/A
Would you like to work on it?
Yes
Many people use skipper as a second layer of load balancer infrastructure; that's why we do not set the status in the Ingress.
There is another complexity because of routesrv, which lets you split control plane and data plane in skipper. It's not really clear to me how this can work well, so it would be best to outline how it could work.
An option to enable this behavior will be required.
> Many people use skipper as a second layer of load balancer infrastructure; that's why we do not set the status in the Ingress.
Not opposed to replicating this behavior; unsure how we could do so from an on-premise infrastructure setup.
I'll work on an initial proposal for setting this up. Thanks!
Maybe it would also make more sense to create a new controller that just sets the status of the Ingress to your LB VIP (service type LoadBalancer IP).
To be more helpful, I would need to understand your VIP implementation behind service type LoadBalancer.
Looking into how other controllers do this, NGINX uses a CLI flag to enable status reporting, with the option to use an LB type Service or a predefined address.
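For reference, this is roughly what the ingress-nginx controller flags look like (flag names from the ingress-nginx docs; the Service name and address below are illustrative):

```
# Report the External-IP of an LB type Service as the Ingress status:
--publish-service=ingress-nginx/ingress-nginx-controller

# Or report a fixed, predefined address instead:
--publish-status-address=1.1.1.1
```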
For our clusters, when a VIP (LB type Service) is created, an external load balancer assigns an external IP address from a pool and forwards L4 traffic to the service endpoints.
On the Ingress-NGINX (not NGINX Ingress) and Contour Ingress controllers, the Ingresses' "external" address is automatically updated with the LB type Service's External-IP address.
Yes, these controllers are basically singleton controllers that do not consume the traffic; that's why I wrote that it might make more sense to create a new controller.
Neither nginx nor envoy knows about Kubernetes or Ingress. Skipper knows Kubernetes in order to build its routing table. If you want to build it into skipper, then I would propose doing this only in the routesrv sub-component and also adding a flag to enable it. I think it will create more work for you if you add it to routesrv.
@szuecs thanks for the direction, I think we will develop a controller to update our ingress definitions. As you stated, working it into skipper would be more work.
Feel free to close this issue if it's not something planned to be implemented by you. Thank you!
We can also link the controller if you decide to publish it as open source.
What do you think?
Not opposed to that personally; I'd have to see whether we could do that, or whether it would have to be a personal project.
Let us know what you decide, once it happens. We are happy to link any kind of useful project, and this sounds really useful.
For now I will close the issue, but if you have something public to share, let us know and we'll link it!