IBM/varnish-operator

[Question] Configuring backend from Service (ExternalName) or with ext ip:port


Trying out the varnish-operator, I see that the backends are generated dynamically from the pods. I would like to know whether it is possible to specify an ExternalName Service as a backend, or a set of ip:port endpoints external to the cluster running Varnish.

Additionally, when I try to select the pods using backend.selector.app, the Varnish cluster pods in the StatefulSet have only the default/dummy backend, resulting in "HTTP/1.1 503 No backends configured". How can one troubleshoot the processing of the backend configuration? "logLevel: debug" does not seem to produce any additional logs for this case.

Any help/guidance is much appreciated.

cin commented

Right now, the backends are chosen by the backend.selector map. If you want to override this behavior, you can update the VCL to manually list whatever backends you want. Unfortunately we don't support getting the backends/endpoints from a service at the moment. This could be a feature to consider though.
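For reference, a rough sketch of what statically listed backends could look like, kept in a ConfigMap and handed to the operator's VCL; the ConfigMap name, the file layout, and the exact VarnishCluster field that references it are assumptions here (check the CRD docs for the real wiring):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: varnish-vcl              # hypothetical name
  data:
    backends.vcl: |
      vcl 4.0;
      # statically defined backends instead of the generated ones
      backend origin_1 {
        .host = "203.0.113.10";    # external origin IP (example address)
        .port = "8080";
      }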

In regard to your selector question, that should only happen when there are no pods matching the selector. Double-check how you're setting backend.selector; it should be a map that describes the label selector, so it should look something like:

  backend:
    port: 2034
    selector:
      app: varnish-backend

To test, you can use kubectl get pods -l app=varnish-backend. Other than that, make sure the port matches what your backends are listening on. Also, if the pods are not ready, they will not be added as backends. You can override this behavior by setting backend.onlyReady to false. This could be a bad idea in some use cases, so know your backends and whether it is okay to keep routing traffic to them while they are not ready. HTH.
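Spelled out, the override mentioned above would sit next to the selector (only port, selector, and onlyReady come from the discussion; the values are the same examples as above):

  backend:
    port: 2034
    selector:
      app: varnish-backend
    onlyReady: false   # also route traffic to pods that are not (yet) ready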

Based on your feedback, I was able to get real backends in backends.vcl and also "X-Varnish-Cache: HIT" in the response. The problem was that backend.port referred to the service port (80) rather than the pod hostPort (8180), which caused the readiness check to fail. Thanks a lot for your help.
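To make the distinction concrete: backend.port has to match the port the backend pods actually listen on (8180 here), not the Service port (80) that fronts them. A sketch with made-up names and labels, shown as two separate snippets:

  # Service fronting the backend pods (port 80 -> targetPort 8180)
  apiVersion: v1
  kind: Service
  metadata:
    name: my-backend              # hypothetical
  spec:
    selector:
      app: varnish-backend
    ports:
      - port: 80                  # service port -- NOT what backend.port should use
        targetPort: 8180          # the port the pods listen on

  # VarnishCluster backend section pointing at the pod port
  backend:
    port: 8180                    # matches what the containers listen on
    selector:
      app: varnish-backend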

Should I create a feature request for supporting a Service backend, in particular for services running outside Varnish's cluster? I think this would be a very useful addition, since caching aims to reduce external round trips and origin processing.

cin commented

@kmathewmk, I'm glad you were able to get things going. TBH, it's been so long since we wrote the selection code (which I didn't write) that I don't recall why certain decisions were made. I vaguely recall that watching Services was a problem because you don't get updates when the endpoints change, or something like that. Maybe you could just watch the Endpoints objects themselves? So there's some research to be done there.

I think there are two choices here:

  1. Create a separate deployment (or job, or whatever) that is responsible for updating the backends in the VCL ConfigMap.
  2. Create a Kubernetes Service object with the external endpoints defined and have the operator get the backends from that set of endpoints (sketched below). However, how do the Service endpoint IPs get updated?
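A sketch of option 2: a selector-less Service plus a manually maintained Endpoints object pointing at IPs outside the cluster. The names and addresses are made up, and something would still have to keep the Endpoints object up to date:

  apiVersion: v1
  kind: Service
  metadata:
    name: external-origin          # hypothetical
  spec:
    ports:
      - port: 8080
  ---
  apiVersion: v1
  kind: Endpoints
  metadata:
    name: external-origin          # must match the Service name
  subsets:
    - addresses:
        - ip: 203.0.113.10         # external origin IPs (example addresses)
        - ip: 203.0.113.11
      ports:
        - port: 8080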

Either way, there needs to be something that runs outside the operator to obtain and update the endpoint IPs. Am I missing anything here? Is there maybe a better way to do it? Otherwise, I think I may have just talked myself into thinking this should be maintained outside of the operator. I'm definitely open to hearing more (hopefully better :) ) ideas.

@cin sorry for the delayed response.
I think watching the (proxy) Service (of type ExternalName, or one containing externalIPs) for changes and then updating the VCL could be a suitable approach. The externalIPs/name in the Service would be managed by the cluster administrator.
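For completeness, this is what an ExternalName Service of the kind mentioned above could look like; the operator (or whatever watches it) would have to re-resolve the name and rewrite the VCL when it changes. Names are hypothetical:

  apiVersion: v1
  kind: Service
  metadata:
    name: origin-proxy             # hypothetical
  spec:
    type: ExternalName
    externalName: origin.example.com   # DNS name of the external backend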