Support UI behind proxy
nmnellis opened this issue · 4 comments
We have a number of these apps running behind an Istio ingress gateway, but we cannot get to each service's UI because each application is at its own path. Example:

Application 1: `/app1/ui`
Application 2: `/app2/ui`

The JSON assets and redirects all expect `/ui` to be at the root.
We need something like this: https://dev.to/n1ru4l/configure-the-cra-public-url-post-build-with-node-js-and-express-4n8
Hey,
In this instance the standard approach would be to do a URI rewrite in the proxy; this way Envoy would modify the given request `/app1/ui`, changing the path to `/ui` before forwarding the request to fake-service.
I am not against making the UI path in fake-service configurable, but a rewrite is closer to a production use case.
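For reference, that kind of URI rewrite can be expressed in Istio with a `VirtualService`. The sketch below is an assumption about the setup: the names `app1`, the gateway, and the destination host are placeholders, not taken from the original deployment:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app1                        # hypothetical name
spec:
  hosts:
    - "*"
  gateways:
    - istio-ingressgateway          # hypothetical gateway name
  http:
    - match:
        - uri:
            prefix: /app1/ui
      rewrite:
        uri: /ui                    # strip the /app1 prefix before forwarding
      route:
        - destination:
            host: app1              # hypothetical fake-service Service
            port:
              number: 9090
```

With this in place, Envoy forwards `/app1/ui` to fake-service as `/ui`, which is exactly the rewrite described above.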
Even with that, this does not work, because the UI itself makes requests to `/ui`
from the browser. We cannot differentiate one UI from another within the same cluster because of this.
I think this is because the UI is not picking up the relative path and is probably using `/`. Let me take a look.
@nmnellis on my side I was able to overcome this without touching the original fake-service code.
In fact, I found it not at all obvious to modify the embedded Go UI code, especially if it also had to include some kind of "intelligence" based on headers, as in this Traefik case where X-Forwarded-Prefix could be used.
Therefore I went with another, easier solution: rebundling the fake-service Docker container to include the original, unmodified fake-service code together with an Nginx instance with proper rewrite rules. You can find my working code here.
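The rewrite rules in such an Nginx front end could look roughly like this; this is a sketch only, assuming fake-service listens on `127.0.0.1:9090` and the prefix is `/app1` (neither detail is copied from the linked repository):

```nginx
server {
    listen 80;

    # Strip the /app1 prefix so the embedded UI sees the /ui paths it expects
    location /app1/ {
        rewrite ^/app1/(.*)$ /$1 break;
        proxy_pass http://127.0.0.1:9090;   # fake-service LISTEN_ADDR (assumed)
        proxy_set_header Host $host;
    }
}
```

Because the rewrite happens inside the container, the UI's own requests to `/ui` also resolve correctly when the container is reached through its prefixed path.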
It works like a charm, and you can try it yourself with the following docker-compose file if you like:
```yaml
---
version: "3.3"
services:
  web:
    image: obourdon/fake-service:0.23.2-test
    environment:
      LISTEN_ADDR: 0.0.0.0:9090
      UPSTREAM_URIS: "http://api:9090"
      MESSAGE: "Hello from web"
      NAME: "web"
      SERVER_TYPE: "http"
      TIMING_50_PERCENTILE: 30ms
      TIMING_90_PERCENTILE: 60ms
      TIMING_99_PERCENTILE: 90ms
      TIMING_VARIANCE: 10
      TRACING_ZIPKIN: "http://jaeger:9411"
    ports:
      - "9090:9090"
      - "8080:80"
  api:
    image: obourdon/fake-service:0.23.2-test
    environment:
      ALLOW_CLOUD_METADATA: "true"
      LISTEN_ADDR: 0.0.0.0:9090
      UPSTREAM_URIS: "grpc://currency:9090, http://cache:9090/abc/123123, http://payments:9090"
      UPSTREAM_WORKERS: 2
      MESSAGE: "API response"
      NAME: "api"
      SERVER_TYPE: "http"
      TIMING_50_PERCENTILE: 20ms
      TIMING_90_PERCENTILE: 30ms
      TIMING_99_PERCENTILE: 40ms
      TIMING_VARIANCE: 10
      HTTP_CLIENT_APPEND_REQUEST: "true"
      TRACING_ZIPKIN: "http://jaeger:9411"
    ports:
      - "8081:80"
  cache:
    image: obourdon/fake-service:0.23.2-test
    environment:
      LISTEN_ADDR: 0.0.0.0:9090
      MESSAGE: "Cache response"
      NAME: "cache"
      SERVER_TYPE: "http"
      TIMING_50_PERCENTILE: 1ms
      TIMING_90_PERCENTILE: 2ms
      TIMING_99_PERCENTILE: 3ms
      TIMING_VARIANCE: 10
      TRACING_ZIPKIN: "http://jaeger:9411"
    ports:
      - "8082:80"
  payments:
    image: obourdon/fake-service:0.23.2-test
    environment:
      LISTEN_ADDR: 0.0.0.0:9090
      UPSTREAM_URIS: "grpc://currency:9090"
      MESSAGE: "Payments response"
      NAME: "payments"
      SERVER_TYPE: "http"
      TRACING_ZIPKIN: "http://jaeger:9411"
      HTTP_CLIENT_APPEND_REQUEST: "true"
    ports:
      - "8083:80"
  # Will throw errors for 20% of all requests
  currency:
    image: obourdon/fake-service:0.23.2-test
    environment:
      LISTEN_ADDR: 0.0.0.0:9090
      MESSAGE: "Currency response"
      NAME: "currency"
      SERVER_TYPE: "grpc"
      ERROR_RATE: 0.2
      ERROR_CODE: 14
      ERROR_TYPE: "http_error"
      TRACING_ZIPKIN: "http://jaeger:9411"
    ports:
      - "8084:80"
  jaeger:
    image: jaegertracing/all-in-one:1.13
    environment:
      COLLECTOR_ZIPKIN_HTTP_PORT: 9411
    ports:
      - "5775:5775/udp"
      - "6831:6831/udp"
      - "6832:6832/udp"
      - "5778:5778"
      - "16686:16686"
      - "14268:14268"
      - "9411:9411"
```
You can then `curl` either the original 9090 port or the 808x ones, with or without `/ui`, even behind a proxy.
HTH