AI-Hypercomputer/JetStream

Question: `prometheus_port` flag for pytorch server


https://github.com/AI-Hypercomputer/JetStream/blob/main/docs/observability-prometheus-metrics-in-jetstream-server.md only mentions the `prometheus_port` flag when JetStream runs with `maxengine_server`.

However, no such option exists under https://github.com/AI-Hypercomputer/jetstream-pytorch. Does jetstream-pytorch also expose Prometheus metrics for the inference server, and if so, how is the port configured?
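For context, a Prometheus metrics endpoint is just an HTTP server returning metrics in the text exposition format on `/metrics`. Purely as an illustration of what the `prometheus_port` flag enables (all names here are hypothetical and not from the JetStream codebase), a minimal stdlib-only sketch:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


def render_gauge(name: str, value: float, help_text: str) -> str:
    """Render one gauge in the Prometheus text exposition format."""
    return (
        f"# HELP {name} {help_text}\n"
        f"# TYPE {name} gauge\n"
        f"{name} {value}\n"
    )


class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            # Hypothetical server-side gauge; a real server would
            # report live inference stats here.
            body = render_gauge(
                "jetstream_slots_used", 3, "Hypothetical example gauge"
            ).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Silence per-request logging for the sketch.
        pass


def serve_metrics(port: int) -> HTTPServer:
    """Start the metrics endpoint on a background thread; port=0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), MetricsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In practice the server would use the `prometheus_client` library rather than hand-rolling the format, but the flag's effect is the same: pick the port this endpoint listens on.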