Make it possible to export multiple sets of metrics in the same process
manueljacob opened this issue · 1 comment
Use case
There are multiple instances of an application (running on different servers). Some metrics are instance-specific (e.g. about the requests handled by a specific instance). Some metrics are not instance-specific (those are based on data from a single database). In our case, the application is a Rails application, but that shouldn’t matter much for this feature request.
For the instance-specific metrics, each instance is scraped (in our case, by Prometheus). The metrics that are not instance-specific should be exported by each instance, but only a single instance should be scraped at a given time (in our case, there is load balancing by Kubernetes, but the details don’t matter here).
Current solutions
If we export both sets of metrics on each instance, the same data (metrics that are not instance-specific) will be collected multiple times by Prometheus, wasting resources and making handling of the data more complicated.
We could launch separate processes to export the metrics that are not instance-specific, but that complicates deployment and uses extra resources.
Desired solution
It should be possible to export both sets of metrics in the same process but on different ports and / or paths.
Proposed feature
It should be possible to create multiple instances of the `Yabeda` class that can each be configured separately.
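To make the proposal concrete, here is a hypothetical sketch of what such an API could look like. Yabeda is currently a process-wide singleton, so everything below (`Yabeda::Registry.new`, the per-registry `exporter`, the mount paths) is invented for illustration and does not exist yet:

```ruby
# Hypothetical API sketch -- none of this exists in Yabeda today.

instance_metrics = Yabeda::Registry.new do
  # Instance-specific: incremented by each process handling requests.
  counter :requests_total, comment: "Requests handled by this instance"
end

app_metrics = Yabeda::Registry.new do
  # Not instance-specific: sourced from the shared database on scrape.
  gauge :pending_jobs, comment: "Jobs pending in the shared database"
  collect { pending_jobs.set({}, Job.pending.count) }
end

# Each registry could then be exported on its own path or port, e.g.:
#   mount instance_metrics.exporter, at: "/metrics"
#   mount app_metrics.exporter,      at: "/cluster_metrics"
```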
Hey, thanks for writing the issue!
So, you want to be able to expose two metrics endpoints from every process/pod on different paths or ports:
- one serving only per-process or per-pod metrics (all counters incremented and histograms measured, plus maybe some subset of `collect` blocks), scraped directly
- one serving only per-application metrics (basically only executing `collect` blocks, but maybe not all of them), scraped through a k8s service or balancer.
Am I right?
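A minimal sketch of that two-endpoint split, with plain lambdas standing in for real Yabeda exporters (the paths and metric names here are assumptions, not Yabeda API):

```ruby
# One process, two metrics endpoints distinguished by path.
# The lambdas are placeholders for whatever renders each metrics set.

render_instance_metrics = -> { "requests_total 42\n" }  # per-process/pod metrics
render_cluster_metrics  = -> { "db_pending_jobs 7\n" }  # shared, db-sourced metrics

ENDPOINTS = {
  "/metrics"         => render_instance_metrics,  # scraped directly on each pod
  "/cluster_metrics" => render_cluster_metrics,   # scraped via k8s service/balancer
}

# Route a request path to the matching metrics renderer.
def respond(path)
  handler = ENDPOINTS[path]
  handler ? [200, handler.call] : [404, "not found\n"]
end
```

In a real deployment the same split could be done by port instead of path; the point is only that both handlers live in one process.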
For now, you can take a look at how the same problem (instance-specific vs. common database-sourced metrics) is solved in yabeda-sidekiq via the `collect_cluster_metrics` config flag. It is meant to be used specifically in this workaround with a separate metrics exporter process:
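As a configuration sketch of that workaround: `collect_cluster_metrics` is the flag named above, but the exact configuration mechanism shown here is an assumption, so check the yabeda-sidekiq README for the precise setting:

```ruby
# In each regular application instance:
# export only per-process Sidekiq metrics.
Yabeda::Sidekiq.config.collect_cluster_metrics = false

# In a single dedicated exporter process:
# collect the cluster-wide, Redis/database-sourced metrics exactly once,
# so Prometheus does not scrape the same data from every instance.
Yabeda::Sidekiq.config.collect_cluster_metrics = true
```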
> We could launch separate processes to export the metrics that are not instance-specific, but that complicates deployment and uses extra resources.
For now, this is the only way to handle this.