[request]: Cached /metrics result
victoramsantos opened this issue · 3 comments
Use case. Why is this important?
I'm working at a company where we are already hitting the AWS API quota limits for CloudWatch. We are considering ways to reduce these calls without impacting the user experience, such as removing metrics or greatly increasing `period_seconds` for all metrics.
I want to discuss whether it would be worthwhile to add a caching option to cloudwatch-exporter, e.g. a TTL: even if further requests hit /metrics, we would keep answering from the cache until the TTL expires, then make another request to collect fresh metrics, cache the new answer, and repeat the process.
This could cut our requests in half (since we have 2 Prometheus replicas running).
Is this a desirable feature that we could spend some time on?
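The proposed behavior can be sketched in a few lines. This is a hypothetical illustration, not exporter code: a cached /metrics response is served until its TTL expires, and only then is a fresh scrape made.

```python
import time


class TTLCache:
    """Minimal sketch of the proposed /metrics caching (illustrative only)."""

    def __init__(self, ttl_seconds, fetch):
        self.ttl = ttl_seconds
        self.fetch = fetch      # callable that performs the real CloudWatch scrape
        self.cached = None
        self.expires_at = 0.0

    def get(self):
        now = time.monotonic()
        if self.cached is None or now >= self.expires_at:
            self.cached = self.fetch()       # one upstream scrape per TTL window
            self.expires_at = now + self.ttl
        return self.cached                   # every other request is served from cache
```

With this shape, two Prometheus replicas scraping inside the same TTL window would trigger only one round of CloudWatch calls.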
Caching of the /metrics
response would more likely be implemented as caching of specific CloudWatch API calls. For example, ListMetrics
caching was added in #453.
Something similar could be done for the actual metric-fetching calls as well.
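Per-call caching could look roughly like the following sketch. It is hypothetical (the exporter only caches ListMetrics today); responses are keyed by the call's parameters so each distinct metric query is cached independently.

```python
import time


class PerCallCache:
    """Sketch: cache individual CloudWatch API responses keyed by call
    parameters (hypothetical, not the exporter's actual implementation)."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}   # key -> (expires_at, response)

    def get_or_fetch(self, key, fetch):
        now = time.monotonic()
        entry = self.entries.get(key)
        if entry is not None and now < entry[0]:
            return entry[1]              # still fresh: no API call
        response = fetch()               # expired or missing: call CloudWatch
        self.entries[key] = (now + self.ttl, response)
        return response
```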
As a workaround, it's already possible to implement this with any caching reverse proxy. For example, it's pretty easy to do with an Envoy sidecar; this is what we do in production.
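As an illustration of the reverse-proxy workaround, here is an nginx equivalent (the commenter uses Envoy; nginx is shown only because its caching config is compact, and the ports and paths are assumptions):

```nginx
# Illustrative caching sidecar: Prometheus scrapes :9107, which serves a
# cached copy of the exporter's /metrics for up to 60 seconds.
proxy_cache_path /var/cache/nginx keys_zone=metrics_cache:10m;

server {
    listen 9107;

    location /metrics {
        proxy_pass http://127.0.0.1:9106;     # assumed exporter address
        proxy_cache metrics_cache;
        proxy_cache_valid 200 60s;            # TTL for successful scrapes
    }
}
```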
@matthiasr I was thinking TTLs could be configured with more granularity, caching some metric data longer than others.
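A granular TTL scheme could be as simple as a lookup table; the namespaces and values below are made-up examples of how slow-moving data could be cached longer than volatile data.

```python
# Hypothetical per-namespace TTLs (seconds); not an existing exporter option.
TTL_BY_NAMESPACE = {
    "AWS/S3": 3600,   # bucket sizes change slowly
    "AWS/ELB": 60,    # request rates change quickly
}
DEFAULT_TTL = 300


def ttl_for(namespace):
    """Pick the cache TTL for a metric, falling back to a default."""
    return TTL_BY_NAMESPACE.get(namespace, DEFAULT_TTL)
```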