ooni/sysadmin

Prometheus / Grafana: store data for longer times

FedericoCeratto opened this issue · 10 comments

Investigate solutions for long term storage, ideally > 1y.

@SuperQ do you have some tips on what we can do for this? Is it recommended to extend the Prometheus retention time from 15d to, say, 300+ days?

Yes, it's totally fine to change the Prometheus retention to 365d. Things to consider:

  • Backups (Prometheus provides a snapshot API; see the sketch below)
  • Capacity planning for storage
  • Capacity planning for memory
  • Queries that need to pull in a year of data
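
For the backup item, here is a minimal sketch of driving that snapshot API from Python (the localhost:9090 address and the requests library are assumptions; the endpoint and flag are standard Prometheus):

# Sketch: trigger a TSDB snapshot for backups.
# Assumes Prometheus is reachable on localhost:9090 and was started with
# --web.enable-admin-api (the snapshot endpoint is disabled by default).
import requests

resp = requests.post("http://localhost:9090/api/v1/admin/tsdb/snapshot")
resp.raise_for_status()
snapshot_name = resp.json()["data"]["name"]

# The snapshot lands under <storage.tsdb.path>/snapshots/<name>;
# copy that directory off-host with whatever backup tooling you already use.
print("snapshot created:", snapshot_name)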

The last two are probably the trickiest. It's easy enough for Prometheus to store the data, but depending on the index sizes and queries you want to run over a long period of time, you will need more memory on the Prometheus server to query all that data.

One of the things that can help a lot here is to have recording rules that summarize the data you want to query over a long period of time. For example, if you have data scraped every 15 seconds, a recording rule with a 1 minute interval that produces fewer metrics can save an order of magnitude at query time.

For example, node_cpu_seconds_total can have quite a lot of metrics, but if you only care about node-level CPU utilization, a recording rule leaves far fewer metrics to look at. A single per-node utilization series at a 1 minute recording interval needs about 525k samples to cover a full year, which is a lot less than per-cpu, per-mode data at 15 seconds.

The default Prometheus query limiter is set to 50 million samples per query (--query.max-samples=50000000). To query all this data, you'll need about 100MiB of temporary memory for this very large query.
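
To make that order-of-magnitude point concrete, here is a back-of-the-envelope comparison (the 8 CPUs and 8 modes are illustrative assumptions, not measured OONI numbers):

# Rough sample-count comparison for one year of node CPU data.
# Assumed shape: 8 CPUs x 8 modes of node_cpu_seconds_total on one host
# (illustrative numbers, not measured on the OONI host).
SECONDS_PER_YEAR = 365 * 24 * 3600

raw_series = 8 * 8                                    # per-cpu, per-mode series
raw_samples = raw_series * SECONDS_PER_YEAR // 15     # scraped every 15s
recorded_samples = SECONDS_PER_YEAR // 60             # one series at a 1m interval

print(f"raw per-cpu/per-mode samples per year: {raw_samples:,}")       # ~134.5 million, above the 50M default limit
print(f"downsampled recording rule samples:    {recorded_samples:,}")  # 525,600
print(f"reduction: ~{raw_samples // recorded_samples}x")               # 256x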

@SuperQ thanks for the detailed response. What is your take on using something like https://thanos.io/?

It seems to support putting the metrics data in a different storage system, like an object store, which could work well for us.

Thanos or Cortex are both good options for external long-term storage. I use Thanos at work, as we're running in GCP and can use the GCS object storage, and we're mostly in one region so query latency/bandwidth to the individual Prometheus servers isn't a problem.

I don't remember what the Prometheus server setup is like for Ooni. Is there more than one? How widely distributed?

We currently have a single host doing the scraping, metrics storage and charting.

See: https://github.com/ooni/sysadmin/blob/master/ansible/deploy-prometheus.yml

With just a single host, adding Thanos would be overcomplicated and unnecessary. The Prometheus TSDB is just fine for that kind of setup. Things like Thanos are good for when you have many Prometheus servers spread over a large network.

@SuperQ can you please clarify how this is achieved: "have recording rules that summarize the data you want to query over a long period of time"? I'm looking at various issues in the Prometheus repository and it seems that downsampling is not supported and out of scope.

You can write a recording rule like this:

groups:
- name: CPU rules
  interval: 1m
  rules:
  # CPU in use ratio.
  - record: instance:node_cpu_utilization:ratio
    expr: >
      1 -
      avg without (cpu,mode) (
        rate(node_cpu_seconds_total{mode="idle"}[1m])
      )

This will create a single per-instance downsampled CPU utilization metric. This metric will contain less granular data, making it easier to query over a long period of time. This works fine for small installations like the Ooni project.
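
As an illustration, a sketch of pulling a full year of that recorded series back out through the standard HTTP API (the localhost:9090 address and the 1h step are assumptions; the metric name matches the rule above):

# Sketch: query a year of the downsampled CPU utilization series via
# Prometheus' /api/v1/query_range endpoint. The 1h step keeps the result
# small even though the range covers a full year.
import time
import requests

end = time.time()
start = end - 365 * 24 * 3600

resp = requests.get(
    "http://localhost:9090/api/v1/query_range",
    params={
        "query": "instance:node_cpu_utilization:ratio",
        "start": start,
        "end": end,
        "step": "1h",
    },
)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    print(series["metric"].get("instance"), len(series["values"]), "points")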

Once you get into multiple Prometheus servers with many millions of metrics, things like Thanos can be added to provide additional scalability.

As an MVP for this cycle we will bump the retention to 30 days, check how much the storage usage increases, and then re-assess.
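
A rough way to sanity-check the storage increase when re-assessing, using the sizing rule of thumb from the Prometheus storage documentation (the ingestion rate below is a placeholder to replace with the real number from the server; retention itself is set with the --storage.tsdb.retention.time flag):

# Back-of-the-envelope disk estimate for a given retention period:
#   needed_bytes ~= retention_seconds * ingested_samples_per_second * bytes_per_sample
# The Prometheus docs suggest roughly 1-2 bytes per sample on disk. The
# ingestion rate here is a placeholder -- read the real value from
# rate(prometheus_tsdb_head_samples_appended_total[5m]) on the server.
RETENTION_DAYS = 30
INGESTED_SAMPLES_PER_SECOND = 5_000   # placeholder, not a measured OONI figure
BYTES_PER_SAMPLE = 2                  # conservative end of the 1-2 byte range

needed_bytes = RETENTION_DAYS * 24 * 3600 * INGESTED_SAMPLES_PER_SECOND * BYTES_PER_SAMPLE
print(f"~{needed_bytes / 1e9:.1f} GB of TSDB storage for {RETENTION_DAYS}d retention")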

We have bumped it up to 30 days for the time being. Let's see how it goes.