Alignak-monitoring-contrib/alignak-backend

Old information purge strategy


This issue is to discuss what is to be done in the Alignak backend to manage old information

When your Alignak instance runs for a while, some Alignak backend DB collections can become really huge (e.g. logcheckresult or history).
Several solutions may take place to manage this:

  • do not care about it 😞
  • let the DBA handle this problem 🤝
  • include a simple purge script into the backend logic 👍

It looks interesting to include a purge strategy in the Alignak backend to make it easier to operate...

As of now, all the check results and events are stored in single collections: logcheckresult and history.

Two solutions:

  • create a new scheduled task that will periodically fetch and remove old items
  • store check results in daily named mongo collections

Pros and cons of each solution are to be discussed.

Solution 1: unique collection and scheduled task

At first sight, this solution looks the simplest.

The Alignak backend is Eve-based, and Eve creates one collection per API endpoint. We keep this logic if we keep a unique collection for the check results.

Some scheduled tasks already exist, so we only need to add one more. This new task will use mongo queries to remove old items from the collection. Something like:

db.logcheckresult.remove({insertDate : {$lt : deletionDate}});

The drawback is that removing the items will create some load and I/O on the database 👎
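The scheduled task above could be sketched as follows with pymongo. This is only an illustration, not the backend's actual code: the RETENTION_DAYS value and the insertDate field are assumptions taken from the shell query above (the real backend may well use Eve's default _created timestamp and a configurable retention setting).

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention period; in the real backend this would be a setting.
RETENTION_DAYS = 90

def build_purge_filter(now=None, retention_days=RETENTION_DAYS):
    """Build the Mongo filter matching items older than the retention window.

    Equivalent to the shell query:
        db.logcheckresult.remove({insertDate: {$lt: deletionDate}})
    The `insertDate` field name is an assumption from that query.
    """
    now = now or datetime.now(timezone.utc)
    deletion_date = now - timedelta(days=retention_days)
    return {"insertDate": {"$lt": deletion_date}}

def purge_old_items(collection, retention_days=RETENTION_DAYS):
    """Remove old documents; `collection` is a pymongo Collection."""
    result = collection.delete_many(build_purge_filter(retention_days=retention_days))
    return result.deleted_count
```

Note that delete_many() scans and removes documents one by one, which is exactly the load/IO drawback mentioned above.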

Solution 2: daily collection

This solution looks more complex to implement, but it is the most DBA-friendly.

A new collection is created each day for the check results, resulting in one collection per day. Old collections can then simply be dropped, without any noticeable load or I/O on the DB 👍
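The daily-collection scheme could be sketched like this. The naming convention (logcheckresult_YYYYMMDD) and the retention period are hypothetical choices for illustration, not something the backend defines:

```python
from datetime import datetime, timedelta, timezone

def daily_collection_name(day, base="logcheckresult"):
    """Name of the daily collection holding check results for `day`.

    The `base_YYYYMMDD` convention is an assumption, not backend code.
    """
    return "%s_%s" % (base, day.strftime("%Y%m%d"))

def collections_to_drop(names, now, retention_days=90, base="logcheckresult"):
    """Return the daily collections older than the retention window.

    `names` is what db.list_collection_names() would return. Dropping a
    whole collection is a cheap operation for MongoDB, unlike a mass
    document removal on a single big collection.
    """
    cutoff = (now - timedelta(days=retention_days)).strftime("%Y%m%d")
    prefix = base + "_"
    return sorted(n for n in names
                  if n.startswith(prefix) and n[len(prefix):] < cutoff)
```

A purge task would then just call db.drop_collection() on each returned name, which is where the "no load, little I/O" advantage comes from.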