Old information purge strategy
This issue is to discuss what should be done in the Alignak backend to manage old information.
When your Alignak instance runs for a while, some Alignak backend DB collections become really huge (e.g. `logcheckresult` or `history`).
Several approaches could be taken to manage this:
- do not care about it
- let the DBA handle this problem
- include a simple purge script in the backend logic
It looks interesting to include a purge strategy in the Alignak backend, to make operating it easier...
As of now, all the check results are stored in a single collection (`logcheckresult`), as are all the events (`history`).
Two solutions:
- create a new scheduled task that will periodically fetch and remove old items
- store check results in daily-named mongo collections
The pros and cons of each solution are discussed below.
Solution 1: unique collection and scheduled task
At first glance, this solution looks like the simplest one.
The Alignak backend is Eve-based, and Eve creates a single collection for each API endpoint. We keep this logic if we keep a single collection for the check results.
Some scheduled tasks already exist, so we would only add one more. This new task would use mongo queries to remove old items from the collection. Something like:

```js
db.logcheckresult.remove({insertDate: {$lt: deletionDate}});
```
The drawback is that removing items will create some load and I/O on the database.
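As a rough illustration, here is a minimal sketch of what such a task could look like in Python with pymongo. The retention period, connection settings, DB name and the `_created` field (Eve's default creation timestamp, standing in for the `insertDate` placeholder above) are assumptions for the example, not existing backend configuration:

```python
# Minimal sketch of a periodic purge task.
# Assumptions: pymongo client, Eve's default `_created` timestamp,
# illustrative retention period and DB name.
from datetime import datetime, timedelta

from pymongo import MongoClient

RETENTION_DAYS = 90  # assumed retention period, to be made configurable


def purge_old_check_results(mongo_uri="mongodb://localhost:27017",
                            db_name="alignak-backend"):
    """Remove logcheckresult items older than the retention period."""
    db = MongoClient(mongo_uri)[db_name]
    deletion_date = datetime.utcnow() - timedelta(days=RETENTION_DAYS)
    # Same query as the mongo shell example above, expressed with pymongo
    result = db.logcheckresult.delete_many({"_created": {"$lt": deletion_date}})
    return result.deleted_count
```

Hooked into the existing scheduled tasks, this would run periodically (e.g. once a day).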
Solution 2: daily collection
This solution looks more complex to implement, but it is the most DBA-friendly.
A new collection is created for the check results each day, resulting in one collection per day. Old collections can then simply be dropped, without any load or much I/O on the DB.
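As a sketch of what this could look like with pymongo (the `logcheckresult-YYYYMMDD` naming scheme and the retention period are assumptions for illustration):

```python
# Minimal sketch of daily-named collections.
# Assumptions: pymongo client, a `logcheckresult-YYYYMMDD` naming scheme,
# illustrative retention period.
from datetime import datetime, timedelta

from pymongo import MongoClient

RETENTION_DAYS = 90  # assumed retention period, to be made configurable


def daily_collection(db, day=None):
    """Return the check results collection for the given (or current) day."""
    day = day or datetime.utcnow()
    return db["logcheckresult-%s" % day.strftime("%Y%m%d")]


def drop_old_collections(db):
    """Drop daily collections past the retention period: dropping a whole
    collection is a cheap metadata operation, unlike a large remove()."""
    cutoff = datetime.utcnow() - timedelta(days=RETENTION_DAYS)
    for name in db.list_collection_names():
        if not name.startswith("logcheckresult-"):
            continue
        day = datetime.strptime(name.split("-", 1)[1], "%Y%m%d")
        if day < cutoff:
            db.drop_collection(name)
```

The trade-off is that queries spanning several days have to target several collections, which is the extra implementation complexity this solution exchanges for cheap purges.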