long status file parse time causes incomplete object with background parsing enabled
Closed this issue · 2 comments
We have a huge status file (16 MB), so I was hoping that background parsing would help us a lot. Unfortunately I had to roll back due to an issue when API calls are made while nagios[:status] is still being populated: this causes 404 errors in cases where we get hostgroups and then query hostgroup members for a particular service (one that is present on every hostgroup member).
I tried to fix this issue with a simple mutex.synchronize around parse! in the background thread and a matching mutex.synchronize in the "before do" filter, but that essentially disables background parsing, because all HTTP threads have to wait for the background thread to finish parsing.
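For illustration, here is a minimal sketch of that coarse-lock attempt. The names (`PARSE_LOCK`, `background_parse`, `before_filter`) are hypothetical, and `nagios.parse!` stands in for the real ruby-nagios parse call; the point is only that one shared Mutex serializes every request behind the parse:

```ruby
require 'thread'

# Hypothetical sketch of the coarse-lock approach described above: a single
# Mutex shared by the background parser and every HTTP request. While parse!
# holds the lock, all request threads block, so background parsing buys nothing.
PARSE_LOCK = Mutex.new

def background_parse(nagios)
  Thread.new do
    loop do
      # Long hold on a 16 MB file: every request waits here.
      PARSE_LOCK.synchronize { nagios.parse! }
      sleep 30
    end
  end
end

# In a Sinatra app this would live in a `before do` filter:
def before_filter
  PARSE_LOCK.synchronize { }  # blocks for the full duration of any parse
end
```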
The right way to deal with this situation would be to keep an active nagios[:status] instance and a "being populated" instance, and swap them (or a reference) once parsing is complete. I'll try to implement something like this myself, but I wonder if Dmytro has any better ideas on how to tackle this problem.
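A minimal sketch of that double-buffer idea, assuming a hypothetical `StatusStore` wrapper (the class and `parse_status_file` are illustrative, not ruby-nagios API): readers always see a complete snapshot, and the lock guards only the reference swap, never the long parse itself.

```ruby
require 'thread'

# Hypothetical double-buffered status store: the background thread parses into
# a fresh hash while readers keep using the previous complete one, then the
# reference is swapped under a briefly-held lock.
class StatusStore
  def initialize
    @active = {}        # readers always see a fully populated snapshot
    @lock = Mutex.new   # guards only the reference swap
  end

  # Readers grab the current snapshot; they are never blocked by a parse.
  def status
    @lock.synchronize { @active }
  end

  # Background thread: parse without holding the lock, then swap.
  def refresh(path)
    fresh = parse_status_file(path)       # long-running, lock-free
    @lock.synchronize { @active = fresh } # near-instant swap
  end

  private

  def parse_status_file(path)
    # Stand-in for the real status.dat parser; returns a new hash each time.
    {}
  end
end
```

With this shape, an API request during a parse simply sees the previous snapshot instead of a half-built hash, which avoids the 404s described above.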
> The right way to deal with this situation would be to keep an active nagios[:status] instance and a "being populated" instance, and swap them (or a reference) once parsing is complete.
To tell the truth, I don't see any better solution than this.
Another option is not deleting the parsed status data before the next parse (this is implemented in ruby-nagios now). In that case the older status is still available while the new one is being parsed. However, this leads to a situation where hosts that have been deleted from Nagios still remain in the nagios[:status] hash.
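The stale-host side effect can be shown with plain hashes. This is only an illustration of the merge-without-clearing behavior (the host names and structure are made up, not real ruby-nagios data):

```ruby
# Hypothetical example of "merge, don't clear": the new parse overwrites
# matching keys, but a host removed from Nagios lingers in the old data.
old_status = { "web1" => { state: 0 }, "db1" => { state: 0 } } # db1 since deleted from Nagios
new_parse  = { "web1" => { state: 2 } }                        # latest parse no longer has db1

merged = old_status.merge(new_parse)
merged.key?("db1")  # => true — stale host still present

# One way to prune after a parse completes: keep only keys the latest parse saw.
pruned = merged.select { |host, _| new_parse.key?(host) }
pruned.keys  # => ["web1"]
```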
Thank you. Appreciate your help.