EMonitorLockException in ThreadSafeQueueU.pas when logging from thread
Closed this issue · 2 comments
Hi Daniele,
After receiving an EurekaLog crash report from a customer, I tried to analyse the problem. I have not been able to reproduce it on my PC yet, and it seems to occur only sporadically on the customer's computer, so it looks like a typical race condition. Here are the relevant lines of the EurekaLog report:
EMonitorLockException: Object lock not owned.

```
|005B7380|Application.exe|ThreadSafeQueueU.pas         |{ThreadSafeQueueU}TThreadSafeQueue<LoggerPro|TLogItem>.Enqueue |157[0]|
|005B61B5|Application.exe|LoggerPro.pas                |TLogWriter |Log               |557[5]|
|0110346B|Application.exe|System.Generics.Defaults.pas |           |                  | 39[0]|
|010FF5A2|Application.exe|Update.Index.pas             |TUpdater   |WaitForCompletion |845[4]|
|010FF560|Application.exe|Update.Index.pas             |TUpdater   |WaitForCompletion |841[0]|
```
We have the following scenario: the main thread starts a background task (an IFuture), which builds an index of files for later use. Both the main thread and the background task log their status with LoggerPro. When the main thread needs the index, it calls the following routine:
```pascal
function TUpdater.WaitForCompletion: Boolean;
begin
  Log.Enter('Updater.WaitForCompletion', 'INDEX');
  // Waits until the IFuture task is finished and returns its status.
  // The calling thread is blocked while waiting.
  Result := (FTask.Value = stLoaded);
  Log.Exit('Updater.WaitForCompletion', 'INDEX');
end;
```
Log.Enter and Log.Exit are just two wrapper functions around Log.Debug that I have added to LoggerPro. The exception occurred on the Log.Exit call.
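For context, the two wrappers are only thin shims over Log.Debug, roughly like this (a minimal sketch; the exact signatures in my modified LoggerPro may differ):

```pascal
// Hypothetical wrappers added to TLogWriter; they just prefix the
// message with an arrow so Enter/Exit pairs are visible in the log.
procedure TLogWriter.Enter(const aMethodName, aTag: string);
begin
  Debug('--> ' + aMethodName, aTag);
end;

procedure TLogWriter.Exit(const aMethodName, aTag: string);
begin
  Debug('<-- ' + aMethodName, aTag);
end;
```

So any failure inside them is really a failure inside the normal Log.Debug path.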
Normally the task takes some time, so the typical log output is:
```
2022-04-21 15:22:13:937 [TID 16780][INFO ] [INDEX ] Updater.Update in: C:\Users\Public\Documents\#####
2022-04-21 15:22:13:937 [TID 16780][DEBUG ] [INDEX ] --> Updater.StopUpdate
2022-04-21 15:22:13:937 [TID 16780][DEBUG ] [INDEX ] <-- Updater.StopUpdate
2022-04-21 15:22:13:987 [TID 16780][INFO ] [INDEX ] Contains 240 Files
2022-04-21 15:22:13:987 [TID 16780][INFO ] [INDEX ] Contains 4931 Entries
2022-04-21 15:22:13:987 [TID 3280][DEBUG ] [INDEX ] Renewing Index
2022-04-21 15:22:13:987 [TID 16780][DEBUG ] [INDEX ] --> Updater.WaitForCompletion
2022-04-21 15:22:14:080 [TID 3280][INFO ] [INDEX ] Remove 0 Files
2022-04-21 15:22:14:080 [TID 3280][INFO ] [INDEX ] Check 238 xxx Files
2022-04-21 15:22:14:080 [TID 3280][INFO ] [INDEX ] Check 2 yyy Files
2022-04-21 15:22:14:242 [TID 3280][INFO ] [INDEX ] Contains 240 Files
2022-04-21 15:22:14:242 [TID 3280][INFO ] [INDEX ] Contains 4931 Entries
2022-04-21 15:22:14:242 [TID 3280][DEBUG ] [INDEX ] Updater.Status - stLoaded
2022-04-21 15:22:14:242 [TID 16780][DEBUG ] [INDEX ] <-- Updater.WaitForCompletion
```
... Normal work continues here
This is the log output when there is "nothing" to do:
```
2022-04-21 14:18:14:972 [TID 22208][INFO ] [INDEX ] Updater.Update in: C:\Users\Public\Documents\#####
2022-04-21 14:18:14:972 [TID 22208][DEBUG ] [INDEX ] --> Updater.StopUpdate
2022-04-21 14:18:14:972 [TID 22208][DEBUG ] [INDEX ] <-- Updater.StopUpdate
2022-04-21 14:18:14:973 [TID 22208][INFO ] [INDEX ] Contains 0 Files
2022-04-21 14:18:14:973 [TID 22208][INFO ] [INDEX ] Contains 0 Entries
2022-04-21 14:18:14:974 [TID 14984][DEBUG ] [INDEX ] Renewing Index
2022-04-21 14:18:14:974 [TID 14984][INFO ] [INDEX ] Remove 0 Files
2022-04-21 14:18:14:974 [TID 14984][INFO ] [INDEX ] Check 0 xxx Files
2022-04-21 14:18:14:974 [TID 14984][INFO ] [INDEX ] Check 0 yyy Files
2022-04-21 14:18:14:977 [TID 14984][INFO ] [INDEX ] Contains 0 Files
2022-04-21 14:18:14:977 [TID 14984][INFO ] [INDEX ] Contains 0 Entries
2022-04-21 14:18:14:977 [TID 14984][DEBUG ] [INDEX ] Updater.Status - stLoaded
2022-04-21 14:18:14:979 [TID 22208][DEBUG ] [INDEX ] --> Updater.WaitForCompletion
2022-04-21 14:18:14:979 [TID 22208][DEBUG ] [INDEX ] <-- Updater.WaitForCompletion
```
... Normal work continues here
Note that the updater was finished (stLoaded) before WaitForCompletion was called.
This is the LoggerPro output matching the bug report. The EMonitorLockException occurred on the Log.Exit call in TUpdater.WaitForCompletion, so the corresponding `<-- Updater.WaitForCompletion` line is missing from the log:
```
2022-04-21 14:42:03:003 [TID 20620][INFO ] [INDEX ] Updater.Update in: C:\Users\Public\Documents\#####
2022-04-21 14:42:03:003 [TID 20620][DEBUG ] [INDEX ] --> Updater.StopUpdate
2022-04-21 14:42:03:003 [TID 20620][DEBUG ] [INDEX ] <-- Updater.StopUpdate
2022-04-21 14:42:03:004 [TID 20620][INFO ] [INDEX ] Contains 0 Files
2022-04-21 14:42:03:004 [TID 20620][INFO ] [INDEX ] Contains 0 Entries
2022-04-21 14:42:03:004 [TID 3312][DEBUG ] [INDEX ] Renewing Index
2022-04-21 14:42:03:004 [TID 3312][INFO ] [INDEX ] Remove 0 Files
2022-04-21 14:42:03:004 [TID 3312][INFO ] [INDEX ] Check 0 xxx Files
2022-04-21 14:42:03:004 [TID 3312][INFO ] [INDEX ] Check 0 yyy Files
2022-04-21 14:42:03:008 [TID 20620][DEBUG ] [INDEX ] --> Updater.WaitForCompletion
2022-04-21 14:42:03:008 [TID 3312][INFO ] [INDEX ] Contains 0 Files
2022-04-21 14:42:03:008 [TID 3312][INFO ] [INDEX ] Contains 0 Entries
2022-04-21 14:42:03:008 [TID 3312][DEBUG ] [INDEX ] Updater.Status - stLoaded
```
... Normal work continues here
Note that the updater was still working (on nothing ;-)) when WaitForCompletion was called, but it all probably happened within the same millisecond. So I assume that thread 3312 still held the lock when thread 20620 called Log.Exit and raised the exception.
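What the exception message itself tells us: TMonitor raises EMonitorLockException ("Object lock not owned") when a thread calls TMonitor.Exit on an object whose lock it never entered. A minimal console program reproduces exactly that message, which is how I read the EurekaLog report:

```pascal
program MonitorDemo;
{$APPTYPE CONSOLE}

uses
  System.SysUtils; // declares EMonitorLockException; TMonitor lives in System

var
  Lock: TObject;
begin
  Lock := TObject.Create;
  try
    try
      // Calling TMonitor.Exit without a matching TMonitor.Enter from
      // this thread raises EMonitorLockException: 'Object lock not owned'.
      TMonitor.Exit(Lock);
    except
      on E: EMonitorLockException do
        Writeln(E.ClassName, ': ', E.Message);
    end;
  finally
    Lock.Free;
  end;
end.
```

So if the crash report is accurate, some TMonitor.Exit in the logging path must have run on a thread that did not own the lock. (Whether that is really what happened is exactly what is in question below.)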
Is there something we are doing wrong or have we come across a race condition in LoggerPro?
Additional information: I completely missed that LoggerPro uses a critical section, while the EurekaLog call stack shows an EMonitorLockException.
I also asked in the English Delphi-PRAXiS forum, and Stefan Glienke pointed this out here: https://en.delphipraxis.net/topic/7006-emonitorlockexception-help-needed/

> The EurekaLog call stack makes zero sense compared to the code on GitHub. Either you are using some outdated code or modified something yourself. There is nothing monitor-related in TThreadSafeQueue.Enqueue. It is using a critical section.
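For context, a critical-section-guarded Enqueue along the lines Stefan describes would look roughly like this (a simplified sketch under my own names, not LoggerPro's actual TThreadSafeQueue code):

```pascal
uses
  System.SyncObjs,               // TCriticalSection
  System.Generics.Collections;   // TQueue<T>

type
  // Hypothetical illustration of a critical-section-guarded queue.
  TSimpleQueue<T> = class
  private
    FLock: TCriticalSection;
    FItems: TQueue<T>;
  public
    constructor Create;
    destructor Destroy; override;
    procedure Enqueue(const aItem: T);
  end;

constructor TSimpleQueue<T>.Create;
begin
  inherited Create;
  FLock := TCriticalSection.Create;
  FItems := TQueue<T>.Create;
end;

destructor TSimpleQueue<T>.Destroy;
begin
  FItems.Free;
  FLock.Free;
  inherited;
end;

procedure TSimpleQueue<T>.Enqueue(const aItem: T);
begin
  // TCriticalSection, unlike TMonitor, performs no ownership check
  // that could raise EMonitorLockException, which is why the reported
  // exception does not fit this code path.
  FLock.Acquire;
  try
    FItems.Enqueue(aItem);
  finally
    FLock.Release;
  end;
end;
```

That mismatch is what makes the EurekaLog stack suspect: an EMonitorLockException simply cannot originate from an Acquire/Release pair like the one above.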
So it may be a totally different problem and the EurekaLog stack may simply be wrong; we are still running an old (6.x) version of EurekaLog.
So I think we should put this on hold.