Dragon2fly/logger_tt

Final logs from a process are lost


I use logger_tt in this context:

  • I use a logger in a worker. The worker is started via multiprocessing.Pool().
  • I work on Windows, on Python 3.9.

At some point the worker ends, and after that the Pool is shut down (I use with Pool(5) as pool: ...).
While debugging, everything works, but when not debugging, the last logs (up to 80 lines in my tests) from the worker are lost.

If I put a time.sleep(5) after the worker stops but before the Pool is shut down (i.e. inside the with block), the logs are no longer lost.

    with Pool(5) as pol:
        processing_workers = processing_pool.map_async(processing_worker, [1, 2, 3, 4, 5])
        processing_workers.get() # Wait for the workers to finish
        time.sleep(5)   # Wait for the logs to arrive from the workers. Without it some logs are lost.

Please note that putting time.sleep(5) after the with statement causes logs to be lost anyway.

    with Pool(5) as pol:
        processing_workers = processing_pool.map_async(processing_worker, [1, 2, 3, 4, 5])
        processing_workers.get() # Wait for the workers to finish

    time.sleep(5)   # Logs are lost anyway.
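(Presumably the delay matters because logs travel asynchronously from the workers to the main process. As background, and not a claim about logger_tt's actual internals: multiprocess logging schemes typically ship records through a queue to a listener in the main process, and any records still in the queue when the program exits are silently dropped. A minimal stdlib sketch of that pattern, with hypothetical names like init_worker:

    import logging
    import logging.handlers
    import multiprocessing


    def init_worker(queue):
        # Route every record from this worker process into the shared queue.
        root = logging.getLogger()
        root.addHandler(logging.handlers.QueueHandler(queue))
        root.setLevel(logging.INFO)


    def worker(i):
        logging.getLogger('worker').info('line %s', i)


    if __name__ == '__main__':
        with multiprocessing.Manager() as manager:
            queue = manager.Queue()  # picklable, so it can be passed to pool workers
            listener = logging.handlers.QueueListener(
                queue, logging.StreamHandler())
            listener.start()
            with multiprocessing.Pool(2, initializer=init_worker,
                                      initargs=(queue,)) as pool:
                pool.map(worker, range(100))
            listener.stop()  # drains the queue; exiting without it can drop trailing records

If logger_tt uses a similar transport, a sleep would merely paper over a missing or late drain step.)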

Hi @ZbigniewRA

According to your code, you created a pool as pol but never used it.
You were using processing_pool instead.

Even after replacing processing_pool with pol, I cannot reproduce your situation. All logs are recorded.
Can you give a complete, runnable minimal example?
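For reference, a self-contained version of the snippets above, with the pool variable actually used, might look like the following sketch (the body of processing_worker and the use_multiprocessing argument to setup_logging are assumptions, not taken from the report):

    import time
    from multiprocessing import Pool

    from logger_tt import setup_logging, logger


    def processing_worker(item):
        # Stand-in for the real work: emit a burst of log lines per item.
        for i in range(100):
            logger.info('item %s, line %s', item, i)
        return item


    if __name__ == '__main__':
        setup_logging(use_multiprocessing=True)  # assumed flag for multiprocess mode
        with Pool(5) as pool:
            results = pool.map_async(processing_worker, [1, 2, 3, 4, 5])
            results.get()    # wait for the workers to finish
            time.sleep(5)    # the reported workaround; remove it to try reproducing the loss

Note the __main__ guard, which is required on Windows since worker processes are spawned rather than forked.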

This is weird. I was losing logs every time I ran my program without the sleep(5) inside the with block.
Today I cannot reproduce the issue at all.
The only difference is that I rebooted my machine.
I will let you know if it ever happens again. Sorry for taking your time.