Multi-gpu training error: semaphore_tracker
Closed this issue · 3 comments
Thank you for your contribution. When I trained with 4 GPUs, I encountered the following error:
UserWarning: semaphore_tracker: There appear to be 65 leaked semaphores to clean up at shutdown
My environment is 4× RTX 2080 Ti (11 GB).
Hi,
I think this is only a warning, and it is related to your environment.
Does it affect your training procedure or performance?
If it has no other effect, you can run `export PYTHONWARNINGS="ignore:semaphore_tracker:UserWarning"` before starting the code.
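For reference, here is the suggested workaround as a runnable snippet (note the matched double quotes; the command in the original comment had an unbalanced quote):

```shell
# Suppress only the leaked-semaphore UserWarning at interpreter shutdown.
# This hides the message; it does not change multiprocessing behavior.
export PYTHONWARNINGS="ignore:semaphore_tracker:UserWarning"
```

Run this in the same shell session before launching the training script so the child processes inherit the variable.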
@djiajunustc Thank you for your prompt reply. This error really did affect my training. I tried the method you mentioned, but unfortunately it didn't work. After the error occurred, some GPUs (not all 4) were at 100% utilization, but the training progress bar did not move. This problem confuses me. Do you have any other suggestions? Thx
Setting workers=0 worked for me.
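For anyone hitting the same hang: `workers=0` here most likely refers to the dataloader worker count. A minimal sketch of that setting with PyTorch (the toy dataset is just an illustration, not this repo's actual data pipeline):

```python
from torch.utils.data import DataLoader

# Toy dataset standing in for the real training data.
dataset = list(range(8))

# num_workers=0 loads batches in the main process, so no worker
# subprocesses (and their semaphores) are created. This avoids the
# leaked-semaphore warning and the associated hang, at the cost of
# slower data loading.
loader = DataLoader(dataset, batch_size=4, num_workers=0)

batches = [batch.tolist() for batch in loader]
```

Loading in the main process is slower, so it is a workaround rather than a fix; the underlying issue is usually worker processes dying or deadlocking in multi-GPU runs.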