Training hangs after one of the master/worker pods restarts
dmitsf opened this issue · 5 comments
dmitsf commented
Hello!
I'm setting up training with PyTorchJobs and have run into a problem: if one of the pods (master or worker, it doesn't matter which) restarts, the whole training process hangs. The restart can happen for different reasons; usually it's Google Cloud Engine rescheduling the node, but I also tried killing pods myself and the behavior was the same.
Is there a way to avoid this and make the training tolerant to pod restarts?
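A minimal PyTorchJob manifest of the kind I mean, just to illustrate the setup (the job name, image, and entrypoint below are placeholders, not my actual files):

```yaml
apiVersion: kubeflow.org/v1
kind: PyTorchJob
metadata:
  name: example-train           # placeholder name
spec:
  pytorchReplicaSpecs:
    Master:
      replicas: 1
      restartPolicy: OnFailure  # the pod is recreated after node rescheduling
      template:
        spec:
          containers:
            - name: pytorch     # the operator expects this container name
              image: registry.example.com/train:latest  # placeholder image
              command: ["python", "train.py"]           # placeholder script
    Worker:
      replicas: 2
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch
              image: registry.example.com/train:latest
              command: ["python", "train.py"]
```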
gaocegege commented
Can you tell us the PyTorch version?
dmitsf commented
I use PyTorch 1.9.0.
gaocegege commented
Are you using torch.distributed.run?
dmitsf commented
I don't use it at the moment.
I followed the mnist example to adapt my training script.
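For context, the pattern I took from it is the plain env:// initialization; roughly this (the model, data, and step count below are placeholders, not my real script):

```python
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def main() -> None:
    # The PyTorchJob operator injects MASTER_ADDR, MASTER_PORT, RANK and
    # WORLD_SIZE into every pod, so env:// initialization picks them up.
    dist.init_process_group(backend="gloo")

    model = DDP(nn.Linear(10, 1))  # placeholder model
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(100):  # placeholder training loop
        x, y = torch.randn(32, 10), torch.randn(32, 1)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()  # gradients are all-reduced here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

As far as I understand, with a static setup like this a restarted pod cannot cleanly rejoin the process group, and the surviving ranks stay blocked in the pending collective, which seems to match the hang I'm seeing.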
gaocegege commented
Can you please show us the script and the YAML file? PyTorch 1.9 introduced elastic training, and that may be related to the hang.
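For reference, the elastic entry point in PyTorch 1.9 is torch.distributed.run; a rough sketch of how the container command could look if the script were launched through it (the rendezvous endpoint, flag values, and job name are placeholders, not a tested configuration):

```yaml
# Fragment of the Master/Worker container spec; everything here is illustrative.
command:
  - python
  - -m
  - torch.distributed.run
  - --nnodes=3                                    # e.g. 1 master + 2 workers
  - --nproc_per_node=1
  - --rdzv_backend=c10d
  - --rdzv_endpoint=example-train-master-0:29400  # placeholder service name
  - --rdzv_id=example-train
  - --max_restarts=3                              # allowed worker-group restarts
  - train.py
```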