Multi-GPU training RuntimeError
JxuHenry opened this issue · 0 comments
return forward_call(*input, **kwargs)
File "/root/miniconda3/envs/ac/lib/python3.8/site-packages/torchaudio/transforms/transforms.py", line 106, i$
forward
return F.spectrogram(
File "/root/miniconda3/envs/ac/lib/python3.8/site-packages/torchaudio/functional/functional.py", line 112, in
spectrogram
spec_f = torch.stft(
File "/root/miniconda3/envs/ac/lib/python3.8/site-packages/torch/functional.py", line 606, in stft
return _VF.stft(input, n_fft, hop_length, win_length, window,  # type: ignore[attr-defined]
RuntimeError: stft input and window must be on the same device but got self on cuda:1 and window on cuda:0
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 1237 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 1238) of binary: /root/miniconda3/envs/ac/bin/python
Traceback (most recent call last):
File "/root/miniconda3/envs/ac/bin/torchrun", line 33, in
sys.exit(load_entry_point('torch==1.12.1', 'console_scripts', 'torchrun')())
File "/root/miniconda3/envs/ac/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/
init__.py", line 345, in wrapper
return f(*args, **kwargs)
File "/root/miniconda3/envs/ac/lib/python3.8/site-packages/torch/distributed/run.py", line 761, in main
run(args)
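This failure usually means the torchaudio Spectrogram (or MelSpectrogram) transform ended up on cuda:0, while the DDP worker with local rank 1 feeds it a batch living on cuda:1. The transform's window is registered as a module buffer, so it stays wherever the module was moved. Below is a minimal sketch of the usual fix, assuming the script is launched with torchrun (which exports LOCAL_RANK) and that each process creates the transform itself; the transform and tensor names are illustrative, not taken from the issue's code:

```python
import os

import torch
import torchaudio

# torchrun sets LOCAL_RANK for each worker process (assumption: the
# script was launched with torchrun, as the traceback suggests).
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
device = torch.device(f"cuda:{local_rank}")

# Spectrogram registers its window as a buffer, so .to(device) moves it
# together with the module; creating the transform once on cuda:0 and
# reusing it on other ranks triggers exactly this device-mismatch error.
spectrogram = torchaudio.transforms.Spectrogram(n_fft=1024).to(device)

waveform = torch.randn(1, 16000, device=device)  # dummy batch on this rank's GPU
spec = spectrogram(waveform)  # input and window now share the same device
```

If the transform is a submodule of the training model, calling model.to(device) before wrapping it in DistributedDataParallel(model, device_ids=[local_rank]) moves the window buffer along with the parameters, which avoids the mismatch without touching the transform directly.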