out of memory error
I used your pretrained model to evaluate noisy wavs, but I encountered an out-of-memory error. My GPU has 24 GB of memory, so I want to know: is this normal? If so, how do I fix this problem? Can this code support multiple GPUs?
It depends on how long your files are: there is currently no segmenting implemented, which means the model will process the whole file at once. If the file is e.g. 3 minutes long, it will not fit in memory.
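If you want to work around that yourself, something like the following untested sketch would process a long file chunk by chunk (`model`, the 16 kHz sample rate, and the chunk length are placeholders, not the repo's actual API):

```python
import torch

# Sketch of manual segmenting: run the model on fixed-size chunks so a
# long file never has to sit on the GPU all at once. `model`, the sample
# rate, and the chunk length are assumptions; adapt them to the real code.
def enhance_in_chunks(model, wav, sr=16000, chunk_sec=10.0):
    chunk = int(chunk_sec * sr)
    outputs = []
    with torch.no_grad():
        for start in range(0, wav.shape[-1], chunk):
            segment = wav[..., start:start + chunk]
            outputs.append(model(segment).cpu())  # move each result off the GPU right away
    return torch.cat(outputs, dim=-1)
```

Note that naive chunking like this can leave audible seams at the chunk boundaries; an overlap-add scheme over the chunks would be a natural refinement.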
Check that you are indeed in eval mode (model.eval()) such that you do not accumulate gradients during the forward pass.
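For reference, the minimal inference pattern looks like this (`model` and `noisy_wav` are placeholders for your loaded network and input tensor). Strictly speaking it is torch.no_grad(), not model.eval(), that stops autograd from storing activations, which is where most of the forward-pass memory goes:

```python
import torch

# model.eval() switches off dropout/batch-norm training behavior;
# torch.no_grad() disables autograd bookkeeping, saving activation memory.
model.eval()  # `model` is a placeholder for the loaded pretrained network
with torch.no_grad():
    enhanced = model(noisy_wav)  # `noisy_wav` is a placeholder waveform tensor
```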
The code does support multi-GPU for training, thanks to the PyTorch Lightning framework, but for inference I believe you would have to do it manually, although I am not 100% certain about that (maybe there is something in PyTorch Lightning too; you could check it out).
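If you do go the manual route, one possible approach (just a sketch, under the assumption of hypothetical `load_model`, `load_wav`, and `save_wav` helpers that stand in for the repo's actual I/O) is to shard the file list across GPUs with torch.multiprocessing:

```python
import torch
import torch.multiprocessing as mp

def worker(rank, files):
    # One process per GPU; each process handles every n-th file.
    device = torch.device(f"cuda:{rank}")
    model = load_model().to(device).eval()   # `load_model` is hypothetical
    with torch.no_grad():
        for f in files[rank::torch.cuda.device_count()]:
            wav = load_wav(f).to(device)     # `load_wav` is hypothetical
            enhanced = model(wav)
            save_wav(f, enhanced.cpu())      # `save_wav` is hypothetical

if __name__ == "__main__":
    files = [...]  # placeholder: list of noisy wav paths
    mp.spawn(worker, args=(files,), nprocs=torch.cuda.device_count())
```

Since each file is independent, this embarrassingly parallel split avoids any cross-GPU communication.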