Any chance to decrease GPU memory allocation?
Closed this issue · 1 comment
iceiilin commented
Hi, I ran this model on a GPU with 24 GB of memory and got this error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 320.00 MiB. GPU 0 has a total capacty of 23.69 GiB of which 148.81 MiB is free. Including non-PyTorch memory, this process has 23.48 GiB memory in use. Of the allocated memory 21.73 GiB is allocated by PyTorch, and 731.58 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.
Is there any chance to decrease GPU memory allocation? Could you please share your thoughts? Thanks.
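For reference, a minimal sketch of two common mitigations for this kind of OOM, assuming a standard PyTorch setup: setting `max_split_size_mb` via the `PYTORCH_CUDA_ALLOC_CONF` environment variable (as the error message itself suggests) to reduce allocator fragmentation, and loading the model in half precision under `torch.inference_mode()` to roughly halve weight and activation memory. `build_model` and the input shape below are placeholders, not this repo's actual API; the `128` split size is a tunable guess:

```python
import os

# Must be set before the first CUDA allocation; caps the size of
# allocator blocks that can be split, which reduces fragmentation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

# Hypothetical loader -- swap in this repo's real model entry point.
from my_model import build_model

model = build_model().half().cuda()  # fp16 weights: ~half the memory of fp32
model.eval()

with torch.inference_mode():  # no autograd graph, so activations aren't kept
    x = torch.randn(1, 3, 224, 224, device="cuda", dtype=torch.float16)
    y = model(x)
```

If inference still does not fit in 24 GB, reducing the batch size or input resolution is usually the next lever; for training, gradient checkpointing trades compute for activation memory.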
iceiilin commented
Closing, as I will experiment further following the notes in the README.