innnky/so-vits-svc

torch.cuda.OutOfMemoryError: CUDA out of memory. How do I fix this?

Opened this issue · 2 comments

I use Miniconda, and when I run python train.py -c configs/config.json -m 32k I get this error:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 10.00 MiB 
(GPU 0; 2.00 GiB total capacity; 1.72 GiB already allocated; 0 bytes free; 1.74 GiB reserved in total by PyTorch) 
If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
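The message itself suggests trying max_split_size_mb. A minimal sketch of how that allocator option is applied, via the PYTORCH_CUDA_ALLOC_CONF environment variable (the 128 MiB value is only an example, not a tuned recommendation):

```python
import os

# Must be set before the first `import torch`, otherwise the caching
# allocator has already been configured and the variable is ignored.
# max_split_size_mb caps the block size the allocator will split, which
# can reduce fragmentation when reserved memory >> allocated memory.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # import torch only after the variable is set
```

Note this only helps with fragmentation; with 1.72 GiB already allocated out of 2 GiB total, it may not be enough on its own.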

I set batch_size to 1 and num_workers to 3, but that didn't fix it.
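For reference, the batch size lives in configs/config.json; the edited section might look like this (key names assumed from the VITS-style config this repo uses, with all other keys omitted):

```json
{
  "train": {
    "batch_size": 1
  }
}
```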

I think your VRAM is just too small; try running it on Google Colab.

Everything works fine on Google Colab, except for !python train.py -c configs/config.json -m 32k:

Traceback (most recent call last):
  File "train.py", line 288, in <module>
    main()
  File "train.py", line 41, in main
    assert torch.cuda.is_available(), "CPU training is not allowed."
AssertionError: CPU training is not allowed.
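This assertion means train.py sees no CUDA device, which on Colab usually means the runtime has no GPU attached (Runtime > Change runtime type > select a GPU accelerator, then reconnect). A quick way to check what train.py will see before launching it; the import guard is my addition so the snippet also runs where torch isn't installed, not something from the repo:

```python
import importlib.util

def cuda_available() -> bool:
    # Same condition train.py asserts on, but returning False instead of
    # crashing; the find_spec guard only exists so this runs without torch.
    if importlib.util.find_spec("torch") is None:
        return False
    import torch
    return torch.cuda.is_available()

print("CUDA available:", cuda_available())
```

If this prints False on Colab, the fix is the runtime setting above, not the training command.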