TheLastBen/PPS

RuntimeError: Unable to find a valid cuDNN algorithm to run convolution

Opened this issue · 5 comments

ww-9 commented

I'm running the SDXL-LoRA-PPS.ipynb notebook on Paperspace with an RTX 5000 and get the following error at the Train LoRA step:
RuntimeError: Unable to find a valid cuDNN algorithm to run convolution

try a different GPU
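(Editor's note: this cuDNN error often means cuDNN could not find a convolution algorithm that fits in the available GPU memory, so it is frequently an out-of-memory problem in disguise. The snippet below is a general diagnostic sketch, not part of the notebook; the mitigations listed are common PyTorch-level workarounds, not settings confirmed for this repo.)

```python
# Quick diagnostics for "Unable to find a valid cuDNN algorithm" errors.
# This error frequently surfaces when cuDNN runs out of workspace memory,
# so the usual mitigations are memory-related.
import torch

print(torch.__version__)                 # PyTorch build
print(torch.version.cuda)                # CUDA toolkit PyTorch was built with
print(torch.backends.cudnn.enabled)      # is the cuDNN backend on at all?

# Common mitigations (general advice, not notebook-specific):
torch.backends.cudnn.benchmark = False   # skip algorithm autotuning,
                                         # which probes memory-hungry kernels
# Also try: smaller batch size or image resolution, which shrinks the
# convolution workspaces cuDNN has to allocate.
```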

ww-9 commented

Thanks for the quick response! It worked with a P6000 and 24 GiB of VRAM. The Save_VRAM option is enabled, and its comment says 10 GB of VRAM should be enough for LoRA_Dim = 64, yet on a 16 GB P5000 I also get "OutOfMemoryError: CUDA out of memory". Are there any other settings I can tweak to fit into 16 GB of VRAM?

Enabling Save_VRAM should fit it in 10 GB; I'll check it out
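(Editor's note: the thread doesn't say what Save_VRAM does internally, but the standard levers for fitting LoRA training into less VRAM are gradient checkpointing, smaller micro-batches with gradient accumulation, and mixed precision. The sketch below illustrates the first two in plain PyTorch on a toy model; the module and hyperparameters are hypothetical, not from the notebook.)

```python
# Hypothetical sketch of two common VRAM-reduction levers:
# gradient checkpointing and gradient accumulation.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Block(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, x):
        # Gradient checkpointing: recompute activations during backward
        # instead of storing them, trading compute time for memory.
        return checkpoint(self.net, x, use_reentrant=False)

model = Block()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
accum_steps = 4  # accumulate 4 micro-batches -> same effective batch size

for step in range(accum_steps):
    x = torch.randn(2, 64)           # micro-batch of 2 instead of 8
    loss = model(x).pow(2).mean()
    (loss / accum_steps).backward()  # gradients accumulate across steps
opt.step()
opt.zero_grad()
```

Lowering LoRA_Dim below 64 also shrinks the trainable parameters and optimizer state, at some cost to adapter capacity.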

I am running into the same issue. Any help much appreciated.

It should be working now