SHI-Labs/Versatile-Diffusion

can this already be run?

demirklvc opened this issue · 3 comments

Hello! Can this already be run? What would the code for image variation look like?

I have a 3090 with 24 GB of VRAM.

Yes, this repo is runnable. We released the evaluation code and shell commands for running on a single GPU or multiple GPUs. Refer to the Evaluation section for more information.
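
If you only want a quick image-variation test on a single 24 GB card, the released weights are also available on the Hugging Face Hub as shi-labs/versatile-diffusion, so a minimal sketch using the diffusers VersatileDiffusionImageVariationPipeline (not this repo's own evaluation scripts; the input/output paths are placeholders) could look roughly like this:

```python
import torch
from PIL import Image
from diffusers import VersatileDiffusionImageVariationPipeline

# Load the Versatile Diffusion checkpoint from the Hugging Face Hub.
pipe = VersatileDiffusionImageVariationPipeline.from_pretrained(
    "shi-labs/versatile-diffusion",
    torch_dtype=torch.float16,  # half precision to fit more comfortably in 24 GB
)
pipe = pipe.to("cuda")

# Generate one variation of a local image (path is a placeholder).
image = Image.open("input.jpg").convert("RGB")
result = pipe(image=image).images[0]
result.save("variation.png")
```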

Environment: 3090 Ti
RuntimeError: CUDA out of memory. Tried to allocate 26.00 MiB (GPU 0; 23.99 GiB total capacity; 22.72 GiB already allocated; 0 bytes free; 23.20 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
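
For the OOM above: the full model in fp32 plus activations can exceed 24 GB. A hedged workaround, assuming the diffusers pipeline from the sketch above rather than the repo's evaluation scripts, is to load the weights in fp16 and enable attention slicing; the allocator setting suggested by the error message can also be applied before torch initializes CUDA (the 128 MB value below is an assumption, tune as needed):

```python
import os

# Must be set before the first CUDA allocation; split size is an assumption.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch
from diffusers import VersatileDiffusionImageVariationPipeline

pipe = VersatileDiffusionImageVariationPipeline.from_pretrained(
    "shi-labs/versatile-diffusion",
    torch_dtype=torch.float16,  # fp16 roughly halves weight memory vs. fp32
)
pipe.enable_attention_slicing()  # compute attention in slices to lower peak VRAM
pipe = pipe.to("cuda")
```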