tin2tin/Pallaidium

16GB VRAM Usage

markhill343 opened this issue · 2 comments

Hi!

First, I wanted to thank you for the project!

I have 16 GB of VRAM available, but while running Image2Video the VRAM usage never exceeds ~8 GB.

Settings I used:
[screenshot: generation settings]

nvidia-smi:
[screenshot: nvidia-smi output]

VRAM usage:
[screenshot: VRAM usage over time]

Code extract:
[screenshot: the low-VRAM calculation]
The result of the low-VRAM calculation is 15.99, which is of course smaller than 16. Is this expected behaviour?
Can this be altered, or would it result in an overflow of VRAM and a crash?
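
For context, a minimal sketch of what such a check could look like (an illustration only, not the add-on's actual code; the function and variable names here are hypothetical):

```python
import torch

def detect_low_vram(threshold_gb: float = 16) -> bool:
    # Total VRAM as reported by CUDA, converted to GB. A "16 GB" card
    # typically reports slightly less than 16 * 1024**3 bytes (~15.99 GB
    # in this issue), so the comparison evaluates to True and the
    # low-VRAM path is taken even on a 16 GB GPU.
    total_gb = torch.cuda.get_device_properties(0).total_memory / (1024**3)
    return total_gb < threshold_gb
```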

Hi @markhill343

You could lower the limit value on that line (save the script, then F3 > Reload Scripts) and see how long it takes. (If you're not familiar with development in Blender, here are some hints for fast tweaking of a script: #105 (comment))

These are the optimize functions, starting at the `if gfx_device == "mps":` branch.
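For readers without the source open, the pattern is roughly as follows (a hedged sketch, not the add-on's exact code; the function name and the specific offload call chosen per branch are assumptions, though all three calls are standard diffusers pipeline methods):

```python
from diffusers import DiffusionPipeline

def apply_memory_optimizations(pipe: DiffusionPipeline,
                               gfx_device: str,
                               low_vram: bool) -> None:
    # Pick an offloading strategy based on the device and available VRAM.
    if gfx_device == "mps":
        # Apple Silicon uses unified memory; slicing lowers peak usage.
        pipe.enable_attention_slicing()
    elif low_vram:
        # Streams weights module-by-module: lowest VRAM, slowest.
        pipe.enable_sequential_cpu_offload()
    else:
        # Moves whole sub-models on/off the GPU: needs more VRAM, faster.
        pipe.enable_model_cpu_offload()
```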

The challenging part is staying close to the VRAM limit without overshooting it, because offloading into shared system memory will slow things down. AFAIR, the "high" VRAM code path demands around 18 GB of VRAM, and going directly to CUDA (no offloading) takes around 35-36 GB. For me, with 24 GB of VRAM, it's faster to render while staying below 18 GB than to go for 35 GB.
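
If you want to verify whether a run stayed within VRAM or spilled over, a small helper like this (not part of the add-on) can be called around the generation step:

```python
import torch

def report_vram(tag: str = "") -> None:
    # torch.cuda.mem_get_info() returns (free, total) in bytes for the
    # current device. If "used" creeps up to the card's total, further
    # allocations spill into shared system memory on Windows (WDDM) and
    # iteration times degrade sharply.
    free, total = torch.cuda.mem_get_info()
    used_gb = (total - free) / 1024**3
    print(f"{tag} VRAM used: {used_gb:.2f} / {total / 1024**3:.2f} GB")
```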

Let me know how it goes. Are you on Windows?

Yes, I am on Windows.

You are absolutely right!

"low_vram" settings with about 8gb: ~15s/it
Then i just changed the "low_vram" limit from a 16 into a 15 in the init.py and did 2 runs.

The first time it seemed to not overspil into the ram: ~24s/it
The second time it did resulting in a really bad time: ~67s/it
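
For anyone reproducing this, the edit amounts to changing the threshold constant in the low-VRAM check, along these lines (hypothetical line, mirroring the sketch earlier in the thread):

```python
total_vram_gb = 15.99  # value reported for the 16 GB card in this thread

low_vram = total_vram_gb < 16  # before: True  -> low-VRAM path, ~8 GB used
low_vram = total_vram_gb < 15  # after:  False -> high-VRAM path
```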