CUDA out of memory
Closed this issue · 3 comments
I can't run sample.py
on an A100 40GB due to CUDA out of memory.
I think that MotionClone, as a training-free method, should run properly on a consumer GPU like a 4090 with 24GB.
I managed to run it successfully on a 4090 GPU by limiting the generated video size to 256×256×16. The same size should also be set in invert.py. A sketch of the kind of call I mean is below.
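For illustration only, here is a minimal sketch using the plain diffusers AnimateDiffPipeline (not MotionClone's own sample.py / invert.py, whose config keys and argument names may differ); the point is just that the height, width, and frame count are the knobs to shrink:

```python
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Any SD1.5-family checkpoint plus an AnimateDiff motion adapter works here;
# these repo IDs are just examples, not what MotionClone ships with.
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")
pipe.enable_vae_slicing()  # small extra memory saving during decoding
pipe.to("cuda")

# Reduced generation size: 16 frames at 256x256 instead of 512x512.
output = pipe(
    prompt="a dog running on the beach",
    num_frames=16,
    height=256,
    width=256,
    num_inference_steps=25,
    guidance_scale=7.5,
)
export_to_gif(output.frames[0], "sample_256.gif")
```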
@euminds Hi, when I decrease the resolution to 256×256, I find that the model tends to collapse and generates meaningless, noise-like output video. Did you encounter the same issue?
Sorry for the late reply. We have updated the code. MotionClone is now able to 1) directly perform motion customization without cumbersome video inversion; 2) significantly reduce memory consumption.
In our experiments, for 16×512×512 text-to-video, the memory consumption is about 14 GB. For MotionClone combined with image-to-video or sketch-to-video, the memory consumption is about 22 GB. Hope this helps.
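If you want to verify the peak memory on your own GPU when reproducing these numbers, PyTorch's built-in counters are enough (a generic sketch, not part of the repo's scripts):

```python
import torch

torch.cuda.reset_peak_memory_stats()

# ... run the generation here, e.g. by calling the sample.py entry point ...

peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"Peak CUDA memory: {peak_gb:.1f} GB")
```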