Tangshitao/MVDiffusion

GPU resources required for inference

Opened this issue · 1 comment

Hello, thank you for your excellent work.

Does text-to-multi-view inference (demo.py) require 4x A6000 GPUs to complete?

I'm using a 3090 GPU, and inference fails with a CUDA out-of-memory error.

Can you try fp16? You can enable fp16 here.
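For reference, a minimal sketch of what running inference in half precision typically looks like in PyTorch. The model below is a stand-in (a single linear layer) so the snippet is self-contained; it is not MVDiffusion's actual model or the exact change to demo.py, just an illustration of casting weights to fp16 and running under autocast to reduce GPU memory:

```python
import torch
import torch.nn as nn

# Stand-in model: a single linear layer so the snippet runs on its own.
# In practice this would be the multi-view diffusion model loaded by demo.py.
model = nn.Linear(512, 512).cuda().half()  # cast weights to float16

x = torch.randn(1, 512, device="cuda", dtype=torch.float16)

with torch.no_grad():
    # autocast keeps ops in fp16 where safe, roughly halving activation memory
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        out = model(x)

print(out.dtype)  # torch.float16
```

Whether fp16 alone fits the full multi-view pipeline into a 3090's 24 GB depends on the number of views and the resolution used in demo.py.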