HeliosZhao/Make-A-Protagonist

Motion vectors

rakesh-reddy95 opened this issue · 4 comments

Hi @HeliosZhao,

How can I use motion vectors (extracted with FFmpeg) as guidance during inference, the way the segmentation map is used in the current pipeline? And would this affect video generation positively?

Hi,
If you plan to use motion vectors in the same way as the segmentation map, I recommend training a ControlNet model on motion vectors; you can then integrate it directly into the inference stage.
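As background on why a ControlNet can be "directly integrated" without disturbing the base model, here is a minimal NumPy sketch (not code from this repo) of ControlNet's core mechanism: the condition, here standing in for an encoded motion-vector map, is merged into the backbone features through a zero-initialized convolution, so at the start of training the pretrained model's behavior is unchanged.

```python
import numpy as np

def zero_conv(x, weight, bias):
    """1x1 convolution over a (C, H, W) feature map."""
    # einsum maps input channels to output channels at every pixel.
    return np.einsum("oc,chw->ohw", weight, x) + bias[:, None, None]

rng = np.random.default_rng(0)
backbone_features = rng.standard_normal((8, 16, 16))  # base UNet features
motion_condition = rng.standard_normal((8, 16, 16))   # encoded motion-vector map (hypothetical)

# ControlNet-style zero-initialized 1x1 conv: contributes nothing at step 0.
weight = np.zeros((8, 8))
bias = np.zeros(8)

control_residual = zero_conv(motion_condition, weight, bias)
combined = backbone_features + control_residual

# Before any training, the base model's output is untouched;
# gradients through the zero conv let the condition take effect gradually.
assert np.allclose(combined, backbone_features)
```

This zero-init trick is why a ControlNet trained on a new condition (segmentation, depth, or motion vectors) can be dropped into an existing inference pipeline as an additive residual.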

But training a ControlNet with motion vectors needs a video dataset along with a text prompt for each frame. It can't be trained on COCO, only on a video dataset, right?

Yes, it cannot be trained on COCO. Videos, or at least two consecutive frames, are required to compute motion vectors.

You may refer to VideoComposer for how to extract motion vectors and train with such conditions.
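Codec motion vectors (the kind FFmpeg can export) come from a block-matching search between consecutive frames. As a self-contained illustration of that idea, and not this repo's or VideoComposer's actual code, the pure-NumPy sketch below estimates one motion vector per block by exhaustive search over a small window:

```python
import numpy as np

def block_motion_vectors(prev, curr, block=8, search=4):
    """Exhaustive block matching: for each block of `curr`, find the
    (dy, dx) offset into `prev` minimizing the sum of absolute differences.
    The vector points from the current block back to its source in `prev`."""
    h, w = curr.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            target = curr[y:y + block, x:x + block]
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy and yy + block <= h and 0 <= xx and xx + block <= w:
                        sad = np.abs(prev[yy:yy + block, xx:xx + block] - target).sum()
                        if sad < best:
                            best, best_mv = sad, (dy, dx)
            mvs[by, bx] = best_mv
    return mvs

# Two synthetic consecutive frames: a bright patch shifted 2 px to the right.
prev = np.zeros((32, 32)); prev[8:16, 8:16] = 1.0
curr = np.zeros((32, 32)); curr[8:16, 10:18] = 1.0

mvs = block_motion_vectors(prev, curr)
# The block containing the patch matches prev at a horizontal offset of -2.
assert tuple(mvs[1, 1]) == (0, -2)
```

In practice you would not recompute these yourself: FFmpeg can export the codec's own motion vectors (e.g. via its `export_mvs` flag), which is the cheaper route mentioned in the question above.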

Thank you.