An experimental, training-free method for using a reference video to drive motion in generations in ComfyUI.
Important
This is currently a work in progress (WIP), and is in early release for testing.
More details will be added soon, but if you want to use the early release:
- Install to your `custom_nodes` directory via `git clone https://github.com/ExponentialML/ComfyUI_LiveDirector.git`.
- Only AnimateDiff Lightning models are supported (you must use a CFG of 1).
- As stated above, only AnimateDiff is supported. This repository was tested with https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved.
- Use IPAdapter to control the spatial part of the generation.
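The install step above can be sketched as follows (the ComfyUI location `~/ComfyUI` is an assumption; adjust the path for your setup):

```shell
# Assumes ComfyUI is installed at ~/ComfyUI; change this to your install path.
cd ~/ComfyUI/custom_nodes

# Clone this repository into custom_nodes.
git clone https://github.com/ExponentialML/ComfyUI_LiveDirector.git

# The tested AnimateDiff implementation (see note above); clone it too if you
# don't already have it installed.
git clone https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved.git
```

After cloning, restart ComfyUI so the new nodes are picked up.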