DragNUWA enables users to manipulate backgrounds or objects within images directly, and the model seamlessly translates these actions into camera movements or object motions, generating the corresponding video.
See our paper: DragNUWA: Fine-grained Control in Video Generation by Integrating Text, Image, and Trajectory
DragNUWA handles challenging cases, including:
- Consistency in human facial representation
- Concurrent object motions and camera movements
- Unrealistic drags
DragNUWA 1.5 uses Stable Video Diffusion as its backbone to animate an image along a user-specified path.
Please refer to `assets/DragNUWA1.5/figure_raw` for the raw GIFs.
DragNUWA 1.0 utilizes text, images, and trajectory as three essential control factors to facilitate highly controllable video generation from semantic, spatial, and temporal aspects.
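To make the trajectory control concrete, a user drag can be thought of as a start point and an end point that get expanded into one position per generated frame. The sketch below is purely illustrative (the function name and representation are ours, not DragNUWA's actual interface):

```python
# Illustrative sketch, NOT DragNUWA's real API: expand a single drag
# (start point -> end point) into a per-frame (x, y) trajectory by
# linear interpolation, one position per video frame.
def interpolate_drag(start, end, num_frames):
    """Return one (x, y) position per frame along the straight drag path."""
    sx, sy = start
    ex, ey = end
    return [
        (sx + (ex - sx) * t / (num_frames - 1),
         sy + (ey - sy) * t / (num_frames - 1))
        for t in range(num_frames)
    ]

# A horizontal drag from x=10 to x=60, spread over 5 frames:
path = interpolate_drag((10, 20), (60, 20), 5)
# path -> [(10.0, 20.0), (22.5, 20.0), (35.0, 20.0), (47.5, 20.0), (60.0, 20.0)]
```

In practice the model conditions on such per-frame trajectories together with the text prompt and the input image.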
```shell
git clone https://github.com/ProjectNUWA/DragNUWA.git
cd DragNUWA
conda create -n DragNUWA python=3.8
conda activate DragNUWA
pip install -r environment.txt
```
Download the pretrained weights to the `models/` directory, or simply run `bash models/Download.sh`.
```shell
python DragNUWA_demo.py
```
This launches a Gradio demo where you can drag on an image and animate it!
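Conceptually, the drags you draw in the demo are converted into a sparse motion (flow-like) signal that the model densifies internally. A minimal sketch of that first step, with names and shapes of our own invention rather than the repo's actual code:

```python
# Hypothetical sketch, not taken from DragNUWA_demo.py: rasterize a drag
# path into a sparse flow field mapping each path point (x, y) to the
# displacement (dx, dy) toward the next point on the path.
def drag_to_sparse_flow(path):
    """Return {(x, y): (dx, dy)} for consecutive points along a drag path."""
    flow = {}
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        flow[(x0, y0)] = (x1 - x0, y1 - y0)
    return flow

# A two-segment drag: right by 4 pixels, then down by 3.
flow = drag_to_sparse_flow([(0, 0), (4, 0), (4, 3)])
# flow -> {(0, 0): (4, 0), (4, 0): (0, 3)}
```

The real model then propagates such sparse displacements into a dense, per-pixel motion field that conditions the video generation.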
We appreciate the open-source contributions of the following projects: Stable Video Diffusion, Hugging Face, and UniMatch.
```bibtex
@article{yin2023dragnuwa,
  title={DragNUWA: Fine-grained control in video generation by integrating text, image, and trajectory},
  author={Yin, Shengming and Wu, Chenfei and Liang, Jian and Shi, Jie and Li, Houqiang and Ming, Gong and Duan, Nan},
  journal={arXiv preprint arXiv:2308.08089},
  year={2023}
}
```