
StableVideo

StableVideo: Text-driven Consistency-aware Diffusion Video Editing
Wenhao Chai, Xun Guo✉️, Gaoang Wang, Yan Lu
ICCV 2023

Demo videos: boat.mp4, car.mp4, blackswan.mp4

VRAM requirement

Setting                VRAM (MiB)
float32                29145
amp                    23005
amp + cpu              17639
amp + cpu + xformers   14185
  • amp: automatic mixed precision
  • cpu: use the CPU cache (enabled via the save_memory argument)

Measured under the default settings (e.g. resolution) in app.py.
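
For reference, the amp and cpu options roughly correspond to the following PyTorch mechanisms. This is an illustrative sketch only, not code from this repository; the Linear model is a stand-in for the actual diffusion model.

import torch

model = torch.nn.Linear(64, 64).cuda()   # stand-in for the real diffusion model
x = torch.randn(1, 64, device="cuda")

# "amp": run the forward pass under automatic mixed precision to reduce VRAM.
with torch.autocast(device_type="cuda", dtype=torch.float16):
    y = model(x)

# "cpu" (the save_memory argument): keep idle weights in CPU memory and move
# them onto the GPU only while they are needed, trading speed for VRAM.
model.to("cpu")
model.to("cuda")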

Installation

git clone https://github.com/rese1f/StableVideo.git
cd StableVideo
conda create -n stablevideo python=3.11
conda activate stablevideo
pip install -r requirements.txt
(optional) pip install xformers
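
After installing, a quick check like the following (a hypothetical snippet, not part of the repo) confirms that PyTorch sees the GPU and whether the optional xformers package is available.

import torch

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
try:
    import xformers                      # optional dependency
    print("xformers:", xformers.__version__)
except ImportError:
    print("xformers not installed (optional)")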

(optional) We also provide a CPU-only Hugging Face demo.

git lfs install
git clone https://huggingface.co/spaces/Reself/StableVideo
pip install -r requirements.txt

Download Pretrained Model

All models and detectors can be downloaded from the ControlNet Hugging Face page at Download Link.
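
If you prefer downloading from Python, a sketch along these lines should work with huggingface_hub. The repo id and file paths below are assumptions based on the public ControlNet Hugging Face page, so adjust them to whatever the Download Link actually points to.

from huggingface_hub import hf_hub_download

# Repo id and file names are assumptions -- double-check them against the page.
for filename in [
    "models/control_sd15_canny.pth",
    "models/control_sd15_depth.pth",
    "annotator/ckpts/dpt_hybrid-midas-501f0c75.pt",
]:
    hf_hub_download(repo_id="lllyasviel/ControlNet", filename=filename, local_dir="ckpt")
# Files keep their hub subfolders under ckpt/, so move them up to match the layout below.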

Download example videos

Download the example atlases for car-turn, boat, libby, blackswan, bear, bicycle_tali, giraffe, kite-surf, lucia, and motorbike at the Download Link shared by the Text2LIVE authors.

You can also train atlases for your own video following NLA (Neural Layered Atlases).

This will create a data folder, giving the following directory structure:

StableVideo
├── ...
├── ckpt
│   ├── cldm_v15.yaml
│   ├── dpt_hybrid-midas-501f0c75.pt
│   ├── control_sd15_canny.pth
│   └── control_sd15_depth.pth
├── data
│   ├── car-turn
│   │   ├── checkpoint   # NLA models are stored here
│   │   ├── car-turn     # contains the video frames
│   │   └── ...
│   ├── blackswan
│   ├── ...
└── ...
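
Before launching, you can optionally verify this layout with a small script like the following (hypothetical, not part of the repo); the paths follow the tree above.

from pathlib import Path

expected = [
    "ckpt/cldm_v15.yaml",
    "ckpt/dpt_hybrid-midas-501f0c75.pt",
    "ckpt/control_sd15_canny.pth",
    "ckpt/control_sd15_depth.pth",
    "data/car-turn/checkpoint",
    "data/car-turn/car-turn",
]
for p in expected:
    print(("OK      " if Path(p).exists() else "MISSING ") + p)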

Run and Play!

Run the following command to start.

python app.py

The resulting .mp4 video and keyframes will be stored in ./log after you click the render button.
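
To quickly inspect what was written, something like the following (a hypothetical helper, not part of the repo) lists the rendered files under ./log.

from pathlib import Path

for f in sorted(Path("log").rglob("*")):
    if f.suffix in {".mp4", ".png", ".jpg"}:
        print(f)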

You can also edit the mask region of the foreground atlas. Note that there is currently a possible bug in Gradio: carefully check that the editable output foreground atlas block looks the same as the one above. If it does not, restart the program.

Citation

If you find our work useful for your research, please consider citing it as below. Many thanks :)

@article{chai2023stablevideo,
  title={StableVideo: Text-driven Consistency-aware Diffusion Video Editing},
  author={Chai, Wenhao and Guo, Xun and Wang, Gaoang and Lu, Yan},
  journal={arXiv preprint arXiv:2308.09592},
  year={2023}
}

Acknowledgement

This implementation is built partly on Text2LIVE and ControlNet.