
RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models - Official Repo

CVPR 2024 (Highlight)

Ozgur Kara, Bariscan Kurtkaya, Hidir Yesiltepe, James M. Rehg, Pinar Yanardag


Teaser (note that the videos on GitHub are heavily compressed; the full videos are available on the project webpage).

Abstract

TL;DR: RAVE is a zero-shot, lightweight, and fast framework for text-guided video editing that supports videos of any length, using pretrained text-to-image diffusion models.

Full abstract:

Recent advancements in diffusion-based models have demonstrated significant success in generating images from text. However, video editing models have not yet reached the same level of visual quality and user control. To address this, we introduce RAVE, a zero-shot video editing method that leverages pre-trained text-to-image diffusion models without additional training. RAVE takes an input video and a text prompt to produce high-quality videos while preserving the original motion and semantic structure. It employs a novel noise shuffling strategy, leveraging spatio-temporal interactions between frames, to produce temporally consistent videos faster than existing methods. It is also efficient in terms of memory requirements, allowing it to handle longer videos. RAVE is capable of a wide range of edits, from local attribute modifications to shape transformations. In order to demonstrate the versatility of RAVE, we create a comprehensive video evaluation dataset ranging from object-focused scenes to complex human activities like dancing and typing, and dynamic scenes featuring swimming fish and boats. Our qualitative and quantitative experiments highlight the effectiveness of RAVE in diverse video editing scenarios compared to existing methods.
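
The core of the method is the noise shuffling step described above: per-frame latents are randomly permuted and tiled into grids, so the diffusion model denoises several frames jointly and propagates style across the whole video. Below is a minimal sketch of that idea in PyTorch; it is not the repository's implementation, and the function name and tensor shapes are illustrative assumptions.

import torch

def shuffle_into_grids(latents, grid_size=3, generator=None):
    # latents: (F, C, H, W) per-frame latents; F must fill whole grids.
    f, c, h, w = latents.shape
    per_grid = grid_size ** 2
    assert f % per_grid == 0, "frame count must be divisible by grid_size**2"
    # Random permutation: which frames share a grid changes every call.
    perm = torch.randperm(f, generator=generator)
    grids = latents[perm].view(f // per_grid, grid_size, grid_size, c, h, w)
    # Tile each group of grid_size**2 frames into one large latent "image".
    grids = grids.permute(0, 3, 1, 4, 2, 5).reshape(
        f // per_grid, c, h * grid_size, w * grid_size
    )
    return grids, perm  # keep perm to un-tile and un-shuffle after denoising

Drawing a fresh permutation at every denoising step is what spreads appearance information across all frames, which is where the temporal consistency comes from without any training.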


Features:

  • Zero-shot framework
  • Fast inference
  • No restriction on video length
  • Standardized dataset for evaluating text-guided video-editing methods
  • Compatible with off-the-shelf pre-trained models (e.g. from CivitAI)

Updates

  • [12/2023] Gradio demo is released; a Hugging Face Space demo will be released soon.
  • [12/2023] Paper is available on arXiv, the project webpage is live, and the code is released.

TODO

  • Share the dataset
  • Add more examples
  • Optimize preprocessing
  • Add CivitAI models to the Gradio demo
  • Prepare a Gradio-based GUI
  • Integrate MultiControlNet
  • Adapt CivitAI models

Installation and Inference

Setup Environment

Please set up our environment using the requirements.txt file:

conda create -n rave python=3.8
conda activate rave
conda install pip
pip cache purge
pip install -r requirements.txt

Then install PyTorch and xFormers to complete the Conda environment setup:

pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
pip install xformers==0.0.20


Our code was tested on Linux with the following versions:

timm==0.6.7 torch==2.0.1+cu118 xformers==0.0.20 diffusers==0.18.2 torch.version.cuda==11.8 python==3.8.0
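
As an optional sanity check (not part of the repo), the following snippet verifies that the pinned versions are picked up and that CUDA is visible:

# Optional environment sanity check (not part of the repo).
import torch
import xformers
import diffusers

print("torch:", torch.__version__)         # expect 2.0.1+cu118
print("cuda:", torch.version.cuda, "available:", torch.cuda.is_available())
print("xformers:", xformers.__version__)   # expect 0.0.20
print("diffusers:", diffusers.__version__) # expect 0.18.2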

WebUI Demo

To run our Gradio-based web demo, run the following command:

python webui.py

Then, specify your configurations and perform editing.

Inference

To run RAVE, please follow these steps:

1- Put the video you want to edit under data/mp4_videos as an MP4 file. Note that we suggest using videos with a resolution of 512x512 or 512x320.

2- Prepare a config file under the configs directory and set the video_name parameter to the name of the MP4 file. Detailed descriptions of the parameters and example configurations can be found there.

3- Run the following command:

python scripts/run_experiment.py [PATH OF CONFIG FILE]

4- The results are generated under the results directory. The latents and controls are also saved under the generated directory to speed up editing the same video with different prompts (see the sketch below). Note that the names of the available preprocessors can be found in utils/constants.py.
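
Because the cached latents and controls are reused, trying several prompts on the same video only pays the preprocessing cost once. A small convenience sketch for driving the documented CLI over several config files follows; only the run_experiment.py invocation comes from this README, and the config paths are hypothetical placeholders.

# Convenience sketch: run the documented CLI for several configs.
import subprocess

configs = [
    "configs/my_video_prompt_a.yaml",  # hypothetical path
    "configs/my_video_prompt_b.yaml",  # hypothetical path
]
for cfg in configs:
    subprocess.run(["python", "scripts/run_experiment.py", cfg], check=True)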

Use Customized Models from CIVIT AI

Our code allows running any customized model from CivitAI. To use these models, please follow these steps:

1- Determine which model you want to use from CivitAI and obtain its version ID (e.g. the version ID for Realistic Vision V5.1 is 130072; you can find it in the model's URL as the modelVersionId parameter, e.g. https://civitai.com/models/4201?modelVersionId=130072).

2- In the current directory, run the following command. It downloads the model in safetensors format and converts it to the '.bin' format that is compatible with diffusers.

bash CIVIT_AI/civit_ai.sh 130072

3- Copy the path of the converted model, $CWD/CIVIT_AI/diffusers_models/[CUSTOMIZED MODEL] (e.g. CIVIT_AI/diffusers_models/realisticVisionV60B1_v51VAE for 130072), and use that path in the config file.
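
To verify the conversion outside of RAVE, the converted folder can be loaded with the standard diffusers API. A minimal sketch, using the example path from step 3 (this is plain diffusers usage, not RAVE-specific code):

# Minimal sketch: load the converted CivitAI checkpoint with diffusers
# to verify the conversion. Assumes a CUDA GPU is available.
import torch
from diffusers import StableDiffusionPipeline

model_path = "CIVIT_AI/diffusers_models/realisticVisionV60B1_v51VAE"
pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("sanity_check.png")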

Dataset

Dataset will be released soon.

Examples

Type of Edits

1- Local Editing
2- Visual Style Editing
3- Background Editing
4- Shape/Attribute Editing
5- Extreme Shape Editing

Editing on Various Types of Motions

1- Exo-motion
2- Ego-motion
3- Ego-exo motion
4- Occlusions
5- Multiple objects with appearance/disappearance

Citation

@inproceedings{kara2024rave,
  title={RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models},
  author={Ozgur Kara and Bariscan Kurtkaya and Hidir Yesiltepe and James M. Rehg and Pinar Yanardag},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}

Maintenance

This is the official repository for RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models. Feel free to contact Ozgur Kara for any questions or discussions.