
Official PyTorch implementation of StreamV2V.


StreamV2V

English | 中文 | 日本語

Looking Backward: Streaming Video-to-Video Translation with Feature Banks
Feng Liang, Akio Kodaira, Chenfeng Xu, Masayoshi Tomizuka, Kurt Keutzer, Diana Marculescu

arXiv | Project Page

Highlight

Our StreamV2V can perform real-time video-to-video translation on a single 4090TI GPU. Check the video and try it yourself!

Video

In terms of functionality, StreamV2V supports face swap (e.g., to Elon Musk or Will Smith) and video stylization (e.g., to Claymation or doodle art). Check the video and reproduce the results!

Video

Installation

Please see the installation guide.

Getting started

Please see the getting started instructions.

Realtime camera demo on GPU

Please see the camera demo guide.
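
At its core, a realtime camera demo is a read-translate-display loop over webcam frames. The sketch below is a minimal, hypothetical illustration of such a loop using OpenCV; translate_frame is a placeholder standing in for the StreamV2V pipeline call and is not part of this repo's actual API.

# Minimal webcam loop sketch. Assumption: translate_frame is a
# placeholder for the StreamV2V pipeline, NOT this repo's real API.
import cv2

def translate_frame(frame):
    # Placeholder: a real implementation would run the StreamV2V
    # pipeline on this BGR frame and return the stylized result.
    return frame

cap = cv2.VideoCapture(0)  # open the default camera
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out = translate_frame(frame)
        cv2.imshow("StreamV2V (sketch)", out)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
finally:
    cap.release()
    cv2.destroyAllWindows()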

LICENSE

StreamV2V is licensed under a UT Austin Research LICENSE.

Acknowledgements

Our StreamV2V depends heavily on the open-source community. Our code is copied and adapted from StreamDiffusion with LCM-LoRA. Besides the base SD 1.5 model, we also use a variety of LoRAs from CivitAI.
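
For readers curious how these building blocks fit together, the sketch below wires the base SD 1.5 model to an LCM-LoRA with Hugging Face diffusers for single-frame img2img. It is only an illustration of the acknowledged components under assumed model ids and parameters, not StreamV2V's own pipeline, which adds the streaming feature-bank machinery on top.

# Sketch of the base components (SD 1.5 + LCM-LoRA) via diffusers.
# This is NOT StreamV2V's pipeline; it only shows how the acknowledged
# building blocks combine for per-frame img2img.
import torch
from diffusers import AutoPipelineForImage2Image, LCMScheduler
from PIL import Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # base SD 1.5 (hub id may have moved)
    torch_dtype=torch.float16,
).to("cuda")
# Swap in the LCM scheduler and load the LCM-LoRA for few-step inference.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

frame = Image.open("frame.png").convert("RGB")  # one video frame
out = pipe(
    prompt="claymation style",
    image=frame,
    num_inference_steps=4,  # LCM enables very few steps
    strength=0.5,
    guidance_scale=1.0,     # LCM typically uses low or no CFG
).images[0]
out.save("frame_stylized.png")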

Citing StreamV2V 🙏

If you use StreamV2V in your research or wish to refer to the baseline results published in the paper, please use the following BibTeX entry.

StreamV2V TBA

@article{kodaira2023streamdiffusion,
  title={StreamDiffusion: A Pipeline-level Solution for Real-time Interactive Generation},
  author={Kodaira, Akio and Xu, Chenfeng and Hazama, Toshiki and Yoshimoto, Takanori and Ohno, Kohei and Mitsuhori, Shogo and Sugano, Soichi and Cho, Hanying and Liu, Zhijian and Keutzer, Kurt},
  journal={arXiv preprint arXiv:2312.12491},
  year={2023}
}