StyleCrafter-hf-local

Local application of StyleCrafter


StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter


Gongye Liu, Menghan Xia*, Yong Zhang, Haoxin Chen, Jinbo Xing,
Xintao Wang, Yujiu Yang*, Ying Shan


(* corresponding authors)

From Tsinghua University and Tencent AI Lab.

🔆 Introduction

TL;DR: We propose StyleCrafter, a generic method that enhances pre-trained T2V models with style control, supporting Style-Guided Text-to-Image Generation and Style-Guided Text-to-Video Generation.

1. โญโญ Style-Guided Text-to-Video Generation.

Style-guided text-to-video results. Resolution: 320 x 512; Frames: 16. (Compressed)

  2. Style-Guided Text-to-Image Generation.

Style-guided text-to-image results. Resolution: 512 x 512. (Compressed)

๐Ÿ“ Changelog

  • [2023.12.08]: 🔥🔥 Release the Hugging Face online demo.
  • [2023.12.05]: 🔥🔥 Release the code and checkpoint.
  • [2023.11.30]: 🔥🔥 Release the project page.

โณ TODO

  • Remove Video Watermark(due to trained on WebVid10M).

🧰 Models

Model        | Resolution | Checkpoint
StyleCrafter | 320x512    | Hugging Face

It takes approximately 5 seconds to generate a 512×512 image and 85 seconds to generate a 320×512 video with 16 frames on a single NVIDIA A100 (40G) GPU. Inference requires a GPU with at least 16 GB of memory.

💫 Local Inference

  1. Run Install_cn.ps1 in PowerShell
  2. Run run_gui.ps1 in PowerShell
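The two steps above can be run from a PowerShell prompt, assuming the scripts sit in the repository root and script execution is permitted (a sketch, not a verified session):

```powershell
# Run from the repository root in PowerShell.
# If script execution is blocked, you may first need to relax the policy
# for this session only:
Set-ExecutionPolicy -Scope Process -ExecutionPolicy RemoteSigned

# 1. Install dependencies (first run only):
./Install_cn.ps1

# 2. Launch the local GUI:
./run_gui.ps1
```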

๐Ÿ‘จโ€๐Ÿ‘ฉโ€๐Ÿ‘งโ€๐Ÿ‘ฆ Crafter Family

VideoCrafter1: Framework for high-quality text-to-video generation.

ScaleCrafter: Tuning-free method for high-resolution image/video generation.

TaleCrafter: An interactive story visualization tool that supports multiple characters.

LongerCrafter: Tuning-free method for longer high-quality video generation.

DynamiCrafter: Animate open-domain still images to high-quality videos.

📢 Disclaimer

We developed this repository for RESEARCH purposes, so it may only be used for personal, research, and other non-commercial purposes.


๐Ÿ™ Acknowledgements

We would like to thank AK (@_akhaliq) for helping set up the online demo.

📭 Contact

If you have any comments or questions, feel free to contact lgy22@mails.tsinghua.edu.cn.