
Open-Sora Plan


[![Twitter](https://img.shields.io/badge/-Twitter@LinBin46984-black?logo=twitter&logoColor=1D9BF0)](https://x.com/LinBin46984/status/1763476690385424554?s=20)
[![hf_space](https://img.shields.io/badge/🤗-Open%20In%20Spaces-blue.svg)](https://huggingface.co/spaces/LanguageBind/Open-Sora-Plan-v1.0.0) [![hf_space](https://img.shields.io/badge/🤗-Open%20In%20Spaces-blue.svg)](https://huggingface.co/spaces/fffiloni/Open-Sora-Plan-v1-0-0) [![Replicate demo and cloud API](https://replicate.com/camenduru/open-sora-plan-512x512/badge)](https://replicate.com/camenduru/open-sora-plan-512x512) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/Open-Sora-Plan-jupyter/blob/main/Open_Sora_Plan_jupyter.ipynb)

We are thrilled to present Open-Sora-Plan v1.1.0, which significantly enhances video generation quality and text control capabilities. See our report. Note that the .gif previews shown on GitHub are compressed and lose some quality.

Thanks to the HUAWEI Ascend team for supporting us. All training and inference in the second stage were fully supported by Huawei Ascend compute. Models trained on Huawei Ascend can also be loaded onto GPUs and generate videos of the same quality.

็›ฎๅ‰ๅทฒ็ปๆ”ฏๆŒไฝฟ็”จๅ›ฝไบงAI่ฎก็ฎ—็ณป็ปŸ(ๅŽไธบๆ˜‡่…พ๏ผŒๆœŸๅพ…ๆ›ดๅคšๅ›ฝไบง็ฎ—ๅŠ›่Šฏ็‰‡)่ฟ›่กŒๅฎŒๆ•ด็š„่ฎญ็ปƒๅ’ŒๆŽจ็†ใ€‚ๅœจ้กน็›ฎ็ฌฌไบŒ้˜ถๆฎต๏ผŒๆ‰€ๆœ‰่ฎญ็ปƒๅ’ŒๆŽจ็†ไปปๅŠกๅฎŒๅ…จ็”ฑๅŽไธบๆ˜‡่…พ่ฎก็ฎ—็ณป็ปŸๆ”ฏๆŒใ€‚ๆญคๅค–๏ผŒๅŸบไบŽๅŽไธบๆ˜‡่…พ็š„512ๅก้›†็พค่ฎญ็ปƒๅ‡บ็š„ๆจกๅž‹๏ผŒไนŸๅฏไปฅๆ— ็ผๅœฐๅœจGPUไธŠ่ฟ่กŒ๏ผŒๅนถไฟๆŒ็›ธๅŒ็š„่ง†้ข‘่ดจ้‡ใ€‚่ฏฆ็ป†ไฟกๆฏ่ฏทๅ‚่€ƒๆˆ‘ไปฌ็š„hw branch.

221×512×512 Text-to-Video Generation

Each sample is 221×512×512 (9.2s). Prompts (truncated):

  • 3D animation of a small, round, fluffy creature with big, expressive eyes explores ...
  • A single drop of liquid metal falls from a floating orb, landing on a mirror-like ...
  • The video presents an abstract composition centered around a hexagonal shape adorned ...
  • A drone camera circles around a beautiful historic church built on a rocky outcropping ...
  • Aerial view of Santorini during the blue hour, showcasing the stunning architecture ...
  • An aerial shot of a lighthouse standing tall on a rocky cliff, its beacon cutting ...
  • A snowy forest landscape with a dirt road running through it. The road is flanked by ...
  • Drone shot along the Hawaii jungle coastline, sunny day. Kayaks in the water.
  • The camera rotates around a large stack of vintage televisions all showing different ...

65×512×512 Text-to-Video Generation

Each sample is 65×512×512 (2.7s). Prompts (truncated):

  • In an ornate, historical hall, a massive tidal wave peaks and begins to crash. Two ...
  • A Shiba Inu dog wearing a beret and black turtleneck.
  • A painting of a boat on water comes to life, with waves crashing and the boat becoming ...
  • A person clad in a space suit with a helmet and equipped with a chest light and arm ...
  • 3D animation of a small, round, fluffy creature with big, expressive eyes explores a ...
  • In a studio, there is a painting depicting a ship sailing through the rough sea.
  • A robot dog trots down a deserted alley at night, its metallic paws clinking softly ...
  • A lone surfer rides a massive wave, skillfully maneuvering through the surf. The water ...
  • A solitary cheetah sprints across the savannah, its powerful muscles propelling it ...

65×512×512 Video Editing

Each pair shows a generated clip (65×512×512, 2.7s) alongside its edited version (65×512×512, 2.7s).

512×512 Text-to-Image Generation

📰 News

[2024.05.27] 🚀🚀🚀 We are launching Open-Sora Plan v1.1.0, which significantly improves video quality and length, and is fully open source! Please check out our latest report.

[2024.04.09] 🚀 Excited to share our latest exploration on metamorphic time-lapse video generation: MagicTime, which learns real-world physics knowledge from time-lapse videos. The training dataset (still being updated) is available here: Open-Sora-Dataset.

[2024.04.07] 🔥🔥🔥 Today, we are thrilled to present Open-Sora-Plan v1.0.0, which significantly enhances video generation quality and text control capabilities. See our report. Thanks to the HUAWEI Ascend NPU team for supporting us.

[2024.03.27] 🚀🚀🚀 We release the report of CausalVideoVAE, which supports both images and videos. We present our reconstructed video as a demonstration. The text-to-video model is on the way.


[2024.03.10] 🚀🚀🚀 This repo supports training with a latent size of 225×90×90 (t×h×w), which means we are able to train 1 minute of 1080P video at 30 FPS (with 2× frame interpolation and 2× super resolution) under class conditioning.
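As a back-of-the-envelope check on that claim (a sketch only, assuming the 4×8×8 t×h×w autoencoder compression of the CausalVAEModel_4x8x8 used later in this README, plus the 2× interpolation and 2× super resolution above):

```python
# Back-of-the-envelope check of the "1 minute of 1080P at 30 FPS" claim.
# Assumes a 4x temporal / 8x spatial autoencoder (as in CausalVAEModel_4x8x8).
t_lat, h_lat, w_lat = 225, 90, 90   # latent size (t x h x w)

frames  = t_lat * 4 * 2             # 4x temporal decode, then 2x frame interpolation
seconds = frames / 30               # at 30 FPS
rows    = h_lat * 8 * 2             # 8x spatial decode, then 2x super resolution

print(frames, seconds, rows)        # 1800 60.0 1440 -> one minute, >= 1080 rows
```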

[2024.03.08] We support the training code for text-conditioned generation with 16 frames at 512×512. The code is mainly borrowed from Latte.

[2024.03.07] We support training with 128 frames at 256×256 (about 13 seconds at sample rate 3) or 64 frames at 512×512 (about 6 seconds).
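The durations above assume a 30 FPS source: with sample rate 3, each kept frame spans 3/30 of a second, so:

```python
# Clip duration covered = frames * sample_rate / source_fps (30 FPS source assumed).
for frames in (128, 64):
    print(frames, "frames ->", frames * 3 / 30, "s")  # 12.8 s (~13 s), 6.4 s (~6 s)
```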

[2024.03.05] See our latest todo; pull requests are welcome.

[2024.03.04] We re-organize and modularize our code to make it easy to contribute; to contribute, please see the Repo structure.

[2024.03.03] We open some discussions to clarify several issues.

[2024.03.01] Training code is available now! Learn more on our project page. Please feel free to watch 👀 this repository for the latest updates.

💪 Goal

This project aims to create a simple and scalable repo to reproduce Sora (OpenAI, but we prefer to call it "ClosedAI"). We hope the open-source community will contribute to this project. Pull requests are welcome!!!

This project, jointly initiated by the PKU–Tuzhan AIGC Joint Lab, hopes to reproduce Sora through the power of the open-source community. The current version is still far from the goal and requires continuous improvement and rapid iteration. Pull requests are welcome!!!

Project stages:

  • Primary
  1. Set up the codebase and train an unconditional model on a landscape dataset.
  2. Train models that boost resolution and duration.
  • Extensions
  1. Conduct text2video experiments on the landscape dataset.
  2. Train the 1080p model on a video2text dataset.
  3. Control the model with more conditions.
✊ Todo

Set up the codebase and train an unconditional model on the landscape dataset

  • Fix typos & update README. 🤝 Thanks to @mio2333, @CreamyLong, @chg0901, @Nyx-177, @HowardLi1984, @sennnnn, @Jason-fan20
  • Set up environment. 🤝 Thanks to @nameless1117
  • Add Dockerfile. ⌛ [WIP] 🤝 Thanks to @Mon-ius, @SimonLeeGit
  • Enable type hints for functions. 🤝 Thanks to @RuslanPeresy, 🙏 [Need your contribution]
  • Resume from checkpoint.
  • Add Video-VQVAE model, which is borrowed from VideoGPT.
  • Support training DiT with variable aspect ratios, resolutions, and durations.
  • Support dynamic mask input inspired by FiT.
  • Add class conditioning on embeddings.
  • Incorporate Latte as the main codebase.
  • Add VAE model, which is borrowed from Stable Diffusion.
  • Joint dynamic mask input with VAE.
  • Add VQVAE from VQGAN. 🙏 [Need your contribution]
  • Make the codebase ready for cluster training. Add SLURM scripts. 🙏 [Need your contribution]
  • Refactor VideoGPT. 🤝 Thanks to @qqingzheng, @luo3300612, @sennnnn
  • Add sampling script.
  • Add DDP sampling script. ⌛ [WIP]
  • Use accelerate on multi-node. 🤝 Thanks to @sysuyy
  • Incorporate SiT. 🤝 Thanks to @khan-yin
  • Add evaluation scripts (FVD, CLIP score). 🤝 Thanks to @rain305f

Train models that boost resolution and duration

  • Add PI (Positional Interpolation) to support out-of-domain sizes. 🤝 Thanks to @jpthu17
  • Add 2D RoPE to improve generalization ability, as in FiT. 🤝 Thanks to @jpthu17
  • Compress KV following PixArt-sigma.
  • Support DeepSpeed for VideoGPT training. 🤝 Thanks to @sennnnn
  • Train a low-dimensional video AE, whether VAE or VQVAE.
  • Extract offline features.
  • Train with offline features.
  • Add frame interpolation model. 🤝 Thanks to @yunyangge
  • Add super-resolution model. 🤝 Thanks to @Linzy19
  • Add accelerate to automatically manage training.
  • Joint training with images.
  • Implement MaskDiT technique for fast training. 🙏 [Need your contribution]
  • Incorporate NaViT. 🙏 [Need your contribution]
  • Add FreeNoise support for training-free longer video generation. 🙏 [Need your contribution]

Conduct text2video experiments on the landscape dataset

  • Load pretrained weights from Latte.
  • Implement PeRFlow to improve the sampling process. 🙏 [Need your contribution]
  • Finish data loading and pre-processing utils.
  • Add T5 support.
  • Add CLIP support. 🤝 Thanks to @Ytimed2020
  • Add text2image training script.
  • Add prompt captioner.
    • Collect training data.
      • Need video-text pairs with captions. 🙏 [Need your contribution]
      • Extract multi-frame descriptions with large image-language models. 🤝 Thanks to @HowardLi1984
      • Extract video descriptions with large video-language models. 🙏 [Need your contribution]
      • Integrate captions into a dense caption using a large language model, such as GPT-4. 🤝 Thanks to @HowardLi1984
    • Train a captioner to refine captions. 🚀 [Require more computation]

Train the 1080p model on a video2text dataset

  • Looking for a suitable dataset; discussion and recommendations are welcome. 🙏 [Need your contribution]
  • Add synthetic video created by game engines or 3D representations. 🙏 [Need your contribution]
  • Finish data loading and pre-processing utils.
  • Support memory-friendly training.
    • Add FlashAttention-2 from PyTorch.
    • Add xformers. 🤝 Thanks to @jialin-zhao
    • Support mixed-precision training.
    • Add gradient checkpointing.
    • Support ReBased and Ring attention. 🤝 Thanks to @kabachuha
    • Train using the DeepSpeed engine. 🤝 Thanks to @sennnnn
  • Train with a text condition. Here we could conduct different experiments: 🚀 [Require more computation]
    • Train with T5 conditioning.
    • Train with CLIP conditioning.
    • Train with CLIP + T5 conditioning (probably costly during training and experiments).
  • Support Chinese. ⌛ [WIP]

Control the model with more conditions

  • Incorporate ControlNet. ⌛ [WIP] 🙏 [Need your contribution]
  • Incorporate ReVideo. ⌛ [WIP]

📂 Repo structure (WIP)

├── README.md
├── docs
│   ├── Data.md                    -> Datasets description.
│   ├── Contribution_Guidelines.md -> Contribution guidelines description.
├── scripts                        -> All scripts.
├── opensora
│   ├── dataset
│   ├── models
│   │   ├── ae                     -> Compress videos to latents
│   │   │   ├── imagebase
│   │   │   │   ├── vae
│   │   │   │   └── vqvae
│   │   │   └── videobase
│   │   │       ├── vae
│   │   │       └── vqvae
│   │   ├── captioner
│   │   ├── diffusion              -> Denoise latents
│   │   │   ├── diffusion
│   │   │   ├── dit
│   │   │   ├── latte
│   │   │   └── unet
│   │   ├── frame_interpolation
│   │   ├── super_resolution
│   │   └── text_encoder
│   ├── sample
│   ├── train                      -> Training code
│   └── utils

๐Ÿ› ๏ธ Requirements and Installation

  1. Clone this repository and navigate to the Open-Sora-Plan folder
git clone https://github.com/PKU-YuanGroup/Open-Sora-Plan
cd Open-Sora-Plan
  2. Install required packages
conda create -n opensora python=3.8 -y
conda activate opensora
pip install -e .
  3. Install additional packages for training cases
pip install -e ".[train]"
pip install flash-attn --no-build-isolation
  4. Install optional requirements, e.g. for static type checking:
pip install -e '.[dev]'
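After step 2, a quick optional sanity check that the editable install is importable; this assumes only the opensora package layout shown in the Repo structure above:

```python
# Verify the editable install: the package should import from the repo checkout.
import opensora
print(opensora.__file__)
```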

๐Ÿ—๏ธ Usage

🤗 Demo

Gradio Web UI

We highly recommend trying our web demo with the following command. We also provide an online demo hf_space.

v1.0.0

We highly recommend trying our web demo with the following command. We also provide online demos hf_space and hf_space on Hugging Face Spaces.

๐Ÿค Enjoying the Replicate demo and cloud API and Open In Colab, created by @camenduru, who generously supports our research!

python -m opensora.serve.gradio_web_server

CLI Inference

sh scripts/text_condition/sample_video.sh

Datasets

Refer to the document Data.md.

Evaluation

Refer to the document EVAL.md.

CausalVideoVAE

Reconstructing

Example:

python examples/rec_imvi_vae.py \
    --video_path test_video.mp4 --rec_path output_video.mp4 \
    --fps 24 --resolution 512 --crop_size 512 \
    --num_frames 128 --sample_rate 1 \
    --ae CausalVAEModel_4x8x8 --model_path pretrained_488_release \
    --enable_tiling --enable_time_chunk

Parameter explanation:

  • --enable_tiling: a flag that enables tiled convolution, i.e. the VAE processes the video in tiles to reduce peak GPU memory.
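As a conceptual sketch of what tiling buys (this is not the project's actual implementation; the tile size, overlap, and linear blending below are illustrative choices), processing a frame in overlapping tiles keeps peak memory bounded to one tile at a time:

```python
import numpy as np

def process_tiled(frame, fn, tile=256, overlap=32):
    """Apply `fn` to overlapping tiles of `frame` (H, W, C), blending the
    overlaps linearly so only one tile is in flight at a time."""
    h, w, _ = frame.shape
    out = np.zeros(frame.shape, dtype=np.float64)
    weight = np.zeros((h, w, 1), dtype=np.float64)
    step = tile - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            ys = slice(y, min(y + tile, h))
            xs = slice(x, min(x + tile, w))
            patch = fn(frame[ys, xs])
            # Triangular ramps so adjacent tiles cross-fade in the overlap.
            wy = np.minimum(np.arange(ys.stop - ys.start) + 1.0,
                            np.arange(ys.stop - ys.start)[::-1] + 1.0)
            wx = np.minimum(np.arange(xs.stop - xs.start) + 1.0,
                            np.arange(xs.stop - xs.start)[::-1] + 1.0)
            w2d = np.minimum.outer(wy, wx)[..., None]
            out[ys, xs] += patch * w2d
            weight[ys, xs] += w2d
    return out / weight

frame = np.random.rand(512, 512, 3)
rec = process_tiled(frame, fn=lambda t: t)   # identity "decoder"
assert np.allclose(rec, frame)               # overlaps blend back exactly
```

In the real VAE the per-tile function would be the encoder/decoder; tiling trades a little extra compute in the overlap regions for a much smaller peak memory footprint.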

Training and Eval

Please refer to the document CausalVideoVAE.

VideoGPT VQVAE

Please refer to the document VQVAE.

Video Diffusion Transformer

Training

sh scripts/text_condition/train_videoae_65x512x512.sh
sh scripts/text_condition/train_videoae_221x512x512.sh
sh scripts/text_condition/train_videoae_513x512x512.sh
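The script names encode frames×height×width. Assuming the CausalVAEModel_4x8x8 compression from the reconstruction example above (a causal 4× in time that keeps the first frame, plus 8× in space), these frame counts are exactly the lengths that map to integral latent sizes:

```python
# Why 65 / 221 / 513 frames? A causal 4x temporal compression keeps the first
# frame, so valid lengths satisfy frames = 4*k + 1 (assumes CausalVAEModel_4x8x8).
for frames in (65, 221, 513):
    t_lat = (frames - 1) // 4 + 1
    print(f"{frames}x512x512 -> latent {t_lat}x{512 // 8}x{512 // 8}")
# 65x512x512  -> latent 17x64x64
# 221x512x512 -> latent 56x64x64
# 513x512x512 -> latent 129x64x64
```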

💡 How to Contribute to the Open-Sora Plan Community

We greatly appreciate your contributions to the Open-Sora Plan open-source community and your help in making it even better than it is now!

For more details, please refer to the Contribution Guidelines.

๐Ÿ‘ Acknowledgement

  • Latte: The main codebase we built upon; a wonderful video generation model.
  • PixArt-alpha: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis.
  • ShareGPT4Video: Improving Video Understanding and Generation with Better Captions.
  • VideoGPT: Video Generation using VQ-VAE and Transformers.
  • DiT: Scalable Diffusion Models with Transformers.
  • FiT: Flexible Vision Transformer for Diffusion Model.
  • Positional Interpolation: Extending Context Window of Large Language Models via Positional Interpolation.

🔒 License

This project is released under the Apache 2.0 license.

โœ๏ธ Citing

BibTeX

@software{pku_yuan_lab_and_tuzhan_ai_etc_2024_10948109,
  author       = {PKU-Yuan Lab and Tuzhan AI etc.},
  title        = {Open-Sora-Plan},
  month        = apr,
  year         = 2024,
  publisher    = {GitHub},
  doi          = {10.5281/zenodo.10948109},
  url          = {https://doi.org/10.5281/zenodo.10948109}
}

Latest DOI: 10.5281/zenodo.10948109

๐Ÿค Community contributors