v1.0.0
[![Twitter](https://img.shields.io/badge/-Twitter@LinBin46984-black?logo=twitter&logoColor=1D9BF0)](https://x.com/LinBin46984/status/1763476690385424554?s=20) [![hf_space](https://img.shields.io/badge/🤗-Open%20In%20Spaces-blue.svg)](https://huggingface.co/spaces/LanguageBind/Open-Sora-Plan-v1.0.0) [![hf_space](https://img.shields.io/badge/🤗-Open%20In%20Spaces-blue.svg)](https://huggingface.co/spaces/fffiloni/Open-Sora-Plan-v1-0-0) [![Replicate demo and cloud API](https://replicate.com/camenduru/open-sora-plan-512x512/badge)](https://replicate.com/camenduru/open-sora-plan-512x512) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/Open-Sora-Plan-jupyter/blob/main/Open_Sora_Plan_jupyter.ipynb)
We are thrilled to present Open-Sora-Plan v1.1.0, which significantly enhances video generation quality and text control capabilities. See our report. We show compressed .gif previews on GitHub, which lose some quality.
Thanks to the HUAWEI Ascend Team for supporting us. In the second stage, we used Huawei Ascend compute for training; all training and inference in this stage were fully supported by Huawei. Models trained on Huawei Ascend can also be loaded onto GPUs and generate videos of the same quality.
We now fully support training and inference on domestic (Chinese) AI computing systems (Huawei Ascend; we look forward to more domestic AI chips). In the second stage of the project, all training and inference tasks were fully supported by the Huawei Ascend computing system. In addition, models trained on a 512-card Huawei Ascend cluster can run seamlessly on GPUs with the same video quality. For details, please refer to our hw branch.
| generated 65×512×512 (2.7s) | edited 65×512×512 (2.7s) |
|---|---|
[2024.05.27] We are launching Open-Sora Plan v1.1.0, which significantly improves video quality and length, and is fully open source! Please check out our latest report.
[2024.04.09] Excited to share our latest exploration on metamorphic time-lapse video generation: MagicTime, which learns real-world physics knowledge from time-lapse videos. Here is the training dataset (continuously updated): Open-Sora-Dataset.
[2024.04.07] 🔥🔥🔥 Today, we are thrilled to present Open-Sora-Plan v1.0.0, which significantly enhances video generation quality and text control capabilities. See our report. Thanks to HUAWEI NPU for supporting us.
[2024.03.27] We release the report of VideoCausalVAE, which supports both images and videos. We present reconstructed videos in the demonstration below. The text-to-video model is on the way.
View more
[2024.03.10] This repo supports training with a latent size of 225×90×90 (t×h×w), which means we are able to train 1 minute of 1080P video at 30 FPS (with 2× interpolated frames and 2× super resolution) under class-conditioning.
[2024.03.08] We support the training code for text conditioning with 16 frames at 512×512. The code is mainly borrowed from Latte.
[2024.03.07] We support training with 128 frames (about 13 seconds when the sample rate is 3) at 256×256, or 64 frames (about 6 seconds) at 512×512.
[2024.03.05] See our latest todo list; pull requests are welcome.
[2024.03.04] We re-organize and modularize our code to make it easy to contribute to the project; to contribute, please see the Repo structure.
[2024.03.03] We have opened some discussions to clarify several issues.
[2024.03.01] Training code is available now! Learn more on our project page. Please feel free to watch this repository for the latest updates.
This project aims to create a simple and scalable repo to reproduce Sora (OpenAI, but we prefer to call it "ClosedAI"). We hope the open-source community can contribute to this project. Pull requests are welcome!!!
This project, jointly initiated by the Peking University-Rabbitpre (Tuzhan) AIGC Joint Lab, hopes to reproduce Sora through the power of the open-source community. The current version still falls far short of the goal; we will keep improving and iterating quickly. Pull requests are welcome!!!
Project stages:
- Primary
- Set up the codebase and train an unconditional model on a landscape dataset.
- Train models that boost resolution and duration.
- Extensions
- Conduct text2video experiments on landscape dataset.
- Train the 1080p model on a video-text dataset.
- Control model with more conditions.
Todo
- Fix typos & update README. 🤝 Thanks to @mio2333, @CreamyLong, @chg0901, @Nyx-177, @HowardLi1984, @sennnnn, @Jason-fan20
- Set up the environment. 🤝 Thanks to @nameless1117
- Add a Dockerfile. ⌛ [WIP] 🤝 Thanks to @Mon-ius, @SimonLeeGit
- Enable type hints for functions. 🤝 Thanks to @RuslanPeresy, 🙋 [Need your contribution]
- Resume from checkpoint.
- Add Video-VQVAE model, which is borrowed from VideoGPT.
- Support training with variable aspect ratios, resolutions, and durations on DiT.
- Support dynamic mask input, inspired by FiT.
- Add class-conditioning on embeddings.
- Incorporating Latte as main codebase.
- Add VAE model, which is borrowed from Stable Diffusion.
- Joint dynamic mask input with VAE.
- Add VQVAE from VQGAN. 🙋 [Need your contribution]
- Make the codebase ready for cluster training; add SLURM scripts. 🙋 [Need your contribution]
- Refactor VideoGPT. 🤝 Thanks to @qqingzheng, @luo3300612, @sennnnn
- Add sampling script.
- Add DDP sampling script. ⌛ [WIP]
- Use accelerate on multi-node. 🤝 Thanks to @sysuyy
- Incorporate SiT. 🤝 Thanks to @khan-yin
- Add evaluation scripts (FVD, CLIP score). 🤝 Thanks to @rain305f
- Add positional interpolation (PI) to support out-of-domain sizes. 🤝 Thanks to @jpthu17
- Add 2D RoPE to improve generalization ability, as in FiT. 🤝 Thanks to @jpthu17
- Compress KV according to PixArt-sigma.
- Support DeepSpeed for VideoGPT training. 🤝 Thanks to @sennnnn
- Train a low dimension Video-AE, whether it is VAE or VQVAE.
- Extract offline feature.
- Train with offline feature.
- Add a frame interpolation model. 🤝 Thanks to @yunyangge
- Add a super-resolution model. 🤝 Thanks to @Linzy19
- Add accelerate to automatically manage training.
- Joint training with images.
- Implement MaskDiT technique for fast training. 🙋 [Need your contribution]
- Incorporate NaViT. 🙋 [Need your contribution]
- Add FreeNoise support for training-free longer video generation. 🙋 [Need your contribution]
- Load pretrained weights from Latte.
- Implement PeRFlow for improving the sampling process. 🙋 [Need your contribution]
- Finish data loading, pre-processing utils.
- Add T5 support.
- Add CLIP support. 🤝 Thanks to @Ytimed2020
- Add text2image training script.
- Add prompt captioner.
- Collect training data.
- Need video-text pairs with captions. 🙋 [Need your contribution]
- Extract multi-frame descriptions with large image-language models. 🤝 Thanks to @HowardLi1984
- Extract video descriptions with large video-language models. 🙋 [Need your contribution]
- Integrate captions into a dense caption using a large language model, such as GPT-4. 🤝 Thanks to @HowardLi1984
- Train a captioner to refine captions. [Require more computation]
- Collect training data.
- Looking for a suitable dataset; discussion and recommendations are welcome. 🙋 [Need your contribution]
- Add synthetic video created by game engines or 3D representations. 🙋 [Need your contribution]
- Finish data loading, and pre-processing utils.
- Support memory friendly training.
- Add flash-attention2 from pytorch.
- Add xformers. 🤝 Thanks to @jialin-zhao
- Support mixed precision training.
- Add gradient checkpoint.
- Support for ReBased and Ring attention. 🤝 Thanks to @kabachuha
- Train using the DeepSpeed engine. 🤝 Thanks to @sennnnn
- Train with a text condition. Here we could conduct different experiments: [Require more computation]
- Train with T5 conditioning.
- Train with CLIP conditioning.
- Train with CLIP + T5 conditioning (probably costly during training and experiments; a hedged sketch of combining the two encoders follows this list).
- Support Chinese. ⌛ [WIP]
- Incorporating ControlNet. ⌛ [WIP] 🙋 [Need your contribution]
- Incorporating ReVideo. ⌛ [WIP]
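As context for the CLIP + T5 conditioning item above, here is a minimal sketch of how the two text encoders could be combined into a single conditioning sequence for cross-attention. The specific checkpoints (`openai/clip-vit-large-patch14`, `google/flan-t5-large`), the 1152-dim projection width, and the concatenation scheme are illustrative assumptions, not the project's actual design.

```python
# Illustrative only: combine CLIP and T5 text embeddings into one conditioning
# sequence. Checkpoints and the 1152-dim width are assumptions for this sketch.
import torch
from transformers import CLIPTextModel, CLIPTokenizer, T5EncoderModel, T5Tokenizer

prompt = "a corgi surfing a wave at sunset"

# CLIP text encoder (ViT-L/14, hidden size 768, max 77 tokens)
clip_tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
clip_enc = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
clip_in = clip_tok(prompt, padding="max_length", max_length=77,
                   truncation=True, return_tensors="pt")
clip_emb = clip_enc(**clip_in).last_hidden_state            # (1, 77, 768)

# T5 encoder (hidden size 1024 for t5-large variants, longer context)
t5_tok = T5Tokenizer.from_pretrained("google/flan-t5-large")
t5_enc = T5EncoderModel.from_pretrained("google/flan-t5-large")
t5_in = t5_tok(prompt, padding="max_length", max_length=120,
               truncation=True, return_tensors="pt")
t5_emb = t5_enc(**t5_in).last_hidden_state                   # (1, 120, 1024)

# Project both into a common width and concatenate along the sequence axis,
# so a diffusion transformer can cross-attend to one joint text context.
proj_clip = torch.nn.Linear(768, 1152)
proj_t5 = torch.nn.Linear(1024, 1152)
context = torch.cat([proj_clip(clip_emb), proj_t5(t5_emb)], dim=1)
print(context.shape)  # torch.Size([1, 197, 1152])
```

The main trade-off is cost: two encoders roughly double the text-encoding compute and memory per sample, which is why the item is flagged as expensive.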
├── README.md
├── docs
│   ├── Data.md -> Datasets description.
│   └── Contribution_Guidelines.md -> Contribution guidelines description.
├── scripts -> All scripts.
└── opensora
    ├── dataset
    ├── models
    │   ├── ae -> Compress videos to latents
    │   │   ├── imagebase
    │   │   │   ├── vae
    │   │   │   └── vqvae
    │   │   └── videobase
    │   │       ├── vae
    │   │       └── vqvae
    │   ├── captioner
    │   ├── diffusion -> Denoise latents
    │   │   ├── diffusion
    │   │   ├── dit
    │   │   ├── latte
    │   │   └── unet
    │   ├── frame_interpolation
    │   ├── super_resolution
    │   └── text_encoder
    ├── sample
    ├── train -> Training code
    └── utils
- Clone this repository and navigate to the Open-Sora-Plan folder
git clone https://github.com/PKU-YuanGroup/Open-Sora-Plan
cd Open-Sora-Plan
- Install required packages
conda create -n opensora python=3.8 -y
conda activate opensora
pip install -e .
- Install additional packages for training cases
pip install -e ".[train]"
pip install flash-attn --no-build-isolation
- Install optional requirements such as static type checking:
pip install -e '.[dev]'
We highly recommend trying out our web demo with the following command. We also provide an online demo.
v1.0.0
We highly recommend trying out our web demo with the following command. We also provide an online demo in Hugging Face Spaces.
🤝 Enjoy the Replicate demo and Colab notebook, created by @camenduru, who generously supports our research!
python -m opensora.serve.gradio_web_server
sh scripts/text_condition/sample_video.sh
Refer to Data.md
Refer to the document EVAL.md.
Example:
python examples/rec_imvi_vae.py --video_path test_video.mp4 --rec_path output_video.mp4 --fps 24 --resolution 512 --crop_size 512 --num_frames 128 --sample_rate 1 --ae CausalVAEModel_4x8x8 --model_path pretrained_488_release --enable_tiling --enable_time_chunk
Parameter explanation:
--enable_tiling: this flag enables tiled convolution, so the VAE processes the video in spatial tiles to reduce peak memory.
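For intuition, below is a minimal, hypothetical sketch of what tiled decoding does: split the latent into overlapping spatial tiles, decode each tile separately, and average the overlaps, trading extra compute for lower peak memory. The `decode_tile` stand-in, tile sizes, and blending are illustrative assumptions, not the CausalVAE implementation.

```python
# Illustrative only: tiled spatial decoding. decode_tile is a stand-in for the
# real VAE decoder (which would also map latent channels to RGB); tile sizes
# and simple averaging of overlaps are assumptions for this sketch.
import torch
import torch.nn.functional as F

def decode_tile(latent_tile: torch.Tensor) -> torch.Tensor:
    # Stand-in decoder: 8x spatial upsampling, temporal dimension unchanged.
    return F.interpolate(latent_tile, scale_factor=(1, 8, 8), mode="nearest")

def tiled_decode(latent: torch.Tensor, tile: int = 32, overlap: int = 8) -> torch.Tensor:
    """Decode a (B, C, T, H, W) latent in overlapping spatial tiles, averaging overlaps."""
    b, c, t, h, w = latent.shape
    out = torch.zeros(b, c, t, h * 8, w * 8)
    count = torch.zeros_like(out)
    step = tile - overlap
    for i in range(0, h, step):
        for j in range(0, w, step):
            dec = decode_tile(latent[:, :, :, i:i + tile, j:j + tile])
            oi, oj = i * 8, j * 8
            out[:, :, :, oi:oi + dec.shape[-2], oj:oj + dec.shape[-1]] += dec
            count[:, :, :, oi:oi + dec.shape[-2], oj:oj + dec.shape[-1]] += 1
    return out / count.clamp(min=1)

# A 65x512x512 clip maps to roughly a 17x64x64 latent with a 4x8x8 CausalVAE.
print(tiled_decode(torch.randn(1, 4, 17, 64, 64)).shape)  # torch.Size([1, 4, 17, 512, 512])
```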
Please refer to the document CausalVideoVAE.
Please refer to the document VQVAE.
sh scripts/text_condition/train_videoae_65x512x512.sh
sh scripts/text_condition/train_videoae_221x512x512.sh
sh scripts/text_condition/train_videoae_513x512x512.sh
We greatly appreciate your contributions to the Open-Sora Plan open-source community and for helping us make it even better than it is now!
For more details, please refer to the Contribution Guidelines.
- Latte: The main codebase we built upon; a wonderful video generation model.
- PixArt-alpha: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis.
- ShareGPT4Video: Improving Video Understanding and Generation with Better Captions.
- VideoGPT: Video Generation using VQ-VAE and Transformers.
- DiT: Scalable Diffusion Models with Transformers.
- FiT: Flexible Vision Transformer for Diffusion Model.
- Positional Interpolation: Extending Context Window of Large Language Models via Positional Interpolation.
- See LICENSE for details.
@software{pku_yuan_lab_and_tuzhan_ai_etc_2024_10948109,
author = {PKU-Yuan Lab and Tuzhan AI etc.},
title = {Open-Sora-Plan},
month = apr,
year = 2024,
publisher = {GitHub},
doi = {10.5281/zenodo.10948109},
url = {https://doi.org/10.5281/zenodo.10948109}
}