For results sharing and discussion: https://discord.gg/hSJZ35fV
For codebase and deployment-related discussion: https://discord.gg/2HFUHT9p
Official implementation of StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation.
StoryDiffusion can create a coherent story by generating consistent images and videos. Our work has two main parts:
- Consistent self-attention for character-consistent image generation over long-range sequences. It is hot-pluggable and compatible with all SD1.5- and SDXL-based image diffusion models. In the current implementation, the user needs to provide at least 3 text prompts for the consistent self-attention module; we recommend at least 5-6 text prompts for better layout arrangement (a simplified sketch of the mechanism follows this list).
- Motion predictor for long-range video generation, which predicts motion between condition images in a compressed image semantic space, enabling larger-scale motion prediction.
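To make the mechanism concrete, below is a minimal, self-contained PyTorch sketch of the shared-attention idea: every image in a jointly generated batch attends to its own tokens plus a subset of tokens sampled from the other images in the batch. This is only an illustrative sketch, not the repository's implementation; the class name, the `sample_ratio` parameter, and the random token-sampling scheme are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConsistentSelfAttentionSketch(nn.Module):
    """Illustrative sketch (not the repo's code): each image in a jointly generated
    batch attends to its own tokens plus tokens sampled from the other images,
    which encourages consistent characters across the sequence."""

    def __init__(self, dim: int, num_heads: int = 8, sample_ratio: float = 0.5):
        super().__init__()
        self.num_heads = num_heads
        self.sample_ratio = sample_ratio  # assumed hyperparameter, for illustration only
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (B, N, C) latent tokens for B images generated together.
        B, N, C = hidden_states.shape
        head_dim = C // self.num_heads

        # Randomly sample tokens from every image and share them across the batch.
        num_sampled = max(1, int(N * self.sample_ratio))
        idx = torch.randperm(N, device=hidden_states.device)[:num_sampled]
        shared = hidden_states[:, idx, :].reshape(1, B * num_sampled, C).expand(B, -1, -1)

        # Keys/values cover both the image's own tokens and the shared tokens;
        # queries stay per-image, so the output shape is unchanged.
        kv_input = torch.cat([hidden_states, shared], dim=1)

        def split_heads(x):
            return x.reshape(B, -1, self.num_heads, head_dim).transpose(1, 2)

        q = split_heads(self.to_q(hidden_states))
        k = split_heads(self.to_k(kv_input))
        v = split_heads(self.to_v(kv_input))

        out = F.scaled_dot_product_attention(q, k, v)  # requires PyTorch >= 2.0
        out = out.transpose(1, 2).reshape(B, N, C)
        return self.to_out(out)
```

Plugged in place of the ordinary self-attention of an SD1.5/SDXL UNet, an operation like this couples the denoising of all story frames, which is what keeps the characters consistent across prompts.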
Leveraging the images produced by our consistent self-attention mechanism, we can extend the process to create videos by seamlessly transitioning between these images. This can be considered a two-stage approach to long video generation.
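At a high level, the two stages could be wired together as in the following sketch; `image_model` and `motion_predictor` are placeholder callables for illustration, not the released API.

```python
# Illustrative two-stage pipeline (placeholder names, not the released API).
def generate_long_video(story_prompts, image_model, motion_predictor):
    # Stage 1: generate character-consistent keyframes with consistent self-attention.
    keyframes = image_model(story_prompts)
    # Stage 2: predict the motion between neighbouring keyframes in a compressed
    # semantic space and decode each prediction into a short clip.
    clips = [motion_predictor(start, end) for start, end in zip(keyframes, keyframes[1:])]
    # Concatenate the clips into one long video (a flat list of frames).
    return [frame for clip in clips for frame in clip]
```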
Note: the results are highly compressed for speed; visit our website for the high-quality versions.
Combining the two parts, we can generate very long and high-quality AIGC videos.
(Videos 1-3: long AIGC video results from combining the two parts; see the project website for the high-quality versions.)
Our Image-to-Video model can generate a video by providing a sequence of user-input condition images.
(Videos 1-6: videos generated from user-provided condition images; see the project website for the high-quality versions.)
- Comic Results of StoryDiffusion.
- Video Results of StoryDiffusion.
- Source code of Comic Generation
- Source code of the Gradio demo
- Source code of Video Generation Model
- Pretrained weight of Video Generation Model
- Python >= 3.8 (Anaconda or Miniconda is recommended)
- PyTorch >= 2.0.0
conda create --name storydiffusion python=3.10
conda activate storydiffusion
pip install -U pip
# Install requirements
pip install -r requirements.txt
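After installing, you can optionally run a quick sanity check to confirm that the PyTorch version matches the requirement above and that a GPU build is available (this check is plain PyTorch, not part of the repository):

```python
import torch

print(torch.__version__)           # should report >= 2.0.0
print(torch.cuda.is_available())   # True if a CUDA-enabled build and GPU are present
```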
Currently, we provide two ways for you to generate comics.
You can open the Comic_Generation.ipynb notebook and run the code.
Run the following command:
python gradio_app_sdxl_specific_id.py
If you have any questions, you are welcome to email ypzhousdu@gmail.com and zhoudaquan21@gmail.com.
This project strives to impact the domain of AI-driven image and video generation positively. Users are granted the freedom to create images and videos using this tool, but they are expected to comply with local laws and utilize it responsibly. The developers do not assume any responsibility for potential misuse by users.
If you find StoryDiffusion useful for your research and applications, please cite using this BibTeX:
@article{Zhou2024storydiffusion,
  title={StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation},
  author={Zhou, Yupeng and Zhou, Daquan and Cheng, Ming-Ming and Feng, Jiashi and Hou, Qibin},
  year={2024}
}