🤗 ToonCrafter can interpolate between two cartoon images by leveraging pre-trained image-to-video diffusion priors. Please check our project page and paper for more information.
- [2024.05.29]: 🔥🔥 Release code and model weights.
- [2024.05.28]: Launch the project page and update the arXiv preprint.
| Model | Resolution | GPU Mem. & Inference Time (A100, 50 DDIM steps) | Checkpoint |
|---|---|---|---|
| ToonCrafter_512 | 320x512 | 12.8 GB & 20 s (`perframe_ae=True`) | Hugging Face |
Currently, ToonCrafter supports generating videos of up to 16 frames at a resolution of 512x320. Inference time can be reduced by using fewer DDIM steps.
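Since DDIM sampling cost is roughly linear in the number of steps, the table's baseline (50 steps ≈ 20 s on an A100) gives a quick back-of-the-envelope estimate for other step counts. The helper below is an illustrative sketch, not part of the ToonCrafter codebase:

```python
def estimate_inference_seconds(ddim_steps, baseline_steps=50, baseline_seconds=20.0):
    """Rough estimate of sampling time, assuming time scales linearly with
    DDIM step count. Baseline numbers come from the table above; actual
    timings also depend on hardware, resolution, and decoding settings."""
    return baseline_seconds * ddim_steps / baseline_steps

# e.g. halving the steps roughly halves the sampling time
print(estimate_inference_seconds(25))
```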
```sh
conda create -n tooncrafter python=3.8.5
conda activate tooncrafter
pip install -r requirements.txt
```
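Because the environment above pins Python 3.8.5, a quick interpreter check can catch accidentally installing the requirements into the wrong environment. This small helper is an assumption-free stdlib sketch, not part of the repository:

```python
import sys

def python_version_matches(required=(3, 8)):
    """Return True if the active interpreter's major/minor version matches
    the version pinned in the conda environment (3.8 by default)."""
    return sys.version_info[:2] == required

if not python_version_matches():
    print("Warning: active Python is not 3.8; activate the tooncrafter env.")
```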
Download the pretrained ToonCrafter_512 checkpoint and place it at `checkpoints/tooncrafter_512_interp_v1/model.ckpt`.
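A missing or misplaced checkpoint is a common first-run failure, so it can help to verify the expected path before launching inference. The snippet below is a hypothetical convenience check, not part of the ToonCrafter code:

```python
from pathlib import Path

def expected_checkpoint(root="checkpoints"):
    """Build the checkpoint path this README expects the weights to live at."""
    return Path(root) / "tooncrafter_512_interp_v1" / "model.ckpt"

ckpt = expected_checkpoint()
if not ckpt.exists():
    print(f"Checkpoint not found at {ckpt}; download ToonCrafter_512 first.")
```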
```sh
sh scripts/run_application.sh  # generate frame interpolation
```
Download the pretrained model and place it in the corresponding directory as described above, then launch the local Gradio demo:

```sh
python gradio_app.py
```
This project strives to have a positive impact on the domain of AI-driven video generation. Users are free to create videos with this tool, but they are expected to comply with local laws and use it responsibly. The developers do not assume any responsibility for potential misuse by users.