miracle-fmh's Stars
imDazui/Tvlist-awesome-m3u-m3u8
A curated collection of live-streaming source resources 📺 💯 IPTV, M3U. Wash your hands, wear a mask, and best wishes for everyone's health.
salesforce/LAVIS
LAVIS - A One-stop Library for Language-Vision Intelligence
openai/consistency_models
Official repo for consistency models.
HVision-NKU/StoryDiffusion
Accepted as a NeurIPS 2024 Spotlight presentation paper
luosiallen/latent-consistency-model
Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference
ali-vilab/VGen
Official repo for VGen: a holistic video generation ecosystem built on diffusion models
ChenHsing/Awesome-Video-Diffusion-Models
[CSUR] A Survey on Video Diffusion Models
bghira/SimpleTuner
A general fine-tuning kit geared toward diffusion models.
Vchitect/Latte
Latte: Latent Diffusion Transformer for Video Generation.
NUS-HPC-AI-Lab/OpenDiT
OpenDiT: An Easy, Fast and Memory-Efficient System for DiT Training and Inference
mini-sora/minisora
MiniSora: a community project that aims to explore the implementation path and future development direction of Sora.
s9roll7/animatediff-cli-prompt-travel
AnimateDiff CLI with prompt travel
showlab/Show-1
Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation
Vchitect/SEINE
[ICLR 2024] SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction
Vchitect/LaVie
[IJCV 2024] LaVie: High-Quality Video Generation with Cascaded Latent Diffusion Models
chuanyangjin/fast-DiT
Fast Diffusion Models with Transformers
willisma/SiT
Official PyTorch Implementation of "SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers"
stochasticai/x-stable-diffusion
Real-time inference for Stable Diffusion - 0.88s latency. Covers AITemplate, nvFuser, TensorRT, FlashAttention. Join our Discord community: https://discord.com/invite/TgHXuSJEk6
NVlabs/edm2
Analyzing and Improving the Training Dynamics of Diffusion Models (EDM2)
YingqingHe/ScaleCrafter
[ICLR 2024 Spotlight] Official implementation of ScaleCrafter for higher-resolution visual generation at inference time.
Zhen-Dong/Magic-Me
Code for ID-Specific Video Customized Diffusion
vvictoryuki/AnimateZero
Official PyTorch implementation for the paper "AnimateZero: Video Diffusion Models are Zero-Shot Image Animators"
apapiu/transformer_latent_diffusion
Text to Image Latent Diffusion using a Transformer core
tumurzakov/AnimateDiff
AnimateDiff with training support
YuchuanTian/U-DiT
[NeurIPS 2024] The official code of "U-DiTs: Downsample Tokens in U-Shaped Diffusion Transformers"
mmathew23/improved_edm
Implementation of "Analyzing and Improving the Training Dynamics of Diffusion Models"
CiaraStrawberry/Temporal-Image-AnimateDiff
A retraining of AnimateDiff conditioned on an init image
crystallee-ai/animatediff-controlnet
Adds a ControlNet to AnimateDiff to animate a given image
G-U-N/Gen-L-2
Long video generation with short video diffusion models.
manshoety/AD-Evo-Tuner-V2
Motion module fine-tuner for AnimateDiff.