diffusion-transformer
There are 21 repositories under the diffusion-transformer topic.
Alpha-VLLM/Lumina-T2X
Lumina-T2X is a unified framework for Text to Any Modality Generation
shallowdream204/DreamClear
[NeurIPS 2024🔥] DreamClear: High-Capacity Real-World Image Restoration with Privacy-Safe Dataset Curation
lucasnewman/f5-tts-mlx
Implementation of F5-TTS in MLX
wangjiangshan0725/RF-Solver-Edit
Taming FLUX for Image Inversion & Editing; OpenSora for Video Inversion & Editing! (Official implementation of "Taming Rectified Flow for Inversion and Editing".)
DiT-3D/DiT-3D
🔥🔥🔥 Official codebase of "DiT-3D: Exploring Plain Diffusion Transformers for 3D Shape Generation"
TiankaiHang/Min-SNR-Diffusion-Training
[ICCV 2023] Efficient Diffusion Training via Min-SNR Weighting Strategy
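The Min-SNR strategy weights each timestep's diffusion loss by min(SNR(t), γ)/SNR(t) (for ε-prediction), so low-noise steps with huge SNR stop dominating training while high-noise steps keep weight 1. A minimal pure-Python sketch under an assumed linear beta schedule; the helper names and the γ = 5 default are illustrative, not the repo's API:

```python
import math

def min_snr_weight(alpha_bar_t: float, gamma: float = 5.0) -> float:
    """Min-SNR loss weight for epsilon-prediction: min(SNR(t), gamma) / SNR(t),
    where SNR(t) = alpha_bar_t / (1 - alpha_bar_t)."""
    snr = alpha_bar_t / (1.0 - alpha_bar_t)
    return min(snr, gamma) / snr

def alpha_bars(steps: int = 1000, beta_start: float = 1e-4, beta_end: float = 0.02):
    """Cumulative products of (1 - beta_t) for a linear beta schedule (hypothetical setup)."""
    out, prod = [], 1.0
    for i in range(steps):
        beta = beta_start + (beta_end - beta_start) * i / (steps - 1)
        prod *= 1.0 - beta
        out.append(prod)
    return out

abars = alpha_bars()
# Early (low-noise) steps have enormous SNR, so their weight is clipped far below 1;
# late (high-noise) steps have SNR < gamma and keep a weight of exactly 1.
print(min_snr_weight(abars[0]))   # small (~gamma / SNR)
print(min_snr_weight(abars[-1]))  # 1.0
```

The clipping constant γ trades off convergence speed against final quality; the paper's experiments center on γ = 5.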
AdaCache-DiT/AdaCache
Adaptive Caching for Faster Video Generation with Diffusion Transformers
yangluo7/CAME
The official implementation of "CAME: Confidence-guided Adaptive Memory Optimization"
desaixie/pa_vdm
arXiv paper "Progressive Autoregressive Video Diffusion Models": https://arxiv.org/abs/2410.08151
lucasnewman/f5-tts-swift
Implementation of F5-TTS in Swift using MLX
milmor/diffusion-transformer
Implementation of the Diffusion Transformer model in PyTorch
prathebaselva/FORA
FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling.
explainingai-code/DiT-PyTorch
This repo implements Diffusion Transformers (DiT) in PyTorch and provides training and inference code on the CelebHQ dataset
milmor/diffusion-transformer-keras
Implementation of a Latent Diffusion Transformer model in TensorFlow / Keras
K1nght/Unified-Unlearning-w-Remain-Geometry
[NeurIPS2024 (Spotlight)] "Unified Gradient-Based Machine Unlearning with Remain Geometry Enhancement" by Zhehao Huang, Xinwen Cheng, JingHao Zheng, Haoran Wang, Zhengbao He, Tao Li, Xiaolin Huang
ArchiMickey/Just-a-DiT
A repo containing a modified version of the Diffusion Transformer
explainingai-code/VideoGeneration-PyTorch
This repo implements a video generation model using Latent Diffusion Transformers (Latte) in PyTorch and provides training and inference code on the Moving MNIST and UCF101 datasets
VachanVY/diffusion-transformer
PyTorch and JAX implementations of "Scalable Diffusion Models with Transformers" (Diffusion Transformers, DiT)
dirmeier/diffusion-transformer
A diffusion transformer implementation in Flax
u84819482/Nano-diffusion
Minimal DDPM/DiT-based generation of MNIST digits
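At the DDPM core of a project like this is the closed-form forward (noising) process q(x_t | x_0) = N(√ᾱ_t · x_0, (1 − ᾱ_t)·I), which the model is trained to reverse. A minimal pure-Python sketch of one forward sample; the function and variable names are illustrative, not this repo's code:

```python
import math
import random

def ddpm_forward(x0, alpha_bar_t, rng=random):
    """Sample x_t ~ q(x_t | x_0): scale the clean signal by sqrt(alpha_bar_t)
    and add Gaussian noise with std sqrt(1 - alpha_bar_t), element-wise."""
    a = math.sqrt(alpha_bar_t)
    s = math.sqrt(1.0 - alpha_bar_t)
    return [a * v + s * rng.gauss(0.0, 1.0) for v in x0]

# At alpha_bar_t = 1 (t = 0) the sample is the clean image; as alpha_bar_t -> 0
# it approaches pure Gaussian noise, the prior the reverse (denoising) process starts from.
pixels = [0.1, 0.5, 0.9]
print(ddpm_forward(pixels, 1.0))  # [0.1, 0.5, 0.9]
```

Training then amounts to predicting the added noise from x_t and t, which is exactly the prediction target a DiT backbone is plugged into.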