Eun0's Stars
kijai/ComfyUI-DynamiCrafterWrapper
Wrapper to use DynamiCrafter models in ComfyUI
ToonCrafter/ToonCrafter
A research paper on generative cartoon interpolation
nosiu/InstantID-faceswap
InstantID: Zero-shot Identity-Preserving Generation in Seconds 🔥
ShineChen1024/MagicClothing
Official implementation of Magic Clothing: Controllable Garment-Driven Image Synthesis
banodoco/Steerable-Motion
A ComfyUI node for driving videos using batches of images.
YangLing0818/RPG-DiffusionMaster
[ICML 2024] Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs (RPG)
aigc3d/motionshop
Project page for replacing the human motion in a video with a virtual 3D human
MooreThreads/Moore-AnimateAnyone
Character Animation (AnimateAnyone, Face Reenactment)
guoqincode/Open-AnimateAnyone
Unofficial Implementation of Animate Anyone
Picsart-AI-Research/Specialist-Diffusion
[CVPR 2023] Specialist Diffusion: Extremely Low-Shot Fine-Tuning of Large Diffusion Models
zllrunning/face-parsing.PyTorch
Using modified BiSeNet for face parsing in PyTorch
PRIV-Creation/Awesome-Controllable-T2I-Diffusion-Models
A collection of resources on controllable generation with text-to-image diffusion models.
mkshing/e4t-diffusion
Implementation of Encoder-based Domain Tuning for Fast Personalization of Text-to-Image Models
haofanwang/Lora-for-Diffusers
The easiest-to-understand tutorial for using LoRA (Low-Rank Adaptation) within the diffusers framework, for AI generation researchers 🔥
Amblyopius/Stable-Diffusion-ONNX-FP16
Example code and documentation on how to get Stable Diffusion running with ONNX FP16 models on DirectML. Can run accelerated on all DirectML supported cards including AMD and Intel.
haofanwang/ControlNet-for-Diffusers
Transfer ControlNet to any base model in diffusers 🔥
haofanwang/Train-ControlNet-in-Diffusers
We show you how to train a ControlNet with your own control hint in the diffusers framework
MichalGeyer/plug-and-play
Official PyTorch implementation for "Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation" (CVPR 2023)
XavierXiao/Dreambooth-Stable-Diffusion
Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
ShivamShrirao/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
zyxElsa/InST
Official implementation of the paper "Inversion-Based Style Transfer with Diffusion Models" (CVPR 2023)
justinpinkney/stable-diffusion
kwonminki/One-sentence_Diffusion_summary
The repo for studying and sharing diffusion models.
williamyang1991/DualStyleGAN
[CVPR 2022] Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer
penway/LFM
Latent feature maximization, a loss module for DCGAN
NVlabs/denoising-diffusion-gan
Tackling the Generative Learning Trilemma with Denoising Diffusion GANs https://arxiv.org/abs/2112.07804
gwang-kim/DiffusionCLIP
[CVPR 2022] Official PyTorch Implementation for DiffusionCLIP: Text-guided Image Manipulation Using Diffusion Models
chengzhipanpan/PaSeR
Code for the EMNLP paper "Sentence Representation Learning with Generative Objective rather than Contrastive Objective"
CompVis/stable-diffusion
A latent text-to-image diffusion model
KyubumShin/airush_auto_submit
airush_auto_submit