ChenHsing's Stars
ccfddl/ccf-deadlines
⏰ Collaboratively track deadlines of conferences recommended by CCF (website, Python CLI, WeChat applet). If you find it useful, please star this project, thanks~
wenhao728/awesome-diffusion-v2v
Awesome diffusion Video-to-Video (V2V): a collection of papers on diffusion-model-based video editing, a.k.a. video-to-video (V2V) translation, plus video editing benchmark code.
Tencent/HunyuanDiT
Hunyuan-DiT: A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding
HL-hanlin/Ctrl-Adapter
Official implementation of Ctrl-Adapter: An Efficient and Versatile Framework for Adapting Diverse Controls to Any Diffusion Model
voxel51/fiftyone
Refine high-quality datasets and visual AI models
Francis-Rings/MotionFollower
MotionFollower: Editing Video Motion via Lightweight Score-Guided Diffusion
TencentARC/SmartEdit
Official code of SmartEdit [CVPR-2024 Highlight]
Stability-AI/generative-models
Generative Models by Stability AI
NExT-GPT/NExT-GPT
Code and models for NExT-GPT: Any-to-Any Multimodal Large Language Model
kyegomez/qformer
Implementation of Qformer from BLIP2 in Zeta Lego blocks.
TencentARC/PhotoMaker
PhotoMaker [CVPR 2024]
PixArt-alpha/PixArt-alpha
PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis
Vchitect/SEINE
[ICLR 2024] SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction
yandex-research/adaptive-diffusion
[CVPR'2024] Adaptive Teacher-Student Collaboration for Text-Conditional Diffusion Models
daooshee/HD-VG-130M
The HD-VG-130M Dataset
OpenMotionLab/MotionGPT
[NeurIPS 2023] MotionGPT: Human Motion as a Foreign Language, a unified motion-language generation model using LLMs
pixeli99/SVD_Xtend
Stable Video Diffusion Training Code and Extensions.
luosiallen/latent-consistency-model
Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference
dhanushreddy291/sdxl-turbo-cog
SDXL-Turbo is a real-time synthesis model derived from SDXL 1.0, trained with a method called Adversarial Diffusion Distillation (ADD). It achieves high image quality in one to four sampling steps.
aim-uofa/AutoStory
genforce/freecontrol
Official implementation of CVPR 2024 paper: "FreeControl: Training-Free Spatial Control of Any Text-to-Image Diffusion Model with Any Condition"
yukw777/VideoBLIP
Supercharged BLIP-2 that can handle videos
WarranWeng/ART.V
ChenHsing/VIDiff
Francis-Rings/MotionEditor
[CVPR 2024] MotionEditor is the first diffusion-based model capable of video motion editing.
openai/consistencydecoder
Consistency Distilled Diff VAE
buyizhiyou/NRVQA
No-reference image/video quality assessment (BRISQUE/NIQE/PIQE/DIQA/deepBIQ/VSFA)
wengzejia1/Open-VCLIP
showlab/loveu-tgve-2023
Official GitHub repository for the Text-Guided Video Editing (TGVE) competition of LOVEU Workshop @ CVPR'23.
cientgu/InstructDiffusion
PyTorch implementation of InstructDiffusion, a unifying and generic framework for aligning computer vision tasks with human instructions.