shu-le's Stars
yangxiaofeng/rectified_flow_prior
Official code for paper: Text-to-Image Rectified Flow as Plug-and-Play Priors
hanlinm2/projective-geometry
[CVPR 2024] Shadows Don’t Lie and Lines Can’t Bend! Generative Models don’t know Projective Geometry...for now
PKU-Alignment/safe-sora
SafeSora is a human preference dataset designed to support safety alignment research in the text-to-video generation field, aiming to enhance the helpfulness and harmlessness of Large Vision Models (LVMs).
liuff19/DreamReward
[ECCV 2024] DreamReward: Text-to-3D Generation with Human Preference
google-research-datasets/richhf-18k
RichHF-18K dataset contains rich human feedback labels we collected for our CVPR'24 paper: https://arxiv.org/pdf/2312.10240, along with the file name of the associated labeled images (no urls or images are included in this dataset).
jacobgil/pytorch-grad-cam
Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
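The Grad-CAM attribution that jacobgil/pytorch-grad-cam builds on can be sketched in a few lines of NumPy: global-average-pool the gradients to get per-channel weights, take the weighted sum of the activation maps, and ReLU the result. This is a simplified illustration of the technique, not the library's API; the array shapes and function name are assumptions.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Minimal Grad-CAM sketch (not the pytorch-grad-cam API).

    activations: (K, H, W) feature maps from a chosen conv layer
    gradients:   (K, H, W) d(class score)/d(activation)
    """
    weights = gradients.mean(axis=(1, 2))             # alpha_k: GAP over spatial dims
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels -> (H, W)
    cam = np.maximum(cam, 0)                          # ReLU: keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1] for display
    return cam

# Synthetic activations/gradients standing in for a real backward pass.
rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))
grads = rng.random((8, 7, 7))
heatmap = grad_cam(acts, grads)
```

In the real library the activations and gradients come from forward/backward hooks on the target layer; the arithmetic above is the same.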
AILab-CVC/VideoCrafter
VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models
showlab/Awesome-Video-Diffusion
A curated list of recent diffusion models for video generation, editing, restoration, understanding, etc.
naver/dust3r
DUSt3R: Geometric 3D Vision Made Easy
desaixie/carve3d
Code for Carve3D: Improving Multi-view Reconstruction Consistency for Diffusion Models with RL Finetuning
mini-sora/minisora
MiniSora: a community project exploring the implementation path and future development directions of Sora.
thu-nics/DiTFastAttn
gnobitab/RectifiedFlow
Official Implementation of Rectified Flow (ICLR 2023 Spotlight)
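The core idea of rectified flow is to regress a velocity field onto straight-line interpolations between a noise sample and a data sample. A minimal NumPy sketch of how a training pair is constructed, under my own naming (the function and variable names are assumptions, not the repository's code):

```python
import numpy as np

def rectified_flow_pair(x0, x1, t):
    """Build one rectified-flow training pair.

    x0: noise sample, x1: data sample, t in [0, 1].
    Returns the interpolated point x_t and the regression
    target for the velocity field v(x_t, t).
    """
    x_t = t * x1 + (1.0 - t) * x0   # straight-line interpolation
    target = x1 - x0                # constant velocity along the line
    return x_t, target

rng = np.random.default_rng(0)
x0 = rng.standard_normal(4)         # noise
x1 = rng.standard_normal(4)         # data
x_t, v = rectified_flow_pair(x0, x1, 0.3)
```

Since the target velocity is constant along the path, following it from x_t for the remaining time 1 - t lands exactly on x1; that straightness is what makes few-step sampling possible.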
marcus-jw/Multi-Objective-Reinforcement-Learning-from-AI-Feedback
Implementation of a Multi-Objective Reinforcement Learning from AI Feedback system using PyTorch, Transformers, and TRL. The project tests whether switching to a multi-objective reward function in the preference model improves the safety performance of the final model.
mapo-t2i/mapo
Official codebase for Margin-aware Preference Optimization for Aligning Diffusion Models without Reference (MaPO).
Eduard6421/PQPP
Adlith/MoE-Jetpack
RockeyCoss/SPO
Step-aware Preference Optimization: Aligning Preference with Denoising Performance at Each Step
ShareGPT4Omni/ShareGPT4Video
[NeurIPS 2024] An official implementation of ShareGPT4Video: Improving Video Understanding and Generation with Better Captions
castorini/daam
Diffusion attentive attribution maps for interpreting Stable Diffusion.
dexgfsdfdsg/LP-3DGS
kvablack/ddpo-pytorch
DDPO for finetuning diffusion models, implemented in PyTorch with LoRA support
RLHFlow/RLHF-Reward-Modeling
Recipes to train reward models for RLHF.
RLHF-V/RLHF-V
[CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback
astra-vision/PaSCo
[CVPR 2024 Oral, Best Paper Award Candidate] Official repository of "PaSCo: Urban 3D Panoptic Scene Completion with Uncertainty Awareness"
nikosips/met
A large-scale dataset for instance-level recognition of artworks.
ZHZisZZ/modpo
[ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization
Allenpandas/2023-Reinforcement-Learning-Conferences-Papers
The proceedings of top conferences in 2023 on the topic of Reinforcement Learning (RL), including AAAI, IJCAI, NeurIPS, ICML, ICLR, ICRA, AAMAS, and more.
Karine-Huang/T2I-CompBench
[NeurIPS 2023] T2I-CompBench: A Comprehensive Benchmark for Open-world Compositional Text-to-image Generation
SalesforceAIResearch/DiffusionDPO
Code for "Diffusion Model Alignment Using Direct Preference Optimization"