maynardsd's Stars
visionxiang/awesome-camouflaged-object-detection
A curated list of awesome resources for camouflaged/concealed object detection (COD).
TianxingWu/FreeInit
[ECCV 2024] FreeInit: Bridging Initialization Gap in Video Diffusion Models
guoyww/AnimateDiff
Official implementation of AnimateDiff.
a554b554/AutoSurveyGPT
Automate literature surveys/reviews with GPT! An intelligent research assistant leveraging GPT-3.5/GPT-4 to find, analyze, and rank relevant academic papers from Google Scholar based on user-provided search queries and topics
dreamoving/dreamoving.github.io
Homepage of DreaMoving
vvictoryuki/AnimateZero
Official PyTorch implementation for the paper "AnimateZero: Video Diffusion Models are Zero-Shot Image Animators"
Stability-AI/generative-models
Generative Models by Stability AI
PRIS-CV/DemoFusion
Let us democratise high-resolution generation! (CVPR 2024)
magic-research/magic-animate
[CVPR 2024] MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
HumanAIGC/AnimateAnyone
Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation
THUDM/CogVideo
Text-to-video generation. The repository for the ICLR 2023 paper "CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers"
SysCV/sam-hq
Segment Anything in High Quality [NeurIPS 2023]
BradyFU/Awesome-Multimodal-Large-Language-Models
:sparkles::sparkles:Latest Advances on Multimodal Large Language Models
wangkai930418/awesome-diffusion-categorized
A collection of diffusion model papers categorized by subarea
KBH00/Semantic-Fast-SAM
SSA + FastSAM: Semantic Fast Segment Anything (also known as Fast Semantic Segment Anything)
CASIA-IVA-Lab/AnomalyGPT
[AAAI 2024 Oral] AnomalyGPT: Detecting Industrial Anomalies Using Large Vision-Language Models
CASIA-IVA-Lab/FastSAM
Fast Segment Anything
facebookresearch/dino
PyTorch code for training Vision Transformers with the self-supervised learning method DINO
gaomingqi/Awesome-Video-Object-Segmentation
A curated list of video object segmentation (vos) papers, datasets, and projects.
luanshiyinyang/awesome-multiple-object-tracking
A collection of resources for multiple object tracking (MOT)
facebookresearch/segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
UX-Decoder/Semantic-SAM
[ECCV 2024] Official implementation of the paper "Semantic-SAM: Segment and Recognize Anything at Any Granularity"
suhwan-cho/awesome-video-object-segmentation
A list of video object segmentation (VOS) papers
showlab/Awesome-Video-Diffusion
A curated list of recent diffusion models for video generation, editing, restoration, understanding, etc.
gaomingqi/Track-Anything
Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.
eshoyuan/TrackGPT
TrackGPT: Track What You Need in Videos via Text Prompts
IDEA-Research/GroundingDINO
[ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
IDEA-Research/Grounded-Segment-Anything
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment, and Generate Anything
jiawen-zhu/HQTrack
Tracking Anything in High Quality
alaamaalouf/FollowAnything