Deante-dx's Stars
IDEA-Research/Grounded-Segment-Anything
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
facebookresearch/detr
End-to-End Object Detection with Transformers
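Pretrained DETR models load straight from Torch Hub, as documented in the repo's README; a minimal sketch:

```python
import torch

# Load a pretrained DETR (ResNet-50 backbone) from Torch Hub.
model = torch.hub.load("facebookresearch/detr", "detr_resnet50", pretrained=True)
model.eval()

# Dummy batch; real inputs should be normalized RGB images.
x = torch.randn(1, 3, 800, 800)
with torch.no_grad():
    out = model(x)
# 100 object queries: per-query class logits and normalized cxcywh boxes.
print(out["pred_logits"].shape, out["pred_boxes"].shape)
```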
facebookresearch/sam2
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
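Image prediction follows the pattern shown in the repo's README; the checkpoint and config names below are assumptions that depend on which model you download:

```python
import numpy as np
import torch
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Assumed checkpoint/config names for the large model; adjust to your download.
predictor = SAM2ImagePredictor(build_sam2("sam2_hiera_l.yaml",
                                          "./checkpoints/sam2_hiera_large.pt"))

image = np.zeros((720, 1280, 3), dtype=np.uint8)   # stand-in for a real RGB frame
points = np.array([[640, 360]], dtype=np.float32)  # one foreground click
labels = np.array([1])                             # 1 = positive point

with torch.inference_mode():
    predictor.set_image(image)
    masks, scores, logits = predictor.predict(point_coords=points, point_labels=labels)
```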
pyro-ppl/pyro
Deep universal probabilistic programming with Python and PyTorch
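As a flavor of the API, a minimal Pyro model (illustrative, not taken from the repo) with one latent variable conditioned on data:

```python
import torch
import pyro
import pyro.distributions as dist

def model(data):
    # Latent mean with a standard-normal prior.
    mu = pyro.sample("mu", dist.Normal(0.0, 1.0))
    # Observations are conditionally independent given mu.
    with pyro.plate("data", len(data)):
        pyro.sample("obs", dist.Normal(mu, 1.0), obs=data)

model(torch.randn(10))
```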
ChaoningZhang/MobileSAM
This is the official code for the MobileSAM project, which makes SAM lightweight for mobile applications and beyond!
daquexian/onnx-simplifier
Simplify your ONNX model
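Usage follows the README's Python API (file paths here are placeholders):

```python
import onnx
from onnxsim import simplify

model = onnx.load("model.onnx")             # placeholder input path
model_simplified, check = simplify(model)   # fold constants, prune dead nodes
assert check, "simplified model failed the equivalence check"
onnx.save(model_simplified, "model.simplified.onnx")
```

The same functionality is exposed as a CLI: `onnxsim model.onnx model.simplified.onnx`.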
z-x-yang/Segment-and-Track-Anything
An open-source project dedicated to tracking and segmenting any objects in videos, either automatically or interactively. The primary algorithms are the Segment Anything Model (SAM) for key-frame segmentation and Associating Objects with Transformers (AOT) for efficient tracking and propagation.
siyuanliii/masa
Official implementation of the CVPR 2024 highlight paper: Matching Anything by Segmenting Anything
Francis-Rings/StableAnimator
We present StableAnimator, the first end-to-end ID-preserving video diffusion framework; it synthesizes high-quality videos conditioned on a reference image and a sequence of poses, without any post-processing.
SysCV/sam-pt
SAM-PT: Extending SAM to zero-shot video segmentation with point-based tracking.
facebookresearch/hiera
Hiera: A fast, powerful, and simple hierarchical vision transformer.
airockchip/rknn-llm
Gy920/segment-anything-2-real-time
Run Segment Anything Model 2 on a live video stream
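Whatever predictor the repo exposes, a live-stream loop generally follows the OpenCV skeleton below; `segment_frame` is a hypothetical stand-in for the repo's per-frame SAM 2 call:

```python
import cv2

def segment_frame(frame):
    # Hypothetical placeholder for per-frame SAM 2 inference.
    return frame

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("sam2-live", segment_frame(frame))
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```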
wei-mao-2019/LearnTrajDep
Code for the ICCV 2019 paper "Learning Trajectory Dependencies for Human Motion Prediction"
ibaiGorordo/ONNX-SAM2-Segment-Anything
Python scripts for the Segment Anything 2 (SAM2) model in ONNX
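Independent of this repo's specific wrappers, running an exported SAM2 ONNX graph follows the standard ONNX Runtime pattern (the model path, input name, and shape below are assumptions):

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("sam2_encoder.onnx",      # assumed filename
                               providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy = np.zeros((1, 3, 1024, 1024), dtype=np.float32)   # assumed NCHW input
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])
```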
heyoeyo/muggled_sam
Muggled SAM: Segmentation without the magic
Aimol-l/OrtInference
Using ONNX Runtime in C++ to run inference with YOLOv10, YOLOv10+SAM, YOLOv10+ByteTrack, and SAM2.
patrick-tssn/Streaming-Grounded-SAM-2
Grounded Tracking for Streaming Videos
sajjad-sh33/YOLO_SAM2
Self-Prompting Polyp Segmentation in Colonoscopy Using Hybrid YOLO-SAM 2 Model
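The hybrid idea, detector boxes reused as SAM 2 prompts, can be sketched as follows; the weights, paths, and pairing below are assumptions, not the paper's exact setup:

```python
import cv2
from ultralytics import YOLO
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

detector = YOLO("yolov8n.pt")  # placeholder weights
predictor = SAM2ImagePredictor(build_sam2("sam2_hiera_l.yaml",
                                          "./checkpoints/sam2_hiera_large.pt"))

image = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)
for box in detector(image)[0].boxes.xyxy.cpu().numpy():
    # Each detection box becomes a SAM 2 box prompt.
    masks, scores, _ = predictor.predict(box=box)
```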
axinc-ai/segment-anything-2
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
naver-ai/tc-clip
[ECCV 2024] Official PyTorch implementation of TC-CLIP "Leveraging Temporal Contextualization for Video Action Recognition"
Aimol-l/SAM2Export
An experimental effort to export SAM2 to ONNX.
AsukaCamellia/TCPFormer
Accepted at AAAI 2025.
vita-epfl/multi-transmotion
[CoRL 2024] Official implementation of "Multi-Transmotion: Pre-trained Model for Human Motion Prediction" in PyTorch.
ShuoShenDe/Grounded-Sam2-Tracking
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
Deante-dx/TFAN
TFAN for human motion prediction
pappoja/SAM2_Tool
A user-friendly tool I created to run SOTA video segmentation and auto-label data for object detection and tracking tasks.
SimonZeng7108/EfficientSAM2-for-tracking
This repo contains code for EfficientSAM2, based on the RepViT and Segment Anything 2 models.
JunkyByte/Track-Anything-Mobile
(This Fork adds MobileSAM support) Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.
Jshulgach/Grounded-SAM-2-Stream
Track anything in streaming with Grounding DINO, SAM 2, and LLM