EmpErorWGA's Stars
basilevh/tcow
Tracking through Containers and Occluders in the Wild (CVPR 2023) - Official Implementation
yuzhms/Streaming-Video-Model
[CVPR 2023] Code for "Streaming Video Model"
jozhang97/DETA
Detection Transformers with Assignment
amusi/CVPR2024-Papers-with-Code
A collection of CVPR 2024 papers and open-source projects
microsoft/VideoX
VideoX: a collection of video cross-modal models
gaomingqi/Track-Anything
Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.
dvlab-research/3D-Box-Segment-Anything
We extend Segment Anything to 3D perception by combining it with VoxelNeXt.
z-x-yang/Segment-and-Track-Anything
An open-source project for tracking and segmenting any objects in videos, either automatically or interactively, using the Segment Anything Model (SAM) for key-frame segmentation and Associating Objects with Transformers (AOT) for efficient tracking and propagation.
wangxiyang2022/YONTD-MOT
Official implementation of the paper "You Only Need Two Detectors to Achieve Multi-Modal 3D Multi-Object Tracking"
exiawsh/StreamPETR
[ICCV 2023] StreamPETR: Exploring Object-Centric Temporal Modeling for Efficient Multi-View 3D Object Detection
Fangyi-Chen/SQR
Mingzhen-Huang/DETracker
Tracking Multiple Deformable Objects in Egocentric Videos (CVPR 2023)
colorfulfuture/Awesome-Trajectory-Motion-Prediction-Papers
facebookresearch/segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks showing how to use the model.
jxbbb/ADAPT
This repository is an official implementation of ADAPT: Action-aware Driving Caption Transformer, accepted by ICRA 2023.
weiyithu/SurroundOcc
[ICCV 2023] SurroundOcc: Multi-camera 3D Occupancy Prediction for Autonomous Driving
oneline-wsq/nuscenes
hustvl/VAD
[ICCV 2023] VAD: Vectorized Scene Representation for Efficient Autonomous Driving
OpenDriveLab/UniAD
[CVPR'23 Best Paper Award] Planning-oriented Autonomous Driving
MasterBin-IIAU/UNINEXT
[CVPR'23] Universal Instance Perception as Object Discovery and Retrieval
TRI-ML/PF-Track
Implementation of PF-Track
wudongming97/RMOT
[CVPR 2023] Referring Multi-Object Tracking
wzzheng/TPVFormer
[CVPR 2023] An academic alternative to Tesla's occupancy network for autonomous driving.
HUSTDML/CTTrack
tusen-ai/SimpleTrack
dvl-tum/mot_neural_solver
Official PyTorch implementation of "Learning a Neural Solver for Multiple Object Tracking" (CVPR 2020 Oral).
aleksandrkim61/EagerMOT
Official code for "EagerMOT: 3D Multi-Object Tracking via Sensor Fusion" [ICRA 2021]
aleksandrkim61/PolarMOT
Official code for "PolarMOT: How Far Can Geometric Relations Take Us in 3D Multi-Object Tracking?" [ECCV 2022]
ifzhang/ByteTrack
[ECCV 2022] ByteTrack: Multi-Object Tracking by Associating Every Detection Box
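ByteTrack's central idea is to associate every detection box rather than discarding low-confidence ones: high-score detections are matched to tracks first, and the tracks left unmatched get a second pass against the low-score detections. A minimal sketch of that two-round association, with greedy IoU matching standing in for the Hungarian assignment used in the actual implementation (the `byte_associate` helper, dict layout, and thresholds here are illustrative, not the repository's API):

```python
def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def byte_associate(tracks, detections, high_thresh=0.6, iou_thresh=0.3):
    """Two-round association: high-score detections first, then low-score
    detections against whatever tracks are still unmatched."""
    high = [d for d in detections if d["score"] >= high_thresh]
    low = [d for d in detections if d["score"] < high_thresh]
    matches, unmatched = [], list(tracks)
    for pool in (high, low):  # round 1: high scores, round 2: low scores
        for det in pool:
            best, best_iou = None, iou_thresh
            for trk in unmatched:
                overlap = iou(trk["box"], det["box"])
                if overlap > best_iou:
                    best, best_iou = trk, overlap
            if best is not None:
                matches.append((best["id"], det))
                unmatched.remove(best)
    return matches, unmatched
```

The second round is what recovers occluded or blurred objects: their detector scores drop below the usual keep-threshold, but they still overlap their old track well enough to be re-associated instead of spawning a lost track.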
Little-Podi/Transformer_Tracking
This repository is a paper digest of Transformer-related approaches in visual tracking tasks.