Pinned Repositories
AlphaCLIP
[CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want
BalancedGroupSoftmax
CVPR 2020 oral paper: Overcoming Classifier Imbalance for Long-tail Object Detection with Balanced Group Softmax.
CUDA
Official Implementation of Curriculum of Data Augmentation for Long-tailed Recognition (CUDA) (ICLR'23 Spotlight)
d2l-zh
Dive into Deep Learning (d2l-zh): a deep learning book for Chinese readers, with runnable code and open discussion. The Chinese and English editions are used for teaching at 300 universities across 55 countries.
DBNet.pytorch
A PyTorch re-implementation of Real-time Scene Text Detection with Differentiable Binarization
DeAOT
Associating Objects with Transformers for Video Object Segmentation
deep-learning-for-image-processing
Deep learning for image processing, including classification, object detection, and more.
DeepSORT-C-
Multi-object tracking (MOT) using DeepSORT and YOLOv3, implemented in C++
Deformable-DETR
Deformable DETR: Deformable Transformers for End-to-End Object Detection.
Trajectory-Long-tail-Distribution-for-MOT
⭕️ [CVPR2024] Official code for "Delving into the Trajectory Long-tail Distribution for Multi-object Tracking"
chen-si-jia's Repositories
chen-si-jia/Trajectory-Long-tail-Distribution-for-MOT
⭕️ [CVPR2024] Official code for "Delving into the Trajectory Long-tail Distribution for Multi-object Tracking"
chen-si-jia/AlphaCLIP
[CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want
chen-si-jia/BalancedGroupSoftmax
CVPR 2020 oral paper: Overcoming Classifier Imbalance for Long-tail Object Detection with Balanced Group Softmax.
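Balanced Group Softmax, as described above, normalizes classifier scores within groups of classes of similar training frequency rather than across all classes at once. A minimal sketch of the grouped-softmax idea (simplified: the paper additionally adds a per-group "others" bin and a background group, omitted here; the function and group layout are illustrative, not the repository's API):

```python
import numpy as np

def grouped_softmax(logits, groups):
    """Softmax computed independently within each class group.

    logits: (C,) raw classifier scores.
    groups: list of index arrays partitioning the C classes by
            training-instance count (e.g. head vs. tail classes).
    """
    probs = np.zeros_like(logits, dtype=float)
    for idx in groups:
        g = logits[idx]
        e = np.exp(g - g.max())      # numerically stable softmax
        probs[idx] = e / e.sum()     # normalize only within the group
    return probs

# Hypothetical example: classes 0-1 are head classes, 2-3 are tail classes.
# Tail classes compete only with each other, not with high-scoring head classes.
p = grouped_softmax(np.array([2.0, 1.0, 0.1, 0.1]),
                    [np.array([0, 1]), np.array([2, 3])])
```

Because each group normalizes separately, suppressed tail-class logits are no longer drowned out by dominant head-class scores.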
chen-si-jia/CUDA
Official Implementation of Curriculum of Data Augmentation for Long-tailed Recognition (CUDA) (ICLR'23 Spotlight)
chen-si-jia/DeAOT
Associating Objects with Transformers for Video Object Segmentation
chen-si-jia/Deformable-DETR
Deformable DETR: Deformable Transformers for End-to-End Object Detection.
chen-si-jia/DIVOTrack
chen-si-jia/FairMOT
[IJCV-2021] FairMOT: On the Fairness of Detection and Re-Identification in Multi-Object Tracking
chen-si-jia/DepthMOT
chen-si-jia/detr
End-to-End Object Detection with Transformers
chen-si-jia/GeneralTrack
chen-si-jia/gpt4free
Decentralising the AI industry; just some language model APIs...
chen-si-jia/Human-Trajectory-Prediction-via-Neural-Social-Physics
Our ECCV 2022 paper Human Trajectory Prediction via Neural Social Physics
chen-si-jia/HUST--
Graduate course materials from Huazhong University of Science and Technology (HUST)
chen-si-jia/iKUN
iKUN: Speak to Trackers without Retraining
chen-si-jia/Imbalanced_SAM
The official implementation of ImbSAM (Imbalanced-SAM)
chen-si-jia/imgclsmob
Implementations of many network architectures across multiple frameworks
chen-si-jia/MOTR
[ECCV2022] MOTR: End-to-End Multiple-Object Tracking with TRansformer
chen-si-jia/Multiclass-SGCN
chen-si-jia/NetTrack
Official code for NetTrack [CVPR 2024]
chen-si-jia/REDet
Pytorch implementation of REDet, ACCV 2022
chen-si-jia/Segment-and-Track-Anything
An open-source project for tracking and segmenting any objects in videos, automatically or interactively. The primary algorithms are the Segment Anything Model (SAM) for key-frame segmentation and Associating Objects with Transformers (AOT) for efficient tracking and propagation.
chen-si-jia/Single_Object_Tracking_Paper_List
Paper list for single object tracking (State-of-the-art SOT trackers)
chen-si-jia/Stable-SAM
chen-si-jia/Track-Anything
Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.
chen-si-jia/trackformer
Implementation of "TrackFormer: Multi-Object Tracking with Transformers". [Conference on Computer Vision and Pattern Recognition (CVPR), 2022]
chen-si-jia/Transformer-makemore
An autoregressive character-level language model for making more things
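An autoregressive character-level model predicts each character from the characters before it. A minimal sketch of that idea, using a smoothed bigram count model rather than the repository's actual Transformer (the training words and variable names here are illustrative, not from the repo):

```python
import numpy as np

# Toy corpus; token 0 ('.') marks both start and end of a word.
names = ["emma", "olivia", "ava"]
chars = sorted(set("".join(names)))
stoi = {c: i + 1 for i, c in enumerate(chars)}
stoi["."] = 0
itos = {i: c for c, i in stoi.items()}

# Count bigram transitions (previous char -> next char).
counts = np.zeros((len(stoi), len(stoi)), dtype=float)
for w in names:
    seq = [0] + [stoi[c] for c in w] + [0]
    for a, b in zip(seq, seq[1:]):
        counts[a, b] += 1

# Add-one smoothing, then normalize each row into a distribution.
probs = (counts + 1) / (counts + 1).sum(axis=1, keepdims=True)

# Sample a new word autoregressively: feed each sampled char back in.
rng = np.random.default_rng(0)
ix, out = 0, []
while True:
    ix = rng.choice(len(stoi), p=probs[ix])
    if ix == 0:          # end token: the word is complete
        break
    out.append(itos[ix])
word = "".join(out)
```

A Transformer replaces the bigram table with a learned network conditioned on the whole prefix, but the sampling loop is the same.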
chen-si-jia/TransTrack
Multiple Object Tracking with Transformer
chen-si-jia/XMem
[ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model
chen-si-jia/zotero-pdf-translate
PDF translation add-on for Zotero