yuluobusifan's Stars
changzy00/pytorch-attention
🦖PyTorch implementation of popular Attention Mechanisms, Vision Transformers, MLP-like models and CNNs.🔥🔥🔥
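The common core of the attention mechanisms this repo collects is scaled dot-product attention. A minimal NumPy sketch (shapes and names are illustrative, not taken from the repo's API):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """softmax(Q K^T / sqrt(d)) V -- the core op of attention layers."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # query/key similarities
    scores -= scores.max(axis=-1, keepdims=True)   # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # rows sum to 1
    return weights @ v                             # weighted mix of values

q = np.random.randn(4, 8)  # 4 queries, feature dim 8
k = np.random.randn(6, 8)  # 6 keys
v = np.random.randn(6, 8)  # 6 values
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (4, 8): one output vector per query
```

Multi-head variants, as implemented in libraries like this one, run several such attentions in parallel over split feature dimensions.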
DavidZhangdw/Visual-Tracking-Development
Visual Object Tracking
SparkTempest/BAT
Bi-directional Adapter for Multi-modal Tracking
singularity-s0/fudan_sports_autoreserve
Automatic reservation for Fudan University sports venues (FDU Sports Auto Reserve)
zhuiyuan733/Fudan_Sportscourt_Order
HusterYoung/MPLT
botaoye/OSTrack
[ECCV 2022] Joint Feature Learning and Relation Modeling for Tracking: A One-Stream Framework
RyanHTR/TBSI
yangmengmeng1997/APFNet
jiawen-zhu/ViPT
[CVPR23] Visual Prompt Multi-Modal Tracking
hazdzz/STGCN
The PyTorch implementation of STGCN.
megvii-research/video_analyst
A series of basic algorithms that are useful for video understanding, including Single Object Tracking (SOT), Video Object Segmentation (VOS) and so on.
xingchenzhang/RGB-T-fusion-tracking-papers-and-results
The papers and results about RGB-T fusion tracking
HenJigg/my-todoapp
Source code for a 2022 collection of hands-on WPF projects
manjunath5496/RGB-T-Fusion-Tracking-Papers
"Real learning comes about when the competitive spirit has ceased."― Jiddu Krishnamurti
Alexadlu/MANet_pp
RGBT Tracking via Multi-Adapter Network with Hierarchical Divergence Loss (IEEE T-IP2021)
Alexadlu/MANet
Multi-Adapter RGBT Tracking implementation on Pytorch (ICCVW2019)
Alexadlu/DMCNet
Duality-Gated Mutual Condition Network for RGBT Tracking (IEEE T-NNLS 2022)
mmic-lcl/Datasets-and-benchmark-code
huanglianghua/siamfc-pytorch
A clean PyTorch implementation of SiamFC tracking/training, evaluated on 7 datasets.
bertinetto/cfnet
[CVPR'17] Training a Correlation Filter end-to-end allows lightweight networks of 2 layers (600 kB) to achieve high performance at high speed.
bertinetto/siamese-fc
Arbitrary object tracking at 50-100 FPS with Fully Convolutional Siamese networks.
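Siamese trackers like SiamFC localize the target by cross-correlating exemplar features against search-region features; the peak of the score map gives the displacement. A toy NumPy sketch of that correlation step (2-D arrays stand in for the CNN feature maps; values are illustrative):

```python
import numpy as np

def xcorr(z, x):
    """Slide exemplar features z over search features x; return the score map."""
    H = x.shape[0] - z.shape[0] + 1
    W = x.shape[1] - z.shape[1] + 1
    score = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            patch = x[i:i + z.shape[0], j:j + z.shape[1]]
            score[i, j] = np.sum(z * patch)  # dot product at this offset
    return score

z = np.zeros((3, 3)); z[1, 1] = 1.0  # toy exemplar: one bright pixel
x = np.zeros((8, 8)); x[5, 2] = 1.0  # same pattern hidden in the search region
score = xcorr(z, x)
peak = np.unravel_index(score.argmax(), score.shape)
print(peak)  # (4, 1): offset where the exemplar best aligns
```

In the real tracker this correlation runs on deep feature maps and is implemented as a convolution, which is what makes 50-100 FPS possible.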
666DZY666/micronet
micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT) at high bit-widths (>2b: DoReFa; "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low bit-widths (≤2b: ternary and binary, TWN/BNN/XNOR-Net), plus post-training quantization (PTQ) at 8-bit (TensorRT); (2) pruning: normal, regular, and group-convolution channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shapes.
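The building block behind both QAT and PTQ is quantize-then-dequantize ("fake quantization"): weights are rounded onto an integer grid and mapped back to floats, so training or calibration sees the rounding error. A minimal NumPy sketch of symmetric per-tensor fake quantization (not micronet's actual API, just the underlying idea):

```python
import numpy as np

def fake_quantize(w, num_bits=8):
    """Round w onto a symmetric signed int grid, then map back to float."""
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 127 for 8-bit
    scale = max(np.abs(w).max() / qmax, 1e-8)   # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax, qmax)  # integer codes
    return q * scale                            # dequantized values seen downstream

w = np.array([0.30, -1.27, 0.004])
wq = fake_quantize(w, num_bits=8)
print(wq)  # ≈ [0.30, -1.27, 0.0]: tiny values collapse to the nearest grid point
```

QAT inserts this op into the forward pass during training (with a straight-through gradient), while PTQ applies it once after training, using calibration data to pick the scales.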