YeLyuUT
I am a Ph.D. candidate at the University of Twente. My research interests are object recognition in images and videos.
University of Twente, ITC building, Hengelosestraat 99, 7514 AE Enschede, Netherlands
YeLyuUT's Stars
ml-tooling/best-of-ml-python
🏆 A ranked list of awesome machine learning Python libraries. Updated weekly.
TRI-ML/packnet-sfm
TRI-ML Monocular Depth Estimation Repository
TRI-ML/DDAD
Dense Depth for Autonomous Driving (DDAD) dataset.
openai/gym
A toolkit for developing and comparing reinforcement learning algorithms.
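A minimal sketch of the standard Gym interaction loop, assuming gym >= 0.26 (older releases return a bare observation from reset() and four values from step()); the environment name and episode length are just illustrative choices.

```python
# Minimal Gym loop with a random policy (gym >= 0.26 API assumed).
import gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
total_reward = 0.0
for _ in range(200):
    action = env.action_space.sample()  # placeholder policy: sample a random action
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        obs, info = env.reset()
env.close()
print(f"accumulated reward: {total_reward:.1f}")
```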
nianticlabs/monodepth2
[ICCV 2019] Monocular depth estimation from a single image
peng-zhihui/DeepVision
A CV algorithm inference framework used in many of my projects.
Unity-Technologies/ml-agents
The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
joe-siyuan-qiao/WeightStandardization
Standardizing weights to accelerate micro-batch training
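A minimal PyTorch sketch of the weight-standardization idea behind this repo: each convolution filter is normalized to zero mean and unit variance over its (in_channels, kH, kW) dimensions before the convolution is applied. The class name WSConv2d and the epsilon value are my own choices for illustration, not necessarily the repository's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv2d whose weights are standardized per output channel before use."""
    def forward(self, x):
        w = self.weight
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        std = w.std(dim=(1, 2, 3), keepdim=True) + 1e-5
        return F.conv2d(x, (w - mean) / std, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

x = torch.randn(2, 3, 32, 32)
y = WSConv2d(3, 16, kernel_size=3, padding=1)(x)
print(y.shape)  # torch.Size([2, 16, 32, 32])
```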
karpathy/arxiv-sanity-preserver
Web interface for browsing, searching, and filtering recent arXiv submissions
dk-liang/Awesome-Visual-Transformer
A collection of papers on transformers for vision. Awesome Transformer with Computer Vision (CV)
chenxin-dlut/TransT
Transformer Tracking (CVPR2021)
huggingface/pytorch-image-models
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNetV4, MobileNet-V3 & V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
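A short sketch of using timm as a backbone factory; the model names are examples only, and the full catalogue can be listed with timm.list_models().

```python
import timm
import torch

# num_classes=0 drops the classifier head and returns pooled features.
model = timm.create_model("resnet50", pretrained=True, num_classes=0)
model.eval()

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    features = model(x)
print(features.shape)  # e.g. torch.Size([1, 2048]) for resnet50

# Browse available architectures matching a pattern:
print(timm.list_models("swin*")[:5])
```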
luanshiyinyang/awesome-multiple-object-tracking
Resources for Multiple Object Tracking (MOT)
facebookresearch/dino
PyTorch code for training Vision Transformers with the self-supervised learning method DINO
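A hedged sketch of loading a pretrained DINO backbone as a feature extractor, assuming the torch.hub entry points advertised in the repository README (e.g. dino_vits16 for ViT-S/16); the output dimensionality comment reflects that assumption.

```python
import torch

# Load a DINO-pretrained ViT-S/16 via torch.hub (entry point assumed from the README).
model = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
model.eval()

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    emb = model(x)
print(emb.shape)  # should be [1, 384] for ViT-S/16
```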
xinge008/Cylinder3D
Ranked 1st on the SemanticKITTI semantic segmentation leaderboard (both single-scan and multi-scan) as of Nov. 2020 (CVPR 2021 Oral)
Epiphqny/VisTR
[CVPR2021 Oral] End-to-End Video Instance Segmentation with Transformers
QingyongHu/SoTA-Point-Cloud
🔥[IEEE TPAMI 2020] Deep Learning for 3D Point Clouds: A Survey
microsoft/Swin-Transformer
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
facebookresearch/moco
PyTorch implementation of MoCo: https://arxiv.org/abs/1911.05722
QUVA-Lab/e2cnn
E(2)-Equivariant CNNs Library for Pytorch
joe-siyuan-qiao/ViP-DeepLab
PeizeSun/TransTrack
Multiple Object Tracking with Transformer
foolwood/benchmark_results
Visual Tracking Paper List
fundamentalvision/Deformable-DETR
Deformable DETR: Deformable Transformers for End-to-End Object Detection.
xingyizhou/CenterTrack
Simultaneous object detection and tracking using center points.
xingyizhou/CenterNet
Object detection, 3D detection, and pose estimation using center point detection.
ifzhang/FairMOT
[IJCV-2021] FairMOT: On the Fairness of Detection and Re-Identification in Multi-Object Tracking
youngwanLEE/centermask2
[CVPR 2020] CenterMask: Real-time Anchor-Free Instance Segmentation
yihongXU/deepMOT
Official implementation of "How To Train Your Deep Multi-Object Tracker" (CVPR 2020)
mcahny/vps
Official PyTorch implementation of "Video Panoptic Segmentation" (CVPR 2020 Oral)