BubblyYi
I am a researcher at SenseTime. My research interests are 3D vision and object detection.
SenseTime Group Limited, Shanghai, China
Pinned Repositories
3D-ResNets-PyTorch
3D ResNets for Action Recognition (CVPR 2018)
badslam
Bundle Adjusted Direct RGB-D SLAM
CaptainBlackboard
The Captain's notes and write-ups on machine learning, computer vision, and engineering.
Coronary-Artery-Tracking-via-3D-CNN-Classification
A PyTorch re-implementation of a 3D CNN tracker that extracts coronary artery centerlines with state-of-the-art (SOTA) performance. (Paper: 'Coronary artery centerline extraction in cardiac CT angiography using a CNN-based orientation classifier')
CoronaryArteryStenosisScoreClassification
CNN for classification of coronary artery stenosis score in MPR images.
interview
A collection of interview experiences for computer vision algorithm roles from the 2020 autumn recruitment season, covering internships, campus recruiting, and referrals.
MMPedestron
[ECCV2024] Official implementation of the paper "When Pedestrian Detection Meets Multi-Modal Learning: Generalist Model and Benchmark Dataset"
SpringBoot-Hibernate-BookManagement
RGBX_Semantic_Segmentation
mmdetection
OpenMMLab Detection Toolbox and Benchmark
BubblyYi's Repositories
BubblyYi/ACAM_Demo
Real-time action detection demo for the work Actor Conditioned Attention Maps. This repo includes a complete pipeline for detecting and tracking people and analyzing their actions in real time.
BubblyYi/AlgorithmsByPython
Algorithms / data structures / Python / 'Coding Interviews' (剑指offer) / machine learning / LeetCode
BubblyYi/AttentionGatedVNet3D
Attention-Gated VNet3D model for KiTS19, the 2019 Kidney Tumor Segmentation Challenge
BubblyYi/awesome_3DReconstruction_list
A curated list of papers & resources linked to 3D reconstruction from images.
BubblyYi/colmap
COLMAP - Structure-from-Motion and Multi-View Stereo
BubblyYi/cvpr2019
CVPR 2019 papers, compiled by the 极市 team
BubblyYi/deep-high-resolution-net.pytorch
The project is an official implementation of our CVPR2019 paper "Deep High-Resolution Representation Learning for Human Pose Estimation"
BubblyYi/FSA-Net
[CVPR19] FSA-Net: Learning Fine-Grained Structure Aggregation for Head Pose Estimation from a Single Image
BubblyYi/future_pose_estimator
Predicts the near-future pose of the F1/10th car in front using AprilTags. Written by Christopher Kao for the EAS 499 engineering senior thesis.
BubblyYi/GazeCorrection
GazeCorrection: Self-Guided Eye Manipulation in the wild using Self-Supervised Generative Adversarial Networks
BubblyYi/GazeML
Gaze estimation using deep learning; a TensorFlow-based framework.
BubblyYi/gazeworkshop.github.io
Gaze Estimation and Prediction in the Wild ICCV 2019 Workshop - Webpage maintained by Nora Horanyi, University of Birmingham
BubblyYi/libfacedetection-python-bindings
This repo provides Python bindings for libfacedetection from Yu
BubblyYi/meshroom
3D Reconstruction Software
BubblyYi/MPIIGaze
MPIIGaze: a dataset for 3D gaze estimation. More info here: https://www.mpi-inf.mpg.de/de/abteilungen/computer-vision-and-multimodal-computing/research/gaze-based-human-computer-interaction/appearance-based-gaze-estimation-in-the-wild-mpiigaze/
BubblyYi/OCHumanApi
API for the dataset proposed in "Pose2Seg: Detection Free Human Instance Segmentation" @ CVPR2019.
BubblyYi/opencv
Open Source Computer Vision Library
BubblyYi/OpenNI2
OpenNI2
BubblyYi/OpenSfM
Open source Structure from Motion pipeline
BubblyYi/PyGaze
An open-source, cross-platform toolbox for minimal-effort programming of eye-tracking experiments
BubblyYi/python-pcl
Python bindings to the Point Cloud Library (PCL)
BubblyYi/python_openni2_samples
This repository hosts Python 3 example scripts for using the Terabee 3Dcam 80x60 with OpenNI2.
BubblyYi/pytorch-pose-hg-3d
PyTorch implementation for 3D human pose estimation
BubblyYi/ROS-Academy-for-Beginners
Code examples for the university MOOC course "Introduction to the Robot Operating System" (《机器人操作系统入门》)
BubblyYi/Scan2CAD
[CVPR'19] Dataset and code used in the research project Scan2CAD: Learning CAD Model Alignment in RGB-D Scans
BubblyYi/SiamMask
[CVPR2019] Fast Online Object Tracking and Segmentation: A Unifying Approach
BubblyYi/slam-python
A series on learning RGB-D SLAM with Python
BubblyYi/slambook
BubblyYi/tsdf-fusion-python
Python code to fuse multiple RGB-D images into a TSDF voxel volume.
BubblyYi/Where-are-they-looking-PyTorch
Where are they looking? - Gaze Following via Attention modelling and Deep Learning