Pinned Repositories
BAE-NET
Code for the paper "BAE-NET: Branched Autoencoder for Shape Co-Segmentation".
brics_3d
BRICS_3D - 3D Perception and Modeling Library
BSP-NET-original
TensorFlow 1.15 implementation of BSP-NET, along with other scripts used in our paper.
caffe
Caffe: a fast open framework for deep learning.
graphics
TensorFlow Graphics: Differentiable Graphics Layers for TensorFlow
gym
A toolkit for developing and comparing reinforcement learning algorithms.
ngp_pl
Instant-ngp in PyTorch + CUDA, trained with PyTorch Lightning (high quality and high speed, with only a few lines of legible code)
ORB-SLAM2-GPU2016-final
ORB_SLAM
A Versatile and Accurate Monocular SLAM
ORB_SLAM2
Real-Time SLAM for Monocular, Stereo and RGB-D Cameras, with Loop Detection and Relocalization Capabilities
gs14iitbbs's Repositories
gs14iitbbs/BAE-NET
Code for the paper "BAE-NET: Branched Autoencoder for Shape Co-Segmentation".
gs14iitbbs/brics_3d
BRICS_3D - 3D Perception and Modeling Library
gs14iitbbs/BSP-NET-original
TensorFlow 1.15 implementation of BSP-NET, along with other scripts used in our paper.
gs14iitbbs/caffe
Caffe: a fast open framework for deep learning.
gs14iitbbs/graphics
TensorFlow Graphics: Differentiable Graphics Layers for TensorFlow
gs14iitbbs/gym
A toolkit for developing and comparing reinforcement learning algorithms.
gs14iitbbs/ngp_pl
Instant-ngp in PyTorch + CUDA, trained with PyTorch Lightning (high quality and high speed, with only a few lines of legible code)
gs14iitbbs/ORB-SLAM2-GPU2016-final
gs14iitbbs/ORB_SLAM
A Versatile and Accurate Monocular SLAM
gs14iitbbs/ORB_SLAM2
Real-Time SLAM for Monocular, Stereo and RGB-D Cameras, with Loop Detection and Relocalization Capabilities
gs14iitbbs/pyslam
pySLAM contains a monocular Visual Odometry (VO) pipeline in Python. It supports many modern local features based on Deep Learning.
gs14iitbbs/SemanticSegmentationModel
gs14iitbbs/SfmLearner-Pytorch
PyTorch version of SfmLearner from Tinghui Zhou et al.
gs14iitbbs/slambench2
SLAM performance evaluation framework
gs14iitbbs/VO-SLAM-Review
SLAM is commonly divided into two parts: the front end and the back end. The front end is the visual odometry (VO), which roughly estimates the camera's motion from adjacent images and provides a good initial value for the back end. VO implementations fall into two categories, depending on whether features are extracted: feature-point-based methods and direct methods that use no feature points. Feature-point-based VO is stable and relatively insensitive to illumination changes and dynamic objects (a minimal sketch of such a front end follows below).
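As a rough illustration of the feature-point front end described above, here is a minimal sketch of a single VO step in Python. It is not code from any of the repositories listed; it assumes OpenCV is available and that the camera intrinsic matrix K is known. It detects ORB features in two consecutive frames, matches them, and recovers the relative camera motion up to an unknown scale.

import cv2
import numpy as np

def estimate_relative_pose(img1, img2, K):
    # Detect and describe ORB features in both (grayscale) frames.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match binary descriptors with Hamming distance and cross-checking.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Estimate the essential matrix with RANSAC to reject outlier matches,
    # then recover rotation R and translation t (monocular, so t is up to scale).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

A direct method, by contrast, would skip feature extraction and matching entirely and instead estimate the motion by minimizing photometric error over pixel intensities.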