Pinned Repositories
active_geometryaware
BagFromImages
Creates a rosbag from a collection of images
code-of-learn-deep-learning-with-pytorch
Code for the book "Learn Deep Learning with PyTorch"
DeepLearning
DeepLIO
Deep Lidar Inertial Odometry
dl_with_pytorch
Deep learning with PyTorch
evaluate_ate_scale
Modified version of the TUM RGB-D dataset tool that automatically computes the optimal scale factor aligning the trajectory with the ground truth. Useful for evaluating monocular VO/SLAM.
io
semantic_slam
ORB-SLAM2 combined with yolov3 object detection, considering the relationship among objects
slambook2
Second edition of the slambook
rginjapan's Repositories
rginjapan/semantic_slam
ORB-SLAM2 combined with yolov3 object detection, considering the relationship among objects
rginjapan/io
rginjapan/slambook2
Second edition of the slambook
rginjapan/DeepLearning
rginjapan/DeepLIO
Deep Lidar Inertial Odometry
rginjapan/dl_with_pytorch
Deep learning with PyTorch
rginjapan/flownet3d
FlowNet3D: Learning Scene Flow in 3D Point Clouds
rginjapan/GibsonEnv
Gibson Environments: Real-World Perception for Embodied Agents
rginjapan/git_study
Studying Git commands
rginjapan/GSLAM
A general Simultaneous Localization and Mapping framework supporting feature-based and direct methods, and handling different sensors including monocular cameras, RGB-D sensors, and other input types.
rginjapan/gtsam
GTSAM is a library of C++ classes that implement smoothing and mapping (SAM) in robotics and vision, using factor graphs and Bayes networks as the underlying computing paradigm rather than sparse matrices.
rginjapan/gtsam_vio
State estimation using iSAM2 from the GTSAM library. (GRASP Lab @ Penn Engineering)
rginjapan/ICRA2019-paper-list
ICRA2019 paper list from PaopaoRobot
rginjapan/ImmersiveDroneInterface
The Immersive Semi-Autonomous Aerial Command System is an open-source aerial vehicle command and control platform, designed for immersive interfaces (such as the Oculus Rift). This system provides an intuitive and seamless extension of human operators’ perception and control capabilities over the air, enabling a variety of research applications.
rginjapan/interpy-zh
📘 "Intermediate Python" (Chinese translation)
rginjapan/lcd
rginjapan/machine_learing_study
rginjapan/mujoco-python-viewer
Simple renderer for use with MuJoCo (>=2.1.2) Python Bindings.
rginjapan/openvslam
A Versatile Visual SLAM Framework
rginjapan/Paper_Reading_List
Recommended Papers. Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Learning (cs.LG)
rginjapan/partnet_dataset
PartNet Dataset Official Release Repo
rginjapan/Python
Demos and other Python 3 code
rginjapan/PyTorch_Tutorial
Companion code for "A Practical Tutorial on PyTorch Model Training"
rginjapan/semantic_slam-1
Real-time semantic SLAM in ROS with a handheld RGB-D camera
rginjapan/state_estimation
State estimation for SLAM
rginjapan/SuperPoint_SLAM
SuperPoint + ORB_SLAM2
rginjapan/UnVIO
The source code of IJCAI2020 paper "Unsupervised Monocular Visual-inertial Odometry Network".
rginjapan/VINS-Mono-Learning
VINS-Mono source code annotations, for learning purposes only
rginjapan/VIO_Tutotial_Course
Homework for the VIO tutorial course by He Yijia and Gao Xiang
rginjapan/VisualInertialOdometry
A project of Visual Inertial Odometry for Autonomous Vehicle