Pinned Repositories
Awesome-3D-Object-Detection
Papers, code and datasets about deep learning for 3D Object Detection.
Awesome-BEV-Perception-Multi-Cameras
Awesome papers about Multi-Camera 3D Object Detection and Segmentation in Bird's-Eye-View, such as DETR3D, BEVDet, and BEVFormer
Awesome-LiDAR-Camera-Calibration
A Collection of LiDAR-Camera-Calibration Papers, Toolboxes and Notes
awesome-self-driving-car
An awesome list of self-driving cars
bevfusion
[ICRA'23] BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation
cv-detect-robot
High-performance deployment of `yolov5 + TensorRT + ROS + DeepStream`
mmdet3d_ros2
perception_pcl
PCL (Point Cloud Library) ROS interface stack
pytorch-auto-drive
PytorchAutoDrive: Segmentation models (ERFNet, ENet, DeepLab, FCN...) and lane detection models (SCNN, PRNet, RESA, LSTR, BézierLaneNet...) based on PyTorch, with fast training, visualization, benchmarking, and deployment support
resume
J-xinyu's Repositories
J-xinyu/mmdet3d_ros2
J-xinyu/Awesome-3D-Object-Detection
Papers, code and datasets about deep learning for 3D Object Detection.
J-xinyu/Awesome-BEV-Perception-Multi-Cameras
Awesome papers about Multi-Camera 3D Object Detection and Segmentation in Bird's-Eye-View, such as DETR3D, BEVDet, and BEVFormer
J-xinyu/Awesome-LiDAR-Camera-Calibration
A Collection of LiDAR-Camera-Calibration Papers, Toolboxes and Notes
J-xinyu/awesome-self-driving-car
An awesome list of self-driving cars
J-xinyu/bevfusion
[ICRA'23] BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation
J-xinyu/cv-detect-robot
High-performance deployment of `yolov5 + TensorRT + ROS + DeepStream`
J-xinyu/perception_pcl
PCL (Point Cloud Library) ROS interface stack
J-xinyu/pytorch-auto-drive
PytorchAutoDrive: Segmentation models (ERFNet, ENet, DeepLab, FCN...) and lane detection models (SCNN, PRNet, RESA, LSTR, BézierLaneNet...) based on PyTorch, with fast training, visualization, benchmarking, and deployment support
J-xinyu/resume
J-xinyu/ros2_tao_pointpillars
ROS2 node for 3D object detection using TAO-PointPillars.
J-xinyu/ros2_vision
J-xinyu/tugbot_autoware_pkgs
Packages for running the Gazebo tugbot with Autoware Universe
J-xinyu/Vehicle-CV-ADAS
The project achieves FCWS, LDWS, and LKAS functions using only visual sensors, built on YOLOv5 / YOLOv5-Lite / YOLOv8 and Ultra-Fast-Lane-Detection-v2.
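Several of the repositories above (Awesome-LiDAR-Camera-Calibration, bevfusion, perception_pcl) revolve around relating LiDAR points to camera pixels. As a minimal sketch of what a LiDAR-camera calibration recovers, the snippet below projects a 3D point through hypothetical extrinsics (R, t) and pinhole intrinsics K; none of the matrix values come from a real sensor setup.

```python
def project_point(X, R, t, K):
    """Project a 3D LiDAR point X into pixel coordinates (u, v)."""
    # Transform into the camera frame: Xc = R @ X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    if Xc[2] <= 0:
        return None  # point is behind the camera, not visible
    # Apply pinhole intrinsics and divide by depth (perspective division)
    u = (K[0][0] * Xc[0] + K[0][2] * Xc[2]) / Xc[2]
    v = (K[1][1] * Xc[1] + K[1][2] * Xc[2]) / Xc[2]
    return (u, v)

# Placeholder calibration: identity extrinsics, fx = fy = 500, cx = 320, cy = 240
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 0.0]
K = [[500, 0, 320], [0, 500, 240], [0, 0, 1]]

print(project_point([1.0, 0.5, 2.0], R, t, K))  # → (570.0, 365.0)
```

In practice the toolboxes collected in Awesome-LiDAR-Camera-Calibration estimate R and t by matching features (e.g. checkerboard corners or edges) seen in both modalities; once known, this projection is what fusion pipelines like BEVFusion use to associate point clouds with image features.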