ZhouZijie77's Stars
exacity/deeplearningbook-chinese
Deep Learning Book Chinese Translation
IDEA-Research/GroundingDINO
[ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
IDEA-Research/DINO
[ICLR 2023] Official implementation of the paper "DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection"
IDEA-Research/T-Rex
[ECCV 2024] API code for T-Rex2: Towards Generic Object Detection via Text-Visual Prompt Synergy
chaytonmin/Awesome-BEV-Perception-Multi-Cameras
Awesome papers about Multi-Camera 3D Object Detection and Segmentation in Bird's-Eye-View, such as DETR3D, BEVDet, BEVFormer, BEVDepth, UniAD
CVPR2023-3D-Occupancy-Prediction/CVPR2023-3D-Occupancy-Prediction
CVPR 2023 3D Occupancy Prediction Challenge
HongbiaoZ/autonomous_exploration_development_environment
Leveraging system development and robot deployment for ground-based autonomous navigation and exploration.
bradyz/cross_view_transformers
Cross-view Transformers for real-time Map-view Semantic Segmentation (CVPR 2022 Oral)
OpenGVLab/VideoMAEv2
[CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking
facebookresearch/OrienterNet
Source code for the paper "OrienterNet: Visual Localization in 2D Public Maps with Neural Matching"
longzw1997/Open-GroundingDino
Third-party implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
aliyun/NeWCRFs
mpc001/Visual_Speech_Recognition_for_Multiple_Languages
Visual Speech Recognition for Multiple Languages
weiyithu/SurroundDepth
[CoRL 2022] SurroundDepth: Entangling Surrounding Views for Self-Supervised Multi-Camera Depth Estimation
chaytonmin/Awesome-Occupancy-Prediction-Autonomous-Driving
Awesome papers about Multi-Camera Semantic Occupancy Prediction, such as TPVFormer, OccFormer, Occ3D, OpenOccupancy
haomo-ai/Cam4DOcc
[CVPR 2024] Cam4DOcc: Benchmark for Camera-Only 4D Occupancy Forecasting in Autonomous Driving Applications
zju3dv/GIFT
Code for "GIFT: Learning Transformation-Invariant Dense Visual Descriptors via Group CNNs" NeurIPS 2019
ucaszyp/STEPS
Official repository for the ICRA 2023 paper "STEPS: Joint Self-supervised Nighttime Image Enhancement and Depth Estimation"
LiuFG/Camera-Lidar-Fusion-ROS
Fully implemented in ROS; simply fuses category and location information.
DataXujing/TensorRT-DETR
:zap::zap::zap: `Second prize` code submission for the 2021 NVIDIA-Alibaba TensorRT competition. Team: 美迪康 AI Lab :rocket::rocket::rocket:
biter0088/pc-nerf
lus6-Jenny/RING
[IEEE T-RO 2023] Source code of RING and RING++ for loop closure detection in LiDAR SLAM.
RPM-Robotics-Lab/file_player_mulran
File Player for MulRan Dataset
adambielski/GrouPy
Group Equivariant Convolutional Neural Networks
Project-MANAS/ars_40X
Driver for the Continental ARS_404/ARS_408 radar.
ZhouZijie77/LCPR
[IEEE RA-L 2024] LCPR: A Multi-Scale Attention-Based LiDAR-Camera Fusion Network for Place Recognition
gergondet/ros_h264_streamer
A simple ROS node to stream/receive H.264-encoded images over a UDP/TCP socket
nacayu/ARS_408_ROS_Toolkit
BIT-XJY/EINet
Explicit Interaction for Fusion-Based Place Recognition
IRMVLab/MADiff
MADiff: Motion-Aware Mamba Diffusion Models for Hand Trajectory Prediction on Egocentric Videos