mrqrs's Stars
LiheYoung/Depth-Anything
[CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation
ucla-mobility/V2V4Real
[CVPR2023 Highlight] The official codebase for paper "V2V4Real: A large-scale real-world dataset for Vehicle-to-Vehicle Cooperative Perception"
UX-Decoder/Segment-Everything-Everywhere-All-At-Once
[NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"
Fudan-ProjectTitan/OpenAnnotate3D
OpenAnnotate3D: Open-Vocabulary Auto-Labeling System for Multi-modal Data
Haiyang-W/UniTR
[ICCV2023] Official Implementation of "UniTR: A Unified and Efficient Multi-Modal Transformer for Bird’s-Eye-View Representation"
OpenDriveLab/UniAD
[CVPR 2023 Best Paper Award] Planning-oriented Autonomous Driving
NVIDIA-AI-IOT/Lidar_AI_Solution
A project demonstrating Lidar-related AI solutions, including three GPU-accelerated Lidar/camera DL networks (PointPillars, CenterPoint, BEVFusion) and the related libs (cuPCL, 3D SparseConvolution, YUV2RGB, cuOSD).
kafeiyin00/WHU-HelmetDataset
Wearable Mapping Dataset
pengsida/learning_research
The author's research experience
dvlab-research/3D-Box-Segment-Anything
We extend Segment Anything to 3D perception by combining it with VoxelNeXt.
WHU-USI3DV/WHU-TLS
[ESI highly cited] A TLS point cloud registration benchmark consisting of 115 scans collected from 11 different scenarios
WHU-USI3DV/SGHR
[CVPR 2023] Robust Multiview Point Cloud Registration with Reliable Pose Graph Initialization and History Reweighting
skyhehe123/MSF
MSF: Motion-guided Sequential Fusion for Efficient 3D Object Detection from Point Cloud Sequences (CVPR 2023)
Pointcept/Pointcept
Pointcept: a codebase for point cloud perception research. Latest works: PTv3 (CVPR'24 Oral), PPT (CVPR'24), OA-CNNs (CVPR'24), MSC (CVPR'23)
dvlab-research/VoxelNeXt
VoxelNeXt: Fully Sparse VoxelNet for 3D Object Detection and Tracking (CVPR 2023)
hailanyi/VirConv
Virtual Sparse Convolution for Multimodal 3D Object Detection
mrqrs/CG-SSD
dvlab-research/spconv-plus
TuSimple/centerformer
Implementation for CenterFormer: Center-based Transformer for 3D Object Detection (ECCV 2022)
YoushaaMurhij/FMFNet
Pytorch implementation for the paper: "FMFNet: Improve the 3D Object Detection and Tracking via Feature Map Flow" [IJCNN-2022]
sshaoshuai/MTR
MTR: Motion Transformer with Global Intention Localization and Local Movement Refinement, NeurIPS 2022.
stepankonev/waymo-motion-prediction-challenge-2022-multipath-plus-plus
Solution for Waymo Motion Prediction Challenge 2022. Our implementation of MultiPath++
MCG-NJU/CamLiFlow
[CVPR 2022 Oral & TPAMI 2023] Learning Optical Flow and Scene Flow with Bidirectional Camera-LiDAR Fusion
dvlab-research/LargeKernel3D
LargeKernel3D: Scaling up Kernels in 3D Sparse CNNs (CVPR 2023)
VISION-SJTU/PillarNet-LTS
fundamentalvision/BEVFormer
[ECCV 2022] This is the official implementation of BEVFormer, a camera-only framework for autonomous driving perception, e.g., 3D object detection and semantic map segmentation.
scutan90/DeepLearning-500-questions
500 Questions on Deep Learning: a Q&A-style treatment of frequently used topics in probability, linear algebra, machine learning, deep learning, computer vision, and other hot areas, written to help the author and any interested readers. The book spans 18 chapters and over 500,000 characters. Given the author's limited expertise, readers are kindly invited to point out any errors. Work in progress... For collaboration, contact scutjy2015@163.com. All rights reserved; infringement will be pursued. Tan 2018.06
dvlab-research/FocalsConv
Focal Sparse Convolutional Networks for 3D Object Detection (CVPR 2022, Oral)
tusen-ai/SST
Code for a series of work in LiDAR perception, including SST (CVPR 22), FSD (NeurIPS 22), FSD++ (TPAMI 23), FSDv2, and CTRL (ICCV 23, oral).
isl-org/Open3D
Open3D: A Modern Library for 3D Data Processing