OpenDriveLab
AI for Robotics and Autonomous Driving, affiliated with The University of Hong Kong (HKU).
Hong Kong
Pinned Repositories
AgiBot-World
[IROS 2025] The Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems
Birds-eye-view-Perception
[IEEE T-PAMI 2023] Awesome BEV perception research and cookbook for audiences at all levels in autonomous driving
DriveAGI
[CVPR 2024 Highlight] GenAD: Generalized Predictive Model for Autonomous Driving
DriveLM
[ECCV 2024 Oral] DriveLM: Driving with Graph Visual Question Answering
End-to-end-Autonomous-Driving
[IEEE T-PAMI 2024] All you need for End-to-end Autonomous Driving
FreeTacMan
FreeTacMan: Robot-free Visuo-Tactile Data Collection System for Contact-rich Manipulation
OccNet
[ICCV 2023] OccNet: Scene as Occupancy
UniAD
[CVPR 2023 Best Paper Award] Planning-oriented Autonomous Driving
UniVLA
[RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions
Vista
[NeurIPS 2024] A Generalizable World Model for Autonomous Driving
OpenDriveLab's Repositories
OpenDriveLab/OccNet
[ICCV 2023] OccNet: Scene as Occupancy
OpenDriveLab/OpenLane-V2
[NeurIPS 2023 Track Datasets and Benchmarks] OpenLane-V2: The First Perception and Reasoning Benchmark for Road Driving
OpenDriveLab/TCP
[NeurIPS 2022] Trajectory-guided Control Prediction for End-to-end Autonomous Driving: A Simple yet Strong Baseline
OpenDriveLab/ST-P3
[ECCV 2022] ST-P3: An end-to-end vision-based autonomous driving framework via spatial-temporal feature learning
OpenDriveLab/OpenScene
3D Occupancy Prediction Benchmark in Autonomous Driving
OpenDriveLab/LaneSegNet
[ICLR 2024] Map Learning with Lane Segment for Autonomous Driving
OpenDriveLab/MPI
[RSS 2024] Learning Manipulation by Predicting Interaction
OpenDriveLab/LightwheelOcc
LightwheelOcc: A 3D Occupancy Synthetic Dataset in Autonomous Driving
OpenDriveLab/CVPR2024Challenge_Assets
Assets for Competition Servers on Huggingface