bradyz's Stars
VSCodeVim/Vim
:star: Vim for Visual Studio Code
synercys/annotated_latex_equations
Examples of how to create colorful, annotated equations in LaTeX using TikZ.
OpenDriveLab/End-to-end-Autonomous-Driving
[IEEE T-PAMI 2024] All you need for End-to-end Autonomous Driving
mit-han-lab/bevfusion
[ICRA'23] BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation
woven-planet/l5kit
L5Kit - https://woven.toyota
motional/nuplan-devkit
The devkit of the nuPlan dataset.
zju3dv/LoG
Level of Gaussians
UMich-CURLY-teaching/UMich-ROB-530-public
UMich 500-Level Mobile Robotics Course
exiawsh/StreamPETR
[ICCV 2023] StreamPETR: Exploring Object-Centric Temporal Modeling for Efficient Multi-View 3D Object Detection
bradyz/cross_view_transformers
Cross-view Transformers for real-time Map-view Semantic Segmentation (CVPR 2022 Oral)
OpenDriveLab/OpenLane
[ECCV 2022 Oral] OpenLane: Large-scale Realistic 3D Lane Dataset
facebookresearch/LaViLa
Code release for "Learning Video Representations from Large Language Models"
dotchen/LAV
(CVPR 2022) A minimalist, mapless, end-to-end self-driving stack for joint perception, prediction, planning and control.
OpenDriveLab/TCP
[NeurIPS 2022] Trajectory-guided Control Prediction for End-to-end Autonomous Driving: A Simple yet Strong Baseline.
xingyizhou/GTR
Global Tracking Transformers, CVPR 2022
wayveai/mile
PyTorch code for the paper "Model-Based Imitation Learning for Urban Driving".
georghess/neurad-studio
[CVPR2024] NeuRAD: Neural Rendering for Autonomous Driving
facebookresearch/nocturne
A data-driven, fast driving simulator for multi-agent coordination under partial observability.
jozhang97/DETA
Detection Transformers with Assignment
nachiket92/PGP
Code for "Multimodal Trajectory Prediction Conditioned on Lane-Graph Traversals," CoRL 2021.
DerrickXuNu/CoBEVT
[CoRL2022] CoBEVT: Cooperative Bird's Eye View Semantic Segmentation with Sparse Transformers
hturki/suds
Scalable Urban Dynamic Scenes
Hannibal046/nanoRWKV
The nanoGPT-style implementation of RWKV Language Model - an RNN with GPT-level LLM performance.
Kin-Zhang/carla-expert
All kinds of experts that can collect data for e2e learning in CARLA; related experts collected from existing open-source code.
UT-Austin-RPL/FORGE
Code for Few-View Object Reconstruction with Unknown Categories and Camera Poses at 3DV 2024 (oral)
basilevh/occlusions-4d
Revealing Occlusions with 4D Neural Fields (CVPR 2022 Oral) - Official Implementation
seawee1/driver-dojo
A benchmark towards generalizable reinforcement learning for autonomous driving.
jozhang97/MutateEverything
facebookresearch/chat2map-official
[CVPR 2023] Code and datasets for 'Chat2Map: Efficient Scene Mapping from Multi-Ego Conversations'
alex-petrenko/animations
Manim animations for various projects