dkoalal's Stars
binary-husky/gpt_academic
A practical interactive interface for LLMs such as GPT and GLM, specially optimized for paper reading, polishing, and writing. Modular design with support for custom shortcut buttons and function plugins; project analysis and self-translation for Python, C++, and other codebases; PDF/LaTeX paper translation and summarization; parallel queries to multiple LLMs; local models such as chatglm3. Integrates Qwen (Tongyi Qianwen), deepseekcoder, iFlytek Spark, ERNIE Bot (Wenxin Yiyan), llama2, rwkv, claude2, moss, and more.
OpenDriveLab/UniAD
[CVPR 2023 Best Paper Award] Planning-oriented Autonomous Driving
torch-points3d/torch-points3d
Pytorch framework for doing deep learning on point clouds.
OpenDriveLab/End-to-end-Autonomous-Driving
[IEEE T-PAMI 2024] All you need for End-to-end Autonomous Driving
Pointcept/Pointcept
Pointcept: a codebase for point cloud perception research. Latest works: PTv3 (CVPR'24 Oral), PPT (CVPR'24), OA-CNNs (CVPR'24), MSC (CVPR'23)
rusty1s/pytorch_scatter
PyTorch Extension Library of Optimized Scatter Operations
autonomousvision/transfuser
[PAMI'23] TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving; [CVPR'21] Multi-Modal Fusion Transformer for End-to-End Autonomous Driving
tusen-ai/SST
Code for a series of work in LiDAR perception, including SST (CVPR 22), FSD (NeurIPS 22), FSD++ (TPAMI 23), FSDv2, and CTRL (ICCV 23, oral).
hustvl/VAD
[ICCV 2023] VAD: Vectorized Scene Representation for Efficient Autonomous Driving
MenghaoGuo/PCT
Jittor implementation of PCT: Point Cloud Transformer
qq456cvb/Point-Transformers
Point Transformers
facebookresearch/3detr
Code & Models for 3DETR - an End-to-end transformer model for 3D object detection
opendilab/InterFuser
[CoRL 2022] InterFuser: Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer
Julie-tang00/Point-BERT
[CVPR 2022] Pre-Training 3D Point Cloud Transformers with Masked Point Modeling
dotchen/LAV
(CVPR 2022) A minimalist, mapless, end-to-end self-driving stack for joint perception, prediction, planning and control.
Haiyang-W/DSVT
[CVPR2023] Official Implementation of "DSVT: Dynamic Sparse Voxel Transformer with Rotated Sets"
OpenDriveLab/TCP
[NeurIPS 2022] Trajectory-guided Control Prediction for End-to-end Autonomous Driving: A Simple yet Strong Baseline.
wayveai/mile
PyTorch code for the paper "Model-Based Imitation Learning for Urban Driving".
OpenDriveLab/ST-P3
[ECCV 2022] ST-P3, an end-to-end vision-based autonomous driving framework via spatial-temporal feature learning.
autonomousvision/neat
[ICCV'21] NEAT: Neural Attention Fields for End-to-End Autonomous Driving
dvlab-research/SphereFormer
The official implementation for "Spherical Transformer for LiDAR-based 3D Recognition" (CVPR 2023).
PointsCoder/VOTR
Voxel Transformer for 3D object detection
autonomousvision/carla_garage
[ICCV'23] Hidden Biases of End-to-End Driving Models
dvlab-research/LargeKernel3D
LargeKernel3D: Scaling up Kernels in 3D Sparse CNNs (CVPR 2023)
OpenDriveLab/DriveAdapter
[ICCV 2023 Oral] A New Paradigm for End-to-end Autonomous Driving to Alleviate Causal Confusion
skyhehe123/VoxSeT
Voxel Set Transformer: A Set-to-Set Approach to 3D Object Detection from Point Clouds (CVPR 2022)
bradyz/2020_CARLA_challenge
"Learning by Cheating" (CoRL 2019) submission for the 2020 CARLA Challenge
dvlab-research/SparseTransformer
A fast and memory-efficient library for sparse transformers with varying token numbers (e.g., 3D point clouds).
dvlab-research/spconv-plus
Kin-Zhang/carla-expert
A collection of expert agents for gathering end-to-end (e2e) learning data in CARLA, assembled from existing open-source code.