PigLogic-Cyber's Stars
facebookresearch/faiss
A library for efficient similarity search and clustering of dense vectors.
thuml/Time-Series-Library
A Library for Advanced Deep Time Series Models.
OpenGVLab/LLaMA-Adapter
[ICLR 2024] Fine-tuning LLaMA to follow instructions within 1 hour using 1.2M parameters
NVlabs/VILA
VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and cloud.
PhoebusSi/Alpaca-CoT
We unify the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use. We welcome open-source enthusiasts to open any meaningful PR on this repo and to integrate as many LLM-related technologies as possible. We have built a fine-tuning platform that makes it easy for researchers to get started with and use large models; we welcome any meaningful PRs from open-source enthusiasts!
OpenDriveLab/DriveLM
[ECCV 2024 Oral] DriveLM: Driving with Graph Visual Question Answering
autonomousvision/tuplan_garage
[CoRL'23] Parting with Misconceptions about Learning-based Vehicle Motion Planning
vasgaowei/BEV-Perception
Bird's Eye View Perception
JiehongLin/SAM-6D
[CVPR2024] Code for "SAM-6D: Segment Anything Model Meets Zero-Shot 6D Object Pose Estimation".
PJLab-ADG/awesome-knowledge-driven-AD
A curated list of awesome knowledge-driven autonomous driving (continually updated)
HorizonRobotics/Sparse4D
wayveai/mile
PyTorch code for the paper "Model-Based Imitation Learning for Urban Driving".
jchengai/pluto
PLUTO: Pushing the Limit of Imitation Learning-based Planning for Autonomous Driving
autonomousvision/carla_garage
[ICCV'23] Hidden Biases of End-to-End Driving Models & A starter kit for the CARLA leaderboard 2.0.
MCG-NJU/SparseOcc
[ECCV 2024] Fully Sparse 3D Occupancy Prediction & RayIoU Evaluation Metric
NVlabs/OmniDrive
PJLab-ADG/DiLu
[ICLR 2024] DiLu: A Knowledge-Driven Approach to Autonomous Driving with Large Language Models
HKUST-Aerial-Robotics/SIMPL
SIMPL: A Simple and Efficient Multi-agent Motion Prediction Baseline for Autonomous Driving
er-muyue/BeMapNet
Tsinghua-MARS-Lab/neural_map_prior
The official implementation of the CVPR2023 paper titled “Neural Map Prior for Autonomous Driving”.
jchengai/forecast-mae
[ICCV'2023] Forecast-MAE: Self-supervised Pre-training for Motion Forecasting with Masked Autoencoders
happinesslz/LION
[NeurIPS 2024] Official code of “LION: Linear Group RNN for 3D Object Detection in Point Clouds”
opendilab/SmartRefine
[CVPR 2024] SmartRefine: A Scenario-Adaptive Refinement Framework for Efficient Motion Prediction
GAP-LAB-CUHK-SZ/SAMPro3D
SAMPro3D: Locating SAM Prompts in 3D for Zero-Shot Scene Segmentation
LiewFeng/RayDN
[ECCV 2024] Ray Denoising (RayDN): Depth-aware Hard Negative Sampling for Multi-view 3D Object Detection
alfredgu001324/MapBEVPrediction
[ECCV 2024] Accelerating Online Mapping and Behavior Prediction via Direct BEV Feature Attention
EMZucas/minidrive
Chengran-Yuan/DRAMA
AutonomousVehicleLaboratory/SemVecNet
IranQin/SupFusion
This is the official code of the paper "SupFusion: Supervised LiDAR-Camera Fusion for 3D Object Detection"