EmpErorWGA's Stars
kvcache-ai/Mooncake
Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.
rainmaker22/SMART
SMART: Scalable Multi-agent Real-time Motion Simulation via Next-token Prediction
OpenGVLab/InternVL
[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4V. A commercially usable open-source multimodal dialogue model with performance approaching GPT-4V.
swc-17/SparseDrive
SparseDrive: End-to-End Autonomous Driving via Sparse Scene Representation
PJLab-ADG/LeapAD
Continuously Learning, Adapting, and Improving: A Dual-Process Approach to Autonomous Driving
OpenDriveLab/Vista
A Generalizable World Model for Autonomous Driving
NVlabs/OmniDrive
itcharge/LeetCode-Py
⛽️ "Algorithm Clearance Handbook": an in-depth tutorial on algorithms and data structures that starts from zero, with detailed solutions to 850+ LeetCode problems and 200 popular big-tech interview questions.
OpenDriveLab/Birds-eye-view-Perception
[IEEE T-PAMI] Awesome BEV perception research and cookbook for audiences at all levels in autonomous driving
zijing2333/CSView
CSView is a project for learning and summarizing tech-industry interview knowledge, covering frequently asked algorithm questions, system design, computer networks, operating systems, C++, Java, Golang, MySQL, Redis, K8s, message queues, and other common interview topics.
megvii-research/Far3D
[AAAI2024] Far3D: Expanding the Horizon for Surround-view 3D Object Detection
rolsheng/MM-VUFM4DS
A systematic survey of multi-modal and multi-task visual understanding foundation models for driving scenarios
OpenDriveLab/ELM
[ECCV 2024] Embodied Understanding of Driving Scenarios
zixian2021/AI-interview-cards
The most complete repository of AI algorithm interview questions: 1,000 questions across 25 categories.
OpenDriveLab/DriveAGI
[Incl. GenAD, CVPR 2024 Highlight] Embracing Foundation Models into Autonomous Agent and System
xai-org/grok-1
Grok open release
AdaCompNUS/WhatMatters
This repository contains the code for the paper "What Truly Matters in Trajectory Prediction for Autonomous Driving?"
opendilab/LMDrive
[CVPR 2024] LMDrive: Closed-Loop End-to-End Driving with Large Language Models
BraveGroup/Drive-WM
[CVPR 2024] A world model for autonomous driving.
wayveai/Driving-with-LLMs
PyTorch implementation for the paper "Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving"
IrohXu/Awesome-Multimodal-LLM-Autonomous-Driving
[WACV 2024 Survey Paper] Multimodal Large Language Models for Autonomous Driving
PJLab-ADG/GPT4V-AD-Exploration
On the Road with GPT-4V(ision): Explorations of Utilizing Visual-Language Model as Autonomous Driving Agent
OpenDriveLab/DriveLM
[ECCV 2024] DriveLM: Driving with Graph Visual Question Answering
Thinklab-SJTU/Awesome-LLM4AD
A curated list of awesome LLM for Autonomous Driving resources (continually updated)
OpenDriveLab/OpenLane-V2
[NeurIPS 2023 Track Datasets and Benchmarks] OpenLane-V2: The First Perception and Reasoning Benchmark for Road Driving
Haiyang-W/UniTR
[ICCV2023] Official Implementation of "UniTR: A Unified and Efficient Multi-Modal Transformer for Bird’s-Eye-View Representation"
wudongming97/Prompt4Driving
zhejz/HPTR
[NeurIPS 2023] Real-Time Motion Prediction via Heterogeneous Polyline Transformer with Relative Pose Encoding
ziqipang/StreamingForecasting
[IROS 2023] "Streaming Motion Forecasting for Autonomous Driving"
NVlabs/DQTrack
Official PyTorch implementation of End-to-end 3D Tracking with Decoupled Queries [ICCV 2023]