Liuxueyi's Stars
hiyouga/LLaMA-Factory
Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
QwenLM/Qwen2-VL
Qwen2-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud.
QwenLM/Qwen-VL
The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision-language model proposed by Alibaba Cloud.
PointsCoder/GPT-Driver
Learning to Drive with GPT
OpenDriveLab/DriveAdapter
[ICCV 2023 Oral] A New Paradigm for End-to-end Autonomous Driving to Alleviate Causal Confusion
bdvisl/DriveInsight
runningcheese/MirrorSite
A collection of mirror websites
OpenDriveLab/DriveLM
[ECCV 2024 Oral] DriveLM: Driving with Graph Visual Question Answering
Thinklab-SJTU/Bench2DriveZoo
BEVFormer, UniAD, VAD in Closed-Loop CARLA Evaluation with World Model RL Expert Think2Drive
Thinklab-SJTU/Bench2Drive
[NeurIPS 2024 Datasets and Benchmarks Track] Closed-Loop E2E-AD Benchmark Enhanced by World Model RL Expert
tulerfeng/PlanKD
[CVPR 2024] On the Road to Portability: Compressing End-to-End Motion Planner for Autonomous Driving
E2E-AD/AD-MLP
hustvl/VAD
[ICCV 2023] VAD: Vectorized Scene Representation for Efficient Autonomous Driving
autonomousvision/transfuser
[PAMI'23] TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving; [CVPR'21] Multi-Modal Fusion Transformer for End-to-End Autonomous Driving
autonomousvision/tuplan_garage
[CoRL'23] Parting with Misconceptions about Learning-based Vehicle Motion Planning
OpenDriveLab/ELM
[ECCV 2024] Embodied Understanding of Driving Scenarios
autonomousvision/navsim
[NeurIPS 2024] NAVSIM: Data-Driven Non-Reactive Autonomous Vehicle Simulation and Benchmarking
BraveGroup/LAW
Enhancing End-to-End Autonomous Driving with Latent World Model
NVlabs/Hydra-MDP
h-zhao1997/cobra
Cobra: Extending Mamba to Multi-modal Large Language Model for Efficient Inference
WenjunHuang94/ML-Mamba
ML-Mamba: Efficient Multi-Modal Large Language Model Utilizing Mamba-2
jeffreychou777/LOTVS-MM-AU
[CVPR2024 Highlight] The official repo for paper "Abductive Ego-View Accident Video Understanding for Safe Driving Perception"
jxbbb/ADAPT
This repository is an official implementation of ADAPT: Action-aware Driving Caption Transformer, accepted at ICRA 2023.
gaoyinfeng/PIWM
(T-IV) Dream to Drive with Predictive Individual World Model
xiaoyinliu0714/MICRO
UT-Austin-RPL/amago
A simple and scalable agent for training adaptive policies with sequence-based RL
yyyujintang/Awesome-Mamba-Papers
Awesome Papers related to Mamba.
Event-AHU/Mamba_State_Space_Model_Paper_List
[Mamba-Survey-2024] Paper list for State-Space-Model/Mamba and its applications
opendilab/awesome-model-based-RL
A curated list of awesome model based RL resources (continually updated)
zhaozijie2022/rl-course-control-of-pendulum
Inverted pendulum control; reinforcement learning assignment 1