wayveai/Driving-with-LLMs
PyTorch implementation for the paper "Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving"
Python · Apache-2.0
Issues
decapoda-research/llama-7b-hf
#27 opened by CerrieJ - 2
decapoda-research/llama-7b-hf/ Not Found
#26 opened by conysauce - 2
Paper citations
#25 opened by AHPUymhd - 1
Thanks for your work. I have a question. Since the prompt in the model's input already describes the content represented by the vectors, why is it necessary to align the vectors with the LLM during pre-training? Are the vectors used to help the model further understand the driving scenario beyond the prompt? Are the labels in the pre-training process the prompts generated by lanGen? And what is the purpose of the 100k question-answer pairs in pre-training?
#24 opened by lilpeng - 4
View Results
#19 opened by PzWHU - 0
Training with real world datasets
#23 opened by ksm26 - 1
real vehicles
#18 opened by AlleyOop23 - 1
real environment
#17 opened by AlleyOop23 - 1
The inference generation is very slow
#16 opened by Alkaiddd - 1
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!
#21 opened by guiyuliu - 1
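The error in #21 typically means a model weight and an input tensor ended up on different GPUs. Without seeing the reporter's setup, a common cause is creating inputs on the default device while the model (or part of it) lives elsewhere. A minimal sketch of the usual fix, with a toy `nn.Linear` standing in for the LLM (the layer and tensor shapes here are illustrative, not from the repo):

```python
import torch

# Pin everything to one explicit device instead of relying on defaults.
# Falls back to CPU so the sketch also runs on machines without CUDA.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(8, 2).to(device)  # toy stand-in for the real model
x = torch.randn(4, 8)                     # new tensors default to CPU

# Moving the input to the model's device avoids
# "Expected all tensors to be on the same device".
out = model(x.to(device))
print(out.shape)  # torch.Size([4, 2])
```

When the model itself is intentionally sharded across `cuda:0` and `cuda:1` (e.g. via `device_map="auto"`), inputs should instead go to the device of the model's first layer.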
The code and training command of the stage-1: vector representation pre-training stage
#3 opened by Phoebe-ovo - 1
Pre trained model
#9 opened by abhigoku10 - 1
Visualization of the results
#10 opened by abhigoku10 - 2
How to Use Multi-GPU Training
#8 opened by Dylandtt - 2
About the prompt
#12 opened by frkmac3 - 0
The environment
#13 opened by Arandinglv - 1
Unspecified signals
#11 opened by cneeruko - 0
Dataset and LLM
#7 opened by Haonote - 2
401 Client Error: Unauthorized for url
#5 opened by Leonard-Yao