Learning from All Vehicles
Dian Chen, Philipp Krähenbühl
CVPR 2022 (also arXiv 2203.11934)
This repo contains code for the paper Learning from All Vehicles.
It distills a model that performs joint perception, multi-modal prediction, and planning, and we hope it serves as a great starter kit for end-to-end autonomous driving research.
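To make the "joint perception, prediction and planning" idea concrete, here is a toy, hypothetical sketch of what such an interface can look like. The class, head, and output key names below are illustrative assumptions only, not the actual modules of this repo.

```python
# Hypothetical sketch of a joint perception / prediction / planning interface.
# All names here are illustrative, not the repo's actual code.
import torch
import torch.nn as nn

class JointDrivingModel(nn.Module):
    def __init__(self, feat_dim=64, horizon=10, modes=3):
        super().__init__()
        self.backbone = nn.Conv2d(3, feat_dim, kernel_size=3, padding=1)
        self.detect_head = nn.Conv2d(feat_dim, 8, kernel_size=1)      # perception: per-cell detection scores
        self.predict_head = nn.Linear(feat_dim, modes * horizon * 2)  # multi-modal motion prediction
        self.plan_head = nn.Linear(feat_dim, horizon * 2)             # ego planning
        self.horizon, self.modes = horizon, modes

    def forward(self, image):
        feat = self.backbone(image)        # (B, C, H, W)
        pooled = feat.mean(dim=(2, 3))     # (B, C)
        return {
            "detections": self.detect_head(feat),
            "trajectories": self.predict_head(pooled).view(-1, self.modes, self.horizon, 2),
            "plan": self.plan_head(pooled).view(-1, self.horizon, 2),
        }

out = JointDrivingModel()(torch.zeros(1, 3, 128, 128))  # e.g. out["plan"].shape == (1, 10, 2)
```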
If you find our repo, dataset or paper useful, please cite us as
@inproceedings{chen2022lav,
  title={Learning from all vehicles},
  author={Chen, Dian and Kr{\"a}henb{\"u}hl, Philipp},
  booktitle={CVPR},
  year={2022}
}
Check out our demo videos at: https://dotchen.github.io/LAV/
- To run CARLA and train the models, make sure you are using a machine with at least a mid-range GPU.
- Please follow INSTALL.md to set up the environment.
We adopt an LBC-style staged privileged distillation framework. Please refer to TRAINING.md for more details.
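TRAINING.md contains the actual recipe; as rough orientation, the hedged sketch below shows the general shape of LBC-style privileged distillation: a privileged "teacher" is first trained on ground-truth state, then a sensor-only "student" is supervised by the frozen teacher's outputs. All module and variable names here are illustrative assumptions, not the repo's code.

```python
# Minimal sketch of staged privileged distillation (LBC-style); names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stage 1: a privileged teacher learns to drive from ground-truth BEV state.
teacher = nn.Sequential(nn.Flatten(), nn.Linear(5 * 64 * 64, 256), nn.ReLU(), nn.Linear(256, 20))
# Stage 2: a sensor-only student imitates the frozen teacher from camera input.
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 128 * 128, 256), nn.ReLU(), nn.Linear(256, 20))

opt = torch.optim.Adam(student.parameters(), lr=1e-4)

def distill_step(bev_state, camera_image):
    """One student update: match the teacher's predicted waypoints."""
    with torch.no_grad():                      # the teacher is already trained and kept frozen
        target_waypoints = teacher(bev_state)
    pred_waypoints = student(camera_image)
    loss = F.l1_loss(pred_waypoints, target_waypoints)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

loss = distill_step(torch.zeros(4, 5, 64, 64), torch.zeros(4, 3, 128, 128))
```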
We additionally provide example trained weights in the weights folder if you would like to evaluate directly.
They are trained on Town01, Town03, Town04, and Town06.
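For reference, restoring a PyTorch checkpoint generally looks like the snippet below. The checkpoint file name and the stand-in module are placeholders, not the actual contents of the weights folder.

```python
# Hypothetical loading example; the file name and stand-in module are placeholders.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # replace with the LAV module you want to evaluate
state_dict = torch.load("weights/checkpoint.pth", map_location="cpu")  # assumed file name
model.load_state_dict(state_dict)
model.eval()              # the provided weights were trained on Town01/03/04/06
```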
Make sure you are launching CARLA with the -vulkan flag.
Inside the root LAV repo, run
ROUTES=[PATH TO ROUTES] ./leaderboard/scripts/run_evaluation.sh
Use ROUTES=assets/routes_lav_valid.xml to run our ablation routes, or ROUTES=leaderboard/data/routes_valid.xml for the validation routes provided by the leaderboard.
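If you prefer to launch the evaluation from a script rather than the shell, the same invocation can be reproduced with the Python standard library. This is only a convenience sketch and not part of the repo.

```python
# Optional convenience: the shell invocation above, launched from Python (stdlib only).
# Assumes you run this from the repo root.
import os
import subprocess

env = dict(os.environ, ROUTES="assets/routes_lav_valid.xml")  # or leaderboard/data/routes_valid.xml
subprocess.run(["./leaderboard/scripts/run_evaluation.sh"], env=env, check=True)
```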
We also release our LAV dataset. Download the dataset HERE.
See TRAINING.md for more details.
We thank Tianwei Yin for the pillar generation code. The ERFNet code is taken from the official ERFNet repo.
This repo is released under the Apache 2.0 License (please refer to the LICENSE file for details).