This repository contains code and models for our paper:

> Vision Transformers for Dense Prediction
> René Ranftl, Alexey Bochkovskiy, Vladlen Koltun
> ICCV 2021
### Changelog

- [August 2021] Models refactored to support model scripting and tracing (see the tracing sketch below)
- [March 2021] Initial release of inference code and models
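As an illustration of what the scripting/tracing support enables, here is a minimal tracing sketch. The `dpt.models.DPTDepthModel` import, the weight file name, and the `backbone` argument are assumptions based on this repository's layout, not a verified API reference:

```python
# Hedged sketch: exporting a DPT depth model via torch.jit tracing.
# Constructor arguments below are assumptions; adjust to your checkout.
import torch
from dpt.models import DPTDepthModel

model = DPTDepthModel(
    path="weights/dpt_hybrid-midas-501f0c75.pt",  # assumed dpt_hybrid weight file
    backbone="vitb_rn50_384",                     # assumed backbone id for dpt_hybrid
    non_negative=True,
)
model.eval()

# Trace with a dummy input at the network's native 384x384 resolution.
dummy = torch.randn(1, 3, 384, 384)
with torch.no_grad():
    traced = torch.jit.trace(model, dummy)
traced.save("dpt_hybrid_traced.pt")
```

The traced module can then be loaded with `torch.jit.load` in environments that do not have the repository code installed.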
### Setup

1) Download the model weights and place them in the `weights` folder:

   Monodepth:
   - dpt_hybrid-midas-501f0c75.pt
   - dpt_large-midas-2f21e586.pt

   Segmentation:
   - dpt_hybrid-ade20k-53898607.pt
   - dpt_large-ade20k-b12dca68.pt
2) Set up dependencies:

   ```shell
   pip install -r requirements.txt
   ```

   The code was tested with Python 3.7, PyTorch 1.9.0, OpenCV 4.5.1, and timm 0.4.9.
### Usage

1) Place one or more input images in the folder `input`.
2) Run a monocular depth estimation model:

   ```shell
   python run_monodepth.py
   ```

   or run a semantic segmentation model:

   ```shell
   python run_segmentation.py
   ```
3) The results are written to the folders `output_monodepth` and `output_semseg`, respectively.
Use the flag `-t` to switch between different models. Possible options are `dpt_hybrid` (default) and `dpt_large`.
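If you prefer to call a model from Python rather than through the scripts, the sketch below roughly mirrors what `run_monodepth.py` does. The `dpt.models` import and constructor arguments are assumptions based on this repository's layout, and the preprocessing is deliberately simplified relative to the repository's `Resize`/`NormalizeImage` transforms:

```python
# Hedged sketch of programmatic depth inference, loosely mirroring
# run_monodepth.py. Module paths and arguments are assumptions.
import cv2
import numpy as np
import torch
from dpt.models import DPTDepthModel

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = DPTDepthModel(
    path="weights/dpt_hybrid-midas-501f0c75.pt",  # assumed default weight file
    backbone="vitb_rn50_384",                     # assumed backbone id for dpt_hybrid
    non_negative=True,
).to(device).eval()

# Load an image, resize to the network resolution, normalize to [-1, 1].
img = cv2.cvtColor(cv2.imread("input/example.jpg"), cv2.COLOR_BGR2RGB) / 255.0
net_in = cv2.resize(img, (384, 384), interpolation=cv2.INTER_CUBIC)
net_in = (net_in - 0.5) / 0.5
sample = torch.from_numpy(net_in.transpose(2, 0, 1)).float().unsqueeze(0).to(device)

with torch.no_grad():
    prediction = model(sample)  # relative inverse depth, shape (1, 384, 384)
    # Resize the prediction back to the original image resolution.
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze().cpu().numpy()

# Write a 16-bit visualization (assumes the output_monodepth folder exists).
out = (prediction - prediction.min()) / (prediction.max() - prediction.min() + 1e-8)
cv2.imwrite("output_monodepth/example.png", (out * 65535).astype(np.uint16))
```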
Additional models:

- Monodepth fine-tuned on KITTI: dpt_hybrid-kitti-e7069aae.pt
- Monodepth fine-tuned on NYUv2: dpt_hybrid-nyu-b3a2ef48.pt
Run with:

```shell
python run_monodepth.py -t [dpt_hybrid_kitti|dpt_hybrid_nyu]
```
Hints on how to evaluate monodepth models can be found in [EVALUATION.md](https://github.com/intel-isl/DPT/blob/main/EVALUATION.md).
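As a quick orientation, the snippet below computes a conventional set of monodepth error metrics (absolute relative error, RMSE, and δ-threshold accuracies). It is a generic sketch of the standard formulas, not necessarily the exact protocol used in EVALUATION.md:

```python
# Hedged sketch of standard monodepth error metrics. The metric
# definitions are the conventional ones; EVALUATION.md is authoritative
# for the protocol actually used with these models.
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray, mask: np.ndarray) -> dict:
    """Compute common depth metrics over valid ground-truth pixels."""
    pred, gt = pred[mask], gt[mask]
    thresh = np.maximum(gt / pred, pred / gt)
    return {
        "abs_rel": float(np.mean(np.abs(gt - pred) / gt)),
        "rmse": float(np.sqrt(np.mean((gt - pred) ** 2))),
        "delta1": float(np.mean(thresh < 1.25)),
        "delta2": float(np.mean(thresh < 1.25 ** 2)),
        "delta3": float(np.mean(thresh < 1.25 ** 3)),
    }
```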
### Citation

Please cite our papers if you use this code or any of the models:

```bibtex
@inproceedings{Ranftl2021,
    author    = {Ren\'{e} Ranftl and Alexey Bochkovskiy and Vladlen Koltun},
    title     = {Vision Transformers for Dense Prediction},
    booktitle = {ICCV},
    year      = {2021},
}
```

```bibtex
@article{Ranftl2020,
    author  = {Ren\'{e} Ranftl and Katrin Lasinger and David Hafner and Konrad Schindler and Vladlen Koltun},
    title   = {Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer},
    journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
    year    = {2020},
}
```
### Acknowledgements

Our work builds on and uses code from [timm](https://github.com/rwightman/pytorch-image-models) and [PyTorch-Encoding](https://github.com/zhanghang1989/PyTorch-Encoding). We'd like to thank the authors for making these libraries available.
### License

MIT License