DS-Depth: Dynamic and Static Depth Estimation via a Fusion Cost Volume

Xingyu Miao, Yang Bai, Haoran Duan, Yawen Huang, Fan Wan, Xinxing Xu, Yang Long, Yefeng Zheng
Accepted by IEEE Transactions on Circuits and Systems for Video Technology (TCSVT)

Setup

To get started, please create the conda environment by running

cd DSdepth
conda env create -f environment.yaml
conda activate dsdepth
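
After activating the environment, you can optionally confirm that PyTorch was installed with GPU support (this quick sanity check is not part of the original setup instructions):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"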

Train

To train a KITTI model, run:

python -m dsdepth.train \
    --data_path <your_KITTI_path> \
    --log_dir <your_save_path> \
    --model_name <your_model_name>

For instructions on downloading the KITTI dataset, see Monodepth2.
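
For reference, the Monodepth2 README downloads the raw KITTI data roughly as follows (the location of the split file inside this repository is an assumption and may differ):

wget -i splits/kitti_archives_to_download.txt -P kitti_data/
cd kitti_data
unzip "*.zip"
cd ..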

To train a Cityscapes model, run:

python -m dsdepth.train \
    --data_path <your_preprocessed_cityscapes_path> \
    --log_dir <your_save_path> \
    --model_name <your_model_name> \
    --dataset cityscapes_preprocessed \
    --split cityscapes_preprocessed \
    --freeze_teacher_epoch 5 \
    --height 192 --width 512

This assumes you have already preprocessed the Cityscapes dataset. If you have not, please follow the preprocessing instructions in ManyDepth; a sketch of the command is given below.
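
As a rough sketch, ManyDepth preprocesses Cityscapes with an SfMLearner-style script along the following lines (the script name and argument values are taken from the ManyDepth instructions and should be checked against its README):

python prepare_train_data.py \
    --img_height 512 \
    --img_width 1024 \
    --dataset_dir <your_downloaded_cityscapes_path> \
    --dataset_name cityscapes \
    --dump_root <your_preprocessed_cityscapes_path> \
    --seq_length 3 \
    --num_threads 8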

Evaluation

KITTI dataset

First, run export_gt_depth.py to export the ground truth depth files.
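
Assuming the export script follows the Monodepth2 interface (the flags below are an assumption), a typical call is:

python export_gt_depth.py \
    --data_path <your_KITTI_path> \
    --split eigen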

To evaluate a model on KITTI, run:

python -m dsdepth.evaluate_depth \
    --data_path <your_KITTI_path> \
    --load_weights_folder <your_model_path> \
    --eval_mono \
    --eval_split eigen

Cityscapes dataset

The ground truth depth files can be downloaded Here.

To evaluate a model on Cityscapes, run:

python -m dsdepth.evaluate_depth \
    --data_path <your_cityscapes_path> \
    --load_weights_folder <your_model_path> \
    --eval_mono \
    --eval_split cityscapes

And to evaluate a model on Cityscapes (Dynamic region only), run:

python -m dsdepth.evaluate_depth_dynamic \
    --data_path <your_cityscapes_path> \
    --load_weights_folder <your_model_path> \
    --eval_mono \
    --eval_split cityscapes

Please make sure to switch to the dynamic-region dataloader. The dynamic object masks for the Cityscapes dataset can be downloaded from Here.

Pretrained weights

You can download weights for some pretrained models here:

If you have any concerns about this paper or its implementation, feel free to open an issue or email me at xingyu.miao@durham.ac.uk.

Citation

If you find this code useful for your research, please consider citing the following paper:

@ARTICLE{10220114,
  author={Miao, Xingyu and Bai, Yang and Duan, Haoran and Huang, Yawen and Wan, Fan and Xu, Xinxing and Long, Yang and Zheng, Yefeng},
  journal={IEEE Transactions on Circuits and Systems for Video Technology}, 
  title={DS-Depth: Dynamic and Static Depth Estimation via a Fusion Cost Volume}, 
  year={2023},
  doi={10.1109/TCSVT.2023.3305776}}

Acknowledgments

Our training code is built upon ManyDepth.