PyTorch implementation of CaFNet: A Confidence-Driven Framework for Radar Camera Depth Estimation
IROS 2024
The models have been tested with Python 3.7/3.8 and PyTorch 1.10.1+cu111.
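A matching environment can be set up with pip; the torchvision pairing below is our assumption for PyTorch 1.10.1, not stated by the authors:
pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 -f https://download.pytorch.org/whl/torch_stable.html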
If you use this work, please cite our paper:
@misc{sun2024cafnetconfidencedrivenframeworkradar,
      title={CaFNet: A Confidence-Driven Framework for Radar Camera Depth Estimation},
      author={Huawei Sun and Hao Feng and Julius Ott and Lorenzo Servadei and Robert Wille},
      year={2024},
      eprint={2407.00697},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.00697},
}
Note: Run all bash scripts from the repository root directory.
We use the nuScenes dataset, which can be downloaded from the official website (https://www.nuscenes.org/nuscenes).
Please create a folder called dataset and place the downloaded nuScenes dataset into it.
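As a quick sanity check that the dataset is in place, you can load it with the official nuscenes-devkit (installable via pip install nuscenes-devkit). This is a minimal sketch; adjust dataroot if your extraction created a nested subfolder, and adjust version if you use the mini split.

```python
# Sanity check: load the nuScenes tables with the official devkit.
from nuscenes.nuscenes import NuScenes

# dataroot assumes the dataset was extracted directly into dataset/;
# point it at the folder containing v1.0-trainval/, samples/, sweeps/, maps/.
nusc = NuScenes(version='v1.0-trainval', dataroot='dataset', verbose=True)
print(f'{len(nusc.sample)} samples loaded')
```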
Generate the panoptic segmentation masks using the following:
python setup/gen_panoptic_seg.py
Then run the following bash scripts to generate the preprocessed dataset for training:
bash setup_dataset_nuscenes.sh
bash setup_dataset_nuscenes_radar.sh
Next, run the following bash scripts to generate the preprocessed dataset for testing:
bash setup_dataset_nuscenes_test.sh
bash setup_dataset_nuscenes_radar_test.sh
This will generate the preprocessed data in a folder called data/nuscenes_derived.
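To confirm the preprocessing finished, a quick check that the derived files exist can help (a minimal sketch; the exact subfolder layout inside data/nuscenes_derived is produced by the setup scripts and may differ):

```python
# Minimal sanity check: count the files written by the setup scripts.
from pathlib import Path

derived = Path('data/nuscenes_derived')
assert derived.is_dir(), f'{derived} not found; run the setup scripts first'
n_files = sum(1 for p in derived.rglob('*') if p.is_file())
print(f'{n_files} files under {derived}')
```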
To train CaFNet on the nuScenes dataset, run:
python main.py arguments_train_nuscenes.txt
To evaluate the model on the nuScenes dataset, run:
python test.py arguments_test_nuscenes.txt
You may need to adjust the path directories in the arguments files to match your local setup.
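Since CaFNet builds on BTS, the arguments files most likely follow the BTS convention: a plain-text file with one '--flag value' pair per line, consumed through argparse's fromfile support. Below is a hedged sketch of that pattern; the flag names are hypothetical placeholders, so check main.py for the options the repo actually defines.

```python
# Sketch of the BTS-style arguments-file convention (flag names are
# hypothetical; see main.py for the real options).
import argparse
import sys

parser = argparse.ArgumentParser(fromfile_prefix_chars='@')
# Let '--flag value' pairs share a line, as BTS-derived parsers typically do.
parser.convert_arg_line_to_args = lambda line: line.split()
parser.add_argument('--mode', type=str, default='train')                       # hypothetical
parser.add_argument('--data_path', type=str, default='data/nuscenes_derived')  # hypothetical
parser.add_argument('--log_directory', type=str, default='log')                # hypothetical

if len(sys.argv) == 2:
    # Mirrors invocations like `python main.py arguments_train_nuscenes.txt`
    args = parser.parse_args(['@' + sys.argv[1]])
else:
    args = parser.parse_args()
print(args)
```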
Our work builds on and uses code from radar-camera-fusion-depth and bts. We would like to thank the authors for making these libraries and frameworks available.