EvPSNet is the first approach to tackle the task of uncertainty-aware panoptic segmentation, aiming to provide per-pixel semantic and instance labels together with per-pixel panoptic uncertainty estimates.
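For intuition, one standard way to obtain such per-pixel uncertainties is the evidential (Dirichlet-based) formulation, where the network predicts non-negative evidence per class and the uncertainty follows in closed form. The sketch below is purely illustrative and is not the repository's actual code:

```python
import numpy as np

def dirichlet_uncertainty(evidence):
    """Per-pixel class probabilities and uncertainty from non-negative
    evidence maps of shape (K, H, W), following the common evidential
    deep-learning formulation: alpha = evidence + 1, u = K / sum(alpha)."""
    K = evidence.shape[0]
    alpha = evidence + 1.0                # Dirichlet concentration parameters
    S = alpha.sum(axis=0, keepdims=True)  # Dirichlet strength per pixel
    probs = alpha / S                     # expected class probabilities
    uncertainty = K / S[0]                # in (0, 1]; 1 means no evidence at all
    return probs, uncertainty

# Toy example: 3 classes, one confident pixel and one pixel with zero evidence.
ev = np.array([[[9.0, 0.0]], [[0.0, 0.0]], [[0.0, 0.0]]])  # K=3, H=1, W=2
probs, u = dirichlet_uncertainty(ev)
# Confident pixel: u = 3/12 = 0.25; zero-evidence pixel: u = 1.0
```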
This repository contains the PyTorch implementation of our RAL'2023 paper Uncertainty-aware Panoptic Segmentation. The repository builds on EfficientPS, mmdetection and gen-efficientnet-pytorch codebases.
If you find the code useful for your research, please consider citing our paper:
@article{sirohi2023uncertainty,
  title={Uncertainty-aware panoptic segmentation},
  author={Sirohi, Kshitij and Marvi, Sajad and B{\"u}scher, Daniel and Burgard, Wolfram},
  journal={IEEE Robotics and Automation Letters},
  volume={8},
  number={5},
  pages={2629--2636},
  year={2023},
  publisher={IEEE}
}
- Linux
- Python 3.7
- PyTorch 1.7
- CUDA 10.2
- GCC 7 or 8
IMPORTANT NOTE: These requirements are not necessarily mandatory. However, we have only tested the code under the above settings and cannot provide support for other setups.
a. Create a conda virtual environment from the provided environment.yml and activate it.
git clone https://github.com/kshitij3112/EvPSNet.git
cd EvPSNet
conda env create -n EvPSnet_env --file=environment.yml
conda activate EvPSnet_env
b. Install all other dependencies using pip:
pip install -r requirements.txt
c. Install EfficientNet implementation
cd efficientNet
python setup.py develop
d. Install EvPSNet implementation
cd ..
python setup.py develop
It is recommended to symlink the dataset root to $EvPSNet/data.
If your folder structure is different, you may need to change the corresponding paths in config files.
EvPSNet
├── mmdet
├── tools
├── configs
└── data
└── cityscapes
├── annotations
├── train
├── val
├── stuffthingmaps
├── cityscapes_panoptic_val.json
└── cityscapes_panoptic_val
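Assuming your Cityscapes download lives elsewhere, the symlink can be created as follows (the source path is a placeholder to adapt to your setup):

```shell
# Symlink an existing Cityscapes root into the repository (example paths).
mkdir -p data
ln -sfn /path/to/cityscapes data/cityscapes
ls -l data/cityscapes   # should point at your dataset root
```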
The Cityscapes annotations have to be converted into the aforementioned format using tools/convert_datasets/cityscapes.py:
python tools/convert_cityscapes.py ROOT_DIRECTORY_OF_CITYSCAPES ./data/cityscapes/
cd ..
git clone https://github.com/mcordts/cityscapesScripts.git
cd cityscapesScripts/cityscapesscripts/preparation
python createPanopticImgs.py --dataset-folder path_to_cityscapes_gtFine_folder --output-folder ../../../EvPSNet/data/cityscapes --set-names val
Edit the appropriate config file in the configs folder. Train with a single GPU:
python tools/train.py configs/EvPSNet_unc_singlegpu.py --work_dir work_dirs/checkpoints --validate
Train with multiple GPUs:
./tools/dist_train.sh configs/EvPSNet_unc_mutigpu.py ${GPU_NUM} --work_dir work_dirs/checkpoints --validate
- --resume_from ${CHECKPOINT_FILE}: Resume from a previous checkpoint file.
Test with a single GPU:
python tools/test.py configs/EvPSNet_unc_singlegpu.py ${CHECKPOINT_FILE} --eval panoptic
Test with multiple GPUs:
./tools/dist_test.sh configs/EvPSNet_unc_mutigpu.py ${CHECKPOINT_FILE} ${GPU_NUM} --eval panoptic
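The panoptic evaluation reports the standard Panoptic Quality (PQ) metric: per class, the sum of IoUs over matched segments divided by |TP| + ½|FP| + ½|FN|, where a match requires IoU > 0.5. A toy illustration of the formula (not the repository's evaluation code):

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    """PQ for a single class: matched_ious holds the IoUs of matched
    (true-positive) segment pairs; num_fp/num_fn count unmatched
    predicted/ground-truth segments."""
    tp = len(matched_ious)
    denom = tp + 0.5 * num_fp + 0.5 * num_fn
    return sum(matched_ious) / denom if denom > 0 else 0.0

# Two matched segments (IoU 0.8 and 0.6) and one missed ground-truth segment:
pq = panoptic_quality([0.8, 0.6], num_fp=0, num_fn=1)  # (0.8+0.6)/2.5 = 0.56
```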
- tools/cityscapes_inference.py: saves predictions in the official Cityscapes panoptic format.
- tools/cityscapes_save_predictions.py: saves color visualizations.
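The official panoptic format (as produced by createPanopticImgs.py and the COCO panoptic API) encodes each segment id in a PNG's RGB channels as id = R + 256·G + 256²·B. A small sketch for decoding such an image, shown here only for illustration:

```python
import numpy as np

def decode_panoptic_png(rgb):
    """Decode a COCO-panoptic-format image (H x W x 3, uint8) into a
    segment-id map using id = R + 256*G + 256*256*B."""
    rgb = rgb.astype(np.uint32)  # avoid uint8 overflow during the sum
    return rgb[..., 0] + 256 * rgb[..., 1] + 256 * 256 * rgb[..., 2]

# Toy pixel with (R, G, B) = (1, 2, 0) -> segment id 1 + 2*256 = 513
img = np.zeros((1, 1, 3), dtype=np.uint8)
img[0, 0] = (1, 2, 0)
seg = decode_panoptic_png(img)
```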
- This is an implementation of EvPSNet in PyTorch. Please refer to the metrics reported in Uncertainty-aware Panoptic Segmentation when making comparisons.
We have used utility functions from other open-source projects. We especially thank the authors of EfficientPS, mmdetection, and gen-efficientnet-pytorch.
For academic usage, the code is released under the GPLv3 license. For any commercial purpose, please contact the authors.