SegLossBias

Code for the paper: Do we really need Dice? The hidden region-size biases of segmentation losses. Medical Image Analysis (MedIA), 2023. https://www.sciencedirect.com/science/article/abs/pii/S136184152300275X


The hidden label-marginal biases of segmentation losses

Code for the paper: The hidden label-marginal biases of segmentation losses.

arXiv: https://arxiv.org/abs/2104.08717

Table of Contents

  • Prerequisites
  • Prepare dataset
  • Quick start
  • Configuration system
  • Training
  • License

Prerequisites

Note: I have only tested the code with Python 3.8 and 3.9. An environment manager such as conda or virtualenv is strongly recommended.

  1. Install PyTorch and OpenCV builds suited to your environment:

    torch==1.7.1
    torchvision==0.8.2
    opencv-python==4.5.1.48
    
  2. Install the other dependencies:

    pip install -r requirements.txt
    
  3. Install the library

    pip install -e .
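
After these steps, a quick sanity check (a minimal sketch, assuming the pinned versions above) can confirm that the dependencies are importable:

    # Sanity check (sketch only): confirm the pinned dependencies import
    # and report their versions.
    import cv2
    import torch
    import torchvision

    print("torch        :", torch.__version__)        # expected 1.7.1
    print("torchvision  :", torchvision.__version__)  # expected 0.8.2
    print("opencv-python:", cv2.__version__)          # expected 4.5.1.48
    print("CUDA available:", torch.cuda.is_available())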
    

Prepare dataset

  • Retinal Lesions:

    This dataset is freely available, but you need to submit a request through the authors' repository. We provide the experimental data split files [link].

    Here is the data structure to reproduce:

    ├── data
    │   └── retinal-lesions-v20191227
    │       ├── classes.txt
    │       ├── images_896x896
    │       ├── lesion_segs_896x896
    │       ├── test.txt
    │       ├── train.txt
    │       └── val.txt
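
    If it helps, here is an optional layout check (a sketch; the root path is assumed to follow the tree above):

    # Optional check (sketch only) that the Retinal Lesions data is in place.
    from pathlib import Path

    root = Path("data/retinal-lesions-v20191227")
    expected = ["classes.txt", "images_896x896", "lesion_segs_896x896",
                "train.txt", "val.txt", "test.txt"]
    for name in expected:
        status = "ok" if (root / name).exists() else "MISSING"
        print(f"{status:>7}  {root / name}")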
    
  • Cityscapes:

    To download the dataset, please refer to the official site.

    Here is the recommended data structure:

    ├── Data
    │   └── cityscapes
    │       ├── gtFine
    │       ├── leftImg8bit
    │       ├── license.txt
    │       └── README
    

    By convention, `**labelTrainIds.png` files are used for Cityscapes training and validation. We use the scripts in mmsegmentation to generate them.
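
    As an optional check (a sketch; the gtFine path is assumed to follow the structure above and may need adjusting), you can count the generated annotation maps:

    # Optional check (sketch only) that *labelTrainIds.png files were generated.
    from pathlib import Path

    gt_root = Path("Data/cityscapes/gtFine")  # adjust to your local layout
    label_maps = sorted(gt_root.rglob("*labelTrainIds.png"))
    print(f"Found {len(label_maps)} labelTrainIds.png files under {gt_root}")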

  • 3D medical imaging datasets (e.g., Livers & Tumors, Pancreas & Tumors, AMOS): please refer to nnUNetv1.

Quick start

Testing with trained model

We provide the two best models we have trained so far on Retinal Lesions and Cityscapes for quick testing.

  • Retinal Lesions: model

    python tools/test_net.py --config-file ./configs/retinal-lesions/unet_bce-l1_896x896.yaml \
        TEST.CHECKPOINT_PATH ./trained/retinal-lesions_r50unet_896x896_bce-l1.pth (OR_YOUR_LOCAL_PATH)
    
  • Cityscapes: model

    python tools/test_net.py --config-file ./configs/cityscapes/r50fpn_512x1024_ce_l1.yaml \
        TEST.CHECKPOINT_PATH ./trained/cityscapes_r50fpn_512x1024_ce-l1.pth TEST.SPLIT val
    
  • AMOS: model. This checkpoint needs to be tested with nnUNetv1.

Configuration system

We use YACS to define and manage all the configurations. In a nutshell, you typically create a YAML configuration file for each experiment.

All configurable options are defined in defaults.py with default values. Note that each YAML configuration file only overrides the options that change in that experiment. You can also override options from the command line using a list of fully-qualified key-value pairs.
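
For illustration, here is a minimal, generic YACS sketch. It is simplified and not this repo's actual defaults.py; the keys below are only examples borrowed from the test commands above:

    # Minimal YACS sketch (illustrative only; see defaults.py for the real options).
    from yacs.config import CfgNode as CN

    _C = CN()
    _C.TEST = CN()
    _C.TEST.SPLIT = "test"            # default split to evaluate
    _C.TEST.CHECKPOINT_PATH = ""      # path to a trained checkpoint

    def get_cfg():
        """Return a fresh copy of the default configuration."""
        return _C.clone()

    cfg = get_cfg()
    # A per-experiment YAML file overrides only the options it lists, e.g.:
    # cfg.merge_from_file("configs/cityscapes/r50fpn_512x1024_ce_l1.yaml")
    # Command-line overrides are passed as fully-qualified key/value pairs:
    cfg.merge_from_list(["TEST.SPLIT", "val",
                         "TEST.CHECKPOINT_PATH", "./trained/model.pth"])
    cfg.freeze()
    print(cfg)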

Training

The configuration options may seem confusing at first; the provided configuration files are a quick way to start experimenting.

  • Retinal Lesions

    python tools/train_net.py --config-file ./configs/retinal-lesions/unet_bce-l1_896x896.yaml

  • Cityscapes

    python tools/train_net.py --config-file ./configs/cityscapes/r50fpn_512x1024_ce_l1.yaml
  • 3D medical imaging datasets (e.g., Livers & Tumors, Pancreas & Tumors, AMOS): plug the implemented losses under losses_nnunet into nnUNetv1.
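
For reference, here is a simplified PyTorch illustration of the general idea behind the ce_l1 / bce-l1 configurations: a cross-entropy term plus an L1 penalty on the predicted label marginals. This is a sketch only; the function name and weight are made up, and the actual losses live in this repository (and under losses_nnunet for the 3D experiments).

    # Illustrative sketch only -- not the repo's implementation. Cross-entropy
    # plus an L1 penalty on predicted label marginals (per-class region proportions).
    import torch
    import torch.nn.functional as F

    def ce_plus_marginal_l1(logits, target, weight=0.1):
        """logits: (B, C, H, W) raw scores; target: (B, H, W) class indices."""
        ce = F.cross_entropy(logits, target)
        probs = torch.softmax(logits, dim=1)
        pred_marginal = probs.mean(dim=(2, 3))                   # (B, C)
        onehot = F.one_hot(target, num_classes=logits.shape[1])  # (B, H, W, C)
        gt_marginal = onehot.permute(0, 3, 1, 2).float().mean(dim=(2, 3))
        l1 = (pred_marginal - gt_marginal).abs().sum(dim=1).mean()
        return ce + weight * l1

    # Example with toy tensors (19 classes, as in Cityscapes).
    logits = torch.randn(2, 19, 64, 64)
    target = torch.randint(0, 19, (2, 64, 64))
    print(ce_plus_marginal_l1(logits, target))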

License

This work is licensed under MIT License. See LICENSE for details.

If you find this paper/code useful for your research, please consider citing:

    @misc{liu2021hidden,
        title={The hidden label-marginal biases of segmentation losses},
        author={Bingyuan Liu and Jose Dolz and Adrian Galdran and Riadh Kobbi and Ismail Ben Ayed},
        year={2021},
        eprint={2104.08717},
        archivePrefix={arXiv},
        primaryClass={cs.CV}
    }