
Generalizable Cross-modality Medical Image Segmentation via Style Augmentation and Dual Normalization

by Ziqi Zhou, Lei Qi, Xin Yang, Dong Ni, Yinghuan Shi.

Introduction

This repository is for our CVPR 2022 paper 'Generalizable Cross-modality Medical Image Segmentation via Style Augmentation and Dual Normalization'.

Data Preparation

Dataset

BraTS 2018 | MMWHS | Abdominal-MRI | Abdominal-CT

File Organization

T2 as source domain

├── [Your BraTS2018 Path]
    ├── npz_data
        ├── train
            ├── t2_ss
                ├── sample1.npz, sample2.npz, xxx
            └── t2_sd
        ├── test
            ├── t1
                ├── test_sample1.npz, test_sample2.npz, xxx
            ├── t1ce
            └── flair
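Each `.npz` file holds one preprocessed sample. The exact array keys expected by this repository's data loader are not stated here; as a rough sketch (the key names `image` and `label` are assumptions for illustration, not taken from this repo), a training sample could be packaged like this:

```python
import numpy as np

# Hypothetical example of packaging a 2D slice and its mask as a .npz sample.
# The key names below are assumptions; check the repository's data loader
# for the actual names it expects.
image = np.random.rand(128, 128).astype(np.float32)   # normalized T2 slice
label = np.zeros((128, 128), dtype=np.uint8)          # binary mask (n_classes=2)
label[40:60, 50:70] = 1                               # toy foreground region

np.savez("sample1.npz", image=image, label=label)

# Loading it back gives a dict-like archive keyed by the saved names.
loaded = np.load("sample1.npz")
print(loaded["image"].shape, loaded["label"].max())   # (128, 128) 1
```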

Training and Testing

Train on source domain T2.

python -W ignore train_dn_unet.py \
  --train_domain_list_1 t2_ss --train_domain_list_2 t2_sd --n_classes 2 \
  --batch_size 16 --n_epochs 50 --save_step 10 --lr 0.001 --gpu_ids 0 \
  --result_dir ./results/unet_dn_t2 --data_dir [Your BraTS2018 Path]/npz_data
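The two training lists correspond to the source-similar (`t2_ss`) and source-dissimilar (`t2_sd`) style-augmented copies of the T2 data, each routed through its own normalization branch. As a minimal NumPy sketch of the dual-normalization idea (in the spirit of DSBN; the actual model keeps learnable affine parameters and running statistics per branch, which this toy version only hints at):

```python
import numpy as np

class DualBatchNorm:
    """Toy sketch of dual normalization: one batch-norm branch per style
    domain (e.g. source-similar vs. source-dissimilar), selected by a
    domain index. Illustrative only, not the repository's implementation."""

    def __init__(self, num_features, num_domains=2, eps=1e-5):
        self.eps = eps
        # Per-domain affine parameters (gamma, beta), one row per branch.
        self.gamma = np.ones((num_domains, num_features))
        self.beta = np.zeros((num_domains, num_features))

    def __call__(self, x, domain):
        # x: (batch, features). Normalize with the selected branch's
        # affine parameters; batch statistics are computed on the fly here.
        mean = x.mean(axis=0, keepdims=True)
        var = x.var(axis=0, keepdims=True)
        x_hat = (x - mean) / np.sqrt(var + self.eps)
        return self.gamma[domain] * x_hat + self.beta[domain]

dn = DualBatchNorm(num_features=4)
x = np.random.randn(16, 4)
y_ss = dn(x, domain=0)  # source-similar branch
y_sd = dn(x, domain=1)  # source-dissimilar branch
print(y_ss.shape)       # (16, 4)
```

At test time, the paper's model picks between the two normalization branches based on which statistics better match the unseen target domain; this sketch only shows the per-branch forward pass.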

Test on the target domains (T1, T1ce, and FLAIR).

python -W ignore test_dn_unet.py \
  --test_domain_list t1 t1ce flair --model_dir ./results/unet_dn_t2/model \
  --batch_size 32 --save_label --label_dir ./vis/unet_dn_t2 --gpu_ids 0 \
  --num_classes 2 --data_dir [Your BraTS2018 Path]/npz_data

Acknowledgement

The U-Net model is borrowed from Fed-DG. The Style Augmentation (SA) module is based on the nonlinear transformation in Models Genesis. The Dual-Normalization module is borrowed from DSBN. We thank all of them for their great contributions.
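To illustrate the kind of nonlinear intensity transformation used in Models Genesis that the SA module builds on, here is a hedged sketch: normalized intensities are remapped through a random monotonic Bezier curve. The control-point choices and lookup-table details below are illustrative assumptions, not the repository's exact code.

```python
import numpy as np

def bezier_intensity_transform(image, rng=None):
    """Sketch of a Models Genesis-style nonlinear intensity mapping:
    remap intensities in [0, 1] through a random cubic Bezier curve.
    Details (4 control points, sorted lookup table) are assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    # Fixed endpoints (0,0) and (1,1); two random inner control points.
    x1, y1, x2, y2 = rng.uniform(0, 1, size=4)
    t = np.linspace(0, 1, 1000)
    bx = 3 * (1 - t) ** 2 * t * x1 + 3 * (1 - t) * t ** 2 * x2 + t ** 3
    by = 3 * (1 - t) ** 2 * t * y1 + 3 * (1 - t) * t ** 2 * y2 + t ** 3
    # Sort the curve by x so np.interp gets a monotonic lookup table,
    # then remap every pixel through it.
    order = np.argsort(bx)
    return np.interp(image, bx[order], by[order])

img = np.random.rand(64, 64)                                   # toy slice in [0, 1)
aug = bezier_intensity_transform(img, rng=np.random.default_rng(0))
print(aug.shape, aug.min() >= 0.0, aug.max() <= 1.0)           # (64, 64) True True
```

Applying several such random curves to the source images yields the style-shifted copies that populate the source-similar and source-dissimilar training domains.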

Citation

If you find this project useful for your research, please consider citing:

@inproceedings{zhou2022dn,
  title={Generalizable Cross-modality Medical Image Segmentation via Style Augmentation and Dual Normalization},
  author={Zhou, Ziqi and Qi, Lei and Yang, Xin and Ni, Dong and Shi, Yinghuan},
  booktitle={CVPR},
  year={2022}
}