wmh_ibbmTum

Winning method of the WMH segmentation challenge at MICCAI 2017


Instructions for running the winning method of the MICCAI 2017 WMH segmentation challenge.

Thanks to Weiqing, a Python 3 version is available here: https://github.com/FourierX9/wmh_ibbmTum.

Testing your cases

An easy-to-use demo can be downloaded here: https://drive.google.com/file/d/1tjk8CXjGYeddbaPCc1P5r-_ACUFcMut4/view?usp=sharing . It supports both single-modality (FLAIR only) and two-modality (FLAIR and T1) input. Detailed instructions are in the ReadMe inside; please have a look at it. Simply run:

python test_your_data.py
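For orientation, here is a minimal sketch of the kind of slice-wise inference loop such a test script performs, assuming a single FLAIR input, nibabel and Keras installed, and hypothetical file names (flair.nii.gz, unet_weights.h5). The actual preprocessing, cropping, and model files are defined by the demo itself and its ReadMe.

```python
# Illustrative sketch only, not the demo script itself. File names are placeholders.
import nibabel as nib
import numpy as np
from keras.models import load_model

flair_img = nib.load('flair.nii.gz')                     # hypothetical FLAIR volume
flair = flair_img.get_fdata().astype('float32')          # array of shape (X, Y, Z)

# Per-volume Gaussian normalization (zero mean, unit variance) before feeding the network.
flair = (flair - flair.mean()) / (flair.std() + 1e-8)

model = load_model('unet_weights.h5')                    # hypothetical pretrained U-Net weights

# Slice-wise 2D prediction along the axial axis. This assumes the in-plane size already
# matches the network input; the real demo crops/pads slices to the expected size.
prob = np.zeros_like(flair)
for z in range(flair.shape[2]):
    slice_2d = flair[:, :, z][np.newaxis, ..., np.newaxis]   # shape (1, X, Y, 1)
    prob[:, :, z] = model.predict(slice_2d)[0, ..., 0]

mask = (prob > 0.5).astype(np.uint8)                     # threshold probabilities into a binary WMH mask
nib.save(nib.Nifti1Image(mask, flair_img.affine), 'wmh_mask.nii.gz')
```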

The Docker image submitted to the WMH challenge is available via:

docker pull wmhchallenge/sysu_media

Some instructions for the public code

The public code is for researchers who want to build on and improve the current state of the art. Basic knowledge of Python is required.

Requirements: 
Keras 2.0.5, TensorFlow 1.8, Python 2.7, h5py

For the .npy files needed to run the leave-one-subject-out experiments, please download them via: https://drive.google.com/open?id=1m0H9vbFV8yijvuTsAqRAUQGGitanNw_k . This is the preprocessed data we used for both the challenge and our NeuroImage paper. The preprocessing steps can be found in the testing code; we followed the same procedures. The number of slices per subject was reduced slightly by removing the first and last few slices: subjects in Utrecht and Singapore were reduced to 38 slices each, and subjects in GE3T to 63 slices. The subject order was generated by reading all directory names in each subset and calling dir.sort(), so for example the order in Utrecht is: 0, 11, 17, 19, 2, 21, ... The resulting structure is:

Utrecht = data[0:760, ...], Singapore = data[760:1520, ...], GE3T = data[1520:2780, ...]
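As a concrete illustration, the snippet below loads the two .npy files and splits them back into the three subsets using exactly these index ranges; it assumes the files sit in the working directory, and the sorting comment mirrors the dir.sort() behaviour described above.

```python
# Minimal sketch: load the preprocessed arrays and recover the three subsets.
import numpy as np

images = np.load('images_three_datasets_sorted.npy')   # first axis has 2780 slices in total
masks = np.load('masks_three_datasets_sorted.npy')

# 20 subjects per site; 38 slices per subject for Utrecht and Singapore, 63 for GE3T.
utrecht_imgs,   utrecht_masks   = images[0:760],     masks[0:760]
singapore_imgs, singapore_masks = images[760:1520],  masks[760:1520]
ge3t_imgs,      ge3t_masks      = images[1520:2780], masks[1520:2780]

# Within each subset the subjects follow lexicographic directory order, e.g. for Utrecht:
# sorted(['0', '2', '11', '17', '19', '21', ...]) -> ['0', '11', '17', '19', '2', '21', ...]
print(utrecht_imgs.shape, singapore_imgs.shape, ge3t_imgs.shape)
```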

Descriptions of the Python code:

train_leave_one_out.py: train U-Net models under the leave-one-subject-out protocol. Options allow training with a single modality or without data augmentation (see the indexing sketch after this list).
test_leave_one_out.py: test U-Net models under the leave-one-subject-out protocol. The code also includes the preprocessing of the original data.
evaluation.py: evaluation code provided by the challenge organizers. This code has some numerical issues in Python 3+ when calculating the Hausdorff distance (a workaround sketch follows this list).
images_three_datasets_sorted.npy: preprocessed images covering Utrecht, Singapore and GE3T, with the patients in the sorted order described above.
masks_three_datasets_sorted.npy: preprocessed masks corresponding to the images above, in the same sorted patient order.
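For reference, the sketch below shows one way to build leave-one-subject-out index splits over the sorted arrays, using the subset offsets and per-subject slice counts given above. The repository's own train_leave_one_out.py / test_leave_one_out.py implement the full protocol (training, augmentation, ensembling), so this is only an illustration of the indexing.

```python
# Illustrative leave-one-subject-out indexing over the sorted .npy arrays.
import numpy as np

images = np.load('images_three_datasets_sorted.npy')
masks = np.load('masks_three_datasets_sorted.npy')

# (start offset, slices per subject, number of subjects) for each subset.
SUBSETS = {'Utrecht': (0, 38, 20), 'Singapore': (760, 38, 20), 'GE3T': (1520, 63, 20)}

def leave_one_out_split(site, subject_idx):
    """Return (train_idx, test_idx) with one subject of the given site held out."""
    start, n_slices, _ = SUBSETS[site]
    test_idx = np.arange(start + subject_idx * n_slices,
                         start + (subject_idx + 1) * n_slices)
    train_idx = np.setdiff1d(np.arange(len(images)), test_idx)
    return train_idx, test_idx

# Example: hold out the third subject (in sorted directory order) of the Singapore subset.
train_idx, test_idx = leave_one_out_split('Singapore', 2)
x_train, y_train = images[train_idx], masks[train_idx]
x_test, y_test = images[test_idx], masks[test_idx]
```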
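Regarding the Hausdorff-distance issue noted above for evaluation.py, one possible workaround (not the challenge's official implementation) is to compute the modified 95th-percentile Hausdorff distance directly with SciPy distance transforms, as in the sketch below; the function and variable names are chosen for illustration only.

```python
# Alternative HD95 computation using SciPy; assumes both masks are non-empty binary arrays.
import numpy as np
from scipy import ndimage

def hausdorff_95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Modified (95th percentile) Hausdorff distance between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Border voxels: the mask minus its binary erosion.
    pred_border = pred ^ ndimage.binary_erosion(pred)
    gt_border = gt ^ ndimage.binary_erosion(gt)
    # Euclidean distance from every voxel to the nearest border voxel of the other mask.
    dist_to_gt = ndimage.distance_transform_edt(~gt_border, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~pred_border, sampling=spacing)
    surface_distances = np.hstack((dist_to_gt[pred_border], dist_to_pred[gt_border]))
    return np.percentile(surface_distances, 95)
```

This symmetric, combined-percentile formulation is one common variant; implementations differ in whether they take the percentile over the combined surface distances or the maximum of the two directed percentiles.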

Citation

The detailed description of our method is published in NeuroImage. Please cite our work if you find the code useful for your research.