UNet++: A Nested U-Net Architecture for Medical Image Segmentation
This is an implementation of "UNet++: A Nested U-Net Architecture for Medical Image Segmentation" in the Keras deep learning framework (TensorFlow backend). UNet++, a nested U-Net architecture, is proposed for more precise segmentation: we introduce intermediate layers into the skip connections of U-Net, which naturally form multiple new up-sampling paths from different depths, ensembling U-Nets with various receptive fields.
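For intuition, here is a minimal Keras sketch of a single nested node: it concatenates all same-resolution predecessors with an up-sampled feature map from one level deeper, as described above. The helper names (`conv_block`, `nested_node`) are illustrative and not taken from this repository's code.

```python
from keras.layers import Concatenate, Conv2D, UpSampling2D

def conv_block(x, filters):
    # Two 3x3 convolutions, as in a standard U-Net block.
    x = Conv2D(filters, 3, padding='same', activation='relu')(x)
    x = Conv2D(filters, 3, padding='same', activation='relu')(x)
    return x

def nested_node(same_level_features, below_feature, filters):
    # A UNet++ node: concatenate every same-resolution predecessor
    # with the up-sampled feature map from one level deeper, then convolve.
    up = UpSampling2D(size=(2, 2))(below_feature)
    merged = Concatenate()(same_level_features + [up])
    return conv_block(merged, filters)
```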
Paper
UNet++: A Nested U-Net Architecture for Medical Image Segmentation
Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang
Biomedical Informatics, Arizona State University
Deep Learning in Medical Image Analysis (DLMIA) 2018. (Oral)
- View Publication
- View Code
- View Slides
- View Poster
```
@incollection{zhou2018unet++,
  title={UNet++: A Nested U-Net Architecture for Medical Image Segmentation},
  author={Zhou, Zongwei and Siddiquee, Md Mahfuzur Rahman and Tajbakhsh, Nima and Liang, Jianming},
  booktitle={Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support},
  pages={3--11},
  year={2018},
  publisher={Springer}
}
```
Requirements
Python 3.x, Keras 2.2.2, TensorFlow 1.4.1, and other common packages listed in requirements.txt.
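Because these versions are older and pinned, it can be worth verifying them after installation; a quick check:

```python
# Sanity-check the pinned framework versions (expected values from above).
import keras
import tensorflow as tf

print('Keras:', keras.__version__)            # expected: 2.2.2
print('TensorFlow:', tf.__version__)          # expected: 1.4.1
print('Backend:', keras.backend.backend())    # expected: tensorflow
```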
Available architectures
Three architectures are provided: U-Net (`Unet`), DLA (`Nestnet`), and UNet++ (`Xnet`); see the code examples below. Each can be paired with any of the backbones listed next.
Available backbones
Backbone model | Name | Weights |
---|---|---|
VGG16 | `vgg16` | `imagenet` |
VGG19 | `vgg19` | `imagenet` |
ResNet18 | `resnet18` | `imagenet` |
ResNet34 | `resnet34` | `imagenet` |
ResNet50 | `resnet50` | `imagenet`, `imagenet11k-places365ch` |
ResNet101 | `resnet101` | `imagenet` |
ResNet152 | `resnet152` | `imagenet`, `imagenet11k` |
ResNeXt50 | `resnext50` | `imagenet` |
ResNeXt101 | `resnext101` | `imagenet` |
DenseNet121 | `densenet121` | `imagenet` |
DenseNet169 | `densenet169` | `imagenet` |
DenseNet201 | `densenet201` | `imagenet` |
Inception V3 | `inceptionv3` | `imagenet` |
Inception ResNet V2 | `inceptionresnetv2` | `imagenet` |
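The Name column gives the string to pass as `backbone_name`, and the Weights column lists the valid `encoder_weights` values; for example, using the `Xnet` constructor shown later (passing `None` for random initialization, as in qubvel/segmentation_models):

```python
from segmentation_models import Xnet

# UNet++ with a DenseNet-121 encoder pre-trained on ImageNet
model = Xnet(backbone_name='densenet121', encoder_weights='imagenet')

# UNet++ with a randomly initialized ResNet-34 encoder
model = Xnet(backbone_name='resnet34', encoder_weights=None)
```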
Installation
```bash
git clone https://github.com/MrGiovanni/UNetPlusPlus.git
cd UNetPlusPlus
pip install -r requirements.txt
git submodule update --init --recursive
```
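A quick way to confirm the installation succeeded is to import the models from the repository root:

```python
# Run from the repository root; the import should succeed without errors.
from segmentation_models import Unet, Nestnet, Xnet
print('segmentation_models imported successfully')
```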
Running the scripts
Application 1: Data Science Bowl 2018
```bash
CUDA_VISIBLE_DEVICES=0 python DSB2018_application.py --run 1 \
    --arch Xnet \
    --backbone vgg16 \
    --init random \
    --decoder transpose \
    --input_rows 96 \
    --input_cols 96 \
    --input_deps 3 \
    --nb_class 1 \
    --batch_size 2048 \
    --weights None \
    --verbose 1
```
Application 2: Liver Tumor Segmentation Challenge (LiTS)
Application 3: Polyp Segmentation (ASU-Mayo)
Application 4: Lung Image Database Consortium image collection (LIDC-IDRI)
Application 5: Multiparametric Brain Tumor Segmentation (BRATS 2013)
```bash
CUDA_VISIBLE_DEVICES=0 python BRATS2013_application.py --run 1 \
    --arch Xnet \
    --backbone vgg16 \
    --init random \
    --decoder transpose \
    --input_rows 256 \
    --input_cols 256 \
    --input_deps 3 \
    --nb_class 1 \
    --batch_size 2048 \
    --weights None \
    --verbose 1
```
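Both application scripts share the same command-line flags. As a rough illustration of how such flags could map onto model construction (hypothetical wiring, not the scripts' actual code; training flags such as `--batch_size`, `--weights`, `--verbose`, and `--run` are omitted for brevity):

```python
import argparse
from segmentation_models import Unet, Nestnet, Xnet

ARCHS = {'Unet': Unet, 'Nestnet': Nestnet, 'Xnet': Xnet}

parser = argparse.ArgumentParser()
parser.add_argument('--arch', default='Xnet', choices=ARCHS)
parser.add_argument('--backbone', default='vgg16')
parser.add_argument('--init', default='random', choices=['random', 'imagenet'])
parser.add_argument('--decoder', default='transpose', choices=['transpose', 'upsampling'])
parser.add_argument('--input_rows', type=int, default=96)
parser.add_argument('--input_cols', type=int, default=96)
parser.add_argument('--input_deps', type=int, default=3)
parser.add_argument('--nb_class', type=int, default=1)
args = parser.parse_args()

# '--init random' corresponds to encoder_weights=None (no pre-training).
weights = None if args.init == 'random' else args.init
model = ARCHS[args.arch](
    backbone_name=args.backbone,
    encoder_weights=weights,
    decoder_block_type=args.decoder,
    input_shape=(args.input_rows, args.input_cols, args.input_deps),
    classes=args.nb_class,
)
```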
Code examples for your own data
Train a UNet++ structure (`Xnet` in the code):
```python
from segmentation_models import Unet, Nestnet, Xnet

# prepare data
x, y = ...  # input images and masks, scaled to the range [0, 1]

# prepare model
model = Xnet(backbone_name='resnet50', encoder_weights='imagenet', decoder_block_type='transpose')  # build UNet++
# model = Unet(backbone_name='resnet50', encoder_weights='imagenet', decoder_block_type='transpose')  # build U-Net
# model = Nestnet(backbone_name='resnet50', encoder_weights='imagenet', decoder_block_type='transpose')  # build DLA
model.compile('Adam', 'binary_crossentropy', ['binary_accuracy'])

# train model
model.fit(x, y)
```
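After training, inference follows the standard Keras API; a minimal sketch, assuming a single-channel sigmoid output thresholded at 0.5:

```python
import numpy as np

# Predict per-pixel foreground probabilities and binarize them into masks.
probs = model.predict(x)            # shape (n, rows, cols, 1), values in [0, 1]
masks = (probs > 0.5).astype(np.uint8)
```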
To do
- Add VGG backbone for UNet++
- Add ResNet backbone for UNet++
- Add ResNeXt backbone for UNet++
- Add DenseNet backbone for UNet++
- Add Inception backbone for UNet++
- Add Tiramisu and Tiramisu++
- Add FPN++
- Add Linknet++
- Add PSPNet++
Maintainers
- Zongwei Zhou, homepage: zongweiz.com
- Md Mahfuzur Rahman Siddiquee, github: mahfuzmohammad
- Nima Tajbakhsh, github: ntajbakhsh
Acknowledgments
This repository has been built upon qubvel/segmentation_models. We appreciate the effort of Pavel Yakubovskiy in providing well-organized segmentation models to the community. This research was supported in part by NIH under Award Number R01HL128785 and by ASU and Mayo Clinic through a Seed Grant and an Innovation Grant. The content is solely the responsibility of the authors and does not necessarily represent the official views of NIH.