GI Tract Segmentation

Author: Fernández Hernández, Alberto

Date: 2022-07-13

Summary 📖

The main purpose is to create a model that automatically segments the stomach and intestines (small and large bowel) on MRI scans. Outlining the position of these organs makes it possible to adjust the direction of the X-ray beams so that the dose delivered to the tumor increases while the main organs are avoided. Automating this segmentation would make treatments much faster and allow more patients to receive more effective treatment. The MRI scans come from actual cancer patients who had 1-5 MRI scans on separate days during their radiation treatment.

Architecture diagram

Deep learning model

Two models are proposed:

  • UNet vs. Feature Pyramid Network (FPN)
| Model | Number of parameters | Backbone | Inference time (GPU) * |
|-------|----------------------|----------|------------------------|
| UNet  | 8.7 M                | EfficientNet-B1 | 2:50 min |
| FPN   | 8.2 M                | EfficientNet-B1 | 3:06 min |

* Inference over 3,759 non-empty-mask images, with batch size 1.
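
Both architectures can be built with the Segmentation Models PyTorch (smp) library listed under Tools. Below is a minimal sketch, assuming ImageNet encoder weights, single-channel MRI slices and three output classes (large bowel, small bowel, stomach); it is illustrative, not the project's exact configuration:

```python
import segmentation_models_pytorch as smp

# Assumed configuration: EfficientNet-B1 encoder, ImageNet weights,
# grayscale input (1 channel) and 3 output classes.
unet = smp.Unet(
    encoder_name="efficientnet-b1",
    encoder_weights="imagenet",
    in_channels=1,
    classes=3,
)

fpn = smp.FPN(
    encoder_name="efficientnet-b1",
    encoder_weights="imagenet",
    in_channels=1,
    classes=3,
)

# Rough parameter counts to compare against the table above.
for name, model in [("UNet", unet), ("FPN", fpn)]:
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.1f} M parameters")
```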

Non-empty masks

| Model | Dice score: large bowel | Dice score: small bowel | Dice score: stomach |
|-------|-------------------------|-------------------------|---------------------|
| UNet  | 0.81                    | 0.79                    | 0.90                |
| FPN   | 0.73                    | 0.73                    | 0.89                |

Empty masks

| Model | Dice score: large bowel | Dice score: small bowel | Dice score: stomach |
|-------|-------------------------|-------------------------|---------------------|
| UNet  | 0.99                    | 0.99                    | 0.99                |
| FPN   | 0.95                    | 0.95                    | 0.99                |
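
The Dice score reported in the two tables above measures the overlap between a predicted mask and the ground-truth mask. A minimal sketch of a per-class Dice computation on binary masks is shown below; the threshold and smoothing term are assumptions, not necessarily the exact evaluation code used here:

```python
import torch

def dice_score(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Dice = 2 * |pred ∩ target| / (|pred| + |target|), on binary masks."""
    pred = (pred > 0.5).float()
    target = target.float()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: two 4x4 binary masks with partial overlap.
pred = torch.zeros(4, 4); pred[1:3, 1:3] = 1
target = torch.zeros(4, 4); target[2:4, 1:3] = 1
print(dice_score(pred, target))  # tensor(0.5000)
```

Note that with the smoothing term, two empty masks score 1.0, which is why correctly predicting nothing on empty-mask slices yields the near-perfect scores in the "Empty masks" table.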

Model monitoring: Weights & Biases

Click here to check the model monitoring dashboard
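
A minimal sketch of how training metrics can be streamed to Weights & Biases with the wandb library listed under Tools; the project name, config values and metric names are illustrative assumptions:

```python
import wandb

# Hypothetical project/config values, for illustration only.
run = wandb.init(
    project="gi-tract-segmentation",
    config={"model": "Unet", "backbone": "efficientnet-b1", "lr": 1e-3},
)

for epoch in range(10):
    train_loss = 1.0 / (epoch + 1)   # placeholder metric
    val_dice = 0.5 + 0.04 * epoch    # placeholder metric
    wandb.log({"epoch": epoch, "train/loss": train_loss, "val/dice": val_dice})

run.finish()
```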

Preview

Dataset source

UW-Madison GI Tract Image Segmentation - Kaggle dataset
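
The competition's segmentation annotations are provided as run-length-encoded (RLE) strings in the training CSV. Below is a minimal decoder sketch following the common Kaggle convention of 1-indexed "start length" pairs; whether the flattening is row-major or column-major for this dataset is an assumption to verify against the dataset description:

```python
import numpy as np

def rle_decode(rle, shape):
    """Decode a Kaggle-style RLE string ('start length start length ...',
    1-indexed positions) into a binary mask of the given (height, width)."""
    mask = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    if isinstance(rle, str) and rle.strip():
        values = np.asarray(rle.split(), dtype=int)
        starts, lengths = values[0::2] - 1, values[1::2]
        for start, length in zip(starts, lengths):
            mask[start:start + length] = 1
    # Assumption: pixels are flattened row by row.
    return mask.reshape(shape)

# Example: pixels 1-3 and 10-11 set on a 4x4 grid.
print(rle_decode("1 3 10 2", (4, 4)))
```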

Tools

  • Storage: Google Cloud Storage
  • Code: Python 3.7 + Google Cloud Functions
  • Libraries:
    • PyTorch
    • Albumentations
    • Segmentation Models PyTorch (smp)
    • Pydicom
    • scikit-learn
    • Matplotlib
    • Streamlit
    • scikit-image
    • tqdm
    • wandb
    • OpenCV
    • NumPy
    • google-cloud
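
As an example of how some of the listed libraries fit together, here is a sketch of an Albumentations augmentation pipeline for the MRI slices; the image size and the specific transforms are assumptions, not the project's exact configuration:

```python
import albumentations as A
from albumentations.pytorch import ToTensorV2

# Hypothetical augmentation pipeline; transforms and sizes are illustrative.
train_transforms = A.Compose([
    A.Resize(320, 320),
    A.HorizontalFlip(p=0.5),
    A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.1, rotate_limit=15, p=0.5),
    A.RandomBrightnessContrast(p=0.3),
    ToTensorV2(),
])

# Applied to an image and its segmentation mask together:
# augmented = train_transforms(image=image, mask=mask)
# image_t, mask_t = augmented["image"], augmented["mask"]
```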

References

UW-Madison GI Tract Image Segmentation - Kaggle dataset

Segmentation Models PyTorch library

A prior knowledge guided deep learning based semi-automatic segmentation for complex anatomy on MRI

A2-FPN for Semantic Segmentation of Fine-Resolution Remotely Sensed Images

Weights & Biases