/brain-mri-semantic-segmentation

Implementing a U-Net for image segmentation of brain MRI scans.


Semantic Segmentation of Brain Tumours

This repository implements two semantic segmentation models: UNet-3 (3 skip connections) and UNet-4 (4 skip connections). The model definitions can be found in ./imgseg/networks.py.
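As a rough illustration of what a skip connection does (a minimal numpy sketch for intuition, not the repository's PyTorch code; the shapes and function name are assumptions): the encoder feature map is concatenated channel-wise with the upsampled decoder feature map before the next convolution block.

```python
import numpy as np

def skip_concat(encoder_feat, decoder_feat):
    """Concatenate an encoder feature map with a same-sized decoder
    feature map along the channel axis, as a U-Net skip connection
    does before the next convolution block.

    Both arrays have shape (channels, height, width).
    """
    assert encoder_feat.shape[1:] == decoder_feat.shape[1:]
    return np.concatenate([encoder_feat, decoder_feat], axis=0)

# Hypothetical feature maps at one resolution level of the network.
enc = np.zeros((64, 32, 32))   # encoder output
dec = np.zeros((64, 32, 32))   # upsampled decoder output
merged = skip_concat(enc, dec)
print(merged.shape)            # (128, 32, 32)
```

UNet-3 vs. UNet-4 then simply differ in how many resolution levels perform this merge.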

Documentation

The whole procedure, from start to finish, is documented in the notebook documentation.ipynb.

Models

The models can be downloaded from here: https://drive.google.com/drive/folders/1nm74FqZckW0pwa-L1O_SwnRV7-iAA-sR?usp=sharing

During evaluation, the segmentation model named dice_unet4_50eps_16bs.pth performed best, and it is the recommended download for inference. The evaluation was performed on held-out patients from the Kaggle dataset used for training; there is no guarantee that the model performs well on images originating from a different distribution.

  • U-Net with 4 skip connections
  • Trained with standard Dice loss
  • 50 epochs in total
  • Batch size of 16
  • Adam optimizer with additional step-wise learning rate scheduler
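The Dice loss mentioned above can be sketched as follows (a minimal numpy version for illustration only; the repository trains with a PyTorch implementation, and the smoothing constant here is an assumption on my part):

```python
import numpy as np

def dice_loss(pred, target, smooth=1.0):
    """Soft Dice loss: 1 - 2*|P∩T| / (|P| + |T|).

    pred:   predicted mask probabilities in [0, 1]
    target: binary ground-truth mask
    smooth: small constant to avoid division by zero (assumed value)
    """
    intersection = np.sum(pred * target)
    dice = (2.0 * intersection + smooth) / (np.sum(pred) + np.sum(target) + smooth)
    return 1.0 - dice

# A perfect prediction gives a loss of 0.
mask = np.array([[0, 1], [1, 1]], dtype=float)
print(dice_loss(mask, mask))  # 0.0
```

Because the Dice score directly measures mask overlap, it is less sensitive to the heavy class imbalance between tumour and background pixels than plain per-pixel cross-entropy.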

Installation

Manual

  1. Clone the project locally via SSH:
git clone git@github.com:SimonStaehli/image-segmentation.git

or via HTTPS:

git clone https://github.com/SimonStaehli/image-segmentation.git
  2. Download the models from Google Drive and add them to the folder ./models: https://drive.google.com/drive/folders/1nm74FqZckW0pwa-L1O_SwnRV7-iAA-sR?usp=sharing

  3. Install the required packages:

pip install -r requirements.txt

Note: if there is a problem with the installation, please refer to the official PyTorch installation guide: https://pytorch.org/get-started/locally/

With Docker

docker build . -f Dockerfile -t brain-mri-semantic-segmentation
docker run -it brain-mri-semantic-segmentation

Run Locally

Requires that all installation steps above have been completed.

The following command segments all images in the folder image_in and stores the masks in the folder image_out.

python inference.py -i image_in -o image_out --overlay True
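The invocation above suggests a command-line interface roughly like the following (a hypothetical sketch of how inference.py might parse its flags; the actual script may differ, and note that passing --overlay True implies the flag takes a string value rather than acting as a boolean switch):

```python
import argparse

def parse_args(argv=None):
    """Parse the flags used in the README's example invocation.

    The long option names (--input, --output) are assumptions;
    only -i, -o, and --overlay appear in the documented command.
    """
    parser = argparse.ArgumentParser(description="Segment all images in a folder.")
    parser.add_argument("-i", "--input", required=True,
                        help="folder containing the input images")
    parser.add_argument("-o", "--output", required=True,
                        help="folder where the predicted masks are written")
    parser.add_argument("--overlay", default="False",
                        help="'True' to also save the masks overlaid on the inputs")
    return parser.parse_args(argv)

args = parse_args(["-i", "image_in", "-o", "image_out", "--overlay", "True"])
overlay = args.overlay.lower() == "true"  # string-to-bool, matching the CLI example
print(args.input, args.output, overlay)
```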

References

Dataset used for training: https://www.kaggle.com/datasets/mateuszbuda/lgg-mri-segmentation

Kaggle Notebook used for training: https://www.kaggle.com/code/simonstaehli/image-segmentation