retinal_vessel_segmentation

Retinal vessel segmentation with a deep U-Net


README 😬

No seriously, read me, I'm useful 😅

Abstract

This directory contains part of a project for the Image Processing course at UNIBO.
The project concerns the segmentation of retinal vessels in images taken from different patients.
The dataset used is Digital Retinal Images for Vessel Extraction (DRIVE), available in this folder.
We faced this challenge with two main approaches: machine learning (a U-Net) and classical segmentation techniques (mainly with ImageJ). In this repo you will find only the machine learning part of the two.
The machine learning model is implemented in PyTorch with the support of torchvision.

Repository structure

The repository contains the following files and folders:

  • dataset: folder with the images of the DRIVE database, some preprocessed versions, and the outputs of the pretrained models
  • models: folder with the saved pretrained PyTorch models (the files actually contain only the parameters, i.e. the state dicts)
  • eval.txt: ImageJ macro to evaluate the output of the network with specificity, precision, ...
  • evaluation*.txt: evaluation of the outputs of the networks pretrained for 1500 epochs, both for the model with and without edges
  • unet_for_the_win.py: Python file where the model is implemented

N.B. The database has 20 images for training (with associated ground truths) and 20 for testing (without ground truths). To evaluate the results of our model, we split the training folder into 15 training / 5 test images, so that ground truths are also available for the test images.
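For reference, a minimal sketch of such a split is shown below; the folder path, file extension, and random seed are assumptions for illustration, not the exact values used in the project.

```python
import random
from pathlib import Path

# Hypothetical path and extension: adjust to the actual layout of the dataset folder.
images = sorted(Path("dataset/training/images").glob("*.tif"))

random.seed(0)          # arbitrary seed, only to make the split reproducible
random.shuffle(images)

train_images, test_images = images[:15], images[15:]  # 15 training / 5 test
```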

Model

The model that we implemented is a U-Net with four encoding layers, each downscaled with max pooling, followed by four decoding steps with a skip connection at every max pooling level.
Moreover, we decided to add a channel to the RGB input: a grayscale edge-detection image computed from the RGB one. Therefore, the network takes 4 input channels (RGB + edges) instead of only 3. The reason for this choice is that the network should be helped in detecting the smaller vessels, which are also the most challenging to segment.
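As a rough illustration of this preprocessing, the sketch below builds a 4-channel input from an RGB fundus image; the exact edge detector used in the project is not specified here, so a Canny edge map via OpenCV is assumed, and the file path and thresholds are placeholders.

```python
import cv2
import numpy as np
import torch

def rgb_plus_edges(image_path: str) -> torch.Tensor:
    """Load an RGB image and stack a grayscale edge map as a fourth channel."""
    rgb = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)  # H x W x 3, uint8
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)                   # H x W, uint8
    edges = cv2.Canny(gray, 50, 150)                               # H x W edge map (placeholder thresholds)
    stacked = np.dstack([rgb, edges]).astype(np.float32) / 255.0   # H x W x 4, scaled to [0, 1]
    return torch.from_numpy(stacked).permute(2, 0, 1)              # 4 x H x W tensor for the network
```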
In fact, the unet_for_the_win.py file contains two different networks (with/without edges) with the same structure: one model takes as input the RGB image plus the channel with the grayscale image of the detected edges (4 channels), while the other takes only the RGB image (3 channels). The models in the models folder were all trained with the edge channel, with the support of a GPU.
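For readers who do not want to open unet_for_the_win.py right away, here is a minimal, self-contained sketch of a U-Net with this shape (four encoder blocks with max pooling, a bottleneck, four decoder blocks with skip connections, 4 input channels). Layer widths and other details are illustrative and may differ from the actual implementation.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, in_channels=4, out_channels=1):
        super().__init__()
        # Four encoding blocks, each followed by 2x2 max pooling.
        self.enc1 = double_conv(in_channels, 64)
        self.enc2 = double_conv(64, 128)
        self.enc3 = double_conv(128, 256)
        self.enc4 = double_conv(256, 512)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(512, 1024)
        # Four decoding blocks: upsample, concatenate the skip connection, convolve.
        self.up4 = nn.ConvTranspose2d(1024, 512, kernel_size=2, stride=2)
        self.dec4 = double_conv(1024, 512)
        self.up3 = nn.ConvTranspose2d(512, 256, kernel_size=2, stride=2)
        self.dec3 = double_conv(512, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2)
        self.dec2 = double_conv(256, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec1 = double_conv(128, 64)
        self.head = nn.Conv2d(64, out_channels, kernel_size=1)

    def forward(self, x):
        # Input spatial dimensions should be divisible by 16 (crop or pad beforehand).
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        e4 = self.enc4(self.pool(e3))
        b = self.bottleneck(self.pool(e4))
        d4 = self.dec4(torch.cat([self.up4(b), e4], dim=1))
        d3 = self.dec3(torch.cat([self.up3(d4), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))  # per-pixel vessel probability

# Example usage: a 4-channel input of size 560x560 gives a 1-channel probability map.
# model = UNet(in_channels=4)
# out = model(torch.rand(1, 4, 560, 560))
```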
By the way, I want to thank Colab for letting us use their GPUs, without which the training would have been impossible for us 🙃

Install and run the code

To clone this repository, open a terminal, move into the desired folder, and run the following command:

git clone https://github.com/TommyGiak/retinal_vessel_segmentation.git

Then I suggest using an editor like Spyder or VS Code so you can run the different cells independently, but if you are bold enough you can run:

python unet_for_the_win.py

N.B. Remember to adjust the paths if you use an editor, and be careful with file names, overwriting, and so on...

Results

Our performance results are shown in the evaluation_results.txt file.
An example of the segmented output of the network, without any thresholding, is shown below, in comparison with the input and the ground truth; a minimal thresholding sketch follows the images.

  • input RGB image:

inp0

  • input edges:

inp0

  • output of the network trained with edges (without thresholding):

res0

  • output of the network trained without edges (without thresholding):

res_no0

  • ground truth:

res0
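Since these outputs are raw probability maps, turning them into a binary vessel mask only requires a threshold. The snippet below is a generic example: the 0.5 value is an arbitrary illustration, not the threshold used in eval.txt, and the random tensor stands in for the real network output.

```python
import torch

prob_map = torch.rand(1, 1, 64, 64)     # stand-in for the raw sigmoid output of the network
binary_mask = (prob_map > 0.5).float()  # 1.0 where a pixel is classified as vessel
```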

That's all for now! 👋