Image-Segmentation

Image Segmentation using U-NET Architecture.

Problem Statement

Find the nuclei in divergent images to advance medical discovery.


Overview

This problem was posed in the 2018 Data Science Bowl on Kaggle. The aim is to create an algorithm to automate nucleus detection. By automating nucleus detection, we could unlock cures faster, from rare disorders to the common cold. Identifying the cells’ nuclei is the starting point for most analyses because most of the human body’s 30 trillion cells contain a nucleus full of DNA, the genetic code that programs each cell. Identifying nuclei allows researchers to identify each individual cell in a sample, and by measuring how cells react to various treatments, researchers can understand the underlying biological processes at work.


Things to do

  • Import the necessary dependencies.
  • Load the dataset.
  • Define the image dimensions.
  • Resize the image and masks according to the defined image dimensions.
  • Build the U-NET Model according to the requirements.
  • Train the model.
  • Make Predictions.
  • Show the output.

Dependencies

  • TensorFlow
  • OS
  • Numpy
  • Matplotlib
  • Scikit Learn
  • TQDM
  • Random

Resizing the images and masks

The dataset can be obtained from the following link: Dataset Link. The dataset is divided into two parts:

  • stage1_train: contains the images and annotated masks.

  • stage1_test: contains the images used for testing the model.


This dataset contains a large number of segmented nuclei images. The images were acquired under a variety of conditions and vary in the cell type, magnification, and imaging modality (brightfield vs. fluorescence). The dataset is designed to challenge an algorithm's ability to generalize across these variations. Each image is represented by an associated ImageId. Files belonging to an image are contained in a folder with this ImageId. Within this folder are two subfolders:

  • images: contains the image file.

  • masks: contains the segmented masks of each nucleus. This folder is only included in the training set. Each mask contains one nucleus. Masks are not allowed to overlap (no pixel belongs to two masks).

The images and masks of the training set are resized to fixed dimensions of 128*128*3, and the images of the test set are resized to the same 128*128*3 dimensions. Create a NumPy array X_train to store the pixels of the resized training images and another NumPy array Y_train to store the merged mask pixels of each training image (all of its single-nucleus masks combined into one mask). Store the pixels of the resized test images in a NumPy array named X_test.
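A minimal sketch of this preprocessing step is given below. It assumes the dataset has been extracted into local stage1_train/ and stage1_test/ folders (the paths are assumptions), reads the PNG files with Matplotlib, resizes them with tf.image.resize, and keeps the merged mask of each image as a single channel for simplicity:

import os
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tqdm import tqdm

IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS = 128, 128, 3   # fixed dimensions used in this project
TRAIN_PATH = 'stage1_train/'   # assumed paths to the extracted dataset
TEST_PATH = 'stage1_test/'

train_ids = next(os.walk(TRAIN_PATH))[1]
test_ids = next(os.walk(TEST_PATH))[1]

X_train = np.zeros((len(train_ids), IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS), dtype=np.float32)
Y_train = np.zeros((len(train_ids), IMG_HEIGHT, IMG_WIDTH, 1), dtype=np.float32)

for n, id_ in tqdm(list(enumerate(train_ids))):
    folder = os.path.join(TRAIN_PATH, id_)
    img = plt.imread(os.path.join(folder, 'images', id_ + '.png'))[..., :IMG_CHANNELS]
    X_train[n] = tf.image.resize(img, (IMG_HEIGHT, IMG_WIDTH)).numpy()

    # Merge all single-nucleus masks of this image into one mask.
    merged = np.zeros((IMG_HEIGHT, IMG_WIDTH, 1), dtype=np.float32)
    for mask_file in os.listdir(os.path.join(folder, 'masks')):
        mask = plt.imread(os.path.join(folder, 'masks', mask_file))
        mask = tf.image.resize(mask[..., np.newaxis], (IMG_HEIGHT, IMG_WIDTH)).numpy()
        merged = np.maximum(merged, mask)
    Y_train[n] = merged

X_test = np.zeros((len(test_ids), IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS), dtype=np.float32)
for n, id_ in tqdm(list(enumerate(test_ids))):
    folder = os.path.join(TEST_PATH, id_)
    img = plt.imread(os.path.join(folder, 'images', id_ + '.png'))[..., :IMG_CHANNELS]
    X_test[n] = tf.image.resize(img, (IMG_HEIGHT, IMG_WIDTH)).numpy()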


U-NET Architecture

The U-NET was developed by Olaf Ronneberger et al. for biomedical image segmentation. The architecture contains two paths. The first path is the contracting path (also called the encoder), which is used to capture the context in the image. The encoder is just a traditional stack of convolutional and max-pooling layers. The second path is the symmetric expanding path (also called the decoder), which enables precise localization using transposed convolutions. U-NET is therefore an end-to-end fully convolutional network (FCN): it contains only convolutional layers and no dense layers, which is why it can accept images of any size.

This is a fairly simple architecture that is easy to implement. The link to the paper is: Paper Link.

The architecture has been modified as per the requirements of this project. The details of the modified architecture are given below:

There are two Conv2D layers followed by a MaxPooling layer in every stage of the encoder. Similarly, in the decoder there are two Conv2DTranspose layers in each stage. Each convolution operation is followed by a Dropout layer.

The total number of parameters is:

Total params: 1,941,105
Trainable params: 1,941,105
Non-trainable params: 0
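
A minimal Keras sketch of a U-NET along these lines is shown below. The filter counts and dropout rates are assumptions for illustration, and the decoder here uses the common pattern of one Conv2DTranspose for upsampling followed by two Conv2D layers per stage, so the details (and the exact parameter count) may differ from the notebook:

import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters, dropout):
    # Two 3x3 convolutions, each followed by a Dropout as described above.
    x = layers.Conv2D(filters, 3, activation='relu', padding='same')(x)
    x = layers.Dropout(dropout)(x)
    x = layers.Conv2D(filters, 3, activation='relu', padding='same')(x)
    x = layers.Dropout(dropout)(x)
    return x

inputs = layers.Input((128, 128, 3))

# Encoder: a conv block followed by max pooling at every stage.
c1 = conv_block(inputs, 16, 0.1)
p1 = layers.MaxPooling2D()(c1)
c2 = conv_block(p1, 32, 0.1)
p2 = layers.MaxPooling2D()(c2)
c3 = conv_block(p2, 64, 0.2)
p3 = layers.MaxPooling2D()(c3)
c4 = conv_block(p3, 128, 0.2)
p4 = layers.MaxPooling2D()(c4)

# Bottleneck.
c5 = conv_block(p4, 256, 0.3)

# Decoder: transposed-convolution upsampling, skip connection, conv block.
u6 = layers.Conv2DTranspose(128, 2, strides=2, padding='same')(c5)
c6 = conv_block(layers.concatenate([u6, c4]), 128, 0.2)
u7 = layers.Conv2DTranspose(64, 2, strides=2, padding='same')(c6)
c7 = conv_block(layers.concatenate([u7, c3]), 64, 0.2)
u8 = layers.Conv2DTranspose(32, 2, strides=2, padding='same')(c7)
c8 = conv_block(layers.concatenate([u8, c2]), 32, 0.1)
u9 = layers.Conv2DTranspose(16, 2, strides=2, padding='same')(c8)
c9 = conv_block(layers.concatenate([u9, c1]), 16, 0.1)

# 1x1 convolution producing a per-pixel nucleus probability.
outputs = layers.Conv2D(1, 1, activation='sigmoid')(c9)

model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()

Training would then be a single call such as model.fit(X_train, Y_train, validation_split=0.1, batch_size=16, epochs=25); these hyperparameters are assumptions, not the notebook's settings.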


Results

The output image has yellow patches that show the positions of the detected nuclei in the cells.
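
A sketch of how such an output could be produced and displayed, assuming the trained model and the X_test array from the earlier steps (the 0.5 threshold is an assumption):

import random
import numpy as np
import matplotlib.pyplot as plt

# Predict per-pixel nucleus probabilities for the test images and binarise them.
preds_test = model.predict(X_test, verbose=1)
preds_test_bin = (preds_test > 0.5).astype(np.uint8)

# Show a random test image next to its predicted mask; with Matplotlib's
# default colormap the nucleus pixels appear as yellow patches.
ix = random.randrange(len(X_test))
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.imshow(X_test[ix])
ax1.set_title('Test image')
ax2.imshow(np.squeeze(preds_test_bin[ix]))
ax2.set_title('Predicted mask')
plt.show()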

© Contributed By: Souvik Ghosh.