


Texture-Segmentation

Texture segmentation: an objective comparison between traditional and deep-learning methodologies


Test data, Matlab code, data sets and user manuals


This paper has been published in Applied Sciences: https://www.mdpi.com/2076-3417/9/18/3900. If you find it useful, please cite it.

Abstract

This paper compares a series of traditional and deep learning methodologies for the segmentation of textures. Six well-known texture composites first published by Randen and Husoy were used to compare traditional segmentation techniques (co-occurrence, filtering, local binary patterns, watershed, multiresolution sub-band filtering) against a deep-learning approach based on the U-Net architecture. For the latter, the effects of depth of the network, number of epochs and different optimisation algorithms were investigated. Overall, the best results were provided by the deep-learning approach. However, the best results were distributed within the parameters, and many configurations provided results well below the traditional techniques.

IMPORTANT



Data and programs: everything is in Matlab format.

Short Tutorial

Clear all the data and close all windows

  clear all
  close all

Load the matrix with the data from Randen's paper

  load randenData
  whos
  Name             Size               Bytes  Class    Attributes

  dataRanden       1x9              9438192  cell               
  maskRanden       1x9              9438192  cell               
  trainRanden      1x9             40371184  cell               

Display the first composite image, which contains five textures

  imagesc(dataRanden{1})
  colormap gray

This is just one of the composite figures with different textures. The whole set, together with training data (not in Matlab format) if needed, is available from Trygve Randen's webpage:
http://www.ux.uis.no/~tranden/

Display the corresponding mask

  imagesc(maskRanden{1})
  colormap jet

Display a montage with the training data

  montage(mat2gray(trainRanden{1}))


To generate the training data that will be used to train U-Nets on 32×32 patches, you can use either of the following files:

prepareTrainingLabelsRanden.m
prepareTrainingLabelsRanden_HorVerDiag.m

The first one only prepares patches with two vertical textures, whilst the second arranges the textures in diagonal, vertical and horizontal configurations. You will need to change the lines where the folders are defined so that the training pairs of textures and labels are saved correctly on your computer. The patches look like this:
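As a rough illustration of what these scripts do, a single training pair with two vertical textures side by side could be assembled as below. This is a minimal sketch, not the repository code: the layout of trainRanden, the output folder names and the file name are all assumptions.

```matlab
% Sketch: build one 32x32 training pair with two vertical textures
% (16 columns from each). NOT the actual repository script.
load randenData                                  % provides trainRanden
patchSize = 32;
texA = trainRanden{1}(:,:,1);                    % first training texture (assumed layout)
texB = trainRanden{1}(:,:,2);                    % second training texture
% random top-left corners inside each texture
rA = randi(size(texA,1)-patchSize+1);  cA = randi(size(texA,2)-patchSize+1);
rB = randi(size(texB,1)-patchSize+1);  cB = randi(size(texB,2)-patchSize+1);
% left half from texture A, right half from texture B
patch = [texA(rA:rA+patchSize-1, cA:cA+patchSize/2-1), ...
         texB(rB:rB+patchSize-1, cB:cB+patchSize/2-1)];
% matching label image: class 1 on the left, class 2 on the right
label = [ones(patchSize, patchSize/2), 2*ones(patchSize, patchSize/2)];
% folder names below are placeholders; adjust to your own paths
imwrite(mat2gray(patch), 'trainingImages/patch_0001.png')
imwrite(uint8(label),    'trainingLabels/patch_0001.png')
```

The diagonal and horizontal arrangements in the second script follow the same idea, only with a different split of the 32×32 patch between the two textures.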

Finally, to train and compare results you need to run the file:

segmentationTextureUnet.m

This file loops over different training options and network configurations, so it takes a long time, especially if you do not have a GPU enabled.
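The sweep over configurations can be pictured roughly as follows. This is a hedged sketch using MATLAB's Deep Learning Toolbox: the encoder depths, epoch counts and optimisers correspond to the parameters investigated in the paper, but the exact values, datastore setup and evaluation code in segmentationTextureUnet.m may differ.

```matlab
% Sketch: loop over encoder depth, epochs and optimiser for a U-Net.
% Assumes training images/labels were saved by the previous step;
% folder names and class names are placeholders.
imds = imageDatastore('trainingImages');
pxds = pixelLabelDatastore('trainingLabels', ["texture1" "texture2"], [1 2]);
ds   = combine(imds, pxds);

for depth = [2 3 4]
    for numEpochs = [10 50 100]
        for optim = ["sgdm" "adam" "rmsprop"]
            lgraph  = unetLayers([32 32 1], 2, 'EncoderDepth', depth);
            options = trainingOptions(optim, ...
                'MaxEpochs', numEpochs, ...
                'ExecutionEnvironment', 'auto');  % uses a GPU if available
            net = trainNetwork(ds, lgraph, options);
            % ... segment the dataRanden composites with net and
            %     compare against maskRanden to record the accuracy ...
        end
    end
end
```

With three depths, three epoch counts and three optimisers, this single loop already trains 27 networks, which is why the run is slow without a GPU.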

Current results are shown below.

More details are described in the paper.