keras_conv_img_segmetation

Use an encoder-decoder model to identify boundary points in images faster and more accurately.

It uses a fully convolutional network (FCN) that recognizes boundary points in images.
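
As a rough illustration of what such a network can look like, here is a minimal encoder-decoder FCN in Keras. The layer counts, filter sizes, and input shape are assumptions made for this sketch, not necessarily the ones used in this repository.

```python
# Minimal sketch of an encoder-decoder fully convolutional network in Keras.
# Layer counts, filter sizes, and the input shape are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

def build_fcn(input_shape=(128, 128, 1)):
    inputs = keras.Input(shape=input_shape)
    # Encoder: downsample while increasing the number of feature maps.
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    # Decoder: upsample back to the input resolution.
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
    # One sigmoid unit per pixel: the probability that the pixel is a boundary point.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return keras.Model(inputs, outputs)
```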

The training examples are generated by generate_example.py. Initially I was worried that the network would overfit to my own image-generation algorithm, but as I show below, it works fine on real-world images.
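
generate_example.py itself is not reproduced here; the sketch below only illustrates the general idea of producing synthetic (image, boundary-mask) pairs. The single random ellipse and the 4-neighbour boundary test are assumptions for illustration, not the repository's actual generation algorithm.

```python
# Hedged sketch of synthetic boundary-example generation (not the repo's actual code).
import numpy as np
from PIL import Image, ImageDraw

def make_example(size=128, rng=np.random.default_rng()):
    """Return (image, boundary_mask) for one randomly placed filled ellipse."""
    img = Image.new("L", (size, size), 0)
    draw = ImageDraw.Draw(img)
    x0, y0 = rng.integers(10, size // 2, 2)
    x1, y1 = rng.integers(size // 2, size - 10, 2)
    draw.ellipse([int(x0), int(y0), int(x1), int(y1)], fill=255)
    filled = np.array(img) > 0
    # A pixel is a boundary point if it is filled but has a non-filled 4-neighbour.
    interior = (np.roll(filled, 1, 0) & np.roll(filled, -1, 0)
                & np.roll(filled, 1, 1) & np.roll(filled, -1, 1))
    boundary = filled & ~interior
    return filled.astype(np.float32), boundary.astype(np.float32)
```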

The loss function is not the squared difference or any other commonly used loss. It is designed to solve the problem that, early in training, the network gets stuck in a local minimum where it predicts that essentially nothing is a boundary point (because boundary points are rare in the generated examples). The function f(x) is designed according to the following principles:

  1. f(0) = 0
  2. f(x) is increasing when x is positive and decreasing when negative
  3. f(x) is concave
  4. f(1) / f(-1) is equal to some positive constant, so that we can adjust the asymmetry between the two kinds of mistakes (missing a boundary point versus marking a spurious one)

From these principles I then arrive at a formula with a single parameter n.

For instance, when f(1)/f(-1) = 10 (I used 100 for the training), n = 3.32, and the graph looks like this: [graph of f(x)]
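
The formula itself appeared as an image in the original README and is not reproduced here. As a hedged reconstruction: the reported values are consistent with f(1)/f(-1) = 2^n (note that 3.32 ≈ log2(10) and the ratio of 100 used for training corresponds to n ≈ 6.64), and one function with exactly that ratio which also satisfies principles 1, 2, and 4 above is f(x) = |2^(n·x) − 1|. The Keras-style loss below uses that form purely as an illustration; the repository's actual formula may differ.

```python
# Hedged sketch of an asymmetric boundary loss. The form f(x) = |2**(n*x) - 1|
# is an inference from the stated principles and from n = 3.32 <-> ratio 10
# (for this f, f(1) / f(-1) = 2**n exactly); it may not be the repository's formula.
import tensorflow as tf

N = 6.64  # 2**6.64 is roughly 100, the penalty ratio reportedly used for training

def boundary_loss(y_true, y_pred):
    # x > 0: the prediction is too low on a boundary pixel (the costly mistake).
    # x < 0: the prediction is too high on a background pixel (the cheap mistake).
    x = y_true - y_pred
    return tf.reduce_mean(tf.abs(tf.pow(2.0, N * x) - 1.0))

# Hypothetical usage with the model sketched above:
# model.compile(optimizer="adam", loss=boundary_loss)
```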

Here are several examples of predictions made by the network; in the prediction images, the white lines are the predicted boundaries. Each example shows the original image next to the visualized prediction.

[Five example pairs: original image / prediction visualized]