AI_Internship

This is the vietnhh branch for the AI internship at BAP Software.

Libraries

  • TensorFlow 2.2.0
  • OpenCV (cv2)
  • NumPy
  • Matplotlib

Hair Segmentation

You can run prediction on the test images with the commands below:

cd vietnhh/HairSegmentation
python predict.py
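
The prediction script handles model loading and inference; as a rough sketch of the pre/post-processing implied by the model's (None, 224, 224, 1) input and single-channel sigmoid output (the helper names below are hypothetical, not the actual predict.py API):

```python
import numpy as np

# Hypothetical helpers mirroring what predict.py has to do for a model
# with a (None, 224, 224, 1) input and a single-channel sigmoid output.
def preprocess(gray_image: np.ndarray) -> np.ndarray:
    """Scale a (224, 224) grayscale image to [0, 1] and add batch/channel dims."""
    x = gray_image.astype(np.float32) / 255.0
    return x[np.newaxis, ..., np.newaxis]  # shape: (1, 224, 224, 1)

def postprocess(pred: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarize the model's sigmoid output into a 0/255 hair mask."""
    return (pred[0, ..., 0] > threshold).astype(np.uint8) * 255
```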

Train your own models

If you want to train your own model, use the commands below:

cd hair-segmentation-unet
python hair_segmentation.py

A link to download the dataset is provided in the notebook (HairSegmentation.ipynb).

Docker

A prebuilt Docker image is available on Docker Hub:

https://hub.docker.com/repository/docker/nhhviet98/hair-segmentation

Folder tree:

├───HairSegmentation
│   ├───dataset
│   │   ├───MASKS
│   │   │   ├───Testing
│   │   │   └───Training
│   │   └───Original
│   │       ├───Testing
│   │       └───Training
│   ├───dataset-large
│   │   ├───MASKS
│   │   │   ├───Testing
│   │   │   └───Training
│   │   └───Original
│   │       ├───Testing
│   │       └───Training
│   ├───data_augmentation
│   │   └───__pycache__
│   ├───data_loader
│   │   └───__pycache__
│   ├───models
│   │   └───__pycache__
│   └───__pycache__

Model Structure

Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 224, 224, 1) 0                                            
__________________________________________________________________________________________________
conv2d (Conv2D)                 (None, 224, 224, 32) 320         input_1[0][0]                    
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 224, 224, 32) 9248        conv2d[0][0]                     
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 224, 224, 32) 128         conv2d_1[0][0]                   
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D)    (None, 112, 112, 32) 0           batch_normalization[0][0]        
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 112, 112, 64) 18496       max_pooling2d[0][0]              
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 112, 112, 64) 36928       conv2d_2[0][0]                   
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 112, 112, 64) 256         conv2d_3[0][0]                   
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 56, 56, 64)   0           batch_normalization_1[0][0]      
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 56, 56, 128)  73856       max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 56, 56, 128)  147584      conv2d_4[0][0]                   
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 56, 56, 128)  512         conv2d_5[0][0]                   
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D)  (None, 28, 28, 128)  0           batch_normalization_2[0][0]      
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (None, 28, 28, 256)  295168      max_pooling2d_2[0][0]            
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, 28, 28, 256)  590080      conv2d_6[0][0]                   
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 28, 28, 256)  1024        conv2d_7[0][0]                   
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D)  (None, 14, 14, 256)  0           batch_normalization_3[0][0]      
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, 14, 14, 512)  1180160     max_pooling2d_3[0][0]            
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 14, 14, 512)  2359808     conv2d_8[0][0]                   
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 14, 14, 512)  2048        conv2d_9[0][0]                   
__________________________________________________________________________________________________
tf.__operators__.getitem (Slici (None, 28, 28, 256)  0           batch_normalization_3[0][0]      
__________________________________________________________________________________________________
conv2d_transpose (Conv2DTranspo (None, 28, 28, 256)  1179904     batch_normalization_4[0][0]      
__________________________________________________________________________________________________
concatenate (Concatenate)       (None, 28, 28, 512)  0           tf.__operators__.getitem[0][0]   
                                                                 conv2d_transpose[0][0]           
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, 28, 28, 256)  1179904     concatenate[0][0]                
__________________________________________________________________________________________________
conv2d_11 (Conv2D)              (None, 28, 28, 256)  590080      conv2d_10[0][0]                  
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 28, 28, 256)  1024        conv2d_11[0][0]                  
__________________________________________________________________________________________________
tf.__operators__.getitem_1 (Sli (None, 56, 56, 128)  0           batch_normalization_2[0][0]      
__________________________________________________________________________________________________
conv2d_transpose_1 (Conv2DTrans (None, 56, 56, 128)  295040      batch_normalization_5[0][0]      
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 56, 56, 256)  0           tf.__operators__.getitem_1[0][0] 
                                                                 conv2d_transpose_1[0][0]         
__________________________________________________________________________________________________
conv2d_12 (Conv2D)              (None, 56, 56, 128)  295040      concatenate_1[0][0]              
__________________________________________________________________________________________________
conv2d_13 (Conv2D)              (None, 56, 56, 128)  147584      conv2d_12[0][0]                  
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 56, 56, 128)  512         conv2d_13[0][0]                  
__________________________________________________________________________________________________
tf.__operators__.getitem_2 (Sli (None, 112, 112, 64) 0           batch_normalization_1[0][0]      
__________________________________________________________________________________________________
conv2d_transpose_2 (Conv2DTrans (None, 112, 112, 64) 73792       batch_normalization_6[0][0]      
__________________________________________________________________________________________________
concatenate_2 (Concatenate)     (None, 112, 112, 128 0           tf.__operators__.getitem_2[0][0] 
                                                                 conv2d_transpose_2[0][0]         
__________________________________________________________________________________________________
conv2d_14 (Conv2D)              (None, 112, 112, 64) 73792       concatenate_2[0][0]              
__________________________________________________________________________________________________
conv2d_15 (Conv2D)              (None, 112, 112, 64) 36928       conv2d_14[0][0]                  
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 112, 112, 64) 256         conv2d_15[0][0]                  
__________________________________________________________________________________________________
tf.__operators__.getitem_3 (Sli (None, 224, 224, 32) 0           batch_normalization[0][0]        
__________________________________________________________________________________________________
conv2d_transpose_3 (Conv2DTrans (None, 224, 224, 32) 18464       batch_normalization_7[0][0]      
__________________________________________________________________________________________________
concatenate_3 (Concatenate)     (None, 224, 224, 64) 0           tf.__operators__.getitem_3[0][0] 
                                                                 conv2d_transpose_3[0][0]         
__________________________________________________________________________________________________
conv2d_16 (Conv2D)              (None, 224, 224, 32) 18464       concatenate_3[0][0]              
__________________________________________________________________________________________________
conv2d_17 (Conv2D)              (None, 224, 224, 32) 9248        conv2d_16[0][0]                  
__________________________________________________________________________________________________
conv2d_18 (Conv2D)              (None, 224, 224, 1)  33          conv2d_17[0][0]                  
==================================================================================================
Total params: 8,635,681
Trainable params: 8,632,801
Non-trainable params: 2,880
__________________________________________________________________________________________________
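
The per-layer parameter counts in the summary follow the standard formulas: (k · k · in_channels + 1) · filters for a Conv2D layer, and 4 · channels for BatchNormalization (gamma, beta, moving mean, moving variance). A quick sanity check against a few rows:

```python
def conv2d_params(kernel: int, in_ch: int, filters: int) -> int:
    """Weights plus biases for a Conv2D layer: (k * k * in_ch + 1) * filters."""
    return (kernel * kernel * in_ch + 1) * filters

def batchnorm_params(channels: int) -> int:
    """BatchNormalization: gamma, beta, moving mean, moving variance per channel."""
    return 4 * channels

# Rows from the summary above (3x3 kernels, except the final 1x1 output conv):
print(conv2d_params(3, 1, 32))     # conv2d:              320
print(conv2d_params(3, 32, 32))    # conv2d_1:            9248
print(conv2d_params(3, 256, 512))  # conv2d_8:            1180160
print(conv2d_params(1, 32, 1))     # conv2d_18:           33
print(batchnorm_params(32))        # batch_normalization: 128
```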

This model achieves an IoU of 0.8527 on the validation set.
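
The metric itself is implemented in the training code; for reference, a minimal NumPy equivalent of IoU for binary masks:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over Union between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection / (union + eps))
```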

In this project I used a custom data generator with custom data augmentation. You can also try other data augmentation methods.
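
For segmentation, any geometric augmentation must be applied to the image and its mask together so they stay aligned. A minimal sketch with a random horizontal flip (this is an illustration, not the actual data_augmentation module):

```python
import numpy as np

def augment(image, mask, flip_prob=0.5, rng=None):
    """Randomly flip image and mask horizontally, keeping them aligned."""
    rng = rng or np.random.default_rng()
    if rng.random() < flip_prob:
        image = image[:, ::-1].copy()
        mask = mask[:, ::-1].copy()
    return image, mask
```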

Alternatively, you can use the data generator from TensorFlow's high-level Keras API.

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1844: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators.
  warnings.warn('`Model.fit_generator` is deprecated and '
Epoch 1/30
733/733 [==============================] - 692s 934ms/step - loss: 0.3082 - iou: 0.5240 - val_loss: 0.1946 - val_iou: 0.6922
Epoch 2/30
733/733 [==============================] - 676s 922ms/step - loss: 0.1621 - iou: 0.7229 - val_loss: 0.1881 - val_iou: 0.6444
Epoch 3/30
733/733 [==============================] - 674s 919ms/step - loss: 0.1374 - iou: 0.7613 - val_loss: 0.1760 - val_iou: 0.7470
Epoch 4/30
733/733 [==============================] - 679s 926ms/step - loss: 0.1241 - iou: 0.7825 - val_loss: 0.1293 - val_iou: 0.7748
Epoch 5/30
733/733 [==============================] - 679s 926ms/step - loss: 0.1117 - iou: 0.8010 - val_loss: 0.1152 - val_iou: 0.8054
Epoch 6/30
733/733 [==============================] - 678s 924ms/step - loss: 0.1011 - iou: 0.8176 - val_loss: 0.1002 - val_iou: 0.8168
Epoch 7/30
733/733 [==============================] - 680s 928ms/step - loss: 0.0959 - iou: 0.8264 - val_loss: 0.1069 - val_iou: 0.8101
Epoch 8/30
733/733 [==============================] - 677s 923ms/step - loss: 0.0941 - iou: 0.8282 - val_loss: 0.0989 - val_iou: 0.8231
Epoch 9/30
733/733 [==============================] - 682s 930ms/step - loss: 0.0912 - iou: 0.8333 - val_loss: 0.0978 - val_iou: 0.8338
Epoch 10/30
733/733 [==============================] - 681s 929ms/step - loss: 0.0899 - iou: 0.8365 - val_loss: 0.0946 - val_iou: 0.8290
Epoch 11/30
733/733 [==============================] - 682s 930ms/step - loss: 0.0861 - iou: 0.8417 - val_loss: 0.0892 - val_iou: 0.8402
Epoch 12/30
733/733 [==============================] - 678s 925ms/step - loss: 0.0841 - iou: 0.8451 - val_loss: 0.0868 - val_iou: 0.8411
Epoch 13/30
733/733 [==============================] - 681s 928ms/step - loss: 0.0829 - iou: 0.8475 - val_loss: 0.0876 - val_iou: 0.8449
Epoch 14/30
733/733 [==============================] - 683s 932ms/step - loss: 0.0819 - iou: 0.8499 - val_loss: 0.0868 - val_iou: 0.8469
Epoch 15/30
733/733 [==============================] - 688s 938ms/step - loss: 0.0823 - iou: 0.8499 - val_loss: 0.0864 - val_iou: 0.8452
Epoch 16/30
733/733 [==============================] - 686s 935ms/step - loss: 0.0798 - iou: 0.8521 - val_loss: 0.0847 - val_iou: 0.8503
Epoch 17/30
733/733 [==============================] - 694s 947ms/step - loss: 0.0795 - iou: 0.8537 - val_loss: 0.0843 - val_iou: 0.8455
Epoch 18/30
733/733 [==============================] - 697s 951ms/step - loss: 0.0802 - iou: 0.8518 - val_loss: 0.0841 - val_iou: 0.8513
Epoch 19/30
733/733 [==============================] - 684s 933ms/step - loss: 0.0791 - iou: 0.8530 - val_loss: 0.0834 - val_iou: 0.8510
Epoch 20/30
733/733 [==============================] - 681s 929ms/step - loss: 0.0792 - iou: 0.8542 - val_loss: 0.0842 - val_iou: 0.8502
Epoch 21/30
733/733 [==============================] - 674s 919ms/step - loss: 0.0785 - iou: 0.8551 - val_loss: 0.0830 - val_iou: 0.8515
Epoch 22/30
733/733 [==============================] - 677s 924ms/step - loss: 0.0786 - iou: 0.8543 - val_loss: 0.0834 - val_iou: 0.8515
Epoch 23/30
733/733 [==============================] - 675s 920ms/step - loss: 0.0786 - iou: 0.8549 - val_loss: 0.0835 - val_iou: 0.8518
Epoch 24/30
733/733 [==============================] - 686s 935ms/step - loss: 0.0782 - iou: 0.8557 - val_loss: 0.0830 - val_iou: 0.8521
Epoch 25/30
733/733 [==============================] - 684s 932ms/step - loss: 0.0787 - iou: 0.8548 - val_loss: 0.0827 - val_iou: 0.8519
Epoch 26/30
733/733 [==============================] - 689s 939ms/step - loss: 0.0779 - iou: 0.8547 - val_loss: 0.0829 - val_iou: 0.8522
Epoch 27/30
733/733 [==============================] - 690s 940ms/step - loss: 0.0783 - iou: 0.8559 - val_loss: 0.0833 - val_iou: 0.8515
Epoch 28/30
733/733 [==============================] - 687s 938ms/step - loss: 0.0780 - iou: 0.8557 - val_loss: 0.0826 - val_iou: 0.8520
Epoch 29/30
733/733 [==============================] - 676s 921ms/step - loss: 0.0787 - iou: 0.8537 - val_loss: 0.0832 - val_iou: 0.8526
Epoch 30/30
733/733 [==============================] - 677s 923ms/step - loss: 0.0776 - iou: 0.8557 - val_loss: 0.0830 - val_iou: 0.8527