anilsathyan7/Portrait-Segmentation

Training with mobilenetv3-unet

rose-jinyang opened this issue · 2 comments

Hello
How are you?
I trained a new model with Slim-Net on the AISegment dataset successfully with your help.
The accuracy of the model is high, but the inference time is a little slow.
I am now going to train a new model with the MobileNetV3-UNet architecture.
But I found a strange part in your script for the MobileNetV3 network.

[screenshot of the input layer definition in the MobileNetV3 script]

The number of channels is 4 rather than 3.
So I changed this channel value to 3.
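
Roughly, the change looks like this (a minimal sketch; the spatial size is only an example and may not match the script's actual resolution):

```python
# Sketch of switching the model input from 4 channels to 3 (RGB).
# The 224x224 size here is illustrative only.
from tensorflow.keras.layers import Input

# inputs = Input(shape=(224, 224, 4))   # original: 4 channels
inputs = Input(shape=(224, 224, 3))     # changed: 3-channel RGB input
```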
Also I used the DataLoader class in slim512.ipynb.
I used mask images with pixel values of 0 or 255.
But while training, the loss value was negative.
So I switched to the same mask images (pixel values 0 or 1) that I had used when training Slim-Net.
The training then completed successfully, but the accuracy of the model is low.
How should I understand all of these facts?
Thanks

I think it could be a typo, because when I looked at the trained models, their input has 3 channels.
For the problem with accuracy, see: #5 (comment)
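
About the negative loss: if the training loss is binary cross-entropy, the masks have to be scaled to 0/1 before training; 0/255 targets fall outside the range the loss expects, which is why the loss values can go negative. A minimal sketch of the normalization, assuming NumPy mask arrays:

```python
# Sketch: convert a 0/255 uint8 mask into a 0/1 float32 mask so that
# binary cross-entropy targets stay in [0, 1].
import numpy as np

def normalize_mask(mask):
    """Scale a 0/255 mask to 0/1 and snap any intermediate values."""
    mask = mask.astype(np.float32) / 255.0
    return np.round(mask)
```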

At that time there was no ImageNet/Pascal pretrained network available for MobileNetV3 in Keras, so I only trained it on portrait images, replicating the tflite architecture with some modifications. For MobileNetV2, on the other hand, pretrained networks were already available.

Now you can use the original pretrained MobileNetV3 as the base in Keras; check out: keras-team/keras-applications#183
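
For example, a rough sketch, assuming you are on a TF/Keras version that already ships MobileNetV3 (the exact module and input size may differ in your setup):

```python
# Sketch: load an ImageNet-pretrained MobileNetV3 as the U-Net encoder
# instead of training the backbone from scratch.
import tensorflow as tf

base = tf.keras.applications.MobileNetV3Large(
    input_shape=(224, 224, 3),   # example size; match your training resolution
    include_top=False,
    weights="imagenet",
)

# Intermediate activations from `base` would then feed the decoder's skip
# connections; the exact layer names depend on the Keras version.
```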

Thanks