bonlime/keras-deeplab-v3-plus

Training grey-scale image (ch1) tips?

dhlee-jubilee opened this issue · 4 comments

I tried training with the older version of the model.
The images are grey-scale, but the performance is lower than U-Net's, which is weird.

I kept these params fixed:

  • Image size = (256, 256, 1)
  • classes = 1
  • backbone = 'xception'
  • Adam(lr = 1e-4)

And varied these params:

  • OS = 8 (batch = 4) / 16 (batch = 12)
  • metrics = accuracy / dice coeff (custom)
  • loss = binary_crossentropy / dice_coeff_loss(custom)

I can't figure out why the performance is low.
Are there any tips for 1-channel images, or is there a problem with my parameter settings?
Also, what's the right criterion for judging training: loss, accuracy, or dice_coeff?
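For reference, a "custom dice coefficient / dice loss" like the one mentioned above is usually written as 2·|A∩B| / (|A|+|B|) with a smoothing term. Here is a minimal NumPy sketch (the function names and the `smooth=1.0` default are my assumptions, not taken from this issue; a Keras version would use backend ops instead of NumPy):

```python
import numpy as np

def dice_coeff(y_true, y_pred, smooth=1.0):
    # Flatten masks and compute 2*|A ∩ B| / (|A| + |B|),
    # with `smooth` added to avoid division by zero on empty masks.
    y_true = y_true.ravel()
    y_pred = y_pred.ravel()
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

def dice_loss(y_true, y_pred):
    # Dice loss is just 1 - dice coefficient, so perfect overlap gives loss 0.
    return 1.0 - dice_coeff(y_true, y_pred)
```

With `smooth=1.0`, a perfect prediction gives a coefficient of exactly 1.0, and a completely wrong one approaches 0.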

Maybe it'll be better with 2 classes? One for the desired object and one for the background.
I've had this issue once with DeepLab in TF.

I'm having the same problem (256 x 256 x 1 images, num_classes = 1, using dice_coeff to measure accuracy). It performs very badly compared to UNet. Did you end up figuring out a solution?

what worked for me was using 2 classes instead of 1
so you have a class for background and a class for object
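Going from 1 class to 2 classes means turning the single-channel binary mask into a two-channel one-hot target (background channel + object channel), and pairing it with a softmax output instead of a sigmoid. A minimal sketch of that conversion (the helper name is my own, not from the repo):

```python
import numpy as np

def to_two_class(mask):
    # mask: (H, W) or (H, W, 1) binary mask, 1 = object, 0 = background.
    # Returns a (H, W, 2) one-hot target: channel 0 = background, channel 1 = object.
    mask = np.squeeze(mask).astype(np.int64)
    onehot = np.zeros(mask.shape + (2,), dtype=np.float32)
    onehot[..., 0] = (mask == 0)  # background channel
    onehot[..., 1] = (mask == 1)  # object channel
    return onehot
```

The model then ends in a softmax over the 2 channels, trained with (sparse) categorical crossentropy rather than binary crossentropy.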

thanks so much! what loss function did you use?