google-research/deeplab2

Training with smaller dataset

Closed this issue · 2 comments

Hi,
I was trying to train a deeplab2 model for semantic segmentation only, using the Mapillary dataset.
I get decent results when training on a large dataset (8,200 images).
Since I need a model trained only for the class 'Tunnel', I selected the images and labels containing tunnels and trained on those (around 210 images) for the single class. I used all images for training, with no test or evaluation split.
But the predicted semantic map ends up covering the whole image. I have tried iteration counts as low as 2,000 and as high as 60k, but get the same result. Am I doing anything wrong here? I used a learning rate of 0.001.

When I trained for 60k iterations, the training loss gradually decreased from 0.65 to 0.03, but the result was as described above.

Hi @nithinme3,

Apologies for the late reply.
Regarding your problem, it is not clear to us how best to solve it (since it is not related to the codebase itself).
Nevertheless, you could try tuning the loss weight for the tunnel class, which appears less frequently than the other classes.
Additionally, you could check some methods related to training with imbalanced classes (e.g., long-tailed classes).
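To make the loss-weighting idea concrete, here is a minimal, framework-agnostic sketch of class-weighted cross-entropy. The function name, the toy logits, and the weight values are illustrative assumptions, not deeplab2's API or config; in practice you would set the corresponding weight in your training setup.

```python
import math

def weighted_cross_entropy(logits, labels, class_weights):
    """Softmax cross-entropy averaged with per-class weights.

    logits: list of per-pixel score lists, one entry per class.
    labels: list of ground-truth class indices, one per pixel.
    class_weights: per-class weights; upweighting a rare class
    (e.g. 'tunnel') makes its pixels contribute more to the loss.
    """
    total, weight_sum = 0.0, 0.0
    for row, label in zip(logits, labels):
        # Numerically stable log(sum(exp(row))).
        m = max(row)
        log_z = m + math.log(sum(math.exp(x - m) for x in row))
        per_pixel = log_z - row[label]  # -log softmax(row)[label]
        w = class_weights[label]
        total += w * per_pixel
        weight_sum += w
    return total / weight_sum

# Two pixels: pixel 0 is predicted correctly, pixel 1 (the rare class)
# is predicted wrongly. Upweighting class 1 raises the loss, so
# training is pushed harder to fix the rare class.
logits = [[2.0, 0.0], [2.0, 0.0]]
labels = [0, 1]
uniform = weighted_cross_entropy(logits, labels, [1.0, 1.0])
boosted = weighted_cross_entropy(logits, labels, [1.0, 5.0])
```

With uniform weights the misclassified rare-class pixel is diluted by the many easy pixels; with the boosted weight its error dominates the average, which is the effect you want when one class is severely underrepresented.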

Cheers,

Closing the issue, as there is no active discussion for a while.
Please feel free to open a new issue if needed.