NielsRogge/Transformers-Tutorials

Mask2Former panoptic fine-tuning

strangeTany opened this issue · 3 comments

Hello @NielsRogge, I am trying to fine-tune the facebook/mask2former-swin-large-ade-panoptic model on my dataset. I followed the steps from your tutorial, however the output of post_process_panoptic_segmentation is invalid. It returns an empty segmentation result:

```
{'segmentation': tensor([[-1., -1., -1.,  ..., -1., -1., -1.],
         [-1., -1., -1.,  ..., -1., -1., -1.],
         [-1., -1., -1.,  ..., -1., -1., -1.],
         ...,
         [-1., -1., -1.,  ..., -1., -1., -1.],
         [-1., -1., -1.,  ..., -1., -1., -1.],
         [-1., -1., -1.,  ..., -1., -1., -1.]]),
 'segments_info': []}
```

If I switch post-processing to post_process_semantic_segmentation, it returns a valid result.
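For reference, here is roughly how I run inference and post-processing (a simplified sketch: in my real code I load my fine-tuned checkpoint instead of the base one, and `image` is a PIL image from my dataset):

```python
import torch
from transformers import Mask2FormerForUniversalSegmentation, Mask2FormerImageProcessor

checkpoint = "facebook/mask2former-swin-large-ade-panoptic"
processor = Mask2FormerImageProcessor.from_pretrained(checkpoint)
model = Mask2FormerForUniversalSegmentation.from_pretrained(checkpoint)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

target_size = image.size[::-1]  # PIL size is (width, height); we need (height, width)

# panoptic post-processing: returns the empty result shown above
panoptic = processor.post_process_panoptic_segmentation(
    outputs, target_sizes=[target_size]
)[0]

# semantic post-processing: returns a valid segmentation map
semantic_map = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[target_size]
)[0]
```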
My dataset uses the ADE classes with several added classes, 200 classes in total. The dataset format and preparation are the same as yours. I have tried changing the processor parameters: ignore_index=255 and reduce_labels=True (without these I get errors before training starts).
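Concretely, this is how I create the processor (a sketch; note that reduce_labels may be called do_reduce_labels in newer transformers versions):

```python
from transformers import Mask2FormerImageProcessor

# without ignore_index/reduce_labels I get errors during mask preparation,
# before training even starts
processor = Mask2FormerImageProcessor(
    ignore_index=255,    # label value to ignore (padding / void regions)
    reduce_labels=True,  # shift labels down by 1 so background (0) becomes 255
)
```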
The same thing happens if I run your tutorial notebook locally. Do you know what could cause this?

Hi @strangeTany, thanks a lot for reporting the issue!
Would you mind providing a short reproducible example (with an already trained model) so I can investigate the issue?

The issue happens even when I just rerun the tutorial notebook. Here is a rerun of the notebook on Colab with only the number of epochs changed (from 100 to 10): https://colab.research.google.com/drive/1Dc7FJyNY7_btSu11ux0hiz_4d2j01GK_?usp=sharing

I found the solution. Apparently MaskFormer and Mask2Former do not return really bad (low-confidence) predictions from panoptic post-processing. So in my case, after only a small number of epochs the model could not yet produce confident output, and the post-processing returned an empty prediction. 100 epochs was enough fine-tuning to get a valid answer; with fewer, it returns an empty prediction.
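For anyone hitting this: as far as I can tell, post_process_panoptic_segmentation drops all masks whose class score falls below its threshold argument (0.5 by default), and when nothing survives it returns a segmentation filled with -1 and an empty segments_info, exactly as shown above. To inspect what an under-trained model predicts, you can lower the threshold (debugging sketch, using the same processor/outputs/image names as before):

```python
# keep low-confidence masks so that an under-trained model still returns
# segments; useful for debugging only, not for final predictions
result = processor.post_process_panoptic_segmentation(
    outputs,
    threshold=0.1,  # default is 0.5
    target_sizes=[image.size[::-1]],
)[0]
print(result["segments_info"])
```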

@NielsRogge, if you could add this information to the tutorial, it could help people.