NVlabs/SegFormer

Why does the model have to be pre-trained on the ImageNet-1k dataset? Did you test results directly on ADE20K without using pre-training?

754467737 opened this issue · 1 comment

I am not sure about your question, but I'll try to help =) The model pre-trained on ImageNet-1k is used as the starting point for training on ADE20K or another dataset. The final trained models can be found at the following links:

google drive | onedrive
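
For context, here is a minimal PyTorch sketch of what "used as the starting point" means. It is not the repo's actual code (the module names below are placeholders): the ImageNet-1k checkpoint only covers the backbone, so it is loaded into the backbone with `strict=False`, while the decode head keeps its random initialization and is learned during fine-tuning on ADE20K.

```python
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    """Stand-in for the MiT encoder that the ImageNet-1k checkpoint would cover."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1)

class TinySegmenter(nn.Module):
    """Stand-in for a segmentation model: pre-trained backbone + new decode head."""
    def __init__(self, num_classes=150):  # ADE20K has 150 classes
        super().__init__()
        self.backbone = TinyBackbone()
        self.decode_head = nn.Conv2d(32, num_classes, kernel_size=1)

model = TinySegmenter()

# In practice this would be torch.load("pretrained/mit_b0.pth"); faked here
# so the snippet runs on its own.
imagenet_ckpt = {"stem.weight": torch.randn(32, 3, 3, 3), "stem.bias": torch.zeros(32)}

# strict=False: only backbone keys are expected to match; the decode head stays
# randomly initialized and is trained from scratch on ADE20K.
missing, unexpected = model.backbone.load_state_dict(imagenet_ckpt, strict=False)
print("missing:", missing, "unexpected:", unexpected)

# Fine-tuning then proceeds as usual: an optimizer over all parameters and an
# ADE20K dataloader, starting from the ImageNet-initialized backbone.
```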