BIT-DA/SePiCo

about performance


Thanks for your great work; I just have a minor question.
Are the results reported in your paper averaged over multiple random seeds? If not, is there an averaged result for experiment 4 (gta->cityscapes and synthia->cityscapes based on SegFormer) that I can refer to? Ideally, this result would be a fair comparison with DAFormer, i.e., the number of evaluations during training is 1/4 of that in your released code and the training resolution is 512x512. I would like to cite this result in my paper for comparison.

Hi @Renp1ngs

Thanks for your interest in the paper. The results are the average of three random seeds (42, 76, and 2022) and the training resolution is 640x640.

Pardon, but I didn't quite follow the part about the number of evaluations being 1/4 of that in the released code.

Thanks for your quick reply.

To clarify what I meant about the number of evaluations being 1/4 of that in your code:

In the DAFormer config:

evaluation = dict(
    interval=4000,
    metric="mIoU")

But in the SePiCo config:

evaluation = dict(
    interval=1000,
    metric="mIoU")

That means your model is evaluated more often during training, which doesn't seem like a fair comparison.
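
For a fair comparison, I would expect the SePiCo config to use an override along these lines (my own sketch, assuming the usual mmseg config style; not taken from your repo):

# Hypothetical override to match DAFormer's evaluation frequency.
evaluation = dict(
    interval=4000,  # validate every 4000 iterations, as DAFormer does
    metric="mIoU")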

Could you provide the training logs of your experiment 4 (syn->cs and gta->cs)? I would like to observe the loss and performance over the whole training process.

I see.

However, the interval does not matter, since we just keep the last checkpoint (last.pth) for evaluation 🤣
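
In case it helps, in an mmcv-based codebase keeping only the newest checkpoint can be configured roughly like this (the values below are illustrative assumptions, not our exact settings):

# Illustrative mmcv-style checkpoint settings (assumed values):
checkpoint_config = dict(
    interval=4000,      # save a checkpoint every 4000 iterations
    max_keep_ckpts=1)   # retain only the most recent checkpoint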

I may need to find the training log, and I will contact you if there is any news.

Thanks! Have a good day!