YanchaoYang/FDA

Unable to reproduce the results of Sim2Real Adaptation Using FDA (single beta)

royee182 opened this issue · 9 comments

Hi,

I ran "train.py" to reproduce the "Sim2Real Adaptation Using FDA (single beta)" performance, following the procedure explained in Usage 2 of the README.
Unfortunately, I failed to reproduce the results and got 42.69, 42.81, and 41.32 for β = 0.01, 0.05, 0.09 respectively.
Could you please help me understand the potential reasons behind this performance drop?

Regards,
Yiting Cheng

Have you downloaded the correct initialization ckpt for DeepLab?
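If it helps, here is a minimal check that the downloaded checkpoint file actually loads (a sketch only; the path below is just a placeholder, not necessarily the repo's naming):

```python
# hedged sketch: sanity-check that the DeepLab init checkpoint deserializes
import torch

state = torch.load('DeepLab_init.pth', map_location='cpu')  # placeholder path
print(type(state))  # typically an OrderedDict of parameter tensors
print(len(state))   # number of entries in the state dict
```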

If yes, could you please set line 10 in data/__init__.py to image_sizes = {'cityscapes': (1280,720), ...}, run the training again, and let me know what you get? I need that information to tell whether I have set the wrong default value.
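The edited line would look roughly like this (the 'gta5' entry below is only an example, based on the sizes mentioned in this thread):

```python
# data/__init__.py, line 10 — sketch of the suggested change;
# the 'gta5' size is assumed from the thread (GTA5 also uses 1280x720)
image_sizes = {'cityscapes': (1280, 720), 'gta5': (1280, 720)}
```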

Yes, I downloaded the initialization ckpt from your link. I will run it again with the setting image_sizes = {'cityscapes': (1280,720), ...}.
Thanks for your reply.

Hi, I've tried that setting, but it leads to an error: (1280,720) is the same size used for GTA5, so line 38 in gta5_dataset.py makes left = 0.

It was not a problem for me with Python 3.5.2; could you find a way to bypass this? Maybe change the related functions to either an older or newer corresponding version?
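One possible bypass, sketched with placeholder names since the surrounding variables in gta5_dataset.py may be called differently, is to guard the degenerate case where the resized image is exactly the crop size:

```python
import numpy as np  # already imported at the top of gta5_dataset.py

# hypothetical sizes; in the dataset code these come from image_sizes and the crop size
img_w, crop_w = 1280, 1280

# guard the degenerate case: when img_w == crop_w the randint range is empty
left_max = img_w - crop_w
left = np.random.randint(0, high=left_max) if left_max > 0 else 0
print(left)  # prints 0 here, since the image width equals the crop width
```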

Hi, I'd like to confirm:
with the setting image_sizes = {'cityscapes': (1280,720), ...},
line 38 in gta5_dataset.py makes left = 0,
so line 41 in gta5_dataset.py,
left = np.random.randint(0, high=left)
becomes np.random.randint(0, 0).
Does that work for you?
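For reference, the call can be reproduced in isolation (an illustrative check only, not repo code):

```python
import numpy as np

# reproducing the call in isolation; on recent NumPy versions an empty range
# raises "ValueError: low >= high", which matches the error described above
try:
    print(np.random.randint(0, high=0))
except ValueError as exc:
    print('raised:', exc)
```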

Yes, it should be no problem; the return value will be 0.

Ok, thanks, I will try it again.

Hi, with the setting image_sizes = {'cityscapes': (1280,720), ...}, I got better results, 44.55, 43.72, and 41.67 for β = 0.01, 0.05, 0.09 respectively, but I still cannot reproduce the reported performance.

This is a bit weird. The scores between different betas should be similar in the first run.

Could you set "--num-steps-stop" for the learning-rate scheduler to 150000, then check the ckpts between 100000 and 150000 and see whether the difference between the betas is consistent? (Usually, in the first run, the performance starts to converge around step 100000.)
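For context, a minimal sketch of the polynomial learning-rate decay of the kind used here (the function name, the 0.9 power, and the base LR below may differ from what train.py actually uses):

```python
# hedged sketch of a polynomial ("poly") learning-rate schedule; the name, the 0.9
# power, and the base LR below are illustrative assumptions
def lr_poly(base_lr: float, step: int, max_steps: int, power: float = 0.9) -> float:
    return base_lr * (1.0 - step / max_steps) ** power

# illustrative values: LR at steps 100000 and 140000 out of a 150000-step budget
print(lr_poly(2.5e-4, 100000, 150000))
print(lr_poly(2.5e-4, 140000, 150000))
```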

Also, what is the value of "switch2entropy" in your first run? Could you share your training command with me?