Albert0147/G-SFDA

Question about VisDA2017

Closed this issue · 13 comments

Good work!

We know VisDA2017 has three parts: a train set, a validation set, and a test set.

In the code (train_src_visda.py and train_tar_visda.py):

for training the source model, G-SFDA uses VisDA's train set (split 90% for training and 10% for testing)
for training the target model, G-SFDA uses VisDA's validation set (shuffle=True, batch size = batchSize)
for testing the target model, G-SFDA also uses VisDA's validation set (shuffle=False, batch size = batchSize*3)
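As a reference for the 90/10 source split described above, here is a minimal sketch in plain Python; the dataset size, seed, and function name are hypothetical and not taken from the repository's code:

```python
import random

def split_indices(n_samples, train_frac=0.9, seed=2021):
    """Shuffle indices and split them into train/test lists.

    A sketch of a 90/10 split like the one described above;
    the seed and split mechanics are assumptions, not the repo's code.
    """
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    n_train = int(train_frac * n_samples)
    return idx[:n_train], idx[n_train:]

# Hypothetical dataset of 1000 samples -> 900 train, 100 test indices.
train_idx, test_idx = split_indices(1000)
print(len(train_idx), len(test_idx))  # 900 100
```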

One question: why not use VisDA's test set?

Hi, since VisDA does not directly provide labels for its 'test' set, we do not use it, as is the case in all recent DA papers.

Thanks for the reply.

Another question: why set the batch size to batchSize*3 when testing the target model?

G-SFDA uses the same full VisDA validation set to both train and test the target model, is that right?

Just to accelerate the evaluation, since no training happens at that point.

Thanks, but sorry, I don't understand.

It does not matter; it will not influence the results. A small batch size means you need to load data from the dataset to GPU memory more times, which may slow down the evaluation.
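As a rough illustration of why a larger evaluation batch size helps, tripling the batch size cuts the number of batches (and hence host-to-GPU transfers) per evaluation pass to roughly a third. The validation-set size and batch size below are assumptions, not values from the repository:

```python
import math

n_val = 55388      # approximate size of the VisDA-2017 validation set (assumed)
batch_size = 64    # hypothetical training batch size

# Batches needed per full pass over the validation set.
batches_at_bs = math.ceil(n_val / batch_size)
batches_at_3bs = math.ceil(n_val / (batch_size * 3))

print(batches_at_bs, batches_at_3bs)  # 866 289
```

The results are identical either way; only the number of loader iterations changes.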

OK, got it.

G-SFDA uses the same full VisDA validation set to both train and test the target model, is that OK?

Yeah, just as all DA methods do.

When training the source model,
G-SFDA uses 90% of VisDA's train set for training and the remaining 10% for testing.

while when training the target model,
G-SFDA uses 100% of VisDA's validation set for training and the same 100% of the validation set for testing.

Did I understand correctly?

Yeah.

Nice.

In the code, train_src_visda.py, line 252.

When training the source model,
why does G-SFDA use 100% of VisDA's validation set for testing the source model?

Just to test on the target domain...

Does that mean testing the initial target model?

Yep.