Albert0147/G-SFDA

Concern about the Source-Free assumption

Closed this issue · 3 comments

Hi,
I have some doubts about the source-free assumption you make in the paper.
According to Algorithm 1, the pretrained source model also employs A_t.
In my understanding this is not a source-pretraining stage; adaptation is in fact already happening, since the model is exposed to both source and target. Moreover, this means source and target data are available at the same time, i.e., the source-free assumption no longer holds.

I would be happy if you could clarify this.
Best,
P.

Our paper already mentions in several places that we train A_s and A_t only on source data during the source-pretraining stage, as a good initialization for A_t. It is equivalent to training two different classifier heads.
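
For concreteness, a minimal PyTorch sketch of that "two classifier heads" analogy: one shared backbone with two heads, both trained only on labeled source data. All names here are my own illustration, not the repo's actual code (in the paper, A_s and A_t are domain attention vectors; the two-head reading is the analogy from the reply above):

```python
import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    """Shared backbone with a source head (A_s) and a target head (A_t).
    Illustrative only; in the paper A_s/A_t are domain attention vectors."""
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone                        # e.g., a ResNet feature extractor
        self.head_s = nn.Linear(feat_dim, num_classes)  # stands in for A_s
        self.head_t = nn.Linear(feat_dim, num_classes)  # stands in for A_t

    def forward(self, x):
        f = self.backbone(x)
        return self.head_s(f), self.head_t(f)

def pretrain_on_source(model, source_loader, epochs=10, lr=1e-2):
    """Source-pretraining stage: both heads see only labeled source batches."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in source_loader:
            logits_s, logits_t = model(x)
            loss = ce(logits_s, y) + ce(logits_t, y)  # both heads fit the source
            opt.zero_grad()
            loss.backward()
            opt.step()
```

The point, as I read the reply, is that source data is used only in this stage; during adaptation only the target branch is updated on target data, so source and target are never needed at the same time.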

Ok, thanks for the clarification; I see that in Section 3.3 you state that A_t is trained only on source.

However, this actually strengthens my concern about the source-free assumption.

In fact, having A_s and A_t trained on source means you do have the source data available to train them, which clashes with the source-free assumption. Usually only a plain model naively pretrained on the source is available, with no bells and whistles. Instead, you assume access to the source in order to also train A_s and A_t.

So imagine you want to adapt a model trained on, e.g., ImageNet. You download, say, a ResNet-50 from the PyTorch repo. How would you train A_s and A_t with no access to the source (ImageNet) data?

This is only to avoid forgetting; it can be removed if you only care about source-free DA. BTW, I guess we could train another classifier head from a given source model, e.g., via distillation, as sketched below.
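
For what it's worth, a minimal sketch of that distillation idea, assuming only a frozen pretrained source model (e.g., an off-the-shelf ResNet-50) and unlabeled target images. Every name below (`distill_new_head`, `feature_extractor`, etc.) is hypothetical, not from this repo:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distill_new_head(source_model, feature_extractor, new_head,
                     target_loader, temperature=2.0, epochs=5, lr=1e-3):
    """Fit a fresh classifier head by mimicking the frozen source model's
    predictions on unlabeled target images (no source data needed)."""
    source_model.eval()  # frozen teacher
    opt = torch.optim.Adam(new_head.parameters(), lr=lr)
    for _ in range(epochs):
        for x, _ in target_loader:  # target labels are never used
            with torch.no_grad():
                teacher_logits = source_model(x)
                feats = feature_extractor(x)  # backbone also stays frozen
            student_logits = new_head(feats)
            # soften both distributions and match them with KL divergence
            loss = F.kl_div(
                F.log_softmax(student_logits / temperature, dim=1),
                F.softmax(teacher_logits / temperature, dim=1),
                reduction="batchmean",
            )
            opt.zero_grad()
            loss.backward()
            opt.step()
```

This would give a second source-like head without ever touching the source images, at the cost that the new head can only be as good as the frozen teacher's predictions on the target data.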