slei109/PATNet

Question about "6.3 Implementation Details"


Thanks for your work!
In section 6.3, the paper states: "An Adam optimizer is used to fine-tune PATM, with a learning rate of 1e-3 for Deepglobe and ISIC, 5e-5 for Chest X-ray and FSS-1000."

Does the "cross-domain" evaluation proposed in the paper happen after fine-tuning on the other datasets? In other words, are the weights trained on PASCAL VOC not used directly on each dataset of the proposed benchmark?

I don't quite get your question. The model trained on PASCAL is fine-tuned with the support images during testing. For example, in the 1-shot setting on the Chest X-ray dataset, the model is fine-tuned on the single image in the support set, and performance is then evaluated on the query data. The initialization weights are always the weights trained on PASCAL.
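
For concreteness, here is a minimal PyTorch sketch of that per-episode loop. It is only an illustration: the model object, its `patm` attribute, the forward signature, and the idea of computing the fine-tuning loss on the support pair itself are assumptions, not the exact API of this repo.

```python
import copy
import torch
import torch.nn.functional as F
from torch.optim import Adam

def evaluate_episode(model, support_img, support_mask, query_img,
                     lr=5e-5, steps=50):
    """Test-time fine-tuning for one 1-shot episode (sketch).

    Assumptions (not the repo's exact API):
      - `model` is already trained on PASCAL VOC;
      - `model.patm` exposes the PATM parameters to fine-tune;
      - forward(query, support image, support mask) returns per-pixel logits;
      - the fine-tuning loss treats the support image as its own query,
        supervised by the support mask.
    """
    # Start each episode from a fresh copy of the PASCAL-trained weights,
    # so fine-tuning on one episode never leaks into the next.
    episode_model = copy.deepcopy(model)
    optimizer = Adam(episode_model.patm.parameters(), lr=lr)

    # Fine-tune PATM on the single support image/mask pair.
    episode_model.train()
    for _ in range(steps):
        optimizer.zero_grad()
        logits = episode_model(support_img, support_img, support_mask)
        loss = F.cross_entropy(logits, support_mask.long())
        loss.backward()
        optimizer.step()

    # Evaluate on the query with the episode-specific weights.
    episode_model.eval()
    with torch.no_grad():
        pred = episode_model(query_img, support_img, support_mask).argmax(dim=1)
    return pred
```

The per-dataset learning rates from Section 6.3 (1e-3 for Deepglobe and ISIC, 5e-5 for Chest X-ray and FSS-1000) would go into the `lr` argument; the number of fine-tuning `steps` is not given in the quote above, so the value here is a placeholder.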