slei109/PATNet

Detail of TFI

Closed this issue · 2 comments

Hello, I'm interested in your work, but I have two questions:

  1. The article does not specify the number of fine-tuning iterations for TFI.
  2. I noticed that in finetune_reference only a single kl variable is computed, but this kl variable is not tied to the weights of self.reference_layer. How can kl_loss be used to update reference_layer?
    Could you please provide more details? Thank you so much!
  1. A few fine-tuning steps are enough; we fine-tuned for around 50 steps.
  2. You can make reference_layer the only trainable module by passing only its parameters to your optimizer (see the sketch after this list).
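
For anyone hitting the same question, here is a minimal sketch of what the reply above describes: pass only the reference_layer parameters to the optimizer, so kl_loss.backward() can only update that layer. The wrapper function name, the learning rate, and the assumed call signature of finetune_reference are illustrative, not the authors' exact code.

```python
import torch

def finetune_reference_layer(model, support_img, support_mask, steps=50, lr=1e-3):
    """Fine-tune only model.reference_layer with the KL loss.

    steps=50 follows the reply above; lr and the signature of
    model.finetune_reference are assumptions, not the authors' code.
    """
    # Freeze all parameters, then unfreeze the reference layer only.
    for p in model.parameters():
        p.requires_grad = False
    for p in model.reference_layer.parameters():
        p.requires_grad = True

    # The optimizer receives only the reference_layer parameters, so the
    # backward pass of kl_loss cannot update anything else.
    optimizer = torch.optim.SGD(model.reference_layer.parameters(), lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        # Assumed to return the KL loss for the current support set.
        kl_loss = model.finetune_reference(support_img, support_mask)
        kl_loss.backward()
        optimizer.step()
    return model
```

The key design point is simply that the optimizer's parameter group contains nothing but reference_layer, so even though the rest of the network participates in the forward pass, only that layer's weights change.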

Hi @TungChintao, did you manage to get the fine-tuning code running? I found that PATNetwork.Transformation_Feature() is not used in PATNetwork.finetune_reference().