ayulockin/SwAV-TF

Fine-tuning with 10% labeled data with SwAV-learned embeddings

sayakpaul opened this issue

@ayulockin

From the fine-tuning notebooks (embeddings from 10 and 40 epochs of SwAV pre-training), I have the following observations:

  • With SwAV embeddings from 10 epochs of pre-training, the model shows a pretty large overfitting gap (note that this is after the final fine-tuning is done; a rough sketch of the fine-tuning setup follows the numbers below):


Final progress (with EarlyStopping) -

loss: 0.5366 - acc: 0.8120 - val_loss: 1.8524 - val_acc: 0.5000
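
For reference, here is a minimal sketch of this kind of fine-tuning setup, assuming a Keras `backbone` standing in for the SwAV-pretrained ResNet50 trunk and assumed `train_ds`/`val_ds` pipelines built from the 10% labeled split; the actual notebooks may differ in head size, learning rate, number of classes, and EarlyStopping settings:

```python
import tensorflow as tf

# Stand-in for the SwAV-pretrained trunk: in the notebooks the ResNet50
# weights would come from the SwAV pre-training run, not `weights=None`.
backbone = tf.keras.applications.ResNet50(
    include_top=False, weights=None, input_shape=(224, 224, 3), pooling="avg"
)
backbone.trainable = True  # fine-tune the trunk, not just a linear head

inputs = tf.keras.Input(shape=(224, 224, 3))
features = backbone(inputs, training=True)            # SwAV-learned embeddings
outputs = tf.keras.layers.Dense(5, activation="softmax")(features)  # 5 classes assumed
model = tf.keras.Model(inputs, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # small LR for fine-tuning
    loss="sparse_categorical_crossentropy",
    metrics=["acc"],
)

early_stopper = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True
)

# `train_ds` / `val_ds` are the assumed tf.data pipelines over the 10% labeled split.
# history = model.fit(train_ds, validation_data=val_ds, epochs=40,
#                     callbacks=[early_stopper])
```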

  • With the same embeddings plus augmentation during fine-tuning, the model seems to close much of that gap (a sketch of an augmentation pipeline follows the numbers below) -


Final progress (with EarlyStopping) -

loss: 0.9433 - acc: 0.6104 - val_loss: 1.2685 - val_acc: 0.5455
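
For completeness, a rough sketch of the sort of `tf.data` augmentation pipeline this refers to; the flip/brightness/crop choices here are assumptions and may not match the exact augmentations used in the notebooks:

```python
import tensorflow as tf

AUTO = tf.data.AUTOTUNE

def augment(image, label):
    # Assumed light augmentations for the supervised fine-tuning stage.
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.resize_with_crop_or_pad(image, 250, 250)
    image = tf.image.random_crop(image, size=(224, 224, 3))
    return image, label

# Applied only to the training split; the validation split stays un-augmented.
# train_ds = train_ds.map(augment, num_parallel_calls=AUTO).prefetch(AUTO)
```

Augmenting only the small 10% labeled training split is what appears to narrow the gap between the training and validation curves noted above.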

  • Following the same order of experiments, with embeddings from 40 epochs of pre-training, here is what we get after the final fine-tuning (without any augmentation) -


Final progress -

loss: 1.3076 - acc: 0.5613 - val_loss: 1.7242 - val_acc: 0.4800

Note that the model does not suffer from a large overfitting gap in this case.

  • With augmentation, we get -


Final progress -

loss: 0.9239 - acc: 0.6621 - val_loss: 1.3685 - val_acc: 0.5291

This performance is very similar to what we got in the same setting (with augmentation) using the embeddings from 10 epochs.