leftthomas/SimCLR

results reported in your repo are from fine-tuning, not linear evaluation

fawazsammani opened this issue · 1 comment

Hello. I've seen in your code that after pre-training (in linear.py) you fine-tune the whole network, whereas the paper performs linear evaluation rather than fine-tuning (it trains only the linear classifier layer, with the ResNet-50 features frozen). In the paper, the result obtained with linear evaluation on CIFAR10 after 500 pre-training epochs with a batch size of 512 is around 93%. Your result is close (92%), but it comes from fine-tuning the whole network (not linear evaluation as in the paper), so it is logical that it scores higher than linear evaluation would. In fact, fine-tuning this way should outperform the supervised ResNet-50 baseline, which reaches 93.62%.
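For reference, here is a minimal sketch of the linear evaluation protocol being described: the pretrained encoder is frozen and only a linear classifier is trained on top of its features. The class and variable names (and the checkpoint path) are illustrative assumptions, not the repo's actual code.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Pretrained SimCLR encoder (backbone only, projection head discarded).
encoder = resnet50()
encoder.fc = nn.Identity()  # expose the 2048-d features
# encoder.load_state_dict(torch.load("simclr_pretrained.pth"), strict=False)  # hypothetical checkpoint

# Linear evaluation: freeze every backbone parameter.
for p in encoder.parameters():
    p.requires_grad = False
encoder.eval()

# The linear classifier is the only trainable module.
classifier = nn.Linear(2048, 10)  # 10 classes for CIFAR10
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    with torch.no_grad():          # features come from the frozen encoder
        feats = encoder(images)
    logits = classifier(feats)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```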

Therefore, could you tell me the score you got with linear evaluation, without fine-tuning? Thanks!

@fawazsammani Lines 78-79 are the code that freezes the backbone.
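A quick way to confirm that the backbone really is frozen is to count the trainable parameters; with only the final linear layer trainable on a ResNet-50, the count should be 2048 × 10 + 10 = 20490 for CIFAR10. The model below is a stand-in for the repo's evaluation model, not its actual class.

```python
from torchvision.models import resnet50

model = resnet50(num_classes=10)        # stand-in for the repo's evaluation model
for name, param in model.named_parameters():
    if not name.startswith("fc."):      # freeze everything except the linear head
        param.requires_grad = False

num_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {num_trainable}")  # expected: 20490
```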