openai/supervised-reptile

How to interpret the batch accuracy for train and test

Opened this issue · 1 comment

Hi, I am consistently getting 1.0000, 0.5000, or 0.0000 on a custom dataset. What do these values mean, and why am I only ever getting 1.0000, 0.5000, or 0.0000?

Hello, this happens because, following the paper, we sample K + 1 datapoints per class: we train on the K datapoints and evaluate only on the last one. The accuracy is therefore num_correct / num_classes. So if you have 2 classes and predict one correctly and the other incorrectly, the accuracy is 1 / 2 = 0.5 — with 2 classes the only possible values are 0.0, 0.5, and 1.0.
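To illustrate, here is a minimal sketch (not the repo's actual code) of why the batch accuracy is quantized: with one held-out example per class, accuracy can only take values k / num_classes for k = 0 .. num_classes. The `batch_accuracy` helper and the 2-way example below are hypothetical.

```python
def batch_accuracy(predictions, labels):
    # One held-out example per class, so len(labels) == num_classes
    # and accuracy = num_correct / num_classes.
    num_correct = sum(p == y for p, y in zip(predictions, labels))
    return num_correct / len(labels)

# Hypothetical 2-way task: the only reachable accuracies are 0.0, 0.5, 1.0.
print(batch_accuracy([0, 1], [0, 1]))  # 1.0 — both held-out examples correct
print(batch_accuracy([0, 0], [0, 1]))  # 0.5 — one of two correct
print(batch_accuracy([1, 0], [0, 1]))  # 0.0 — neither correct
```

With more classes (e.g. 5-way), the accuracy would instead step in increments of 1/5, which is why the reported values look discrete rather than continuous.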