Unable to replicate results in the paper
Closed this issue · 3 comments
Hi there
First of all, thanks for this excellent algorithm and accompanying code. I have found both very useful for my master's thesis.
One issue I've been having, though, is replicating the semi-supervised CIFAR10 results reported in your paper (10.55% error with data augmentation and entropy minimisation). When I run this code with the suggested semi-supervised/entropy-minimisation CIFAR10 parameters, the test error I get at the end is usually around 14% at best.
Is this the code that you used to produce the results in your paper? If so, would it be possible to get the exact parameter values used to replicate those results?
Kind regards,
Liam Schoneveld
Sorry for the late reply.
> Is this the code that you used to produce the results in your paper?
Yes, the hyperparameters in the README are the exact hyperparameters for reproducing the results in the paper.
I uploaded the model trained in my environment (tensorflow-gpu 1.1.0 and scipy 1.9.0), https://drive.google.com/file/d/0B8HZ50DPgR3eVWYwekhwOGFPUjA/view?usp=sharing .
I trained the model with the code in this repository, and I confirmed that the trained model has 10.6% test error.
Could you check the accuracy of this trained model in your environment?
If you are not able to get the same test error rate, the preprocessed dataset might be different from the one I used.
Thanks for the reply. I'll re-run the model at some point in the next week or so and report my results, but it sounds like it's a problem on my end.
Hi,
Thanks for this excellent algorithm and accompanying code.
After reading your code, I found one issue in your train_semisup.py file, at line 8: why do you set is_training to true when you do your evaluation? Will that influence the final result? What if I change it to false?
Best,
Xiang