Size and accuracy
Cyber-Neuron opened this issue · 5 comments
Hi,
Based on the provided pretrained model (res18_2bit), I got 64.690% accuracy, and the quantized model size is 5 MB (gzip) or 3.4 MB (7zip). This is quite different from the results in your paper. Can you please point out why that is? I just ran:
python main.py -a resnet18 --bit 2 --pretrained resnet18_2bit.pth -e
Thanks
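For context, the gzip figure above can be checked by compressing the checkpoint file directly; a minimal sketch follows (the checkpoint path is an assumption, and this measures the saved state dict as-is, not a packed 2-bit representation):

```python
# Minimal sketch: report the raw and gzip-compressed size of the checkpoint.
# "resnet18_2bit.pth" is the file name from the command above; adjust the
# path to wherever your copy lives.
import gzip
import os
import shutil

ckpt = "resnet18_2bit.pth"
with open(ckpt, "rb") as src, gzip.open(ckpt + ".gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

print(f"raw:  {os.path.getsize(ckpt) / 1e6:.1f} MB")
print(f"gzip: {os.path.getsize(ckpt + '.gz') / 1e6:.1f} MB")
```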
Hi, I met the same problem. The reproduced accuracy is 64.74%, which is much lower than the result in the paper. Have you solved this problem?
Kind of. The batch size matters; however, the accuracy is still around ~65%, which is on par with other 2-bit quantization methods.
Hi,
the accuracy mismatch is probably due to a difference in the data-loader implementation between my training environment and the official PyTorch environment.
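For reference, here is a sketch of the standard torchvision ImageNet validation preprocessing; small differences in any of these values (resize size, interpolation mode, normalization constants) between data-loader implementations can move top-1 accuracy by a few tenths of a percent. The exact values used in the author's environment are not confirmed here.

```python
# Standard torchvision ImageNet validation transform (common defaults, shown
# only to illustrate which data-loader details can differ between setups).
import torchvision.transforms as T

val_transform = T.Compose([
    T.Resize(256),       # resize of the shorter side; interpolation mode also varies
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])
```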
Did you verify it through direct training?
Hi, I found a typo in the dataloader; can you test it now?