Deploy after fold seems not to work
Opened this issue · 1 comment
renzoandri commented
Following the sample code from https://colab.research.google.com/drive/1AmcITfN2ELQe07WKQ9szaxq-WSu4hdQb#scrollTo=nQabIE9Guyrh:
if, in the QuantizedDeployable part, the model is not loaded again from the checkpoint (i.e., `state_dict = torch.load('checkpoint/mnist_fq_mixed.pth')['state_dict']` is skipped), the accuracy of the deployable version drops to just 17.87%, even though it should still work without this reload.
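For reference, a minimal sketch of the reload pattern in question, assuming the checkpoint path from the Colab; `ExampleNet` is only a hypothetical stand-in for the MNIST network built in the notebook, and the deployable conversion itself is omitted because it is library-specific:

```python
import torch
from torch import nn

# Hypothetical stand-in for the MNIST network defined in the Colab notebook;
# in practice, rebuild the same (fake-quantized) model the notebook uses.
class ExampleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.fc(x.flatten(1))

model = ExampleNet()

# Reload the trained fake-quantized checkpoint BEFORE the QuantizedDeployable /
# folding step, so folding starts from the trained weights rather than from
# whatever parameters are currently in the model.
state_dict = torch.load('checkpoint/mnist_fq_mixed.pth')['state_dict']
model.load_state_dict(state_dict)

# ... only now apply the QuantizedDeployable conversion / folding step
# from the notebook (call omitted here, as it is library-specific).
```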
FrancescoConti commented
Hi @renzoandri, as we checked offline, it looks like your error was due to a small problem in your test. Still, folding seems to cause a greater-than-expected degradation in ID mode (-1.2%, where I would expect none), so I will keep the issue open as a reminder.
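To quantify that fold-related drop, a rough sketch of the before/after comparison; `evaluate` and `fold_fn` are hypothetical placeholders (an accuracy function over a data loader and the library's BN-folding transform, respectively), not names from the notebook:

```python
import copy

def measure_fold_degradation(model, loader, fold_fn, evaluate):
    """Return (accuracy before folding, accuracy after folding, delta).

    `fold_fn` should apply the folding transform to a model and return it;
    `evaluate` should return top-1 accuracy of a model on `loader`.
    Both are placeholders to be replaced with the notebook's actual calls.
    """
    acc_before = evaluate(model, loader)
    # Fold a deep copy so the original model is left untouched for comparison.
    folded = fold_fn(copy.deepcopy(model))
    acc_after = evaluate(folded, loader)
    return acc_before, acc_after, acc_after - acc_before
```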