ModelTC/MQBench

The QAT top1@acc of mobilenet_v2 a4w4 LSQ cannot be reproduced to the 70.6% reported in the paper.

LuletterSoul opened this issue · 1 comment

Hi, thanks for providing this amazing quantization framework! I want to reproduce the top1@acc of mobilenet_v2 a4w4 LSQ under the academic setting. The quantization configuration is as follows:

dict(qtype='affine',
     w_qscheme=QuantizeScheme(symmetry=True, per_channel=True, pot_scale=False, bit=4, symmetric_range=False, p=2.4),
     a_qscheme=QuantizeScheme(symmetry=True, per_channel=False, pot_scale=False, bit=4, symmetric_range=False, p=2.4),
     default_weight_quantize=LearnableFakeQuantize,
     default_act_quantize=LearnableFakeQuantize,
     default_weight_observer=MSEObserver,
     default_act_observer=EMAMSEObserver),
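For context, `LearnableFakeQuantize` in this config implements LSQ-style quantization, where the step size (scale) is a learnable parameter. A minimal pure-Python sketch of the quantize-dequantize forward pass for a signed symmetric 4-bit quantizer is shown below; the function name, argument names, and the exact integer range handling are illustrative assumptions, not MQBench's actual implementation:

```python
def lsq_fake_quantize(x, scale, bit=4):
    """Quantize-dequantize a value x with a (learnable) step size `scale`.

    Sketch of the LSQ forward pass: for signed symmetric 4-bit the integer
    grid is [-8, 7]; with symmetric_range=True the extra negative level
    would be clipped to [-7, 7] (not done here, matching the config above).
    """
    qmin = -(2 ** (bit - 1))       # -8 for bit=4
    qmax = 2 ** (bit - 1) - 1      #  7 for bit=4
    q = round(x / scale)           # round to the nearest integer level
    q = max(qmin, min(qmax, q))    # clamp to the integer grid
    return q * scale               # dequantize back to real values
```

In training, `scale` would be a learnable tensor updated through the straight-through estimator with LSQ's gradient scaling; this sketch only shows the forward rounding and clamping behavior.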

For the training strategy, I set weight decay = 0, lr = 1e-3, and batch_size = 128 per GPU on 8 NVIDIA A100 cards, and the adjust_learning_rate strategy remains the same as in main.py. However, the highest top1@acc I reproduced on the validation set was only 68.66%, far from the 70.6% presented in the paper.
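For reference, `adjust_learning_rate` in stock ImageNet training scripts is usually a simple step decay. A sketch is below; whether MQBench's main.py uses exactly this decay factor and interval is an assumption:

```python
def adjust_learning_rate(base_lr, epoch, decay_factor=0.1, decay_every=30):
    """Step-decay schedule: multiply the learning rate by `decay_factor`
    every `decay_every` epochs (values here are illustrative defaults)."""
    return base_lr * (decay_factor ** (epoch // decay_every))
```

With base_lr = 1e-3 as in the setup above, this would give 1e-3 for epochs 0-29, 1e-4 for epochs 30-59, and so on.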

Which part have I missed?

This issue has not received any updates in 120 days. Please reply to this issue if it is still unresolved!