Questions on test precision and the Quantize.G function
Closed this issue · 2 comments
hustsxy commented
Thanks for the great work. I have two questions after playing with the code.
- For the test accuracy in the demo code (basically the getErrorTest() function), is inference run with 8-bit or 2-bit weights? Since the weights are quantized to 2-bit at the beginning and then updated with 8-bit gradients, it seems to me that the weights are 8-bit during inference too. I must have missed something here; please let me know.
- In Quantize.G(), if LR is 1, the normalized gradient will be -1/0/+1, which means each weight update is at most 1 step up or down. Is that correct?
Thanks in advance!
boluoweifenda commented
- In both training and testing, the weights are quantized first and then fed into the networks. Please see this: https://github.com/boluoweifenda/WAGE/blob/master/source/NN.py#L157
- Yes, you are right, the update is up to 1 step (see the sketch below).
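For anyone who finds this later, here is a minimal NumPy sketch of both points. It is an illustration under assumptions, not the repo's actual TensorFlow code: the names step, quantize_w, and quantize_g are mine, the power-of-two Shift is simplified to plain division by the maximum magnitude, and the 2-bit weight / 8-bit gradient settings follow the WAGE defaults.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(bits):
    # Smallest step sigma(k) = 2^(1-k) of a k-bit WAGE-style grid.
    return 2.0 ** (1 - bits)

def quantize_w(w, bits=2):
    # Deterministic weight quantization: snap to the k-bit grid and clip.
    # This runs before every forward pass, in training and in testing,
    # so the network only ever sees 2-bit weights even though the stored
    # copy accumulates updates at 8-bit resolution.
    sigma = step(bits)
    return np.clip(np.round(w / sigma) * sigma, -1 + sigma, 1 - sigma)

def quantize_g(g, lr=1.0, bits_g=8):
    # Gradient quantization in the spirit of Quantize.G: normalize by the
    # largest magnitude (the repo uses a power-of-two Shift here), scale
    # by LR, then round stochastically. With lr == 1 the normalized values
    # lie in [-1, 1], so each rounded entry is -1, 0, or +1: the update
    # moves a weight by at most one step of size sigma(bits_g).
    norm = lr * g / np.max(np.abs(g))
    sign, mag = np.sign(norm), np.abs(norm)
    floor = np.floor(mag)
    frac = mag - floor
    rounded = sign * (floor + (rng.random(g.shape) < frac))
    return rounded * step(bits_g)

w = rng.uniform(-1, 1, size=5)          # stored higher-precision weights
g = rng.normal(size=5)                  # raw gradient
w = w - quantize_g(g, lr=1.0)           # each entry moves by 0 or +-sigma(8)
w_seen = quantize_w(w)                  # what the forward pass uses, train or test
print(quantize_g(g, lr=1.0) / step(8))  # entries are in {-1, 0, +1}
```

The last line prints only values in {-1, 0, +1}, matching the "at most 1 step" behavior, and quantize_w shows why test accuracy reflects 2-bit weights even though the stored weights hold 8-bit values.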
hustsxy commented
Thanks for your explanation!