CAS-CLab/quantized-cnn

Error correction and fine-tuning questions

a7200123456 opened this issue · 2 comments

Hello!!
I have two questions about the implementation.

  1. How much training data is used in the error correction step? And do the FC and conv layers use the same amount of data for error correction?
  2. In the paper, you mention fine-tuning after quantization. How does the fine-tuning work? Does it maintain the structure of D and B?

Thanks!!

  1. We use 250–500 samples for each conv layer and 25,000 samples for each FC layer. A single sample produces many more constraints in a conv layer than in an FC layer, since every spatial position of the conv layer's output feature map contributes a constraint, so far fewer samples are needed (see the first sketch after this list).
  2. In the paper, "fine-tuning" means updating the subsequent layers while the parameters of the already-quantized layers remain unchanged; here the values of D and B are maintained. There is, however, another fine-tuning strategy: updating D via backpropagation while keeping B fixed (see the second sketch below).
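
For anyone curious what the error-correction step looks like in practice, here is a rough NumPy sketch of the alternating optimization for an FC layer. The function name, initialization, and subspace split are illustrative assumptions, not code from this repo; the point is that the objective is the layer's *response* error, not the plain weight error:

```python
import numpy as np

def quantize_fc_with_error_correction(W, X, n_sub=4, K=16, n_iter=5, seed=0):
    """Hypothetical sketch: alternately update codebooks D and assignments B
    to minimize the response error ||X @ W - X @ W_hat||^2.
    W: (d_in, d_out) FC weights; X: (N, d_in) sampled input activations.
    Assumes d_in % n_sub == 0 and K <= d_out."""
    rng = np.random.default_rng(seed)
    d_in, d_out = W.shape
    d_sub = d_in // n_sub                                  # subspace width
    Y = X @ W                                              # target responses
    # init each codebook from K random weight columns of its subspace
    cols = rng.choice(d_out, K, replace=False)
    D = [W[m * d_sub:(m + 1) * d_sub, cols].copy() for m in range(n_sub)]
    B = [rng.integers(0, K, size=d_out) for _ in range(n_sub)]

    def contrib(m):                                        # subspace m's response
        return X[:, m * d_sub:(m + 1) * d_sub] @ D[m][:, B[m]]

    for _ in range(n_iter):
        for m in range(n_sub):
            Xm = X[:, m * d_sub:(m + 1) * d_sub]
            # residual responses this subspace must account for
            R = Y - sum(contrib(j) for j in range(n_sub) if j != m)
            # assignment step: pick the codeword whose response best fits
            cand = Xm @ D[m]                               # (N, K)
            err = ((R ** 2).sum(0)[:, None] - 2 * R.T @ cand
                   + (cand ** 2).sum(0)[None, :])          # (d_out, K)
            B[m] = err.argmin(axis=1)
            # codebook step: least-squares fit of each codeword's response
            for k in range(K):
                assigned = np.flatnonzero(B[m] == k)
                if assigned.size:
                    target = R[:, assigned].mean(axis=1)
                    D[m][:, k] = np.linalg.lstsq(Xm, target, rcond=None)[0]
    W_hat = np.concatenate([D[m][:, B[m]] for m in range(n_sub)], axis=0)
    return D, B, W_hat
```

For a conv layer the same idea applies, except every spatial position of the response map enters the least-squares fit as an extra row, which is why far fewer images suffice there.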
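And a minimal PyTorch sketch, again an assumption for illustration rather than this repo's actual code, of the two fine-tuning strategies from answer 2: B is stored as a non-trainable buffer, so either D is frozen as well (strategy a) or backprop updates D while B stays fixed (strategy b):

```python
import torch

class PQLinear(torch.nn.Module):
    """Hypothetical FC layer whose weight matrix is rebuilt on the fly
    from a product-quantization codebook D and fixed assignments B."""
    def __init__(self, D, B, d_out):
        super().__init__()
        self.D = torch.nn.Parameter(D)        # (n_sub, d_sub, K), trainable
        self.register_buffer("B", B)          # (n_sub, d_out), never a parameter
        self.bias = torch.nn.Parameter(torch.zeros(d_out))

    def forward(self, x):
        n_sub = self.D.shape[0]
        # look up each output unit's codeword in every subspace
        W_hat = torch.cat([self.D[m][:, self.B[m]] for m in range(n_sub)], dim=0)
        return x @ W_hat + self.bias

n_sub, d_sub, K, d_out = 4, 16, 8, 32
layer = PQLinear(torch.randn(n_sub, d_sub, K),
                 torch.randint(0, K, (n_sub, d_out)), d_out)

# Strategy (a): freeze D; only subsequent layers of the network are trained.
layer.D.requires_grad_(False)

# Strategy (b): update D via backpropagation with B fixed. B is a buffer,
# so it never receives gradients; D does, through the codeword lookup.
layer.D.requires_grad_(True)
out = layer(torch.randn(5, n_sub * d_sub))
out.pow(2).mean().backward()                  # fills layer.D.grad, leaves B alone
```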

Thanks a lot!!!
Your replies really clear things up!!!