
scale-adjusted-training

PyTorch implementation of Towards Efficient Training for Neural Network Quantization

Introduction

This repo implements Scale-Adjusted Training (SAT) from Towards Efficient Training for Neural Network Quantization, including:

  1. Constant rescaling for DoReFa weight quantization
  2. Calibrated gradients for PACT (CG-PACT)
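As a rough illustration of the first technique, the sketch below implements DoReFa-style k-bit weight quantization with a straight-through estimator, followed by a rescaling step. The function names are mine, and the rescaling constant here simply matches the standard deviation of the latent weights, which is an illustrative choice; the exact constant used by SAT may differ.

```python
import torch


def quantize_k(x: torch.Tensor, k: int) -> torch.Tensor:
    """Uniform k-bit quantization of x in [0, 1], with a
    straight-through estimator (STE) for the gradient."""
    n = float(2 ** k - 1)
    xq = torch.round(x * n) / n
    # STE: forward uses xq, backward treats the rounding as identity
    return x + (xq - x).detach()


def dorefa_weight(w: torch.Tensor, k: int, rescale: bool = True) -> torch.Tensor:
    """DoReFa weight quantization to k bits, optionally rescaled.

    The rescale factor below restores the latent weights' std so the
    layer's output scale is preserved (an assumption for illustration,
    not necessarily the paper's exact constant).
    """
    t = torch.tanh(w)
    # map to [0, 1], quantize, then map back to [-1, 1]
    wq = 2.0 * quantize_k(t / (2.0 * t.abs().max()) + 0.5, k) - 1.0
    if rescale:
        scale = (w.std() / (wq.std() + 1e-12)).detach()
        wq = wq * scale
    return wq
```

Without rescaling, the quantized weights live on a fixed grid in [-1, 1]; the rescale step adjusts their overall scale without changing the grid structure.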

TODO

  • constant rescaling DoReFaQuantize layer
  • CGPACT layer
  • test with mobilenetv1
  • test with mobilenetv2
  • test with resnet50
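The CGPACT layer on the list above could be sketched as a custom autograd function. Plain PACT clips activations to [0, alpha] and gives alpha a gradient only from the clipped region; the calibrated gradient adds a (yq - x) / alpha term for activations inside the range, which is what the derivative of the quantized output with respect to alpha gives under a straight-through assumption. Class and argument names here are mine.

```python
import torch


class CGPACT(torch.autograd.Function):
    """PACT activation quantization with a calibrated gradient for the
    learnable clipping threshold alpha (a minimal sketch)."""

    @staticmethod
    def forward(ctx, x, alpha, k):
        n = float(2 ** k - 1)
        y = torch.minimum(torch.relu(x), alpha)      # clip to [0, alpha]
        yq = torch.round(y / alpha * n) / n * alpha  # k-bit uniform grid
        ctx.save_for_backward(x, alpha, yq)
        return yq

    @staticmethod
    def backward(ctx, g):
        x, alpha, yq = ctx.saved_tensors
        inside = (x > 0) & (x < alpha)
        above = x >= alpha
        # STE for the input: pass the gradient inside the clipping range
        gx = g * inside.float()
        # calibrated gradient for alpha: vanilla PACT uses only the
        # `above` term; the (yq - x) / alpha term inside the range is
        # the calibration
        galpha = (g * (above.float() + inside.float() * (yq - x) / alpha)).sum()
        return gx, galpha.reshape(alpha.shape), None
```

In a model, this would typically be wrapped in an `nn.Module` holding `alpha` as an `nn.Parameter` and called as `CGPACT.apply(x, self.alpha, k)`.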

Acknowledgement