reading_list

  • Python is required. C++ will also help you.
  • PyTorch (or TensorFlow) is required.
  • Members who want to work on FPGAs do not need Python or PyTorch.
  • Anyone who misses two reports will leave this group automatically.‼️ If you cannot find your name below, please add it yourself.
  • Please contact @wetian to join our mailing list.
  • Please join the Google Hangout: https://hangouts.google.com/group/27qPkOU4jV2Adh2Z2

Routine of the weekly meeting:

  1. Submit your report before the meeting.
  2. Each member updates the group on the research activities of the past week (2-3 minutes).
  3. Reading group: one person will be the presenter. At this stage, your responsibility is to present published papers on a theme (a story summarized from 3-5 relevant papers).
  4. The presentation is half an hour; slides will help you a lot. Presenters are supposed to upload their slides to GitHub.
  5. Report information should be uploaded here.

Week 1:

Plan:

  1. Read the XNOR-Net code.
  2. Read the XNOR-Net paper.
  3. Understand the meaning of the XNOR operation and the concept of quantized networks (see the sketch after this list).
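
To make item 3 concrete, here is a minimal NumPy sketch (function names are mine, not from the XNOR-Net code) of the key identity: for vectors binarized to {-1, +1} and stored as bits, a dot product reduces to an XNOR followed by a popcount.

```python
import numpy as np

def binarize(x):
    """Sign binarization as in XNOR-Net: real values -> {-1, +1}."""
    return np.where(x >= 0, 1.0, -1.0)

def xnor_dot(a_bits, b_bits):
    """Dot product of two {-1,+1} vectors given as {0,1} bit arrays.

    Encoding +1 as bit 1 and -1 as bit 0, a_i * b_i = +1 exactly when
    the bits agree, i.e. when XNOR(a_i, b_i) = 1. So the dot product is
    (#agreements) - (#disagreements) = 2 * popcount(XNOR) - n.
    """
    n = a_bits.size
    agree = np.count_nonzero(~(a_bits ^ b_bits))  # popcount of XNOR
    return 2 * agree - n

rng = np.random.default_rng(0)
x, w = rng.standard_normal(64), rng.standard_normal(64)
xb, wb = binarize(x), binarize(w)
assert xb @ wb == xnor_dot(xb > 0, wb > 0)  # both give the same value
```

In the real kernels the bits are packed into machine words, so each XNOR and popcount processes 32 or 64 inputs at once; that is where the speedup comes from.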

Extensive Reading (Compulsory):

  1. Overcoming Challenges in Fixed Point Training of Deep Convolutional Networks [Qualcomm Research]
  2. Quantizing deep convolutional networks for efficient inference: A whitepaper [Google]

Submit:

| # | Name | File |
|----|---------|---------|
| ✔️ | Weitian | example |
| ✔️ | Qian | example |
| ✔️ | Yuchen | example |

Week 2:

Plan:

  1. Understand the code in XNOR-Net/util.py.
  2. Backpropagation of neural networks (link).
  3. Backpropagation of quantized neural networks (see the STE sketch after this list).
  4. TensorFlow version of XNOR-Net.
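
For item 3, the standard trick in XNOR-Net-style training (details vary by paper) is the straight-through estimator (STE): binarize in the forward pass, but let gradients flow through as if the function were a clipped identity. A minimal PyTorch sketch, with my own class name:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass, straight-through gradient in backward."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)  # non-differentiable hard binarization

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Pretend sign() was the identity, but cancel the gradient
        # where |x| > 1 (the "hard tanh" clipping used in practice).
        return grad_out * (x.abs() <= 1).float()

x = torch.randn(8, requires_grad=True)
BinarizeSTE.apply(x).sum().backward()
print(x.grad)  # 1 where |x| <= 1, else 0
```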

Extensive Reading (Compulsory):

  1. Review of quantization networks [Xijiao University]
  2. DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients [Megvii, Face++]

Submit:

| # | Name | File |
|----|---------|---------|
| ✔️ | Weitian | week2 |
| ✔️ | Qian | week1&2 |
| ✔️ | Yuchen | week2 |
| ✔️ | Kexin | week2 |
| ✔️ | Yufei | week2 |

Week 3:

Plan:

  1. Another version of XNOR-Net (with CUDA code in C++).
  2. Backpropagation of quantized neural networks.
  3. Report the learning curve of XNOR-Net on CIFAR-10 (a logging sketch follows this list).
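
For item 3, a learning curve is just per-epoch test accuracy logged during training. A minimal PyTorch/torchvision sketch of the logging loop, with a stand-in CNN rather than the actual XNOR-Net model:

```python
import torch
import torchvision
import torchvision.transforms as T

device = "cuda" if torch.cuda.is_available() else "cpu"
tfm = T.ToTensor()
train = torchvision.datasets.CIFAR10("data", train=True, download=True, transform=tfm)
test = torchvision.datasets.CIFAR10("data", train=False, download=True, transform=tfm)
train_dl = torch.utils.data.DataLoader(train, batch_size=128, shuffle=True)
test_dl = torch.utils.data.DataLoader(test, batch_size=256)

# Stand-in model; swap in the binarized network for the real report.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 32, 3, padding=1), torch.nn.ReLU(),
    torch.nn.MaxPool2d(2), torch.nn.Flatten(),
    torch.nn.Linear(32 * 16 * 16, 10),
).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

curve = []  # (epoch, test accuracy): the learning curve to report
for epoch in range(10):
    model.train()
    for xb, yb in train_dl:
        xb, yb = xb.to(device), yb.to(device)
        opt.zero_grad()
        torch.nn.functional.cross_entropy(model(xb), yb).backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        correct = sum((model(xb.to(device)).argmax(1) == yb.to(device)).sum().item()
                      for xb, yb in test_dl)
    curve.append((epoch, correct / len(test)))
    print(curve[-1])
```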

Extensive Reading (Compulsory):

  1. Finish reading the previous articles.
  2. Deep Learning with Limited Numerical Precision [2015 ICML]

Submit:

| # | Name | File |
|----|---------|-------|
| ✔️ | Weitian | week3 |
| ✔️ | Qian | week3 |
| ✔️ | Yuchen | week3 |
| ✔️ | Kexin | week3 |
|  | Yudian |  |
| ✔️ | Yufei | week3 |

Week 4:

Plan:

  1. Summarize the optimization problems and methods for quantized neural networks.
  2. Read the HWGQ code.

Extensive Reading (Compulsory):

  1. Training Quantized Nets: A Deeper Understanding
  2. Deep Learning with Limited Numerical Precision
  3. Deep Learning with Low Precision by Half-wave Gaussian Quantization
  4. Training and Inference with Integers in Deep Neural Networks

Submit:

| # | Name | File |
|----|---------|-------|
| ✔️ | Weitian | week4 |
| ✔️ | Qian | week4 |
| ✔️ | Yuchen | week4 |
| ✔️ | Kexin | week4 |
|  | Yudian |  |
| ✔️ | Yufei | week4 |

Week 5:

Plan:

  1. Reproduce HWGQ with TensorFlow. (If you want to join a paper now, you should do this.) Here is my implementation.
  2. Write the HWGQ layer in CUDA (a Python sketch of the quantizer follows this list).
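
Before writing the CUDA kernel, it helps to pin down what the HWGQ forward pass computes. A rough PyTorch sketch: the 2-bit levels below are illustrative placeholders, not the Gaussian-optimal values fitted in the paper, and the backward pass uses the clipped-ReLU surrogate the paper proposes.

```python
import torch

class HWGQ(torch.autograd.Function):
    """Half-wave Gaussian quantizer: 0 for x <= 0, nearest level for x > 0."""

    # Illustrative 2-bit levels; the paper derives the real values by
    # fitting the quantizer to a unit Gaussian, so these differ.
    LEVELS = torch.tensor([0.5, 1.0, 1.5])

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        lv = HWGQ.LEVELS.to(x.device)
        dist = (x.unsqueeze(-1) - lv).abs()  # distance to each level
        q = lv[dist.argmin(dim=-1)]          # snap to the nearest level
        return torch.where(x > 0, q, torch.zeros_like(x))

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        top = HWGQ.LEVELS[-1].to(x.device)
        # Clipped-ReLU surrogate: gradient 1 on (0, top], 0 elsewhere.
        return grad_out * ((x > 0) & (x <= top)).float()

x = torch.randn(6, requires_grad=True)
print(HWGQ.apply(x))
```

The CUDA version is then an element-wise kernel doing the same threshold-and-snap per activation.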

Extensive Reading (Compulsory):

  1. Training Quantized Nets: A Deeper Understanding
  2. Deep Learning with Limited Numerical Precision
  3. Deep Learning with Low Precision by Half-wave Gaussian Quantization
  4. Training and Inference with Integers in Deep Neural Networks

Submit:

| # | Name | File |
|----|---------|-------|
| ✔️ | Weitian | week5 |
| ✔️ | Qian | week5 |
| ✔️ | Yuchen | week5 |
| 🕒 | Kexin |  |
| 🕒 | Yufei |  |
| 🕒 | Yuhang |  |
| 🕒 | Jieming |  |
| 🕒 | Suxin |  |

Week 6:

Plan:

  1. Understand batch normalization.
  2. How to do batch normalization in a quantized neural network. (Please refer to the paper under Extensive Reading; a folding sketch follows this list.)
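
The usual answer to item 2 in the whitepaper is BN folding: at inference time, batch norm is an affine transform with fixed statistics, so it can be folded into the preceding convolution and only one op has to be quantized. A minimal PyTorch sketch (function name is mine):

```python
import torch

def fold_bn_into_conv(conv: torch.nn.Conv2d, bn: torch.nn.BatchNorm2d):
    """Return a single conv equivalent to bn(conv(x)) at inference time."""
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)  # gamma / sigma
    fused = torch.nn.Conv2d(conv.in_channels, conv.out_channels,
                            conv.kernel_size, stride=conv.stride,
                            padding=conv.padding, bias=True)
    # w' = w * gamma / sigma,  b' = (b - mean) * gamma / sigma + beta
    fused.weight.data = conv.weight * scale.view(-1, 1, 1, 1)
    bias = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.data = (bias - bn.running_mean) * scale + bn.bias
    return fused

conv = torch.nn.Conv2d(3, 8, 3, bias=False)
bn = torch.nn.BatchNorm2d(8).eval()  # eval: use running statistics
x = torch.randn(1, 3, 16, 16)
with torch.no_grad():
    assert torch.allclose(bn(conv(x)), fold_bn_into_conv(conv, bn)(x), atol=1e-5)
```

Simulating the fold during quantized training (as the whitepaper does) is more involved, because the batch statistics are still moving.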

Extensive Reading (Compulsory):

  1. Quantizing deep convolutional networks for efficient inference: A whitepaper [Google]

Submit:

| # | Name | File |
|----|---------|------|
| 🕒 | Weitian |  |
| 🕒 | Qian |  |
| 🕒 | Yuchen |  |
| 🕒 | Kexin |  |
| 🕒 | Yudian |  |