Python is required; C++ will also help you.
PyTorch (or TensorFlow) is required.
Members who want to do FPGA work do not need Python or PyTorch.
Anyone who misses two reports will be removed from this group automatically. ‼️ If you cannot find your name below, please add it yourself.
Please contact @wetian to join our mailing list.
Please join our Google Hangouts group: https://hangouts.google.com/group/27qPkOU4jV2Adh2Z2
Routine of the weekly meeting:
Submit your report before the meeting.
Each member updates the group on the research activities of the past week (2-3 minutes).
Reading group: one person will be the presenter. At this stage, your responsibility is to present published papers on a theme (a story summarized from 3-5 relevant papers).
The presentation lasts half an hour; slides will help you a lot.
Presenters are supposed to upload their slides to GitHub.
Reports should be uploaded here.
Read the XNOR-Net code.
Read the XNOR-Net paper.
Understand the meaning of the XNOR operation and the concept of quantized networks (see the sketch below).
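Why XNOR matters here: once weights and activations are binarized to {-1, +1}, a dot product reduces to an XNOR followed by a popcount over bit-packed sign vectors, which is what makes XNOR-Net fast on binary hardware. A minimal sketch in plain Python (no framework; the bit-packing scheme is only for illustration):

```python
import random

n = 64
a = [random.choice([-1, 1]) for _ in range(n)]
b = [random.choice([-1, 1]) for _ in range(n)]

def pack(v):
    """Bit-pack a +/-1 vector: +1 -> bit 1, -1 -> bit 0."""
    return sum((bit == 1) << i for i, bit in enumerate(v))

A, B = pack(a), pack(b)

# XNOR of the packed words, restricted to the n valid bits.
mask = (1 << n) - 1
agree = ~(A ^ B) & mask

# Each agreeing position contributes +1 to the dot product,
# each disagreeing position contributes -1.
popcount = bin(agree).count("1")
dot_via_xnor = 2 * popcount - n

assert dot_via_xnor == sum(x * y for x, y in zip(a, b))
```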
Extensive Reading (Compulsory):
Overcoming Challenges in Fixed Point Training of Deep Convolutional Networks [Qualcomm Research]
Quantizing deep convolutional networks for efficient inference: A whitepaper [Google]
Understand the code in XNOR-Net/util.py.
Backpropagation of neural networks (link).
Backpropagation of quantized neural networks (see the STE sketch after this list).
TensorFlow version of XNOR-Net.
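On backpropagation through quantizers: the hard sign() has zero gradient almost everywhere, so XNOR-Net-style training relies on the straight-through estimator (STE). A minimal PyTorch sketch of a clipped STE (the [-1, 1] clipping window follows the XNOR-Net/DoReFa convention; this is an illustration, not any repo's exact code):

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Hard binarization; note torch.sign(0) == 0, which real
        # implementations usually remap to +1.
        return x.sign()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # STE: pretend the quantizer was the identity, but zero the
        # gradient where |x| > 1 (the "clipped" straight-through rule).
        return grad_output * (x.abs() <= 1).float()

x = torch.randn(5, requires_grad=True)
BinarizeSTE.apply(x).sum().backward()
print(x.grad)  # 1 where |x| <= 1, else 0
```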
Extensive Reading (Compulsory):
Review of quantization networks [Xijiao University]
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients [Megvii / Face++]
Another version of XNOR-Net (with CUDA code in C++).
Backpropagation of quantized neural networks.
Report the learning curve of XNOR-Net on CIFAR-10 (a plotting sketch follows this list).
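For the learning-curve deliverable, a self-contained matplotlib sketch like the one below is enough; the numbers in `history` are placeholders to replace with your own per-epoch CIFAR-10 accuracies:

```python
import matplotlib.pyplot as plt

# (epoch, train_accuracy, test_accuracy) -- placeholder values.
history = [(1, 0.35, 0.33), (2, 0.48, 0.45), (3, 0.55, 0.51), (4, 0.61, 0.55)]

epochs, train_acc, test_acc = zip(*history)
plt.plot(epochs, train_acc, label="train")
plt.plot(epochs, test_acc, label="test")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.title("XNOR-Net on CIFAR-10")
plt.legend()
plt.savefig("learning_curve.png")
```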
Extensive Reading (Compulsory):
Finish reading the previous articles.
Deep Learning with Limited Numerical Precision [ICML 2015]
Summarize the optimization problems and methods for quantized neural networks (see the XNOR-Net fitting sketch after this list).
Read the HWGQ code.
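A concrete anchor for that summary: the canonical quantization optimization problem in XNOR-Net is fitting a binary tensor plus one scale to the real weights, min over (α, B ∈ {-1, +1}) of ‖W − αB‖², whose closed-form solution is B = sign(W) and α = mean(|W|). A minimal PyTorch sketch:

```python
import torch

W = torch.randn(8, 8)

# Closed-form minimizers of ||W - alpha * B||^2 with B in {-1, +1}.
B = W.sign()
alpha = W.abs().mean()

residual = ((W - alpha * B) ** 2).sum()
print(alpha.item(), residual.item())
```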
Extensive Reading (Compulsory):
Training Quantized Nets: A Deeper Understanding
Deep Learning with Limited Numerical Precision
Deep Learning with Low Precision by Half-wave Gaussian Quantization
Training and Inference with Integers in Deep Neural Networks
Reproduce HWGQ with TensorFlow. (If you want to join a paper now, you should do this.) Here is my implementation.
Write the HWGQ layer in CUDA (a PyTorch reference sketch follows this list).
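Before writing the CUDA kernel, it may help to pin down the layer's semantics in a few lines of PyTorch. A sketch of an HWGQ-style activation: the forward pass quantizes the positive half-wave to fixed levels, and the backward pass uses the clipped-ReLU surrogate gradient the paper proposes. The levels here are illustrative placeholders, not the paper's optimal unit-Gaussian levels:

```python
import torch

# Illustrative quantization levels (plus the implicit 0 for the negative
# half-wave); HWGQ derives the optimal levels for a unit-Gaussian input.
LEVELS = torch.tensor([0.5, 1.0, 1.5])

class HWGQ(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Half-wave: negatives -> 0; positives -> nearest level.
        idx = (x.unsqueeze(-1) - LEVELS).abs().argmin(dim=-1)
        return torch.where(x > 0, LEVELS[idx], torch.zeros_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Clipped-ReLU surrogate gradient: 1 on (0, max level], else 0,
        # which is the paper's fix for the gradient mismatch problem.
        mask = (x > 0) & (x <= LEVELS[-1])
        return grad_output * mask.float()

x = torch.randn(6, requires_grad=True)
y = HWGQ.apply(x)
y.sum().backward()
print(y, x.grad)
```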
Extensive Reading (Compulsory):
Training Quantized Nets: A Deeper Understanding
Deep Learning with Limited Numerical Precision
Deep Learning with Low Precision by Half-wave Gaussian Quantization
Training and Inference with Integers in Deep Neural Networks
| Status | Name | File |
|--------|------|------|
| ✔️ | Weitian | week5 |
| ✔️ | Qian | week5 |
| ✔️ | Yuchen | week5 |
| 🕒 | Kexin | |
| 🕒 | Yufei | |
| 🕒 | Yuhang | |
| 🕒 | Jieming | |
| 🕒 | Suxin | |
Understand batch normalization.
Learn how to do batch normalization in a quantized neural network (please refer to the paper in the extensive reading; a folding sketch follows this list).
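One standard recipe from the whitepaper above is batch-norm folding: for inference, merge BN's scale and shift into the preceding convolution and quantize the folded weights instead. A minimal PyTorch sketch of the folding step (eval-mode BN, i.e. running statistics):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, 3, bias=False)
bn = nn.BatchNorm2d(8)

# Give BN non-trivial statistics so the check below is meaningful.
with torch.no_grad():
    bn.running_mean.uniform_(-1, 1)
    bn.running_var.uniform_(0.5, 2.0)
    bn.weight.uniform_(0.5, 1.5)
    bn.bias.uniform_(-0.5, 0.5)
bn.eval()  # inference mode: use running statistics

# Fold: w' = gamma*w/sqrt(var+eps), b' = beta - gamma*mean/sqrt(var+eps)
scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
folded = nn.Conv2d(3, 8, 3, bias=True)
with torch.no_grad():
    folded.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
    folded.bias.copy_(bn.bias - bn.running_mean * scale)

x = torch.randn(1, 3, 16, 16)
with torch.no_grad():
    assert torch.allclose(bn(conv(x)), folded(x), atol=1e-5)
```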
Extensive Reading (Compulsory):
Quantizing deep convolutional networks for efficient inference: A whitepaper [Google]
| Status | Name | File |
|--------|------|------|
| 🕒 | Weitian | |
| 🕒 | Qian | |
| 🕒 | Yuchen | |
| 🕒 | Kexin | |
| 🕒 | Yudian | |