nnq_cnd_study

nnq_cnd_study stands for Neural Network Quantization & Compact Networks Design Study

MIT License

Neural Network Quantization & Compact Networks Design

This is a study repository of the Facebook group AI Robotics KR.

The study focuses on paper reviews covering deep neural networks, model compression, compact network design, and quantization.

The online study, supported by the AI Robotics KR group, has been running since September 1st.

Prerequisite

  • A basic understanding of deep learning architectures such as DNNs, RNNs, and CNNs is preferred
  • Passion for learning
  • Persistence
  • Motivation

Learning Objectives

  • A deep understanding of deep neural network quantization and compact network design algorithms

How to Study

  • Online Presentation
  • Q & A

Participants:

Slack : @Hwigeon Oh, @Seojin Kim, @DongJunMin, @이경준, @Hyunwoo Kim, @Constant, @임병학, @KimYoungBin, @Sanggun Kim, @martin, @Joh, @김석중, @Yongwoo Kim, @MinSeop Lee, @Woz.D, @inwoong.lee (이인웅), @Hoyeolchoi, @Bochan Kim, @Young Seok Kim, @taehkim, @Seongmock Yoo, @Mike.Oh, @최승호, @Davidlee, @Stella Yang, @sejungkwon, @Jaeyoung Lee, @Hyungjun Kim, @tae-ha, @Jeonghoon.


Contributors:

Main Study Leader: Jeonghoon Kim (GitHub: IntelligenceDatum).

Compact Networks Design Leader: Seo Yeon Stella Yang (GitHub: howtowhy).

Presenters: Jeonghoon Kim, Stella Yang, Sanggun Kim, Hyunwoo Kim, Seojin Kim, Hwigeon Oh, Seokjoong Kim, Martin Hwang, Youngbin Kim, Sang-soo Park, Jaeyoung Lee, Yongwoo Kim, Hyungjun Kim, Sejung Kwon, 이경준, Bochan Kim, 이인웅.


Presentations with Video:

Neural Network Quantization & Compact Network Design Study

Week1: Introduction to NNQ & CND

Title: A Piece of Weight
Presenter: 김정훈 (Jeonghoon Kim)
PPT: https://drive.google.com/open?id=1RQAiIFX7wOUMiZXPCIZbXb_6DtLlV38e
Video: https://youtu.be/pohMFz-uQJ0

Title: Compact Network Design Overview
Presenter: Stella Yang
Video: https://youtu.be/R3pE-pGBbBg
PPT: https://drive.google.com/open?id=1bTy68uO1Ta4tJLYDLA7d6GJfRx1YbcM4

Week2: BNN & MobileNet

Paper: Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1
Presenter: 김정훈 (Jeonghoon Kim)
Video: https://youtu.be/n89CsZpZcNk
PPT: https://drive.google.com/open?id=1DoeGj-goeI5WMIu5LPTQ6aFZ2czl7eNP
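For readers new to the topic, the central trick of BNN training is to binarize weights and activations in the forward pass while updating the underlying real-valued weights via a straight-through estimator (STE). A minimal PyTorch sketch of that idea (illustrative only, not the authors' code; the clipped gradient follows the paper's hard-tanh STE):

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Deterministic binarization with a straight-through estimator.

    Forward: sign(x) in {-1, +1}. Backward: pass the gradient through
    unchanged wherever |x| <= 1, zero elsewhere (the clipped STE
    described in the BNN paper).
    """

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # torch.sign maps 0 to 0, so build {-1, +1} explicitly
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).float()

w = torch.randn(4, requires_grad=True)
wb = BinarizeSTE.apply(w)      # binarized weights in {-1, +1}
wb.sum().backward()            # gradients still reach the real-valued weights
print(wb, w.grad)
```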

Paper: MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Presenter: 김상근 (Sanggun Kim)
Video: https://youtu.be/GyQUBLDQEJI
PPT: https://drive.google.com/open?id=1oQI8Pv7N66pZHflx0CyMIyahhbA-Dce7
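MobileNets build on depthwise separable convolutions, which split a standard convolution into a per-channel spatial filter followed by a 1x1 pointwise channel mixer. A quick PyTorch comparison of parameter counts (channel sizes here are arbitrary, chosen only for illustration):

```python
import torch.nn as nn

def count_params(m):
    return sum(p.numel() for p in m.parameters())

c_in, c_out, k = 64, 128, 3

standard = nn.Conv2d(c_in, c_out, k, padding=1, bias=False)

# Depthwise separable convolution: a per-channel k x k conv (groups=c_in)
# followed by a 1x1 pointwise conv that mixes channels.
depthwise_separable = nn.Sequential(
    nn.Conv2d(c_in, c_in, k, padding=1, groups=c_in, bias=False),
    nn.Conv2d(c_in, c_out, 1, bias=False),
)

print(count_params(standard))             # 64 * 128 * 9       = 73728
print(count_params(depthwise_separable))  # 64 * 9 + 64 * 128  = 8768
```

The ratio works out to roughly 1/c_out + 1/k^2, which is where the paper's 8-9x compute savings for 3x3 kernels comes from.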

Week3: FINN (BNNs Hardware) & MobileNetV2

Paper: FINN: A Framework for Fast, Scalable Binarized Neural Network Inference
Presenter: 김현우 (Hyun-Woo Kim)
Video: https://youtu.be/DjS8wvXaE8c
PPT: https://drive.google.com/file/d/11uj-UaLiOEBIxExpo43OV5l6b4MSFTuD/view?usp=sharing

Paper: MobileNetV2: Inverted Residuals and Linear Bottlenecks
Presenter: 김서진 (Seojin Kim)
Video: not available (study notes to be uploaded)
PPT: https://drive.google.com/file/d/1NTfct371Lpasly8XW7zt7OzOTh87sVLA/view?usp=sharing
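MobileNetV2's building block inverts the usual residual design: expand to a wide representation, filter depthwise, then project back to a narrow linear bottleneck with no activation, with the skip connecting the narrow ends. A simplified PyTorch sketch (stride-1 case only; the channel count and expansion ratio are illustrative):

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Stride-1 MobileNetV2 block: 1x1 expand -> 3x3 depthwise -> 1x1
    linear projection, with a residual between the narrow ends."""

    def __init__(self, c, expand_ratio=6):
        super().__init__()
        hidden = c * expand_ratio
        self.block = nn.Sequential(
            nn.Conv2d(c, hidden, 1, bias=False), nn.BatchNorm2d(hidden), nn.ReLU6(),
            # depthwise 3x3: groups=hidden means one filter per channel
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(),
            # linear bottleneck: no nonlinearity after the projection
            nn.Conv2d(hidden, c, 1, bias=False), nn.BatchNorm2d(c),
        )

    def forward(self, x):
        return x + self.block(x)

print(InvertedResidual(24)(torch.randn(1, 24, 32, 32)).shape)  # (1, 24, 32, 32)
```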

Week4: XNOR-Net & SqueezeNet

Paper: XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
Presenter: 오휘건 (Hwigeon Oh)
Video: https://youtu.be/N6oP-8E5cWA
PPT: https://drive.google.com/open?id=1bz3C-fFVSCrOdnbi-8lf_2NS1yhpGdVO
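XNOR-Net goes one step past plain sign-binarization by scaling each binary filter with a real factor; the least-squares-optimal scale turns out to be the mean absolute value of the filter's weights. A small PyTorch sketch of that weight approximation (inference-side view only; tensor shapes are illustrative):

```python
import torch

def xnor_binarize(weight):
    """Approximate a conv weight W by alpha * sign(W), as in XNOR-Net.

    alpha is the per-output-filter mean of |W|, the closed-form
    least-squares scale for a {-1, +1} binary filter.
    """
    b = torch.where(weight >= 0, torch.ones_like(weight), -torch.ones_like(weight))
    alpha = weight.abs().mean(dim=(1, 2, 3), keepdim=True)  # one scale per filter
    return alpha * b

w = torch.randn(16, 8, 3, 3)           # (out_ch, in_ch, kH, kW)
w_approx = xnor_binarize(w)
print((w - w_approx).pow(2).mean())    # reconstruction error of the approximation
```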

Paper: SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and < 0.5MB model size
Presenter: Martin Hwang
Video: https://youtu.be/eH5O5nDiFoY
PPT: https://drive.google.com/open?id=1HNRhl1lxb7oe0gFsbv9f2fduCr_f_G4O
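The Fire module is the idea behind SqueezeNet's parameter savings: a 1x1 "squeeze" layer shrinks the channel count before parallel 1x1 and 3x3 "expand" layers whose outputs are concatenated. A compact PyTorch sketch (channel sizes roughly follow one of the paper's configurations but are otherwise illustrative):

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """SqueezeNet Fire module: 1x1 squeeze, then parallel 1x1/3x3 expand."""

    def __init__(self, c_in, squeeze, expand):
        super().__init__()
        self.squeeze = nn.Conv2d(c_in, squeeze, 1)
        self.expand1x1 = nn.Conv2d(squeeze, expand, 1)
        self.expand3x3 = nn.Conv2d(squeeze, expand, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        s = self.relu(self.squeeze(x))
        return torch.cat(
            [self.relu(self.expand1x1(s)), self.relu(self.expand3x3(s))], dim=1
        )

y = Fire(96, squeeze=16, expand=64)(torch.randn(1, 96, 56, 56))
print(y.shape)  # torch.Size([1, 128, 56, 56])
```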

Week5: BNN+ & SqueezeNext

Paper: BNN+: Improved Binary Network Training
Presenter: 김영빈 (Youngbin Kim)
Video: https://youtu.be/M7-lBoiFHRI

Paper: SqueezeNext: Hardware-Aware Neural Network Design
Presenter: 박상수 (Sang-soo Park)
Video: https://youtu.be/sbKl92j9Xrs

Week6: Loss-aware Binarization

Paper: Loss-aware Binarization of Deep Networks
Presenter: 이재영 (Jaeyoung Lee)
Video: https://youtu.be/Bs3SVcvr5cA

Week7: Scalpel, Hardware-aware pruning

Paper: Scalpel: Customizing DNN Pruning to the Underlying Hardware Parallelism
Presenter: Constant (Sang-Soo) Park
Video: https://youtu.be/DmCCREJ1zAA

Week8: DoReFa-Net & ShuffleNet

Paper: DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
Presenter: Yongwoo Kim
Video: https://youtu.be/DEnQKMXzx7o
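DoReFa-Net's quantizer is easy to state: map a value in [0, 1] onto one of 2^k uniformly spaced levels, and use a straight-through estimator for the gradient. A sketch, assuming inputs already normalized to [0, 1]:

```python
import torch

def quantize_k(x, k):
    """DoReFa-style k-bit uniform quantizer for x in [0, 1]:
    q = round((2^k - 1) * x) / (2^k - 1).

    The detach trick implements the straight-through estimator: the
    forward pass is quantized, the backward pass is the identity.
    """
    n = float(2 ** k - 1)
    q = torch.round(x * n) / n
    return x + (q - x).detach()

x = torch.linspace(0, 1, 9, requires_grad=True)
print(quantize_k(x, k=2))   # values snapped to {0, 1/3, 2/3, 1}
```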

Paper: ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
Presenter: Jaeyoung Lee
Video: https://youtu.be/l-Q06pAfBHw
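ShuffleNet's channel shuffle is what lets stacked grouped 1x1 convolutions still mix information across groups: reshape, transpose the group axis against the channel axis, and flatten back. A self-contained PyTorch version:

```python
import torch

def channel_shuffle(x, groups):
    """ShuffleNet channel shuffle: view as (N, g, C/g, H, W), swap the
    group and channel axes, flatten back to (N, C, H, W)."""
    n, c, h, w = x.shape
    return x.view(n, groups, c // groups, h, w).transpose(1, 2).reshape(n, c, h, w)

x = torch.arange(8.0).view(1, 8, 1, 1)
print(channel_shuffle(x, groups=2).flatten())  # tensor([0., 4., 1., 5., 2., 6., 3., 7.])
```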

Week9: LQ-Nets & Bi-Real Net

Paper: LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks
Presenter: Hyungjun Kim
Video: https://youtu.be/ca_d03MYeJE

Paper: Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved Representational Capability and Advanced Training Algorithm
Presenter: YoungBin Kim
Video: https://youtu.be/pQmvmcPZHmM

Week10: Quantization & Distillation

Paper: Model Compression via Distillation and Quantization
Presenter: Seokjoong Kim (김민성)
Video: https://youtu.be/xOMuav0UVXg

Week11: Alternating Multi-bit Quantization & DenseNet

Paper: Alternating multi-bit quantization for recurrent neural networks
Presenter: Eunhui Kim
Video: https://youtu.be/iibC1NZv0S4

Paper: DenseNet: Densely Connected Convolutional Networks
Presenter: Kyeong-Jun Lee
Video: https://youtu.be/bhvxLB6Qa60
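DenseNet's defining property is that each layer receives the concatenation of all earlier feature maps and contributes growth_rate new channels. A bare-bones sketch of that connectivity (BatchNorm/ReLU and the paper's bottleneck layers omitted for brevity; sizes are illustrative):

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Dense connectivity: layer i sees all earlier feature maps
    concatenated and appends growth_rate new channels."""

    def __init__(self, c_in, growth_rate, n_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Conv2d(c_in + i * growth_rate, growth_rate, 3, padding=1)
            for i in range(n_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

y = DenseBlock(16, growth_rate=12, n_layers=4)(torch.randn(1, 16, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32]) -- 16 + 4 * 12 channels
```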

Week12

Week13: Defensive Quantization & EfficientNet

Paper: Defensive Quantization: When Efficiency Meets Robustness
Presenter: Bochan Kim
Video: https://youtu.be/7UfmDlLHOFA

Paper: EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
Presenter: Martin Hwang
Video: https://youtu.be/58ZxZSLr_bU
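EfficientNet's compound scaling ties network depth, width, and input resolution to a single coefficient phi via d = alpha^phi, w = beta^phi, r = gamma^phi, under the constraint alpha * beta^2 * gamma^2 ~= 2 so that each increment of phi roughly doubles FLOPs. A few lines of Python to see the numbers (alpha, beta, gamma are the paper's grid-searched values):

```python
# Compound scaling: FLOPs grow roughly as d * w^2 * r^2, so the
# constraint alpha * beta**2 * gamma**2 ~= 2 doubles cost per phi step.
alpha, beta, gamma = 1.2, 1.1, 1.15

for phi in range(4):
    d, w, r = alpha ** phi, beta ** phi, gamma ** phi
    flops_factor = d * w ** 2 * r ** 2
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, "
          f"resolution x{r:.2f}, FLOPs x{flops_factor:.2f}")
```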

Week14: QIL & MobileNetV3

Paper: Learning to Quantize Deep Networks by Optimizing Quantization Intervals with Task Loss
Presenter: 이인웅 (Inwoong Lee)
Video: https://youtu.be/VLyhhcPwxWc

Paper: Searching for MobileNetV3
Presenter: Seo Yeon Stella Yang
Video: https://youtu.be/JPs2Uy9DLO8


Schedule (Presentation List):

Week 1
  1. Introduction (Jeonghoon Kim)
  2. Introduction (Stella Yang)
Week 2
  1. Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1 (Jeonghoon Kim)
  2. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications (Sanggun Kim)
Week 3
  1. FINN: A Framework for Fast, Scalable Binarized Neural Network Inference (Hyunwoo Kim)
  2. MobileNetV2: Inverted Residuals and Linear Bottlenecks (Seojin Kim)
Week 4
  1. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks (Hwigeon Oh)
  2. SqueezeNet: AlexNet-level Accuracy with 50x Fewer Parameters and <0.5MB Model Size (Martin Hwang)
Week 5
  1. BNN+: Improved Binary Network Training (Youngbin Kim)
  2. SqueezeNext: Hardware-Aware Neural Network Design (Sang-soo Park)
Week 6
  1. Loss-aware Binarization of Deep Networks (Jaeyoung Lee)
  2. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding (Sanggun Kim)
Week 7
  1. Loss-aware Weight Quantization of Deep Networks (Youngbin Kim)
  2. Scalpel: Customizing DNN Pruning to the Underlying Hardware Parallelism (Sang-soo Park)
Week 8
  1. DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients (Yongwoo Kim)
  2. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices (Jaeyoung Lee)
Week 9
  1. LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks (Hyungjun Kim)
  2. Model Compression via Distillation and Quantization (Seokjoong Kim)
Week 10
  1. Alternating Multi-bit Quantization for Recurrent Neural Networks (Eunhui Kim)
  2. Densely Connected Convolutional Networks (이경준)
Week 11
  1. TBD (Sejung Kwon)
  2. All You Need is a Few Shifts: Designing Efficient Convolutional Neural Networks for Image Classification (Stella Yang)
Week 12
  1. Analysis of Quantized Models (Bochan Kim)
  2. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (Martin Hwang)
Week 13
  1. Learning to Quantize Deep Networks by Optimizing Quantization Intervals with Task Loss (이인웅)
  2. AMC: AutoML for Model Compression and Acceleration on Mobile Devices (Seokjoong Kim)

References

https://github.com/ai-robotics-kr/nnq_cnd_study/blob/master/AwesomePapers.md