network-compression

There are 32 repositories under the network-compression topic.

  • datawhalechina/leedl-tutorial

    《李宏毅深度学习教程》 (Hung-yi Lee's Deep Learning Tutorial; recommended by Prof. Hung-yi Lee 👍, the "Apple Book" 🍎). PDF download: https://github.com/datawhalechina/leedl-tutorial/releases

    Language: Jupyter Notebook
  • IntelLabs/distiller

    Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller

    Language: Jupyter Notebook
  • quic/aimet

    AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.

    Language: Python
  • clovaai/overhaul-distillation

    Official PyTorch implementation of "A Comprehensive Overhaul of Feature Distillation" (ICCV 2019)

    Language: Python
  • sony/model_optimization

    Model Compression Toolkit (MCT) is an open-source project for neural network model optimization under efficient, constrained hardware. It provides researchers, developers, and engineers with advanced quantization and compression tools for deploying state-of-the-art neural networks.

    Language: Python
  • A-suozhang/awesome-quantization-and-fixed-point-training

    Neural Network Quantization & Low-Bit Fixed Point Training For Hardware-Friendly Algorithm Design

  • jshilong/FisherPruning

    Group Fisher Pruning for Practical Network Compression (ICML 2021)

    Language: Python
  • uber-research/permute-quantize-finetune

    Using ideas from product quantization for state-of-the-art neural network compression.

    Language: Python
  • bhheo/AB_distillation

    Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons (AAAI 2019)

    Language: Python
  • musco-ai/musco-pytorch

    MUSCO: MUlti-Stage COmpression of neural networks

    Language: Jupyter Notebook
  • bhheo/BSS_distillation

    Knowledge Distillation with Adversarial Samples Supporting Decision Boundary (AAAI 2019)

    Language: Python
  • ofsoundof/group_sparsity

    Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression. CVPR 2020.

    Language: Python
  • ofsoundof/dhp

    This is the official implementation of "DHP: Differentiable Meta Pruning via HyperNetworks".

    Language: Python
  • ofsoundof/learning_filter_basis

    PyTorch implementation of "Learning Filter Basis for Convolutional Neural Network Compression" (ICCV 2019)

    Language: Python
  • Zhengyu-Li/Deep-Network-Compression-based-on-Student-Teacher-Network-

    Deep Neural Network Compression based on Student-Teacher Network

    Language: Python
  • pvti/CORING

    :ring: Efficient tensor decomposition-based filter pruning

    Language: Jupyter Notebook
  • yukaikw/Machine-Learning

    Notes and implementations for Prof. Hung-yi Lee's ML 2020 machine learning course

    Language: Python
  • cambridge-mlg/arch_uncert

    Code for "Variational Depth Search in ResNets" (https://arxiv.org/abs/2002.02797)

    Language: Jupyter Notebook
  • malena1906/Pruning-Weights-with-Biobjective-Optimization-Keras

    Overparameterization and overfitting are common concerns when designing and training deep neural networks. Network pruning is an effective strategy for reducing or limiting network complexity, but it often requires time- and compute-intensive procedures to identify the most important connections and the best-performing hyperparameters. We suggest a pruning strategy that is completely integrated into the training process and requires only marginal extra computational cost. The method relies on unstructured weight pruning, re-interpreted within a multiobjective learning approach. A batchwise pruning strategy is compared across different optimization methods, one of which is a multiobjective optimization algorithm. Because this algorithm takes over the choice of the weighting of the objective functions, it greatly reduces the time-consuming hyperparameter search that every neural network training suffers from. Without any a priori training, post-training, or parameter fine-tuning, we achieve large reductions of the dense layers of two commonly used convolutional neural networks (CNNs) with only a marginal loss of performance. Our results empirically demonstrate that dense layers are overparameterized: with up to 98% of their edges removed, they deliver almost the same results. We contradict the view that retraining after pruning is of great importance, and we open new insights into the use of multiobjective optimization techniques in machine learning algorithms within a Keras framework. The Stochastic Multi-Gradient Descent algorithm implementation in Python 3 is for use with Keras and is adapted from the paper by S. Liu and L. N. Vicente, "The stochastic multi-gradient algorithm for multi-objective optimization and its application to supervised machine learning". It is combined with weight pruning strategies to reduce network complexity and inference time. (A minimal magnitude-pruning sketch follows the list below.)

    Language: Python
  • musco-ai/musco-tf

    MUSCO: Multi-Stage COmpression of neural networks

    Language: Python
  • yuxwind/ExactCompression

    [NeurIPS 2021] Official PyTorch Code of Scaling Up Exact Neural Network Compression by ReLU Stability

    Language: Python
  • kartikgupta-at-anu/md-bnn

    Code implementation of our AISTATS'21 paper "Mirror Descent View for Neural Network Quantization"

    Language: Python
  • quic/aimet-pages

    AIMET GitHub pages documentation

    Language: HTML
  • Yifan122/NetworkCompress

    pruning

    Language: Python
  • IU-SAIGE/sparse_mle

    2020 INTERSPEECH, "Sparse Mixture of Local Experts for Efficient Speech Enhancement".

    Language: Python
  • sliming-ai/sliming-ai.github.io

    🧠 Singular values-driven automated filter pruning

    Language: JavaScript
  • MortalHappiness/ML2019SPRING

    Homework for Machine Learning (2019, Spring) at NTU

    Language: Python
  • shaharpit809/Deep-Learning-Models

    This repository consists of applications of deep learning models such as DNNs, CNNs (1D and 2D), RNNs (LSTM and GRU), and variational autoencoders, written from scratch in TensorFlow.

    Language: Jupyter Notebook
  • maxblumental/network-compression

    Language: Jupyter Notebook
  • Hulalazz/Deep-Compression-AlexNet

    Deep Compression on AlexNet

    Language: Python
  • Hulalazz/Embedded-Neural-Network

    A collection of works aiming at reducing model sizes or building ASIC/FPGA accelerators for machine learning
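
A minimal sketch of the pruning-during-training idea described under malena1906/Pruning-Weights-with-Biobjective-Optimization-Keras, referenced above. It shows plain magnitude-based unstructured pruning applied from a Keras callback; the sparsity level, layer selection, and schedule are illustrative assumptions, and the multiobjective (stochastic multi-gradient) weighting that the repository actually uses is not reproduced here.

    # Sketch only (assumptions noted above): zero out the smallest-magnitude
    # weights of Dense layers at the end of each epoch. A fuller implementation
    # would keep a persistent mask so pruned weights cannot regrow between epochs.
    import numpy as np
    import tensorflow as tf

    class MagnitudePruningCallback(tf.keras.callbacks.Callback):
        def __init__(self, target_sparsity=0.9):
            super().__init__()
            self.target_sparsity = target_sparsity  # fraction of weights set to zero

        def on_epoch_end(self, epoch, logs=None):
            for layer in self.model.layers:
                if not isinstance(layer, tf.keras.layers.Dense):
                    continue
                kernel, bias = layer.get_weights()
                k = int(self.target_sparsity * kernel.size)
                if k == 0:
                    continue
                # Prune every weight whose magnitude is at or below the k-th smallest.
                threshold = np.sort(np.abs(kernel), axis=None)[k - 1]
                kernel[np.abs(kernel) <= threshold] = 0.0
                layer.set_weights([kernel, bias])

    # Hypothetical usage on random data, only to show how the callback plugs in.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(784,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    x = np.random.rand(512, 784).astype("float32")
    y = np.random.randint(0, 10, size=(512,))
    model.fit(x, y, epochs=3, batch_size=64,
              callbacks=[MagnitudePruningCallback(target_sparsity=0.9)])

Pruning from a callback keeps the extra cost marginal, since it adds only a sort and a masking step per epoch on top of normal training; how the sparsity target is chosen or traded off against the loss is exactly what the repository's multiobjective approach addresses.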