sparsity
There are 135 repositories under the sparsity topic.
intel/neural-compressor
SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
pytorch/ao
PyTorch native quantization and sparsity for training and inference
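torchao's sparsity support builds on PyTorch's native 2:4 semi-structured sparse tensors. A minimal sketch of that underlying PyTorch API (not torchao's own wrappers), assuming PyTorch >= 2.1 and a CUDA GPU whose tensor cores support sparse kernels:

```python
import torch
from torch.sparse import to_sparse_semi_structured

# Toy linear layer; fp16 on CUDA is required by the semi-structured kernels.
linear = torch.nn.Linear(1024, 1024).half().cuda()

# Enforce the 2:4 pattern: keep the 2 largest-magnitude weights in every group of 4.
w = linear.weight.detach()
groups = w.abs().view(-1, 4)
keep = torch.zeros_like(groups, dtype=torch.bool)
keep.scatter_(1, groups.topk(2, dim=1).indices, True)
pruned = (w.view(-1, 4) * keep).view_as(w)

# Swap in the compressed sparse representation; matmuls now hit sparse kernels.
linear.weight = torch.nn.Parameter(to_sparse_semi_structured(pruned))

x = torch.rand(128, 1024, dtype=torch.float16, device="cuda")
with torch.inference_mode():
    y = linear(x)
print(y.shape)
```

Here the 2:4 pattern is imposed naively by magnitude; in practice accuracy is recovered with pruning-aware fine-tuning, which is what libraries like torchao automate.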
neuralmagic/sparseml
Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
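The "few lines of code" workflow wraps an existing training loop with a recipe. A hedged sketch, assuming SparseML's `ScheduledModifierManager` PyTorch integration and a local `recipe.yaml` containing a pruning schedule (the recipe path, toy model, and epoch count are placeholders):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from sparseml.pytorch.optim import ScheduledModifierManager

# Toy model and data; the recipe (e.g. gradual magnitude pruning) is assumed
# to live in a local recipe.yaml written for this model.
model = torch.nn.Sequential(torch.nn.Linear(20, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
train_loader = DataLoader(TensorDataset(torch.randn(256, 20),
                                        torch.randint(0, 2, (256,))), batch_size=32)

# The manager wraps the optimizer and applies the recipe's modifiers as training runs.
manager = ScheduledModifierManager.from_yaml("recipe.yaml")
optimizer = manager.modify(model, optimizer, steps_per_epoch=len(train_loader))

for epoch in range(10):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()

manager.finalize(model)
```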
vllm-project/llm-compressor
Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM
PaddlePaddle/PaddleSlim
PaddleSlim is an open-source library for deep model compression and architecture search.
tensorflow/model-optimization
A toolkit to optimize ML models for deployment with Keras and TensorFlow, including quantization and pruning.
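For pruning, the toolkit wraps a Keras model and drives sparsity up on a schedule during training. A short sketch using the `tfmot.sparsity.keras` API (the model and schedule values are illustrative):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Toy Keras classifier.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

# Prune to 50% sparsity over the first 1000 training steps.
schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.5, begin_step=0, end_step=1000)
pruned = tfmot.sparsity.keras.prune_low_magnitude(model, pruning_schedule=schedule)

pruned.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# The UpdatePruningStep callback advances the sparsity schedule every batch:
# pruned.fit(x_train, y_train, epochs=2,
#            callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# Strip the pruning wrappers before export so the saved model is plain Keras.
final_model = tfmot.sparsity.keras.strip_pruning(pruned)
```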
openvinotoolkit/nncf
Neural Network Compression Framework for enhanced OpenVINO™ inference
Eric-mingjie/network-slimming
Network Slimming (Pytorch) (ICCV 2017)
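Network slimming induces channel-level sparsity by putting an L1 penalty on BatchNorm scale factors and pruning channels whose factors go to zero. An illustrative PyTorch sketch of the penalty (a re-derivation of the idea, not code from the repository; the penalty strength is an assumed value):

```python
import torch
import torch.nn as nn

L1_LAMBDA = 1e-4  # assumed penalty strength

def add_bn_l1_subgradient(model: nn.Module, lam: float = L1_LAMBDA) -> None:
    """Call after loss.backward(): adds the L1 subgradient on every BatchNorm
    scale factor (gamma), pushing unimportant channels toward zero."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d) and m.weight.grad is not None:
            m.weight.grad.add_(lam * torch.sign(m.weight.detach()))

# Typical training step (model, criterion, optimizer, x, y assumed to exist):
#   loss = criterion(model(x), y)
#   loss.backward()
#   add_bn_l1_subgradient(model)   # channel-sparsifying penalty
#   optimizer.step()
# Channels whose gamma ends up near zero can then be pruned away.
```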
Bobo-y/flexible-yolov5
A more readable and flexible YOLOv5 with additional backbones (GCN, ResNet, ShuffleNet, MobileNet, EfficientNet, HRNet, Swin Transformer, etc.), extra modules (CBAM, DCN, and so on), and TensorRT support
FMInference/H2O
[NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models.
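The heavy-hitter observation: a small set of cached tokens receives most of the attention mass, so the KV cache can keep only those plus a recent window. An illustrative scoring sketch (not the repository's implementation; tensor shapes and the budget split are assumptions):

```python
import torch

def heavy_hitter_keep(attn: torch.Tensor, budget: int, recent: int) -> torch.Tensor:
    """attn: [num_queries_so_far, num_cached_keys] attention probabilities.
    Returns indices of cached tokens to keep: the most-attended "heavy hitters"
    plus the `recent` most recent tokens, `budget` tokens in total."""
    scores = attn.sum(dim=0)             # accumulated attention per cached token
    scores[-recent:] = float("inf")      # always keep the recent window
    keep = torch.topk(scores, min(budget, scores.numel())).indices
    return torch.sort(keep).values

# Example: keep 4 of 8 cached tokens (2 heavy hitters + the last 2).
attn = torch.rand(16, 8).softmax(dim=-1)
print(heavy_hitter_keep(attn, budget=4, recent=2))
```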
wenwei202/caffe
Caffe for Sparse and Low-rank Deep Neural Networks
intel/neural-speed
An innovative library for efficient LLM inference via low-bit quantization
mehtadushy/SelecSLS-Pytorch
Reference ImageNet implementation of SelecSLS CNN architecture proposed in the SIGGRAPH 2020 paper "XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera". The repository also includes code for pruning the model based on implicit sparsity emerging from adaptive gradient descent methods, as detailed in the CVPR 2019 paper "On implicit filter level sparsity in Convolutional Neural Networks".
bwohlberg/sporco
Sparse Optimisation Research Code
dcmocanu/sparse-evolutionary-artificial-neural-networks
Always sparse. Never dense. But never say never. A Sparse Training repository for the Adaptive Sparse Connectivity concept and its algorithmic instantiation, i.e. Sparse Evolutionary Training, to boost Deep Learning scalability on various aspects (e.g. memory and computational time efficiency, representation and generalization power).
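Sparse Evolutionary Training alternates pruning the weakest connections with regrowing random new ones, so the network stays sparse throughout training. An illustrative prune-and-regrow step for a 2-D weight matrix (a sketch of the concept, not the repository's code):

```python
import torch

def set_step(weight: torch.Tensor, mask: torch.Tensor, zeta: float = 0.3) -> torch.Tensor:
    """One prune-and-regrow step: drop the zeta fraction of smallest-magnitude
    active connections, regrow as many at random inactive positions (the
    original SET re-initializes regrown weights to small random values; here
    they simply restart from zero)."""
    n_drop = int(zeta * int(mask.sum()))
    if n_drop == 0:
        return mask
    threshold = torch.topk(weight[mask].abs(), n_drop, largest=False).values.max()
    new_mask = mask & (weight.abs() > threshold)
    inactive = (~new_mask).nonzero(as_tuple=False)
    grown = inactive[torch.randperm(inactive.size(0))[:n_drop]]
    new_mask[grown[:, 0], grown[:, 1]] = True
    weight.data.mul_(new_mask)  # zero out the dropped connections
    return new_mask

# Example: a layer kept at ~10% density, updated once per epoch after the weight step.
w = torch.nn.Parameter(torch.randn(64, 32))
mask = torch.rand(64, 32) < 0.10
w.data.mul_(mask)
mask = set_step(w, mask)
```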
SYSU-SAIL/SMSR
[CVPR 2021] Exploring Sparsity in Image Super-Resolution for Efficient Inference
IntelLabs/SkimCaffe
Caffe for Sparse Convolutional Neural Network
vene/sparse-structured-attention
Sparse and structured neural attention mechanisms
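A representative mechanism from this line of work is sparsemax, which projects logits onto the probability simplex and can assign exactly zero attention weight. A plain PyTorch re-derivation (not code taken from the repository):

```python
import torch

def sparsemax(z: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Sparsemax (Martins & Astudillo, 2016): Euclidean projection of the
    logits onto the simplex, which zeroes out low-scoring entries."""
    z_sorted, _ = torch.sort(z, dim=dim, descending=True)
    k = torch.arange(1, z.size(dim) + 1, device=z.device, dtype=z.dtype)
    shape = [1] * z.dim()
    shape[dim] = -1
    k = k.view(shape)
    z_cumsum = z_sorted.cumsum(dim)
    # Support size: largest k such that 1 + k * z_(k) > sum of the top-k entries.
    support = (1 + k * z_sorted) > z_cumsum
    k_z = support.sum(dim=dim, keepdim=True).to(z.dtype)
    tau = (torch.gather(z_cumsum, dim, k_z.long() - 1) - 1) / k_z
    return torch.clamp(z - tau, min=0)

# Example: most of the attention weights come out exactly zero.
print(sparsemax(torch.tensor([3.0, 1.0, 0.2, -1.0])))  # -> [1., 0., 0., 0.]
```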
jack-willturner/deep-compression
Learning both Weights and Connections for Efficient Neural Networks https://arxiv.org/abs/1506.02626
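The paper's pruning stage removes small-magnitude connections and then retrains. A sketch of the same magnitude-pruning idea using PyTorch's built-in `torch.nn.utils.prune` utilities rather than this repository's code:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy LeNet-300-100-style classifier.
model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(),
                      nn.Linear(300, 100), nn.ReLU(),
                      nn.Linear(100, 10))

# Prune the 80% smallest-magnitude weights in each linear layer.
for m in model:
    if isinstance(m, nn.Linear):
        prune.l1_unstructured(m, name="weight", amount=0.8)

# ... retrain here with the masks in place, then make the pruning permanent:
for m in model:
    if isinstance(m, nn.Linear):
        prune.remove(m, "weight")

zeros = sum((m.weight == 0).sum().item() for m in model if isinstance(m, nn.Linear))
total = sum(m.weight.numel() for m in model if isinstance(m, nn.Linear))
print(f"global weight sparsity: {zeros / total:.2%}")
```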
lucaslie/torchprune
A research library for PyTorch-based neural network pruning, compression, and more.
NVIDIA-AI-IOT/clip-distillation
Zero-label image classification via OpenCLIP knowledge distillation
openvinotoolkit/mmdetection
OpenVINO Training Extensions Object Detection
RAIVNLab/STR
Soft Threshold Weight Reparameterization for Learnable Sparsity
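STR makes the pruning threshold itself a trainable parameter: the effective weight is sign(W) * relu(|W| - g(s)) with a learnable s, so each layer learns its own sparsity level. An illustrative layer (a re-derivation of the idea; the choice g = sigmoid and the initialization are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class STRLinear(nn.Module):
    """Linear layer with a learnable soft threshold: weights whose magnitude
    falls below g(s) become exactly zero in the forward pass."""

    def __init__(self, in_features: int, out_features: int, s_init: float = -10.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.s = nn.Parameter(torch.tensor(s_init))  # threshold parameter (assumed init)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        threshold = torch.sigmoid(self.s)  # g(s); sigmoid is the choice assumed here
        w = torch.sign(self.weight) * F.relu(self.weight.abs() - threshold)
        return F.linear(x, w, self.bias)

layer = STRLinear(16, 8)
print(layer(torch.randn(4, 16)).shape)
```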
OpenSparseLLMs/LLaMA-MoE-v2
🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training
wenwei202/iss-rnns
Sparse Recurrent Neural Networks -- Pruning Connections and Hidden Sizes (TensorFlow)
luuyin/OWL
Official Pytorch Implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity"
vene/pyowl
Ordered Weighted L1 regularization for classification and regression in Python
adrhill/SparseConnectivityTracer.jl
Fast operator-overloading Jacobian & Hessian sparsity detection.
rajmic/declipping2020_codes
Code and data accompanying the article "A Survey and an Extensive Evaluation of Popular Audio Declipping Methods", and other closely related work
satabios/sconce
E2E AutoML Model Compression Package
Shiweiliuiiiiiii/In-Time-Over-Parameterization
[ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, Decebal Constantin Mocanu, Mykola Pechenizkiy
SIP-AAU/Magni
A package for AFM image reconstruction and compressed sensing in general
MingSun-Tse/Why-the-State-of-Pruning-so-Confusing
[Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning
VITA-Group/Sparsity-Win-Robust-Generalization
[ICLR 2022] "Sparsity Winning Twice: Better Robust Generalization from More Efficient Training" by Tianlong Chen*, Zhenyu Zhang*, Pengjun Wang*, Santosh Balachandra*, Haoyu Ma*, Zehao Wang, Zhangyang Wang
RabadanLab/randomly
A Library for Denoising Single-Cell Data with Random Matrix Theory