low-rank-approximation
There are 65 repositories under the low-rank-approximation topic.
wenwei202/caffe
Caffe for Sparse and Low-rank Deep Neural Networks
je-suis-tm/machine-learning
Python machine learning applications in image processing, recommender systems, matrix completion, the Netflix problem, and algorithm implementations including Co-clustering, Funk SVD, SVD++, Non-negative Matrix Factorization, Koren Neighborhood Model, Koren Integrated Model, Dawid-Skene, Platt-Burges, Expectation Maximization, Factor Analysis, ISTA, FISTA, ADMM, Gaussian Mixture Model, OPTICS, DBSCAN, Random Forest, Decision Tree, Support Vector Machine, Independent Component Analysis, Latent Semantic Indexing, Principal Component Analysis, Singular Value Decomposition, K Nearest Neighbors, K Means, Naïve Bayes Mixture Model, Gaussian Discriminant Analysis, Newton Method, Coordinate Descent, Gradient Descent, Elastic Net Regression, Ridge Regression, Lasso Regression, Least Squares, Logistic Regression, Linear Regression
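As a taste of the matrix-factorization family listed above, here is a minimal NumPy sketch of the Funk SVD idea: factor a partially observed ratings matrix into two low-rank factors by stochastic gradient descent on the observed entries. The function name, hyperparameters, and toy matrix are illustrative and not taken from the repository.

```python
import numpy as np

def funk_svd(R, rank=2, lr=0.01, reg=0.02, epochs=200, seed=0):
    """Factorize a ratings matrix R ~= P @ Q.T by SGD on observed entries.

    Zeros in R mark missing ratings; only observed entries contribute to the
    squared-error loss, with L2 regularization on both factors.
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = 0.1 * rng.standard_normal((n_users, rank))
    Q = 0.1 * rng.standard_normal((n_items, rank))
    users, items = np.nonzero(R)                # indices of observed ratings
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]
            P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                          Q[i] + lr * (err * P[u] - reg * Q[i]))
    return P, Q

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4]], dtype=float)
P, Q = funk_svd(R, rank=2)
print(np.round(P @ Q.T, 2))   # dense low-rank reconstruction, fills the zeros
```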
lixilinx/psgd_torch
PyTorch implementation of preconditioned stochastic gradient descent (Kronecker-product and affine preconditioners, a low-rank approximation preconditioner, and more)
csjunxu/MCWNNM-ICCV2017
Multi-channel Weighted Nuclear Norm Minimization for Real Color Image Denoising, ICCV 2017.
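The core building block behind nuclear-norm methods like the one above is singular value thresholding, the proximal operator of the nuclear norm. The sketch below shows the plain, unweighted operator on a toy denoising problem; it is not the multi-channel weighted algorithm from the paper, and the threshold `tau` is an arbitrary illustrative choice.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * nuclear norm at X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Shrink the singular values of a noisy low-rank matrix.
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))  # rank-5 signal
noisy = L + 0.3 * rng.standard_normal(L.shape)
denoised = svt(noisy, tau=3.0)
print(np.linalg.norm(noisy - L), np.linalg.norm(denoised - L))
```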
brian6091/Dreambooth
Fine-tuning of diffusion models
rockerBOO/lora-inspector
LoRA (Low-Rank Adaptation) inspector for Stable Diffusion
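For context, LoRA itself amounts to adding a trainable low-rank update to a frozen weight matrix. The following PyTorch sketch shows that idea for a single linear layer (W plus (alpha/r) * B @ A); the class name and defaults are illustrative and not part of the inspector tool.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W + (alpha/r) * B @ A."""

    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pretrained weights
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(nn.Linear(768, 768), r=4)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # trainable params only
```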
vortex-exoplanet/VIP
VIP is a Python package/library for angular, reference star and spectral differential imaging for exoplanet/disk detection through high-contrast imaging.
ofsoundof/group_sparsity
Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression. CVPR2020.
AndreiChertkov/teneva
A framework based on the tensor train decomposition for working with multivariate functions and multidimensional arrays
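A tensor-train decomposition of the kind teneva builds on can be computed with the classic TT-SVD sweep of sequential truncated SVDs. Below is a minimal NumPy sketch of that algorithm on a small random tensor; it does not use teneva's API, and the rank cap is an arbitrary example value.

```python
import numpy as np

def tt_svd(T, max_rank=8):
    """Decompose a dense tensor into tensor-train (TT) cores via sequential truncated SVDs."""
    shape, d = T.shape, T.ndim
    cores, r_prev, C = [], 1, T.copy()
    for k in range(d - 1):
        C = C.reshape(r_prev * shape[k], -1)             # unfold the remaining tensor
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = s[:r, None] * Vt[:r]                         # carry the remainder forward
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into a dense tensor (to check the approximation)."""
    G = cores[0]
    for core in cores[1:]:
        G = np.tensordot(G, core, axes=([-1], [0]))
    return G.squeeze(axis=(0, -1))

T = np.random.default_rng(0).standard_normal((4, 5, 6, 7))
cores = tt_svd(T, max_rank=3)
print([c.shape for c in cores])
# A random tensor is essentially full-rank, so the truncated error is nonzero.
print(np.linalg.norm(T - tt_reconstruct(cores)) / np.linalg.norm(T))
```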
lixilinx/psgd_tf
TensorFlow implementation of preconditioned stochastic gradient descent
pashtari/factorizer
PyTorch implementation of Factorizer.
ecrc/hicma
HiCMA: Hierarchical Computations on Manycore Architectures
garyfanhku/Galore-pytorch
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
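The gist of gradient low-rank projection is to keep optimizer state in a small subspace spanned by the gradient's top singular vectors. Here is a heavily simplified single-step PyTorch illustration of that idea; it omits the periodic subspace refresh, per-layer handling, and optimizer integration of the actual GaLore method.

```python
import torch

def low_rank_project(grad: torch.Tensor, rank: int = 4):
    """Return a projector P (top-`rank` left singular vectors) and the projected gradient."""
    U, _, _ = torch.linalg.svd(grad, full_matrices=False)
    P = U[:, :rank]                    # m x r orthonormal basis
    return P, P.t() @ grad             # r x n compact gradient

# Toy step: optimizer state would live in the small r x n space instead of m x n.
W = torch.randn(256, 128)
G = torch.randn_like(W)                # stand-in for the gradient of the loss
P, G_small = low_rank_project(G, rank=4)
W -= 1e-2 * (P @ G_small)              # project back to full size for the update
```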
twinkle0331/Xcompression
[ICLR 2022] Code for paper "Exploring Extreme Parameter Compression for Pre-trained Language Models"(https://arxiv.org/abs/2205.10036)
ofsoundof/learning_filter_basis
PyTorch implementation of "Learning Filter Basis for Convolutional Neural Network Compression", ICCV 2019
DavisLaboratory/msImpute
Methods for label-free mass spectrometry proteomics
AndreiChertkov/fpcross
Solver in the low-rank tensor train format with a cross approximation approach for the multidimensional Fokker-Planck equation
pashtari/lrf
PyTorch implementation of low-rank factorization (LRF) methods for data compression
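The simplest instance of low-rank factorization for compression is storing a matrix as two skinny factors obtained from a truncated SVD. The sketch below reports the storage ratio and reconstruction error on a random matrix; it is a generic illustration, not the specific LRF methods implemented in the repository.

```python
import numpy as np

def lrf_compress(X, rank):
    """Store X approximately as two skinny factors A (m x r) and B (r x n)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    A = U[:, :rank] * s[:rank]         # fold singular values into the left factor
    B = Vt[:rank]
    return A, B

X = np.random.default_rng(1).standard_normal((512, 256))
A, B = lrf_compress(X, rank=16)
ratio = (A.size + B.size) / X.size     # fraction of the original storage
err = np.linalg.norm(X - A @ B) / np.linalg.norm(X)
print(f"storage: {ratio:.2%}, relative error: {err:.3f}")
```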
degleris1/CMF.jl
Convolutive Matrix Factorization in Julia
UniPD-DII-ETCOMP/DenseMatrixMarket
Dense Matrix Market
loyalliu/MS-HTC
Multi-slice MR Reconstruction with Low-Rank Tensor Completion
roland1993/d_RPCA
Deformable Groupwise Image Registration using Low-Rank and Sparse Decomposition
kingofspace0wzz/multilayer_nmf
My experiment with multilayer NMF, a deep neural network in which the first several layers use Semi-NMF as a pseudo-activation function to find the latent structure embedded in the original data in an unsupervised manner.
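For a concrete picture of the layered factorization, here is a toy two-layer sketch using plain NMF from scikit-learn (not Semi-NMF): X ≈ W1 H1, then H1 ≈ W2 H2, giving X ≈ W1 W2 H2. All sizes and component counts are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.decomposition import NMF

# Stack two NMF layers: X ~= W1 @ H1, then H1 ~= W2 @ H2, so X ~= W1 @ W2 @ H2.
X = np.abs(np.random.default_rng(0).standard_normal((100, 40)))

layer1 = NMF(n_components=20, init="nndsvda", max_iter=500, random_state=0)
W1 = layer1.fit_transform(X)          # 100 x 20
H1 = layer1.components_               # 20 x 40

layer2 = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
W2 = layer2.fit_transform(H1)         # 20 x 5
H2 = layer2.components_               # 5 x 40

recon = W1 @ W2 @ H2
print(np.linalg.norm(X - recon) / np.linalg.norm(X))   # relative reconstruction error
```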
mpimd-csc/Structure-preserving_STTM
This repository contains MATLAB files implementing the work proposed in the paper "Efficient Structure-preserving Support Tensor Train Machine".
musco-ai/musco-tf
MUSCO: Multi-Stage COmpression of neural networks
hi-paris/Lowrankdensity
Low-rank density estimation
ecrc/stars-h
Software for Testing Accuracy, Reliability and Scalability of Hierarchical computations.
loyalliu/MS-HTC2
Calibrationless Multi-Slice Cartesian MRI via Orthogonally Alternating Phase Encoding Direction and Joint Low-Rank Tensor Completion
ztanml/arLMM
Approximate Ridge Linear Mixed Models (arLMM)
anishacharya/Online-Embedding-Compression-AAAI-2019
Deep learning models have become the state of the art for natural language processing (NLP) tasks; however, deploying these models in production systems poses significant memory constraints. Existing compression methods are either lossy or introduce significant latency. We propose a compression method that leverages low-rank matrix factorization during training to compress the word embedding layer, which represents the size bottleneck for most NLP models. Our models are trained, compressed, and then further re-trained on the downstream task to recover accuracy while maintaining the reduced size. Empirically, we show that the proposed method can achieve 90% compression with minimal impact on accuracy for sentence classification tasks, and outperforms alternative methods like fixed-point quantization or offline word embedding compression. We also analyze the inference time and storage space for our method through FLOP calculations, showing that we can compress DNN models by a configurable ratio and recover the accuracy loss without introducing additional latency compared to fixed-point quantization. Finally, we introduce a novel learning rate schedule, the Cyclically Annealed Learning Rate (CALR), which we empirically demonstrate outperforms other popular adaptive learning rate algorithms on a sentence classification benchmark.
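To illustrate the central idea of the abstract, the sketch below factorizes a PyTorch embedding matrix into a V x r embedding followed by an r x d linear projection using a truncated SVD. This is a generic post-hoc factorization, not the paper's training-time procedure, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

def factorize_embedding(emb: nn.Embedding, rank: int) -> nn.Sequential:
    """Replace a V x d embedding with a V x r embedding plus an r x d projection."""
    W = emb.weight.data                            # V x d
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    small = nn.Embedding(emb.num_embeddings, rank)
    small.weight.data = U[:, :rank] * S[:rank]     # V x r
    proj = nn.Linear(rank, emb.embedding_dim, bias=False)
    proj.weight.data = Vh[:rank].t().contiguous()  # d x r
    return nn.Sequential(small, proj)

emb = nn.Embedding(30000, 300)
compressed = factorize_embedding(emb, rank=64)
n_full = sum(p.numel() for p in emb.parameters())
n_small = sum(p.numel() for p in compressed.parameters())
print(f"parameters: {n_full} -> {n_small} ({n_small / n_full:.1%})")
```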
IST-DASLab/EFCP
This repository contains code to reproduce the experiments from our paper "Error Feedback Can Accurately Compress Preconditioners".
mzalaya/collectivemc
Implementation of "Collective Matrix Completion" by Mokhtar Z. Alaya and Olga Klopp (https://arxiv.org/abs/1807.09010)
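A standard baseline for this kind of matrix completion is SoftImpute-style iterative singular-value shrinkage. The NumPy sketch below recovers a low-rank matrix from roughly half of its entries; it is not the collective (multi-source) method of the paper, and `tau` and the iteration count are arbitrary example values.

```python
import numpy as np

def soft_impute(X_obs, mask, tau=5.0, iters=100):
    """Fill missing entries by repeated singular-value shrinkage (SoftImpute-style)."""
    Z = np.where(mask, X_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        L = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # low-rank estimate
        Z = np.where(mask, X_obs, L)                     # keep observed entries fixed
    return L

rng = np.random.default_rng(0)
M = rng.standard_normal((60, 4)) @ rng.standard_normal((4, 50))  # rank-4 ground truth
mask = rng.random(M.shape) < 0.5                                 # observe ~50% of entries
M_hat = soft_impute(M, mask)
print(np.linalg.norm((M_hat - M)[~mask]) / np.linalg.norm(M[~mask]))  # error on missing entries
```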
FMatti/ACA-SPSD
Adaptive cross approximation (ACA) algorithms for symmetric positive semi-definite (SPSD) matrices.
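For SPSD matrices, ACA with diagonal pivoting coincides with the greedy pivoted Cholesky factorization. The sketch below is a minimal NumPy version of that scheme on a smooth Gaussian kernel matrix; it is a generic illustration rather than the repository's algorithms, and the kernel and rank cap are arbitrary choices.

```python
import numpy as np

def pivoted_cholesky(K, max_rank=10, tol=1e-10):
    """Greedy low-rank factor L with K ~= L @ L.T, pivoting on the largest residual diagonal.

    For SPSD matrices this diagonal-pivoted scheme matches ACA-style cross
    approximation; only `max_rank` columns of K are ever touched.
    """
    n = K.shape[0]
    d = np.diag(K).astype(float).copy()      # residual diagonal
    L = np.zeros((n, max_rank))
    for m in range(max_rank):
        i = int(np.argmax(d))
        if d[i] <= tol:
            return L[:, :m]
        L[:, m] = (K[:, i] - L[:, :m] @ L[i, :m]) / np.sqrt(d[i])
        d -= L[:, m] ** 2
    return L

# Example on a smooth (numerically low-rank) Gaussian kernel matrix.
x = np.linspace(0, 1, 200)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)
L = pivoted_cholesky(K, max_rank=15)
print(np.linalg.norm(K - L @ L.T) / np.linalg.norm(K))
```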
gdikov/stochastic-segmentation-networks
Tutorial reimplementation of Monteiro et al. (2020) on a toy problem.
kaylode/rec-sys
Introducing traditional algorithms in recommendation systems.