low-rank-approximation

There are 65 repositories under the low-rank-approximation topic.

  • wenwei202/caffe

    Caffe for Sparse and Low-rank Deep Neural Networks

    Language: C++
  • je-suis-tm/machine-learning

    Python machine learning applications in image processing, recommender systems, matrix completion, the Netflix problem, and algorithm implementations including Co-clustering, Funk SVD, SVD++, Non-negative Matrix Factorization, Koren Neighborhood Model, Koren Integrated Model, Dawid-Skene, Platt-Burges, Expectation Maximization, Factor Analysis, ISTA, FISTA, ADMM, Gaussian Mixture Model, OPTICS, DBSCAN, Random Forest, Decision Tree, Support Vector Machine, Independent Component Analysis, Latent Semantic Indexing, Principal Component Analysis, Singular Value Decomposition, K Nearest Neighbors, K Means, Naïve Bayes Mixture Model, Gaussian Discriminant Analysis, Newton Method, Coordinate Descent, Gradient Descent, Elastic Net Regression, Ridge Regression, Lasso Regression, Least Squares, Logistic Regression, Linear Regression

    Language: Jupyter Notebook
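
Several of the methods listed in this entry (Funk SVD, SVD++, PCA, matrix completion) reduce to a rank-k truncated SVD at their core. The sketch below is a plain NumPy illustration, not code from the repository: it builds the Eckart-Young best rank-k approximation and checks that its Frobenius error equals the norm of the discarded singular values.

```python
import numpy as np

def truncated_svd_approx(A, k):
    """Best rank-k approximation of A in the Frobenius norm
    (Eckart-Young), built from the top-k singular triplets."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Toy check: the approximation error equals the norm of the discarded singular values.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 80))
A10 = truncated_svd_approx(A, k=10)
s = np.linalg.svd(A, compute_uv=False)
print(np.linalg.norm(A - A10), np.sqrt(np.sum(s[10:] ** 2)))  # the two numbers match
```
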
  • lixilinx/psgd_torch

    PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioners, low-rank approximation preconditioner, and more)

    Language: Python
  • csjunxu/MCWNNM-ICCV2017

    Multi-channel Weighted Nuclear Norm Minimization for Real Color Image Denoising, ICCV 2017.

    Language: MATLAB
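
MCWNNM weights the nuclear norm per color channel; the repository's MATLAB code is the reference for that method. As a hedged illustration of the unweighted building block it extends, the NumPy sketch below implements soft singular-value thresholding, the proximal operator of the plain nuclear norm.

```python
import numpy as np

def svt(Y, tau):
    """Proximal operator of tau * nuclear norm:
    soft-threshold the singular values of Y by tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Illustration: one shrinkage step on a noisy low-rank matrix.
rng = np.random.default_rng(1)
L = rng.standard_normal((64, 5)) @ rng.standard_normal((5, 48))  # rank-5 signal
Y = L + 0.1 * rng.standard_normal((64, 48))                      # noisy observation
X = svt(Y, tau=1.0)
print(np.linalg.norm(X - L), np.linalg.norm(Y - L))  # shrinkage typically reduces the error
```
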
  • brian6091/Dreambooth

    Fine-tuning of diffusion models

    Language: Python
  • rockerBOO/lora-inspector

    LoRA (Low-Rank Adaptation) inspector for Stable Diffusion

    Language: Python
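
The LoRA files this inspector reads store low-rank update factors for model weights. The PyTorch sketch below is a generic illustration of the underlying idea (a frozen linear layer plus a trainable update B @ A scaled by alpha/r); it is not the inspector's code and does not reflect the exact Stable Diffusion layer layout.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base Linear layer plus a trainable low-rank update B @ A,
    scaled by alpha / r as in the LoRA formulation."""
    def __init__(self, in_features, out_features, r=4, alpha=1.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad_(False)                           # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))   # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(128, 64, r=8)
print(layer(torch.randn(2, 128)).shape)  # torch.Size([2, 64])
```
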
  • vortex-exoplanet/VIP

    VIP is a Python package/library for angular, reference star and spectral differential imaging for exoplanet/disk detection through high-contrast imaging.

    Language: Python
  • ofsoundof/group_sparsity

    Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression. CVPR 2020.

    Language: Python
  • AndreiChertkov/teneva

    A framework based on the tensor train decomposition for working with multivariate functions and multidimensional arrays

    Language: Python
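
teneva works with multivariate functions and multidimensional arrays in the tensor-train (TT) format. As a rough NumPy sketch of the format itself (not teneva's API), the classical TT-SVD below converts a dense tensor into TT cores by sequential truncated SVDs and verifies the reconstruction on a tensor whose TT-rank is 2.

```python
import numpy as np

def tt_svd(T, max_rank):
    """Classical TT-SVD: decompose a dense tensor into tensor-train cores
    by sequential truncated SVDs, with all TT-ranks capped at max_rank."""
    shape, d = T.shape, T.ndim
    cores, r_prev = [], 1
    M = T.reshape(shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        M = (np.diag(s[:r]) @ Vt[:r, :]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into a full tensor (for checking small examples)."""
    full = cores[0]
    for G in cores[1:]:
        full = np.tensordot(full, G, axes=([full.ndim - 1], [0]))
    return full.squeeze(axis=(0, -1))

# sin(i + j + k) has TT-rank 2, so a rank cap of 2 reconstructs it almost exactly.
T = np.fromfunction(lambda i, j, k: np.sin(i + j + k), (8, 9, 10))
cores = tt_svd(T, max_rank=2)
print(np.linalg.norm(tt_to_full(cores) - T))  # close to zero
```
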
  • lixilinx/psgd_tf

    TensorFlow implementation of preconditioned stochastic gradient descent

    Language: Python
  • pashtari/factorizer

    PyTorch implementation of Factorizer.

    Language: Python
  • ecrc/hicma

    HiCMA: Hierarchical Computations on Manycore Architectures

    Language: Jupyter Notebook
  • garyfanhku/Galore-pytorch

    GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection

    Language: Python
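
As its title suggests, GaLore keeps the weights full-rank but stores the optimizer state in a low-rank projection of the gradient. The NumPy sketch below is an illustrative single SGD-style step under that idea, with the projection basis recomputed on every call; the actual repository integrates the projection into PyTorch optimizers and refreshes the basis only periodically.

```python
import numpy as np

def galore_style_step(W, grad, lr=1e-2, rank=4):
    """Illustrative low-rank gradient-projection step (not the official GaLore code):
    project the gradient onto its top-`rank` left singular subspace, take a plain
    SGD step in that subspace, then map the update back to the full weight shape."""
    U, _, _ = np.linalg.svd(grad, full_matrices=False)
    P = U[:, :rank]                    # projection basis (refreshed every call here)
    low_rank_grad = P.T @ grad         # rank x n: all the optimizer state that is kept
    return W - P @ (lr * low_rank_grad)

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128))
G = rng.standard_normal((256, 128))
print(galore_style_step(W, G).shape)   # (256, 128): the weights stay full-rank
```
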
  • twinkle0331/Xcompression

    [ICLR 2022] Code for the paper "Exploring Extreme Parameter Compression for Pre-trained Language Models" (https://arxiv.org/abs/2205.10036)

    Language: Python
  • ofsoundof/learning_filter_basis

    PyTorch implementation of "Learning Filter Basis for Convolutional Neural Network Compression", ICCV 2019

    Language: Python
  • DavisLaboratory/msImpute

    Methods for label-free mass spectrometry proteomics

    Language: R
  • AndreiChertkov/fpcross

    Solver in the low-rank tensor train format with a cross approximation approach for the multidimensional Fokker-Planck equation

    Language: Python
  • pashtari/lrf

    PyTorch implementation of low-rank factorization (LRF) methods for data compression

    Language: Jupyter Notebook
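
A practical question in any low-rank-factorization compressor is how to choose the rank. The NumPy sketch below is a generic heuristic, not this repository's API: it picks the smallest rank whose truncated SVD retains a target fraction of the squared Frobenius norm and compares the resulting storage cost.

```python
import numpy as np

def rank_for_energy(A, energy=0.95):
    """Smallest rank k whose truncated SVD keeps the given fraction of the
    squared Frobenius norm of A (a common heuristic for picking the rank)."""
    s = np.linalg.svd(A, compute_uv=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(cum, energy)) + 1

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 100)) @ np.diag(0.7 ** np.arange(100))  # rapidly decaying spectrum
k = rank_for_energy(A, 0.95)
print(k, "factors store", k * sum(A.shape), "numbers instead of", A.size)
```
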
  • degleris1/CMF.jl

    Convolutive Matrix Factorization in Julia

    Language: Jupyter Notebook
  • UniPD-DII-ETCOMP/DenseMatrixMarket

    Dense Matrix Market

    Language: MATLAB
  • loyalliu/MS-HTC

    Multi-slice MR Reconstruction with Low-Rank Tensor Completion

    Language: MATLAB
  • roland1993/d_RPCA

    Deformable Groupwise Image Registration using Low-Rank and Sparse Decomposition

    Language: MATLAB
  • kingofspace0wzz/multilayer_nmf

    My experiment with multilayer NMF, a deep neural network in which the first several layers use Semi-NMF as a pseudo-activation function that finds the latent structure embedded in the original data in an unsupervised way.

    Language: Python
  • mpimd-csc/Structure-preserving_STTM

    This repository contains MATLAB files for the implementation of work proposed in the paper Efficient Structure-preserving Support Tensor Train Machine.

    Language: HTML
  • musco-ai/musco-tf

    MUSCO: Multi-Stage COmpression of neural networks

    Language: Python
  • hi-paris/Lowrankdensity

    Lowrankdensity

    Language: Python
  • ecrc/stars-h

    Software for Testing Accuracy, Reliability and Scalability of Hierarchical computations.

    Language: C
  • loyalliu/MS-HTC2

    Calibrationless Multi-Slice Cartesian MRI via Orthogonally Alternating Phase Encoding Direction and Joint Low-Rank Tensor Completion

    Language: MATLAB
  • ztanml/arLMM

    Approximate Ridge Linear Mixed Models (arLMM)

    Language: C
  • anishacharya/Online-Embedding-Compression-AAAI-2019

    Deep learning models have become state of the art for natural language processing (NLP) tasks; however, deploying these models in production systems poses significant memory constraints. Existing compression methods are either lossy or introduce significant latency. We propose a compression method that leverages low-rank matrix factorization during training to compress the word embedding layer, which represents the size bottleneck for most NLP models. Our models are trained, compressed and then further re-trained on the downstream task to recover accuracy while maintaining the reduced size. Empirically, we show that the proposed method can achieve 90% compression with minimal impact on accuracy for sentence classification tasks, and outperforms alternative methods like fixed-point quantization or offline word embedding compression. We also analyze the inference time and storage space for our method through FLOP calculations, showing that we can compress DNN models by a configurable ratio and recover the accuracy loss without introducing additional latency compared to fixed-point quantization. Finally, we introduce a novel learning rate schedule, the Cyclically Annealed Learning Rate (CALR), which we empirically demonstrate to outperform other popular adaptive learning rate algorithms on a sentence classification benchmark.

    Language: Python
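
The core step described in this abstract, factorizing the word-embedding matrix into two thin matrices, can be sketched in a few lines of NumPy. The example below is a simplified illustration, not the paper's training pipeline (which re-trains the compressed model on the downstream task); it only shows the parameter reduction from V×d to r×(V+d).

```python
import numpy as np

def compress_embeddings(E, r):
    """Factor a vocab x dim embedding matrix E into A (vocab x r) @ B (r x dim)
    via truncated SVD; in the paper's setting the compressed model is then
    re-trained on the downstream task to recover accuracy."""
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    return U[:, :r] * s[:r], Vt[:r, :]

vocab, dim, r = 20_000, 300, 30
E = np.random.default_rng(0).standard_normal((vocab, dim))
A, B = compress_embeddings(E, r)
orig, comp = E.size, A.size + B.size
print(f"embedding parameters: {orig:,} -> {comp:,} ({1 - comp / orig:.1%} smaller)")
```
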
  • IST-DASLab/EFCP

    The repository contains code to reproduce the experiments from our paper "Error Feedback Can Accurately Compress Preconditioners".

    Language: Python
  • mzalaya/collectivemc

    Implementation of Collective Matrix Completion by Mokhtar Z. Alaya and Olga Klopp https://arxiv.org/abs/1807.09010

    Language: Jupyter Notebook
  • FMatti/ACA-SPSD

    Adaptive cross approximation (ACA) algorithms for symmetric positive semi-definite (SPSD) matrices.

    Language: Jupyter Notebook
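
For SPSD matrices, a natural form of adaptive cross approximation is pivoted Cholesky: each step touches only one column of the matrix plus the running residual diagonal. The NumPy sketch below is a generic version of that idea, not necessarily the specific ACA variants implemented in this repository's notebooks.

```python
import numpy as np

def pivoted_cholesky(A, max_rank, tol=1e-10):
    """Cross/skeleton approximation of an SPSD matrix A via pivoted Cholesky:
    returns L (n x k) with A ~= L @ L.T, touching only one column of A per step."""
    n = A.shape[0]
    d = np.diag(A).astype(float).copy()   # diagonal of the current residual
    L = np.zeros((n, max_rank))
    for k in range(max_rank):
        p = int(np.argmax(d))             # pivot: largest remaining diagonal entry
        if d[p] <= tol:                   # residual is numerically zero -> stop early
            return L[:, :k]
        L[:, k] = (A[:, p] - L[:, :k] @ L[p, :k]) / np.sqrt(d[p])
        d -= L[:, k] ** 2
    return L

# Toy check on an exactly rank-5 SPSD matrix: 5 pivots reproduce it.
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 5))
A = B @ B.T
L = pivoted_cholesky(A, max_rank=5)
print(np.linalg.norm(A - L @ L.T))  # close to zero
```
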
  • gdikov/stochastic-segmentation-networks

    Tutorial reimplementation of Monteiro et al. (2020) on a toy problem.

    Language: Jupyter Notebook
  • kaylode/rec-sys

    Introducing traditional algorithms in recommender systems.

    Language: Jupyter Notebook