KarenUllrich
Research scientist (s/h) at FAIR NY + collab. w/ Vector Institute. <3 Deep Learning + Information Theory. Previously, Machine Learning PhD at UoAmsterdam.
Pinned Repositories
NeuralCompression
A collection of tools for neural compression enthusiasts.
binary-VAE
A minimal implementation of a VAE with BinConcrete (relaxed Bernoulli) latent distribution in TensorFlow.
models
Models and examples built with TensorFlow
Pytorch-Backprojection
This code accompanies "Differentiable probabilistic models of scientific imaging with the Fourier slice theorem", UAI 2019
pytorch-binary-converter
Turning float tensors into binary tensors according to the IEEE 754 standard.
Spearmint-TheanoEdition
Spearmint uses Gaussian processes to automatically optimize hyperparameters. This is a fork of Spearmint for the deep learning community; specifically, it adds support for Theano users.
Tutorial-SoftWeightSharingForNNCompression
A tutorial on 'Soft weight-sharing for Neural Network compression' published at ICLR2017
Tutorial_BayesianCompressionForDL
A tutorial on "Bayesian Compression for Deep Learning" published at NIPS (2017).
lossyless
Generic image compressor for machine learning. PyTorch code for our paper "Lossy compression for lossless prediction".
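The pinned pytorch-binary-converter repo turns float tensors into their IEEE 754 bit patterns. As a minimal pure-Python sketch of the same idea (the repo itself works on torch tensors; the helper name and stdlib-only approach here are my own assumptions):

```python
import struct

def float_to_bits(x: float) -> str:
    """Return the 32-bit IEEE 754 representation of x as a bit string.

    Hypothetical helper illustrating the conversion; the actual repo
    vectorizes this over torch tensors.
    """
    # Pack x as a big-endian single-precision float, then reinterpret
    # the raw bytes as an unsigned 32-bit integer.
    (as_int,) = struct.unpack(">I", struct.pack(">f", x))
    # Format as sign (1 bit) + exponent (8 bits) + mantissa (23 bits).
    return format(as_int, "032b")

print(float_to_bits(1.0))  # → '00111111100000000000000000000000'
```

The bit string decomposes as sign `0`, biased exponent `01111111` (127, i.e. exponent 0), and an all-zero mantissa, which is exactly how IEEE 754 encodes 1.0.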
KarenUllrich's Repositories
KarenUllrich/Tutorial_BayesianCompressionForDL
A tutorial on "Bayesian Compression for Deep Learning" published at NIPS (2017).
KarenUllrich/Tutorial-SoftWeightSharingForNNCompression
A tutorial on 'Soft weight-sharing for Neural Network compression' published at ICLR2017
KarenUllrich/pytorch-binary-converter
Turning float tensors into binary tensors according to the IEEE 754 standard.
KarenUllrich/Pytorch-Backprojection
This code accompanies "Differentiable probabilistic models of scientific imaging with the Fourier slice theorem", UAI 2019
KarenUllrich/binary-VAE
A minimal implementation of a VAE with BinConcrete (relaxed Bernoulli) latent distribution in TensorFlow.
KarenUllrich/Spearmint-TheanoEdition
Spearmint uses Gaussian processes to automatically optimize hyperparameters. This is a fork of Spearmint for the deep learning community; specifically, it adds support for Theano users.
KarenUllrich/models
Models and examples built with TensorFlow
KarenUllrich/bits-back
KarenUllrich/bitswap
Bit-Swap: Recursive Bits-Back Coding for Lossless Compression with Hierarchical Latent Variables
KarenUllrich/DiscreteCatchUp
Experiments on converting continuous representations to binary and back.
KarenUllrich/examples
KarenUllrich/HDToolsPython
A toolbox for analyzing high dimensionality issues in machine learning. Written in Python.
KarenUllrich/hidden-networks
KarenUllrich/karenullrich.github.io
KarenUllrich/multiset-compression
Official code accompanying the arXiv paper Compressing Multisets with Large Alphabets
KarenUllrich/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
KarenUllrich/sigma-gpt
σ-GPT: A New Approach to Autoregressive Models
KarenUllrich/Spearmint-for-Theano
Spearmint Bayesian optimization codebase
KarenUllrich/tensorflow
Computation using data flow graphs for scalable machine learning
KarenUllrich/Theano
Theano: an optimizing, array-oriented math compiler in Python with GPU code generation.
KarenUllrich/uva-iai.github.io
Homepage of UVA IAI
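The binary-VAE repo above uses a BinConcrete (relaxed Bernoulli) latent distribution. A minimal sketch of the underlying reparameterized sampling, in pure Python rather than the repo's TensorFlow (the function name and this stdlib-only form are my own choices):

```python
import math
import random

def bin_concrete_sample(logit: float, temperature: float,
                        rng: random.Random) -> float:
    """Draw one sample from a BinConcrete (relaxed Bernoulli) distribution.

    Hypothetical illustration of the binary Concrete reparameterization:
    add logistic noise to the logit, divide by the temperature, and
    squash through a sigmoid. The result lies in (0, 1) and concentrates
    on {0, 1} as the temperature approaches 0.
    """
    u = rng.random()
    # Logistic noise via the inverse-CDF trick: log(u) - log(1 - u).
    noise = math.log(u) - math.log(1.0 - u)
    # Tempered sigmoid of the noised logit.
    return 1.0 / (1.0 + math.exp(-(logit + noise) / temperature))
```

Because the sample is a smooth function of the logit, gradients can flow through it, which is what makes this relaxation usable inside a VAE's reparameterized objective.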