Pinned Repositories
edward
A library for probabilistic modeling, inference, and criticism. Deep generative models, variational inference. Runs on TensorFlow.
m2d2
M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer
models
Models built with TensorFlow
nlptools
A toolkit that wraps various natural language processing implementations behind a common interface.
particle-filter
Multi-agent-based particle filter for moving object tracking
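A minimal sketch of the bootstrap particle filter underlying a tracker like this (not this repository's actual implementation): a 1-D random-walk state, Gaussian observation noise, and multinomial resampling. The model parameters and helper name are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500, process_std=1.0, obs_std=1.0):
    # Bootstrap particle filter for a 1-D random-walk state (toy example).
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # Propagate each particle through the random-walk transition model.
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Weight particles by the Gaussian observation likelihood.
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Posterior mean estimate of the object's position.
        estimates.append(np.sum(weights * particles))
        # Multinomial resampling to avoid weight degeneracy.
        particles = rng.choice(particles, size=n_particles, p=weights)
    return np.array(estimates)

# Track a slowly drifting object from noisy position measurements.
true_path = np.cumsum(rng.normal(0.0, 0.5, 50))
obs = true_path + rng.normal(0.0, 1.0, 50)
est = particle_filter(obs)
```

A multi-agent variant, as the description suggests, would run one such filter (or particle cloud) per tracked object.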
pSGLD
Preconditioned Stochastic Gradient Langevin Dynamics (pSGLD)
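The pSGLD update is SGLD with an RMSProp-style diagonal preconditioner. A toy sketch under stated assumptions: we sample a standard normal using its exact score function (a real implementation uses minibatch gradients), drop the small Γ correction term as the paper's practical variant does, and initialize the second-moment estimate at one to avoid a huge first preconditioned step.

```python
import numpy as np

rng = np.random.default_rng(1)

def psgld_sample(grad_log_post, theta0, n_steps=5000, lr=1e-2, alpha=0.99, eps=1e-5):
    # pSGLD: Langevin dynamics preconditioned by an RMSProp-style
    # running average of squared gradients (toy, full-gradient version).
    theta = np.asarray(theta0, dtype=float)
    v = np.ones_like(theta)  # init at 1 (assumption) so the first step is well scaled
    samples = []
    for _ in range(n_steps):
        g = grad_log_post(theta)
        v = alpha * v + (1 - alpha) * g * g
        G = 1.0 / (np.sqrt(v) + eps)              # diagonal preconditioner
        noise = rng.normal(size=theta.shape) * np.sqrt(lr * G)
        theta = theta + 0.5 * lr * G * g + noise  # Gamma term omitted
        samples.append(theta.copy())
    return np.array(samples)

# Target: standard normal, so grad log p(theta) = -theta.
samples = psgld_sample(lambda th: -th, theta0=np.array([3.0]))
```

After burn-in the chain should fluctuate around the target mean 0 with standard deviation near 1.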
SG_MCMC
Implementation of Stochastic Gradient MCMC algorithms
shogun
The Shogun Machine Learning Toolbox (Source Code)
variational-dropout
Replication of the paper "Variational Dropout and the Local Reparameterization Trick" using Lasagne.
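The trick that paper replicates can be sketched in a few lines: instead of sampling a noisy weight matrix per example, the local reparameterization trick samples the pre-activations directly, whose mean and variance have closed forms under Gaussian dropout. A NumPy sketch (the function name and shapes are illustrative, not the repo's API):

```python
import numpy as np

rng = np.random.default_rng(2)

def lrt_dense(x, W, alpha):
    # Dense layer with Gaussian dropout via the local reparameterization trick:
    # pre-activations have mean x @ W and variance alpha * (x**2) @ (W**2),
    # so we sample them directly instead of sampling noisy weights.
    mean = x @ W
    var = alpha * (x ** 2) @ (W ** 2)
    return mean + np.sqrt(var) * rng.normal(size=mean.shape)

x = rng.normal(size=(32, 10))      # batch of 32 inputs
W = rng.normal(size=(10, 5))
out = lrt_dense(x, W, alpha=0.25)  # alpha = p / (1 - p) for dropout rate p = 0.2
```

Sampling pre-activations rather than weights gives lower-variance gradient estimates at the same cost, which is the paper's central point.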
vd-ard-bdl16
The code for the experiments section of the Dropout-based Automatic Relevance Determination paper at the Bayesian Deep Learning NIPS 2016 Workshop
beyzaa's Repositories
beyzaa/models
Models built with TensorFlow
beyzaa/video-summarization
Sports video summarization tool
beyzaa/edward
A library for probabilistic modeling, inference, and criticism. Deep generative models, variational inference. Runs on TensorFlow.
beyzaa/m2d2
M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer
beyzaa/nlptools
A toolkit that wraps various natural language processing implementations behind a common interface.
beyzaa/particle-filter
Multi-agent-based particle filter for moving object tracking
beyzaa/pSGLD
Preconditioned Stochastic Gradient Langevin Dynamics (pSGLD)
beyzaa/SG_MCMC
Implementation of Stochastic Gradient MCMC algorithms
beyzaa/shogun
The Shogun Machine Learning Toolbox (Source Code)
beyzaa/variational-dropout
Replication of the paper "Variational Dropout and the Local Reparameterization Trick" using Lasagne.
beyzaa/vd-ard-bdl16
The code for the experiments section of the Dropout-based Automatic Relevance Determination paper at the Bayesian Deep Learning NIPS 2016 Workshop