ARM-gradient

A low-variance, unbiased, theoretically grounded, and computationally efficient gradient estimator for binary latent variable models, based on variable augmentation and antithetic sampling (ICLR 2019).


ARM: Augment-REINFORCE-Merge Gradient for Stochastic Binary Networks

Code to reproduce the simulation results reported in ARM: Augment-REINFORCE-Merge Gradient for Stochastic Binary Networks.
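For reference, the following is a minimal NumPy sketch of the single-binary-variable ARM estimator described above (variable augmentation with a shared uniform variable and antithetic evaluation). The function names, the toy objective f, and the sample sizes are illustrative choices for this sketch, not code taken from this repository.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def arm_gradient(f, phi, n_samples=1000, rng=None):
    """Monte Carlo ARM estimate of d/dphi E_{z ~ Bernoulli(sigmoid(phi))}[f(z)].

    For each uniform draw u, two antithetic binary samples are formed and the
    gradient is estimated as (f(z_true) - f(z_pseudo)) * (u - 1/2).
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=n_samples)
    z_true = (u > sigmoid(-phi)).astype(float)    # "true" sample
    z_pseudo = (u < sigmoid(-phi)).astype(float)  # antithetic "pseudo" sample
    return np.mean((f(z_true) - f(z_pseudo)) * (u - 0.5))

# Toy sanity check: for a univariate Bernoulli, the exact gradient of
# E[f(z)] with respect to phi is sigmoid'(phi) * (f(1) - f(0)).
f = lambda z: (z - 0.49) ** 2        # illustrative toy objective
phi = 0.3
est = arm_gradient(f, phi, n_samples=100_000)
exact = sigmoid(phi) * (1 - sigmoid(phi)) * (f(1.0) - f(0.0))
print(f"ARM estimate: {est:.4f}, exact gradient: {exact:.4f}")
```

Because both antithetic samples share the same uniform draw u, the estimator stays unbiased while the difference f(z_true) - f(z_pseudo) cancels much of the REINFORCE variance.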

Data sets

The MNIST data is self-contained, and the Omniglot data is included in the repository.

Citations

Below is the paper to cite if you find the algorithms in this repository useful in your own research:

@inproceedings{yin2018arm,
  title={{ARM}: Augment-{REINFORCE}-Merge Gradient for Stochastic Binary Networks},
  author={Mingzhang Yin and Mingyuan Zhou},
  booktitle={International Conference on Learning Representations},
  year={2019}
}

License Info

This code is offered under the MIT License.