Edward is a Python library for probabilistic modeling, inference, and criticism. It is a testbed for fast experimentation and research with probabilistic models, ranging from classical hierarchical models on small data sets to complex deep probabilistic models on large data sets. Edward fuses three fields: Bayesian statistics and machine learning, deep learning, and probabilistic programming.
It supports modeling languages including
- TensorFlow (with neural networks via Keras, TensorFlow Slim, or Pretty Tensor)
- Stan
- PyMC3
- Python, through NumPy/SciPy
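
As a brief illustration, here is a sketch of a model written in Edward's TensorFlow-based language: a Bayesian linear regression whose latent variables are the weights and intercept. The `edward.models` random variables and the `loc`/`scale` argument names are assumptions based on a recent Edward release and may differ in older versions.

```python
import edward as ed
import tensorflow as tf
from edward.models import Normal

N, D = 50, 5  # number of data points, number of features

# Bayesian linear regression: y | X, w, b ~ Normal(X w + b, 1)
X = tf.placeholder(tf.float32, [N, D])
w = Normal(loc=tf.zeros(D), scale=tf.ones(D))   # prior on weights
b = Normal(loc=tf.zeros(1), scale=tf.ones(1))   # prior on intercept
y = Normal(loc=ed.dot(X, w) + b, scale=tf.ones(N))
```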
It supports inference via
- Variational inference
  - Black box variational inference
  - Stochastic variational inference
  - Variational auto-encoders
  - Inclusive KL divergence: KL(p||q)
- Marginal posterior optimization (empirical Bayes, marginal maximum likelihood)
- Variational EM
- Maximum a posteriori estimation (penalized maximum likelihood, maximum likelihood)
- Laplace approximation
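
Continuing the regression sketch above, the snippet below runs two of these methods: variational inference with a fully factorized Gaussian approximation, and maximum a posteriori estimation. The class names `ed.KLqp` and `ed.MAP` are taken from a recent Edward release (earlier releases used different names), so treat this as an illustrative sketch rather than a definitive listing.

```python
import numpy as np

# Toy data (hypothetical) with a known linear relationship.
X_train = np.random.randn(N, D).astype(np.float32)
y_train = X_train.dot(np.ones(D)).astype(np.float32)

# Variational inference: posit a factorized Gaussian approximation
# over the latent variables and optimize it against the posterior.
qw = Normal(loc=tf.Variable(tf.zeros(D)),
            scale=tf.nn.softplus(tf.Variable(tf.zeros(D))))
qb = Normal(loc=tf.Variable(tf.zeros(1)),
            scale=tf.nn.softplus(tf.Variable(tf.zeros(1))))

inference = ed.KLqp({w: qw, b: qb}, data={X: X_train, y: y_train})
inference.run(n_iter=500)

# Maximum a posteriori estimation over the same latent variables:
# inference = ed.MAP([w, b], data={X: X_train, y: y_train})
# inference.run(n_iter=500)
```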
It supports criticism of the model and inference via
- Point-based evaluations
- Posterior predictive checks
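
As an example of both techniques, the sketch below evaluates the fitted regression above with a point-based metric and a posterior predictive check; `ed.copy`, `ed.evaluate`, and `ed.ppc` are the criticism utilities in a recent Edward release and are assumed here.

```python
# Posterior predictive: replace the priors with their fitted approximations.
y_post = ed.copy(y, {w: qw, b: qb})

# Point-based evaluation: mean squared error of posterior predictions.
print(ed.evaluate('mean_squared_error', data={X: X_train, y_post: y_train}))

# Posterior predictive check: compare the mean of replicated data sets
# against the mean of the observed data.
print(ed.ppc(lambda xs, zs: tf.reduce_mean(xs[y_post]),
             data={X: X_train, y_post: y_train}))
```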
Edward is built on top of TensorFlow, enabling features such as computational graphs, distributed training, CPU/GPU integration, automatic differentiation, and visualization with TensorBoard.
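
For instance, inference can write its loss and the computational graph to a log directory for visualization in TensorBoard; the `logdir` argument below is an assumption based on a recent Edward release.

```python
# Write summaries (losses, graph) that TensorBoard can display with
# `tensorboard --logdir=log/`.
inference = ed.KLqp({w: qw, b: qb}, data={X: X_train, y: y_train})
inference.run(n_iter=500, logdir='log/')
```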