AdvancedHMC.jl provides a robust, modular, and efficient implementation of advanced HMC algorithms. An illustrative example of its usage is given below. AdvancedHMC.jl is part of Turing.jl, a probabilistic programming library in Julia. If you are interested in using AdvancedHMC.jl through a probabilistic programming language, please check it out!
## Interfaces
- Python interface for AdvancedHMC
## NEWS
- We presented a paper for AdvancedHMC.jl at AABI 2019 in Vancouver, Canada. (abs, pdf, OpenReview)
- We presented a poster for AdvancedHMC.jl at StanCon 2019 in Cambridge, UK. (pdf)
## API CHANGES
- [v0.2.22] Three functions are renamed.
  - `Preconditioner(metric::AbstractMetric)` -> `MassMatrixAdaptor(metric)`
  - `NesterovDualAveraging(δ, integrator::AbstractIntegrator)` -> `StepSizeAdaptor(δ, integrator)`
  - `find_good_eps` -> `find_good_stepsize`
- [v0.2.15] `n_adapts` is no longer needed to construct `StanHMCAdaptor`; the old constructor is deprecated.
- [v0.2.8] Two Hamiltonian trajectory sampling methods are renamed to avoid a name clash with Distributions.jl.
  - `Multinomial` -> `MultinomialTS`
  - `Slice` -> `SliceTS`
- [v0.2.0] The gradient function passed to `Hamiltonian` is now expected to return a value-gradient tuple.
```julia
using AdvancedHMC, Distributions, ForwardDiff

# Choose parameter dimensionality and initial parameter value
D = 10; initial_θ = rand(D)

# Define the target distribution
ℓπ(θ) = logpdf(MvNormal(zeros(D), ones(D)), θ)

# Set the number of samples to draw and warmup iterations
n_samples, n_adapts = 2_000, 1_000

# Define a Hamiltonian system
metric = DiagEuclideanMetric(D)
hamiltonian = Hamiltonian(metric, ℓπ, ForwardDiff)

# Define a leapfrog solver, with the initial step size chosen heuristically
initial_ϵ = find_good_stepsize(hamiltonian, initial_θ)
integrator = Leapfrog(initial_ϵ)

# Define an HMC sampler with the following components:
# - multinomial sampling scheme,
# - generalised No-U-Turn criterion, and
# - windowed adaptation for step size and diagonal mass matrix
proposal = NUTS{MultinomialTS, GeneralisedNoUTurn}(integrator)
adaptor = StanHMCAdaptor(MassMatrixAdaptor(metric), StepSizeAdaptor(0.8, integrator))

# Run the sampler to draw samples from the specified Gaussian, where
# - `samples` will store the samples
# - `stats` will store diagnostic statistics for each sample
samples, stats = sample(hamiltonian, proposal, initial_θ, n_samples, adaptor, n_adapts; progress=true)
```
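Once sampling finishes, `samples` holds the D-dimensional draws and `stats` the per-sample diagnostics. As a quick sanity check (a sketch, not part of the original example), the posterior mean of the draws should be close to the zero mean of the target:

```julia
# Hypothetical post-processing of the draws from the example above.
using Statistics

posterior_mean = mean(samples)  # element-wise mean of the vector of draws
println("largest deviation from the true mean: ", maximum(abs.(posterior_mean)))
```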
An important design goal of AdvancedHMC.jl is modularity: we would like to support algorithmic research on HMC. This modularity means that different HMC variants can be easily constructed by composing various components, such as the preconditioning metric (i.e. the mass matrix), leapfrog integrators, trajectories (static or dynamic), and adaptation schemes. The minimal example above can be modified to suit particular inference problems by picking components from the lists below.
- Unit metric: `UnitEuclideanMetric(dim)`
- Diagonal metric: `DiagEuclideanMetric(dim)`
- Dense metric: `DenseEuclideanMetric(dim)`

where `dim` is the dimensionality of the sampling space.
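For instance, switching the example above from a diagonal to a dense mass matrix only changes the metric line; everything else stays the same (a sketch reusing `D` and `ℓπ` from the example):

```julia
# Dense (full) mass matrix instead of the diagonal one used above
metric = DenseEuclideanMetric(D)
hamiltonian = Hamiltonian(metric, ℓπ, ForwardDiff)
```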
- Ordinary leapfrog integrator: `Leapfrog(ϵ)`
- Jittered leapfrog integrator with jitter rate `n`: `JitteredLeapfrog(ϵ, n)`
- Tempered leapfrog integrator with tempering rate `a`: `TemperedLeapfrog(ϵ, a)`

where `ϵ` is the step size of leapfrog integration.
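Swapping the integrator is equally local; for example (a sketch, with the jitter and tempering rates chosen arbitrarily for illustration):

```julia
# Jittered leapfrog (illustrative jitter rate of 0.1)
integrator = JitteredLeapfrog(initial_ϵ, 0.1)

# Tempered leapfrog (illustrative tempering rate of 1.05)
integrator = TemperedLeapfrog(initial_ϵ, 1.05)
```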
- Static HMC with a fixed number of steps (`n_steps`) (Neal, R. M. (2011)): `StaticTrajectory(integrator, n_steps)`
- HMC with a fixed total trajectory length (`trajectory_length`) (Neal, R. M. (2011)): `HMCDA(integrator, trajectory_length)`
- Original NUTS with slice sampling (Hoffman, M. D., & Gelman, A. (2014)): `NUTS{SliceTS,ClassicNoUTurn}(integrator)`
- Generalised NUTS with slice sampling (Betancourt, M. (2017)): `NUTS{SliceTS,GeneralisedNoUTurn}(integrator)`
- Original NUTS with multinomial sampling (Betancourt, M. (2017)): `NUTS{MultinomialTS,ClassicNoUTurn}(integrator)`
- Generalised NUTS with multinomial sampling (Betancourt, M. (2017)): `NUTS{MultinomialTS,GeneralisedNoUTurn}(integrator)`
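As an example, replacing the NUTS proposal in the minimal example with a static trajectory requires only the proposal line (a sketch; `n_steps = 20` is an arbitrary illustrative choice):

```julia
# Static HMC with a fixed number of leapfrog steps (Neal, 2011)
n_steps = 20  # illustrative; tune for the problem at hand
proposal = StaticTrajectory(integrator, n_steps)
samples, stats = sample(hamiltonian, proposal, initial_θ, n_samples)  # no adaptation by default
```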
- Adapt the mass matrix `metric` of the Hamiltonian dynamics: `mma = MassMatrixAdaptor(metric)`
  - This is lowered to `UnitMassMatrix`, `WelfordVar` or `WelfordCov` based on the type of the mass matrix `metric`.
- Adapt the step size of the leapfrog integrator `integrator`: `ssa = StepSizeAdaptor(δ, integrator)`
  - It uses Nesterov's dual averaging with `δ` as the target acceptance rate.
- Combine the two above naively: `NaiveHMCAdaptor(mma, ssa)`
- Combine the first two using Stan's windowed adaptation: `StanHMCAdaptor(mma, ssa)`
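Put together, the naive combination of the two elementary adaptors looks as follows (a sketch; the windowed `StanHMCAdaptor` used in the minimal example is usually the more robust choice):

```julia
mma = MassMatrixAdaptor(metric)         # adapts the mass matrix from the draws
ssa = StepSizeAdaptor(0.8, integrator)  # dual averaging targeting 80% acceptance
adaptor = NaiveHMCAdaptor(mma, ssa)     # run both adaptors, without Stan's windowing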
`AdvancedHMC` supports both AD-based (`Zygote`, `Tracker` and `ForwardDiff`) and user-specified gradients. To use user-specified gradients, replace `ForwardDiff` with `ℓπ_grad` in the `Hamiltonian` constructor, where the gradient function `ℓπ_grad` should return a tuple containing both the log-posterior and its gradient.
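For the standard-normal target of the minimal example, both the log-density and its gradient are available in closed form, so a hand-written value-gradient function might look like this (a sketch of the required convention, not library code):

```julia
# Value-gradient function: returns (log-density, gradient) as a tuple,
# as required by `Hamiltonian` since v0.2.0.
function ℓπ_grad(θ)
    logdensity = -(length(θ) * log(2π) + sum(abs2, θ)) / 2  # standard MvNormal log-pdf
    gradient = -θ                                           # ∇ log π(θ) = -θ
    return logdensity, gradient
end

hamiltonian = Hamiltonian(metric, ℓπ, ℓπ_grad)
```

Bypassing AD this way can be noticeably faster when an analytic gradient is cheap to evaluate.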
All of the combinations above are tested in this file, except for the tempered leapfrog integrator combined with adaptation, which we found empirically unstable.
```julia
function sample(
    rng::Union{AbstractRNG, AbstractVector{<:AbstractRNG}},
    h::Hamiltonian,
    τ::AbstractProposal,
    θ::AbstractVector{<:AbstractFloat},
    n_samples::Int,
    adaptor::AbstractAdaptor=NoAdaptation(),
    n_adapts::Int=min(div(n_samples, 10), 1_000);
    drop_warmup=false,
    verbose::Bool=true,
    progress::Bool=false,
)
```
Draw `n_samples` samples using the proposal `τ` under the Hamiltonian system `h`.

- The randomness is controlled by `rng`.
  - If `rng` is not provided, `GLOBAL_RNG` will be used.
- The initial point is given by `θ`.
- The adaptor is set by `adaptor`, for which the default is no adaptation.
  - It will perform `n_adapts` steps of adaptation, for which the default is `1_000` or 10% of `n_samples`, whichever is lower.
- `drop_warmup` specifies whether to drop the warmup samples.
- `verbose` controls the verbosity.
- `progress` controls whether to show the progress meter or not.
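A call exercising these arguments explicitly might look as follows (a sketch; the seed and keyword values are arbitrary):

```julia
using Random

rng = MersenneTwister(1234)  # explicit RNG for reproducibility
samples, stats = sample(
    rng, hamiltonian, proposal, initial_θ, n_samples, adaptor, n_adapts;
    drop_warmup=true,  # discard the adaptation-phase samples
    progress=false,
)
```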
If you use AdvancedHMC.jl for your own research, please consider citing the following publication:
Hong Ge, Kai Xu, and Zoubin Ghahramani: "Turing: a language for flexible probabilistic inference.", International Conference on Artificial Intelligence and Statistics, 2018. (abs, pdf, BibTeX)
- Neal, R. M. (2011). MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2(11), 2. (arXiv)
- Betancourt, M. (2017). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv preprint arXiv:1701.02434.
- Girolami, M., & Calderhead, B. (2011). Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2), 123-214. (arXiv)
- Betancourt, M. J., Byrne, S., & Girolami, M. (2014). Optimizing the integrator step size for Hamiltonian Monte Carlo. arXiv preprint arXiv:1411.6669.
- Betancourt, M. (2016). Identifying the optimal integration time in Hamiltonian Monte Carlo. arXiv preprint arXiv:1601.00225.
- Hoffman, M. D., & Gelman, A. (2014). The No-U-Turn Sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15(1), 1593-1623. (arXiv)