MCMCBenchmarks


Introduction

MCMCBenchmarks provides a lightweight yet flexible framework for benchmarking MCMC samplers in terms of run time, memory usage, convergence metrics, and effective sample size. Currently, MCMCBenchmarks provides out-of-the-box support for benchmarking the No-U-Turn Sampler (NUTS) as implemented in CmdStan, DynamicHMC, and AdvancedHMC (via Turing). However, its methods can be extended to accommodate other samplers and test models.
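
As a rough illustration of the workflow, the sketch below follows the pattern of the Gaussian example in the documentation: construct one or more sampler objects, supply a data-generating function, and pass both to `benchmark` along with the benchmark design. The model and configuration objects (`CmdStanConfig`, `AHMCGaussian`, `AHMCconfig`, `DHMCGaussian`) and the data generator `simulateGaussian` are assumed to come from the package's example model files; exact names and call signatures may differ between versions, so consult the documentation before running.

```julia
using MCMCBenchmarks

ProjDir = @__DIR__  # working directory for CmdStan model files and output

# One sampler object per backend: CmdStan, AdvancedHMC (via Turing), and
# DynamicHMC. The model/configuration objects are illustrative names taken
# from the package's example Gaussian model files. CmdStan-based samplers
# may require additional setup (e.g., initializing the Stan model files).
samplers = (
    CmdStanNUTS(CmdStanConfig, ProjDir),
    AHMCNUTS(AHMCGaussian, AHMCconfig),
    DHMCNUTS(DHMCGaussian, 2000),
)

# Benchmark design: the number of data points Nd is varied as a factor, and
# the remaining keywords configure the samplers (illustrative values).
options = (Nsamples = 2000, Nadapt = 1000, delta = 0.8, Nd = [10, 100, 1000])
Nreps = 50  # number of repetitions per design cell

# Run the benchmark; the exact signature of `benchmark` may vary by version.
results = benchmark(samplers, simulateGaussian, Nreps; options...)
save(results, ProjDir)  # write the results to disk
```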

Documentation

  • Docs: documentation of the in-development version.

Overview of Features

  • Benchmarking Parameters: vary factors such as sample size, data-generating parameters, prior distributions, and target acceptance rate. Optional keyword arguments allow other benchmarking parameters to be varied.
  • Meta-data: saves relevant benchmarking information, including package versions, the Julia version, and system specifications.
  • Plotting: generate and save plots comparing samplers in terms of run time, memory usage, convergence diagnostics, and effective sample size (see the sketch after this list).
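
For example, given a `results` data frame returned by `benchmark` (as in the earlier sketch), summary and density plots can be produced along the following lines. The plotting function names follow the documentation, while the column symbols (`:time`, `:mu_ess`) and keyword arguments are assumptions that depend on the benchmarked model.

```julia
using MCMCBenchmarks

# A minimal plotting sketch, assuming `results` was produced by `benchmark`
# as in the earlier example. Column names such as :mu_ess depend on the
# parameters of the benchmarked model and are assumed here.
dir = "results/"  # directory in which to save the plots

# Mean run time as a function of the number of data points, split by sampler.
timePlot = plotsummary(results, :Nd, :time, (:sampler,); save = true, dir = dir)

# Distribution of effective sample size for a model parameter, split by
# sampler and number of data points.
essPlot = plotdensity(results, :mu_ess, (:sampler, :Nd); save = true, dir = dir)
```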

MCMCBenchmarkSuite

Although users can create custom benchmarks with MCMCBenchmarks, we also provide a companion benchmark suite, MCMCBenchmarkSuite, featuring models spanning a wide range of complexity. Click here to see an overview of key benchmarking results.