
A primer on Hidden Markov Models using Stan. A case study submitted as a candidate for a contributed talk at StanCon 2018. Under review.


A Tutorial on Hidden Markov Models using Stan

This case study documents the implementation in Stan (Carpenter et al. 2016) of the Hidden Markov Model (HMM) for unsupervised learning (Baum and Petrie 1966; Baum and Eagon 1967; Baum and Sell 1968; Baum et al. 1970; Baum 1972). Additionally, we present the adaptations needed for the Input-Output Hidden Markov Model (IOHMM). The IOHMM is an architecture proposed by Bengio and Frasconi (1995) to map input sequences, sometimes called the control signal, to output sequences. Compared to the HMM, it aims to be especially effective at learning long-term dependencies, that is, when relevant events in the input-output sequences are separated by long time spans. In all cases, we provide a fully Bayesian estimation of the model parameters and inference on the hidden quantities, namely the filtered state belief, the smoothed state belief, and the jointly most probable state path.
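To make the filtered state belief concrete, the sketch below implements the standard forward recursion for a discrete-emission HMM in plain Python. The two-state transition matrix, emission matrix, and observation sequence are invented for illustration only; the case study itself carries out this computation in Stan.

```python
def forward_filter(pi, A, B, obs):
    """Filtered state beliefs p(z_t | x_{1:t}) for a discrete-emission HMM.

    pi  : initial state distribution, length K
    A   : K x K transition matrix, A[i][j] = p(z_t = j | z_{t-1} = i)
    B   : K x V emission matrix,  B[j][v] = p(x_t = v | z_t = j)
    obs : observed symbols, integers in 0..V-1
    """
    K = len(pi)
    # Initialise with the prior weighted by the first emission likelihood.
    alpha = [pi[j] * B[j][obs[0]] for j in range(K)]
    norm = sum(alpha)
    beliefs = [[a / norm for a in alpha]]
    for x in obs[1:]:
        # Propagate the previous belief through the transition matrix,
        # then weight by the likelihood of the current observation.
        alpha = [sum(beliefs[-1][i] * A[i][j] for i in range(K)) * B[j][x]
                 for j in range(K)]
        norm = sum(alpha)
        beliefs.append([a / norm for a in alpha])
    return beliefs

# Illustrative two-state example (all numbers invented):
pi = [0.6, 0.4]
A = [[0.7, 0.3],
     [0.4, 0.6]]
B = [[0.9, 0.1],
     [0.2, 0.8]]
beliefs = forward_filter(pi, A, B, [0, 0, 1])
```

Each element of `beliefs` is a normalized distribution over the hidden states given the observations up to that step; the smoothed belief additionally conditions on future observations via a backward pass, which the case study also derives.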

A Tutorial on Hidden Markov Models using Stan is distributed under the Creative Commons Attribution 4.0 International Public License. Accompanying code is distributed under the GNU General Public License v3.0. See the README file for details. All files are available in the stancon18 GitHub repository.