multi-armed-bandits

Code demonstrating multi-armed bandit algorithms and reinforcement learning.

Primary language: Jupyter Notebook. License: MIT.


Michael Bloem

Mosaic Data Science

mbloem@mosaicdatascience.com
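The notebooks themselves are not reproduced here. As a minimal sketch of the kind of algorithm this repo demonstrates, an epsilon-greedy agent on a Gaussian multi-armed bandit might look like the following (the function name, parameters, and reward model are illustrative assumptions, not taken from the repo's code):

```python
import random

def run_epsilon_greedy(true_means, steps=10000, epsilon=0.1, seed=0):
    """Epsilon-greedy on a bandit whose arms pay Gaussian rewards.

    With probability epsilon, pull a random arm (explore);
    otherwise pull the arm with the highest estimated mean (exploit).
    Illustrative sketch only -- not the repo's implementation.
    """
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k          # pulls per arm
    estimates = [0.0] * k     # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(k)                          # explore
        else:
            arm = max(range(k), key=lambda a: estimates[a]) # exploit
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        # Incremental mean update: avoids storing reward history
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

estimates, counts = run_epsilon_greedy([0.1, 0.5, 0.9])
# After enough steps, the best arm (mean 0.9) typically gets most of the pulls.
```

With a small epsilon, roughly `epsilon * steps` pulls are spent exploring uniformly, and the rest concentrate on the arm with the highest estimated mean, which is the core exploration-vs-exploitation trade-off that bandit demos usually illustrate.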