Multi-Arm-Bandit

Solutions to the multi-armed bandit problem using the Epsilon-Greedy, UCB, and Thompson Sampling methods.
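As a minimal sketch of the three strategies, the following compares Epsilon-Greedy, UCB1, and Thompson Sampling on a simulated Bernoulli bandit. The arm probabilities, round count, and epsilon value are illustrative assumptions, not taken from the notebook itself.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_PROBS = np.array([0.2, 0.5, 0.75])  # hypothetical Bernoulli arm payout rates
N_ARMS = len(TRUE_PROBS)
N_ROUNDS = 5000

def run(select):
    """Play N_ROUNDS pulls with the given arm-selection rule; return mean reward."""
    counts = np.zeros(N_ARMS)   # pulls per arm
    rewards = np.zeros(N_ARMS)  # accumulated reward per arm
    total = 0.0
    for t in range(1, N_ROUNDS + 1):
        a = select(t, counts, rewards)
        r = float(rng.random() < TRUE_PROBS[a])  # Bernoulli reward
        counts[a] += 1
        rewards[a] += r
        total += r
    return total / N_ROUNDS

def eps_greedy(t, counts, rewards, eps=0.1):
    # Explore with probability eps (or until every arm is tried once),
    # otherwise exploit the arm with the highest empirical mean.
    if counts.min() == 0 or rng.random() < eps:
        return int(rng.integers(N_ARMS))
    return int(np.argmax(rewards / counts))

def ucb1(t, counts, rewards):
    # Pick the arm maximizing empirical mean + confidence bonus.
    if counts.min() == 0:
        return int(np.argmin(counts))
    return int(np.argmax(rewards / counts + np.sqrt(2 * np.log(t) / counts)))

def thompson(t, counts, rewards):
    # Sample from each arm's Beta(1 + successes, 1 + failures) posterior
    # and play the arm with the largest sample.
    samples = rng.beta(1 + rewards, 1 + counts - rewards)
    return int(np.argmax(samples))

for name, sel in [("eps-greedy", eps_greedy), ("UCB1", ucb1), ("Thompson", thompson)]:
    print(f"{name}: mean reward = {run(sel):.3f}")
```

All three should converge toward the best arm's 0.75 payout rate, with Epsilon-Greedy paying a fixed exploration tax while UCB1 and Thompson Sampling reduce exploration as evidence accumulates.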
