# MABandits_Recommender

This repository implements the Linear Upper Confidence Bound (LinUCB) and Thompson Sampling (TS) algorithms for multi-armed bandit problems.
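As a rough illustration of the kind of algorithm involved, here is a minimal LinUCB sketch for the disjoint linear contextual bandit setting. This is a hypothetical example for orientation only, not the repository's actual implementation; the class name, parameters, and reward model are assumptions.

```python
import numpy as np

class LinUCB:
    """Minimal LinUCB sketch (disjoint model). Illustrative only —
    not the code from this repository."""

    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha  # exploration strength
        # Per-arm ridge-regression statistics: design matrix A and response b.
        self.A = [np.eye(dim) for _ in range(n_arms)]
        self.b = [np.zeros(dim) for _ in range(n_arms)]

    def select(self, x):
        """Pick the arm with the largest upper confidence bound for context x."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b  # ridge estimate of the arm's weight vector
            ucb = theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)
            scores.append(ucb)
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        """Rank-one update of the chosen arm's statistics."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```

Thompson Sampling replaces the confidence-bound term with a posterior draw of each arm's parameters, but the select/update loop has the same shape.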
