Several RL algorithms, implemented from scratch, applied to the Easy21 card game.
This is based on the Easy21 assignment from David Silver's RL course.
Monte-Carlo Control

$V^{\ast}(s) = \max_a \ Q^{\ast}(s,a)$
$\text{With } \epsilon \text{-greedy exploration strategy: }\epsilon = N_0 / (N_0 + N(s_t)) \text{, where } N_0 = 100 \text{ is a constant} $
$\text{With a time-varying scalar step-size of } \alpha_{t} = 1/N(s_t, a_t) $
For 10,000,000 episodes:
To use: run `monte_carlo.py`
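The update rules above can be sketched as follows. This is a minimal illustration, not the repo's implementation: a tabular `Q` with dictionary visit counts, where `epsilon_greedy` uses $\epsilon = N_0/(N_0 + N(s))$ and the Monte-Carlo update uses $\alpha = 1/N(s,a)$. The environment step itself (dealing cards, bust rules) lives in `monte_carlo.py` and is not reproduced here.

```python
import random
from collections import defaultdict

N0 = 100  # exploration constant from the write-up
ACTIONS = ("hit", "stick")

Q = defaultdict(float)   # action-value estimates Q(s, a)
N_s = defaultdict(int)   # state visit counts N(s)
N_sa = defaultdict(int)  # state-action visit counts N(s, a)

def epsilon_greedy(state):
    """Pick an action with epsilon = N0 / (N0 + N(s))."""
    eps = N0 / (N0 + N_s[state])
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def mc_update(episode, total_return):
    """Monte-Carlo control update with step size alpha = 1 / N(s, a).

    `episode` is the list of (state, action) pairs visited;
    `total_return` is the (undiscounted) return of the episode.
    """
    for state, action in episode:
        N_s[state] += 1
        N_sa[(state, action)] += 1
        alpha = 1.0 / N_sa[(state, action)]
        Q[(state, action)] += alpha * (total_return - Q[(state, action)])
```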
Sarsa($\lambda$)

$\text{With parameter values } \lambda \in \lbrace 0, 0.1, 0.2, ..., 1 \rbrace $
$\text{With } \epsilon \text{-greedy exploration strategy: }\epsilon = N_0 / (N_0 + N(s_t)) \text{, where } N_0 = 100 \text{ is a constant} $
$\text{With a time-varying scalar step-size of } \alpha_{t} = 1/N(s_t, a_t) $
For 10,000 episodes:
To use: run `sarsa.py`
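One backward-view Sarsa($\lambda$) step with accumulating eligibility traces might look like the sketch below (an assumed illustration, not the code in `sarsa.py`; $\gamma = 1$ since Easy21 is undiscounted, and the traces `E` are reset at the start of each episode):

```python
from collections import defaultdict

ACTIONS = ("hit", "stick")

Q = defaultdict(float)   # action-value estimates Q(s, a)
E = defaultdict(float)   # eligibility traces, cleared each episode
N_sa = defaultdict(int)  # state-action visit counts N(s, a)

def sarsa_lambda_step(s, a, r, s_next, a_next, lam, terminal):
    """One backward-view Sarsa(lambda) update (gamma = 1 for Easy21)."""
    N_sa[(s, a)] += 1
    q_next = 0.0 if terminal else Q[(s_next, a_next)]
    delta = r + q_next - Q[(s, a)]          # TD error
    E[(s, a)] += 1.0                        # accumulating trace
    for key in list(E):
        alpha = 1.0 / N_sa[key] if N_sa[key] else 1.0
        Q[key] += alpha * delta * E[key]    # per-pair step size 1/N(s, a)
        E[key] *= lam                       # decay traces
```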
Sarsa($\lambda$) with Linear Function Approximation
$\text{Binary feature vector } \phi(s, a) \text{ with } 3 \times 6 \times 2 = 36 \text{ features} $
$\text{Dealer(s) = } \lbrace{[1, 4], [4, 7], [7, 10]}\rbrace $
$\text{Player(s) = } \lbrace[1, 6], [4, 9], [7, 12], [10, 15], [13, 18], [16, 21]\rbrace $
$a \in \lbrace \text{hit}, \text{stick} \rbrace $
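The overlapping cuboids above can be turned into a binary feature vector like this (a sketch under the stated intervals; the actual construction lives in `sarsa_linear.py`):

```python
import numpy as np

# Overlapping coarse-coded intervals from the write-up (inclusive bounds).
DEALER_RANGES = [(1, 4), (4, 7), (7, 10)]
PLAYER_RANGES = [(1, 6), (4, 9), (7, 12), (10, 15), (13, 18), (16, 21)]
ACTIONS = ("hit", "stick")

def phi(dealer, player, action):
    """Return the 3 * 6 * 2 = 36-dim binary feature vector phi(s, a)."""
    feats = np.zeros((3, 6, 2))
    for i, (dlo, dhi) in enumerate(DEALER_RANGES):
        for j, (plo, phi_hi) in enumerate(PLAYER_RANGES):
            if dlo <= dealer <= dhi and plo <= player <= phi_hi:
                feats[i, j, ACTIONS.index(action)] = 1.0
    return feats.ravel()
```

Because the intervals overlap, a state can activate several features at once, which is what gives the approximation its generalization.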
$\text{With parameter values }\lambda \in \lbrace{0, 0.1, 0.2, ..., 1}\rbrace $
$\text{With } \epsilon \text{-greedy exploration strategy: }\epsilon = 0.05 $
$\text{With a constant step-size of } \alpha_{t} = 0.01 $
For 10,000 episodes:
To use: run `sarsa_linear.py`
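With linear function approximation the weight update replaces the tabular one; a minimal sketch of one backward-view step with the constant $\alpha = 0.01$ is below. The `phi` here is a hashing stand-in so the block is self-contained, not the repo's cuboid feature coding.

```python
import numpy as np

ALPHA = 0.01  # constant step size from the write-up

w = np.zeros(36)  # one weight per binary feature

def phi(state, action):
    """Stand-in feature map: one deterministic bit per (state, action)."""
    v = np.zeros(36)
    v[hash((state, action)) % 36] = 1.0
    return v

def q_hat(state, action):
    """Linear approximation Q(s, a) = phi(s, a) . w"""
    return phi(state, action) @ w

def sarsa_linear_update(s, a, r, s_next, a_next, lam, traces):
    """One backward-view Sarsa(lambda) update on the weights (gamma = 1)."""
    global w
    delta = r + q_hat(s_next, a_next) - q_hat(s, a)  # TD error
    traces = lam * traces + phi(s, a)                # accumulating traces
    w = w + ALPHA * delta * traces
    return traces
```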
Dependencies

numpy, tqdm, matplotlib, pandas