Reinforcement Learning: An Introduction

Python code for Sutton & Barto's book Reinforcement Learning: An Introduction (2nd Edition)

Contents

Chapter 1

  1. Tic-Tac-Toe (TD value update sketched below)
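
The tic-tac-toe agent learns a value for each board position with a temporal-difference update. A minimal sketch of that update rule, with illustrative names (not this repository's actual code):

```python
# Sketch of the TD value update behind the tic-tac-toe player.
# A full agent also needs exploratory moves; only the learning rule is shown.
values = {}   # hashable board state -> estimated probability of winning
ALPHA = 0.1   # step size

def td_update(state, next_state, default=0.5):
    # V(s) <- V(s) + alpha * (V(s') - V(s))
    v = values.get(state, default)
    values[state] = v + ALPHA * (values.get(next_state, default) - v)
```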

Chapter 2

  1. Figure 2.2: Average performance of epsilon-greedy action-value methods on the 10-armed testbed (epsilon-greedy sketched after this list)
  2. Figure 2.3: Optimistic initial action-value estimates
  3. Figure 2.4: Average performance of UCB action selection on the 10-armed testbed
  4. Figure 2.5: Average performance of the gradient bandit algorithm
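
All of the Chapter 2 figures build on the 10-armed testbed. A minimal, self-contained sketch of one epsilon-greedy run with sample-average estimates (parameter values and names are illustrative, not the repository's):

```python
import numpy as np

def epsilon_greedy_bandit(epsilon=0.1, arms=10, steps=1000, seed=0):
    rng = np.random.RandomState(seed)
    q_true = rng.randn(arms)            # true action values
    q_est = np.zeros(arms)              # sample-average estimates
    counts = np.zeros(arms)
    rewards = np.zeros(steps)
    for t in range(steps):
        if rng.rand() < epsilon:
            a = rng.randint(arms)       # explore
        else:
            a = int(np.argmax(q_est))   # exploit
        r = q_true[a] + rng.randn()     # noisy reward
        counts[a] += 1
        q_est[a] += (r - q_est[a]) / counts[a]   # incremental average
        rewards[t] = r
    return rewards
```

Figure 2.2 averages such reward curves over many independent bandit problems.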

Chapter 3

  1. Figure 3.5: Grid example with random policy (Bellman backup sketched after this list)
  2. Figure 3.8: Optimal solutions to the gridworld example
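
Both gridworld figures rest on Bellman backups. A sketch of iteratively evaluating the equiprobable random policy (the grid size and wall penalty follow the book, but the special A/B teleport states are omitted for brevity):

```python
import numpy as np

SIZE, GAMMA = 5, 0.9
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(i, j, di, dj):
    ni, nj = i + di, j + dj
    if 0 <= ni < SIZE and 0 <= nj < SIZE:
        return ni, nj, 0.0    # ordinary move
    return i, j, -1.0         # bump the wall: stay put, reward -1

V = np.zeros((SIZE, SIZE))
for _ in range(200):          # enough sweeps to converge on this tiny grid
    new_V = np.zeros_like(V)
    for i in range(SIZE):
        for j in range(SIZE):
            for di, dj in ACTIONS:            # random policy: each p = 1/4
                ni, nj, r = step(i, j, di, dj)
                new_V[i, j] += 0.25 * (r + GAMMA * V[ni, nj])
    V = new_V
```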

Chapter 4

  1. Figure 4.1: Convergence of iterative policy evaluation on a small gridworld
  2. Figure 4.2: Jack’s car rental problem
  3. Figure 4.3: The solution to the gambler’s problem (value iteration sketched below)
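
A minimal sketch of the value-iteration sweep that produces the gambler's-problem solution, assuming the book's p_h = 0.4 coin (the convergence threshold is illustrative):

```python
import numpy as np

GOAL, P_H = 100, 0.4
V = np.zeros(GOAL + 1)
V[GOAL] = 1.0    # reaching the goal is the only reward

while True:
    delta = 0.0
    for s in range(1, GOAL):
        stakes = range(1, min(s, GOAL - s) + 1)
        returns = [P_H * V[s + a] + (1 - P_H) * V[s - a] for a in stakes]
        best = max(returns)
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < 1e-9:
        break
```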

Chapter 5

  1. Figure 5.1: Approximate state-value functions for the blackjack policy
  2. Figure 5.4: Weighted importance sampling (both estimators sketched after this list)
  3. Figure 5.5: Ordinary importance sampling with surprisingly unstable estimates
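
Figures 5.4 and 5.5 contrast the two off-policy estimators. Given per-episode importance-sampling ratios and returns, they can be sketched as follows (hypothetical inputs; the repository computes them from blackjack episodes):

```python
import numpy as np

def ordinary_is(rhos, returns):
    # divide by the number of episodes: unbiased but high variance
    rhos, returns = np.asarray(rhos), np.asarray(returns)
    return np.sum(rhos * returns) / len(returns)

def weighted_is(rhos, returns):
    # divide by the sum of the ratios: biased but far more stable
    rhos, returns = np.asarray(rhos), np.asarray(returns)
    denom = np.sum(rhos)
    return np.sum(rhos * returns) / denom if denom > 0 else 0.0
```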

Chapter 6

  1. Figure 6.2: Random walk
  2. Figure 6.3: Batch updating
  3. Figure 6.4: Sarsa applied to windy grid world
  4. Figure 6.5: The cliff-walking task
  5. Figure 6.7: Interim and asymptotic performance of TD control methods
  6. Figure 6.8: Comparison of Q-learning and Double Q-learning (Q-learning update sketched below)
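
Several of these figures use tabular Q-learning. A sketch of the update loop, assuming a hypothetical env object with reset()/step() methods and n_states/n_actions attributes (not this repository's interface):

```python
import numpy as np

def q_learning(env, episodes=500, alpha=0.5, gamma=1.0, epsilon=0.1, seed=0):
    rng = np.random.RandomState(seed)
    Q = np.zeros((env.n_states, env.n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if rng.rand() < epsilon:
                a = rng.randint(env.n_actions)
            else:
                a = int(np.argmax(Q[s]))
            s2, r, done = env.step(a)
            # off-policy target: best next value, whatever action is taken
            Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
            s = s2
    return Q
```

Sarsa differs only in the target: it uses the value of the action actually selected next rather than the max.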

Chapter 7

  1. Figure 7.2: Performance of n-step TD methods on 19-state random walk (n-step return sketched below)
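
The quantity swept over in Figure 7.2 is the n-step return. A sketch of that target, assuming rewards[i] holds R_{i+1}, states[k] holds S_k, and T is the terminal time (illustrative conventions):

```python
def n_step_return(rewards, states, V, t, n, T, gamma=1.0):
    # G_{t:t+n} = R_{t+1} + ... + gamma^{n-1} R_{t+n} + gamma^n V(S_{t+n}),
    # truncated to the plain return once t + n reaches the end of the episode
    h = min(t + n, T)
    G = sum(gamma ** (i - t) * rewards[i] for i in range(t, h))
    if t + n < T:
        G += gamma ** n * V[states[t + n]]
    return G
```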

Chapter 8

  1. Figure 8.3: Average learning curves for Dyna-Q agents varying in their number of planning steps (planning loop sketched after this list)
  2. Figure 8.5: Average performance of Dyna agents on a blocking task
  3. Figure 8.6: Average performance of Dyna agents on a shortcut task
  4. Figure 8.7: Prioritized sweeping significantly shortens learning time on the Dyna maze task
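
The Dyna agents share one idea: interleave real updates with simulated ones drawn from a learned model. A minimal sketch of a single Dyna-Q step, assuming a deterministic model stored as a dict (names are illustrative):

```python
import random
import numpy as np

def dyna_q_step(Q, model, s, a, r, s2, alpha=0.1, gamma=0.95, n_planning=5):
    # direct RL: Q-learning update from the real transition
    Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
    # model learning: remember the observed outcome of (s, a)
    model[(s, a)] = (r, s2)
    # planning: replay n transitions sampled from the model
    for _ in range(n_planning):
        (ps, pa), (pr, ps2) = random.choice(list(model.items()))
        Q[ps, pa] += alpha * (pr + gamma * np.max(Q[ps2]) - Q[ps, pa])
```

Prioritized sweeping (Figure 8.7) replaces the uniform random.choice with a priority queue ordered by the size of each pending update.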

Chapter 9

  1. Figure 9.1: Gradient Monte Carlo algorithm on the 1000-state random walk task (state-aggregation sketch after this list)
  2. Figure 9.2: Semi-gradient n-step TD algorithm on the 1000-state random walk task
  3. Figure 9.8: Example of feature width’s effect on initial generalization and asymptotic accuracy
  4. Figure 9.10: Single tiling and multiple tilings on the 1000-state random walk task
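
Chapter 9's random-walk figures approximate the value function from features. A minimal sketch of gradient Monte Carlo with state aggregation, the simplest such scheme (group count and step size follow the book; the code is a simplified stand-in, not this repository's):

```python
import numpy as np

N_STATES, N_GROUPS = 1000, 10
w = np.zeros(N_GROUPS)   # one learned value per group of 100 states

def group(state):        # feature: the bin a state (1..1000) falls in
    return (state - 1) * N_GROUPS // N_STATES

def mc_update(episode_states, G, alpha=2e-5):
    # w <- w + alpha * (G_t - v_hat(S_t)) * gradient; here the gradient is
    # one-hot in the visited state's group. On this task (undiscounted,
    # reward only at termination) G_t equals the final return G at every step.
    for s in episode_states:
        w[group(s)] += alpha * (G - w[group(s)])
```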

Environment

  • Python 2.7
  • numpy
  • matplotlib