RL

Repository for the Reinforcement Learning class at Tufts University.

The problems in this repository are based on Sutton and Barto's Reinforcement Learning: An Introduction, 2nd Edition.

Write-ups and code for all projects are included in this repository.

Projects included:

  1. HW1: This homework explores the performance of an agent in a K-Armed Bandit environment under different parameter settings (a minimal bandit sketch follows this list).

  2. HW2: This homework explores a Monte Carlo on-policy control method for the Racetrack problem from Chapter 5 (Monte Carlo Methods) of the book, Exercise 5.8 (an on-policy MC control sketch follows this list).

  3. HW3: This homework explores two variants of the Dyna-Q+ algorithm that use different exploration techniques (an exploration-bonus sketch follows this list).

  4. HW4: This homework explores different learning algorithms in an existing Ms. Pac-Man agent, as well as the difference in performance between linear and polynomial function approximation of the agent's state (a feature-construction sketch follows this list).

  5. Final Project: Stable Locomotion in Unstructured Terrain using Curriculum Learning for Online Parameter Adaptation

    Abstract: We present a learning-based approach to optimize the gait of a hexapod robot for forward progression and stability. Using a central pattern generator (CPG) model for parameterized locomotion, we propose the use of reinforcement learning to learn and adapt parameters online to maximize the distance traversed by the robot in a stable fashion. We present a curriculum of terrains of increasing complexity as a way of speeding up the robot's learning and obtaining higher reward than non-learning and learning-from-scratch approaches. Our experimental results show that the hexapod can learn to walk over the planks stably by keeping the step height of its gait as low as possible. Setting up a curriculum for the agent to successfully learn to walk over taller obstacles proved to be a challenging and time-consuming task, and requires additional work.
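
For context, here is a minimal epsilon-greedy testbed for the K-Armed Bandit setting HW1 experiments with (Sutton & Barto, Chapter 2). The arm count, epsilon values, and step counts are illustrative defaults, not the settings used in the HW1 write-up:

```python
import numpy as np

def run_bandit(k=10, steps=1000, epsilon=0.1, seed=0):
    """Run one epsilon-greedy agent on a stationary k-armed bandit."""
    rng = np.random.default_rng(seed)
    true_values = rng.normal(0.0, 1.0, k)   # hidden mean reward of each arm
    q_estimates = np.zeros(k)               # incremental sample-average estimates
    counts = np.zeros(k)
    rewards = np.zeros(steps)

    for t in range(steps):
        if rng.random() < epsilon:          # explore: random arm
            action = int(rng.integers(k))
        else:                               # exploit: greedy arm, ties broken randomly
            action = int(rng.choice(np.flatnonzero(q_estimates == q_estimates.max())))
        reward = rng.normal(true_values[action], 1.0)
        counts[action] += 1
        q_estimates[action] += (reward - q_estimates[action]) / counts[action]
        rewards[t] = reward
    return rewards

if __name__ == "__main__":
    for eps in (0.0, 0.01, 0.1):            # compare exploration settings
        avg = np.mean([run_bandit(epsilon=eps, seed=s) for s in range(200)], axis=0)
        print(f"epsilon={eps}: mean reward over last 100 steps = {avg[-100:].mean():.3f}")
```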
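For HW2, this is a sketch of one common form of Monte Carlo on-policy control (first-visit, with an epsilon-soft policy, Sutton & Barto Section 5.4). The tiny corridor environment is only a stand-in so the sketch runs on its own; the actual racetrack dynamics (position/velocity state, acceleration actions) live in the HW2 code:

```python
import numpy as np
from collections import defaultdict

def mc_control(generate_episode, n_actions, episodes=2000, gamma=1.0, epsilon=0.1, seed=0):
    """On-policy first-visit Monte Carlo control with an epsilon-soft policy."""
    rng = np.random.default_rng(seed)
    Q = defaultdict(lambda: np.zeros(n_actions))
    counts = defaultdict(lambda: np.zeros(n_actions))

    def policy(state):
        if rng.random() < epsilon:
            return int(rng.integers(n_actions))
        return int(np.argmax(Q[state]))

    for _ in range(episodes):
        episode = generate_episode(policy)          # list of (state, action, reward)
        G = 0.0
        for t in reversed(range(len(episode))):     # work backwards through the episode
            state, action, reward = episode[t]
            G = gamma * G + reward
            if (state, action) not in [(s, a) for s, a, _ in episode[:t]]:  # first visit
                counts[state][action] += 1
                Q[state][action] += (G - Q[state][action]) / counts[state][action]
    return Q, policy

def toy_episode(policy, length=8, max_steps=200):
    """Stand-in episode generator (short corridor, action 1 moves right);
    the real racetrack environment replaces this."""
    state, episode, steps = 0, [], 0
    while state < length and steps < max_steps:
        action = policy(state)
        episode.append((state, action, -1.0))       # -1 per step until the goal
        state += action                             # 0 = stay, 1 = move right
        steps += 1
    return episode

if __name__ == "__main__":
    Q, policy = mc_control(toy_episode, n_actions=2)
    print({s: int(np.argmax(q)) for s, q in sorted(Q.items())})   # learns "move right"
```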
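For HW3, this sketch shows two common places to apply the Dyna-Q+ exploration bonus kappa*sqrt(tau): added to the simulated reward during planning backups, or added only at action-selection time (in the spirit of Exercise 8.4). Whether these are exactly the two variants compared in HW3 is not stated here, so treat the functions below as illustrative:

```python
import numpy as np

def planning_backup_with_bonus(Q, model, last_visit, t, alpha=0.1, gamma=0.95,
                               kappa=1e-3, rng=None):
    """One Dyna-Q+ planning backup: kappa*sqrt(tau) is added to the simulated reward,
    where tau is the time since (s, a) was last tried in the real environment."""
    rng = rng or np.random.default_rng()
    s, a = list(model.keys())[rng.integers(len(model))]   # random previously seen pair
    r, s_next = model[(s, a)]
    tau = t - last_visit[(s, a)]
    target = (r + kappa * np.sqrt(tau)) + gamma * Q[s_next].max()
    Q[s][a] += alpha * (target - Q[s][a])

def greedy_action_with_bonus(Q, last_visit, s, t, kappa=1e-3):
    """Alternative variant: leave planning backups unbonused and add kappa*sqrt(tau)
    at action selection instead, so long-untried actions are periodically retried."""
    taus = np.array([t - last_visit[(s, a)] for a in range(len(Q[s]))])
    return int(np.argmax(Q[s] + kappa * np.sqrt(taus)))

if __name__ == "__main__":
    Q = {s: np.zeros(2) for s in range(4)}                            # toy tabular values
    model = {(0, 1): (0.0, 1), (1, 1): (0.0, 2), (2, 1): (1.0, 3), (3, 0): (0.0, 3)}
    last_visit = {(s, a): 0 for s in range(4) for a in range(2)}
    for t in range(1, 200):
        planning_backup_with_bonus(Q, model, last_visit, t)
    print(Q[0], greedy_action_with_bonus(Q, last_visit, s=0, t=200))
```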
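For HW4, this sketch shows the difference between linear and polynomial function approximation of a state vector: the linear version uses the raw state variables as features, while the polynomial version adds products of state variables so the value estimate can capture interactions. The example state variables (distances to ghosts/pellets) are hypothetical placeholders, not the features the HW4 agent actually uses:

```python
import numpy as np
from itertools import combinations_with_replacement

def linear_features(state):
    """Linear approximation: raw state variables plus a bias term."""
    return np.concatenate(([1.0], state))

def polynomial_features(state, degree=2):
    """Polynomial approximation: all products of state variables up to `degree`."""
    feats = [1.0]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(len(state)), d):
            feats.append(np.prod(state[list(idx)]))
    return np.array(feats)

def q_hat(state, weights, feature_fn):
    """Approximate value as a dot product of weights and features."""
    return float(np.dot(weights, feature_fn(state)))

if __name__ == "__main__":
    state = np.array([0.5, -1.0, 2.0])   # hypothetical distances to ghosts/pellets
    print(len(linear_features(state)), len(polynomial_features(state)))   # 4 vs 10
```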