RL-Algorithms


Comparison of classical RL algorithms

This project implements and compares three reinforcement learning algorithms: Q-Learning, SARSA, and Monte Carlo. The Taxi-v3 environment from the Gym library is used, and the agent for each algorithm explores and learns in this environment. To better understand how these algorithms behave, their parameters and the environment settings were varied and the results compared. The following GIFs show the movement of each agent over 5 episodes.

Q-Learning | SARSA | Monte Carlo
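
For orientation, the sketch below shows the kind of tabular Q-Learning loop used for Taxi-v3. It is a minimal sketch, assuming the classic Gym API (`reset()` returns the state, `step()` returns state, reward, done, info); hyperparameter values are illustrative and not necessarily those used in Q_Agent.py.

```python
# Minimal tabular Q-Learning sketch on Taxi-v3 (illustrative, not the code in Q_Agent.py).
import numpy as np
import gym

env = gym.make("Taxi-v3")
Q = np.zeros((env.observation_space.n, env.action_space.n))  # 500 states x 6 actions
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate

for episode in range(2000):
    state, done = env.reset(), False
    while not done:
        # epsilon-greedy action selection
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done, _ = env.step(action)
        # off-policy update: bootstrap from the greedy value of the next state
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state
```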

The Q-Learning, SARSA, and Monte Carlo algorithms are implemented in Q_Agent.py, SARSA_Agent.py, and MonteCarlo_Agent.py, respectively. The effect of changing their parameters is examined in Taxi_QAgent.ipynb, Taxi_SARSA.ipynb, and Taxi_MCAgent.ipynb, and the three algorithms are compared in Taxi_CompareAgents.ipynb. A sketch of how the SARSA and Q-Learning update rules differ is shown below.
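
The main difference between the first two agents is the update rule. The following is a hedged sketch of that contrast; the function names are illustrative and not the ones used in the agent files.

```python
# Contrast between the on-policy SARSA update and the off-policy Q-Learning update.
# Q is a NumPy array of shape (n_states, n_actions); names are illustrative.
def sarsa_update(Q, s, a, r, s_next, a_next, alpha, gamma):
    # SARSA bootstraps from the action the behaviour policy actually takes next
    target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (target - Q[s, a])

def q_learning_update(Q, s, a, r, s_next, alpha, gamma):
    # Q-Learning bootstraps from the greedy action, whatever action is actually taken
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])
```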

Results

Effect of changing the Q-Learning parameters

Learning Rate | Discount Factor
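
One way such plots can be produced is a simple sweep over a hyperparameter while recording per-episode rewards. The sketch below makes the same classic Gym API assumption as above and is illustrative; the actual notebook code may differ.

```python
# Illustrative learning-rate sweep for the Q-Learning agent; plotting the
# per-episode rewards gives curves of the kind referenced above.
import numpy as np
import gym
import matplotlib.pyplot as plt

def train_q_learning(alpha, gamma=0.9, epsilon=0.1, episodes=2000):
    env = gym.make("Taxi-v3")
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    rewards_per_episode = []
    for _ in range(episodes):
        state, done, total = env.reset(), False, 0
        while not done:
            if np.random.rand() < epsilon:
                action = env.action_space.sample()
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, done, _ = env.step(action)
            Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
            state, total = next_state, total + reward
        rewards_per_episode.append(total)
    return rewards_per_episode

for alpha in (0.05, 0.2, 0.8):
    plt.plot(train_q_learning(alpha), label=f"alpha={alpha}")
plt.xlabel("Episode")
plt.ylabel("Total reward")
plt.legend()
plt.show()
```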

Effect of changing the SARSA parameters

Learning Rate | Discount Factor

Effect of changing the Monte Carlo parameters

Reward | Steps
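
For context, here is a sketch of a constant-step-size, epsilon-greedy Monte Carlo control loop of the kind such an agent can use. The exact scheme in MonteCarlo_Agent.py may differ (for example, first-visit updates with averaged returns).

```python
# Sketch of every-visit Monte Carlo control with a constant step size:
# generate a full episode first, then update Q from the sampled returns.
import numpy as np
import gym

env = gym.make("Taxi-v3")
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, epsilon = 0.05, 0.9, 0.1

for _ in range(5000):
    # roll out one episode with the current epsilon-greedy policy
    trajectory, state, done = [], env.reset(), False
    while not done:
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done, _ = env.step(action)
        trajectory.append((state, action, reward))
        state = next_state
    # walk the episode backwards, accumulating the discounted return G
    G = 0.0
    for state, action, reward in reversed(trajectory):
        G = reward + gamma * G
        Q[state, action] += alpha * (G - Q[state, action])
```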

Comparison of Q-Learning, SARSA, and Monte Carlo

Reward | Steps