Hello there! 👋 Welcome to this repository dedicated to understanding the core concepts and algorithms of Reinforcement Learning (RL).
Reinforcement learning is a type of machine learning in which an agent learns how to behave in an environment by taking actions and receiving rewards. This repo contains experiments and implementations of fundamental RL algorithms, offering a deep dive into their inner workings.
- **Epsilon-Greedy**: A simple but effective method where the agent occasionally tries a random action (exploration), but most of the time chooses the action it predicts will have the highest reward (exploitation).
- **Upper Confidence Bound (UCB)**: This algorithm handles the exploration-vs-exploitation dilemma by choosing the action with the highest upper confidence bound on its estimated reward. It is a principled way to balance trying new things against sticking with what is known to work.
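The epsilon-greedy rule described above can be sketched in a few lines. This is a minimal illustration, not code from this repo; the function name and signature are assumptions:

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """Pick a random action with probability epsilon (explore),
    otherwise pick the action with the highest estimated value (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))          # explore: uniform random action
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit: greedy action
```

With `epsilon=0` this is purely greedy; with `epsilon=1` it is purely random, so `epsilon` directly tunes the exploration rate.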
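The UCB selection rule can likewise be sketched as follows, using the standard form Q(a) + c·sqrt(ln t / N(a)); the function name, signature, and default `c` are assumptions for illustration:

```python
import math

def ucb_select(q_values, counts, t, c=2.0):
    """Choose the arm maximizing Q(a) + c * sqrt(ln(t) / N(a)).

    q_values: estimated reward per action
    counts:   number of times each action has been tried (N(a))
    t:        current time step
    c:        exploration strength
    """
    # An untried arm has an infinite bonus, so try it first.
    for a, n in enumerate(counts):
        if n == 0:
            return a
    return max(
        range(len(q_values)),
        key=lambda a: q_values[a] + c * math.sqrt(math.log(t) / counts[a]),
    )
```

The bonus term shrinks as an action is tried more often, so rarely tried actions get a boost early on, while the estimate Q(a) dominates in the long run.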
One of the highlights of this repo is the inclusion of Jupyter notebooks. These interactive notebooks provide a hands-on experience, allowing you to visualize and experiment with the algorithms in real time.
👉 Click here to dive into the Jupyter notebooks!
- Clone this repository:

  ```bash
  git clone https://github.com/your-username/rl-fundamentals.git
  ```
## Contribution

Feel free to raise issues, submit pull requests, or simply share your feedback. All contributions are welcome!
## License

This project is under the MIT License. See LICENSE for more details.