This project uses a Jupyter notebook to train a Deep Q-Learning agent on the classic "CartPole" problem from OpenAI's Gym, in which a pole must be balanced upright on a moving cart.
The goal is to develop an agent that keeps the pole upright for as long as possible by moving the cart it is mounted on. The project serves as an introduction to the core concepts of Deep Q-Learning, including experience replay and the use of a neural network to approximate Q-values.
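Experience replay means storing past transitions and training on random mini-batches of them rather than on consecutive steps. A minimal sketch of such a buffer (the class name and capacity are illustrative, not taken from the notebook):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of (state, action, reward, next_state, done) transitions."""

    def __init__(self, capacity=10_000):
        # deque with maxlen evicts the oldest transition automatically
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random sampling breaks the correlation between consecutive steps
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

Sampling mini-batches this way is what lets the Q-network be trained with standard stochastic gradient descent despite the sequential nature of the environment.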
To run this project, you will need:
- Python 3.8 or above
- An environment manager (e.g., conda or venv)
- Jupyter Notebook or JupyterLab
- Clone the repository to your local machine.
- Create a virtual environment: `python -m venv venv`
- Activate the virtual environment:
  - On Windows: `venv\Scripts\activate`
  - On Unix or macOS: `source venv/bin/activate`
- Install the required packages: `pip install -r requirements.txt`
- Open `cartpole.ipynb` in Jupyter Notebook or JupyterLab and follow the instructions.
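Once the environment is set up, the decision rule at the heart of the training loop is epsilon-greedy action selection: explore with probability epsilon, otherwise take the action with the highest estimated Q-value. A minimal sketch, independent of the notebook's exact code:

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """Return a random action with probability epsilon, else the greedy one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))  # explore
    # exploit: index of the largest Q-value
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

In practice epsilon is typically annealed from a high value toward a small floor as training progresses, shifting the agent from exploration to exploitation.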
- Python: The main programming language used.
- OpenAI Gym: Provides the CartPole environment.
- TensorFlow: Used for creating and training the neural network.
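The TensorFlow network is trained toward the standard one-step Q-learning target, r + gamma * max Q(s', a'), with the bootstrap term zeroed at episode ends. In NumPy terms (a sketch; the notebook's variable names and gamma may differ):

```python
import numpy as np

def q_targets(rewards, next_q_values, dones, gamma=0.99):
    """One-step TD targets for a batch.

    rewards:       shape (batch,)
    next_q_values: shape (batch, n_actions), Q-estimates for next states
    dones:         shape (batch,), 1.0 where the episode ended
    """
    return rewards + gamma * next_q_values.max(axis=1) * (1.0 - dones)
```

These targets serve as the regression labels for the network's predicted Q-values of the actions actually taken.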
This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.