A Reinforcement Learning project that trains several Double Deep Q-Network (DDQN) variants to excel at the Chrome Dino Game by dodging obstacles and maximizing the score through iterative learning.
Team members:
The diagram illustrates the architecture of the DDQN. The agent receives a stack of four images (80x80 pixels each) representing the most recent states and uses its Q-network to predict Q-values for the possible actions ("Do nothing" or "Jump"). Based on these Q-values, the agent selects an action and interacts with the environment, which yields a new state and a reward.
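As an illustrative sketch only (not the repository's actual code), a Q-network of this shape, together with the Double-DQN target that gives the method its name, could look as follows in PyTorch; the framework choice, layer sizes, and discount factor are assumptions:

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a stack of 4 grayscale 80x80 frames to Q-values for the 2 actions."""

    def __init__(self, num_actions: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4),   # 4 stacked frames in
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 6 * 6, 512),    # an 80x80 input shrinks to 6x6 after the convs
            nn.ReLU(),
            nn.Linear(512, num_actions),   # one Q-value per action: "Do nothing", "Jump"
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def ddqn_target(online, target, reward, next_state, done, gamma=0.99):
    """Double-DQN target: the online net picks the action, the target net scores it."""
    with torch.no_grad():
        best = online(next_state).argmax(dim=1, keepdim=True)   # action selection
        next_q = target(next_state).gather(1, best).squeeze(1)  # action evaluation
    return reward + gamma * next_q * (1.0 - done)
```

During play, an ε-greedy policy would take `net(state).argmax(dim=1)` with probability 1−ε and a random action otherwise; decoupling action selection from action evaluation is what distinguishes Double DQN from vanilla DQN.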
The chart compares the maximum scores reached by the different variants in a random environment over 20 test rounds. Variants 1 (blue), 2 (orange), and 3 (green) all surpassed the baseline (grey), with Variant 3 performing slightly better than Variants 1 and 2.
- Python 3.8 or higher
- Google Chrome browser
- Clone the repository: `git clone https://github.com/EmreYY20/DinoRL.git`
- Navigate to the project directory: `cd DinoRL`
- Install the required Python libraries: `pip install -r requirements.txt`
- Start a local server for the game: `python -m http.server 8000`
- Run the code by passing a config file; in the config file, select whether to train or test (a hypothetical entry-point sketch follows below): `python main.py -c config/config1`
- To run the baseline: `python baseline/main_modified.py -c config/baseline_config`
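The repository's config schema isn't reproduced in this section. Purely as a hypothetical sketch of how a config-driven entry point such as `main.py` might branch between training and testing (the JSON format and the `mode` field are assumptions, not the project's actual schema):

```python
# Hypothetical sketch only: the JSON format and the "mode" field are
# assumptions, not the repository's actual config schema.
import argparse
import json

def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("-c", "--config", required=True, help="path to a config file")
    args = parser.parse_args()

    with open(args.config) as f:
        cfg = json.load(f)  # the real project may use a different format

    if cfg.get("mode", "train") == "train":  # hypothetical train/test switch
        print("Training with settings:", cfg)
    else:
        print("Testing with settings:", cfg)

if __name__ == "__main__":
    main()
```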
After training, a tfevents file is created in the `runs` folder, which can be opened with TensorBoard.
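For example, assuming TensorBoard is installed (`pip install tensorboard`), running `tensorboard --logdir runs` from the project root and opening the printed URL displays the logged training curves.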
This project is licensed under the MIT License - see the LICENSE file for details.