
Hill Climbing methods

Description

In this repo I explore improvements to Hill Climbing, such as adaptive noise scaling and the cross-entropy method, and use them to solve the CartPole-v0 environment from OpenAI Gym.
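The core idea of hill climbing with adaptive noise scaling is: perturb the best weights found so far, shrink the noise when a perturbation improves the return, and grow it (restarting from the best weights) when it does not. A minimal sketch of that loop, using a toy objective in place of the gym environment (the real implementation lives in ce_w_ans_agent.py, and the function and parameter names here are illustrative, not the repo's API):

```python
import numpy as np

def hill_climb(evaluate, dim, n_iters=200, noise_scale=0.1,
               scale_min=1e-3, scale_max=2.0):
    """Steepest-ascent hill climbing with adaptive noise scaling.

    evaluate(weights) -> return of one episode; dim = size of the
    weight vector. On improvement the noise scale shrinks (exploit);
    otherwise it grows (explore) and we restart from the best weights.
    """
    best_w = np.zeros(dim)
    best_r = evaluate(best_w)
    for _ in range(n_iters):
        candidate = best_w + noise_scale * np.random.randn(dim)
        r = evaluate(candidate)
        if r > best_r:
            best_w, best_r = candidate, r
            noise_scale = max(scale_min, noise_scale / 2)  # exploit
        else:
            noise_scale = min(scale_max, noise_scale * 2)  # explore
    return best_w, best_r

# Toy stand-in for an episode return: maximum at w = [1, -1].
np.random.seed(0)
target = np.array([1.0, -1.0])
w, r = hill_climb(lambda w: -np.sum((w - target) ** 2), dim=2)
```

In the actual CartPole setting, `evaluate` would roll out one episode in the gym environment with the given policy weights and return the accumulated reward.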

Usage

The RL algorithm is located in the "ce_w_ans_agent.py" file. To see it working on the gym environment, run the Jupyter notebook OpenAI_Gym_CartPole-v0.ipynb, where you can either train the agent from scratch or comment out the training phase and load the saved weights to test it.
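The agent in ce_w_ans_agent.py combines the cross-entropy method with adaptive noise scaling. For reference, a minimal sketch of the plain cross-entropy step on a toy objective (again with illustrative names, not the repo's actual API): sample a population of weight vectors, keep the best fraction, and refit the sampling mean to those elites.

```python
import numpy as np

def cross_entropy_method(evaluate, dim, n_iters=50, pop_size=50,
                         elite_frac=0.2, sigma=0.5):
    """Cross-entropy method over policy weights.

    Each iteration samples pop_size weight vectors around the current
    mean, evaluates them, and moves the mean to the average of the
    top elite_frac performers.
    """
    n_elite = int(pop_size * elite_frac)
    mean = np.zeros(dim)
    for _ in range(n_iters):
        population = mean + sigma * np.random.randn(pop_size, dim)
        returns = np.array([evaluate(w) for w in population])
        elites = population[returns.argsort()[-n_elite:]]  # top performers
        mean = elites.mean(axis=0)
    return mean, evaluate(mean)

# Toy stand-in for an episode return: maximum at w = [1, -1].
np.random.seed(0)
target = np.array([1.0, -1.0])
w, r = cross_entropy_method(lambda w: -np.sum((w - target) ** 2), dim=2)
```

Adaptive noise scaling would additionally shrink or grow `sigma` depending on whether the elite mean keeps improving.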

Installation

To use this code you need to install the following packages:

  • gym
  • numpy
  • jupyter
  • matplotlib
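The packages above can be installed with pip, for example:

```shell
pip install gym numpy jupyter matplotlib
```

Note that CartPole-v0 is an environment from the classic gym API; if you install a newer gym (or gymnasium) release, the environment id or API may differ.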

License

GNU General Public License v3.0