This project collects a set of neuroevolution experiments applying deep networks to reinforcement learning control problems, using an unsupervised learning feature extractor.
The experiments for this paper are based on this code.
The algorithms themselves are implemented in the machine_learning_workbench
library; these experiments use version 0.8.0.
First make sure the OpenAI Gym is installed for Python 3 via pip (instructions here).
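If pip for Python 3 is available, the install is typically a one-liner (defer to the Gym instructions linked above for your platform):

$ pip3 install gym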
You will also need GVGAI_GYM to access the GVGAI environments.
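GVGAI_GYM is usually installed from source; the steps below are a sketch assuming the common clone-and-editable-install workflow (the repository URL and commands are assumptions, check that project's README for the authoritative instructions):

$ git clone https://github.com/rubenrtorrado/GVGAI_GYM.git
$ cd GVGAI_GYM
$ pip3 install -e .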
Clone this repository, then execute:
$ bundle install
$ bundle exec ruby experiments/cartpole.rb
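The experiment scripts drive the Gym from Ruby through PyCall.rb (linked below). As a rough sketch of that bridge, here is a minimal random rollout; it is not a file in this repo, and it assumes the classic Gym step API returning `(observation, reward, done, info)`:

```ruby
# Illustrative only: a random CartPole rollout over the PyCall bridge.
require 'pycall/import'
include PyCall::Import
pyimport :gym

env = gym.make('CartPole-v0')
env.reset
total_reward = 0
loop do
  # env.step returns a Python tuple; unpack it on the Ruby side
  obs, reward, done, _info = env.step(env.action_space.sample).to_a
  total_reward += reward
  break if done
end
puts "Random policy episode reward: #{total_reward}"
```

In the actual experiments, the evolved network's output would take the place of the random `action_space.sample` call.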
Bug reports and pull requests are welcome on GitHub at https://github.com/giuse/DNE.
The gem is available as open source under the terms of the MIT License.
Please feel free to contribute to this list (see Contributing above).
- UL-ELR stands for Unsupervised Learning plus Evolutionary Reinforcement Learning, from the paper "Intrinsically Motivated Neuroevolution for Vision-Based Reinforcement Learning" (ICDL2011). Check here for the citation reference and PDF.
- BD-NES stands for Block Diagonal Natural Evolution Strategy, from the paper "Block Diagonal Natural Evolution Strategies" (PPSN2012). Check here for the citation reference and PDF.
- RNES stands for Radial Natural Evolution Strategy, from the paper "Novelty-Based Restarts for Evolution Strategies" (CEC2011). Check here for the citation reference and PDF.
- Online VQ stands for Online Vector Quantization, from the paper "Intrinsically Motivated Neuroevolution for Vision-Based Reinforcement Learning" (ICDL2011). Check here for the citation reference and PDF; a toy sketch of the technique follows this list.
- The OpenAI Gym is described here and available in this repo.
- PyCall.rb is available in this repo.
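For intuition on online vector quantization: the quantizer maintains a small dictionary of centroids and, for each incoming observation, nudges the closest centroid toward it. The sketch below is a toy illustration of that idea only, not the implementation in machine_learning_workbench; all names in it are made up for the example.

```ruby
# Toy online vector quantizer (illustrative, not the workbench's code).
class OnlineVQ
  attr_reader :centroids

  def initialize(ncentroids:, dims:, lrate: 0.05)
    @lrate = lrate
    @centroids = Array.new(ncentroids) { Array.new(dims) { rand } }
  end

  # Index of the centroid closest to the observation (squared Euclidean distance)
  def encode(obs)
    @centroids.each_index.min_by do |i|
      @centroids[i].zip(obs).sum { |c, o| (c - o)**2 }
    end
  end

  # Move the winning centroid a small step toward the observation
  def train(obs)
    w = encode(obs)
    @centroids[w] = @centroids[w].zip(obs).map { |c, o| c + @lrate * (o - c) }
    w
  end
end
```

In the UL-ELR setting, such a quantizer can act as the unsupervised feature extractor, compressing raw observations before they reach the evolved controller.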