RL4J has been migrated to a sub-module of the deeplearning4j mono-repository at https://github.com/deeplearning4j/deeplearning4j. All future development continues in that repository, which should be used for all issues and pull requests.
RL4J is a reinforcement learning framework integrated with deeplearning4j and released under an Apache 2.0 open-source license. By contributing code to this repository, you agree to make your contribution available under an Apache 2.0 license.
- DQN (Deep Q-Learning with double DQN)
- Async RL (A3C, async n-step Q-Learning)
Both work with low-dimensional (a flat array of values) and high-dimensional (pixel) input; a sketch of the kind of network used in the low-dimensional case follows.
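In the low-dimensional case the Q-network is just a small dense network built with deeplearning4j. RL4J builds these networks for you from a small configuration object, so the sketch below is only meant to make the idea concrete: it is plain DL4J code, and the seed, layer sizes, and learning rate are illustrative values, not RL4J defaults.

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.nn.weights.WeightInit;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.lossfunctions.LossFunctions;

/** Illustrative dense Q-network for a low-dimensional observation space. */
public class QNetworkSketch {

    public static MultiLayerNetwork build(int numObservations, int numActions) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(123)
                .weightInit(WeightInit.XAVIER)
                .updater(new Adam(1e-3))                 // illustrative learning rate
                .list()
                .layer(0, new DenseLayer.Builder()
                        .nIn(numObservations).nOut(16)
                        .activation(Activation.RELU).build())
                // One linear output per discrete action: the estimated Q-value, trained with MSE
                .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                        .activation(Activation.IDENTITY)
                        .nIn(16).nOut(numActions).build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
        return net;
    }
}
```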
Here is a blog post I wrote introducing reinforcement learning, DQN, and Async RL:
This is a tech preview and is distributed as-is. Comments are welcome on our Gitter channel.
**Install rl4j-api before installing the other modules (see below)!**
- mvn install -pl rl4j-api
- If you also want rl4j-gym, download and mvn install gym-java-client.
- mvn install
- Install gym-http-api.
- Launch the HTTP API server.
- Run your training from a main class (a sketch follows this list).
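The example main that this step originally linked is not reproduced here. The following is a minimal sketch of what such a main looked like with the RL4J API of this period, assuming the classes GymEnv, QLearning.QLConfiguration, DQNFactoryStdDense, QLearningDiscreteDense, DiscreteSpace, Box, and DataManager, and using CartPole-v0 as an example environment. Treat the hyperparameter values and constructor argument order as assumptions to verify against the examples in the mono-repo.

```java
import org.deeplearning4j.rl4j.learning.sync.qlearning.QLearning;
import org.deeplearning4j.rl4j.learning.sync.qlearning.discrete.QLearningDiscreteDense;
import org.deeplearning4j.rl4j.mdp.gym.GymEnv;
import org.deeplearning4j.rl4j.network.dqn.DQNFactoryStdDense;
import org.deeplearning4j.rl4j.space.Box;
import org.deeplearning4j.rl4j.space.DiscreteSpace;
import org.deeplearning4j.rl4j.util.DataManager;

public class CartpoleDQN {

    public static void main(String[] args) throws Exception {
        // Record training data (models, stats) under rl4j-data
        DataManager manager = new DataManager(true);

        // DQN hyperparameters (illustrative): seed, max step per epoch, max step, replay size,
        // batch size, target net update frequency, warmup steps, reward scaling, gamma,
        // td-error clipping, min epsilon, epsilon anneal steps, double DQN
        QLearning.QLConfiguration qlConfig = new QLearning.QLConfiguration(
                123, 200, 150000, 150000, 32, 500, 10,
                0.01, 0.99, 1.0, 0.1f, 1000, true);

        // Dense Q-network configuration: number of layers, hidden nodes, learning rate, l2
        DQNFactoryStdDense.Configuration netConfig =
                new DQNFactoryStdDense.Configuration(3, 16, 0.001, 0.00);

        // Connect to the gym-http-api server started in the previous step (render off, monitor off)
        GymEnv<Box, Integer, DiscreteSpace> mdp = new GymEnv<>("CartPole-v0", false, false);

        // Define and run the training
        QLearningDiscreteDense<Box> dqn = new QLearningDiscreteDense<>(mdp, netConfig, qlConfig, manager);
        dqn.train();

        // Close the underlying HTTP connection to the gym server
        mdp.close();
    }
}
```

Pixel-based environments follow the same structure, with a convolutional network configuration in place of the dense one.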
Doom is not ready yet, but if you feel adventurous you can make it work with a few additional steps:
- You will need ViZDoom: compile the native library and move it into a folder at the root of your project.
- export MAVEN_OPTS=-Djava.library.path=THEFOLDEROFTHELIB
- mvn compile exec:java -Dexec.mainClass="YOURMAINCLASS"
- Download and unzip a Malmo release.
- export MALMO_HOME=YOURMALMO_FOLDER
- export MALMO_XSD_PATH=$MALMO_HOME/Schemas
- Launch Malmo per its instructions.
- Run your training from a main class (analogous to the gym sketch above).
- Documentation
- Serialization/deserialization (load/save)
- Compression of pixels in order to store 1M states in a reasonable amount of memory
- Async learning: A3C and n-step Q-learning (requires features still missing from dl4j: computing and applying gradients separately).
- Continuous control
- Policy Gradient
- Update gym-java-client when gym-http-api becomes compatible with pixel environments, to play with Pong, Doom, etc.