This repository contains the code and PDFs of a series of blog posts called "Dissecting Reinforcement Learning", which I published on my blog mpatacchiola.io/blog. It also collects links to resources that can be useful for a reinforcement learning practitioner. If you have good references that may be of interest, please send me a pull request and I will integrate them into the README.
The source code is contained in src, with subfolder names following the post numbers. In pdf there are the A3 documents of each post for offline reading. In images there are the raw SVG files containing the images used in each post.
Installation
The source code does not require any particular installation procedure. The code can be used on Linux, Windows, OS X, and embedded devices such as the Raspberry Pi, BeagleBone, and Intel Edison. The only requirement is NumPy, which ships with most scientific Python distributions and can be easily installed on Linux, Windows, and OS X through Anaconda or Miniconda. Some examples also require Matplotlib for data visualization and animations.
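As a quick sanity check that NumPy is the only dependency, the following self-contained sketch runs one of the techniques covered in the posts, value iteration, on a hypothetical two-state, two-action MDP. The transition matrix, rewards, and discount factor below are illustrative and not taken from the repository:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (illustrative values, not from the repo).
# P[a, s, s'] = probability of landing in s' when taking action a in state s.
P = np.array([[[0.8, 0.2],
               [0.1, 0.9]],
              [[0.5, 0.5],
               [0.6, 0.4]]])
R = np.array([[1.0, 0.0],   # R[a, s] = immediate reward for action a in state s
              [0.5, 2.0]])
gamma = 0.9                  # discount factor
V = np.zeros(2)              # state-value estimates

for _ in range(200):
    # Bellman optimality backup: V(s) = max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
    Q = R + gamma * (P @ V)          # shape (actions, states)
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:   # stop once the sweep no longer changes V
        V = V_new
        break
    V = V_new

print(V)  # converged state values
```

If this script prints two finite numbers, NumPy is installed and the examples in src should run.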
Posts Content
- [Post one] [code] [pdf] - Markov chains, Markov Decision Processes, the Bellman equation, Value and Policy iteration algorithms.
- [Post two] [code] [pdf] - Monte Carlo methods for prediction and control, Generalised Policy Iteration, action values and the Q-function.
- [Post three] [code] [pdf] - Temporal Difference learning, animal learning, TD(0), TD(λ) and eligibility traces, SARSA, Q-learning.
- [Post four] [code] [pdf] - Neurobiology behind Actor-Critic methods, computational Actor-Critic methods, Actor-only and Critic-only methods.
- [Post five] [code] [pdf] - Introduction to evolutionary algorithms, Genetic Algorithms in reinforcement learning, Genetic Algorithms for policy selection.
- [Post six] [code] [pdf] - Reinforcement learning applications: Multi-Armed Bandits, Mountain Car, Inverted Pendulum, drone landing, hard problems.
- [Post seven] [code] [pdf] - Function approximation: intuition, linear approximators, applications, higher-order approximators.
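Many of the topics above are tabular methods that fit in a few lines of NumPy. As a taste of what the posts cover, here is an illustrative sketch of tabular Q-learning (the off-policy TD method from post three) on a hypothetical five-state corridor; the environment and all the numbers are made up for this example and do not come from the repository code:

```python
import numpy as np

# Toy corridor: states 0..4, start in state 0, reward +1 on reaching state 4.
# Actions: 0 = left, 1 = right. Purely illustrative, not code from the posts.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(500):
    s = 0
    while s != 4:
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s_next == 4 else 0.0
        # Q-learning update: TD(0) target uses the max over next-state actions
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # greedy policy per state (state 4 is terminal)
```

After training, the greedy policy moves right in every non-terminal state, which is the optimal behaviour for this corridor.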
Resources
Software:
- [Google DeepMind Lab] [github] - DeepMind Lab is a fully 3D game-like platform tailored for agent-based AI research.
- [OpenAI Gym] [github] - A toolkit for developing and comparing reinforcement learning algorithms.
- [OpenAI Universe] [github] - Measurement and training for artificial intelligence.
- [RL toolkit] - A collection of utilities and demos developed by the RLAI group, useful for anyone trying to learn, teach, or use reinforcement learning (by Richard Sutton).
- [setosa blog] - A useful visual explanation of Markov chains.
Books and Articles:
- Artificial Intelligence: A Modern Approach (chapters 17 and 21). Russell, S. J., Norvig, P., Canny, J. F., Malik, J. M., & Edwards, D. D. (2003). Upper Saddle River: Prentice Hall. [web] [github]
- Christopher Watkins' doctoral dissertation, which introduced Q-learning. [pdf]
- Evolutionary Algorithms for Reinforcement Learning. Moriarty, D. E., Schultz, A. C., & Grefenstette, J. J. (1999). [pdf]
- Machine Learning (chapter 13). Mitchell, T. (1997). [web]
- Reinforcement Learning: An Introduction. Sutton, R. S., & Barto, A. G. (1998). Cambridge: MIT Press. [html]
- Reinforcement Learning: An Introduction (second edition). Sutton, R. S., & Barto, A. G. (in progress). [pdf]
- Reinforcement Learning in a Nutshell. Heidrich-Meisner, V., Lauer, M., Igel, C., & Riedmiller, M. A. (2007). [pdf]
- Statistical Reinforcement Learning: Modern Machine Learning Approaches. Sugiyama, M. (2015). [web]
License
The MIT License (MIT) Copyright (c) 2017 Massimiliano Patacchiola Website: http://mpatacchiola.github.io/blog
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.