Hands-On Reinforcement Learning Tutorial!
Yam Peleg
www.github.com/ypeleg/keras_rl_tutorial
1. How to tune in?
tl;dr:
- 0.0.0 - Preamble
- 0.0.1 - Demo 1
- 0.0.2 - Keras Functional API
- 1.0.0 - Hands-On Tutorial: Flappy Bird
- 1.0.1 - Q-Learning Baby Steps
- 1.0.2 - Reinforcement Learning Summary
- 1.0.3 - OpenAI Gym
- 2.0.2 - Keras RL
If you wanted to listen to someone speak for three hours straight about deep learning, you could have done so from the comfort of your own home.
But you are here! Physically!
So...
This tutorial is extremely hands-on! You are strongly encouraged to play with it yourself!
Options:
$\varepsilon$. Run the notebooks locally

    git clone https://github.com/ypeleg/keras_rl_tutorial
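Going from a fresh clone to running notebooks looks roughly like this. This is a minimal sketch that assumes Python and pip are already installed; check the repository itself for the exact dependencies it needs (Keras, Gym, etc.):

```shell
# Clone the tutorial repository
git clone https://github.com/ypeleg/keras_rl_tutorial
cd keras_rl_tutorial

# Install Jupyter if you don't have it yet (assumed; any recent version works)
pip install jupyter

# Launch the notebook server; it opens the notebook list in your browser
jupyter notebook
```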
You might think that the goal of this tutorial is for you to play around a bit with deep learning. You Are Wrong.
The real goal of this tutorial is to give you the flexibility to use all of this in your own domain!
Therefore, running all of this on your own machine is by far the best option, if you can get it working!
a. Play with the notebooks dynamically (on Google Colab)
- Anyone can use the colab.research.google.com/notebook website (by clicking on the icon below) to run the notebook in their web browser. You can then play with it for as long as you like!
b. Play with the notebooks dynamically (on MyBinder)
Anyone can use the mybinder.org website (by clicking on the icon above) to run the notebook in their web browser. You can then play with it for as long as you like, for instance by modifying values or experimenting with the code.
c. View the notebooks statically (if all else fails...)
- Either directly on GitHub: ypeleg/ExpertDL;
- Or on nbviewer: notebooks.