/keras_rl_tutorial



Hands-On Reinforcement Learning Tutorial!

Yam Peleg


www.github.com/ypeleg/keras_rl_tutorial


1. How to tune in?

tl;dr:


  • 0.0.0 - Preamble (Open in Colab)
  • 0.0.1 - Demo 1 (Open in Colab)
  • 0.0.2 - Keras Functional API (Open in Colab)
  • 1.0.0 - Hands-On Tutorial: Flappy Bird (Open in Colab)
  • 1.0.1 - Q-Learning Baby Steps (Open in Colab)
  • 1.0.2 - Reinforcement Learning Summary (Open in Colab)
  • 1.0.3 - OpenAI Gym (Open in Colab)
  • 2.0.2 - Keras RL (Open in Colab)
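The Q-learning notebook above covers the tabular case; the core of that algorithm fits in a few lines. Here is a minimal illustrative sketch on a toy chain environment (this is my own example, not code taken from the tutorial notebooks):

```python
import numpy as np

# Toy chain MDP: states 0..4, actions 0 (left) / 1 (right).
# Reaching state 4 yields reward 1 and ends the episode.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def step(s, a):
    """Deterministic transition: move left or right along the chain."""
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    done = s2 == n_states - 1
    return s2, (1.0 if done else 0.0), done

for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2

policy = np.argmax(Q, axis=1)[:-1]
print(policy)  # the learned greedy policy moves right in every non-terminal state
```

After training, the greedy policy heads straight for the rewarding terminal state, which is the whole trick: the update rule bootstraps value estimates backwards from the reward.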

If you wanted to listen to someone speak for three hours straight about deep learning, you could have done so from the comfort of your own home.

But you are here! Physically!

So...

This tutorial is extremely hands-on! You are strongly encouraged to play with it yourself!

Options:

$\varepsilon$. Run the notebooks locally

  • `git clone https://github.com/ypeleg/keras_rl_tutorial`

  • You might think that the goal of this tutorial is for you to play around a bit with deep learning. You Are Wrong.


The real goal of the tutorial is

To give you the flexibility to use all of this in your own domain!


Therefore, running all of this on your machine is by far the best option if you can get it working!
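If you go the local route, the setup might look roughly like this. Note that the package list below is an assumption; check the repository itself for its actual setup instructions:

```shell
# Clone the tutorial repository
git clone https://github.com/ypeleg/keras_rl_tutorial
cd keras_rl_tutorial

# Install the likely dependencies -- this list is a guess, not taken
# from the repository; it may pin specific versions instead
pip install keras gym jupyter

# Launch Jupyter and open the notebooks in your browser
jupyter notebook
```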


a. Play with the notebooks dynamically (on Google Colab)


b. Play with the notebooks dynamically (on MyBinder)

Binder

Anyone can use the mybinder.org website (by clicking the icon above) to run the notebooks in their web browser. You can then play with them as long as you like, for instance by modifying values or experimenting with the code.

c. View the notebooks statically (if all else fails...)