Deep Reinforcement Learning Course
⚠️ The new version of the Deep Reinforcement Learning Course starts on October 2nd, 2020. ➡️ More info here ⬅️
Syllabus
Chapter 1: Introduction to Deep Reinforcement Learning
📜 ARTICLE: Introduction to Deep Reinforcement Learning
📹 VIDEO: Introduction to Deep Reinforcement Learning
📜 ARTICLE: Q-Learning, let’s create an autonomous Taxi 🚖 (Part 1/2)
📹 VIDEO: Q-Learning, let’s create an autonomous Taxi 🚖 (Part 1/2)
📜 ARTICLE: Q-Learning, let’s create an autonomous Taxi 🚖 (Part 2/2) 📅 Friday 📅
📹 VIDEO: Q-Learning, let’s create an autonomous Taxi 🚖 (Part 2/2) 📅 Friday 📅
FROZENLAKE IMPLEMENTATION
Implementing a Q-learning agent that plays Taxi-v2 🚕
📜 ARTICLE // DOOM IMPLEMENTATION
Create a DQN Agent that learns to play Atari Space Invaders 👾
📜 ARTICLE // CARTPOLE IMPLEMENTATION // DOOM IMPLEMENTATION
Create an Agent that learns to play Doom deathmatch
📜 ARTICLE // DOOM DEADLY CORRIDOR IMPLEMENTATION
Create an Agent that learns to play Doom Deadly Corridor
📜 ARTICLE
Create an Agent that learns to play Sonic
📜 ARTICLE
Create an Agent that learns to play Sonic the Hedgehog 2 and 3
📜 ARTICLE
A trained RND agent that learned to play Montezuma's Revenge (21 hours of training with a Tesla K80)
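The tabular chapters above all rely on the same Q-learning update, Q(s,a) ← Q(s,a) + α(r + γ max Q(s',a') − Q(s,a)). As a minimal sketch, here is that update applied to a toy 5-state corridor environment; the corridor, its `step` function, and all hyperparameters are illustrative stand-ins, not the course's FrozenLake or Taxi-v2 setups:

```python
import random

# Toy 5-state corridor: states 0..4, actions 0 = left, 1 = right.
# Reaching state 4 gives reward 1 and ends the episode.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # illustrative hyperparameters

def step(state, action):
    """Hypothetical environment step: move left/right along the corridor."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}  # Q-table

for _ in range(500):  # training episodes
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection: explore with probability EPSILON.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)]
print(policy)  # greedy policy: the agent should learn to move right
```

Swapping the toy `step` function for a Gym environment's `env.step(action)` is essentially what the FrozenLake and Taxi notebooks above do.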
👨‍💻 Any questions 👨‍💻
If you have any questions, feel free to ask me:
📧: simonini.thomas.pro@gmail.com
Github: https://github.com/simoninithomas/Deep_reinforcement_learning_Course
🌐: https://simoninithomas.github.io/deep-rl-course/
Twitter: @ThomasSimonini
Don't forget to follow me on Twitter, GitHub, and Medium to be alerted when I publish new articles.
How to help 🙌
3 ways:
- Clap our articles and like our videos a lot: Clapping on Medium means that you really like our articles, and the more claps we get, the more our articles are shared. Liking our videos helps them become more visible to the deep learning community.
- Share and talk about our articles and videos: By sharing our articles and videos, you help us spread the word.
- Improve our notebooks: If you find a bug or a better implementation, you can send a pull request.