Repository for all the code from my YouTube channel
You can find me at https://youtube.com/MachineLearningWithPhil
My crude implementation of a convolutional neural network to perform image classification on data gathered
by the Magellan spacecraft. The data is horribly skewed, as most images do not contain a volcano.
This means we'll have to do some creative data engineering for our model training.
Please note that 84.1% of the test set is "no volcano", and our model returns
an accuracy of around 88%, which beats a baseline model that always predicts "no volcano".
You can check out the video for this at https://youtu.be/Ki-xOKydQrY
You can find the data for this project at https://www.kaggle.com/fmena14/volcanoesvenus/home
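The "creative data engineering" needed for a skewed dataset like this usually starts with rebalancing the classes. As a minimal sketch (not the exact preprocessing used in the video), here is random undersampling of the majority class with NumPy; the toy arrays stand in for the real image data:

```python
import numpy as np

def balance_by_undersampling(X, y, rng=None):
    """Undersample the majority class so both classes appear equally often."""
    rng = rng or np.random.default_rng(42)
    pos = np.where(y == 1)[0]                         # volcano (minority)
    neg = np.where(y == 0)[0]                         # no volcano (majority)
    keep = rng.choice(neg, size=len(pos), replace=False)
    idx = np.concatenate([pos, keep])
    rng.shuffle(idx)
    return X[idx], y[idx]

# toy stand-in data: 10 positives, 90 negatives
X = np.arange(100).reshape(100, 1)
y = np.array([1] * 10 + [0] * 90)
Xb, yb = balance_by_undersampling(X, y)
print(len(yb), yb.mean())  # 20 samples, 50% positive
```

Oversampling the minority class or weighting the loss are equally valid alternatives; undersampling is just the simplest to show.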
My implementation of the Deep Q learning algorithm in PyTorch. Here we teach the agent to play Space Invaders. I haven't had enough time to fully train this model yet, as it takes quite some time even on my 1080 Ti / i7-7820K @ 4.4 GHz. I'll train it for longer and provide a video on how well it does at a later time.
The blog post talking about how Deep Q learning works can be found at http://www.neuralnet.ai/coding-a-deep-q-network-in-pytorch/
Video for this is at https://www.youtube.com/watch?v=RfNxXlO6BiA&t=2s
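At the heart of Deep Q learning is the Bellman target the network is trained towards: r + gamma * max_a' Q(s', a'), with terminal states bootstrapping nothing. A small NumPy sketch of just that target construction (the network and replay buffer from the actual code are omitted; all names here are illustrative):

```python
import numpy as np

def dqn_targets(q_pred, q_next, actions, rewards, dones, gamma=0.99):
    """Build Q-learning targets for a batch of transitions.

    q_pred: (batch, n_actions) Q-values from the online network
    q_next: (batch, n_actions) Q-values for the next states
    Only the taken action's entry is changed, so the loss on
    all other actions is zero.
    """
    targets = q_pred.copy()
    batch = np.arange(len(actions))
    targets[batch, actions] = rewards + gamma * q_next.max(axis=1) * (1 - dones)
    return targets

q_pred = np.zeros((2, 3))
q_next = np.array([[1.0, 2.0, 0.5],
                   [0.0, 0.0, 0.0]])
targets = dqn_targets(q_pred, q_next,
                      actions=np.array([0, 1]),
                      rewards=np.array([1.0, -1.0]),
                      dones=np.array([0.0, 1.0]))
print(targets)  # row 0, action 0: 1 + 0.99*2 = 2.98; row 1 is terminal: -1.0
```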
Simple implementation of a convolutional neural network in TensorFlow 1.5.
Video tutorial on this code can be found here https://youtu.be/azFyHS0odcM
Achieves an accuracy of 98% after 10 epochs of training.
Requires data from http://yann.lecun.com/exdb/mnist/
Implementation of Monte Carlo control without exploring starts in the blackjack environment from the OpenAI gym.
Video tutorial on this code can be found at https://youtu.be/e8ofon3sg8E
Algorithm trains for 1,000,000 games and produces a win rate of around 42%, loss rate of 52% and draw rate of 6%
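"Without exploring starts" means exploration comes from an epsilon-soft policy instead of random starting state-action pairs. A minimal pure-Python sketch of the first-visit update loop; the one-step toy game below is a hypothetical stand-in for the gym blackjack environment, not the code from the video:

```python
import random
from collections import defaultdict

def epsilon_greedy(Q, state, actions, eps):
    """Epsilon-soft action selection: explore with probability eps."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def mc_control(play_episode, actions, n_games=2000, eps=0.1, gamma=1.0):
    """First-visit Monte Carlo control with an epsilon-soft policy,
    the usual alternative to exploring starts."""
    Q = defaultdict(float)
    counts = defaultdict(int)
    for _ in range(n_games):
        episode = play_episode(lambda s: epsilon_greedy(Q, s, actions, eps))
        G, seen = 0.0, set()
        for state, action, reward in reversed(episode):
            G = gamma * G + reward
            if (state, action) not in seen:          # first-visit update
                seen.add((state, action))
                counts[(state, action)] += 1
                Q[(state, action)] += (G - Q[(state, action)]) / counts[(state, action)]
    return Q

# hypothetical one-step game: action 'a' always wins, 'b' always loses
def play_episode(policy):
    a = policy(0)
    return [(0, a, 1.0 if a == 'a' else 0.0)]

random.seed(0)
Q = mc_control(play_episode, actions=['a', 'b'])
print(Q[(0, 'a')], Q[(0, 'b')])  # learns that 'a' is worth 1.0 and 'b' 0.0
```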
Implementation of off policy Monte Carlo control in the blackjack environment from the OpenAI gym.
Video tutorial on this code can be found at https://youtu.be/TvO0Sa-6UVc
Algorithm trains for 1,000,000 games and produces a win rate of around 29%, loss rate of 66% and draw rate of 5%
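Off-policy MC control learns about a greedy target policy from episodes played by a different behaviour policy, correcting the mismatch with importance sampling. A sketch of the weighted importance sampling update; the two-action toy episodes are hypothetical, and the behaviour policy is assumed to pick each action with probability 0.5:

```python
from collections import defaultdict

def off_policy_mc(episodes, target_action, gamma=1.0):
    """Weighted importance sampling evaluation of a deterministic target
    policy, from episodes generated by a uniform behaviour policy."""
    Q = defaultdict(float)
    C = defaultdict(float)                   # cumulative importance weights
    for episode in episodes:
        G, W = 0.0, 1.0
        for state, action, reward in reversed(episode):
            G = gamma * G + reward
            C[(state, action)] += W
            Q[(state, action)] += (W / C[(state, action)]) * (G - Q[(state, action)])
            if action != target_action(state):
                break                        # pi(a|s) = 0, weight collapses
            W *= 1.0 / 0.5                   # pi(a|s) = 1 over b(a|s) = 0.5

    return Q

episodes = [[(0, 'hit', 1.0)], [(0, 'stick', 0.0)]]
Q = off_policy_mc(episodes, target_action=lambda s: 'hit')
print(Q[(0, 'hit')], Q[(0, 'stick')])  # 1.0 and 0.0
```

The lower win rate versus the on-policy version is expected: importance sampling weights make the value estimates noisier, so the learned policy tends to be worse for the same number of games.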
Implementation of the Q learning algorithm for the cart pole problem. Code is based on the course by Lazy Programmer,
which you can find here
Video tutorial on this code can be found at https://youtu.be/ViwBAK8Hd7Q
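Since cart pole has a continuous observation, tabular Q learning needs the state discretized into buckets first. A sketch of that plus the Q update; the bucket edges below are illustrative, not the ones from the course:

```python
import numpy as np

def discretize(obs, bins):
    """Map a continuous cart-pole observation onto discrete bucket indices."""
    return tuple(int(np.digitize(x, b)) for x, b in zip(obs, bins))

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Tabular Q-learning update: bootstrap off the best next action."""
    Q[s][a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s][a])

# hypothetical bins for [cart pos, cart vel, pole angle, pole vel]
bins = [np.linspace(-2.4, 2.4, 9), np.linspace(-4, 4, 9),
        np.linspace(-0.21, 0.21, 9), np.linspace(-4, 4, 9)]
Q = np.zeros((10, 10, 10, 10, 2))     # 10 buckets per dimension, 2 actions
s = discretize([0.1, -0.5, 0.02, 1.3], bins)
q_update(Q, s, a=1, r=1.0, s_next=s)
print(Q[s][1])  # 0.1 * (1 + 0.99*0 - 0) = 0.1
```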
Implementation of the double Q learning algorithm in the cart pole environment. This is based on my course on
reinforcement learning, which you can find at this repo
Video tutorial on this code can be found at https://youtu.be/Q99bEPStnxk
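Double Q learning keeps two tables and, on each step, lets one pick the argmax action while the other evaluates it, which reduces the maximisation bias of plain Q learning. A minimal sketch of the update rule with illustrative names (not the course code itself):

```python
import random
from collections import defaultdict

def double_q_update(Q1, Q2, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """Double Q-learning: one table selects the greedy next action,
    the other evaluates it; a coin flip decides which table is updated."""
    if random.random() < 0.5:
        Q1, Q2 = Q2, Q1                  # update the other table half the time
    best = max(actions, key=lambda a2: Q1[(s_next, a2)])
    target = r + gamma * Q2[(s_next, best)]
    Q1[(s, a)] += alpha * (target - Q1[(s, a)])

Q1, Q2 = defaultdict(float), defaultdict(float)
random.seed(1)
double_q_update(Q1, Q2, s=0, a=1, r=1.0, s_next=0, actions=[0, 1])
print(Q1[(0, 1)] + Q2[(0, 1)])  # exactly one table moved to 0.1
```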
Implementation of the SARSA algorithm in the cart pole environment. This is based on my course on reinforcement learning,
which can be found here
Video tutorial on this code can be found at https://youtu.be/P9XezMuPfLE
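The defining difference from Q learning is that SARSA is on-policy: it bootstraps off the action the agent actually takes in the next state rather than the greedy one. The one-line update, sketched with illustrative states and values rather than the course's cart pole code:

```python
from collections import defaultdict

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """SARSA update: uses the (s, a, r, s', a') tuple that names the
    algorithm, bootstrapping off the actually-taken next action."""
    Q[(s, a)] += alpha * (r + gamma * Q[(s_next, a_next)] - Q[(s, a)])

Q = defaultdict(float)
Q[(1, 0)] = 0.5                    # hypothetical value of the next pair
sarsa_update(Q, s=0, a=1, r=1.0, s_next=1, a_next=0)
print(round(Q[(0, 1)], 4))  # 0.1 * (1 + 0.99*0.5) = 0.1495
```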