Attempting the #100_days_of_ml_code challenge inspired by Siraj Raval. I'll write my thoughts here at the end of each day. All the code can be found in the './code' folder.
Revised TensorFlow from scratch. Created a program to train a model on the Fashion-MNIST dataset. However, the model won't train for more than a few epochs; working on it. Code is in the 'code' folder.
Attained 96% accuracy on the Fashion-MNIST dataset and debugged the TensorFlow code. Link to the working ipynb: https://github.com/aditya9898/100_days_of_ml_code/blob/master/code/%5Bday%202%5D%20tf%20practice%20%5B3%20aug%202018%5D.ipynb
Attended the ML Code Jam workshop conducted by Google. Learnt about tf.estimator and got an introduction to TensorFlow.
Revised Keras models and made a 3-layer conv model to fit the Fashion-MNIST dataset. Attained 90% accuracy on both the train and test sets. Still have to learn why batch normalization affects the training of conv nets so much.
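For reference, a minimal sketch of what a 3-layer conv model with batch normalization on Fashion-MNIST can look like in Keras. The layer sizes and hyperparameters here are illustrative assumptions, not the exact ones used above; BatchNormalization is placed before each activation, one common convention.

```python
# Sketch: 3-conv-layer model for Fashion-MNIST with BatchNormalization.
# Filter counts and optimizer settings are illustrative, not the exact ones used.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model():
    model = models.Sequential([
        layers.Input(shape=(28, 28, 1)),          # Fashion-MNIST images are 28x28 grayscale
        layers.Conv2D(32, 3, padding='same'),
        layers.BatchNormalization(),              # normalizes activations per mini-batch
        layers.Activation('relu'),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding='same'),
        layers.BatchNormalization(),
        layers.Activation('relu'),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, padding='same'),
        layers.BatchNormalization(),
        layers.Activation('relu'),
        layers.GlobalAveragePooling2D(),
        layers.Dense(10, activation='softmax'),   # 10 clothing classes
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```

Train with `model.fit(x_train, y_train, epochs=5)` after scaling pixels to [0, 1] and adding a channel axis.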
Created a convolutional neural net in TensorFlow and trained a model on the Fashion-MNIST dataset. A few errors are left to debug. Once finished, I'll analyse which architecture is best suited for Fashion-MNIST. Code will be uploaded to the ./code folder.
Spent the whole day debugging TensorFlow conv net code. When making conv nets in TF, we initialize the weight variables (W1, W2, W3), but TF automatically manages the bias and fully connected layer weights. After training, how do I retrieve these parameters so I can make predictions on new data?
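One answer to the question above, sketched with the TF 1.x-style API (via `tf.compat.v1`): even when a layer creates its weights and biases internally, they are all registered as trainable variables, so you can list them and fetch their trained values without holding explicit references. The tiny dense layer here is just a stand-in for the auto-managed layers mentioned above.

```python
# Sketch (TF 1.x graph mode via tf.compat.v1): layers create their own
# kernel/bias variables; trainable_variables() returns all of them.
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, [None, 4])
logits = tf.compat.v1.layers.dense(x, 3)   # kernel and bias created internally

# Every variable the graph created, including the hidden kernel/bias:
params = tf.compat.v1.trainable_variables()

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    values = sess.run(params)  # fetch the (trained) numpy arrays for reuse
```

For prediction on new data, the usual route is `tf.compat.v1.train.Saver().save(sess, path)` after training and `saver.restore(sess, path)` in the inference graph, which covers all these variables by default.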
Learnt transfer learning using Keras. Tried different models such as ResNet50 and MobileNet. Froze the initial layers and trained the final layers to make a smiling/not-smiling classifier using MobileNet. Also worked on Google Colaboratory and used transfer learning on the ResNet50 model to train on a cats/dogs dataset. Link to the code: https://github.com/aditya9898/100_days_of_ml_code/blob/master/code/%5Bday%207%5D%20keras%20transfer%20learning%20%5B8%20aug%202018%5D.ipynb
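The freeze-and-retrain pattern described above can be sketched like this in Keras, with MobileNet as the frozen base and a small trainable head for the binary smiling/not-smiling task. This is a minimal sketch, not the exact notebook code; in practice you'd pass `weights='imagenet'` (set to `None` here only to avoid a download).

```python
# Sketch of Keras transfer learning: freeze a pretrained base (MobileNet)
# and train only a small classification head on top.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNet(
    weights=None,              # use 'imagenet' in practice
    include_top=False,         # drop the original 1000-class classifier
    input_shape=(224, 224, 3))
base.trainable = False         # freeze all pretrained layers

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation='sigmoid'),  # smiling / not smiling
])
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
```

Only the final Dense layer's weights get updated during `model.fit`, which is why this trains quickly even on small datasets.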
Implemented transfer learning on a few real-world datasets. Learnt about Keras training data generators (ImageDataGenerator) and used one to train the dog breed classifier. Used Python to make nested folders and sort the pictures of the different dog breeds into separate folders so the data generator could use them.
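The folder-sorting step above can be sketched with plain `os`/`shutil`; `flow_from_directory` expects one subfolder per class. The `labels` dict mapping filename to breed is an assumed input here — in practice it would come from the competition's labels CSV.

```python
# Sketch: copy images into one subfolder per class, the layout that
# Keras's ImageDataGenerator.flow_from_directory expects.
import os
import shutil

def sort_into_class_folders(src_dir, dst_dir, labels):
    """Copy each file in src_dir into dst_dir/<label>/ using a
    filename -> class-name dict (e.g. {'abc.jpg': 'beagle'})."""
    for fname, label in labels.items():
        class_dir = os.path.join(dst_dir, label)
        os.makedirs(class_dir, exist_ok=True)   # nested folder per breed
        shutil.copy(os.path.join(src_dir, fname),
                    os.path.join(class_dir, fname))
```

After this, `ImageDataGenerator().flow_from_directory(dst_dir, ...)` infers the class of each image from its subfolder name.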
Used transfer learning to train the dog breed classifier on MobileNet. Achieved 77% accuracy on the training set, but much lower accuracy on the test set.
Attended Week 2 of the ML Code Jam by Google and Kerala AI. Met a few freelancers in the field and interacted with them. Got to know about several hackathons and ways to learn ML.
Attempted training the dog breed classifier in a Kaggle kernel. Attended a meetup and met a lot of people. Planning on submitting predictions for the test dataset of the dog breed classification competition. Code will be posted in the './code' folder.
Took a break from coding today. Went through a couple of repositories and revised neural style transfer. Planning on implementing it tomorrow.
Took a break. Could not find time every day.
Revisited transfer learning in Keras. Went through Lesson 1 of fast.ai on Google Colab. Worked on neural style transfer with help from Siraj Raval's video.
Learnt neural style transfer and the concept of the Gram matrix. Learnt how to use a pretrained conv net in TensorFlow. Also uploaded code for some simple file transfers in Python to enable the use of Keras ImageDataGenerators. For code, see the ImportantCodeSnipets repo.
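The Gram matrix idea above is small enough to sketch: flatten a layer's feature map over its spatial dimensions and take the channel-by-channel inner products, which captures which filters fire together (the "style") while discarding where they fire.

```python
# Sketch: Gram matrix of a conv feature map, as used in neural style transfer.
import tensorflow as tf

def gram_matrix(features):
    """features: tensor of shape (H, W, C) -> Gram matrix of shape (C, C)."""
    h, w, c = features.shape
    flat = tf.reshape(features, (h * w, c))          # (H*W, C): one row per pixel
    gram = tf.matmul(flat, flat, transpose_a=True)   # (C, C): channel correlations
    return gram
```

For an all-ones 2x2x3 feature map, every entry of the 3x3 Gram matrix is 4 (the number of spatial positions), since each pair of channels is perfectly correlated.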
Attempted to do style transfer on Kaggle. Learnt the TensorFlow code for it; however, errors remain. Working on it.
Continuing style transfer on Kaggle. Have doubts about how TensorFlow optimizers work; working on it. Refreshed knowledge of Gram matrices and the style cost of neural style transfer. Link to code
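The style cost refreshed above can be sketched for a single layer: the squared Frobenius distance between the Gram matrices of the style image's and generated image's feature maps, normalized by (2·H·W·C)² as in the Gatys et al. formulation. This is a minimal sketch, not the notebook's exact code.

```python
# Sketch: per-layer style cost for neural style transfer.
import tensorflow as tf

def layer_style_cost(a_style, a_generated):
    """a_style, a_generated: feature maps of shape (H, W, C) from one conv layer."""
    h, w, c = a_style.shape

    def gram(x):
        flat = tf.reshape(x, (h * w, c))
        return tf.matmul(flat, flat, transpose_a=True)   # (C, C)

    gs, gg = gram(a_style), gram(a_generated)
    norm = tf.cast(2 * h * w * c, tf.float32) ** 2
    return tf.reduce_sum(tf.square(gs - gg)) / norm
```

The total style cost is then a weighted sum of this quantity over several layers of the pretrained network; identical feature maps give a cost of exactly zero.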
Found an amazing blog on style transfer; going through it. Cleared a few doubts about TensorFlow optimizers through Reddit.
Tried debugging the style transfer ipynb.
Understood neural style transfer and learnt more TensorFlow features. Implementing neural style transfer from scratch using the VGG-19 model pretrained on ImageNet.
Implemented neural style transfer in a Kaggle kernel. The model trains fine, but there is an error in computing the style cost. The model used is VGG-19 in TensorFlow. Working on the style cost function. Find the code here.
Attempted style transfer from scratch again. Still refining the model and fixing bugs. The model trains, but the content cost fails to decrease. Working in a Kaggle kernel.
Attended Week 3 of the Google ML Study Jam. Cleared a few doubts about style transfer.
Started writing a blog on Medium about transfer learning in Keras. Ran the style transfer ipynb on Kaggle and Colaboratory; the results are the same, and some errors still exist.
Went through tensorflow.org and implemented the MNIST classification tutorial on Google Colab. Learnt about the sparse categorical cross-entropy loss function. Going through the tensorflow.org tutorials sequentially.
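The key point about sparse categorical cross-entropy is worth a tiny sketch: it takes integer class labels directly, whereas plain categorical cross-entropy expects one-hot vectors; both compute the same -log(p) of the true class.

```python
# Sketch: sparse_categorical_crossentropy (integer labels) vs
# categorical_crossentropy (one-hot labels) give the same loss.
import numpy as np
import tensorflow as tf

probs = np.array([[0.1, 0.8, 0.1]], dtype=np.float32)  # predicted class probabilities

sparse = tf.keras.losses.sparse_categorical_crossentropy(
    np.array([1]), probs)                  # integer label: class 1
onehot = tf.keras.losses.categorical_crossentropy(
    np.array([[0., 1., 0.]]), probs)       # same label, one-hot encoded

# Both equal -log(0.8); sparse just skips the one-hot encoding step.
```

This is why the MNIST tutorial can pass the raw integer digit labels straight to `model.fit` when compiling with this loss.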
Learnt the clone-commit-push-pull-request cycle and contributed to two repos. Made my first open-source contribution.
Attended GDG DevFest Bangalore. Met a lot of people and networked. Learnt about Google Actions.
Read a blog on conv nets and revised what the different layers of a conv net actually learn. Wrote a blog post on transfer learning.
Finished the blog on transfer learning. Made the transfer-learning repo as a tutorial accompanying the blog.
Started learning reinforcement learning. Created a repo which contains my RL journey here.