DAYS OF DEEP LEARNING.

It was formerly 30 Days of Udacity, but I converted it to Days of Deep Learning.

DAY 1 : 26/09/2019

  1. I took the pledge #30daysofUdacity in my Deep Learning Nanodegree.
  2. Completed Lesson 1 of the deep learning nanodegree class.
  3. Revisited the mlcourse.ai lesson on Decision Trees and Random Forests. https://www.youtube.com/watch?v=H4XlBTPv5rQ&feature=youtu.be
  4. Watched Siraj Raval's video on Decision Trees and Random Forests for clarity https://www.youtube.com/watch?v=QHOazyP-YlM

WHAT I LEARNT

  • A Random Forest is a collection of Decision Trees
  • Random Forests are used for both classification and regression problems
  • Random Forests work well on small datasets
  • Broadened my knowledge of matrix multiplication and the matrix dot product
  • A single Decision Tree is prone to error and is unstable

DAY 2 : 27/09/2019

  1. I started Introduction to Neural Networks
  2. Broadened my knowledge of the Perceptron https://deepai.org/machine-learning-glossary-and-terms/perceptron
  3. Watched a video on the step function https://www.youtube.com/watch?v=tHwpj9b4zZo

WHAT I LEARNT

  • The heart of deep learning is the neural network
  • Neural networks have nodes, edges and layers
  • The Perceptron is an algorithm for binary classification. It consists of four main parts: input values, weights and a bias, a net sum, and an activation function
  • Another name for the Perceptron is Linear Binary Classifier
  • Default equation: output = Wx + b, where W = weights, x = input values, b = bias
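
The default equation above can be sketched as a tiny perceptron in NumPy (a minimal illustration of my own, not course code; the weights are hand-picked so it behaves as a logical AND):

```python
import numpy as np

def step(z):
    # Step activation: fires (1) when the net sum is non-negative
    return 1 if z >= 0 else 0

def perceptron(x, w, b):
    # Net sum = W.x + b, then the activation function
    return step(np.dot(w, x) + b)

# Hand-picked weights and bias so the perceptron acts as a logical AND
w = np.array([1.0, 1.0])
b = -1.5
print(perceptron(np.array([1, 1]), w, b))  # 1
print(perceptron(np.array([1, 0]), w, b))  # 0
```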

DAY 3 : 28/09/2019

  1. Read about the cost function equation
  2. I learnt the categories of models in Data Science
  3. Difference between local and global minimum https://statinfer.com/204-5-10-local-vs-global-minimum/

WHAT I LEARNT

  • The coefficient of a bias is 1
  • If the question asks for the probability of an outcome, use a Predictive Model
  • If the question asks to show relationships in a dataset, use a Descriptive Model
  • If the question requires a Yes or No answer, use a Classification Model
  • A matrix is a rectangular array of numbers; its size is written M x N
  • A vector is an N x 1 matrix
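
The matrix and vector definitions above, in NumPy terms (a toy example of my own, not from the course):

```python
import numpy as np

# A matrix is a rectangular array of numbers: here M is 2 x 3
M = np.array([[1, 2, 3],
              [4, 5, 6]])

# A vector is an N x 1 matrix: here v is 3 x 1
v = np.array([[1], [0], [2]])

# Matrix multiplication: (2 x 3) @ (3 x 1) -> (2 x 1)
product = M @ v
print(M.shape, v.shape, product.shape)  # (2, 3) (3, 1) (2, 1)
```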

DAY 4: 29/09/2019

  1. Lesson 1:12 - Non-Linear Regions
  2. Lesson 1:13 - Error Function
  3. Lesson 1:14 - Softmax
  4. Broadened my knowledge about softmax https://developers.google.com/machine-learning/crash-course/multi-class-neural-networks/softmax

WHAT I LEARNT

  • The error function tells us how close we are to the goal
  • The error function must be differentiable
  • The error function must be continuous, not discrete
  • Predictions are the answers we get from the algorithm
  • To make the step function continuous, we replace it with another activation function called the Sigmoid function
  • Softmax normalizes a list of numbers into probabilities
  • To be proficient in AI, you need to be proficient in Object-Oriented Programming
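
The two functions above, sketched in NumPy (my own minimal versions, not the course implementation):

```python
import numpy as np

def sigmoid(z):
    # Continuous, differentiable replacement for the step function
    return 1.0 / (1.0 + np.exp(-z))

def softmax(scores):
    # Normalizes arbitrary scores into probabilities that sum to 1
    exps = np.exp(scores - np.max(scores))  # subtract max for stability
    return exps / exps.sum()

print(sigmoid(0.0))                        # 0.5
print(softmax(np.array([2.0, 1.0, 0.1])))  # probabilities summing to 1
```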

DAY 5: 30/09/2019

  1. Completed Lesson 1
  2. Started Lesson 2
  3. Explanatory video on One-Hot Encoding https://www.youtube.com/watch?v=v_4KWmkwmsU
  4. Watched this image classifier video https://www.youtube.com/watch?v=cAICT4Al5Ow
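
A bare-bones sketch of one-hot encoding as covered in the video (the category names here are my own example, not from the course):

```python
categories = ["cat", "dog", "bird"]

def one_hot(label, categories):
    # Each label becomes a vector with a single 1 at its category's index
    vec = [0] * len(categories)
    vec[categories.index(label)] = 1
    return vec

print(one_hot("dog", categories))  # [0, 1, 0]
```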

DAY 6: 01/10/2019

  1. Completed Lesson 2 - Implementing Gradient Descent
  2. Started and Completed Lesson 3 - Training neural networks

DAY 7: 02/10/2019

  1. Trying to style transfer my picture, but it is not working. Trying to fix the error
  2. Started Lesson 4
  3. Read a little about Style Transfer

DAY 8: 03/10/2019

  1. Completed all subtopics in Lesson 2 - Neural Networks
  2. Viewed my first project – Predicting Bike share pattern

WHAT I LEARNT

  • Difference between Underfitting and Overfitting
  • What is Regularization?
  • L1 and L2 Regularization
  • Dropout is a solution to overfitting
  • Random Restart is used to solve the problem of local minima
  • Other activation functions besides Sigmoid: the Hyperbolic Tangent (tanh) function and the Rectified Linear Unit (ReLU) function
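
A sketch of how dropout counters overfitting: during training, each node's activation is randomly zeroed with some probability (this is the inverted-dropout style; the keep probability and toy activations are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, keep_prob=0.5):
    # Randomly zero activations so the network cannot rely on any single node
    mask = rng.random(activations.shape) < keep_prob
    # Scale survivors by 1/keep_prob so the expected activation is unchanged
    return activations * mask / keep_prob

a = np.ones(8)
print(dropout(a))  # roughly half the entries are 0.0, the rest are 2.0
```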

DAY 9: 04/10/2019

  1. Working on my project - Predicting Bike-sharing Patterns
  2. Trying to style transfer my images with the fast style transfer code
  3. Reading about how to build neural network with pytorch
    https://www.datahubbs.com/deep-learning-101-first-neural-network-with-pytorch/?fbclid=IwAR2_MxZ6aAgWKunezBjgCPn4mIE_tpflWLFZ0yMv-kBDsTD8KpwGlqRrYyU

WHAT I LEARNT

  • It is very interesting to see the concepts from the class implemented in code.
  • Learnt feedforward propagation and backpropagation
  • How to tune hyperparameters: learning rate, number of iterations, et cetera

DAY 10: 05/10/2019

  1. Working on my project - Predicting Bike-sharing Patterns
  2. Learning how to use Github professionally. Using https://www.udacity.com material
  3. Built a multivariable model using linear regression

DAY 11: 06/10/2019

  1. Working on my project - Predicting Bike-sharing Patterns
  2. Started Sentiment analysis videos

DAY 12: 07/10/2019

  1. Submitted my first project on Predicting Bike-sharing Patterns
  2. Continued lessons on Sentiment analysis

DAY 13: 08/10/2019

  1. I continued my course on sentiment analysis
  2. Dived into some documentation in Python https://docs.python.org/2/library/collections.html

DAY 14: 09/10/2019

  1. Solving the projects in Sentiment analysis videos
  2. Studying some python documentation

DAY 15: 10/10/2019

  1. Solving the projects in Sentiment analysis videos

DAY 16: 11/10/2019

  1. Watched some videos on Neural Network
  2. Read some Medium articles

DAY 17: 12/10/2019

  1. Attended a meetup where Linear regression and logistic regression were discussed
  2. Learnt about the difference in their equations
  3. Read some articles on neural network
  4. I discovered there are some similarities between AI and Robotics

WHAT I LEARNT

  • I discovered that if the linear-regression cost function is used for logistic regression, the error surface is no longer convex, so gradient descent can get trapped in a local minimum.

DAY 18: 13/10/2019

  1. Revisiting Bayes Rule in Introduction to Machine Learning Udacity's video
  2. Read article about it https://towardsdatascience.com/what-is-bayes-rule-bb6598d8a2fd

DAY 19: 14/10/2019

  1. It is all about Algebraic mathematics
  2. Worked on some Python code

WHAT I LEARNT

  • Impossibility is a Mirage - Brace up
  • Brushing up on Python Object-Oriented Programming

DAY 20: 15/10/2019

  1. Learning algebraic concepts. Thank you, Udacity, for the awesome material
  2. Revisiting sentiment analysis
  3. This article is enlightening https://medium.com/dsnet/chai-time-data-science-show-announcement-bfaaf38df219

Tomorrow is another day for good progress

WHAT I LEARNT

  • There is nothing as good as learning the basics
  • Avoid assumptions

DAY 21: 16/10/2019

  1. Sorted out the error in my sentiment analysis code
  2. Rounding up the lesson and moving on to Introduction to Deep Learning with Pytorch

DAY 22: 17/10/2019

  1. Going through Introduction to Deep Learning with Pytorch
  2. Read about fully connected network https://www.oreilly.com/library/view/tensorflow-for-deep/9781491980446/ch04.html

DAY 23: 18/10/2019

  1. Watching Introduction to Deep Learning with Pytorch videos
  2. Going through the codes too

DAY 24: 19/10/2019

  1. Revised Sentiment analysis project

DAY 25: 20/10/2019

  1. Revising Backpropagation and also reading some articles

DAY 26: 21/10/2019

  1. Learning the basics has been an amazing experience - essential mathematics topics in AI
  2. Read some articles about AI and Deep Learning

DAY 27: 22/10/2019

  1. Clarified my doubt on Linear regression and logistic regression https://ml-cheatsheet.readthedocs.io/en/latest/logistic_regression.html
  2. Revising Introduction to deep learning with Pytorch
  3. Watched a video about element-wise operation https://www.youtube.com/watch?v=2GPZlRVhQWY

WHAT I LEARNT

  • I also discovered that DREAMS are VALID, but HARD WORK AUTHENTICATES them.

DAY 28: 23/10/2019

  1. Solving Project 4 in Sentiment analysis
  2. Solved Network Architecture with Pytorch

DAY 29: 24/10/2019

  1. Watched transfer learning videos
  2. Solved some mathematics question
  3. Read some articles about Convolutional Neural Networks
  4. Read some articles about Recurrent Neural Networks

DAY 30: 25/10/2019

  1. Started Convolutional Neural Network
  2. Attended the Webinar by Sourena Yadegari (My Mentor)

DAY 31: 26/10/2019

  1. Continued with my CNN Videos
  2. Read some CNN articles

DAY 32: 27/10/2019

  1. Read an article on MLP, CNN and its application with Pytorch

DAY 33: 28/10/2019

  1. Started the video on MLPs vs CNN
  2. Learnt about Filters and Convolution layer
  3. Filters and Edges
  4. Frequency in Images as regards CNN

WHAT I LEARNT

  • MLPs flatten images into vectors, while CNNs can work with the image's matrix structure directly
  • CNNs group pixels into edges and patterns
  • Frequency in images, as regards CNNs, is analogous to frequency in sound waves
  • MLPs are fully connected while CNNs are sparsely (locally) connected, which makes the network less dense

DAY 34: 29/10/2019

  1. Learnt about Pooling layers
  2. Convolution layers in PyTorch
  3. Image Augmentation
  4. Visualizing CNN
  5. CNNs in Pytorch
  6. Augmentation using transfer learning

DAY 35: 30/10/2019

  1. Transfer Learning
  2. Weight Initialization
  3. Constant weight
  4. Normal Distribution and Random Uniform Distribution

WHAT I LEARNT

  • Using constant weights makes backpropagation fail, because it is not designed to deal with every weight being identical
  • Backpropagation is designed to look at how different weight values affect the training loss
  • The solution to the constant-weight problem is choosing random weights
  • To get random weights, use a uniform distribution
  • Weight initialization is about giving the model the best chance to train
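
A minimal sketch of random weight initialization from a uniform distribution. The layer sizes and the 1/sqrt(n) bound are a common rule of thumb and my own assumption, not the course's exact recipe:

```python
import numpy as np

rng = np.random.default_rng(42)
n_inputs, n_hidden = 784, 128

# Uniform initialization in [-1/sqrt(n), 1/sqrt(n)] breaks the symmetry
# that constant weights would create, so backprop sees distinct gradients
bound = 1.0 / np.sqrt(n_inputs)
weights = rng.uniform(-bound, bound, size=(n_inputs, n_hidden))

print(weights.shape)  # (784, 128)
```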

DAY 36: 01/11/2019

  1. Completed my videos on Weight Initialization

DAY 37: 02/11/2019

  1. Working on my second project.

DAY 38: 03/11/2019

  1. Started watching videos on Autoencoder
  2. Working on my second project

WHAT I LEARNT

  • An autoencoder is used to compress images without losing their content

DAY 39: 04/11/2019

  1. Read some articles about CNN and RNN
  2. Article on Pytorch and Neural Network https://medium.com/dair-ai/pytorch-1-2-introduction-guide-f6fa9bb7597c

DAY 40: 05/11/2019

  1. In-depth knowledge about Haar-cascade object detection in OpenCV https://docs.opencv.org/trunk/db/d28/tutorial_cascade_classifier.html
  2. OpenCV dataset https://github.com/opencv/opencv/tree/master/data/haarcascades

DAY 41: 06/11/2019

  1. Gradient Descent for Machine Learning https://machinelearningmastery.com/gradient-descent-for-machine-learning/
  2. More information about activation functions http://cs231n.github.io/neural-networks-1/#actfun https://www.datasciencecentral.com/profiles/blogs/deep-learning-advantages-of-relu-over-sigmoid-function-in-deep
  3. First research paper on Dropout https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
  4. Watching a video on Numpy https://youtu.be/QUT1VHiLmmI

WHAT I LEARNT

  • Stochastic gradient descent is used when we have a large dataset.
  • The sigmoid function has been the most used, but recently there has been a shift away from it because of its drawbacks.
  • The two drawbacks of sigmoid are: (1) sigmoid saturates and kills gradients; (2) its output is not zero-centred
  • Tanh is a scaled sigmoid neuron
  • Tanh non-linearity is mostly preferred to sigmoid non-linearity
  • NumPy arrays are faster than Python lists
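
The gradient descent idea from the article above, boiled down to one dimension on a toy loss of my own, L(w) = (w - 3)^2:

```python
# Gradient of L(w) = (w - 3)^2 is dL/dw = 2 * (w - 3)
def gradient(w):
    return 2.0 * (w - 3.0)

w = 0.0              # starting weight
learning_rate = 0.1  # step-size hyperparameter
for _ in range(100):
    w -= learning_rate * gradient(w)  # step against the gradient

print(round(w, 4))  # approaches the minimum at w = 3
```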

DAY 42: 07/11/2019

  1. Working on my project

What I Learnt

  • It might take longer than I expected

DAY 43: 08/11/2019

  1. Revising numpy and matrix libraries

WHAT I LEARNT

  • How to write lower- and upper-triangular matrices in code
  • Learnt how to use NumPy for my task
  • It is always impossible till you TRY. STAY HUNGRY

DAY 44: 09/11/2019

  1. Data Augmentation
  2. Difference between the model architecture
  3. What is Transfer Learning

WHAT I LEARNT

  • Data augmentation helps the model generalize well
  • The number in front of a model architecture's name indicates its number of layers. For example, VGG16 has 16 layers
  • The larger the number of layers, the longer the computation time, but the lower the error.
  • What is learnt from one dataset is transferred to my dataset. This is what we call TRANSFER LEARNING
  • Scale invariance is when the size of the object does not affect the prediction of the model
  • Rotational invariance is when the angular position of an object does not affect the prediction of the model
  • Translation invariance is when shifting the object to the left or right does not affect the prediction of the model
  • Data augmentation helps to avoid overfitting
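
One of the simplest augmentations, a horizontal flip, shown on a toy 2 x 3 "image" of my own (real pipelines would use library transforms instead):

```python
import numpy as np

image = np.array([[1, 2, 3],
                  [4, 5, 6]])

# Mirroring left-right creates a new training sample with the same label,
# nudging the model toward the invariances described above
flipped = np.fliplr(image)
print(flipped.tolist())  # [[3, 2, 1], [6, 5, 4]]
```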

DAY 45: 10/11/2019

  1. VGG Documentation https://arxiv.org/pdf/1409.1556.pdf https://github.com/jcjohnson/cnn-benchmarks
  2. Auto-encoder
  3. Linear Auto-encoder
  4. Linear Upsampling
  5. Convolutional Auto-encoder

DAY 46: 11/11/2019

  1. Working on my project

DAY 47: 12/11/2019

  1. Examples of spatial data
  2. Python string Method https://docs.python.org/3/library/stdtypes.html#string-methods

WHAT I LEARNT

  • Images are spatial data
  • A list is a mutable, ordered datatype, but a string is only ordered (it is immutable)

DAY 48: 13/11/2019

  1. Read an article on Autoencoder

DAY 49: 14/11/2019

  1. Attended a meet-up with my Udacity mentor
  2. Reviewed my code
  3. In-depth knowledge about control flow - conditional statements in Python
  4. Autoencoders
  5. Optimizers
  6. Shortcuts for navigating the Jupyter notebook

DAY 50: 15/11/2019

  1. Filled 60 days of Udacity

DAY 51: 16/11/2019

  1. I got my badge

DAY 52: 17/11/2019

  1. How to solve this error message - "image file is truncated (150 bytes not processed)". Use the code below:

     from PIL import ImageFile
     ImageFile.LOAD_TRUNCATED_IMAGES = True

DAY 53: 18/11/2019

  1. In-depth article on the VGG16 architecture https://neurohive.io/en/popular-networks/vgg16/

DAY 54: 19/11/2019

  1. Understanding Resnet, Alexnet, VGG, Inception
    https://cv-tricks.com/cnn/understand-resnet-alexnet-vgg-inception/
  2. Validation and Training Loss https://www.pyimagesearch.com/2019/10/14/why-is-my-validation-loss-lower-than-my-training-loss/

DAY 54: 20/11/2019

  1. Read a research paper

WHAT I LEARNT

  • RNNs deal with sequential data, while CNNs deal with spatial data such as images

DAY 55: 21/11/2019

  1. Learnt how to upload images to Udacity workspace

DAY 56: 22/11/2019

  1. Read articles from MIT Technology Review. Always insightful
  2. Documentation in Github https://help.github.com/en/github/writing-on-github/getting-started-with-writing-and-formatting-on-github

DAY 57: 23/11/2019

  1. Read some article on RNN

DAY 58: 24/11/2019

Back Propagation in RNN

DAY 59: 25/11/2019

  1. Read an article on hyperparameter tunings
  2. Finally done with my second deep learning project

WHAT I LEARNT

  • My model is biased, so the project reviewer advised me to work on data augmentation.

DAY 60: 26/11/2019

  1. Article on Natural Language Processing (NLP).

DAY 61: 27/11/2019

  1. Difference between RNN and LSTM.

DAY 62: 28/11/2019

Long Short Term Memory

DAY 63: 29/11/2019

Preparing for Project 3 - Movie TV Predicting

DAY 64: 30/11/2019

RNN and LSTM

DAY 65: 01/12/2019

Implementation of RNN and LSTM

WHAT I LEARNT

  • Matrix operations make training more efficient
  • Batch size corresponds to the number of sequences processed at once

DAY 66: 02/12/2019

  1. What are Tokenizers in RNN Implementation

DAY 67: 03/12/2019

  1. Number of hidden layers and Units in RNN. https://cs231n.github.io/neural-networks-1/
  2. RNN Hyperparameters
  3. Word Embeddings

DAY 68: 04/12/2019

  1. Sentiment in RNN

DAY 69: 05/12/2019

  1. Zoom call with Sourena on Word Embedding, RNN and Code explanation
  2. He explained Python OOP concepts and shared some links https://www.reddit.com/r/learnpython/comments/5jgbiq/python_oop_series/ https://www.thedigitalcatonline.com/blog/2014/08/20/python-3-oop-part-1-objects-and-types/

DAY 70: 06/12/2019

  1. Data Augmentation https://medium.com/@thimblot/data-augmentation-boost-your-image-dataset-with-few-lines-of-python-155c2dc1baec

DAY 71: 07/12/2019

  1. Introduction to LSTM
  2. LSTM vs RNN
  3. LSTM Architecture
  4. Learn, Forget and Remember Gate

DAY 72: 08/12/2019

  1. Hyperparameters

DAY 73: 09/12/2019

  1. Learnt how to mount google drive
  2. Using Colab for my third Nanodegree Project

WHAT I LEARNT

  • Modified my code

DAY 74: 10/12/2019

  1. Working on my RNN Project

DAY 75: 11/12/2019

  1. Working on my third project

WHAT I LEARNT

  • Preprocessing the data in RNN is the hardest part of the project

DAY 76: 12/12/2019

  1. Started watching GANs videos
  2. Introduction to GANs
  3. Application of GANs
  4. Generator and Discriminator

WHAT I LEARNT

  • GANs are trained by running two optimization algorithms simultaneously
  • Game theory is used to understand GANs mathematically
  • Both the discriminator and the generator have at least one hidden layer

DAY 77: 13/12/2019

  1. Wrapping up my project
  2. Submitted my project for review

DAY 78: 14/12/2019

  1. Deepmind reinforcement learning framework https://towardsdatascience.com/deepmind-quietly-open-sourced-three-new-impressive-reinforcement-learning-frameworks-f99443910b16
  2. Videos on GANs https://www.youtube.com/watch?v=Sw9r8CL98N0 https://www.youtube.com/watch?v=9JpdAg6uMXs
  3. Article on word embedding https://lilianweng.github.io/lil-log/2017/10/15/learning-word-embedding.html

DAY 79: 15/12/2019

  1. Support Vector Machine

WHAT I LEARNT

  • SVM is robust to outliers

DAY 80: 16/12/2019

  1. https://www.google.com/amp/s/venturebeat.com/2019/12/10/deepminds-dreamer-ai-uses-the-past-to-predict-the-future/amp/
  2. https://venturebeat.com/2019/12/13/deepmind-proposes-novel-way-to-train-safe-reinforcement-learning-ai/

DAY 81: 17/12/2019

  1. Sign language recognition in Pytorch. https://towardsdatascience.com/sign-language-recognition-in-pytorch-5d72688f98b7
  2. https://www.deeplearningwizard.com/deep_learning/practical_pytorch/pytorch_convolutional_neuralnetwork/

DAY 82: 18/12/2019

  1. Video in statistics https://www.youtube.com/channel/UCtYLUTtgS3k1Fg4y5tAhLbw

DAY 83: 19/12/2019

https://apple.news/AoRVlsNoTRPeBcG0mESVNSw

DAY 84: 20/12/2019

Working on project 4- Generate Faces (GANs)

DAY 85: 21/12/2019

Reading some GANs research papers

DAY 86: 22/12/2019

Submitted my Github account for review

DAY 87: 23/12/2019

  1. Machine Learning Workflow i. Explore and Process data
    • Retrieve
    • Clean and Explore
    • Prepare/Transform

DAY 88: 24/12/2019

  1. Machine Learning Workflow ii. Modeling
    • Develop and Train model
    • Validate and evaluate model

WHAT I LEARNT

  • Validation sets are used for model tuning and selection

DAY 89: 25/12/2019

  1. Machine Learning Workflow iii. Deployment
    • Deploy to production
    • Monitor and update the model and data

DAY 90: 26/12/2019

Resources:
  i. SSD - https://arxiv.org/abs/1512.02325
  ii. YOLO - https://arxiv.org/abs/1506.02640
  iii. Faster R-CNN - https://arxiv.org/abs/1506.01497
  iv. MobileNet - https://arxiv.org/abs/1704.04861
  v. ResNet - https://arxiv.org/abs/1512.03385
  vi. Inception - https://arxiv.org/pdf/1409.4842.pdf

DAY 91: 27/12/2019

  1. Started Deployment of Sentiment Analysis Lectures

DAY 92: 28/12/2019

  1. Learning how to use Amazon sagemaker.

WHAT I LEARNT

  • Deploying a model means making it available through an endpoint

DAY 93: 29/12/2019

  1. Reinforcement Learning https://towardsdatascience.com/machine-learning-part-4-reinforcement-learning-43070cbd83ab

DAY 94: 30/12/2019

  1. Completed Lesson 2 of Deployment of Sentiment Analysis

DAY 95: 31/12/2019

  1. Started Lesson 3 of Deployment of Sentiment Analysis

DAY 96: 01/01/2020

  1. Revisited the previous lectures
  2. Read some research papers

DAY 97: 02/01/2020

Read some research papers

DAY 98: 03/01/2020

Did some research on how to write better codes

DAY 99: 04/01/2020

  1. Deployment with Sagemaker
  2. Navigating Sagemaker interface

DAY 100: 05/01/2020

  1. Scraping some website for data
  2. Deployment platform

DAY 101: 06/01/2020

  1. Cloud Computing
  2. XGBoost material

DAY 102: 07/01/2020

  1. Setting up my AWS Account

DAY 103: 08/01/2020

  1. What are instances?

WHAT I LEARNT

  • Instances are also known as Virtual Machines

DAY 104: 09/01/2020

  1. Preparing for deployment with Sagemaker.

DAY 105: 10/01/2020

  1. Wrapped up deployment with sagemaker
  2. Started working on my last Nanodegree project

DAY 106: 11/01/2020

What is a corpus?

DAY 107: 12/01/2020

  1. What is NLP
  2. Application of NLP

DAY 108: 13/01/2020

  1. Techniques to solving AI project

DAY 109: 14/01/2020

  1. https://github.com/nscalo/swadel-monolith/tree/master/app/online/SpeechRecognition
  2. https://github.com/nscalo/swadel-monolith/tree/master/app/data/training/HandWrittenDigitsDataGAN
  3. https://github.com/HartP97/Lane-Detection

DAY 110: 15/01/2020

  1. Researching about deploying a deep learning project using Sagemaker

DAY 111: 16/01/2020

  1. Speech Emotion Recognition with Convolutional Neural Network https://towardsdatascience.com/speech-emotion-recognition-with-convolution-neural-network-1e6bb7130ce3

DAY 112: 17/01/2020

Information about Amazon AWS:
  i. https://aws.amazon.com/sagemaker/pricing/
  ii. https://aws.amazon.com/sagemaker/faqs/
  iii. https://www.youtube.com/user/AmazonWebServices

DAY 113: 18/01/2020

  1. Autonomous Learning library https://medium.com/syncedreview/autonomous-learning-library-simplifies-intelligent-agent-creation-c7ec60576a3e

DAY 114: 19/01/2020

Machine Learning solutions with code: https://paperswithcode.com/

DAY 115: 20/01/2020

Improving my Python skills

DAY 116: 21/01/2020

Trying to set up AWS account

DAY 117: 22/01/2020

Doing some study on the AWS documentation

DAY 118: 23/01/2020

Preprocessing data for a sentiment analysis project

DAY 119: 24/01/2020

Edited my data with NLTK and BeautifulSoup

DAY 120: 31/01/2020

Completed my Udacity Deep Learning course

DAY 121: 01/02/2020

https://towardsdatascience.com/web-scraping-5649074f3ead

DAY 122: 02/02/2020

  1. Object-oriented Reinforcement Learning https://towardsdatascience.com/object-oriented-reinforcement-learning-95c284427ea

DAY 123: 03/02/2020

  1. Studying the Mathematics of Machine Learning

DAY 124: 04/02/2020

Studying Linear Algebra

DAY 125: 05/02/2020

Studying Vectors

DAY 126: 06/02/2020

Studying Logistic Regression

DAY 127: 07/02/2020

https://medium.com/datadriveninvestor/deep-learning-for-image-segmentation-d10d19131113

DAY 128: 08/02/2020

  1. https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/
  2. https://en.wikipedia.org/wiki/U-Net
  3. https://www.marktechpost.com/free-resources/

DAY 129: 09/02/2020

  1. https://www.analyticsvidhya.com/blog/2019/02/building-crowd-counting-model-python/

DAY 130: 10/02/2020

  1. https://medium.com/analytics-vidhya/beginners-guide-to-object-detection-algorithms-6620fb31c375

DAY 131: 11/02/2020

  1. https://medium.com/deepquestai/object-detection-training-preparing-your-custom-dataset-6248679f0d1d
  2. https://medium.com/datadriveninvestor/how-to-create-custom-coco-data-set-for-object-detection-96ec91958f36
  3. http://cocodataset.org/#detection-2019

DAY 132: 12/02/2020

  1. https://towardsdatascience.com/image-pre-processing-c1aec0be3edf

DAY 133: 13/02/2020

  1. https://convertio.co/html-jpeg/

DAY 134: 14/02/2020

https://medium.com/tektorch-ai/best-image-labeling-tools-for-computer-vision-393e256be0a0

DAY 135: 15/02/2020

https://nanonets.com/blog/how-to-do-semantic-segmentation-using-deep-learning/

DAY 136: 16/02/2020

https://www.mediterranee-infection.com/hydroxychloroquine-and-azithromycin-as-a-treatment-of-covid-19/ https://nanonets.com/blog/data-augmentation-how-to-use-deep-learning-when-you-have-limited-data-part-2/

DAY 137: 17/02/2020

https://hackernoon.com/the-best-image-annotation-platforms-for-computer-vision-an-honest-review-of-each-dac7f565fea

DAY 138: 18/02/2020

https://cs50.harvard.edu/ai/ https://deepsense.ai/region-of-interest-pooling-explained/ https://medium.com/@jonathan_hui/image-segmentation-with-mask-r-cnn-ebe6d793272

DAY 139: 19/02/2020

https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html

DAY 140: 20/02/2020

https://github.com/VirajDeshwal/COVID-19?files=1

DAY 141: 21/02/2020

https://medium.com/anolytics/how-to-label-data-for-semantic-segmentation-deep-learning-models-907a996f95f7

DAY 142: 22/02/2020

Annotation Tools video

  1. https://www.youtube.com/watch?time_continue=20&v=fXalzNpYLGg&feature=emb_logo
  2. https://www.youtube.com/watch?time_continue=6&v=VyUwHbqJ26g&feature=emb_logo
  3. https://www.youtube.com/watch?v=iTvG3G84Ez4
  4. https://hackernoon.com/the-best-image-annotation-platforms-for-computer-vision-an-honest-review-of-each-dac7f565fea

DAY 143: 23/02/2020

  1. Data Augmentation for Deep Learning
  2. https://www.arunponnusamy.com/preparing-custom-dataset-for-training-yolo-object-detector.html

DAY 144: 24/02/2020

  1. https://www.arunponnusamy.com/preparing-custom-dataset-for-training-yolo-object-detector.html

DAY 145: 25/02/2020

  1. https://www.analyticsvidhya.com/blog/2019/07/computer-vision-implementing-mask-r-cnn-image-segmentation/

DAY 146: 24/04/2020

Machine Learning videos

DAY 147: 07/05/2020

On Seeing Stuff: The Perception of Materials by Humans and Machines

DAY 148: 13/05/2020

https://www.pyimagesearch.com/2016/10/31/detecting-multiple-bright-spots-in-an-image-with-python-and-opencv/

DAY 149: 20/05/2020

To copy a website or webapp

DAY 150: 01/06/2020

  1. https://www.programiz.com/python-programming/modules
  2. https://www.w3schools.com/python/python_modules.asp
  3. https://python-packaging-tutorial.readthedocs.io/en/latest/setup_py.html

DAY 151: 06/07/2020

Building a segmentation model from scratch using Deep Learning

DAY 152: 12/07/2020

  • Learning rate
  • Batch size
  • Number of Workers

DAY 153: 11/08/2020

How to apply continual learning to your machine learning models

DAY 154: 21/08/2020

Fast AI

DAY 155: 01/02/2021

  1. What’s New In Gartner’s Hype Cycle For Emerging Technologies, 2020
  2. secure-data-science-reference-architecture (GitHub)