People say that nothing develops and teaches you like getting your hands dirty. This repository contains small projects, mostly related to Deep Learning but also to Data Science in general. The subjects are closely linked with articles I publish on Medium and are intended to complement those blog posts. For me it is a way to document my learning process, but also to help others understand neural network related issues. I hope that the content of the repository will turn out to be interesting and, above all, useful. I encourage you both to read my posts and to check how the code works in action.
```bash
# clone repository
git clone https://github.com/SkalskiP/ILearnDeepLearning.py.git

# navigate to main directory
cd ILearnDeepLearning.py

# set up and activate python environment
python3 -m venv .env
source .env/bin/activate

# install all required packages
pip install -r requirements.txt
```
This project is mainly focused on visualizing quite complex issues related to gradient descent, activation functions, and classification boundaries while training the model. The code complements the issues described in more detail in the article. Here are some of the visualizations that have been created.
Figure 1. Classification boundaries plotted at every iteration of training the Keras model.
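Frames like the ones behind Figure 1 can be produced by redrawing the decision boundary as training progresses. Below is a minimal sketch of that idea, not the repository's actual code: a Keras callback that saves a boundary plot at the end of each epoch. The `make_moons` dataset, the tiny architecture, and the `frames/` output directory are all illustrative assumptions.

```python
import os

import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_moons
from tensorflow import keras

X, y = make_moons(n_samples=500, noise=0.25, random_state=42)

model = keras.Sequential([
    keras.layers.Dense(25, activation="relu", input_shape=(2,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

class BoundaryFrames(keras.callbacks.Callback):
    """Save a decision-boundary plot at the end of every epoch."""

    def on_epoch_end(self, epoch, logs=None):
        # Evaluate the model on a dense grid covering the data.
        xx, yy = np.meshgrid(np.linspace(-2, 3, 200), np.linspace(-2, 2, 200))
        grid = np.c_[xx.ravel(), yy.ravel()]
        zz = self.model.predict(grid, verbose=0).reshape(xx.shape)

        # Draw the boundary and the training points, then save the frame.
        plt.contourf(xx, yy, zz, levels=25, cmap="RdBu", alpha=0.7)
        plt.scatter(X[:, 0], X[:, 1], c=y, cmap="RdBu", edgecolors="k", s=10)
        os.makedirs("frames", exist_ok=True)
        plt.savefig(f"frames/epoch_{epoch:04d}.png")
        plt.clf()

model.fit(X, y, epochs=50, verbose=0, callbacks=[BoundaryFrames()])
```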
Finally, the frames were combined to create an animation.
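Assuming the frames were saved as numbered PNG files as in the sketch above, one possible way to stitch them together is with `imageio` (again an assumption, not necessarily the method used here):

```python
import glob

import imageio.v2 as imageio

# Read the frames in epoch order and write them out as an animated GIF.
frames = [imageio.imread(path) for path in sorted(glob.glob("frames/*.png"))]
imageio.mimsave("training.gif", frames, duration=0.1)
```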
Figure 2. Visualization of the gradient descent.
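For the gradient descent visualization itself, the core idea is to record the optimizer's trajectory and draw it over the loss surface. Here is a self-contained sketch using an illustrative bowl-shaped function f(x, y) = x² + 3y², not anything taken from the article:

```python
import matplotlib.pyplot as plt
import numpy as np

def grad(p):
    # Analytic gradient of f(x, y) = x^2 + 3y^2.
    return np.array([2 * p[0], 6 * p[1]])

point, lr, path = np.array([-1.8, 1.6]), 0.05, []
for _ in range(40):
    path.append(point.copy())
    point -= lr * grad(point)  # the gradient descent update step
path = np.array(path)

# Contours of the function with the recorded trajectory on top.
xx, yy = np.meshgrid(np.linspace(-2, 2, 100), np.linspace(-2, 2, 100))
plt.contour(xx, yy, xx**2 + 3 * yy**2, levels=20)
plt.plot(path[:, 0], path[:, 1], "o-", color="red", markersize=3)
plt.savefig("gradient_descent.png")
```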
After a theoretical introduction, the time has come for a practical implementation of a neural network using NumPy. In this notebook you will find the full source code and a comparison of the performance of this basic implementation with a model created in Keras. The related article offers a broader commentary that explains the order and purpose of the functions performed.
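To give a flavour of what such a from-scratch implementation involves, here is a heavily condensed sketch: one hidden layer, a sigmoid output, and plain batch gradient descent. The layer sizes, toy data, and learning rate are my assumptions; the notebook's actual implementation is more general.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR-like dataset: label is 1 when the two coordinates share a sign.
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

# Parameters of a 2 -> 8 -> 1 architecture.
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate for full-batch gradient descent
for epoch in range(2000):
    # Forward pass.
    A1 = np.tanh(X @ W1 + b1)
    A2 = sigmoid(A1 @ W2 + b2)

    # Backward pass; binary cross-entropy + sigmoid gives dL/dZ2 = A2 - y.
    dZ2 = (A2 - y) / len(X)
    dW2, db2 = A1.T @ dZ2, dZ2.sum(axis=0, keepdims=True)
    dZ1 = (dZ2 @ W2.T) * (1.0 - A1 ** 2)  # tanh'(z) = 1 - tanh(z)^2
    dW1, db1 = X.T @ dZ1, dZ1.sum(axis=0, keepdims=True)

    # Gradient descent parameter update.
    W1, b1 = W1 - lr * dW1, b1 - lr * db1
    W2, b2 = W2 - lr * dW2, b2 - lr * db2

print("training accuracy:", ((A2 > 0.5) == y).mean())
```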
Figure 3. Visualisation of the classification boundaries achieved with the simple NumPy model.
This time I focused on the analysis of the reasons for overfitting and ways to prevent it. I ran simulations of neural network regularisation for different lambda coefficients, analysing the change of values in the weight matrix. Take a look at the visualizations that were created in the process.
Figure 4. Classification boundaries created by: top right corner - linear regression;
bottom left corner - neural network; bottom right corner - neural network with regularisation
Figure 5. Change of accuracy values in subsequent epochs during neural network learning.
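To sketch the idea behind the experiment (with assumed data, architecture, and lambda values; the repository's own code may differ), one can train the same Keras model under different L2 coefficients and compare the resulting weight magnitudes:

```python
import numpy as np
from sklearn.datasets import make_moons
from tensorflow import keras
from tensorflow.keras import regularizers

X, y = make_moons(n_samples=500, noise=0.3, random_state=42)

for lmbd in [0.0, 0.001, 0.01, 0.1]:
    model = keras.Sequential([
        keras.layers.Dense(50, activation="relu", input_shape=(2,),
                           kernel_regularizer=regularizers.l2(lmbd)),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit(X, y, epochs=100, verbose=0)

    # Larger lambdas should shrink the weight matrix toward zero.
    weights = model.layers[0].get_weights()[0]
    print(f"lambda={lmbd}: mean |w| = {np.abs(weights).mean():.4f}")
```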
Both in my articles and in my projects I try to create interesting visualizations, which very often allow me to communicate my ideas much more effectively. I decided to create a short tutorial showing how to easily create animated visualizations using Matplotlib. I also encourage you to read my post, where I described, among other things, how to create a visualization of the neural network learning process.
Figure 6. Lorenz Attractor created using the Matplotlib animation API.
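As a taste of what the Matplotlib animation API makes possible, here is a minimal, self-contained sketch of an animated Lorenz attractor; the classic parameter values and the simple Euler integration are standard choices, not necessarily those used in the tutorial.

```python
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.animation import FuncAnimation

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Right-hand side of the Lorenz system with the classic parameters.
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# Integrate the system with a simple Euler scheme.
dt, steps = 0.01, 5000
trajectory = np.empty((steps, 3))
trajectory[0] = [1.0, 1.0, 1.0]
for i in range(1, steps):
    trajectory[i] = trajectory[i - 1] + dt * lorenz(trajectory[i - 1])

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
line, = ax.plot([], [], [], lw=0.5)
ax.set_xlim(-25, 25); ax.set_ylim(-30, 30); ax.set_zlim(0, 55)

def update(frame):
    # Reveal the trajectory progressively, 50 integration steps per frame.
    end = frame * 50
    line.set_data(trajectory[:end, 0], trajectory[:end, 1])
    line.set_3d_properties(trajectory[:end, 2])
    return line,

anim = FuncAnimation(fig, update, frames=steps // 50, interval=30, blit=False)
anim.save("lorenz.gif", writer="pillow")
```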
This project is licensed under the MIT License - see the LICENSE.md file for details.
This is a place where I collect links to interesting articles and papers that I hope will become the basis for my future projects.
- Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
- Sequence to Sequence Learning with Neural Networks
- Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
- BLEU: a Method for Automatic Evaluation of Machine Translation
- Neural Machine Translation by Jointly Learning to Align and Translate
- A (Long) Peek into Reinforcement Learning