This repository contains my own solutions to the coding assignments from Andrew Ng's deep learning courses on Coursera. In addition, the original .ipynb files and data are included in the corresponding folders.
If you find something wrong, please feel free to contact me: jeff.xinsc@gmail.com.
If this repository is helpful to you, feel free to star or fork it.
【Demo1】
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of "broadcasting"
- Be able to vectorize code
- Build the general architecture of a learning algorithm, including:
  - Initializing parameters
  - Calculating the cost function and its gradient
  - Using an optimization algorithm (gradient descent)
- Gather all three functions above into a main model function, in the right order
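As a rough numpy sketch of how the pieces above fit together (not the course's reference code; the helper names `sigmoid`, `propagate`, and `model` are my own illustrative choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def propagate(w, b, X, Y):
    """Cost and gradients for logistic regression.
    X: (n_features, m), Y: (1, m) of 0/1 labels."""
    m = X.shape[1]
    A = sigmoid(w.T @ X + b)                       # vectorized forward pass
    cost = -np.mean(Y * np.log(A) + (1 - Y) * np.log(1 - A))
    dw = (X @ (A - Y).T) / m                       # gradient w.r.t. weights
    db = np.sum(A - Y) / m                         # gradient w.r.t. bias
    return cost, dw, db

def model(X, Y, num_iterations=1000, learning_rate=0.1):
    """Initialize, then repeat cost/gradient + gradient-descent updates."""
    w = np.zeros((X.shape[0], 1))                  # parameter initialization
    b = 0.0
    for _ in range(num_iterations):
        _, dw, db = propagate(w, b, X, Y)
        w -= learning_rate * dw                    # gradient descent step
        b -= learning_rate * db
    return w, b
```

Note that the forward pass touches all m examples at once through matrix operations, with no explicit loop over examples.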
【Demo2】
📌 Assignment 3: Planar data classification with one hidden layer
- Implement a 2-class classification neural network with a single hidden layer
- Use units with a non-linear activation function, such as tanh
- Compute the cross entropy loss
- Implement forward and backward propagation
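A minimal numpy sketch of the forward and backward passes for such a network (the parameter layout and function names are my own illustrative choices, not the assignment's exact code):

```python
import numpy as np

def forward(X, params):
    """Forward pass: tanh hidden layer, sigmoid output for 2 classes."""
    Z1 = params["W1"] @ X + params["b1"]
    A1 = np.tanh(Z1)                          # non-linear hidden activation
    Z2 = params["W2"] @ A1 + params["b2"]
    A2 = 1.0 / (1.0 + np.exp(-Z2))            # sigmoid output
    return A1, A2

def cross_entropy(A2, Y):
    """Cross-entropy loss averaged over the m examples."""
    m = Y.shape[1]
    return -np.sum(Y * np.log(A2) + (1 - Y) * np.log(1 - A2)) / m

def backward(X, Y, A1, A2, params):
    """Backprop through the sigmoid output and the tanh hidden layer."""
    m = X.shape[1]
    dZ2 = A2 - Y
    dW2 = dZ2 @ A1.T / m
    db2 = np.sum(dZ2, axis=1, keepdims=True) / m
    dZ1 = (params["W2"].T @ dZ2) * (1 - A1 ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = dZ1 @ X.T / m
    db1 = np.sum(dZ1, axis=1, keepdims=True) / m
    return {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2}
```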
【Demo3】
- Use non-linear units like ReLU to improve your model
- Build a deeper neural network (with more than 1 hidden layer)
- Implement an easy-to-use neural network class
- Build and apply a deep neural network to supervised learning
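One way such an easy-to-use class might look in numpy (an illustrative sketch of the forward pass only, not the assignment's actual class):

```python
import numpy as np

def relu(Z):
    return np.maximum(0, Z)

class DeepNet:
    """Minimal L-layer feed-forward net: ReLU hidden layers, sigmoid output."""
    def __init__(self, layer_dims, seed=0):
        rng = np.random.default_rng(seed)
        self.params = {}
        for l in range(1, len(layer_dims)):
            self.params[f"W{l}"] = rng.standard_normal(
                (layer_dims[l], layer_dims[l - 1])) * 0.01
            self.params[f"b{l}"] = np.zeros((layer_dims[l], 1))
        self.L = len(layer_dims) - 1              # number of weight layers

    def forward(self, X):
        A = X
        for l in range(1, self.L):                # ReLU for hidden layers
            A = relu(self.params[f"W{l}"] @ A + self.params[f"b{l}"])
        ZL = self.params[f"W{self.L}"] @ A + self.params[f"b{self.L}"]
        return 1.0 / (1.0 + np.exp(-ZL))          # sigmoid output layer
```

`DeepNet([4, 5, 3, 1])`, for example, builds a net with two hidden layers of 5 and 3 units.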
【Demo4】
- Training your neural network requires specifying an initial value of the weights
- Choose the initialization for a new neural network
- A well-chosen initialization can:
  - Speed up the convergence of gradient descent
  - Increase the odds of gradient descent converging to a lower training (and generalization) error
- Use regularization in your deep learning models
- Implement and use gradient checking
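Gradient checking compares the analytic gradient to a centered finite-difference estimate. A numpy sketch, with He initialization shown as one example of a well-chosen scheme (both helpers are illustrative):

```python
import numpy as np

def he_init(n_out, n_in, rng):
    """He initialization: variance scaled to 2 / fan-in, suited to ReLU."""
    return rng.standard_normal((n_out, n_in)) * np.sqrt(2.0 / n_in)

def grad_check(f, grad_f, theta, eps=1e-7):
    """Relative difference between the analytic gradient grad_f(theta) and
    a centered finite-difference estimate of f's gradient; a value around
    1e-7 or smaller suggests backprop is implemented correctly."""
    num_grad = np.zeros_like(theta)
    for i in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += eps
        minus[i] -= eps
        num_grad[i] = (f(plus) - f(minus)) / (2 * eps)
    ana_grad = grad_f(theta)
    return (np.linalg.norm(num_grad - ana_grad)
            / (np.linalg.norm(num_grad) + np.linalg.norm(ana_grad)))
```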
【Demo5】
- Understand the intuition behind Adam and RMSProp
- Recognize the importance of mini-batch gradient descent
- Learn the effects of momentum on the overall performance of your model
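One Adam update combines a momentum-style first moment with an RMSProp-style second moment, both bias-corrected. A minimal numpy sketch (hyperparameter defaults follow common convention, not necessarily the assignment's):

```python
import numpy as np

def adam_step(w, grad, state, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for parameter w given its gradient.
    state holds the running moments m, v and the step counter t."""
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad        # momentum
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2   # RMSProp
    m_hat = state["m"] / (1 - beta1 ** state["t"])              # bias correction
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)
```

With mini-batch gradient descent, `grad` would be computed on one mini-batch per call rather than the full training set.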
【Demo6】
In this assignment, you will learn to do the following in TensorFlow:
- Initialize variables
- Start your own session
- Train algorithms
- Implement a Neural Network
【Demo7】
In this assignment, you will:
- Implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation.
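The core of the CONV and POOL forward passes can be sketched with plain numpy loops (NHWC layout and "valid" padding are my illustrative assumptions):

```python
import numpy as np

def conv_single_step(a_slice, W, b):
    """One filter on one slice: elementwise product, sum, plus bias."""
    return np.sum(a_slice * W) + b

def conv_forward(A_prev, W, b, stride=1):
    """Valid convolution. A_prev: (m, H, W, C_in); W: (f, f, C_in, C_out);
    b: (1, 1, 1, C_out)."""
    m, H, Wd, _ = A_prev.shape
    f, _, _, C_out = W.shape
    H_out = (H - f) // stride + 1
    W_out = (Wd - f) // stride + 1
    Z = np.zeros((m, H_out, W_out, C_out))
    for i in range(m):
        for h in range(H_out):
            for w_ in range(W_out):
                for c in range(C_out):
                    hs, ws = h * stride, w_ * stride
                    Z[i, h, w_, c] = conv_single_step(
                        A_prev[i, hs:hs + f, ws:ws + f, :],
                        W[..., c], b[0, 0, 0, c])
    return Z

def max_pool_forward(A_prev, f=2, stride=2):
    """Max pooling over each channel independently."""
    m, H, Wd, C = A_prev.shape
    H_out = (H - f) // stride + 1
    W_out = (Wd - f) // stride + 1
    A = np.zeros((m, H_out, W_out, C))
    for i in range(m):
        for h in range(H_out):
            for w_ in range(W_out):
                for c in range(C):
                    hs, ws = h * stride, w_ * stride
                    A[i, h, w_, c] = np.max(A_prev[i, hs:hs + f, ws:ws + f, c])
    return A
```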
In this assignment, you will:
- Implement helper functions that you will use when implementing a TensorFlow model.
- Implement a fully functioning ConvNet using TensorFlow.
After this assignment you will be able to:
- Build and train a ConvNet in TensorFlow for a classification problem.
【Demo8】
In this assignment, you will:
- Learn to use Keras, a high-level neural networks API (programming framework), written in Python and capable of running on top of several lower-level frameworks including TensorFlow and CNTK.
- See how you can build a deep learning algorithm in a couple of hours.
In this assignment, you will:
- Implement the basic building blocks of ResNets.
- Put together these building blocks to implement and train a state-of-the-art neural network for image classification.
"ResNet50.h5" is too large (>100MB) to push to github.If you need this file, please email me.
【Demo9】
In this assignment, you will:
- Learn about object detection using the very powerful YOLO model.
- Use object detection on a car detection dataset.
- Deal with bounding boxes.
"yolo.h5" is too large (>100MB) to push to github.If you need this file, please email me.
【Demo10】
In this assignment, you will:
- Implement the triplet loss function.
- Use a pretrained model to map face images into 128-dimensional encodings.
- Use these encodings to perform face verification and face recognition.
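The triplet loss can be sketched in numpy as follows (the batch layout with one encoding per row is my illustrative choice):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Triplet loss over a batch of encodings (one encoding per row):
    pull anchor toward positive and push it at least margin alpha
    farther from negative, in squared Euclidean distance."""
    pos_dist = np.sum((anchor - positive) ** 2, axis=1)
    neg_dist = np.sum((anchor - negative) ** 2, axis=1)
    return np.sum(np.maximum(pos_dist - neg_dist + alpha, 0.0))
```

Once a triplet already satisfies the margin, the `maximum` with zero means it contributes nothing, so training focuses on hard examples.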
In this assignment, you will:
- Implement the neural style transfer algorithm.
- Generate novel artistic images using your algorithm.
"imagenet-vgg-verydeep-19.mat" is too large (>100MB) to push to github.If you need this file, please email me.
【Demo11】
In this assignment, you will:
- Implement your first Recurrent Neural Network in numpy.
In this assignment, you will learn:
- How to store text data for processing using an RNN.
- How to synthesize data by sampling predictions at each time step and passing them to the next RNN cell.
- How to build a character-level text generation recurrent neural network.
- Why clipping the gradients is important.
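Gradient clipping is a one-liner per parameter; a numpy sketch (the dictionary-of-gradients layout is illustrative):

```python
import numpy as np

def clip_gradients(gradients, max_value=5.0):
    """Clip every gradient elementwise to [-max_value, max_value] so one
    large step cannot blow up RNN training (exploding gradients)."""
    return {name: np.clip(g, -max_value, max_value)
            for name, g in gradients.items()}
```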
In this assignment, you will:
- Apply an LSTM to music generation.
- Generate your own jazz music with deep learning.
【Demo12】
After this assignment you will be able to:
- Load pre-trained word vectors, and measure similarity using cosine similarity.
- Use word embeddings to solve word analogy problems such as Man is to Woman as King is to __.
- Modify word embeddings to reduce their gender bias.
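Cosine similarity and the analogy task can be sketched in numpy (the `analogy` helper and its brute-force search over a small embedding dictionary are illustrative, not the assignment's exact code):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(a, b, c, embeddings):
    """Solve 'a is to b as c is to __' by finding the word whose vector
    is most similar to e_b - e_a + e_c."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    best, best_sim = None, -2.0
    for word, vec in embeddings.items():
        if word in (a, b, c):              # exclude the query words
            continue
        sim = cosine_similarity(target, vec)
        if sim > best_sim:
            best, best_sim = word, sim
    return best
```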
In this assignment, you will:
- Use word vector representations to build an Emojifier.
"glove.6B.50d.txt" is too large (>100MB) to push to github.If you need this file, please email me.
【Demo13】
In this assignment, you will:
- Build a Neural Machine Translation (NMT) model to translate human readable dates ("25th of June, 2009") into machine readable dates ("2009-06-25").
- Use an attention model.
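At the heart of an attention model, raw alignment scores are turned into weights with a softmax and used to form a weighted sum (the context vector) of the encoder outputs; a minimal numpy sketch:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / np.sum(e)

def attention_context(scores, values):
    """scores: (T,) raw alignment scores, one per encoder step.
    values: (T, d) encoder outputs. Returns the attention weights
    and the context vector (their weighted sum)."""
    alphas = softmax(scores)          # attention weights, sum to 1
    return alphas, alphas @ values    # context vector, shape (d,)
```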
In this assignment you will learn to:
- Structure a speech recognition project.
- Synthesize and process audio recordings to create train/dev datasets.
- Train a trigger word detection model and make predictions.
"Trigger word detection/XY_train/X.npy" is too large (>100MB) to push to github.If you need this file, please email me.
"Trigger word detection/XY_dev/X_dev.npy" is too large (>100MB) to push to github.If you need this file, please email me.