- Course page
- Course notes
- Notebook for lesson 1
- Lesson 1 Overview
- Lesson 1: Practical Deep Learning for Coders
wget http://www.platform.ai/files/nbs/lesson1.ipynb
wget http://www.platform.ai/files/nbs/utils.zip
wget http://www.platform.ai/files/nbs/vgg16.zip
wget http://www.platform.ai/data/dogscats.zip
unzip utils.zip
unzip vgg16.zip
unzip dogscats.zip
rm dogscats.zip
rm utils.zip
rm vgg16.zip
Do not use the wget commands for utils.zip and vgg16.zip. Instead, download those files from the GitHub repo, and also include vgg16bn.py.
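A sketch of that alternative, assuming the fast.ai courses repository layout (the repo URL and paths here are an assumption, not from the original notes):

git clone https://github.com/fastai/courses.git
# copy the helper modules next to the notebooks (paths assumed)
cp courses/deeplearning1/nbs/utils.py courses/deeplearning1/nbs/vgg16.py courses/deeplearning1/nbs/vgg16bn.py .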
- convolution-intro.ipynb - The convolution tutorial notebook used in the introductory lesson presented during the Data Institute launch
- lesson2.ipynb - the main notebook for lesson 2
- redux.ipynb - how to enter the Dogs vs Cats Redux competition, and how to visualize your model's correct and incorrect predictions
- sgd-intro.ipynb - the simple SGD tutorial
val_data = get_data(val_batches)
trn_data = get_data(batches)
should be (get_data expects a directory path, not a batch generator):
val_data = get_data(path + 'valid')
trn_data = get_data(path + 'train')
Moreover, using get_data loads the whole dataset into memory at once, which is inefficient on a p2.xlarge on AWS; the forums suggest using a batch generator instead.
I worked around the problem by running the code as-is and saving the resulting arrays with bcolz. Afterwards I restarted the notebook and ran only the load command, skipping get_data.
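The course's utils.py defines bcolz-backed helpers along these lines (a minimal sketch; check your copy of utils.py for the exact definitions):

import bcolz

def save_array(fname, arr):
    # write the array as a compressed on-disk carray
    c = bcolz.carray(arr, rootdir=fname, mode='w')
    c.flush()

def load_array(fname):
    # read the whole carray back into memory as a numpy array
    return bcolz.open(fname)[:]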
As an aside, you can use sys.getsizeof() to inspect the memory usage of a Python object. This link is also a good read on monitoring your instance's memory usage.
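For example (arr.nbytes reports a numpy array's raw buffer size more directly; the shape here is just an illustration):

import sys
import numpy as np

arr = np.zeros((64, 224, 224, 3), dtype=np.float32)
print(sys.getsizeof(arr))  # object header plus data buffer, in bytes
print(arr.nbytes)          # raw data buffer only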
def fit_model(model, batches, val_batches, nb_epoch=1):
    model.fit_generator(batches, samples_per_epoch=batches.N, nb_epoch=nb_epoch,
                        validation_data=val_batches, nb_val_samples=val_batches.N)

model.evaluate_generator(get_batches('valid', gen, False, batch_size*2), val_batches.N)
The batches.N and val_batches.N should be batches.n and val_batches.n instead, based on the util functions.
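For reference, the corrected helper (the same code with the lowercase attribute, using the Keras 1.x generator API as in the course):

def fit_model(model, batches, val_batches, nb_epoch=1):
    # batches.n / val_batches.n hold the sample counts in the util functions
    model.fit_generator(batches, samples_per_epoch=batches.n, nb_epoch=nb_epoch,
                        validation_data=val_batches, nb_val_samples=val_batches.n)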
- Course page
- Note that there are also articles on Matrix Product, Convolutions (and max pooling), Activations, SGD, Backprop.
- Lesson 3 video
- Lesson 3 notebook
- convolution-intro.ipynb - The convolution tutorial notebook used in the introductory lesson presented during the Data Institute launch. This is covered in week 2 as well.
- MNIST walkthrough notebook
- Suggested Extra Readings
- To do:
- Understand batch normalization
- Understand the difference between correlation and convolution (see the sketch below)
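A quick way to see the difference, assuming scipy is available: convolution flips the kernel before sliding it over the input, while correlation slides it as-is.

import numpy as np
from scipy.signal import convolve2d, correlate2d

img = np.arange(9, dtype=np.float32).reshape(3, 3)
kernel = np.array([[1., 2.], [3., 4.]])

conv = convolve2d(img, kernel, mode='valid')
corr = correlate2d(img, kernel, mode='valid')

# convolution equals correlation with the kernel flipped in both axes
assert np.allclose(conv, correlate2d(img, kernel[::-1, ::-1], mode='valid'))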
width_zoom_range does not exist in image.ImageDataGenerator:
gen = image.ImageDataGenerator(rotation_range=10,
        width_shift_range=0.1, height_shift_range=0.1,
        width_zoom_range=0.2, shear_range=0.15, zoom_range=0.1,
        channel_shift_range=10., horizontal_flip=True, dim_ordering='tf')
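A version that runs simply drops the nonexistent argument (Keras 1.x signature, as used in the course):

gen = image.ImageDataGenerator(rotation_range=10,
        width_shift_range=0.1, height_shift_range=0.1,
        shear_range=0.15, zoom_range=0.1,
        channel_shift_range=10., horizontal_flip=True, dim_ordering='tf')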
The weights file ('/data/jhoward/ILSVRC2012_img/bn_do3_1.h5') is not provided.
- Tried using fc_model weights, but the val_loss explodes upwards.
- Tried using vggbn16 weights; similar case.
Further notes for reference:
- Adjusting learning rates - most promising, but we have to wait for the week 4 lessons
- Enquiry about bn_do3_1.h5
- Proposed solution - but does not work for me
wget http://files.grouplens.org/datasets/movielens/ml-latest-small.zip
unzip ml-latest-small.zip
mv ml-latest-small/ ml-small/
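Once extracted, the ratings load straightforwardly with pandas (a minimal sketch; ml-latest-small ships a ratings.csv with userId, movieId, rating, and timestamp columns):

import pandas as pd

ratings = pd.read_csv('ml-small/ratings.csv')
print(ratings.head())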
Other models are available here.
wget http://www.platform.ai/models/glove/6B.50d.tgz
tar -zxf 6B.50d.tgz
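Assuming the archive unpacks to a standard GloVe text file (one word followed by its 50 floats per line; the exact contents of the platform.ai archive may differ), a minimal loader looks like:

import numpy as np

def load_glove(path):
    # parse 'word v1 v2 ... v50' lines into a vocab list and a matrix
    words, vecs = [], []
    with open(path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            words.append(parts[0])
            vecs.append(np.array(parts[1:], dtype=np.float32))
    return words, np.stack(vecs)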
- If you have the error 'The following error happened while compiling the node', refer to here.
Basically, go to your Keras package directory and edit theano_backend.py, changing
def round(x): return T.round(x, mode='half_to_even')
to
def round(x): return T.round(x, mode='half_away_from_zero')
- The seed is very important when training neural nets; you might not get the same results as the author (a seeding sketch follows below).
- Additional notes here.
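A minimal seeding sketch for the numpy/Theano stack used in the course (note that some GPU operations stay non-deterministic even with fixed seeds):

import random
import numpy as np

random.seed(42)     # Python's built-in RNG
np.random.seed(42)  # numpy RNG, which weight initialization typically draws from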