Repository for training sessions by Titipat Achakulvisut at the Kording lab.
The presentation covers how to get Spark up and running on Quest (Northwestern's computing cluster). The notebook goes through the MapReduce algorithm, with an example on a simple text file. We then implement stochastic gradient descent to train logistic regression to classify the MNIST dataset. Here is a link to the presentation file.
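As a taste of the MapReduce pattern covered in the notebook, here is a minimal word-count sketch in PySpark; the file name `example.txt` is a placeholder, not a file from this repository.

```python
# Minimal MapReduce-style word count with PySpark (sketch; `example.txt` is a placeholder)
from pyspark import SparkContext

sc = SparkContext(appName="wordcount")
counts = (sc.textFile("example.txt")             # read the text file as lines
            .flatMap(lambda line: line.split())  # map: emit one record per word
            .map(lambda word: (word, 1))         # map: (word, 1) key-value pairs
            .reduceByKey(lambda a, b: a + b))    # reduce: sum the counts per word
print(counts.take(10))                           # inspect a few (word, count) pairs
sc.stop()
```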
This tutorial is one of the deep learning tutorials by the Kording lab.
In this presentation, we will go through a simple neural network architecture and the math behind it, including forward propagation and backpropagation (ref. Andrew Ng's lecture).
We will then code the forward propagation and backpropagation algorithms (a minimal numpy sketch is given below) and test them on MNIST examples. If we have time, we will go through existing Python packages that implement neural networks (e.g. Lasagne, scikit-neuralnetwork, pybrain, nolearn).
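Here is a minimal numpy sketch of the forward and backward passes for a one-hidden-layer network with a sigmoid hidden layer and a softmax output trained with cross-entropy loss; the shapes and variable names are our own choices, not code from the session notebook.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    # Forward propagation: input -> sigmoid hidden layer -> softmax output
    z1 = W1 @ x + b1
    a1 = sigmoid(z1)
    z2 = W2 @ a1 + b2
    expz = np.exp(z2 - z2.max())   # numerically stable softmax
    a2 = expz / expz.sum()
    return z1, a1, z2, a2

def backward(x, y, W1, b1, W2, b2):
    # Backpropagation of the cross-entropy loss; y is a one-hot label vector
    z1, a1, z2, a2 = forward(x, W1, b1, W2, b2)
    delta2 = a2 - y                            # dL/dz2 for softmax + cross-entropy
    dW2, db2 = np.outer(delta2, a1), delta2
    delta1 = (W2.T @ delta2) * a1 * (1 - a1)   # chain rule through the sigmoid
    dW1, db1 = np.outer(delta1, x), delta1
    return dW1, db1, dW2, db2
```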
In this session, we will go over recurrent neural networks (RNNs) and their applications in natural language processing (NLP). We'll go through the architecture of the RNN, plus a bit on convolutional neural networks (CNNs) presented by Pavan. We use blog posts and CS224d: Deep Learning for Natural Language Processing as the main materials. After the lecture, we will go over implementing an RNN for an NLP application.
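The core of the vanilla RNN we discuss is the recurrence h_t = tanh(W_xh x_t + W_hh h_{t-1} + b_h). Below is a minimal numpy sketch of this forward pass; the parameter names are our own, not from the session materials.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, W_hy, b_h, b_y):
    # One step of a vanilla RNN: new hidden state and output logits
    h_t = np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)
    y_t = W_hy @ h_t + b_y
    return h_t, y_t

def rnn_forward(xs, h0, params):
    # Unroll over a sequence of input vectors, carrying the hidden state forward
    W_xh, W_hh, W_hy, b_h, b_y = params
    h, outputs = h0, []
    for x_t in xs:
        h, y_t = rnn_step(x_t, h, W_xh, W_hh, W_hy, b_h, b_y)
        outputs.append(y_t)
    return h, outputs
```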
In this tutorial we show how to use TensorFlow to build and train two models: a softmax regression model and a CNN. The example shows how to train the networks and how to save the variables and the computation graph for display in TensorBoard. Further information can be found at TensorFlow.org.
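For a flavor of the softmax regression part, here is a minimal sketch using the modern tf.keras API; the tutorial's own notebook was written against an earlier TensorFlow API, so treat this as an illustration rather than the tutorial code. The TensorBoard callback writes the graph and training summaries to a log directory for display in TensorBoard.

```python
import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Softmax regression: a single dense layer with a softmax output
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784-vector
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="sgd",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Write the graph and training summaries for TensorBoard
tb = tf.keras.callbacks.TensorBoard(log_dir="./logs")
model.fit(x_train, y_train, epochs=5,
          validation_data=(x_test, y_test), callbacks=[tb])
```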
In this session, we go through various kinds of ConvNet visualization techniques. We focus on techniques to visualize the entire neural code, layer-wise neural codes, and individual units, as well as individual layers and units conditioned on specific images.
Here is a list of the papers we discuss; a small t-SNE sketch follows the list.
t-SNE: van der Maaten & Hinton (2008)
t-SNE of convnet codebook: Karpathy's blog
Occlusion technique: Zeiler & Fergus (2014)
Deconvolutional net: Zeiler & Fergus (2014)
Backpropagation technique: Simonyan et al. (2014)
Guided deconvolution: Springenberg, Dosovitskiy et al. (2015)
Visualizing preferred images: Girshick et al. (2013)
Gradient ascent on input image with L2-prior: Simonyan et al. (2014)
Gradient ascent on input image with natural image priors: Yosinski et al. (2015)
Visualizing unique representation of layers: Mahendran & Vedaldi (2014)
Inceptionism: Google blog
DeepArt: Gatys et al. (2015)
Fooling convnets: Nguyen et al. (2015)
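As an example of the first technique on the list, here is a minimal sketch of a t-SNE embedding of convnet codes in the spirit of Karpathy's visualization; `codes.npy` and `labels.npy` are hypothetical files holding penultimate-layer activations and class labels, not files from this repository.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Hypothetical inputs: an (N, D) array of penultimate-layer activations
# and an (N,) array of class labels extracted from a trained convnet
codes = np.load("codes.npy")
labels = np.load("labels.npy")

# Embed the high-dimensional codes into 2-D with t-SNE and plot by class
embedding = TSNE(n_components=2, perplexity=30).fit_transform(codes)
plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, s=5, cmap="tab10")
plt.title("t-SNE of convnet codes")
plt.show()
```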
We also provide an iTorch demo showing how to specify and train a classic CNN (LeNet) using Torch.