EECS498 Deep Learning for Vision

Completed assignments for EECS498/598: Deep Learning for Vision, Fall 2019.

This course, offered by the University of Michigan, is an in-depth treatment of computer vision with a focus on deep learning. The assignments cover topics including, but not limited to, CNN architectures, object detection, image captioning, and GANs. Students will benefit a lot from this course.

Find the course notes and assignments here, and be sure to check out the video lectures for Fall 2019!

If you do not have access to the videos, you can also go through Stanford's CS231n, which overlaps heavily with this course.

All of the assignments below are done in PyTorch.

Assignment 1:

  • Q1: PyTorch 101. Walks you through the basics of working with tensors in PyTorch.
  • Q2: k-Nearest Neighbor classifier. Walks you through implementing a kNN classifier (see the sketch after this list).
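
As a rough illustration of the kNN idea in Q2, here is a minimal sketch that classifies each test point by a majority vote over its nearest training points. It uses `torch.cdist` for brevity, whereas the assignment has you compute the distances yourself with loops and broadcasting; all names and shapes here are illustrative, not the assignment's starter code.

```python
import torch

def knn_predict(x_train, y_train, x_test, k=5):
    # Pairwise Euclidean distances between test and train points: (num_test, num_train).
    dists = torch.cdist(x_test, x_train)
    # Indices of the k nearest training points for every test point.
    _, idx = dists.topk(k, dim=1, largest=False)
    # Majority vote over the neighbors' labels.
    votes = y_train[idx]                      # (num_test, k)
    return votes.mode(dim=1).values

# Toy usage with random data standing in for CIFAR-10.
x_train = torch.randn(100, 3072)
y_train = torch.randint(0, 10, (100,))
x_test = torch.randn(20, 3072)
print(knn_predict(x_train, y_train, x_test, k=5).shape)  # torch.Size([20])
```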

Assignment 2:

  • Q1: Linear Classifiers. Walks you through implementing SVM and Softmax classifiers (a small Softmax-loss sketch follows this list).
  • Q2: Two-layer Neural Network. Walks you through implementing a two-layer neural-network classifier.
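
The sketch below shows the Softmax classifier loss from Q1 in a numerically stable, fully vectorized form. The function name, regularization strength, and shapes are placeholders; the assignment also has you derive and implement the gradient by hand rather than relying on autograd.

```python
import torch

def softmax_loss(W, X, y, reg=1e-4):
    # Class scores: (num_samples, num_classes).
    scores = X @ W
    # Numerically stable log-softmax.
    log_probs = scores - scores.logsumexp(dim=1, keepdim=True)
    # Mean negative log-likelihood of the correct classes, plus L2 regularization.
    return -log_probs[torch.arange(X.shape[0]), y].mean() + reg * (W * W).sum()

X = torch.randn(64, 3072)
y = torch.randint(0, 10, (64,))
W = 1e-3 * torch.randn(3072, 10)
W.requires_grad_(True)
loss = softmax_loss(W, X, y)
loss.backward()  # autograd computes dL/dW here; the assignment derives it analytically
print(loss.item(), W.grad.shape)
```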

Assignment 3:

  • Q1: Fully-Connected Neural Network. Walks you through implementing Fully-Connected Neural Networks (see the sketch after this list).
  • Q2: Convolutional Neural Network. Walks you through implementing Convolutional Neural Networks.
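
Networks in this assignment are typically built from small paired forward/backward layer functions. The sketch below shows that pattern for a single linear (affine) layer and checks the hand-written gradients against autograd; the function names are illustrative and not taken from the assignment.

```python
import torch

def linear_forward(x, w, b):
    # x: (N, D), w: (D, M), b: (M,) -> out: (N, M); cache inputs for the backward pass.
    out = x @ w + b
    return out, (x, w)

def linear_backward(dout, cache):
    # Given the upstream gradient dout: (N, M), return gradients w.r.t. x, w, and b.
    x, w = cache
    dx = dout @ w.t()
    dw = x.t() @ dout
    db = dout.sum(dim=0)
    return dx, dw, db

# Check the hand-written gradients against autograd on tiny random inputs.
x = torch.randn(4, 5, requires_grad=True)
w = torch.randn(5, 3, requires_grad=True)
b = torch.randn(3, requires_grad=True)
out, cache = linear_forward(x, w, b)
out.sum().backward()
dx, dw, db = linear_backward(torch.ones_like(out), cache)
print(torch.allclose(dx, x.grad), torch.allclose(dw, w.grad), torch.allclose(db, b.grad))
```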

Assignment 4:

  • Q1: PyTorch Autograd. Introduces you to the different levels of abstraction that PyTorch provides for building neural network models. You will use this knowledge to implement and train Residual Networks for image classification.
  • Q2: Image Captioning with Recurrent Neural Networks. Walks you through the implementation of vanilla recurrent neural networks (RNNs) and Long Short-Term Memory (LSTM) RNNs. You will use these networks to train an image captioning model, and then augment your implementation to perform spatial attention over image regions while generating captions.
  • Q3: Network Visualization. Walks you through the use of image gradients for generating saliency maps, adversarial examples, and class visualizations (a saliency-map sketch follows this list).
  • Q4: Style Transfer. Shows you how to create images with the artistic style of one image and the content of another image.
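
As a rough sketch of the saliency-map idea in Q3: backpropagate the correct-class score onto the input pixels and take the largest absolute gradient over the color channels. The untrained SqueezeNet below is only there to show the mechanics; the assignment works with a provided pretrained model and properly preprocessed images.

```python
import torch
import torchvision

# Untrained SqueezeNet just to show the mechanics; the assignment uses a pretrained model.
model = torchvision.models.squeezenet1_1().eval()
for p in model.parameters():
    p.requires_grad_(False)

def saliency_map(x, y):
    # x: (N, 3, H, W) preprocessed images, y: (N,) ground-truth class indices.
    x = x.clone().detach().requires_grad_(True)
    scores = model(x)
    # Backprop the correct-class scores onto the input pixels.
    scores.gather(1, y.unsqueeze(1)).sum().backward()
    # Saliency = maximum absolute gradient over the three color channels.
    return x.grad.abs().max(dim=1).values

x = torch.randn(2, 3, 224, 224)
y = torch.tensor([0, 1])
print(saliency_map(x, y).shape)  # torch.Size([2, 224, 224])
```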

Assignment 5:

  • Q1: Single-Stage Detector. Walks you through the implementation of a fully-convolutional single-stage object detector similar to YOLO (Redmon et al., CVPR 2016). You will train and evaluate your detector on the PASCAL VOC 2007 object detection dataset (see the IoU sketch after this list).
  • Q2: Two-Stage Detector. Walks you through the implementation of a two-stage object detector similar to Faster R-CNN (Ren et al., NeurIPS 2015), combining a fully-convolutional Region Proposal Network (RPN) with a second-stage recognition network.
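
Both detectors need to compare predicted or anchor boxes against ground-truth boxes using intersection-over-union (IoU). A minimal vectorized sketch, assuming boxes in (x1, y1, x2, y2) format; the function name and conventions are illustrative rather than the assignment's.

```python
import torch

def box_iou(boxes1, boxes2):
    # boxes1: (N, 4), boxes2: (M, 4) in (x1, y1, x2, y2) format -> IoU matrix: (N, M).
    area1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1])
    area2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1])
    # Corners of the pairwise intersection rectangles, via broadcasting.
    lt = torch.max(boxes1[:, None, :2], boxes2[None, :, :2])   # (N, M, 2)
    rb = torch.min(boxes1[:, None, 2:], boxes2[None, :, 2:])   # (N, M, 2)
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area1[:, None] + area2[None, :] - inter)

a = torch.tensor([[0., 0., 10., 10.]])
b = torch.tensor([[5., 5., 15., 15.], [20., 20., 30., 30.]])
print(box_iou(a, b))  # tensor([[0.1429, 0.0000]])
```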

Assignment 6:

  • Q1: Generative Adversarial Networks. Walks you through the implementation of fully-connected and convolutional generative adversarial networks on the MNIST dataset (a minimal training-step sketch follows).
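
A minimal sketch of one vanilla GAN training step with binary cross-entropy losses. The layer sizes, architectures, and hyperparameters below are placeholders and a random batch stands in for MNIST images; the assignment specifies its own generator, discriminator, and loss variants.

```python
import torch
from torch import nn

# Placeholder sizes and architectures; the assignment defines its own for 28x28 MNIST images.
noise_dim, img_dim, batch = 96, 784, 128
G = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.01), nn.Linear(256, 1))
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

real = torch.rand(batch, img_dim) * 2 - 1          # stand-in for a batch of real images
fake = G(torch.randn(batch, noise_dim))

# Discriminator step: real images should score 1, generated images 0.
d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
opt_D.zero_grad()
d_loss.backward()
opt_D.step()

# Generator step: try to make the discriminator score generated images as 1.
g_loss = bce(D(fake), torch.ones(batch, 1))
opt_G.zero_grad()
g_loss.backward()
opt_G.step()
print(d_loss.item(), g_loss.item())
```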