The purpose of this project is to give developers and researchers a shortcut for finding useful resources about Deep Learning.
There are several motivations behind this open source project.
Other repositories similar to this one are very comprehensive and useful, and, to be honest, they made me wonder whether this repository is necessary at all!
The point of this repository is that the resources are targeted. The resources are organized so that users can easily find what they are looking for. We divided the resources into a large number of categories, which may feel overwhelming at first. However, once you know what you are looking for, it is easy to locate the most relevant resources. And even if you do not yet know what to look for, general resources are provided as a starting point.
This section is dedicated to papers published in deep learning.
Imagenet classification with deep convolutional neural networks : [Paper]
Convolutional Neural Networks for Sentence Classification : [Paper]
Large-scale Video Classification with Convolutional Neural Networks : [Paper]
Learning and Transferring Mid-Level Image Representations using Convolutional Neural Networks : [Paper]
Deep convolutional neural networks for LVCSR : [Paper]
Face recognition: a convolutional neural-network approach : [Paper]
An empirical exploration of recurrent network architectures : [Paper]
LSTM: A search space odyssey : [Paper]
On the difficulty of training recurrent neural networks : [Paper]
Learning to forget: Continual prediction with LSTM : [Paper]
Extracting and composing robust features with denoising autoencoders : [Paper]
Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion : [Paper]
Adversarial Autoencoders : [Paper]
Autoencoders, Unsupervised Learning, and Deep Architectures : [Paper]
Reducing the Dimensionality of Data with Neural Networks : [Paper]
Exploiting generative models in discriminative classifiers : [Paper]
Semi-supervised Learning with Deep Generative Models : [Paper]
Generative Adversarial Nets : [Paper]
Generalized Denoising Auto-Encoders as Generative Models : [Paper]
Stochastic Backpropagation and Approximate Inference in Deep Generative Models : [Paper]
Probabilistic models of cognition: exploring representations and inductive biases : [Paper]
On deep generative models with applications to recognition : [Paper]
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift : [Paper]
Dropout: A Simple Way to Prevent Neural Networks from Overfitting : [Paper]
Training Very Deep Networks : [Paper]
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification : [Paper]
Large Scale Distributed Deep Networks : [Paper]
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks : [Paper]
Representation Learning: A Review and New Perspectives : [Paper]
InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets : [Paper]
Learning and Transferring Mid-Level Image Representations using Convolutional Neural Networks : [Paper]
Distilling the Knowledge in a Neural Network : [Paper]
DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition : [Paper]
How transferable are features in deep neural networks? : [Paper]
Human-level control through deep reinforcement learning : [Paper]
Playing Atari with Deep Reinforcement Learning : [Paper]
Continuous control with deep reinforcement learning : [Paper](https://arxiv.org/abs/1509.02971)
Deep Reinforcement Learning with Double Q-Learning : [Paper]
Dueling Network Architectures for Deep Reinforcement Learning : [Paper]
Deep Residual Learning for Image Recognition : [Paper]
Very Deep Convolutional Networks for Large-Scale Image Recognition : [Paper]
Multi-column Deep Neural Networks for Image Classification : [Paper]
DeepID3: Face Recognition with Very Deep Neural Networks : [Paper]
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps : [Paper]
Deep Image: Scaling up Image Recognition : [Paper]
Long-Term Recurrent Convolutional Networks for Visual Recognition and Description : [Paper]
ImageNet Classification with Deep Convolutional Neural Networks : [Paper]
Learning Deep Features for Scene Recognition using Places Database : [Paper]
Scalable Object Detection using Deep Neural Networks : [Paper]
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks : [Paper]
OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks : [Paper]
CNN Features Off-the-Shelf: An Astounding Baseline for Recognition : [Paper]
What is the best multi-stage architecture for object recognition? : [Paper]
Long-Term Recurrent Convolutional Networks for Visual Recognition and Description : [Paper]
Learning Spatiotemporal Features With 3D Convolutional Networks : [Paper]
Describing Videos by Exploiting Temporal Structure : [Paper]
Convolutional Two-Stream Network Fusion for Video Action Recognition : [Paper]
Temporal segment networks: Towards good practices for deep action recognition : [Paper]
Show, Attend and Tell: Neural Image Caption Generation with Visual Attention : [Paper]
Mind's Eye: A Recurrent Visual Representation for Image Caption Generation : [Paper]
Generative Adversarial Text to Image Synthesis : [Paper]
Deep Visual-Semantic Alignments for Generating Image Descriptions : [Paper]
Show and Tell: A Neural Image Caption Generator : [Paper]
Distributed Representations of Words and Phrases and their Compositionality : [Paper]
Efficient Estimation of Word Representations in Vector Space : [Paper]
Sequence to Sequence Learning with Neural Networks : [Paper]
Neural Machine Translation by Jointly Learning to Align and Translate : [Paper]
Get To The Point: Summarization with Pointer-Generator Networks : [Paper]
Attention Is All You Need : [Paper]
Convolutional Neural Networks for Sentence Classification : [Paper]
Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups : [Paper]
Towards End-to-End Speech Recognition with Recurrent Neural Networks : [Paper]
Speech recognition with deep recurrent neural networks : [Paper]
Fast and Accurate Recurrent Neural Network Acoustic Models for Speech Recognition : [Paper]
Deep Speech 2 : End-to-End Speech Recognition in English and Mandarin : [Paper]
A novel scheme for speaker recognition using a phonetically-aware deep neural network : [Paper]
- MNIST Handwritten digits: [Link]
- Face Recognition Technology (FERET) The goal of the FERET program was to develop automatic face recognition capabilities that could be employed to assist security, intelligence, and law enforcement personnel in the performance of their duties: [Link]
- The CMU Pose, Illumination, and Expression (PIE) Database of Human Faces A database of 41,368 images of 68 people, collected between October and December 2000: [Link]
- YouTube Faces DB The data set contains 3,425 videos of 1,595 different people. All the videos were downloaded from YouTube. An average of 2.15 videos are available for each subject: [Link]
- Grammatical Facial Expressions Data Set Developed to assist the automated analysis of facial expressions: [Link]
- FaceScrub A Dataset With Over 100,000 Face Images of 530 People: [Link]
- IMDB-WIKI 500k+ face images with age and gender labels: [Link]
- COCO Microsoft COCO: Common Objects in Context: [Link]
- ImageNet The famous ImageNet dataset: [Link]
- Open Images Dataset Open Images is a dataset of ~9 million images that have been annotated with image-level labels and object bounding boxes: [Link]
- Caltech-256 Object Category Dataset A large dataset for object classification: [Link]
- Pascal VOC dataset A large dataset for classification tasks: [Link]
- CIFAR 10 / CIFAR 100 The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes. CIFAR-100 is similar to CIFAR-10 but it has 100 classes containing 600 images each: [Link]
- HMDB a large human motion database: [Link]
- MHAD Berkeley Multimodal Human Action Database: [Link]
- UCF101 - Action Recognition Data Set UCF101 is an action recognition data set of realistic action videos, collected from YouTube, having 101 action categories. This data set is an extension of UCF50 data set which has 50 action categories: [Link]
- THUMOS Dataset A large dataset for action classification: [Link]
- ActivityNet A Large-Scale Video Benchmark for Human Activity Understanding: [Link]
- 1 Billion Word Language Model Benchmark: The purpose of the project is to make available a standard training and test setup for language modeling experiments: [Link]
- Common Crawl: The Common Crawl corpus contains petabytes of data collected over the last 7 years. It contains raw web page data, extracted metadata and text extractions: [Link]
- Yelp Open Dataset: A subset of Yelp's businesses, reviews, and user data for use in personal, educational, and academic purposes: [Link]
- 20 newsgroups The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups: [Link]
- Broadcast News The 1996 Broadcast News Speech Corpus contains a total of 104 hours of broadcasts from ABC, CNN and CSPAN television networks and NPR and PRI radio networks with corresponding transcripts: [Link]
- The WikiText long term dependency language modeling dataset: A collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia: [Link]
- Question Answering Corpus by DeepMind and Oxford, consisting of two corpora of roughly a million news stories with associated queries from the CNN and Daily Mail websites: [Link]
- Stanford Question Answering Dataset (SQuAD) consisting of questions posed by crowdworkers on a set of Wikipedia articles: [Link]
- Amazon question/answer data contains Question and Answer data from Amazon, totaling around 1.4 million answered questions: [Link]
- Multi-Domain Sentiment Dataset The Multi-Domain Sentiment Dataset contains product reviews taken from Amazon.com from many product types (domains): [Link]
- Stanford Sentiment Treebank Dataset The Stanford Sentiment Treebank is the first corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language: [Link]
- Large Movie Review Dataset: This is a dataset for binary sentiment classification: [Link]
- Aligned Hansards of the 36th Parliament of Canada dataset contains 1.3 million pairs of aligned text chunks: [Link]
- Europarl: A Parallel Corpus for Statistical Machine Translation dataset extracted from the proceedings of the European Parliament: [Link]
- Legal Case Reports Data Set A textual corpus of 4000 legal cases for automatic summarization and citation analysis: [Link]
- TIMIT Acoustic-Phonetic Continuous Speech Corpus The TIMIT corpus of read speech is designed to provide speech data for acoustic-phonetic studies and for the development and evaluation of automatic speech recognition systems: [Link]
- LibriSpeech LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey: [Link]
- VoxCeleb A large scale audio-visual dataset: [Link]
- NIST Speaker Recognition: [Link]
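Many of the datasets listed above can be fetched with only a few lines of code. The snippet below is a minimal sketch, assuming PyTorch/torchvision and scikit-learn are installed; the `./data` directory and the batch size are arbitrary choices for illustration, not properties of the datasets themselves.

```python
# Minimal sketch (assumes torch, torchvision and scikit-learn are installed).
# Shows how two of the datasets listed above can be downloaded programmatically.
import torch
from torchvision import datasets, transforms
from sklearn.datasets import fetch_20newsgroups

to_tensor = transforms.ToTensor()

# CIFAR-10: 60,000 32x32 colour images in 10 classes (50,000 train / 10,000 test).
cifar_train = datasets.CIFAR10(root="./data", train=True, download=True, transform=to_tensor)
cifar_loader = torch.utils.data.DataLoader(cifar_train, batch_size=64, shuffle=True)
images, labels = next(iter(cifar_loader))
print(images.shape)  # torch.Size([64, 3, 32, 32])

# 20 Newsgroups: ~20,000 newsgroup documents partitioned across 20 categories.
newsgroups = fetch_20newsgroups(subset="train")
print(len(newsgroups.data), len(newsgroups.target_names))
```

Most of the other image datasets (MNIST, ImageNet, COCO, etc.) have similar loaders in torchvision, although some require a manual download and extra dependencies.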
- Machine Learning by Stanford on Coursera : [Link]
- Neural Networks and Deep Learning Specialization by Coursera: [Link]
- Intro to Deep Learning by Google: [Link]
- NVIDIA Deep Learning Institute by NVIDIA: [Link]
- Convolutional Neural Networks for Visual Recognition by Stanford: [Link]
- Deep Learning for Natural Language Processing by Stanford: [Link]
- Deep Learning by fast.ai: [Link]
- Deep Learning by Ian Goodfellow: [Link]
- Neural Networks and Deep Learning : [Link]
- Deep Learning with Python: [Link]
- Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems: [Link]
- Colah's blog: [Link]
- Andrej Karpathy blog: [Link]
- The Spectator Shakir's Machine Learning Blog: [Link]
- WILDML: [Link]
- Distill blog: [Link]
- BAIR Berkeley Artificial Intelligence Research: [Link]
- Sebastian Ruder's blog: [Link]
- inFERENCe: [Link]
- i am trask A Machine Learning Craftsmanship Blog: [Link]
- Deep Learning Tutorials: [Link]
- Deep Learning for NLP with Pytorch by Pytorch: [Link]
- Deep Learning for Natural Language Processing: Tutorials with Jupyter Notebooks by Jon Krohn: [Link]
For typos and other minor changes, please do not create a pull request; instead, report them in an issue or email the repository owner. Please note that we have a code of conduct; please follow it in all your interactions with the project.
Please consider the following criteria to help us review contributions more effectively:
- The pull request is mainly expected to be a link suggestion.
- Please make sure your suggested resources are not obsolete or broken.
- You may merge the pull request once you have the sign-off of at least one other developer; if you do not have permission to do that, you may ask the owner to merge it for you once all checks have passed.
We look forward to your feedback. Please help us improve this open source project. To contribute, create a pull request and we will review it promptly. Once again, we appreciate your feedback and support.