Deep Learning-Based NLP (Stanford CS224n)
https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1194/
https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1194/index.html#schedule
https://github.com/amitp-ai/CS224n_Stanford_NLP/tree/main/Project_QA_SQuAD2
Experimented with:
- Bi-directional Attention Flow (BiDAF), with and without character-level embeddings (in addition to word-level embeddings)
- BiDAF with soft output labels (label smoothing) for lower variance
- Question-Answer Network (QANet)
- QANet with stochastic depth dropout (inside the contextual embedding layer)
- Bidirectional Encoder Representations from Transformers (BERT)-based architecture for question-answering tasks
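Stochastic depth drops entire residual layers at random during training and rescales the residual branch at inference so expectations match. A minimal NumPy sketch of the idea as it might be applied to one encoder layer (the function names and signature here are illustrative, not taken from the repo's code):

```python
import numpy as np

def stochastic_depth_layer(x, residual_fn, survival_prob, training, rng=None):
    """One residual layer with stochastic depth (Huang et al., 2016).

    During training, the residual branch is skipped entirely with
    probability 1 - survival_prob; at inference, the branch output is
    scaled by survival_prob so the expected output is unchanged.
    """
    if training:
        if rng.random() < survival_prob:
            return x + residual_fn(x)  # layer survives this step
        return x                       # layer is dropped entirely
    # Inference: keep the layer, rescale to match the training expectation.
    return x + survival_prob * residual_fn(x)

# Illustrative use: survival probability often decays linearly with depth,
# so deeper layers in the encoder stack are dropped more aggressively.
rng = np.random.default_rng(0)
x = np.ones(4)
out = stochastic_depth_layer(x, lambda t: 2.0 * t, survival_prob=0.5,
                             training=False)
```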
Assignment 1: Exploring word vectors
Assignment 2: Understanding and Implementing Word2Vec
Assignment 3: Neural Dependency Parser
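The word2vec assignment centers on the naive-softmax skip-gram loss: for a center word c and observed outside word o, the loss is -log softmax(U v_c)[o]. A small NumPy sketch of that loss and its gradient with respect to the center vector (function and variable names are my own, not the official starter code):

```python
import numpy as np

def naive_softmax_loss(center_vec, outside_idx, outside_vecs):
    """Naive-softmax skip-gram loss for one (center, outside) pair.

    center_vec:   (d,)  embedding v_c of the center word
    outside_idx:  int   index o of the observed outside word
    outside_vecs: (V,d) outside-word embedding matrix U
    """
    scores = outside_vecs @ center_vec          # (V,) dot products
    scores -= scores.max()                      # subtract max for stability
    probs = np.exp(scores) / np.exp(scores).sum()
    loss = -np.log(probs[outside_idx])
    # d(loss)/d(v_c) = U^T (probs - one_hot(o))
    grad_center = outside_vecs.T @ probs - outside_vecs[outside_idx]
    return loss, grad_center
```

With all-zero outside vectors the softmax is uniform, so the loss reduces to log V and the gradient vanishes; that is a handy sanity check when implementing the vectorized gradients.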