NLP-Models-Tensorflow

Gathers machine learning and TensorFlow deep learning models for NLP problems.



MIT License


NLP-Models-Tensorflow gathers machine learning and TensorFlow deep learning models for NLP problems. All code is simplified inside Jupyter Notebooks.


Objective

The original implementations are quite complex and not really beginner-friendly, so I tried to simplify most of them. There are also implementations of many recently released papers, so feel free to use this for your own research!

For the models I did not implement from scratch, I attach the original GitHub repositories; essentially I copied and pasted those codebases and fixed deprecation issues.

TensorFlow version

TensorFlow version 1.10 and above only; 2.X versions are not supported.
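If you want to fail fast at the top of a notebook, a version guard like the following works (a sketch; the `is_supported_tf` helper is not part of this repository):

```python
def is_supported_tf(version: str) -> bool:
    """Return True for TensorFlow 1.10+ but not any 2.X release."""
    major, minor = (int(part) for part in version.split(".")[:2])
    return major == 1 and minor >= 10

# Usage inside a notebook:
# import tensorflow as tf
# assert is_supported_tf(tf.__version__), "need TF >= 1.10, < 2.0"
```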

Contents

Text classification

Trained on English sentiment dataset.

  1. Basic cell RNN
  2. Bidirectional RNN
  3. LSTM cell RNN
  4. GRU cell RNN
  5. LSTM RNN + Conv2D
  6. K-max Conv1d
  7. LSTM RNN + Conv1D + Highway
  8. LSTM RNN with Attention
  9. Neural Turing Machine
  10. BERT
  11. Dynamic Memory Network
  12. XL-net
Complete list (76 notebooks)
  1. Basic cell RNN
  2. Basic cell RNN + Hinge
  3. Basic cell RNN + Huber
  4. Basic cell Bidirectional RNN
  5. Basic cell Bidirectional RNN + Hinge
  6. Basic cell Bidirectional RNN + Huber
  7. LSTM cell RNN
  8. LSTM cell RNN + Hinge
  9. LSTM cell RNN + Huber
  10. LSTM cell Bidirectional RNN
  11. LSTM cell Bidirectional RNN + Huber
  12. LSTM cell RNN + Dropout + L2
  13. GRU cell RNN
  14. GRU cell RNN + Hinge
  15. GRU cell RNN + Huber
  16. GRU cell Bidirectional RNN
  17. GRU cell Bidirectional RNN + Hinge
  18. GRU cell Bidirectional RNN + Huber
  19. LSTM RNN + Conv2D
  20. K-max Conv1d
  21. LSTM RNN + Conv1D + Highway
  22. LSTM RNN + Basic Attention
  23. LSTM Dilated RNN
  24. Layer-Norm LSTM cell RNN
  25. Only Attention Neural Network
  26. Multihead-Attention Neural Network
  27. Neural Turing Machine
  28. LSTM Seq2Seq
  29. LSTM Seq2Seq + Luong Attention
  30. LSTM Seq2Seq + Bahdanau Attention
  31. LSTM Seq2Seq + Beam Decoder
  32. LSTM Bidirectional Seq2Seq
  33. Pointer Net
  34. LSTM cell RNN + Bahdanau Attention
  35. LSTM cell RNN + Luong Attention
  36. LSTM cell RNN + Stack Bahdanau Luong Attention
  37. LSTM cell Bidirectional RNN + backward Bahdanau + forward Luong
  38. Bytenet
  39. Fast-slow LSTM
  40. Siamese Network
  41. LSTM Seq2Seq + tf.estimator
  42. Capsule layers + RNN LSTM
  43. Capsule layers + LSTM Seq2Seq
  44. Capsule layers + LSTM Bidirectional Seq2Seq
  45. Nested LSTM
  46. LSTM Seq2Seq + Highway
  47. Triplet loss + LSTM
  48. DNC (Differentiable Neural Computer)
  49. ConvLSTM
  50. Temporal Convolutional Net
  51. Batch-all Triplet-loss + LSTM
  52. Fast-text
  53. Gated Convolution Network
  54. Simple Recurrent Unit
  55. LSTM Hierarchical Attention Network
  56. Bidirectional Transformers
  57. Dynamic Memory Network
  58. Entity Network
  59. End-to-End Memory Network
  60. BOW-Chars Deep sparse Network
  61. Residual Network using Atrous CNN
  62. Residual Network using Atrous CNN + Bahdanau Attention
  63. Deep pyramid CNN
  64. Transformer-XL
  65. Transfer learning GPT-2 345M
  66. Quasi-RNN
  67. Tacotron
  68. Slice GRU
  69. Slice GRU + Bahdanau
  70. Wavenet
  71. Transfer learning BERT Base
  72. Transfer learning XL-net Large
  73. LSTM BiRNN global Max and average pooling
  74. Transfer learning BERT Base drop 6 layers
  75. Transfer learning BERT Large drop 12 layers
  76. Transfer learning XL-net Base

Chatbot

Trained on Cornell Movie Dialog corpus.

  1. Seq2Seq-manual
  2. Seq2Seq-API Greedy
  3. Bidirectional Seq2Seq-manual
  4. Bidirectional Seq2Seq-API Greedy
  5. Bidirectional Seq2Seq-manual + backward Bahdanau + forward Luong
  6. Bidirectional Seq2Seq-API + backward Bahdanau + forward Luong + Stack Bahdanau Luong Attention + Beam Decoder
  7. Bytenet
  8. Capsule layers + LSTM Seq2Seq-API + Luong Attention + Beam Decoder
  9. End-to-End Memory Network
  10. Attention is All you need
  11. Transformer-XL + LSTM
  12. GPT-2 + LSTM
  13. Tacotron + Beam decoder
Complete list (54 notebooks)
  1. Basic cell Seq2Seq-manual
  2. LSTM Seq2Seq-manual
  3. GRU Seq2Seq-manual
  4. Basic cell Seq2Seq-API Greedy
  5. LSTM Seq2Seq-API Greedy
  6. GRU Seq2Seq-API Greedy
  7. Basic cell Bidirectional Seq2Seq-manual
  8. LSTM Bidirectional Seq2Seq-manual
  9. GRU Bidirectional Seq2Seq-manual
  10. Basic cell Bidirectional Seq2Seq-API Greedy
  11. LSTM Bidirectional Seq2Seq-API Greedy
  12. GRU Bidirectional Seq2Seq-API Greedy
  13. Basic cell Seq2Seq-manual + Luong Attention
  14. LSTM Seq2Seq-manual + Luong Attention
  15. GRU Seq2Seq-manual + Luong Attention
  16. Basic cell Seq2Seq-manual + Bahdanau Attention
  17. LSTM Seq2Seq-manual + Bahdanau Attention
  18. GRU Seq2Seq-manual + Bahdanau Attention
  19. LSTM Bidirectional Seq2Seq-manual + Luong Attention
  20. GRU Bidirectional Seq2Seq-manual + Luong Attention
  21. LSTM Bidirectional Seq2Seq-manual + Bahdanau Attention
  22. GRU Bidirectional Seq2Seq-manual + Bahdanau Attention
  23. LSTM Bidirectional Seq2Seq-manual + backward Bahdanau + forward Luong
  24. GRU Bidirectional Seq2Seq-manual + backward Bahdanau + forward Luong
  25. LSTM Seq2Seq-API Greedy + Luong Attention
  26. GRU Seq2Seq-API Greedy + Luong Attention
  27. LSTM Seq2Seq-API Greedy + Bahdanau Attention
  28. GRU Seq2Seq-API Greedy + Bahdanau Attention
  29. LSTM Seq2Seq-API Beam Decoder
  30. GRU Seq2Seq-API Beam Decoder
  31. LSTM Bidirectional Seq2Seq-API + Luong Attention + Beam Decoder
  32. GRU Bidirectional Seq2Seq-API + Luong Attention + Beam Decoder
  33. LSTM Bidirectional Seq2Seq-API + backward Bahdanau + forward Luong + Stack Bahdanau Luong Attention + Beam Decoder
  34. GRU Bidirectional Seq2Seq-API + backward Bahdanau + forward Luong + Stack Bahdanau Luong Attention + Beam Decoder
  35. Bytenet
  36. LSTM Seq2Seq + tf.estimator
  37. Capsule layers + LSTM Seq2Seq-API Greedy
  38. Capsule layers + LSTM Seq2Seq-API + Luong Attention + Beam Decoder
  39. LSTM Bidirectional Seq2Seq-API + backward Bahdanau + forward Luong + Stack Bahdanau Luong Attention + Beam Decoder + Dropout + L2
  40. DNC Seq2Seq
  41. LSTM Bidirectional Seq2Seq-API + Luong Monotonic Attention + Beam Decoder
  42. LSTM Bidirectional Seq2Seq-API + Bahdanau Monotonic Attention + Beam Decoder
  43. End-to-End Memory Network + Basic cell
  44. End-to-End Memory Network + LSTM cell
  45. Attention is all you need
  46. Transformer-XL
  47. Attention is all you need + Beam Search
  48. Transformer-XL + LSTM
  49. GPT-2 + LSTM
  50. Fairseq
  51. Conv-Encoder + LSTM
  52. Tacotron + Greedy decoder
  53. Tacotron + Beam decoder
  54. Google NMT
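Many of the notebooks above mix Luong (multiplicative) and Bahdanau (additive) attention, e.g. "backward Bahdanau + forward Luong". A NumPy sketch of the two scoring functions and the resulting context vector (shapes and weight names are illustrative, not the notebooks' code):

```python
import numpy as np

def luong_score(query, keys, W):
    # Multiplicative: score_t = query^T W key_t
    return keys @ (W @ query)

def bahdanau_score(query, keys, W1, W2, v):
    # Additive: score_t = v^T tanh(W1 query + W2 key_t)
    return np.tanh(query @ W1.T + keys @ W2.T) @ v

def attend(scores, values):
    # Softmax over scores, then weighted sum of encoder states.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values  # context vector
```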

Neural machine translation

Trained on 500 English-Vietnamese sentence pairs.

  1. Seq2Seq-manual
  2. Seq2Seq-API Greedy
  3. Bidirectional Seq2Seq-manual
  4. Bidirectional Seq2Seq-API Greedy
  5. Bidirectional Seq2Seq-manual + backward Bahdanau + forward Luong
  6. Bidirectional Seq2Seq-API + backward Bahdanau + forward Luong + Stack Bahdanau Luong Attention + Beam Decoder
  7. Bytenet
  8. Capsule layers + LSTM Seq2Seq-API + Luong Attention + Beam Decoder
  9. End-to-End Memory Network
  10. Attention is All you need
  11. BERT + Dilated Fairseq
Complete list (55 notebooks)
  1. Basic cell Seq2Seq-manual
  2. LSTM Seq2Seq-manual
  3. GRU Seq2Seq-manual
  4. Basic cell Seq2Seq-API Greedy
  5. LSTM Seq2Seq-API Greedy
  6. GRU Seq2Seq-API Greedy
  7. Basic cell Bidirectional Seq2Seq-manual
  8. LSTM Bidirectional Seq2Seq-manual
  9. GRU Bidirectional Seq2Seq-manual
  10. Basic cell Bidirectional Seq2Seq-API Greedy
  11. LSTM Bidirectional Seq2Seq-API Greedy
  12. GRU Bidirectional Seq2Seq-API Greedy
  13. Basic cell Seq2Seq-manual + Luong Attention
  14. LSTM Seq2Seq-manual + Luong Attention
  15. GRU Seq2Seq-manual + Luong Attention
  16. Basic cell Seq2Seq-manual + Bahdanau Attention
  17. LSTM Seq2Seq-manual + Bahdanau Attention
  18. GRU Seq2Seq-manual + Bahdanau Attention
  19. LSTM Bidirectional Seq2Seq-manual + Luong Attention
  20. GRU Bidirectional Seq2Seq-manual + Luong Attention
  21. LSTM Bidirectional Seq2Seq-manual + Bahdanau Attention
  22. GRU Bidirectional Seq2Seq-manual + Bahdanau Attention
  23. LSTM Bidirectional Seq2Seq-manual + backward Bahdanau + forward Luong
  24. GRU Bidirectional Seq2Seq-manual + backward Bahdanau + forward Luong
  25. LSTM Seq2Seq-API Greedy + Luong Attention
  26. GRU Seq2Seq-API Greedy + Luong Attention
  27. LSTM Seq2Seq-API Greedy + Bahdanau Attention
  28. GRU Seq2Seq-API Greedy + Bahdanau Attention
  29. LSTM Seq2Seq-API Beam Decoder
  30. GRU Seq2Seq-API Beam Decoder
  31. LSTM Bidirectional Seq2Seq-API + Luong Attention + Beam Decoder
  32. GRU Bidirectional Seq2Seq-API + Luong Attention + Beam Decoder
  33. LSTM Bidirectional Seq2Seq-API + backward Bahdanau + forward Luong + Stack Bahdanau Luong Attention + Beam Decoder
  34. GRU Bidirectional Seq2Seq-API + backward Bahdanau + forward Luong + Stack Bahdanau Luong Attention + Beam Decoder
  35. Bytenet
  36. LSTM Seq2Seq + tf.estimator
  37. Capsule layers + LSTM Seq2Seq-API Greedy
  38. Capsule layers + LSTM Seq2Seq-API + Luong Attention + Beam Decoder
  39. LSTM Bidirectional Seq2Seq-API + backward Bahdanau + forward Luong + Stack Bahdanau Luong Attention + Beam Decoder + Dropout + L2
  40. DNC Seq2Seq
  41. LSTM Bidirectional Seq2Seq-API + Luong Monotonic Attention + Beam Decoder
  42. LSTM Bidirectional Seq2Seq-API + Bahdanau Monotonic Attention + Beam Decoder
  43. End-to-End Memory Network + Basic cell
  44. End-to-End Memory Network + LSTM cell
  45. Attention is all you need
  46. Transformer-XL
  47. Attention is all you need + Beam Search
  48. Fairseq
  49. Conv-Encoder + LSTM
  50. Bytenet Greedy
  51. Residual GRU Bidirectional Seq2Seq-API Greedy
  52. Google NMT
  53. Dilated Seq2Seq
  54. BERT Encoder + LSTM Luong Decoder
  55. BERT Encoder + Dilated Fairseq
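Several of the decoders above use beam search instead of greedy decoding. The core idea, sketched over toy per-step token probabilities (a real decoder would condition each step's distribution on the prefix; this simplification is mine, not the notebooks'):

```python
import math

def beam_search(step_probs, beam_width=2):
    """step_probs: list of dicts mapping token -> probability at each
    decoding step (a toy stand-in for a decoder's softmax output).
    Returns the highest-scoring token sequence."""
    beams = [((), 0.0)]  # (token tuple, cumulative log-probability)
    for probs in step_probs:
        candidates = [
            (seq + (tok,), score + math.log(p))
            for seq, score in beams
            for tok, p in probs.items()
        ]
        # Keep only the top `beam_width` partial hypotheses.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return list(beams[0][0])
```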

Embedding

Trained on English sentiment dataset.

  1. Word Vector using CBOW sampled softmax
  2. Word Vector using CBOW noise contrastive estimation
  3. Word Vector using skipgram sampled softmax
  4. Word Vector using skipgram noise contrastive estimation
  5. Lda2Vec Tensorflow
  6. Supervised Embedded
  7. Triplet-loss + LSTM
  8. LSTM Auto-Encoder
  9. Batch-All Triplet-loss LSTM
  10. Fast-text
  11. ELMO (biLM)
  12. Triplet-loss + BERT

POS tagging

Trained on CONLL POS.

  1. Bidirectional RNN + CRF, test accuracy 92%
  2. Bidirectional RNN + Luong Attention + CRF, test accuracy 91%
  3. Bidirectional RNN + Bahdanau Attention + CRF, test accuracy 91%
  4. Char Ngrams + Bidirectional RNN + Bahdanau Attention + CRF, test accuracy 91%
  5. Char Ngrams + Bidirectional RNN + Bahdanau Attention + CRF, test accuracy 91%
  6. Char Ngrams + Residual Network + Bahdanau Attention + CRF, test accuracy 3%
  7. Char Ngrams + Attention is All you Need + CRF, test accuracy 89%
  8. BERT, test accuracy 99%
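The CRF layer in these taggers is decoded with the Viterbi algorithm at test time: pick the tag sequence that maximizes the sum of per-token emission scores and tag-to-tag transition scores. A NumPy sketch (score shapes are illustrative, not the notebooks' exact code):

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """emissions: (T, num_tags) unary scores from the RNN;
    transitions[i, j]: score of moving from tag i to tag j.
    Returns the highest-scoring tag sequence, as in CRF inference."""
    T, n = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        # total[i, j] = best score ending in tag i, then moving to tag j
        total = score[:, None] + transitions + emissions[t][None, :]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    tags = [int(score.argmax())]
    for t in range(T - 1, 0, -1):  # follow backpointers
        tags.append(int(back[t, tags[-1]]))
    return tags[::-1]
```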

Entity tagging

Trained on CONLL NER.

  1. Bidirectional RNN + CRF, test accuracy 96%
  2. Bidirectional RNN + Luong Attention + CRF, test accuracy 93%
  3. Bidirectional RNN + Bahdanau Attention + CRF, test accuracy 95%
  4. Char Ngrams + Bidirectional RNN + Bahdanau Attention + CRF, test accuracy 96%
  5. Char Ngrams + Bidirectional RNN + Bahdanau Attention + CRF, test accuracy 96%
  6. Char Ngrams + Residual Network + Bahdanau Attention + CRF, test accuracy 69%
  7. Char Ngrams + Attention is All you Need + CRF, test accuracy 90%
  8. BERT, test accuracy 99%
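The "Char Ngrams + …" variants augment word embeddings with character n-gram features, which helps with out-of-vocabulary entity names. A sketch of the extraction step (boundary markers and n-gram range are assumptions, not the notebooks' exact settings):

```python
def char_ngrams(word, n_min=2, n_max=3):
    """Character n-grams of a word with boundary markers < and >."""
    padded = f"<{word}>"
    return [
        padded[i:i + n]
        for n in range(n_min, n_max + 1)
        for i in range(len(padded) - n + 1)
    ]
```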

Dependency parsing

Trained on CONLL English Dependency.

  1. Bidirectional RNN + Bahdanau Attention + CRF
  2. Bidirectional RNN + Luong Attention + CRF
  3. Residual Network + Bahdanau Attention + CRF
  4. Residual Network + Bahdanau Attention + Char Embedded + CRF
  5. Attention is all you need + CRF

Question answering (SQuAD)

Trained on the SQuAD dataset.

  1. BERT, {"exact_match": 77.57805108798486, "f1": 86.18327335287402}

Question answering (bAbI)

Trained on the bAbI dataset.

  1. End-to-End Memory Network + Basic cell
  2. End-to-End Memory Network + GRU cell
  3. End-to-End Memory Network + LSTM cell
  4. Dynamic Memory

Lemmatization

Trained on an English lemmatization dataset.

  1. LSTM + Seq2Seq + Beam
  2. GRU + Seq2Seq + Beam
  3. LSTM + BiRNN + Seq2Seq + Beam
  4. GRU + BiRNN + Seq2Seq + Beam
  5. DNC + Seq2Seq + Greedy
  6. BiRNN + Bahdanau + Copynet

Abstractive summarization

Trained on India news.

Accuracy based on 10 epochs only, calculated using word positions.

  1. LSTM Seq2Seq using topic modelling, test accuracy 13.22%
  2. LSTM Seq2Seq + Luong Attention using topic modelling, test accuracy 12.39%
  3. LSTM Seq2Seq + Beam Decoder using topic modelling, test accuracy 10.67%
  4. LSTM Bidirectional + Luong Attention + Beam Decoder using topic modelling, test accuracy 8.29%
  5. Pointer-Generator + Bahdanau, https://github.com/xueyouluo/my_seq2seq, test accuracy 15.51%
  6. Copynet, test accuracy 11.15%
  7. Pointer-Generator + Luong, https://github.com/xueyouluo/my_seq2seq, test accuracy 16.51%
  8. Dilated Seq2Seq, test accuracy 10.88%
  9. Dilated Seq2Seq + Self Attention, test accuracy 11.54%
  10. BERT + Dilated Fairseq, test accuracy 13.5%
  11. self-attention + Pointer-Generator, test accuracy 4.34%
  12. Dilated-Fairseq + Pointer-Generator, test accuracy 5.57%
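"Accuracy calculated using word positions" can be read as the fraction of positions at which the predicted word matches the reference summary. A hypothetical reconstruction of such a metric (the repository's exact computation may differ):

```python
def position_accuracy(pred_tokens, ref_tokens):
    """Fraction of reference positions where the prediction matches."""
    matches = sum(p == r for p, r in zip(pred_tokens, ref_tokens))
    return matches / max(len(ref_tokens), 1)
```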

Extractive summarization

Trained on random books.

  1. Skip-thought Vector
  2. Residual Network using Atrous CNN
  3. Residual Network using Atrous CNN + Bahdanau Attention
OCR (optical character recognition)

  1. CNN + LSTM RNN

Trained on Cornell Movie--Dialogs Corpus

  1. BERT

Speech to text

Trained on Toronto speech dataset.

  1. Tacotron, https://github.com/Kyubyong/tacotron_asr
  2. Bidirectional RNN + Greedy CTC
  3. Bidirectional RNN + Beam CTC
  4. Seq2Seq + Bahdanau Attention + Beam CTC
  5. Seq2Seq + Luong Attention + Beam CTC
  6. Bidirectional RNN + Attention + Beam CTC
  7. Wavenet
  8. CNN encoder + RNN decoder + Bahdanau Attention
  9. CNN encoder + RNN decoder + Luong Attention
  10. Dilation CNN + GRU Bidirectional
  11. Deep speech 2
  12. Pyramid Dilated CNN
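The "Greedy CTC" notebooks decode by taking the argmax label at every frame, then collapsing the path: merge repeated symbols and drop blanks. A sketch of that collapse step (blank id 0 is an assumption):

```python
def ctc_greedy_collapse(path, blank=0):
    """Collapse a frame-level argmax path into output labels:
    merge consecutive repeats, then remove blank symbols."""
    out = []
    prev = None
    for p in path:
        if p != prev and p != blank:
            out.append(p)
        prev = p
    return out
```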

Text to speech

Trained on Toronto speech dataset.

  1. Tacotron, https://github.com/Kyubyong/tacotron
  2. Fairseq + Dilated CNN vocoder
  3. Seq2Seq + Bahdanau Attention
  4. Seq2Seq + Luong Attention
  5. Dilated CNN + Monotonic Attention + Dilated CNN vocoder
  6. Dilated CNN + Self Attention + Dilated CNN vocoder
  7. Deep CNN + Monotonic Attention + Dilated CNN vocoder
  8. Deep CNN + Self Attention + Dilated CNN vocoder

Vocoder

Trained on Toronto speech dataset.

  1. Dilated CNN

Generator

Trained on Shakespeare dataset.

  1. Character-wise RNN + LSTM
  2. Character-wise RNN + Beam search
  3. Character-wise RNN + LSTM + Embedding
  4. Word-wise RNN + LSTM
  5. Word-wise RNN + LSTM + Embedding
  6. Character-wise + Seq2Seq + GRU
  7. Word-wise + Seq2Seq + GRU
  8. Character-wise RNN + LSTM + Bahdanau Attention
  9. Character-wise RNN + LSTM + Luong Attention
  10. Word-wise + Seq2Seq + GRU + Beam
  11. Character-wise + Seq2Seq + GRU + Bahdanau Attention
  12. Word-wise + Seq2Seq + GRU + Bahdanau Attention
  13. Character-wise Dilated CNN + Beam search
  14. Transformer + Beam search
  15. Transformer XL + Beam search
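Character- and word-level generators like the ones above typically sample the next token from the softmax, often with a temperature knob: low temperature makes output conservative, high temperature more diverse. A NumPy sketch (the notebooks may sample differently; this is an illustration):

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample the next token id from raw logits at a given temperature."""
    if rng is None:
        rng = np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```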

Topic generator

Trained on Malaysia news.

  1. TAT-LSTM
  2. TAV-LSTM
  3. MTA-LSTM
  4. Dilated Fairseq

Language detection

Trained on Tatoeba dataset.

  1. Fast-text Char N-Grams

Text similarity

Trained on First Quora Dataset Release: Question Pairs.

  1. BiRNN + Contrastive loss, test accuracy 76.50%
  2. Dilated CNN + Contrastive loss, test accuracy 72.98%
  3. Transformer + Contrastive loss, test accuracy 73.48%
  4. Dilated CNN + Cross entropy, test accuracy 72.27%
  5. Transformer + Cross entropy, test accuracy 71.1%
  6. Transfer learning BERT base + Cross entropy, test accuracy 90%
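Notebooks 1-3 train with a contrastive loss over pairs of encoded questions: duplicate pairs are pulled together, non-duplicates pushed at least a margin apart. A NumPy sketch of the standard formulation (the margin value is illustrative):

```python
import numpy as np

def contrastive_loss(dist, label, margin=1.0):
    """dist: Euclidean distance between the two encoded questions;
    label: 1 if the pair are duplicates, 0 otherwise."""
    return label * dist**2 + (1 - label) * np.maximum(margin - dist, 0.0)**2
```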
Text augmentation

  1. Pretrained Glove
  2. GRU VAE-seq2seq-beam TF-probability
  3. LSTM VAE-seq2seq-beam TF-probability
  4. GRU VAE-seq2seq-beam + Bahdanau Attention TF-probability
  5. VAE + Deterministic Bahdanau Attention, https://github.com/HareeshBahuleyan/tf-var-attention
  6. VAE + VAE Bahdanau Attention, https://github.com/HareeshBahuleyan/tf-var-attention
Attention

  1. Bahdanau
  2. Luong
  3. Hierarchical
  4. Additive
  5. Soft
  6. Attention-over-Attention
  7. Bahdanau API
  8. Luong API
Visualization

  1. Attention heatmap on Bahdanau Attention
  2. Attention heatmap on Luong Attention
  3. BERT attention, https://github.com/hsm207/bert_attn_viz
  4. XLNET attention
Miscellaneous

  1. Markov chatbot
  2. Decomposition summarization (3 notebooks)