Classic books (Baidu Cloud, extraction code: b5qq)
- 算法的乐趣 (The Joy of Algorithms). Original book link
- Long Short-term Memory. Link
- EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks. Link
- Efficient Estimation of Word Representations in Vector Space. Link
- Distributed Representations of Sentences and Documents. Link
- Enriching Word Vectors with Subword Information. Link. Commentary
- GloVe: Global Vectors for Word Representation. Official site
- ELMo (Deep contextualized word representations). Link
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Link
- XLNet: Generalized Autoregressive Pretraining for Language Understanding. Link
- A Sensitivity Analysis of (and Practitioners’ Guide to) Convolutional Neural Networks for Sentence Classification. Link
- Convolutional Neural Networks for Sentence Classification. Link
- Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification. Link
- A Deep Ensemble Model with Slot Alignment for Sequence-to-Sequence Natural Language Generation. Link
- SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient. Link
- Generative Adversarial Text to Image Synthesis. Link
- Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks. Link
- Learning Text Similarity with Siamese Recurrent Networks. Link
- A Question-Focused Multi-Factor Attention Network for Question Answering. Link
- The Design and Implementation of XiaoIce, an Empathetic Social Chatbot. Link
- A Knowledge-Grounded Neural Conversation Model. Link
- Neural Generative Question Answering. Link
- Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots. Link
- Modeling Multi-turn Conversation with Deep Utterance Aggregation. Link
- Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network. Link
- Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. Link
- Transformer (Attention Is All You Need). Link
- Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. Link
- Get To The Point: Summarization with Pointer-Generator Networks. Link
- Event Extraction via Dynamic Multi-Pooling Convolutional Neural Networks. Link
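The Transformer entries above all build on scaled dot-product attention. As a companion to the papers, here is a minimal pure-Python sketch for a single query over toy-sized key/value lists; the function and variable names are illustrative, not taken from any listed repository:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of raw scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def scaled_dot_product_attention(query, keys, values):
    """Attend one query vector over lists of key/value vectors.

    Scores are (q . k) / sqrt(d), softmax-normalized, then used as
    mixing weights over the value vectors.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    dim_v = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(dim_v)]
```

With a query that closely matches the first key, the output is dominated by the first value vector, which is the whole point of the mechanism.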
- The Illustrated Transformer. Blog post
- Attention-based-model. Link
- KL divergence. Link
- Building Autoencoders in Keras. Link
- Modern Deep Learning Techniques Applied to Natural Language Processing. Link
- Node2vec embeddings for graph data. Link
- Bert解读 (BERT explained). Link; Link
- XLNet:运行机制及和Bert的异同比较 (XLNet: how it works and how it differs from BERT). Link
- 难以置信!LSTM和GRU的解析从未如此清晰(动图+视频) (Unbelievable! LSTM and GRU have never been explained so clearly, with animations and video). Link
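For the KL divergence post above, the quantity itself fits in a few lines. A minimal sketch for discrete distributions, assuming q_i > 0 wherever p_i > 0:

```python
import math

def kl_divergence(p, q):
    """D_KL(p || q) = sum_i p_i * log(p_i / q_i), in nats.

    p and q are discrete distributions over the same support;
    terms with p_i == 0 contribute 0 by convention.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Note that KL divergence is asymmetric (D_KL(p || q) != D_KL(q || p) in general) and is zero exactly when the two distributions coincide.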
- fasttext(skipgram+cbow)
- gensim(word2vec)
- eda
- svm
- fasttext
- textcnn
- bilstm+attention
- rcnn
- han
- bilstm+crf
- siamese
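Of the techniques listed above, EDA is the simplest to sketch: two of its four operations (random swap and random deletion) need no synonym dictionary. A minimal version assuming whitespace-tokenized input; this is an illustration, not the code from any listed implementation:

```python
import random

def random_swap(words, n_swaps, rng=random):
    # Swap two randomly chosen token positions, n_swaps times.
    words = list(words)
    for _ in range(n_swaps):
        i, j = rng.randrange(len(words)), rng.randrange(len(words))
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p_delete, rng=random):
    # Drop each token with probability p_delete, keeping at least one.
    kept = [w for w in words if rng.random() > p_delete]
    return kept if kept else [rng.choice(words)]
```

The other two EDA operations (synonym replacement and random insertion) follow the same pattern but require a synonym resource such as WordNet.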
- keras-gpt-2. Link
- textClassifier. Link
- attention-is-all-you-need-keras. Link
- BERT_with_keras. Link
- ELMo-keras. Link
- SeqGAN. Link
- Association for Computational Linguistics. ACL
- Empirical Methods in Natural Language Processing. EMNLP
- International Conference on Computational Linguistics. COLING
- Neural Information Processing Systems. NIPS
- AAAI Conference on Artificial Intelligence. AAAI
- International Joint Conference on Artificial Intelligence. IJCAI
- International Conference on Machine Learning. ICML