
AWE-model

This repository contains the code for our paper "AWE: Asymmetric Word Embedding for Textual Entailment".

Authors: Tengfei Ma, Chiamin Wu, Cao Xiao, Jimeng Sun
Our model achieves state-of-the-art performance on the SciTail textual entailment dataset: 84.4% accuracy.
We release both the code and the pretrained model.
This repository builds on the Theano code from the ACL 2018 paper
"End-Task Oriented Textual Entailment via Deep Explorations of Inter-Sentence Interactions" (DEISTE).

Description:

With our asymmetric word embedding method, models learn interactions between the premise and the hypothesis directly from the training data.
Adding these asymmetric relationships to two different models (DEISTE and vanilla Decomposable Attention)
improves the performance of both on two datasets (SciTail and SNLI).
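To give a rough sense of the idea, here is a minimal NumPy sketch (not the authors' exact formulation; the table names, sizes, and the dot-product scorer are illustrative assumptions). An asymmetric scheme keeps two separate embedding tables, one for premise words and one for hypothesis words, so the word-level interaction score is direction-dependent:

    import numpy as np

    VOCAB_SIZE, EMB_DIM = 10000, 300
    rng = np.random.default_rng(0)

    # Two separate embedding tables (hypothetical names): one for premise
    # words, one for hypothesis words. Because the tables differ, the
    # interaction score below changes when the two sentences are swapped.
    E_premise = rng.normal(scale=0.1, size=(VOCAB_SIZE, EMB_DIM))
    E_hypothesis = rng.normal(scale=0.1, size=(VOCAB_SIZE, EMB_DIM))

    def interaction(premise_ids, hypothesis_ids):
        """Word-by-word interaction matrix between a premise and a
        hypothesis, via dot products of their asymmetric embeddings."""
        P = E_premise[premise_ids]        # (len_p, EMB_DIM)
        H = E_hypothesis[hypothesis_ids]  # (len_h, EMB_DIM)
        return P @ H.T                    # (len_p, len_h)

    # Swapping premise and hypothesis gives a different matrix, unlike
    # a single shared (symmetric) embedding table.
    p_ids, h_ids = [5, 42, 7], [42, 99]
    print(interaction(p_ids, h_ids).shape)  # (3, 2)

In the paper's setup, such interaction features feed into an entailment model such as DEISTE or Decomposable Attention; the sketch above only illustrates the asymmetric lookup itself.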

Jupyter Notebook demo (HTML preview):

http://htmlpreview.github.io/?https://github.com/cwu392/AWE-model/blob/master/Final_84p4.html