jwieting/iclr2016
Python code for training all models from the ICLR 2016 paper "Towards Universal Paraphrastic Sentence Embeddings". These models achieve strong performance on semantic similarity tasks without any training or tuning on those tasks' training data, produce features at least as discriminative as skip-thought vectors for semantic similarity, and can achieve state-of-the-art results on entailment and sentiment tasks.
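
The paper's simplest and strongest model represents a sentence as the average of its word vectors, with sentence pairs scored by cosine similarity. Below is a minimal illustrative sketch of that idea, not this repository's actual API; the names `embed`, `similarity`, and `word_vectors` (a word-to-vector dict) are assumptions for the example.

```python
import numpy as np

def embed(sentence, word_vectors, dim=300):
    """Embed a sentence by averaging the vectors of its in-vocabulary words.

    Illustrative sketch only; `word_vectors` is assumed to map word -> np.ndarray.
    """
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

def similarity(s1, s2, word_vectors):
    """Score two sentences with cosine similarity of their averaged embeddings."""
    a, b = embed(s1, word_vectors), embed(s2, word_vectors)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```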