cambridgeltl/mirror-bert
[EMNLP'21] Mirror-BERT: Converting Pretrained Language Models to universal text encoders without labels.
Python · MIT License
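Mirror-BERT converts an off-the-shelf masked language model into a text encoder with a label-free contrastive fine-tuning step. Below is a minimal sketch of encoding sentences with such a converted checkpoint via Hugging Face transformers; the checkpoint name and the mean-pooling choice are assumptions rather than what this repository necessarily ships, so check its README for the released models and the pooling used at training time.

```python
# Sketch: embed sentences with a Mirror-BERT-style encoder (assumed checkpoint name).
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "cambridgeltl/mirror-bert-base-uncased-sentence"  # assumption: substitute the released checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

sentences = ["A cat sits on the mat.", "A dog lies on the rug."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool token embeddings (ignoring padding) to get one vector per sentence.
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)

# Cosine similarity between the two sentence embeddings.
sim = torch.nn.functional.cosine_similarity(embeddings[0:1], embeddings[1:2])
print(sim.item())
```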
Stargazers
- afiaka87
- ast123
- c-simone · Sapienza NLP
- chiyuzhang94 · The University of British Columbia
- cs329yangzhong
- dandelin · @naver-ai
- daniel347x · Brattleboro, VT
- danlou
- dichen-cd · Hangzhou
- e-bug · University of Copenhagen
- fduMark
- fly51fly · PRIS
- guanfuchen · Zhejiang University
- GuanZhengChen
- hardyqr · Google DeepMind
- hellbell · Naver AI Lab
- himkt · Tokyo, Japan
- hongbinye · Zhejiang Lab
- huckiyang · NVIDIA Research
- jantpk · Beijing
- matejgj
- Mind-SLN
- mingbocui · Zürich
- mrzhang11 · Germany
- muhaochen
- nikitavoloboev · Tbilisi
- nzw0301 · Preferred Networks, Inc. / Preferred Elements, Inc.
- pashok3d · Warsaw
- Riccorl · PhD @SapienzaNLP
- sabirdvd
- shamy1997 · Harbin
- TheodoreGalanos · Austrian Institute of Technology
- WissamAntoun · Inria-ALMAnaCH
- wzhouad · Zoom
- yilunliao · MIT
- young-yhx