facebookresearch/TaBERT
This repository contains source code for the TaBERT model, a pre-trained language model for learning joint representations of natural language utterances and (semi-)structured tables for semantic parsing. TaBERT is pre-trained on a massive corpus of 26M Web tables and their associated natural language context, and can be used as a drop-in replacement for a semantic parser's original encoder to compute representations for utterances and table schemas (columns).
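The sketch below illustrates this drop-in-encoder usage, loosely following the usage example in the project's documentation; it assumes the `table_bert` package is installed and a pre-trained checkpoint has been downloaded, and the checkpoint path, example table, and utterance are placeholders.

```python
from table_bert import TableBertModel, Table, Column

# Load a pre-trained TaBERT checkpoint (path is a placeholder).
model = TableBertModel.from_pretrained('path/to/pretrained/model/checkpoint.bin')

# Describe a (semi-)structured table: each Column carries a name, a type,
# and an optional sample value used when linearizing the schema.
table = Table(
    id='List of countries by GDP (PPP)',
    header=[
        Column('Nation', 'text', sample_value='United States'),
        Column('Gross Domestic Product', 'real', sample_value='21,439,453'),
    ],
    data=[
        ['United States', '21,439,453'],
        ['China', '27,308,857'],
        ['European Union', '22,774,165'],
    ],
).tokenize(model.tokenizer)

# A natural language utterance paired with the table.
context = 'show me countries ranked by GDP'

# encode() is batched: it takes lists of contexts and tables and returns
# contextualized representations of the utterance tokens and of each column,
# which a downstream semantic parser can consume in place of its own encoder.
context_encoding, column_encoding, info_dict = model.encode(
    contexts=[model.tokenizer.tokenize(context)],
    tables=[table],
)
```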