mim-solutions/bert_for_longer_texts
A BERT classification model for texts longer than 512 tokens. The text is first divided into smaller chunks, each chunk is fed to BERT, and the intermediate results are pooled. The implementation supports fine-tuning.
Language: Python · License: NOASSERTION
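The chunk-and-pool idea in the description can be sketched in plain Python. This is a hedged illustration, not the repository's actual API: the function names, the 510-token window (leaving room for BERT's [CLS] and [SEP] within the 512-token limit), and the overlap stride are all assumptions chosen for clarity.

```python
# Illustrative sketch of splitting a long token sequence into overlapping
# chunks and pooling per-chunk classifier scores. All names and parameter
# values here are assumptions, not the repository's real interface.

def split_into_chunks(token_ids, size=510, stride=255):
    """Split a token-id sequence into overlapping windows of at most `size`.

    510 leaves room for the [CLS] and [SEP] special tokens that BERT
    adds, keeping each chunk within the 512-token limit.
    """
    if len(token_ids) <= size:
        return [token_ids]
    chunks = []
    start = 0
    while start < len(token_ids):
        chunks.append(token_ids[start:start + size])
        if start + size >= len(token_ids):
            break
        start += stride  # overlap so context is not cut at hard boundaries
    return chunks


def mean_pool(chunk_scores):
    """Average per-chunk classification scores into a single prediction."""
    n = len(chunk_scores)
    dims = len(chunk_scores[0])
    return [sum(scores[d] for scores in chunk_scores) / n for d in range(dims)]


# A 1200-token text becomes four overlapping 510-token (or shorter) chunks;
# each chunk would be run through BERT, and the outputs averaged.
chunks = split_into_chunks(list(range(1200)))
pooled = mean_pool([[0.2, 0.8], [0.6, 0.4]])
```

In practice the per-chunk scores would come from a BERT classifier head rather than the hard-coded lists above, and the same mean pooling can be applied to logits or probabilities.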
Stargazers
- adibenc (Jakarta)
- aieveryday
- Akhilesh64 (@klearnow)
- apacuk
- Bo-Feng-1024 (Harvard University)
- ChristineArnoldt (IBM)
- daydrill (BHSN.AI)
- drob-xx (Berkeley, CA)
- edpowers (Connecticut)
- FaysalSaber
- GanymedeNil (Beijing, China)
- gm-is
- hamzaben86
- ivywze
- JeffCarpenter (Canada)
- jeohalves (University of Luxembourg)
- JorgeOchoaReyes (NZQR)
- jstremme (Optum Labs)
- lig96 (Seoul, Korea)
- Marmann
- Marvinmw (Moon and Earth)
- maxschr90
- MissArtemis
- mistakia (around)
- MrBananaHuman
- mwachnicki (@mim-solutions)
- nicole-peltier
- noorhy (Universitas Muhammadiyah Malang)
- omoju (Oakland, CA)
- Talank (@JankariTech @owncloud)
- tanmaychimurkar (University of Zurich / ETH Zurich)
- tyronechen (Monash University)
- wwb306
- wygos
- youonf (Hong Kong)
- zhibo (Baidu)