mim-solutions/bert_for_longer_texts
BERT classification model for processing texts longer than 512 tokens. The text is first divided into smaller chunks; each chunk is fed to BERT, and the intermediate results are then pooled. The implementation allows fine-tuning.
Language: Python · License: NOASSERTION
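The chunk-and-pool approach described above can be sketched in plain Python. The chunk size (510 body tokens plus `[CLS]`/`[SEP]`), the stride, the BERT token ids 101/102, and mean pooling are illustrative assumptions, not the repository's exact defaults:

```python
def chunk_token_ids(token_ids, chunk_size=510, stride=255,
                    cls_id=101, sep_id=102):
    """Split token ids into overlapping chunks that fit BERT's 512 limit.

    Each chunk gets its own [CLS]/[SEP] markers, so the body holds at
    most 510 tokens. `stride` controls how far consecutive chunks are
    shifted, i.e. how much they overlap.
    """
    chunks = []
    start = 0
    while start < len(token_ids):
        body = token_ids[start:start + chunk_size]
        chunks.append([cls_id] + body + [sep_id])
        if start + chunk_size >= len(token_ids):
            break
        start += stride
    return chunks


def pool_chunk_probs(chunk_probs):
    """Mean-pool per-chunk predicted probabilities into one score.

    In the full model each chunk would be passed through BERT and a
    classification head first; here we only show the pooling step.
    """
    return sum(chunk_probs) / len(chunk_probs)
```

In practice the chunks would be padded, batched, and run through a fine-tuned BERT before pooling; mean pooling is one common choice, max pooling another.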
Issues
Loss function and optimizer as parameters
#43 opened by mwachnicki - 5
RuntimeError with the following message: "mat1 and mat2 shapes cannot be multiplied (2x512 and 768x1)"
#36 opened by GabrieleAraujo - 1
A few general questions
#30 opened by matteobrv - 1
plz help me
#26 opened by cwoonb - 2
text length warning
#25 opened by cwoonb - 1
use it for multiclass classification
#22 opened by samanjoy2 - 1
Any example colab scripts to fine tune BERT variations for text multi-class classification tasks?
#19 opened by sjinjala23 - 1
Outputting Attentions
#20 opened by jeffyjeff2893 - 1
QnA system using BERT
#15 opened by Amrutha1101 - 1
MaskedLM for longer texts
#8 opened by AsmaBaccouche - 4
Obtain embedding vectors
#1 opened by rominaappierdo - 1
Split Sizes Throws Error
#3 opened by jstremme - 0