# bert_for_longer_texts

A BERT classification model for texts longer than 512 tokens. The text is first split into smaller chunks; each chunk is fed to BERT, and the intermediate results are then pooled into a single prediction. The implementation supports fine-tuning.
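The chunk-and-pool idea can be sketched in plain Python. This is a minimal illustration, not the repository's actual API: the function names (`chunk_tokens`, `pool_scores`), the chunk size of 510 (leaving room for the `[CLS]` and `[SEP]` special tokens), and the overlap stride are assumptions for the example; mean pooling is one common choice for combining per-chunk outputs.

```python
def chunk_tokens(tokens, chunk_size=510, stride=255):
    """Split a token-ID sequence into overlapping chunks.

    chunk_size=510 leaves room for [CLS] and [SEP] in a 512-token
    BERT input; stride=255 gives ~50% overlap between chunks.
    These defaults are illustrative, not the repository's settings.
    """
    chunks = []
    start = 0
    while start < len(tokens):
        chunks.append(tokens[start:start + chunk_size])
        if start + chunk_size >= len(tokens):
            break  # the last chunk already covers the tail of the text
        start += stride
    return chunks


def pool_scores(chunk_scores):
    """Mean-pool per-chunk classification scores into one score."""
    return sum(chunk_scores) / len(chunk_scores)
```

In practice, each chunk would be wrapped with the special tokens, passed through the BERT model, and the resulting per-chunk probabilities (or logits) pooled; because pooling is differentiable, gradients flow back through every chunk, which is what makes fine-tuning on long texts possible.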
