/SCIA-TRANSF


Sangkak-proposition

Transformers and the BERT Model: Transformers are a deep learning architecture that revolutionized natural language processing; their defining characteristic is the self-attention mechanism. The Transformer approach has proven to be among the best-performing solutions for language and NLP tasks. In this project we fine-tune a Transformer model (BERT) on our low-resource language, applying data augmentation to increase the size of the small training set.
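As a minimal sketch of the augmentation step, the snippet below expands a small corpus by generating noisy copies of each sentence via random word deletion and random word swap. This is a simple, generic augmentation scheme chosen for illustration; the actual strategy used in the notebooks may differ, and the example corpus is hypothetical.

```python
import random


def augment_sentence(sentence, n_aug=2, p_delete=0.1, seed=0):
    """Return n_aug noisy variants of `sentence` using two cheap operations:
    random word deletion (each word dropped with prob. p_delete) and a single
    random swap of two word positions. A fixed seed keeps results reproducible.
    """
    rng = random.Random(seed)
    words = sentence.split()
    augmented = []
    for _ in range(n_aug):
        # Randomly delete words; fall back to the full sentence if all dropped.
        kept = [w for w in words if rng.random() > p_delete] or words[:]
        if len(kept) > 1:
            # Swap two random positions to perturb word order.
            i, j = rng.sample(range(len(kept)), 2)
            kept[i], kept[j] = kept[j], kept[i]
        augmented.append(" ".join(kept))
    return augmented


# Hypothetical low-resource corpus: one original sentence expands to three
# training examples (the original plus two augmented variants).
corpus = ["the quick brown fox jumps"]
expanded = corpus + [a for s in corpus for a in augment_sentence(s)]
```

The expanded list can then be tokenized and fed to a standard BERT fine-tuning loop (e.g. with the Hugging Face `transformers` library); only the augmentation step is shown here.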