BERT-FINE-TUNING

BERT, which stands for Bidirectional Encoder Representations from Transformers, is a neural-network-based technique for natural language processing (NLP) pre-training. The breakthrough of BERT is its ability to train language models on the entire set of words in a sentence or query (bidirectional training), rather than the traditional approach of training on an ordered sequence of words (left to right, or a combination of left-to-right and right-to-left passes). This lets the language model learn a word's context from all of its surrounding words, not just the word that immediately precedes or follows it.
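The notebooks in this repository contain the actual training code; as a rough orientation, a minimal fine-tuning sketch might look like the following. It assumes the Hugging Face transformers library and a toy binary classification task; the model name bert-base-uncased, the example sentences, and the hyperparameters are illustrative only and are not taken from this repository.

```python
# Minimal BERT fine-tuning sketch (assumes the Hugging Face transformers
# library; the data, model name, and hyperparameters are illustrative).
import torch
from torch.optim import AdamW
from transformers import BertTokenizerFast, BertForSequenceClassification

# Hypothetical toy dataset: two sentences with binary sentiment labels.
texts = ["a great movie", "a terrible movie"]
labels = torch.tensor([1, 0])

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Tokenize the batch. Every token attends to tokens on both sides,
# which is the bidirectional context described above.
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):  # a few epochs is typically enough for fine-tuning
    optimizer.zero_grad()
    out = model(**enc, labels=labels)
    out.loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss = {out.loss.item():.4f}")
```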

Primary Language: Jupyter Notebook

This repository is not actively maintained.