Transformers-Tutorials

Hi there!

This repository contains demos I made with the Transformers library by 🤗 HuggingFace.

Currently, it contains the following demos:

  • BERT (paper):
    • fine-tuning BertForTokenClassification on a named entity recognition (NER) dataset (a minimal loading sketch is shown after this list) Open In Colab
  • LayoutLM (paper):
    • fine-tuning LayoutLMForTokenClassification on the FUNSD dataset Open In Colab
    • fine-tuning LayoutLMForSequenceClassification on the RVL-CDIP dataset Open In Colab
    • adding image embeddings to LayoutLM during fine-tuning on the FUNSD dataset Open In Colab
  • TAPAS (paper):
  • Vision Transformer (paper):
    • fine-tuning ViTForImageClassification on CIFAR-10 using PyTorch Lightning Open In Colab
    • fine-tuning ViTForImageClassification on CIFAR-10 using the 🤗 Trainer Open In Colab
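
For a quick impression of the common pattern behind these notebooks, here is a minimal, hedged sketch: load a pretrained checkpoint with a task-specific head and run a forward pass. The checkpoint name, label count, and dummy labels are illustrative placeholders, not the exact values used in any notebook.

```python
# Minimal sketch of the fine-tuning setup used throughout these notebooks,
# shown here for token classification (NER) with BERT. The checkpoint name
# and num_labels are illustrative placeholders.
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForTokenClassification.from_pretrained(
    "bert-base-cased",
    num_labels=9,  # e.g. a BIO tagging scheme with 4 entity types + "O"
)

# Tokenize one sentence and run a forward pass with dummy labels to get a loss.
inputs = tokenizer("HuggingFace is based in New York City", return_tensors="pt")
labels = torch.zeros_like(inputs["input_ids"])  # dummy labels, one per token
outputs = model(**inputs, labels=labels)

print(outputs.loss)          # scalar training loss
print(outputs.logits.shape)  # (batch_size, sequence_length, num_labels)
```

In the notebooks themselves this forward pass is wrapped in a proper training loop (plain PyTorch, PyTorch Lightning, or the 🤗 Trainer, depending on the demo), but the model and tokenizer loading pattern is the same.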

If you have any questions regarding these demos, feel free to open an issue on this repository.

By the way, I was also the main contributor who added the Vision Transformer (ViT) by Google AI, Data-efficient Image Transformers (DeiT) by Facebook AI, Table Parsing (TAPAS) by Google AI, and LUKE by Studio Ousia to the library, and each of them was an incredible learning experience. I can recommend contributing an AI algorithm to the library to anyone!