# Image_Caption_Generator_With_Transformers

This repository contains code for generating captions for images using a Transformer-based model. The model used is the `VisionEncoderDecoderModel` from the Hugging Face Transformers library, specifically the `nlpconnect/vit-gpt2-image-captioning` model.
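As a minimal sketch of how that model is typically used (assuming the `transformers` and `Pillow` packages are installed; the helper name `caption_image` is illustrative, not taken from this repository):

```python
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer


def caption_image(path: str, max_length: int = 16, num_beams: int = 4) -> str:
    """Generate a caption for the image at `path` using ViT-GPT2."""
    model_id = "nlpconnect/vit-gpt2-image-captioning"
    # Load the encoder-decoder model plus its image processor and tokenizer
    model = VisionEncoderDecoderModel.from_pretrained(model_id)
    processor = ViTImageProcessor.from_pretrained(model_id)
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    # Preprocess the image into pixel values the ViT encoder expects
    image = Image.open(path).convert("RGB")
    pixel_values = processor(images=image, return_tensors="pt").pixel_values

    # Beam-search decode a caption with the GPT-2 decoder
    output_ids = model.generate(pixel_values, max_length=max_length, num_beams=num_beams)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Note that the first call downloads the model weights from the Hugging Face Hub, so it may take a while; subsequent calls use the local cache.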

Primary language: Jupyter Notebook. License: MIT.
