
CLIP-E

This repository contains the trained models described in our paper "On the use of Vision-Language models for Visual Sentiment Analysis: a study on CLIP", published at the International Conference on Affective Computing and Intelligent Interaction (ACII 2023).

Inference using CLIP-E Contrastive: CLIP_E_inference_Contrastive.ipynb
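The notebook implements the actual procedure; as a rough orientation, below is a minimal sketch of contrastive-style inference, assuming the standard openai/CLIP API, a placeholder ViT-B/32 backbone, and hypothetical sentiment prompts (the released checkpoint, prompt set, and class labels may differ).

```python
# Minimal sketch of CLIP contrastive inference for sentiment.
# Backbone, prompts, and image path are illustrative placeholders;
# see CLIP_E_inference_Contrastive.ipynb for the actual procedure.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)  # placeholder backbone

# Hypothetical sentiment prompts; the paper's prompt set may differ.
prompts = ["a photo evoking a positive feeling",
           "a photo evoking a negative feeling"]
text = clip.tokenize(prompts).to(device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize so the dot product is a cosine similarity
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    # Similarity between the image and each sentiment prompt, as probabilities
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print({p: float(s) for p, s in zip(prompts, probs[0])})
```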

Inference using CLIP-E Crossentropy: CLIP_E_inference_Crossentropy.ipynb
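For the cross-entropy variant, a rough sketch of inference with a classification head on top of CLIP image features is shown below; the head shape, checkpoint name, and class count are hypothetical, and the released notebook is the authoritative reference.

```python
# Minimal sketch of cross-entropy-style inference: a linear head over
# CLIP image features. Dimensions and checkpoint path are illustrative;
# see CLIP_E_inference_Crossentropy.ipynb for the released model.
import torch
import torch.nn as nn
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
backbone, preprocess = clip.load("ViT-B/32", device=device)  # placeholder backbone

# Hypothetical head: 512-d ViT-B/32 features -> 2 sentiment classes.
head = nn.Linear(512, 2).to(device)
# head.load_state_dict(torch.load("clip_e_head.pt"))  # hypothetical checkpoint

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
with torch.no_grad():
    features = backbone.encode_image(image).float()  # cast from fp16 on GPU
    probs = head(features).softmax(dim=-1)

print(probs)
```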