Social Intelligence: Emotion prediction in videos

Project members: Ved Phadke, Aaron Tae, Madelaine Leitman, Jonah Jung

Project lead: Yiling Yun

In this project, we created a Streamlit interface that predicts the emotion of every frame in a video using a series of deep-learning models designed for and trained on the VEATIC dataset. In addition, we evaluated the VEATIC model by examining its prediction performance across a variety of videos (single-character vs. multi-character, animals & animations vs. humans, landscape vs. portrait).
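To give a sense of the workflow, here is a minimal sketch of such a Streamlit flow. The `predict_emotion` function is a hypothetical stub standing in for the VEATIC-trained network; the file handling and per-frame loop use standard Streamlit and OpenCV calls.

```python
import tempfile

import cv2
import streamlit as st


def predict_emotion(frame):
    """Placeholder for the VEATIC-trained model (hypothetical stub)."""
    return 0.0


st.title("Frame-by-frame emotion prediction")

uploaded = st.file_uploader("Upload a video", type=["mp4", "mov", "avi"])
if uploaded is not None:
    # Streamlit yields a file-like object; OpenCV needs a path on disk.
    with tempfile.NamedTemporaryFile(delete=False, suffix=".mp4") as tmp:
        tmp.write(uploaded.read())
        video_path = tmp.name

    cap = cv2.VideoCapture(video_path)
    predictions = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        predictions.append(predict_emotion(frame))
    cap.release()

    st.line_chart(predictions)  # one predicted rating per frame
```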

Medium Article

We have published our findings on Medium; the article can be found here.

Original VEATIC

  • The original code is here.

  • The VEATIC dataset is here.

  • The pre-trained model is here.

Model evaluations

We collected a new set of videos and tested the performance of the model trained on the VEATIC dataset; a sketch of the category-wise evaluation appears after the list below.

  • The parameters we obtained from training on all 124 videos for 1-5 epochs are here.

  • The videos we collected to test model prediction performance on single-character vs. multi-character, animals & animations vs. humans, and landscape vs. portrait videos are here.
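To make the comparison concrete, here is a minimal sketch of a category-wise evaluation loop. It assumes each test video comes paired with per-frame ground-truth ratings and that a `predict_video` function (hypothetical) returns the model's per-frame predictions; the metrics actually reported in the project may differ.

```python
import numpy as np


def pearson_r(pred, truth):
    """Correlation between predicted and ground-truth per-frame ratings."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    return np.corrcoef(pred, truth)[0, 1]


def evaluate(videos_by_category, predict_video):
    """Average per-video correlation within each video category.

    `videos_by_category` maps a category name (e.g. "single-character")
    to a list of (video_path, ground_truth_ratings) pairs -- a structure
    assumed here for illustration.
    """
    results = {}
    for category, videos in videos_by_category.items():
        scores = [pearson_r(predict_video(path), truth)
                  for path, truth in videos]
        results[category] = float(np.mean(scores))
    return results
```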