declare-lab/MELD

How can I tell which face in a video sample is speaking, so that I can extract my own visual features?

Opened this issue · 1 comment

First of all, I would like to congratulate you all on the great effort you put into creating the MELD dataset. I would also like to ask whether it is possible to obtain the facial landmarks (or any other kind of information) that would allow me to extract the face of the person actively speaking, as you did when extracting the features you provide.

The reason is that I would like to explore my own visual features.

Thanks in advance. Best regards from Valencia,

David

Hi,

I am also trying to detect the faces that are speaking in the video.
This research paper does something similar: https://arxiv.org/pdf/2101.03149
Here is the code implementation: https://github.com/facebookresearch/VisualVoice/tree/main
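Not an official MELD pipeline, but one common heuristic for picking the active speaker (and a simplified version of what audio-visual methods like the one linked above exploit) is audio-visual synchrony: among the face tracks detected in a clip, the speaking face is usually the one whose mouth-region motion correlates best with the audio energy envelope. A minimal sketch, assuming you have already extracted per-frame mouth-motion magnitudes for each face track and a frame-aligned audio energy signal (both hypothetical inputs here, produced by your own face tracker and audio front end):

```python
import numpy as np

def pick_active_speaker(mouth_motion, audio_energy):
    """Return the index of the face track whose mouth motion best
    correlates with the audio energy envelope.

    mouth_motion : (num_faces, num_frames) array, one row per detected
        face, holding per-frame mouth-region motion magnitudes.
    audio_energy : (num_frames,) audio energy envelope aligned to the
        video frames.
    """
    scores = []
    for track in mouth_motion:
        # A constant signal has zero variance, so correlation is
        # undefined; treat such tracks as non-speaking.
        if np.std(track) == 0 or np.std(audio_energy) == 0:
            scores.append(-1.0)
        else:
            # Pearson correlation between this face's mouth motion
            # and the audio energy over the clip.
            scores.append(np.corrcoef(track, audio_energy)[0, 1])
    return int(np.argmax(scores))

# Toy example: face 0 moves its mouth in sync with the audio,
# face 1 stays still, so face 0 is selected.
audio = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])
motion = np.array([
    [0.1, 0.9, 0.1, 0.8, 0.2, 0.9],  # face 0: synced with audio
    [0.5, 0.5, 0.5, 0.5, 0.5, 0.5],  # face 1: no mouth motion
])
print(pick_active_speaker(motion, audio))  # -> 0
```

This frame-level correlation is only a rough baseline; the VisualVoice repository linked above learns the audio-visual association with a neural network instead of a hand-crafted synchrony score.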