
Multimodal Machine Learning for Human Behaviour Understanding 🤖

This repo contains all the resources required for the AMLD 2024 workshop - Multimodal Machine Learning for Human Behaviour Understanding.

The goal of this workshop is to introduce Multimodal Machine Learning for human-centric AI applications. We will provide a brief overview of processing different modalities such as video, audio, and text, along with practical, hands-on code to extract features from these modalities in the Colab notebook below.
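As a taste of the kind of feature extraction covered in the notebook, here is a minimal sketch of two classic frame-level audio features, short-time energy and zero-crossing rate, computed with NumPy on a synthetic waveform. This is an illustrative example, not the workshop's actual pipeline; the frame and hop sizes (25 ms / 10 ms at 16 kHz) are common defaults, not values taken from the notebook.

```python
import numpy as np

def frame_signal(y, frame_len=400, hop=160):
    # Slice the waveform into overlapping frames
    # (25 ms frames with a 10 ms hop at 16 kHz).
    n_frames = 1 + (len(y) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    return y[idx]

def audio_features(y):
    frames = frame_signal(y)
    # Short-time energy: mean squared amplitude per frame.
    energy = (frames ** 2).mean(axis=1)
    # Zero-crossing rate: fraction of sample pairs whose sign flips.
    zcr = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)
    return np.stack([energy, zcr], axis=1)

sr = 16000
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 440 * t)  # 1 second of a 440 Hz tone
feats = audio_features(y)        # shape: (n_frames, 2)
print(feats.shape)
```

Each row of `feats` describes one frame, so the result can be fed to any sequence model or pooled into a clip-level descriptor, which is the general pattern used for the other modalities as well.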

Follow along with the workshop using the Colab notebook.

Slides

Find the slides for the workshop here.

Cite

If you use the data in your research, please consider citing:

@inproceedings{raoAutomaticAssessmentCommunication2017,
	address = {New York, NY, USA},
	title = {Automatic assessment of communication skill in non-conventional interview settings: a comparative study},
	isbn = {978-1-4503-5543-8},
	url = {https://doi.org/10.1145/3136755.3136756},
	doi = {10.1145/3136755.3136756},
	urldate = {2023-07-20},
	booktitle = {Proceedings of the 19th {ACM} {International} {Conference} on {Multimodal} {Interaction}},
	publisher = {Association for Computing Machinery},
	author = {Rao, Pooja S. B. and Rasipuram, Sowmya and Das, Rahul and Jayagopi, Dinesh Babu},
	month = nov,
	year = {2017},
	keywords = {Multimodal data analysis, communication skills, interface-based interviews, skill assessment},
	pages = {221--229},
}