envisionBOX_modulesSAGA

modules for envisionbox with SAGA data


Practice dataset for the envisionBOX

This repository provides an overview of a dataset so one can start practicing implementing multimodal signal processing pipelines. Please see the instruction video.

Overview

https://envisionbox.org/embedded_SAGApractice_featureextraction.html

SAGA corpus

  • Lücking, A., Bergmann, K., Hahn, F., Kopp, S., & Rieser, H. (2013). Data-based analysis of speech and gesture: The Bielefeld Speech and Gesture Alignment Corpus (SaGA) and its applications. Journal on Multimodal User Interfaces, 7, 5-18.
  • Lücking, A., Bergmann, K., Hahn, F., Kopp, S., & Rieser, H. (2010). The Bielefeld speech and gesture alignment corpus (SaGA). In LREC 2010 workshop: Multimodal corpora–advances in capturing, coding and analyzing multimodality.

Privacy and sharing

A representative of the original SaGA team has communicated to us that the videos and secondary data can be shared in a fully anonymized version, which means obscuring the face as well as the voice.
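As an illustration of what the video side of such anonymization could involve, here is a minimal sketch that box-blurs a hypothetical face bounding box in a single frame, using only NumPy. This is not the SaGA team's procedure: a real pipeline would detect faces per frame (e.g., with a dedicated face detector) and would additionally transform the audio track to obscure the voice.

```python
import numpy as np

def blur_region(frame, box, k=9):
    """Box-blur the region box = (x, y, w, h) of an H x W x 3 frame.

    A crude stand-in for face obscuring: each pixel in the region is
    replaced by the mean of its k x k neighbourhood (edge-padded).
    """
    x, y, w, h = box
    region = frame[y:y + h, x:x + w].astype(float)
    pad = k // 2
    # pad so the averaging window never leaves the region
    padded = np.pad(region, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(region)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + region.shape[0], dx:dx + region.shape[1]]
    result = frame.copy()
    result[y:y + h, x:x + w] = (out / (k * k)).astype(frame.dtype)
    return result

# synthetic 100x100 "frame" with a bright square standing in for a face
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[30:60, 30:60] = 255
blurred = blur_region(frame, (20, 20, 50, 50), k=9)
```

In practice one would run this over every frame of the video and pick the box from a face detector rather than hard-coding it, but the sketch shows the core idea: the original pixel values inside the box are no longer recoverable at full resolution.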