This repository holds models and solutions developed by Pablo Barros for affective recognition and learning.
## Individual Projects
### Prerequisites

`tensorflow`, `keras`, `matplotlib`, `h5py`, `opencv-python`, `librosa`, `pillow`, `imgaug`, `python_speech_features`, `hyperas`, `dlib`

To run on a GPU, install `tensorflow-gpu` instead of `tensorflow`.
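Assuming a standard pip setup, the prerequisites above can be installed in one step (a sketch only; exact versions may need pinning for Python 2.7 compatibility):

```shell
pip install tensorflow keras matplotlib h5py opencv-python librosa \
    pillow imgaug python_speech_features hyperas dlib
```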
### Instructions

Each of the examples here runs within the KEF framework. Each example also needs a specific dataset, which is not available here. All the demos and examples run on Python 2.7.
### Hand gesture recognition
- `NCD_VisionNetwork_SobelXY.py`: Multichannel Convolutional Neural Network for hand posture recognition using the NCD dataset (Barros et al., 2014)
### Auditory emotion recognition
- `OMG_Emotion_Audio_MelSpectrum.py`: Audio Channel for the OMG-Emotion dataset (Barros et al., 2018)
- `RAVDESS_Audio_MelSpectrum_Channel.py`: Audio Channel for the RAVDESS dataset (Barros et al., 2018)
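The audio channels consume log-mel spectrograms of the speech signal. As an illustration, here is a minimal NumPy-only sketch of how such a feature can be computed; the actual scripts use `librosa` / `python_speech_features`, and every parameter value below (sample rate, FFT size, number of mel bands) is an assumption, not the repository's configuration:

```python
import numpy as np

def log_mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=40):
    """Compute a log-mel spectrogram from a mono waveform (NumPy-only sketch)."""
    # Short-time Fourier transform: frame, window, FFT, power.
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] for i in range(n_frames)])
    window = np.hanning(n_fft)
    power = np.abs(np.fft.rfft(frames * window, axis=1)) ** 2

    # Triangular mel filterbank spanning 0 Hz to Nyquist.
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)

    mel_energy = power @ fbank.T          # shape: (frames, n_mels)
    return np.log(mel_energy + 1e-10)     # log compression

# One second of a 440 Hz tone as a stand-in waveform.
t = np.linspace(0, 1, 16000, endpoint=False)
spec = log_mel_spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # one row per frame, one column per mel band
```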
### Visual emotion recognition
- `OMG_Emotion_Face.py`: Face Channel for the OMG-Emotion dataset (Barros et al., 2018)
- `FERPlus_Vision_FaceChannel.py`: Face Channel for the FERPlus dataset (Barros et al., 2018)
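The face channels take small normalized face crops as input. The following NumPy-only sketch shows the typical preparation steps (grayscale, square crop, resize, normalize); the real scripts rely on OpenCV/dlib for detection and resizing, and the 64x64 target size is an illustrative assumption:

```python
import numpy as np

def preprocess_face(image, size=64):
    """Prepare an RGB face image for a vision channel (NumPy-only sketch)."""
    # Grayscale via standard luminance weights.
    gray = image @ np.array([0.299, 0.587, 0.114])
    # Center square crop.
    h, w = gray.shape
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    gray = gray[top:top + side, left:left + side]
    # Block-average resize (assumes the crop side is a multiple of `size`).
    factor = side // size
    gray = gray[:factor * size, :factor * size]
    gray = gray.reshape(size, factor, size, factor).mean(axis=(1, 3))
    # Normalize pixel intensities to [0, 1].
    return gray / 255.0

# A random image standing in for a detected face region.
face = np.random.randint(0, 256, (128, 160, 3)).astype(float)
x = preprocess_face(face)
print(x.shape)
```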
### Crossmodal emotion recognition
- `OMG_Emotion_Crossmodal.py`: Cross Channel for the OMG-Emotion dataset (Barros et al., 2018)
- `RAVDESS_CrossNetwork_MelSpectrum_Channel.py`: Cross Channel for the RAVDESS dataset (Barros et al., 2018)
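Conceptually, a cross channel fuses the per-modality representations into a joint one. The toy sketch below illustrates the idea with a concatenation followed by a single dense layer; the feature sizes, the random weights, and the single-layer fusion are all illustrative assumptions, whereas the actual Cross Channel is a trained Keras model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for embeddings produced by the audio and face channels.
audio_feat = rng.standard_normal(128)
vision_feat = rng.standard_normal(256)

# Late fusion: concatenate modality features into one vector.
joint = np.concatenate([audio_feat, vision_feat])   # shape: (384,)

# One hypothetical dense layer with ReLU, standing in for the fusion stage.
W = rng.standard_normal((64, joint.size)) * 0.05
b = np.zeros(64)
fused = np.maximum(W @ joint + b, 0.0)
print(fused.shape)
```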
### Trained Models
Each of the examples has a pre-trained model associated with it. Please refer to the `TrainedModels` folder.
### Ready-to-Run Demos
## Associated Projects
Besides the individual projects described here, I have also developed other projects related to affective perception and understanding:
- Affective Modeling for Multiagent Learning (Barros et al., 2020) - https://github.com/pablovin/ChefsHatGYM
- Learning Personalized Affective Representation (Barros et al., 2019) - https://github.com/pablovin/P-AffMem
- Facial Expression Editing (Lindt et al., 2019) - https://github.com/pablovin/FaceEditing_ContinualGAN
## Datasets

Below are links to different corpora that I developed or helped develop. Most of the examples here make use of these corpora:
- OMG-Empathy Prediction
- OMG-Emotion Recognition
- Gesture Commands for Robot InTeraction (GRIT)
- NAO Camera hand posture Database (NCD)
## Important references
- Barros, P., Churamani, N., & Sciutti, A. (2020). The FaceChannel: A Light-weight Deep Neural Network for Facial Expression Recognition. arXiv preprint arXiv:2004.08195.
- Barros, P., Parisi, G., & Wermter, S. (2019, May). A Personalized Affective Memory Model for Improving Emotion Recognition. In International Conference on Machine Learning (pp. 485-494).
- Barros, P., Barakova, E., & Wermter, S. (2018). A Deep Neural Model Of Emotion Appraisal. arXiv preprint arXiv:1808.00252.
- Barros, P., & Wermter, S. (2016). Developing crossmodal expression recognition based on a deep neural model. Adaptive behavior, 24(5), 373-396. http://journals.sagepub.com/doi/full/10.1177/1059712316664017
- Barros, P., & Wermter, S. (2017, May). A self-organizing model for affective memory. In Neural Networks (IJCNN), 2017 International Joint Conference on (pp. 31-38). IEEE.
- Barros, P., Jirak, D., Weber, C., & Wermter, S. (2015). Multimodal emotional state recognition using sequence-dependent deep hierarchical features. Neural Networks, 72, 140-151.
- Barros, P., Magg, S., Weber, C., & Wermter, S. (2014, September). A multichannel convolutional neural network for hand posture recognition. In International Conference on Artificial Neural Networks (pp. 403-410). Springer, Cham.
- All the references
## License

All the examples in this repository are distributed under the Creative Commons CC BY-NC-SA 3.0 DE license. By using these examples, you agree to the following terms:
- To cite our associated references in any publication that makes use of these examples.
- To use the examples for research purposes only.
- To not provide the examples to any third parties.
## Contact
Pablo Barros - pablo.alvesdebarros@iit.it