Pinned Repositories
Co-Speech_Gesture_Generation
This is an implementation of "Robots Learn Social Skills: End-to-End Learning of Co-Speech Gesture Generation for Humanoid Robots".
dialogue-samples
Generates a sample of realistic dialogue utterances. Used for testing co-speech gesture generation.
emotion-recognition-using-speech
Building and training a Speech Emotion Recognizer that predicts human emotions using Python, scikit-learn and Keras
genea_visualizer
This repository provides scripts for visualizing BVH files. The scripts were developed for the GENEA Challenge 2020 and enable reproducing the visualizations used for the challenge stimuli. The server consists of several containers that are launched together with docker-compose.
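Since the server's containers are launched together with docker-compose, a typical invocation might look like the following. This is a minimal sketch, assuming the repository ships a docker-compose.yml at its root; consult the repository's README for the actual commands.

```shell
# Clone the visualizer and bring up all containers together.
# Assumes docker and docker-compose are installed, and that a
# docker-compose.yml sits at the repository root (an assumption).
git clone https://github.com/TomKingsfordUoA/genea_visualizer.git
cd genea_visualizer
docker-compose up --build   # builds images and starts every service defined in the compose file
```

Running all containers through one compose file keeps the inter-container networking and startup ordering declarative rather than manual.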
gesticulator
The official implementation for ICMI 2020 Best Paper Award "Gesticulator: A framework for semantically-aware speech-driven gesture generation"
MevonAI-Speech-Emotion-Recognition
Identify the emotion of multiple speakers in an audio segment
multimodal-speech-emotion-recognition
Lightweight and Interpretable ML Model for Speech Emotion Recognition and Ambiguity Resolution (trained on IEMOCAP dataset)
NaoGestures
This library aims to ease the realisation of non-verbal gestures on a SoftBank Robotics Nao robot.
naoqi_driver
C++ bridge based on libqi
social-reward-function
A dense multi-modal reward function for social robotics
TomKingsfordUoA's Repositories
TomKingsfordUoA/NaoGestures
This library aims to ease the realisation of non-verbal gestures on a SoftBank Robotics Nao robot.
TomKingsfordUoA/social-reward-function
A dense multi-modal reward function for social robotics
TomKingsfordUoA/Co-Speech_Gesture_Generation
This is an implementation of "Robots Learn Social Skills: End-to-End Learning of Co-Speech Gesture Generation for Humanoid Robots".
TomKingsfordUoA/dialogue-samples
Generates a sample of realistic dialogue utterances. Used for testing co-speech gesture generation.
TomKingsfordUoA/emotion-recognition-using-speech
Building and training a Speech Emotion Recognizer that predicts human emotions using Python, scikit-learn and Keras
TomKingsfordUoA/genea_visualizer
This repository provides scripts for visualizing BVH files. The scripts were developed for the GENEA Challenge 2020 and enable reproducing the visualizations used for the challenge stimuli. The server consists of several containers that are launched together with docker-compose.
TomKingsfordUoA/gesticulator
The official implementation for ICMI 2020 Best Paper Award "Gesticulator: A framework for semantically-aware speech-driven gesture generation"
TomKingsfordUoA/MevonAI-Speech-Emotion-Recognition
Identify the emotion of multiple speakers in an audio segment
TomKingsfordUoA/multimodal-speech-emotion-recognition
Lightweight and Interpretable ML Model for Speech Emotion Recognition and Ambiguity Resolution (trained on IEMOCAP dataset)
TomKingsfordUoA/naoqi_driver
C++ bridge based on libqi
TomKingsfordUoA/ResidualMaskingNetwork
Facial Expression Recognition using Residual Masking Network
TomKingsfordUoA/social-reward-dataset
TomKingsfordUoA/social-reward-function-reference-implementations
A collection of co-speech gesture generation models representing the state of the art.