hnbrh
Assistant professor; Interested in cross-disciplinary research in NLP, languages, text and voice.
hnbrh's Stars
Idlak/Living-Audio-Dataset
A "Crowd-Built" continuously growing speech dataset with transcripts. The dataset contains multiple languages and is intended for anyone to be able to add to it.
hirofumi0810/asr_preprocessing
Python implementation of pre-processing for End-to-End speech recognition
Edresson/TTS-Portuguese-Corpus
Open Source Text-To-Speech Portuguese Dataset
gheyret/UQSpeechDataset
Uyghur Single Speaker Speech Dataset.
hnbrh/Speech-Command-Recognition-with-Capsule-Network
Speech command recognition with capsule network & various NNs / KWS on Google Speech Command Dataset.
vj-1988/AudioNet-V1
1D CNN based classifier for Speech Commands Dataset
okkyibrohim/id-multi-label-hate-speech-and-abusive-language-detection
The Dataset for Multi Label Hate Speech and Abusive Language Detection in Indonesian Twitter
cyrta/awesome-speech-enhancement
A curated list of awesome Speech Enhancement papers, libraries, datasets, and other resources.
ialfina/id-hatespeech-detection
The Dataset for Hate Speech Detection in Indonesian (Bahasa Indonesia)
kakaobrain/jejueo
Jejueo Datasets for Machine Translation and Speech Synthesis
jupiter126/Create_Speech_Dataset
Creates a speech dataset for deep learning
ainy/shershe
Speech recognition dataset based on a Russian audiobook, with sentence-level splits.
akhil2495/multi-modal-emotion-recognition
A repository for emotion recognition from speech, text, and mocap data from the IEMOCAP dataset
codersinthestorm/RecurrentNN_SpeechRecognition
A TensorFlow model to recognize words from Google's 30-word Speech Commands Dataset using an LSTM-based recurrent neural network.
aymeam/Datasets-for-Hate-Speech-Detection
Datasets for Hate Speech Detection
hnbrh/VERBO-emotional-speech-dataset
VERBO (Voice Emotion Recognition dataBase in pOrtuguese language)
hnbrh/GenderClassifierLibriSpeech
Gender Classification of the speaker from LibriSpeech Dataset
gionanide/Speech_Signal_Processing_and_Classification
Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step for any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification: developing two-class classifiers that can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject. The mathematical modeling of the human speech production system suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients are guaranteed to discriminate between the system (e.g., vocal tract) contribution and that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) can also be derived. These traditional features, so to speak, will be tested against agnostic features extracted by convolutional neural networks (CNNs) (e.g., auto-encoders) [4]. The pattern recognition step will be based on Gaussian mixture model classifiers, K-nearest neighbor classifiers, Bayes classifiers, as well as deep neural networks. The Massachusetts Eye and Ear Infirmary Dataset (MEEI-Dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources, such as KALDI, will be used toward achieving our goal. Comparisons will be made against [6-8].
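The all-pole (LPC) modeling described above can be sketched in a few lines: the Levinson-Durbin recursion solves the normal equations on a frame's autocorrelation sequence to obtain the prediction coefficients. The snippet below is a minimal, illustrative plain-Python version, not the repository's own library; the function names `autocorr` and `lpc` are made up for this example.

```python
def autocorr(x, lag):
    """Biased autocorrelation of a frame x at the given lag."""
    return sum(x[n] * x[n - lag] for n in range(lag, len(x)))

def lpc(x, order):
    """Compute LPC coefficients a[1..order] and the residual error
    via the Levinson-Durbin recursion on the autocorrelation sequence."""
    r = [autocorr(x, k) for k in range(order + 1)]
    a = [0.0] * (order + 1)   # a[0] is unused; predictor is sum a[j]*x[n-j]
    err = r[0]                # zeroth-order prediction error (frame energy)
    for i in range(1, order + 1):
        # Reflection coefficient for order i
        k = (r[i] - sum(a[j] * r[i - j] for j in range(1, i))) / err
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        err *= (1.0 - k * k)  # error shrinks with each added pole
    return a[1:], err

# Toy usage: a decaying exponential x[n] = 0.9 * x[n-1] is a pure AR(1)
# process, so a first-order LPC fit should recover a coefficient near 0.9.
frame = [0.9 ** n for n in range(200)]
coeffs, residual = lpc(frame, 1)
```
On this noiseless AR(1) toy frame, the single recovered coefficient comes out very close to 0.9, and the residual error is much smaller than the frame energy, which is the behavior the all-pole model predicts.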
Demfier/multimodal-speech-emotion-recognition
Lightweight and Interpretable ML Model for Speech Emotion Recognition and Ambiguity Resolution (trained on IEMOCAP dataset)
mravanelli/pySpeechRev
This Python code performs efficient speech reverberation, starting from a dataset of close-talking speech signals and a collection of acoustic impulse responses.
klintan/swedish-asr-dataset
Jupyter Notebooks for creating Speech datasets
Kyubyong/css10
CSS10: A Collection of Single Speaker Speech Datasets for 10 Languages
festvox/datasets-CMU_Wilderness
CMU Wilderness Multilingual Speech Dataset
Vicomtech/hate-speech-dataset
Hate speech dataset from Stormfront forum manually labelled at sentence level.
t-davidson/hate-speech-and-offensive-language
Repository for the paper "Automated Hate Speech Detection and the Problem of Offensive Language", ICWSM 2017
x4nth055/emotion-recognition-using-speech
Building and training a Speech Emotion Recognizer that predicts human emotions using Python, scikit-learn, and Keras