liu-ioa's Stars
tensorflow/tensorflow
An Open Source Machine Learning Framework for Everyone
JorenSix/TarsosDSP
A Real-Time Audio Processing Framework in Java
capstone2watchout/Sound_Classification_Android
karolpiczak/echonet
Convolutional neural networks for sound classification
imfing/audio-classification
:musical_score: Environmental sound classification using Deep Learning with extracted features
aqibsaeed/Urban-Sound-Classification
Urban sound classification using Deep Learning
flutter/flutter
Flutter makes it easy and fast to build beautiful apps for mobile and beyond
gionanide/Speech_Signal_Processing_and_Classification
Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step for any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification: developing two-class classifiers that can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject. The mathematical modeling of the speech production system in humans suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients are guaranteed to discriminate between the system (e.g., vocal tract) contribution and that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) can also be derived. These, so to speak, traditional features will be tested against agnostic features extracted by convolutional neural networks (CNNs) (e.g., auto-encoders) [4]. The pattern recognition step will be based on Gaussian Mixture Model based classifiers, K-nearest neighbor classifiers, Bayes classifiers, as well as Deep Neural Networks. The Massachusetts Eye and Ear Infirmary Dataset (MEEI-Dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources, such as KALDI, will be used toward achieving our goal. Comparisons will be made against [6-8].
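The description above outlines the classic MFCC front-end: frame the signal, window it, take the short-term power spectrum, apply a mel-scale filterbank, log-compress, and decorrelate with a DCT. Below is a minimal NumPy/SciPy sketch of that pipeline (not code from the repository; all function names and parameter defaults here are illustrative choices, roughly matching common 16 kHz settings of 25 ms frames with 10 ms hop):

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sample_rate):
    # Triangular filters spaced uniformly on the mel scale.
    mel_points = np.linspace(hz_to_mel(0), hz_to_mel(sample_rate / 2),
                             n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    return fbank

def mfcc(signal, sample_rate=16000, frame_len=400, hop=160,
         n_fft=512, n_filters=26, n_ceps=13):
    # Split the signal into overlapping frames and apply a Hamming window.
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len]
                       for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # Short-term power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Mel filterbank energies, log-compressed, then DCT -> cepstral coefficients.
    energies = power @ mel_filterbank(n_filters, n_fft, sample_rate).T
    log_energies = np.log(np.maximum(energies, 1e-10))
    return dct(log_energies, type=2, axis=1, norm='ortho')[:, :n_ceps]

# Example: one second of a synthetic 440 Hz tone at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
feats = mfcc(np.sin(2 * np.pi * 440 * t), sample_rate=sr)
print(feats.shape)  # (98, 13): 98 frames, 13 cepstral coefficients each
```

The resulting per-frame feature matrix is the kind of input the GMM, K-nearest neighbor, Bayes, or DNN classifiers mentioned above would consume; production systems typically use Kaldi's or librosa's MFCC implementations instead of a hand-rolled one.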
Awesome-HarmonyOS/HarmonyOS
A curated list of awesome things related to HarmonyOS, Huawei's HarmonyOS operating system.
silentZCJ/Animation
qwd/OpenWeatherPlus-Android
An open-source weather app for Android (Weather Plus, Android edition) that ships with its own weather data.
wasabeef/awesome-android-ui
A curated list of awesome Android UI/UX libraries
kaldi-asr/kaldi
kaldi-asr/kaldi is the official location of the Kaldi project.
zw76859420/ASR_Theory
Speech recognition theory, papers, and slides.
magmaOffenburg/RoboViz
Monitor and visualization tool for the RoboCup 3D Soccer Simulation League
osrf/robocup_3d_simulation
A repository for Gazebo and ROS based robocup_3d_simulation.