Pinned Repositories
add_reverb2
Data augmentation: add reverb and noise to speech.
Decision-tree
K236 task
group12
tju_12
linan2
Config files for my GitHub profile.
pytorch-dialect-speech-classification
pytorch-dialect-speech-classification
tensorflow-1.4.0
An installed build of TensorFlow 1.4.0.
TensorFlow-speech-enhancement
DNN and RCED speech enhancement
TensorFlow-speech-enhancement-Chinese
Deep learning-based speech enhancement and dereverberation.
VAD_MATLAB
A simple VAD method
Voice-activity-detection-VAD-paper-and-code
Voice activity detection (VAD) papers and code (from the 1980s onward) and their classification.
linan2's Repositories
linan2/Voice-activity-detection-VAD-paper-and-code
Voice activity detection (VAD) papers and code (from the 1980s onward) and their classification.
linan2/TensorFlow-speech-enhancement-Chinese
Deep learning-based speech enhancement and dereverberation.
linan2/TensorFlow-speech-enhancement
DNN and RCED speech enhancement
linan2/VAD_MATLAB
A simple VAD method
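The repository itself is MATLAB; as a rough illustration of what a simple energy-based VAD can look like, here is a minimal Python sketch. The frame sizes, threshold, and noise-floor percentile below are assumptions for the example, not the repository's actual method.

```python
# Minimal energy-based VAD sketch (illustrative only).
import numpy as np

def energy_vad(x, sr, frame_ms=25, hop_ms=10, threshold_db=12.0):
    # Frame the signal and compute short-time log energy per frame.
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n_frames = 1 + max(0, (len(x) - frame) // hop)
    energies = np.array([
        10.0 * np.log10(np.sum(x[i * hop:i * hop + frame] ** 2) + 1e-12)
        for i in range(n_frames)
    ])
    # Frames well above a rough noise-floor estimate are marked as speech.
    noise_floor = np.percentile(energies, 10)
    return energies > noise_floor + threshold_db

if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr
    x = np.concatenate([0.01 * np.random.randn(sr), np.sin(2 * np.pi * 220 * t)])
    print(energy_vad(x, sr).astype(int))
```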
linan2/add_reverb2
Data augmentation: add reverb and noise to speech.
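For context, a minimal sketch of this kind of augmentation, assuming a pre-computed room impulse response (RIR) and a noise clip. The synthetic RIR, SNR value, and helper name below are illustrative placeholders, not the repository's actual scripts.

```python
# Sketch: convolve clean speech with an RIR, then add noise at a target SNR.
import numpy as np
from scipy.signal import fftconvolve

def add_reverb_and_noise(clean, rir, noise, snr_db=10.0):
    # Apply reverberation, then mix in noise scaled to the requested SNR.
    reverbed = fftconvolve(clean, rir)[:len(clean)]
    noise = np.resize(noise, len(reverbed))
    speech_pow = np.mean(reverbed ** 2)
    noise_pow = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(speech_pow / (noise_pow * 10 ** (snr_db / 10)))
    return reverbed + scale * noise

if __name__ == "__main__":
    sr = 16000
    clean = np.sin(2 * np.pi * 300 * np.arange(sr) / sr)      # 1 s toy "speech"
    decay = np.exp(-np.arange(int(0.3 * sr)) / (0.05 * sr))   # synthetic decaying RIR
    rir = decay * np.random.randn(int(0.3 * sr))
    noise = np.random.randn(2 * sr)
    noisy = add_reverb_and_noise(clean, rir, noise, snr_db=5.0)
    print(noisy.shape)
```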
linan2/Decision-tree
K236 task
linan2/group12
tju_12
linan2/linan2
Config files for my GitHub profile.
linan2/pytorch-dialect-speech-classification
pytorch-dialect-speech-classification
linan2/tensorflow-1.4.0
An installed build of TensorFlow 1.4.0.
linan2/A-Convolutional-Recurrent-Neural-Network-for-Real-Time-Speech-Enhancement
A minimal unofficial PyTorch implementation of "A Convolutional Recurrent Neural Network for Real-Time Speech Enhancement" (CRN).
linan2/Conv-TasNet-PyTorch
A PyTorch implementation of Conv-TasNet
linan2/Speech_Signal_Processing_and_Classification
Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step toward any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification, that is, developing two-class classifiers that can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject. The mathematical modeling of the speech production system in humans suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients are guaranteed to discriminate between the system (e.g., vocal tract) contribution and that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) can also be derived. These traditional features, so to speak, will be tested against agnostic features extracted by convolutional neural networks (CNNs) (e.g., auto-encoders) [4]. The pattern recognition step will be based on Gaussian mixture model (GMM) classifiers, K-nearest neighbor classifiers, Bayes classifiers, as well as deep neural networks. The Massachusetts Eye and Ear Infirmary dataset (MEEI dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources, such as KALDI, will be used toward achieving our goal. Comparisons will be made against [6-8].
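As a hedged sketch of the pipeline described above, the snippet below extracts frame-level MFCCs and classifies utterances with per-class GMMs. It uses librosa and scikit-learn rather than the repository's own library or KALDI, and the random "utterances" stand in for real MEEI recordings.

```python
# Sketch: MFCC features per utterance, one GMM per class, classify by
# average log-likelihood. Data here are random placeholders.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(y, sr=16000, n_mfcc=13):
    # Returns an (n_frames, n_mfcc) matrix of short-term features.
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_gmms(utterances_by_class, n_components=4):
    gmms = {}
    for label, utterances in utterances_by_class.items():
        feats = np.vstack([mfcc_features(y) for y in utterances])
        gmms[label] = GaussianMixture(n_components=n_components).fit(feats)
    return gmms

def classify(y, gmms):
    feats = mfcc_features(y)
    scores = {label: gmm.score(feats) for label, gmm in gmms.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = {
        "healthy": [rng.standard_normal(16000) for _ in range(3)],
        "disordered": [0.5 * rng.standard_normal(16000) for _ in range(3)],
    }
    gmms = train_gmms(data)
    print(classify(rng.standard_normal(16000), gmms))
```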
linan2/Tutorial_Separation
This repo summarizes tutorials, datasets, papers, code, and tools for the speech separation and speaker extraction tasks. You are kindly invited to submit pull requests.
linan2/Wave-U-Net-for-Speech-Enhancement
A PyTorch implementation of Wave-U-Net, adapted for speech enhancement.
linan2/-
linan2/acad-homepage.github.io
AcadHomepage: A Modern and Responsive Academic Personal Homepage
linan2/add_reverb
linan2/awesome-speech-enhancement
Speech enhancement, speech separation, and sound source localization.
linan2/beamforming
linan2/BSS_COLEGRAM
ICA, NMF, and JADE.
linan2/DNN_Localization_And_Separation
Speech Localization and Separation using DNNs
linan2/DNS-Challenge-2020
This repository is for the Deep Noise Suppression (DNS) Challenge. We are open-sourcing clean speech and noise files as well. Participants of this challenge will use the scripts from this repo to create data to train their noise suppressors. They will compare their method with our baseline noise suppressor and report the results.
linan2/espnet
End-to-End Speech Processing Toolkit
linan2/FinalYearProject
Speech enhancement using a Kalman filter (KF).
linan2/mic_array
DOA, VAD and KWS for ReSpeaker Microphone Array
linan2/nara_wpe
Different implementations of "Weighted Prediction Error" for speech dereverberation
linan2/rsrgan
Robust Speech Recognition Using Generative Adversarial Networks (GAN)
linan2/TJUThesis_master_2021
LaTeX template for Tianjin University doctoral and master's theses, revised to meet the 2021 requirements; it compiles directly on Overleaf. :star: A thesis written with this template was successfully submitted to the Tianjin University Library for archiving! (2021.12.24)
linan2/Waveforms-Speech-Enhancement-Use-TensorFlow