2624428587's Stars
yxlu-0102/MP-SENet
Explicit Estimation of Magnitude and Phase Spectra in Parallel for High-Quality Speech Enhancement
caoruitju/RUI_SE
VOICOR: A Residual Iterative Voice Correction Framework for Monaural Speech Enhancement
sweetcocoa/DeepComplexUNetPyTorch
Implementation of Deep Complex UNet Using PyTorch
ShiArthur03/ShiArthur03
snuhcs/Papez
Papez: Resource-Efficient Speech Separation with Auditory Working Memory (ICASSP 2023)
pragyak412/Improving-Voice-Separation-by-Incorporating-End-To-End-Speech-Recognition
Implementation of the paper "Improving Voice Separation by Incorporating End-to-End Speech Recognition"
etzinis/fedenhance
Code for the paper "Separate But Together: Unsupervised Federated Learning for Speech Enhancement from Non-IID Data"
sp-uhh/deep-non-linear-filter
Andong-Li-speech/EaBNet
The repo for the manuscript "Embedding and Beamforming: All-Neural Causal Beamformer for Multichannel Speech Enhancement", submitted to ICASSP 2022.
jakugel/unet-variants
TensorFlow 2 code for direct comparisons of several U-Net variants: base, attention, dense, ++, squeeze-excite, inception, residual, and recurrent-residual.
Audio-WestlakeU/McNet
The official repo: "McNet: Fuse Multiple Cues for Multichannel Speech Enhancement", ICASSP 2023
microsoft/DNS-Challenge
This repo contains the scripts, models, and required files for the Deep Noise Suppression (DNS) Challenge.
ModarHalimeh/COSPA
Complex-valued Spatial Autoencoders for Multichannel Speech Enhancement
Audio-WestlakeU/NBSS
The official repo of NBC & SpatialNet for multichannel speech separation, denoising, and dereverberation
facebookresearch/svoice
A PyTorch implementation of the paper "Voice Separation with an Unknown Number of Multiple Speakers". The method separates a mixed audio sequence in which multiple voices speak simultaneously, using gated neural networks trained to separate the voices over multiple processing steps while keeping the speaker assigned to each output channel fixed. A separate model is trained for each possible number of speakers, and the model with the largest speaker count is used to select the actual number of speakers in a given sample. The method greatly outperforms the prior state of the art, which the authors show is not competitive for more than two speakers.
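The speaker-count selection described above (run the model trained for the largest number of speakers, then decide how many speakers are actually present) can be sketched as an activity check on the output channels. This is a hedged illustration only: `estimate_num_speakers` and the energy-threshold criterion are assumptions, not the repository's actual selection rule.

```python
import numpy as np

def estimate_num_speakers(separated, energy_threshold=1e-3):
    """Count output channels with non-negligible energy.

    separated: array of shape (max_speakers, samples) -- the output of the
    model trained for the largest speaker count. Channels that carry (near)
    silence are assumed to correspond to absent speakers. The threshold is
    an illustrative heuristic, not the svoice criterion.
    """
    channel_energy = (separated ** 2).mean(axis=-1)  # mean power per channel
    return int((channel_energy > energy_threshold).sum())

# Demo: a 4-channel output where only two channels contain signal.
rng = np.random.default_rng(0)
out = np.zeros((4, 16000))
out[0] = rng.standard_normal(16000)
out[1] = rng.standard_normal(16000)
print(estimate_num_speakers(out))  # 2
```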
f90/Wave-U-Net
Implementation of the Wave-U-Net for audio source separation
Audio-WestlakeU/FullSubNet
PyTorch implementation of "FullSubNet: A Full-Band and Sub-Band Fusion Model for Real-Time Single-Channel Speech Enhancement."
ujscjj/DPTNet
wenet-e2e/wenet
Production First and Production Ready End-to-End Speech Recognition Toolkit
XiangzhuKong/CA-Dense-UNet
An unofficial code reproduction of Channel Attention Dense U-Net for Multichannel Speech Enhancement
felixfuyihui/Uformer
Uformer: A Unet based dilated complex & real dual-path conformer network for simultaneous speech enhancement and dereverberation
jwr1995/DTCN
YUCHEN005/Unified-Enhance-Separation
Code for paper "Unifying Speech Enhancement and Separation with Gradient Modulation for End-to-End Noise-Robust Speech Separation"
aleXiehta/WaveCRN
WaveCRN: An Efficient Convolutional Recurrent Neural Network for End-to-end Speech Enhancement
ShiZiqiang/dual-path-RNNs-DPRNNs-based-speech-separation
A PyTorch implementation of dual-path RNNs (DPRNNs) based speech separation described in "Dual-path RNN: efficient long sequence modeling for time-domain single-channel speech separation".
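The "dual path" idea behind DPRNN is to fold a long time-domain sequence into overlapping chunks, then alternate an intra-chunk RNN (within each short chunk) with an inter-chunk RNN (across chunks), so no single RNN ever processes the full sequence length. A minimal sketch of the segmentation step, with illustrative names (`segment` is not from the DPRNN codebase):

```python
import numpy as np

def segment(x, chunk_size, hop):
    """Fold a (frames, features) sequence into (num_chunks, chunk_size, features).

    A dual-path RNN would then run an intra-chunk RNN along axis 1 and an
    inter-chunk RNN along axis 0, alternating between the two, so each RNN
    only ever sees a short sequence (chunk_size or num_chunks steps).
    """
    frames = x.shape[0]
    num_chunks = (frames - chunk_size) // hop + 1
    return np.stack([x[i * hop : i * hop + chunk_size] for i in range(num_chunks)])

# Demo: 10 frames of 2 features, chunks of 4 frames with 50% overlap.
x = np.arange(20, dtype=float).reshape(10, 2)
chunks = segment(x, chunk_size=4, hop=2)
print(chunks.shape)  # (4, 4, 2)
```

With 50% overlap, both chunk count and chunk length grow roughly as the square root of the sequence length, which is what makes this efficient for very long time-domain inputs.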
jyhan03/icassp22-dataset
Dataset simulation for DPCCN.
Sinica-SLAM/CasNet
maggie0830/DCCRN
A PyTorch implementation of "DCCRN: Deep Complex Convolution Recurrent Network for Phase-Aware Speech Enhancement"
alibabasglab/FRCRN
ododoyo/EHNet
A neural network combining CNN and LSTM layers for speech enhancement