MUSE-Speech-Enhancement

Official code for MUSE: Flexible Voiceprint Receptive Fields and Multi-Path Fusion Enhanced Taylor Transformer for U-Net-based Speech Enhancement


MUSE: Flexible Voiceprint Receptive Fields and Multi-Path Fusion Enhanced Taylor Transformer for U-Net-based Speech Enhancement

Zizhen Lin, Xiaoting Chen, Junyu Wang

Abstract: Achieving a balance between lightweight design and high performance remains a challenging task for speech enhancement. In this paper, we introduce Multi-path Enhanced Taylor (MET) Transformer based U-net for Speech Enhancement (MUSE), a lightweight speech enhancement network built upon the U-net architecture. Our approach incorporates a novel Multi-path Enhanced Taylor (MET) Transformer block, which integrates Deformable Embedding (DE) to enable flexible receptive fields for voiceprints. The MET Transformer is uniquely designed to fuse Channel and Spatial Attention (CSA) branches, facilitating channel information exchange and addressing spatial attention deficits within the Taylor-Transformer framework. Through extensive experiments conducted on the VoiceBank+DEMAND dataset, we demonstrate that MUSE achieves competitive performance while significantly reducing both training and deployment costs, boasting a mere 0.51M parameters.

MUSE was accepted at Interspeech 2024 (arXiv).

Pre-requisites

  1. Python >= 3.6.
  2. Clone this repository.
  3. Install the Python requirements; please refer to requirements.txt.
  4. Download and extract the VoiceBank+DEMAND dataset. Use downsampling.py to resample all wav files to 16 kHz (a resampling sketch is shown after this list):
python downsampling.py
  5. Move the clean and noisy wavs to VoiceBank+DEMAND/wavs_clean and VoiceBank+DEMAND/wavs_noisy (or any path you want), and change the corresponding paths in train.py [parser.add_argument('--input_clean_wavs_dir', default=]. Note that different downsampling methods can lead to different results.
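
If you prefer to resample manually, below is a minimal sketch of the 16 kHz resampling step. This is an illustration only, not the repository's downsampling.py; the in_dir/out_dir paths are placeholders, and it assumes librosa and soundfile are installed.

# Minimal 16 kHz resampling sketch (not the repository's downsampling.py).
import os
import librosa
import soundfile as sf

in_dir = "VoiceBank+DEMAND/wavs_noisy_48k"   # hypothetical source directory
out_dir = "VoiceBank+DEMAND/wavs_noisy"      # hypothetical target directory
target_sr = 16000

os.makedirs(out_dir, exist_ok=True)
for name in os.listdir(in_dir):
    if name.endswith(".wav"):
        # librosa.load resamples to target_sr while loading
        audio, _ = librosa.load(os.path.join(in_dir, name), sr=target_sr)
        sf.write(os.path.join(out_dir, name), audio, target_sr)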

Training

For a single GPU (recommended), MUSE needs at least 8 GB of GPU memory.

python train.py --config config.json

Training with your own data

Edit the paths in make_file_list.py and run

python make_file_list.py

Then replace test.txt and training.txt in the ./VoiceBank+DEMAND folder with the generated files, and put your train and test sets (clean and noisy) in the same folder.
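
As an illustration of what such a file list may contain, here is a hedged sketch that writes one wav file name per line; the helper write_file_list and the directory paths below are assumptions, and the actual make_file_list.py may use a different format.

# Hypothetical file-list generator; the real make_file_list.py may differ.
import os

def write_file_list(wav_dir, list_path):
    # Write the names of all .wav files in wav_dir to list_path, one per line.
    names = sorted(f for f in os.listdir(wav_dir) if f.endswith(".wav"))
    with open(list_path, "w") as f:
        f.write("\n".join(names) + "\n")

write_file_list("VoiceBank+DEMAND/wavs_clean", "VoiceBank+DEMAND/training.txt")       # placeholder paths
write_file_list("VoiceBank+DEMAND/wavs_clean_test", "VoiceBank+DEMAND/test.txt")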

Inference

python inference.py --checkpoint_file /PATH/TO/YOUR/CHECK_POINT/g_xxxxxxx

You can also use the pretrained best checkpoint file we provide in paper_result/g_best.

We also provide the test audio files to avoid result differences caused by audio processing:

https://pan.baidu.com/s/1CVGK85zHlR3UPMnWWgP6rQ?pwd=1017 (extraction code: 1017)

Generated wav files are saved in generated_files by default.
You can change the path by adding the --output_dir option.
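
For example (the output directory name below is arbitrary):

python inference.py --checkpoint_file paper_result/g_best --output_dir my_generated_files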

Acknowledgements

We referred to MP-SENet and MB-TaylorFormer.