
BAE-Net: A Low-Complexity and High-Fidelity Bandwidth-Adaptive Neural Network for Speech Super-Resolution


This is the repository for the paper "BAE-Net: A Low-Complexity and High-Fidelity Bandwidth-Adaptive Neural Network for Speech Super-Resolution" (https://arxiv.org/abs/2312.13722), accepted to ICASSP 2024. Audio samples are provided here, and the code for the network will be released soon.

Speech bandwidth extension (BWE) has demonstrated promising performance in enhancing perceptual speech quality in real communication systems. Most existing BWE research focuses on fixed upsampling ratios, disregarding the fact that the effective bandwidth of captured audio may fluctuate frequently due to varying capture devices and transmission conditions. In this paper, we propose a novel streaming adaptive bandwidth extension solution dubbed BAE-Net, which is suited to handling low-resolution speech with unknown and varying effective bandwidth. To address the challenge of blindly recovering both the high-frequency magnitude and phase content, we devise a dual-stream architecture that incorporates magnitude inpainting and phase refinement. For potential applications on edge devices, this paper also introduces BAE-Net-lite, a lightweight, streaming, and efficient variant. Quantitative results demonstrate the superiority of BAE-Net in both performance and computational efficiency compared with existing state-of-the-art BWE methods.
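A key premise above is that the effective bandwidth of the input speech is unknown and may vary. The sketch below is an illustrative (hypothetical, not from the BAE-Net codebase) way to estimate a signal's effective bandwidth from its magnitude spectrum, using an energy-rolloff criterion: the bandwidth is taken as the frequency below which a given fraction of the spectral energy lies.

```python
import numpy as np

def estimate_effective_bandwidth(signal, sr, energy_ratio=0.99):
    """Estimate the effective bandwidth (Hz) of `signal` as the frequency
    below which `energy_ratio` of the total spectral energy lies.

    Illustrative helper only -- not part of the BAE-Net implementation.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    cumulative = np.cumsum(spectrum)
    cumulative /= cumulative[-1]
    idx = np.searchsorted(cumulative, energy_ratio)
    return freqs[min(idx, len(freqs) - 1)]

# Simulate 1 s of 16 kHz audio whose content stops near 3.5 kHz,
# mimicking narrowband speech captured over a band-limited channel.
sr = 16000
t = np.arange(sr) / sr
rng = np.random.default_rng(0)
lowres = sum(np.sin(2 * np.pi * f * t) for f in (300, 1200, 3500))
lowres += 0.01 * rng.standard_normal(sr)

bw = estimate_effective_bandwidth(lowres, sr)  # close to 3500 Hz
```

An adaptive BWE front end could use such an estimate per frame to decide how much of the high band must be generated, instead of assuming one fixed upsampling ratio.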

System flowchart of BAE-Net


Results:


Visualization of spectrograms of the captured LR speech, the HR speech generated by AERO and BAE-Net, and the HR reference.
