Pinned Repositories
BAMSE
Bone/air-conducted speech signal enhancement exploiting a multi-modal framework
CITISEN
DAEME
Speech Enhancement based on Denoising Autoencoder with Multi-branched Encoders
DeepDenoisingAutoencoder
TensorFlow implementation of speech enhancement with a deep denoising autoencoder (DDAE)
End-to-end-waveform-utterance-enhancement
End-to-end waveform utterance enhancement for direct evaluation metrics optimization by fully convolutional neural networks (TASLP 2018)
ICSE
Increasing Compactness of Deep Learning Based Speech Enhancement Models with Parameter Pruning and Quantization Techniques
Improving-biodiversity-monitoring-through-soundscape-information-retrieval
Investigating the dynamics of biodiversity via passive acoustic monitoring is a challenging task, owing to the difficulty of identifying different animal vocalizations. Several indices have been proposed to measure acoustic complexity and to predict biodiversity. Although these indices perform well under low-noise conditions, they may be biased when environmental and anthropogenic noises are involved. In this paper, we propose a periodicity-coded non-negative matrix factorization (PC-NMF) for separating different sound sources from a spectrogram of long-term recordings. The PC-NMF first decomposes a spectrogram into two matrices: a spectral basis matrix and an encoding matrix. Next, on the basis of the periodicity of the encoding information, the spectral bases belonging to the same source are grouped together. Finally, distinct sources are reconstructed from the clustered spectral bases and the corresponding encoding information, and the noise components are then removed to facilitate more accurate monitoring of biological sounds. Our results show that the PC-NMF precisely enhances biological choruses, effectively suppressing environmental and anthropogenic noises in marine and terrestrial recordings without the need for training data. These results may improve behaviour assessment of calling animals and facilitate the investigation of interactions between different sound sources within an ecosystem.
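The PC-NMF pipeline described in the abstract can be sketched in a few lines. This is a minimal illustration only, not the repository's actual implementation: it assumes a nonnegative magnitude spectrogram, uses scikit-learn's `NMF` for the decomposition, and approximates the periodicity-based grouping by clustering the FFT magnitude of each encoding row with KMeans. The function name `pc_nmf` and all parameters are hypothetical.

```python
# Simplified PC-NMF sketch (illustrative only; assumes scikit-learn's NMF
# and a KMeans-based periodicity grouping, not the authors' exact method).
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

def pc_nmf(V, n_components=8, n_sources=2, random_state=0):
    """Separate spectrogram V (freq x time) into n_sources spectrograms."""
    # Step 1: decompose the spectrogram into spectral bases W and encodings H.
    model = NMF(n_components=n_components, init="nndsvda",
                max_iter=500, random_state=random_state)
    W = model.fit_transform(V)          # (freq, n_components)
    H = model.components_               # (n_components, time)

    # Step 2: characterize each encoding row by its periodicity
    # (magnitude spectrum of the temporal activation, mean removed).
    spectra = np.abs(np.fft.rfft(H - H.mean(axis=1, keepdims=True), axis=1))
    spectra /= spectra.max(axis=1, keepdims=True) + 1e-12

    # Step 3: group bases whose activations share similar periodicity.
    labels = KMeans(n_clusters=n_sources, n_init=10,
                    random_state=random_state).fit_predict(spectra)

    # Step 4: reconstruct one spectrogram per source from its cluster.
    return [W[:, labels == k] @ H[labels == k, :] for k in range(n_sources)]

# Toy demo: mixture of a fast-periodic and a slow-periodic synthetic source.
rng = np.random.default_rng(0)
t = np.arange(200)
act_fast = 1 + np.sin(2 * np.pi * t / 5)    # activation with period ~5 frames
act_slow = 1 + np.sin(2 * np.pi * t / 50)   # activation with period ~50 frames
V = np.outer(rng.random(64), act_fast) + np.outer(rng.random(64), act_slow)
parts = pc_nmf(V, n_components=4, n_sources=2)
print([p.shape for p in parts])  # → [(64, 200), (64, 200)]
```

In the paper's setting the two clusters would correspond to, e.g., a biological chorus with a regular diel or call-rate periodicity versus aperiodic environmental noise, and the noise cluster would then be discarded.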
JD-NMF
Joint Dictionary Learning-based Non-Negative Matrix Factorization for Voice Conversion (TBME 2016)
LAVSE
Python code for Lite Audio-Visual Speech Enhancement (LAVSE).
Learning-with-Learned-Loss-Function
BioASPLab's Repositories
BioASPLab/Improving-biodiversity-monitoring-through-soundscape-information-retrieval
Periodicity-coded non-negative matrix factorization (PC-NMF) for separating sound sources in long-term soundscape recordings; see the pinned repository above for the full abstract.
BioASPLab/BAMSE
Bone/air-conducted speech signal enhancement exploiting a multi-modal framework
BioASPLab/CITISEN
BioASPLab/DAEME
Speech Enhancement based on Denoising Autoencoder with Multi-branched Encoders
BioASPLab/DeepDenoisingAutoencoder
TensorFlow implementation of speech enhancement with a deep denoising autoencoder (DDAE)
BioASPLab/End-to-end-waveform-utterance-enhancement
End-to-end waveform utterance enhancement for direct evaluation metrics optimization by fully convolutional neural networks (TASLP 2018)
BioASPLab/ICSE
Increasing Compactness of Deep Learning Based Speech Enhancement Models with Parameter Pruning and Quantization Techniques
BioASPLab/JD-NMF
Joint Dictionary Learning-based Non-Negative Matrix Factorization for Voice Conversion (TBME 2016)
BioASPLab/LAVSE
Python code for Lite Audio-Visual Speech Enhancement (LAVSE).
BioASPLab/Learning-with-Learned-Loss-Function
BioASPLab/MCSE
BioASPLab/MetricGAN
MetricGAN: Generative Adversarial Networks based Black-box Metric Scores Optimization for Speech Enhancement (ICML 2019, with Travel awards)
BioASPLab/MOSNet
Implementation of "MOSNet: Deep Learning based Objective Assessment for Voice Conversion"
BioASPLab/Noise-Reduction-in-ECG-Signals
BioASPLab/noise_adaptive_DAT_SE
Noise Adaptive Speech Enhancement using Domain Adversarial Training
BioASPLab/SERIL
Official implementation of SERIL in PyTorch
BioASPLab/Unified-Spectral-Prosodic-SE
BioASPLab/vqvae-speech
TensorFlow implementation of the speech model described in "Neural Discrete Representation Learning" (a.k.a. VQ-VAE)
BioASPLab/WaveCRN
WaveCRN: An Efficient Convolutional Recurrent Neural Network for End-to-end Speech Enhancement