
# Awesome-Speech-Pretraining

Papers, Resources, and Statistics for Self-Supervised Learning and Pre-Training on Speech.

🌟 represents important papers.

## Papers

### 2018

### 2019

### 2020

### 2021

### 2022

### 2023

### Speech + Text

### SSL for Audio

### SSL for TTS

### SSL Model Distillation, Compression and Acceleration

## Resources

### Speech processing Universal PERformance Benchmark (SUPERB)

### Self-Supervised Speech Pre-training and Representation Learning (S3PRL)

## Statistics

Statistics on speech pretraining.

### wav2vec 2.0

#### Pre-training

| Size | Transformer | Samples per GPU | Batch Size (total audio) | Train Time |
| --- | --- | --- | --- | --- |
| BASE | 12 blocks, model dimension 768, FFN 3072, 8 heads | 1.4M (cropped) | 1.6h | 400k updates, 64 V100 × 1.6d |
| LARGE | 24 blocks, model dimension 1024, FFN 4096, 16 heads | 1.2M (cropped) | 2.7h | 250k updates, 128 V100 × 2.3d (LibriSpeech)<br>600k updates, 128 V100 × 5.2d (LibriVox) |
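
The "Batch Size (total audio)" column follows directly from the cropped samples per GPU and the GPU count. A quick back-of-the-envelope check, assuming 16 kHz audio (figures taken from the table above):

```python
# Convert the per-GPU sample budget to audio duration and scale by GPU count.
SAMPLE_RATE = 16_000  # Hz, assumed (LibriSpeech / LibriVox audio)

configs = {
    "BASE":  {"samples_per_gpu": 1_400_000, "gpus": 64},
    "LARGE": {"samples_per_gpu": 1_200_000, "gpus": 128},
}

for name, cfg in configs.items():
    seconds_per_gpu = cfg["samples_per_gpu"] / SAMPLE_RATE
    batch_hours = seconds_per_gpu * cfg["gpus"] / 3600
    print(f"{name}: {seconds_per_gpu:.1f}s per GPU ≈ {batch_hours:.1f}h of audio per batch")
# BASE: 87.5s per GPU ≈ 1.6h of audio per batch
# LARGE: 75.0s per GPU ≈ 2.7h of audio per batch
```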

#### Fine-tuning

### wav2vec-U

| Method | Feature Extractor | Batch Size | Train Time |
| --- | --- | --- | --- |
| wav2vec-U | wav2vec 2.0 LARGE | 160 unlabeled audio + 160 text samples | 150k steps, single V100 × 12h |
| wav2vec-U + self-training | wav2vec 2.0 LARGE | / | 80k updates, 8 V100 (LibriSpeech)<br>13k updates, 4 V100 (TIMIT) |
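
The feature extractor in both rows is a frozen wav2vec 2.0 LARGE model whose intermediate representations are fed to wav2vec-U's unsupervised GAN training. Below is a minimal sketch of that feature extraction, using torchaudio's public LV-60k checkpoint as a stand-in for the exact fairseq checkpoint used in the paper (the paper reports layer 15 working best):

```python
import torch
import torchaudio

# Public wav2vec 2.0 LARGE (LV-60k) checkpoint shipped with torchaudio,
# used here as a stand-in for the fairseq model from the wav2vec-U paper.
bundle = torchaudio.pipelines.WAV2VEC2_LARGE_LV60K
model = bundle.get_model().eval()

# One second of dummy audio at the expected sample rate (16 kHz);
# replace with a real waveform loaded via torchaudio.load().
waveform = torch.randn(1, int(bundle.sample_rate))

with torch.inference_mode():
    # Returns one tensor per transformer layer, each (batch, frames, 1024).
    features, _ = model.extract_features(waveform)

layer_15 = features[14]  # 15th block, the layer the paper reports using
print(len(features), layer_15.shape)
```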

### HuBERT

#### Pre-training

| Size | Architecture | Batch Size (audio per GPU) | Stage | Train Time |
| --- | --- | --- | --- | --- |
| BASE | wav2vec 2.0 BASE (95M) | 87.5s | 1: MFCC, 250k steps<br>2: 6-th transformer layer, 400k steps | 9.5h / 100k steps, 32 GPUs (LibriSpeech-960) |
| LARGE | wav2vec 2.0 LARGE (317M) | 56.25s | 3: 9-th transformer layer of BASE HuBERT, 400k steps | 9.5h / 100k steps, 128 GPUs (Libri-light 60k) |
| X-LARGE | Conformer XXL (964M) | 22.5s | 3: 9-th transformer layer of BASE HuBERT, 400k steps | 9.5h / 100k steps, 256 GPUs (Libri-light 60k) |
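
The "Stage" column reflects HuBERT's iterative target refinement: pseudo-labels first come from k-means on MFCCs, then from k-means on hidden features of the previous iteration (layer 6 of the stage-1 BASE model; layer 9 of the second-iteration BASE model for LARGE/X-LARGE). The sketch below shows only that control flow on synthetic data; `mfcc_features`, `hidden_features`, and `train_hubert` are placeholder stubs, not the fairseq recipe:

```python
import numpy as np
from sklearn.cluster import KMeans


def mfcc_features(utterances):
    # Stand-in for real MFCC extraction (e.g. torchaudio.transforms.MFCC).
    return np.random.default_rng(0).normal(size=(len(utterances), 100, 39))


def hidden_features(model, utterances, layer):
    # Stand-in for encoding the corpus with the previous-iteration HuBERT
    # and keeping the activations of the given transformer layer.
    return np.random.default_rng(layer).normal(size=(len(utterances), 100, 768))


def train_hubert(utterances, frame_labels, steps):
    # Stand-in for masked-prediction training on the k-means pseudo-labels.
    print(f"train: {steps} steps, {frame_labels.max() + 1} cluster targets")
    return {"trained_for": steps}


def cluster_targets(features, n_clusters):
    # k-means over all frames; cluster ids become the frame-level targets.
    flat = features.reshape(-1, features.shape[-1])
    ids = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(flat)
    return ids.reshape(features.shape[:2])


corpus = [f"utt{i}" for i in range(8)]  # placeholder corpus

# Stage 1 (BASE): cluster MFCCs (100 clusters in the paper), train 250k steps.
labels = cluster_targets(mfcc_features(corpus), n_clusters=10)
base = train_hubert(corpus, labels, steps=250_000)

# Stage 2 (BASE): re-cluster 6-th layer features (500 clusters in the paper), 400k steps.
labels = cluster_targets(hidden_features(base, corpus, layer=6), n_clusters=10)
base = train_hubert(corpus, labels, steps=400_000)

# Stage 3 (LARGE / X-LARGE): cluster 9-th layer features of BASE HuBERT, 400k steps.
labels = cluster_targets(hidden_features(base, corpus, layer=9), n_clusters=10)
large = train_hubert(corpus, labels, steps=400_000)
```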

#### Fine-tuning