```bash
# Set path to CUDA, NCCL
CUDAROOT=/usr/local/cuda
NCCL_ROOT=/usr/local/nccl
export CPATH=$NCCL_ROOT/include:$CPATH
export LD_LIBRARY_PATH=$NCCL_ROOT/lib/:$CUDAROOT/lib64:$LD_LIBRARY_PATH
export LIBRARY_PATH=$NCCL_ROOT/lib/:$LIBRARY_PATH
export CUDA_HOME=$CUDAROOT
export CUDA_PATH=$CUDAROOT
export CPATH=$CUDA_PATH/include:$CPATH  # for warp-rnnt
```

```bash
# Install miniconda, python libraries, and other tools
cd tools
make KALDI=/path/to/kaldi
```
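After the build finishes, it can help to confirm that PyTorch actually sees the CUDA and NCCL installations pointed to above. A minimal sanity check, assuming the `make` step installed PyTorch into the Python environment you activate:

```python
# Minimal post-install sanity check (run inside the environment created above).
import torch

print(torch.__version__)
print(torch.cuda.is_available())   # True if the CUDA toolkit at $CUDAROOT is visible
print(torch.cuda.nccl.version())   # NCCL version PyTorch was built/linked against
```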
### Corpus

#### ASR
- AISHELL-1
- CSJ
- Librispeech
- Switchboard (+ Fisher)
- TEDLIUM2/TEDLIUM3
- TIMIT
- WSJ
#### LM
- Penn Tree Bank
- WikiText2
### Encoder

- RNN encoder (a minimal sketch follows this list)
- Transformer encoder [link]
- Conformer encoder [link]
- Time-depth separable (TDS) convolution encoder [link] [link]
- Gated CNN encoder (GLU) [link]
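As a minimal illustration of the RNN encoder family, here is a PyTorch sketch of a BLSTM speech encoder with frame subsampling between layers; the layer sizes and subsampling schedule are illustrative assumptions, not the toolkit's defaults.

```python
import torch
import torch.nn as nn

class BLSTMEncoder(nn.Module):
    """BLSTM speech encoder sketch with per-layer frame subsampling."""

    def __init__(self, input_dim=80, hidden=320, subsample=(1, 2, 2, 1)):
        super().__init__()
        self.lstms = nn.ModuleList([
            nn.LSTM(input_dim if i == 0 else 2 * hidden, hidden,
                    batch_first=True, bidirectional=True)
            for i in range(len(subsample))
        ])
        self.subsample = subsample

    def forward(self, x):                # x: (B, T, input_dim) log-mel features
        for lstm, s in zip(self.lstms, self.subsample):
            x, _ = lstm(x)
            if s > 1:
                x = x[:, ::s]            # drop frames: T -> ceil(T / s)
        return x                         # (B, T', 2 * hidden)
```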
### Connectionist temporal classification (CTC) decoder

- Beam search (see the decoding sketch after this list)
- Shallow fusion
- Forced alignment
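To make the CTC decoding entries concrete, here is a best-path (greedy) decoding sketch; it is a simpler stand-in for the beam search listed above, and `blank=0` is an assumption about the blank index.

```python
import torch

def ctc_greedy_decode(log_probs: torch.Tensor, blank: int = 0) -> list:
    """Best-path CTC decoding: frame-wise argmax, collapse repeats, drop blanks.

    log_probs: (T, vocab) frame-level log-probabilities from the encoder.
    """
    out, prev = [], blank
    for p in log_probs.argmax(dim=-1).tolist():
        if p != blank and p != prev:   # keep only changes to non-blank labels
            out.append(p)
        prev = p
    return out
```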
### RNN-Transducer (RNN-T) decoder [link]

- Beam search
- Shallow fusion (see the fusion sketch after this list)
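Shallow fusion, listed for both decoders above, interpolates the decoder's next-token scores with an external LM at search time. A minimal sketch of the score combination; `lm_weight=0.3` is an assumed typical value, not the toolkit's default.

```python
import torch

def shallow_fusion(asr_log_probs: torch.Tensor,
                   lm_log_probs: torch.Tensor,
                   lm_weight: float = 0.3) -> torch.Tensor:
    """score(y) = log p_asr(y | x, y_prev) + lm_weight * log p_lm(y | y_prev).

    Both tensors hold next-token log-probabilities, e.g. shape (beam, vocab);
    beam search then ranks hypotheses by the fused score.
    """
    return asr_log_probs + lm_weight * lm_log_probs
```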
### Attention-based decoder

- RNN decoder
  - Attention type (the content-based variant is sketched after this list)
    - location-based
    - content-based
    - dot-product
    - GMM attention
  - Streaming RNN decoder specific
- Transformer decoder [link]
  - Streaming Transformer decoder specific
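The attention variants above differ mainly in how the alignment energies are computed. As one example, here is a minimal PyTorch sketch of content-based (additive) attention; all dimensions are assumptions. Location-based attention would additionally feed a convolution of the previous attention weights into the energy computation, and dot-product attention replaces the MLP with an inner product.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Content-based (additive) attention sketch."""

    def __init__(self, enc_dim=512, dec_dim=512, attn_dim=256):
        super().__init__()
        self.w_enc = nn.Linear(enc_dim, attn_dim)
        self.w_dec = nn.Linear(dec_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1)

    def forward(self, enc, dec_state):
        # enc: (B, T, enc_dim) encoder states; dec_state: (B, dec_dim)
        e = self.v(torch.tanh(self.w_enc(enc) +
                              self.w_dec(dec_state).unsqueeze(1)))  # (B, T, 1)
        a = torch.softmax(e.squeeze(-1), dim=-1)                    # (B, T)
        context = torch.bmm(a.unsqueeze(1), enc).squeeze(1)         # (B, enc_dim)
        return context, a
```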
### Language model (LM)

- RNNLM (recurrent neural network language model)
- Gated convolutional LM [link]
- Transformer LM
- Transformer-XL LM [link]
- Adaptive softmax [link] (example after this list)
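Adaptive softmax speeds up large-vocabulary LMs by giving frequent words a full projection and rare words smaller ones. PyTorch ships this as `torch.nn.AdaptiveLogSoftmaxWithLoss`; the vocabulary size and cutoffs below are illustrative assumptions.

```python
import torch
import torch.nn as nn

vocab_size, hidden_dim = 100_000, 512
adaptive = nn.AdaptiveLogSoftmaxWithLoss(
    in_features=hidden_dim,
    n_classes=vocab_size,
    cutoffs=[2_000, 20_000],  # cluster boundaries (assumed); word ids must be
)                             # sorted by descending frequency

hiddens = torch.randn(32, hidden_dim)          # e.g., RNNLM hidden states
targets = torch.randint(0, vocab_size, (32,))  # next-word indices
output, loss = adaptive(hiddens, targets)      # returns (output, loss)
```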
### Output units

- Phoneme
- Grapheme
- Wordpiece (BPE, sentencepiece; example after this list)
- Word
- Word-char mix
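Wordpiece units are typically produced with sentencepiece. A small usage sketch; the model file name is hypothetical and would come from training sentencepiece on the corpus transcripts.

```python
import sentencepiece as spm

# "wp.model" is a hypothetical file produced by spm.SentencePieceTrainer on the
# training transcripts; the wordpiece vocabulary size is fixed at training time.
sp = spm.SentencePieceProcessor(model_file="wp.model")

pieces = sp.encode("how are you", out_type=str)  # wordpiece strings
ids = sp.encode("how are you", out_type=int)     # wordpiece ids
print(sp.decode(ids))                            # back to raw text
```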
### Multi-task learning (MTL)

Multi-task learning (MTL) with different units is supported to alleviate data sparseness.

- Hybrid CTC/attention [link] (loss sketch after this list)
- Hierarchical Attention (e.g., word attention + character attention) [link]
- Hierarchical CTC (e.g., word CTC + character CTC) [link]
- Hierarchical CTC+Attention (e.g., word attention + character CTC) [link]
- Forward-backward attention [link]
- LM objective
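The hybrid CTC/attention objective interpolates a CTC loss on the encoder outputs with the attention decoder's cross-entropy loss. A minimal sketch of the combination; `ctc_weight=0.3` is an assumed common setting, and padding/blank handling is simplified.

```python
import torch
import torch.nn.functional as F

def hybrid_ctc_attention_loss(ctc_logits, enc_lens, att_logits, targets,
                              target_lens, ctc_weight: float = 0.3):
    """L = w * L_ctc + (1 - w) * L_att.

    ctc_logits: (B, T, V) encoder-side logits; att_logits: (B, U, V)
    decoder-side logits; targets: (B, U) label ids.
    """
    log_probs = F.log_softmax(ctc_logits, dim=-1).transpose(0, 1)  # (T, B, V)
    ctc = F.ctc_loss(log_probs, targets, enc_lens, target_lens)
    att = F.cross_entropy(att_logits.transpose(1, 2), targets)     # (B, V, U)
    return ctc_weight * ctc + (1.0 - ctc_weight) * att
```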
### Performance

#### AISHELL-1 (CER)

| Model | dev | test |
|---|---|---|
| Transformer | 5.0 | 5.4 |
| Conformer | 4.7 | 5.2 |
| Streaming MMA | 5.5 | 6.1 |
#### CSJ (WER)

| Model | eval1 | eval2 | eval3 |
|---|---|---|---|
| BLSTM LAS | 6.5 | 5.1 | 5.6 |
| LC-BLSTM MoChA | 7.4 | 5.6 | 6.4 |
#### Switchboard (WER)

| Model | SWB | CH |
|---|---|---|
| BLSTM LAS | 9.1 | 18.8 |
#### Switchboard (+ Fisher) (WER)

| Model | SWB | CH |
|---|---|---|
| BLSTM LAS | 7.8 | 13.8 |
#### Librispeech (WER)

| Model | dev-clean | dev-other | test-clean | test-other |
|---|---|---|---|---|
| BLSTM LAS | 2.5 | 7.2 | 2.6 | 7.5 |
| BLSTM RNN-T | 2.9 | 8.5 | 3.2 | 9.0 |
| Transformer | 2.1 | 5.3 | 2.4 | 5.7 |
| UniLSTM RNN-T | 3.7 | 11.7 | 4.0 | 11.6 |
| UniLSTM MoChA | 4.1 | 11.0 | 4.2 | 11.2 |
| LC-BLSTM RNN-T | 3.3 | 9.8 | 3.5 | 10.2 |
| LC-BLSTM MoChA | 3.3 | 8.8 | 3.5 | 9.1 |
| Streaming MMA | 2.5 | 6.9 | 2.7 | 7.1 |
#### TEDLIUM2 (WER)

| Model | dev | test |
|---|---|---|
| BLSTM LAS | 8.1 | 7.5 |
| LC-BLSTM RNN-T | 8.9 | 8.5 |
| LC-BLSTM MoChA | 10.6 | 8.6 |
| UniLSTM RNN-T | 11.6 | 11.7 |
| UniLSTM MoChA | 13.6 | 11.6 |
#### WSJ (WER)

| Model | test_dev93 | test_eval92 |
|---|---|---|
| BLSTM LAS | 8.8 | 6.2 |
#### Penn Tree Bank (perplexity)

| Model | valid | test |
|---|---|---|
| RNNLM | 87.99 | 86.06 |
| + cache=100 | 79.58 | 79.12 |
| + cache=500 | 77.36 | 76.94 |
#### WikiText2 (perplexity)

| Model | valid | test |
|---|---|---|
| RNNLM | 104.53 | 98.73 |
| + cache=100 | 90.86 | 85.87 |
| + cache=2000 | 76.10 | 72.77 |
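The `+ cache=N` rows augment the RNNLM with a cache over the last N hidden states, presumably in the style of the continuous-cache LM of Grave et al. (2017): the current hidden state is matched against the cached ones, and the words that followed similar contexts get extra probability mass. A minimal sketch; `theta` and `lam` are assumed interpolation hyperparameters.

```python
import torch

def cache_distribution(h_t, cache_hiddens, cache_next_ids, vocab_size,
                       theta: float = 0.3):
    """Continuous-cache term sketch.

    h_t: (H,) current hidden state; cache_hiddens: (N, H) recent hidden states;
    cache_next_ids: (N,) ids of the word observed after each cached state.
    """
    weights = torch.softmax(theta * (cache_hiddens @ h_t), dim=0)  # (N,)
    p_cache = torch.zeros(vocab_size)
    p_cache.index_add_(0, cache_next_ids, weights)  # accumulate mass per word
    return p_cache

# Final prediction interpolates with the base RNNLM (lam is an assumption):
# p = (1 - lam) * p_rnnlm + lam * p_cache
```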