Modular and extensible speech recognition library leveraging Accelerate and Hydra
What is Accelerate ASR • Installation • Get Started • Codefactor • License
Accelerate was created for PyTorch users who like to write the training loop of PyTorch models but are reluctant to write and maintain the boilerplate code needed to use multi-GPUs/TPU/fp16. Accelerate abstracts exactly and only the boilerplate code related to multi-GPUs/TPU/fp16 and leaves the rest of your code unchanged.
This project is an example ASR implementation built on Accelerate. In this project, I trained a model consisting of a Conformer encoder and an LSTM decoder with joint CTC-Attention. I hope this can serve as a guideline for those researching speech recognition.
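Joint CTC-Attention combines the encoder-side CTC loss with the decoder-side cross-entropy (attention) loss as a weighted sum. The sketch below shows only that combination, with made-up tensor shapes and a hypothetical weight `ctc_weight`; the actual model code in this repository may differ:

```python
import torch
import torch.nn.functional as F

ctc_weight = 0.3  # hypothetical interpolation weight between the two losses

batch, enc_len, dec_len, vocab = 4, 50, 12, 30

# per-frame log-probabilities from the encoder branch (e.g. the Conformer encoder),
# shaped (time, batch, vocab) as required by ctc_loss
encoder_log_probs = torch.randn(enc_len, batch, vocab).log_softmax(dim=-1)
# per-token logits from the attention decoder branch (e.g. the LSTM decoder)
decoder_logits = torch.randn(batch, dec_len, vocab)

targets = torch.randint(1, vocab, (batch, dec_len))  # token ids, 0 reserved for blank
input_lengths = torch.full((batch,), enc_len)
target_lengths = torch.full((batch,), dec_len)

ctc_loss = F.ctc_loss(encoder_log_probs, targets, input_lengths, target_lengths, blank=0)
attention_loss = F.cross_entropy(decoder_logits.reshape(-1, vocab), targets.reshape(-1))

# joint objective: interpolate the CTC and attention losses
loss = ctc_weight * ctc_loss + (1 - ctc_weight) * attention_loss
```

The CTC term encourages monotonic alignments from the encoder, while the attention term drives the autoregressive decoder; the weight trades off between them.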
This project requires Python 3.7 or higher.
I recommend creating a new virtual environment for this project (using virtualenv or conda).
- numpy:
  ```
  pip install numpy
  ```
  (Refer here for problems installing NumPy.)
- pytorch: Refer to the PyTorch website to install the version appropriate for your environment.
- librosa:
  ```
  conda install -c conda-forge librosa
  ```
  (Refer here for problems installing librosa.)
- torchaudio:
  ```
  pip install torchaudio==0.6.0
  ```
  (Refer here for problems installing torchaudio.)
- sentencepiece:
  ```
  pip install sentencepiece
  ```
  (Refer here for problems installing sentencepiece.)
- accelerate:
  ```
  pip install accelerate
  ```
  (Refer here for problems installing accelerate.)
- hydra:
  ```
  pip install hydra-core --upgrade
  ```
  (Refer here for problems installing Hydra.)
Currently, I only support installation from source using setuptools. Check out the source code and run the
following commands:
$ pip install -e .
$ ./setup.sh
For faster training, install NVIDIA's apex library:
$ git clone https://github.com/NVIDIA/apex
$ cd apex

# ------------------------
# OPTIONAL: on your cluster you might need to load CUDA 10 or 9
# depending on how you installed PyTorch
# see available modules
$ module avail
# load the correct CUDA version before installing
$ module load cuda-10.0
# ------------------------

# make sure you've loaded a gcc version > 4.0 and < 7.0
$ module load gcc-6.1.0

$ pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
I use Hydra to control all the training configurations. If you are not familiar with Hydra I recommend visiting the Hydra website. Generally, Hydra is an open-source framework that simplifies the development of research applications by providing the ability to create a hierarchical configuration dynamically.
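For example, a hierarchical layout like the one sketched below lets each config group (data, audio, model, lr_scheduler, trainer) be swapped independently from the command line. This is a hypothetical fragment with illustrative values only; the actual group and field names live in this repository's config directory:

```yaml
# hypothetical layout of the Hydra config groups used in the commands below
# configs/
# ├── audio/         # fbank.yaml, melspectrogram.yaml
# ├── model/         # conformer_lstm.yaml
# ├── lr_scheduler/  # reduce_lr_on_plateau.yaml
# └── trainer/       # gpu.yaml, tpu.yaml

# configs/audio/fbank.yaml (illustrative values only)
name: fbank
num_mels: 80
frame_length: 25.0  # window size in ms
frame_shift: 10.0   # hop size in ms
```

Selecting `audio=fbank` on the command line composes this group into the final config, and any field can be overridden inline (e.g. `audio.num_mels=64`).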
You have to download the LibriSpeech dataset, which contains a 1,000-hour English speech corpus. You can also download it automatically via the `dataset_download` option: if it is set to True, the dataset is downloaded before training starts. If you already have the dataset, set `dataset_download` to False and specify its location with `dataset_path`.
You can simply train on the LibriSpeech dataset as shown below:
- Example 1: Train the `conformer-lstm` model with filter-bank features on GPU:
$ python ./bin/main.py \
data=default \
dataset_download=True \
audio=fbank \
model=conformer_lstm \
lr_scheduler=reduce_lr_on_plateau \
trainer=gpu
- Example 2: Train the `conformer-lstm` model with mel-spectrogram features on TPU:
$ python ./bin/main.py \
data=default \
dataset_download=True \
audio=melspectrogram \
model=conformer_lstm \
lr_scheduler=reduce_lr_on_plateau \
trainer=tpu
If you have any questions, bug reports, or feature requests, please open an issue on GitHub.
I appreciate any kind of feedback or contribution. Feel free to proceed with small issues like bug fixes or documentation improvements. For major contributions and new features, please discuss with the collaborators in the corresponding issues.
I follow PEP 8 for code style. In particular, the style of docstrings is important for generating documentation.
This project is licensed under the MIT license - see the LICENSE.md file for details.
- Soohwan Kim @sooftware
- Contacts: sh951011@gmail.com