burchim/EfficientConformer

How to use multiple Gpus for training?

jiaranjintianchism opened this issue · 1 comment


Distributed data parallel training can be activated using the "-d" flag.

Example:
python main.py --config_file configs/EfficientConformerCTCSmall.json -d

This will start distributed training with all available GPUs.
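If you only want to use a subset of the machine's GPUs, you can typically restrict the visible devices with the standard CUDA_VISIBLE_DEVICES environment variable, for example:
CUDA_VISIBLE_DEVICES=0,1 python main.py --config_file configs/EfficientConformerCTCSmall.json -d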
You will find all available options in the Options subsection.
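For background, here is a minimal sketch of what multi-GPU distributed data parallel training typically looks like in PyTorch. It illustrates the general mechanism only, not the repository's actual implementation; the worker function, port, and placeholder model are purely illustrative:

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def train_worker(rank, world_size):
    # One process per GPU; each process binds to its own device.
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # Placeholder model; in practice the model would be built from the config file.
    model = torch.nn.Linear(80, 32).cuda(rank)
    model = DDP(model, device_ids=[rank])

    # ... training loop (DistributedSampler for the dataset, forward/backward, optimizer step) ...

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()  # all available GPUs
    mp.spawn(train_worker, args=(world_size,), nprocs=world_size)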