Distributed data parallel training can be activated using the "-d" flag.
Example: python main.py --config_file configs/EfficientConformerCTCSmall.json -d
This will start distributed training on all available GPUs. You will find all the options in the Options subsection.
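For reference, here is a minimal sketch of the usual PyTorch pattern behind such a flag, assuming the common setup of one spawned worker per GPU wrapped in `DistributedDataParallel`. This is not the repository's actual `main.py`, and the helper names (`train_worker`, the placeholder model) are illustrative only:

```python
# Hypothetical sketch (not the project's code) of how a "-d" flag commonly
# maps to PyTorch DDP: spawn one process per GPU, each joining a process
# group and wrapping the model in DistributedDataParallel.
import argparse
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP


def train_worker(rank, world_size, args):
    # Each worker binds to one GPU and joins the NCCL process group.
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # Placeholder model standing in for the Efficient Conformer.
    model = torch.nn.Linear(80, 256).cuda(rank)
    model = DDP(model, device_ids=[rank])

    # ... build the dataset with a DistributedSampler and run the training loop ...

    dist.destroy_process_group()


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--config_file", type=str, required=True)
    parser.add_argument("-d", "--distributed", action="store_true")
    args = parser.parse_args()

    if args.distributed:
        # Use every visible GPU: one worker process per device.
        world_size = torch.cuda.device_count()
        mp.spawn(train_worker, args=(world_size, args), nprocs=world_size)
    else:
        # Single-process fallback: rank 0 in a world of size 1.
        train_worker(0, 1, args)
```

With this kind of setup, passing `-d` simply switches from a single training process to one process per visible GPU, so the same command line works on one GPU or many.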