Distributed Data Parallel on DenseNet

Distributed data-parallel training of a DenseNet model using PyTorch.

Requirements

  • Python 3.8.6
  • Dependencies: pip install -r requirements.txt

Training

Single GPU training:

CUDA_VISIBLE_DEVICES=0 python train.py

Distributed training using two GPUs:

CUDA_VISIBLE_DEVICES=0,1 python train_ddp.py -g 2

Distributed training using two GPUs with mixed precision:

CUDA_VISIBLE_DEVICES=0,1 python train_ddp_mp.py -g 2