Language translation using an encoder-decoder Transformer model trained with distributed data parallel (DDP) on Compute Canada clusters. Sample training was done on 16 Tesla V100-SXM2-32GB GPUs spread across 4 nodes. To run, submit either of the `.sh` files in the `slurm` folder with the `sbatch` command.
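For orientation, below is a minimal sketch of what such a submission script could look like for a 4-node, 16-GPU DDP run. The resource flags, the module name, and the `train.py` entry point are illustrative assumptions, not the contents of the actual scripts in the `slurm` folder.

```bash
#!/bin/bash
# Hypothetical sketch of a multi-node DDP submission script.
# The real scripts live in the slurm folder; the flags, module
# name, and entry point below are assumptions for illustration.
#SBATCH --nodes=4               # 4 nodes x 4 GPUs = 16 GPUs total
#SBATCH --gres=gpu:4            # request 4 GPUs per node
#SBATCH --ntasks-per-node=1     # one launcher task per node
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH --time=12:00:00

module load python/3.10         # assumed module name on the cluster

# The batch script runs on the first allocated node, so $(hostname)
# resolves to the rank-0 node; srun propagates the environment.
export MASTER_ADDR=$(hostname)
export MASTER_PORT=29500

# One torchrun launcher per node, each spawning one process per GPU.
srun torchrun \
    --nnodes=4 \
    --nproc_per_node=4 \
    --rdzv_backend=c10d \
    --rdzv_endpoint=$MASTER_ADDR:$MASTER_PORT \
    train.py                    # assumed training entry point
```

Submission is then a single command, e.g. `sbatch slurm/<script>.sh`: Slurm allocates the nodes, `srun` starts one `torchrun` launcher per node, and each launcher spawns one DDP worker process per GPU.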