mlcommons/training_results_v0.6

MXNet hangs?


I'm trying to run the MXNet ResNet-50 benchmark on a DGX-2 and it hangs during initialization. Any suggestions? The log is below.

cat log/200326143204305975494_1.log 
Beginning trial 1 of 1
Run vars: id 200326143204305975494
SYSLOGGING: 1
Gathering sys log on seville
:::MLL 1585247550.089 submission_benchmark: {"metadata": {"lineno": 226, "file": "mlperf_log_utils.py"}, "value": "resnet"}
:::MLL 1585247550.090 submission_org: {"metadata": {"lineno": 231, "file": "mlperf_log_utils.py"}, "value": "NVIDIA"}
WARNING: Log validation: Key "submission_division" is not in known resnet keys.
:::MLL 1585247550.090 submission_division: {"metadata": {"lineno": 235, "file": "mlperf_log_utils.py"}, "value": "closed"}
:::MLL 1585247550.091 submission_status: {"metadata": {"lineno": 239, "file": "mlperf_log_utils.py"}, "value": "onprem"}
:::MLL 1585247550.091 submission_platform: {"metadata": {"lineno": 243, "file": "mlperf_log_utils.py"}, "value": "1xNVIDIA DGX-2"}
:::MLL 1585247550.092 submission_entry: {"metadata": {"lineno": 247, "file": "mlperf_log_utils.py"}, "value": "{'os': 'Ubuntu 18.04.3 LTS / NVIDIA DGX Server 4.2.0', 'notes': 'N/A', 'compilers': 'gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609', 'interconnect': 'InfiniBand 10 Gb/sec (4X)', 'power': 'N/A', 'hardware': 'NVIDIA DGX-2', 'framework': 'MXNet NVIDIA Release 19.05', 'libraries': \"{'trt_version': '5.1.5.0', 'cublas_version': '10.2.0.163', 'cuda_driver_version': '418.67', 'container_base': 'Ubuntu-16.04', 'cuda_version': '10.1.163', 'mofed_version': '4.6-1.0.1', 'cudnn_version': '7.6.0.64', 'dali_version': '0.9.1', 'openmpi_version': '3.1.3', 'nccl_version': '2.4.6'}\", 'nodes': \"{'network_card': 'Mellanox Technologies MT27800 Family [ConnectX-5]', 'accelerator': 'Tesla V100-SXM3-32GB', 'sys_storage_size': '8x 3.5T + 2x 894.3G', 'num_accelerators': '16', 'num_cores': '48', 'num_vcpus': '96', 'num_nodes': '1', 'cpu_accel_interconnect': 'UPI', 'notes': '', 'cpu': '2x Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz', 'sys_mem_size': '1510 GB', 'sys_storage_type': 'NVMe SSD', 'num_network_cards': '8'}\"}"}
:::MLL 1585247550.093 submission_poc_name: {"metadata": {"lineno": 251, "file": "mlperf_log_utils.py"}, "value": "Paulius Micikevicius"}
:::MLL 1585247550.093 submission_poc_email: {"metadata": {"lineno": 255, "file": "mlperf_log_utils.py"}, "value": "pauliusm@nvidia.com"}
CLEAR_CACHES: 1
Clearing cache on seville
:::MLL 1585247554.634 cache_clear: {"value": true, "metadata": {"file": "<string>", "lineno": 1}}

Launching user script on master node:
docker exec -e CONT=mlperf-nvidia:image_classification -e SEED -e MLPERF_HOST_OS -e DGXSYSTEM=DGX2 -e SLURM_JOB_NUM_NODES=1 -e SLURM_NTASKS_PER_NODE=16 -e OMPI_MCA_mca_base_param_files=/dev/shm/mpi/200326143204305975494/mca_params.conf mpi_200326143204305975494 mpirun --allow-run-as-root --bind-to none --tag-output -x CONT=mlperf-nvidia:image_classification -x SEED -x MLPERF_HOST_OS -x DGXSYSTEM=DGX2 -x SLURM_JOB_NUM_NODES=1 -x SLURM_NTASKS_PER_NODE=16 -x OMPI_MCA_mca_base_param_files=/dev/shm/mpi/200326143204305975494/mca_params.conf --launch-agent docker exec mpi_200326143204305975494 orted ./run_and_time.sh ; exit 0
[1,0]<stdout>:using config_DGX2.sh
[1,0]<stdout>:STARTING TIMING RUN AT 2020-03-26 06:32:35 PM
[1,0]<stdout>:ls DATAROOT
[1,0]<stdout>:ILSVRC2012_img_train.tar  im2rec.py  train.idx	train.rec  val.idx  val.rec
[1,0]<stdout>:ILSVRC2012_img_val.tar	  train      train.lst	val	   val.lst
[1,0]<stdout>:running benchmark
[1,0]<stdout>:./ompi_bind_DGX2.sh python train_imagenet.py --gpus 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 --batch-size 208 --kv-store horovod --lr 10 --lr-step-epochs pow2 --lars-eta 0.001 --label-smoothing 0.1 --wd 0.0002 --warmup-epochs 5 --eval-period 4 --eval-offset 3 --optimizer sgdwfastlars --network resnet-v1b-normconv-fl --num-layers 50 --num-epochs 72 --accuracy-threshold 0.759 --seed 8504306 --dtype float16 --use-dali --disp-batches 20 --image-shape 4,224,224 --fuse-bn-relu 1 --fuse-bn-add-relu 1 --min-random-area 0.05 --max-random-area 1.0 --conv-algo 1 --force-tensor-core 1 --input-layout NHWC --conv-layout NHWC --batchnorm-layout NHWC --pooling-layout NHWC --batchnorm-mom 0.9 --batchnorm-eps 1e-5 --data-train /data/train.rec --data-train-idx /data/train.idx --data-val /data/val.rec --data-val-idx /data/val.idx --dali-prefetch-queue 5 --dali-nvjpeg-memory-padding 256 --dali-threads 3 --dali-cache-size 0 --dali-roi-decode 1; ret_code=0
[1,1]<stdout>:using config_DGX2.sh
[1,1]<stdout>:STARTING TIMING RUN AT 2020-03-26 06:32:35 PM
[1,1]<stdout>:ls DATAROOT
[1,1]<stdout>:ILSVRC2012_img_train.tar  im2rec.py  train.idx	train.rec  val.idx  val.rec
[1,1]<stdout>:ILSVRC2012_img_val.tar	  train      train.lst	val	   val.lst
[1,1]<stdout>:running benchmark
[1,1]<stdout>:./ompi_bind_DGX2.sh python train_imagenet.py --gpus 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 --batch-size 208 --kv-store horovod --lr 10 --lr-step-epochs pow2 --lars-eta 0.001 --label-smoothing 0.1 --wd 0.0002 --warmup-epochs 5 --eval-period 4 --eval-offset 3 --optimizer sgdwfastlars --network resnet-v1b-normconv-fl --num-layers 50 --num-epochs 72 --accuracy-threshold 0.759 --seed 8504306 --dtype float16 --use-dali --disp-batches 20 --image-shape 4,224,224 --fuse-bn-relu 1 --fuse-bn-add-relu 1 --min-random-area 0.05 --max-random-area 1.0 --conv-algo 1 --force-tensor-core 1 --input-layout NHWC --conv-layout NHWC --batchnorm-layout NHWC --pooling-layout NHWC --batchnorm-mom 0.9 --batchnorm-eps 1e-5 --data-train /data/train.rec --data-train-idx /data/train.idx --data-val /data/val.rec --data-val-idx /data/val.idx --dali-prefetch-queue 5 --dali-nvjpeg-memory-padding 256 --dali-threads 3 --dali-cache-size 0 --dali-roi-decode 1; ret_code=0
[1,2]<stdout>:using config_DGX2.sh
[1,2]<stdout>:STARTING TIMING RUN AT 2020-03-26 06:32:35 PM
[1,2]<stdout>:ls DATAROOT
[1,2]<stdout>:ILSVRC2012_img_train.tar  im2rec.py  train.idx	train.rec  val.idx  val.rec
[1,2]<stdout>:ILSVRC2012_img_val.tar	  train      train.lst	val	   val.lst
[1,2]<stdout>:running benchmark
[1,2]<stdout>:./ompi_bind_DGX2.sh python train_imagenet.py --gpus 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 --batch-size 208 --kv-store horovod --lr 10 --lr-step-epochs pow2 --lars-eta 0.001 --label-smoothing 0.1 --wd 0.0002 --warmup-epochs 5 --eval-period 4 --eval-offset 3 --optimizer sgdwfastlars --network resnet-v1b-normconv-fl --num-layers 50 --num-epochs 72 --accuracy-threshold 0.759 --seed 8504306 --dtype float16 --use-dali --disp-batches 20 --image-shape 4,224,224 --fuse-bn-relu 1 --fuse-bn-add-relu 1 --min-random-area 0.05 --max-random-area 1.0 --conv-algo 1 --force-tensor-core 1 --input-layout NHWC --conv-layout NHWC --batchnorm-layout NHWC --pooling-layout NHWC --batchnorm-mom 0.9 --batchnorm-eps 1e-5 --data-train /data/train.rec --data-train-idx /data/train.idx --data-val /data/val.rec --data-val-idx /data/val.idx --dali-prefetch-queue 5 --dali-nvjpeg-memory-padding 256 --dali-threads 3 --dali-cache-size 0 --dali-roi-decode 1; ret_code=0
[1,3]<stdout>:using config_DGX2.sh
[1,3]<stdout>:STARTING TIMING RUN AT 2020-03-26 06:32:35 PM
[1,3]<stdout>:ls DATAROOT
[1,3]<stdout>:ILSVRC2012_img_train.tar  im2rec.py  train.idx	train.rec  val.idx  val.rec
[1,3]<stdout>:ILSVRC2012_img_val.tar	  train      train.lst	val	   val.lst
[1,3]<stdout>:running benchmark
[1,3]<stdout>:./ompi_bind_DGX2.sh python train_imagenet.py --gpus 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 --batch-size 208 --kv-store horovod --lr 10 --lr-step-epochs pow2 --lars-eta 0.001 --label-smoothing 0.1 --wd 0.0002 --warmup-epochs 5 --eval-period 4 --eval-offset 3 --optimizer sgdwfastlars --network resnet-v1b-normconv-fl --num-layers 50 --num-epochs 72 --accuracy-threshold 0.759 --seed 8504306 --dtype float16 --use-dali --disp-batches 20 --image-shape 4,224,224 --fuse-bn-relu 1 --fuse-bn-add-relu 1 --min-random-area 0.05 --max-random-area 1.0 --conv-algo 1 --force-tensor-core 1 --input-layout NHWC --conv-layout NHWC --batchnorm-layout NHWC --pooling-layout NHWC --batchnorm-mom 0.9 --batchnorm-eps 1e-5 --data-train /data/train.rec --data-train-idx /data/train.idx --data-val /data/val.rec --data-val-idx /data/val.idx --dali-prefetch-queue 5 --dali-nvjpeg-memory-padding 256 --dali-threads 3 --dali-cache-size 0 --dali-roi-decode 1; ret_code=0
[1,4]<stdout>:using config_DGX2.sh
[1,4]<stdout>:STARTING TIMING RUN AT 2020-03-26 06:32:35 PM
[1,4]<stdout>:ls DATAROOT
[1,4]<stdout>:ILSVRC2012_img_train.tar  im2rec.py  train.idx	train.rec  val.idx  val.rec
[1,4]<stdout>:ILSVRC2012_img_val.tar	  train      train.lst	val	   val.lst
[1,4]<stdout>:running benchmark
[1,4]<stdout>:./ompi_bind_DGX2.sh python train_imagenet.py --gpus 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 --batch-size 208 --kv-store horovod --lr 10 --lr-step-epochs pow2 --lars-eta 0.001 --label-smoothing 0.1 --wd 0.0002 --warmup-epochs 5 --eval-period 4 --eval-offset 3 --optimizer sgdwfastlars --network resnet-v1b-normconv-fl --num-layers 50 --num-epochs 72 --accuracy-threshold 0.759 --seed 8504306 --dtype float16 --use-dali --disp-batches 20 --image-shape 4,224,224 --fuse-bn-relu 1 --fuse-bn-add-relu 1 --min-random-area 0.05 --max-random-area 1.0 --conv-algo 1 --force-tensor-core 1 --input-layout NHWC --conv-layout NHWC --batchnorm-layout NHWC --pooling-layout NHWC --batchnorm-mom 0.9 --batchnorm-eps 1e-5 --data-train /data/train.rec --data-train-idx /data/train.idx --data-val /data/val.rec --data-val-idx /data/val.idx --dali-prefetch-queue 5 --dali-nvjpeg-memory-padding 256 --dali-threads 3 --dali-cache-size 0 --dali-roi-decode 1; ret_code=0
[1,5]<stdout>:using config_DGX2.sh
[1,5]<stdout>:STARTING TIMING RUN AT 2020-03-26 06:32:35 PM
[1,5]<stdout>:ls DATAROOT
[1,5]<stdout>:ILSVRC2012_img_train.tar  im2rec.py  train.idx	train.rec  val.idx  val.rec
[1,5]<stdout>:ILSVRC2012_img_val.tar	  train      train.lst	val	   val.lst
[1,5]<stdout>:running benchmark
[1,6]<stdout>:using config_DGX2.sh
[1,6]<stdout>:STARTING TIMING RUN AT 2020-03-26 06:32:35 PM
[1,6]<stdout>:ls DATAROOT
[1,6]<stdout>:ILSVRC2012_img_train.tar  im2rec.py  train.idx	train.rec  val.idx  val.rec
[1,6]<stdout>:ILSVRC2012_img_val.tar	  train      train.lst	val	   val.lst
[1,6]<stdout>:running benchmark
[1,6]<stdout>:./ompi_bind_DGX2.sh python train_imagenet.py --gpus 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 --batch-size 208 --kv-store horovod --lr 10 --lr-step-epochs pow2 --lars-eta 0.001 --label-smoothing 0.1 --wd 0.0002 --warmup-epochs 5 --eval-period 4 --eval-offset 3 --optimizer sgdwfastlars --network resnet-v1b-normconv-fl --num-layers 50 --num-epochs 72 --accuracy-threshold 0.759 --seed 8504306 --dtype float16 --use-dali --disp-batches 20 --image-shape 4,224,224 --fuse-bn-relu 1 --fuse-bn-add-relu 1 --min-random-area 0.05 --max-random-area 1.0 --conv-algo 1 --force-tensor-core 1 --input-layout NHWC --conv-layout NHWC --batchnorm-layout NHWC --pooling-layout NHWC --batchnorm-mom 0.9 --batchnorm-eps 1e-5 --data-train /data/train.rec --data-train-idx /data/train.idx --data-val /data/val.rec --data-val-idx /data/val.idx --dali-prefetch-queue 5 --dali-nvjpeg-memory-padding 256 --dali-threads 3 --dali-cache-size 0 --dali-roi-decode 1; ret_code=0
[1,7]<stdout>:using config_DGX2.sh
[1,7]<stdout>:STARTING TIMING RUN AT 2020-03-26 06:32:35 PM
[1,7]<stdout>:ls DATAROOT
[1,7]<stdout>:ILSVRC2012_img_train.tar  im2rec.py  train.idx	train.rec  val.idx  val.rec
[1,7]<stdout>:ILSVRC2012_img_val.tar	  train      train.lst	val	   val.lst
[1,7]<stdout>:running benchmark
[1,8]<stdout>:using config_DGX2.sh
[1,8]<stdout>:STARTING TIMING RUN AT 2020-03-26 06:32:35 PM
[1,8]<stdout>:ls DATAROOT
[1,9]<stdout>:using config_DGX2.sh
[1,10]<stdout>:using config_DGX2.sh
[1,11]<stdout>:using config_DGX2.sh
[1,12]<stdout>:using config_DGX2.sh
[1,13]<stdout>:using config_DGX2.sh
[1,14]<stdout>:using config_DGX2.sh
[1,5]<stdout>:./ompi_bind_DGX2.sh python train_imagenet.py --gpus 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 --batch-size 208 --kv-store horovod --lr 10 --lr-step-epochs pow2 --lars-eta 0.001 --label-smoothing 0.1 --wd 0.0002 --warmup-epochs 5 --eval-period 4 --eval-offset 3 --optimizer sgdwfastlars --network resnet-v1b-normconv-fl --num-layers 50 --num-epochs 72 --accuracy-threshold 0.759 --seed 8504306 --dtype float16 --use-dali --disp-batches 20 --image-shape 4,224,224 --fuse-bn-relu 1 --fuse-bn-add-relu 1 --min-random-area 0.05 --max-random-area 1.0 --conv-algo 1 --force-tensor-core 1 --input-layout NHWC --conv-layout NHWC --batchnorm-layout NHWC --pooling-layout NHWC --batchnorm-mom 0.9 --batchnorm-eps 1e-5 --data-train /data/train.rec --data-train-idx /data/train.idx --data-val /data/val.rec --data-val-idx /data/val.idx --dali-prefetch-queue 5 --dali-nvjpeg-memory-padding 256 --dali-threads 3 --dali-cache-size 0 --dali-roi-decode 1; ret_code=0
[1,8]<stdout>:ILSVRC2012_img_train.tar  im2rec.py  train.idx	train.rec  val.idx  val.rec
[1,8]<stdout>:ILSVRC2012_img_val.tar	  train      train.lst	val	   val.lst
[1,8]<stdout>:running benchmark
[1,9]<stdout>:STARTING TIMING RUN AT 2020-03-26 06:32:35 PM
[1,9]<stdout>:ls DATAROOT
[1,10]<stdout>:STARTING TIMING RUN AT 2020-03-26 06:32:35 PM
[1,15]<stdout>:using config_DGX2.sh
[1,10]<stdout>:ls DATAROOT
[1,7]<stdout>:./ompi_bind_DGX2.sh python train_imagenet.py --gpus 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 --batch-size 208 --kv-store horovod --lr 10 --lr-step-epochs pow2 --lars-eta 0.001 --label-smoothing 0.1 --wd 0.0002 --warmup-epochs 5 --eval-period 4 --eval-offset 3 --optimizer sgdwfastlars --network resnet-v1b-normconv-fl --num-layers 50 --num-epochs 72 --accuracy-threshold 0.759 --seed 8504306 --dtype float16 --use-dali --disp-batches 20 --image-shape 4,224,224 --fuse-bn-relu 1 --fuse-bn-add-relu 1 --min-random-area 0.05 --max-random-area 1.0 --conv-algo 1 --force-tensor-core 1 --input-layout NHWC --conv-layout NHWC --batchnorm-layout NHWC --pooling-layout NHWC --batchnorm-mom 0.9 --batchnorm-eps 1e-5 --data-train /data/train.rec --data-train-idx /data/train.idx --data-val /data/val.rec --data-val-idx /data/val.idx --dali-prefetch-queue 5 --dali-nvjpeg-memory-padding 256 --dali-threads 3 --dali-cache-size 0 --dali-roi-decode 1; ret_code=0
[1,9]<stdout>:ILSVRC2012_img_train.tar  im2rec.py  train.idx	train.rec  val.idx  val.rec
[1,9]<stdout>:ILSVRC2012_img_val.tar	  train      train.lst	val	   val.lst
[1,9]<stdout>:running benchmark
[1,11]<stdout>:STARTING TIMING RUN AT 2020-03-26 06:32:35 PM
[1,10]<stdout>:ILSVRC2012_img_train.tar  im2rec.py  train.idx	train.rec  val.idx  val.rec
[1,10]<stdout>:ILSVRC2012_img_val.tar	  train      train.lst	val	   val.lst
[1,11]<stdout>:ls DATAROOT
[1,10]<stdout>:running benchmark
[1,8]<stdout>:./ompi_bind_DGX2.sh python train_imagenet.py --gpus 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 --batch-size 208 --kv-store horovod --lr 10 --lr-step-epochs pow2 --lars-eta 0.001 --label-smoothing 0.1 --wd 0.0002 --warmup-epochs 5 --eval-period 4 --eval-offset 3 --optimizer sgdwfastlars --network resnet-v1b-normconv-fl --num-layers 50 --num-epochs 72 --accuracy-threshold 0.759 --seed 8504306 --dtype float16 --use-dali --disp-batches 20 --image-shape 4,224,224 --fuse-bn-relu 1 --fuse-bn-add-relu 1 --min-random-area 0.05 --max-random-area 1.0 --conv-algo 1 --force-tensor-core 1 --input-layout NHWC --conv-layout NHWC --batchnorm-layout NHWC --pooling-layout NHWC --batchnorm-mom 0.9 --batchnorm-eps 1e-5 --data-train /data/train.rec --data-train-idx /data/train.idx --data-val /data/val.rec --data-val-idx /data/val.idx --dali-prefetch-queue 5 --dali-nvjpeg-memory-padding 256 --dali-threads 3 --dali-cache-size 0 --dali-roi-decode 1; ret_code=0
[1,12]<stdout>:STARTING TIMING RUN AT 2020-03-26 06:32:35 PM
[1,12]<stdout>:ls DATAROOT
[1,11]<stdout>:ILSVRC2012_img_train.tar  im2rec.py  train.idx	train.rec  val.idx  val.rec
[1,11]<stdout>:ILSVRC2012_img_val.tar	  train      train.lst	val	   val.lst
[1,11]<stdout>:running benchmark
[1,13]<stdout>:STARTING TIMING RUN AT 2020-03-26 06:32:35 PM
[1,12]<stdout>:ILSVRC2012_img_train.tar  im2rec.py  train.idx	train.rec  val.idx  val.rec
[1,12]<stdout>:ILSVRC2012_img_val.tar	  train      train.lst	val	   val.lst
[1,13]<stdout>:ls DATAROOT
[1,12]<stdout>:running benchmark
[1,9]<stdout>:./ompi_bind_DGX2.sh python train_imagenet.py --gpus 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 --batch-size 208 --kv-store horovod --lr 10 --lr-step-epochs pow2 --lars-eta 0.001 --label-smoothing 0.1 --wd 0.0002 --warmup-epochs 5 --eval-period 4 --eval-offset 3 --optimizer sgdwfastlars --network resnet-v1b-normconv-fl --num-layers 50 --num-epochs 72 --accuracy-threshold 0.759 --seed 8504306 --dtype float16 --use-dali --disp-batches 20 --image-shape 4,224,224 --fuse-bn-relu 1 --fuse-bn-add-relu 1 --min-random-area 0.05 --max-random-area 1.0 --conv-algo 1 --force-tensor-core 1 --input-layout NHWC --conv-layout NHWC --batchnorm-layout NHWC --pooling-layout NHWC --batchnorm-mom 0.9 --batchnorm-eps 1e-5 --data-train /data/train.rec --data-train-idx /data/train.idx --data-val /data/val.rec --data-val-idx /data/val.idx --dali-prefetch-queue 5 --dali-nvjpeg-memory-padding 256 --dali-threads 3 --dali-cache-size 0 --dali-roi-decode 1; ret_code=0
[1,14]<stdout>:STARTING TIMING RUN AT 2020-03-26 06:32:35 PM
[1,10]<stdout>:./ompi_bind_DGX2.sh python train_imagenet.py --gpus 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 --batch-size 208 --kv-store horovod --lr 10 --lr-step-epochs pow2 --lars-eta 0.001 --label-smoothing 0.1 --wd 0.0002 --warmup-epochs 5 --eval-period 4 --eval-offset 3 --optimizer sgdwfastlars --network resnet-v1b-normconv-fl --num-layers 50 --num-epochs 72 --accuracy-threshold 0.759 --seed 8504306 --dtype float16 --use-dali --disp-batches 20 --image-shape 4,224,224 --fuse-bn-relu 1 --fuse-bn-add-relu 1 --min-random-area 0.05 --max-random-area 1.0 --conv-algo 1 --force-tensor-core 1 --input-layout NHWC --conv-layout NHWC --batchnorm-layout NHWC --pooling-layout NHWC --batchnorm-mom 0.9 --batchnorm-eps 1e-5 --data-train /data/train.rec --data-train-idx /data/train.idx --data-val /data/val.rec --data-val-idx /data/val.idx --dali-prefetch-queue 5 --dali-nvjpeg-memory-padding 256 --dali-threads 3 --dali-cache-size 0 --dali-roi-decode 1; ret_code=0
[1,14]<stdout>:ls DATAROOT
[1,13]<stdout>:ILSVRC2012_img_train.tar  im2rec.py  train.idx	train.rec  val.idx  val.rec
[1,13]<stdout>:ILSVRC2012_img_val.tar	  train      train.lst	val	   val.lst
[1,13]<stdout>:running benchmark
[1,11]<stdout>:./ompi_bind_DGX2.sh python train_imagenet.py --gpus 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 --batch-size 208 --kv-store horovod --lr 10 --lr-step-epochs pow2 --lars-eta 0.001 --label-smoothing 0.1 --wd 0.0002 --warmup-epochs 5 --eval-period 4 --eval-offset 3 --optimizer sgdwfastlars --network resnet-v1b-normconv-fl --num-layers 50 --num-epochs 72 --accuracy-threshold 0.759 --seed 8504306 --dtype float16 --use-dali --disp-batches 20 --image-shape 4,224,224 --fuse-bn-relu 1 --fuse-bn-add-relu 1 --min-random-area 0.05 --max-random-area 1.0 --conv-algo 1 --force-tensor-core 1 --input-layout NHWC --conv-layout NHWC --batchnorm-layout NHWC --pooling-layout NHWC --batchnorm-mom 0.9 --batchnorm-eps 1e-5 --data-train /data/train.rec --data-train-idx /data/train.idx --data-val /data/val.rec --data-val-idx /data/val.idx --dali-prefetch-queue 5 --dali-nvjpeg-memory-padding 256 --dali-threads 3 --dali-cache-size 0 --dali-roi-decode 1; ret_code=0
[1,15]<stdout>:STARTING TIMING RUN AT 2020-03-26 06:32:35 PM
[1,14]<stdout>:ILSVRC2012_img_train.tar  im2rec.py  train.idx	train.rec  val.idx  val.rec
[1,14]<stdout>:ILSVRC2012_img_val.tar	  train      train.lst	val	   val.lst
[1,15]<stdout>:ls DATAROOT
[1,14]<stdout>:running benchmark
[1,12]<stdout>:./ompi_bind_DGX2.sh python train_imagenet.py --gpus 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 --batch-size 208 --kv-store horovod --lr 10 --lr-step-epochs pow2 --lars-eta 0.001 --label-smoothing 0.1 --wd 0.0002 --warmup-epochs 5 --eval-period 4 --eval-offset 3 --optimizer sgdwfastlars --network resnet-v1b-normconv-fl --num-layers 50 --num-epochs 72 --accuracy-threshold 0.759 --seed 8504306 --dtype float16 --use-dali --disp-batches 20 --image-shape 4,224,224 --fuse-bn-relu 1 --fuse-bn-add-relu 1 --min-random-area 0.05 --max-random-area 1.0 --conv-algo 1 --force-tensor-core 1 --input-layout NHWC --conv-layout NHWC --batchnorm-layout NHWC --pooling-layout NHWC --batchnorm-mom 0.9 --batchnorm-eps 1e-5 --data-train /data/train.rec --data-train-idx /data/train.idx --data-val /data/val.rec --data-val-idx /data/val.idx --dali-prefetch-queue 5 --dali-nvjpeg-memory-padding 256 --dali-threads 3 --dali-cache-size 0 --dali-roi-decode 1; ret_code=0
[1,15]<stdout>:ILSVRC2012_img_train.tar  im2rec.py  train.idx	train.rec  val.idx  val.rec
[1,15]<stdout>:ILSVRC2012_img_val.tar	  train      train.lst	val	   val.lst
[1,15]<stdout>:running benchmark
[1,13]<stdout>:./ompi_bind_DGX2.sh python train_imagenet.py --gpus 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 --batch-size 208 --kv-store horovod --lr 10 --lr-step-epochs pow2 --lars-eta 0.001 --label-smoothing 0.1 --wd 0.0002 --warmup-epochs 5 --eval-period 4 --eval-offset 3 --optimizer sgdwfastlars --network resnet-v1b-normconv-fl --num-layers 50 --num-epochs 72 --accuracy-threshold 0.759 --seed 8504306 --dtype float16 --use-dali --disp-batches 20 --image-shape 4,224,224 --fuse-bn-relu 1 --fuse-bn-add-relu 1 --min-random-area 0.05 --max-random-area 1.0 --conv-algo 1 --force-tensor-core 1 --input-layout NHWC --conv-layout NHWC --batchnorm-layout NHWC --pooling-layout NHWC --batchnorm-mom 0.9 --batchnorm-eps 1e-5 --data-train /data/train.rec --data-train-idx /data/train.idx --data-val /data/val.rec --data-val-idx /data/val.idx --dali-prefetch-queue 5 --dali-nvjpeg-memory-padding 256 --dali-threads 3 --dali-cache-size 0 --dali-roi-decode 1; ret_code=0
[1,14]<stdout>:./ompi_bind_DGX2.sh python train_imagenet.py --gpus 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 --batch-size 208 --kv-store horovod --lr 10 --lr-step-epochs pow2 --lars-eta 0.001 --label-smoothing 0.1 --wd 0.0002 --warmup-epochs 5 --eval-period 4 --eval-offset 3 --optimizer sgdwfastlars --network resnet-v1b-normconv-fl --num-layers 50 --num-epochs 72 --accuracy-threshold 0.759 --seed 8504306 --dtype float16 --use-dali --disp-batches 20 --image-shape 4,224,224 --fuse-bn-relu 1 --fuse-bn-add-relu 1 --min-random-area 0.05 --max-random-area 1.0 --conv-algo 1 --force-tensor-core 1 --input-layout NHWC --conv-layout NHWC --batchnorm-layout NHWC --pooling-layout NHWC --batchnorm-mom 0.9 --batchnorm-eps 1e-5 --data-train /data/train.rec --data-train-idx /data/train.idx --data-val /data/val.rec --data-val-idx /data/val.idx --dali-prefetch-queue 5 --dali-nvjpeg-memory-padding 256 --dali-threads 3 --dali-cache-size 0 --dali-roi-decode 1; ret_code=0
[1,15]<stdout>:./ompi_bind_DGX2.sh python train_imagenet.py --gpus 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 --batch-size 208 --kv-store horovod --lr 10 --lr-step-epochs pow2 --lars-eta 0.001 --label-smoothing 0.1 --wd 0.0002 --warmup-epochs 5 --eval-period 4 --eval-offset 3 --optimizer sgdwfastlars --network resnet-v1b-normconv-fl --num-layers 50 --num-epochs 72 --accuracy-threshold 0.759 --seed 8504306 --dtype float16 --use-dali --disp-batches 20 --image-shape 4,224,224 --fuse-bn-relu 1 --fuse-bn-add-relu 1 --min-random-area 0.05 --max-random-area 1.0 --conv-algo 1 --force-tensor-core 1 --input-layout NHWC --conv-layout NHWC --batchnorm-layout NHWC --pooling-layout NHWC --batchnorm-mom 0.9 --batchnorm-eps 1e-5 --data-train /data/train.rec --data-train-idx /data/train.idx --data-val /data/val.rec --data-val-idx /data/val.idx --dali-prefetch-queue 5 --dali-nvjpeg-memory-padding 256 --dali-threads 3 --dali-cache-size 0 --dali-roi-decode 1; ret_code=0
--------------------------------------------------------------------------
WARNING: One or more nonexistent OpenFabrics devices/ports were
specified:

  Host:                 seville
  MCA parameter:        mca_btl_if_include
  Nonexistent entities: mlx5_9

These entities will be ignored.  You can disable this warning by
setting the btl_openib_warn_nonexistent_if MCA parameter to 0.
--------------------------------------------------------------------------
[1,13]<stdout>::::MLL 1585247569.660 init_start: {"metadata": {"lineno": 83, "file": "train_imagenet.py"}, "value": null}
[1,11]<stdout>::::MLL 1585247569.661 init_start: {"value": null, "metadata": {"lineno": 83, "file": "train_imagenet.py"}}
[1,9]<stdout>::::MLL 1585247569.662 init_start: {"value": null, "metadata": {"lineno": 83, "file": "train_imagenet.py"}}
[1,10]<stdout>::::MLL 1585247569.662 init_start: {"metadata": {"file": "train_imagenet.py", "lineno": 83}, "value": null}
[1,14]<stdout>::::MLL 1585247569.662 init_start: {"value": null, "metadata": {"lineno": 83, "file": "train_imagenet.py"}}
[1,12]<stdout>::::MLL 1585247569.662 init_start: {"metadata": {"file": "train_imagenet.py", "lineno": 83}, "value": null}
[1,15]<stdout>::::MLL 1585247569.662 init_start: {"value": null, "metadata": {"lineno": 83, "file": "train_imagenet.py"}}
[1,1]<stdout>::::MLL 1585247569.662 init_start: {"value": null, "metadata": {"lineno": 83, "file": "train_imagenet.py"}}
[1,4]<stdout>::::MLL 1585247569.663 init_start: {"metadata": {"lineno": 83, "file": "train_imagenet.py"}, "value": null}
[1,2]<stdout>::::MLL 1585247569.663 init_start: {"value": null, "metadata": {"lineno": 83, "file": "train_imagenet.py"}}
[1,5]<stdout>::::MLL 1585247569.663 init_start: {"metadata": {"lineno": 83, "file": "train_imagenet.py"}, "value": null}
[1,3]<stdout>::::MLL 1585247569.663 init_start: {"metadata": {"file": "train_imagenet.py", "lineno": 83}, "value": null}
[1,6]<stdout>::::MLL 1585247569.663 init_start: {"metadata": {"file": "train_imagenet.py", "lineno": 83}, "value": null}
[1,7]<stdout>::::MLL 1585247569.663 init_start: {"metadata": {"file": "train_imagenet.py", "lineno": 83}, "value": null}
[1,8]<stdout>::::MLL 1585247569.663 init_start: {"value": null, "metadata": {"file": "train_imagenet.py", "lineno": 83}}
[1,0]<stdout>::::MLL 1585247569.666 init_start: {"value": null, "metadata": {"file": "train_imagenet.py", "lineno": 83}}
[seville:00188] 15 more processes have sent help message help-mpi-btl-openib.txt / nonexistent port
[seville:00188] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[1,0]<stdout>:WARNING: Log validation: Key "model_hp_initial_shape" is not in known resnet keys.
[1,0]<stdout>::::MLL 1585247584.272 model_hp_initial_shape: {"value": [4, 224, 224], "metadata": {"file": "symbols/resnet-v1b-normconv-fl.py", "lineno": 266}}
[1,0]<stdout>:WARNING: Log validation: Key "model_hp_shorcut_add" is not in known resnet keys.
[1,0]<stdout>::::MLL 1585247584.273 model_hp_shorcut_add: {"value": null, "metadata": {"file": "symbols/resnet-v1b-normconv-fl.py", "lineno": 192}}
[1,0]<stdout>:WARNING: Log validation: Key "model_hp_shorcut_add" is not in known resnet keys.
[1,0]<stdout>::::MLL 1585247584.274 model_hp_shorcut_add: {"value": null, "metadata": {"file": "symbols/resnet-v1b-normconv-fl.py", "lineno": 192}}
[1,0]<stdout>:WARNING: Log validation: Key "model_hp_shorcut_add" is not in known resnet keys.
[1,0]<stdout>::::MLL 1585247584.275 model_hp_shorcut_add: {"value": null, "metadata": {"file": "symbols/resnet-v1b-normconv-fl.py", "lineno": 192}}
[1,0]<stdout>:WARNING: Log validation: Key "model_hp_shorcut_add" is not in known resnet keys.
[1,0]<stdout>::::MLL 1585247584.276 model_hp_shorcut_add: {"value": null, "metadata": {"file": "symbols/resnet-v1b-normconv-fl.py", "lineno": 192}}
[1,0]<stdout>:WARNING: Log validation: Key "model_hp_shorcut_add" is not in known resnet keys.
[1,0]<stdout>::::MLL 1585247584.277 model_hp_shorcut_add: {"value": null, "metadata": {"file": "symbols/resnet-v1b-normconv-fl.py", "lineno": 192}}
[1,0]<stdout>:WARNING: Log validation: Key "model_hp_shorcut_add" is not in known resnet keys.
[1,0]<stdout>::::MLL 1585247584.278 model_hp_shorcut_add: {"value": null, "metadata": {"file": "symbols/resnet-v1b-normconv-fl.py", "lineno": 192}}
[1,0]<stdout>:WARNING: Log validation: Key "model_hp_shorcut_add" is not in known resnet keys.
[1,0]<stdout>::::MLL 1585247584.279 model_hp_shorcut_add: {"value": null, "metadata": {"file": "symbols/resnet-v1b-normconv-fl.py", "lineno": 192}}
[1,0]<stdout>:WARNING: Log validation: Key "model_hp_shorcut_add" is not in known resnet keys.
[1,0]<stdout>::::MLL 1585247584.280 model_hp_shorcut_add: {"value": null, "metadata": {"file": "symbols/resnet-v1b-normconv-fl.py", "lineno": 192}}
[1,0]<stdout>:WARNING: Log validation: Key "model_hp_shorcut_add" is not in known resnet keys.
[1,0]<stdout>::::MLL 1585247584.281 model_hp_shorcut_add: {"value": null, "metadata": {"file": "symbols/resnet-v1b-normconv-fl.py", "lineno": 192}}
[1,0]<stdout>:WARNING: Log validation: Key "model_hp_shorcut_add" is not in known resnet keys.
[1,0]<stdout>::::MLL 1585247584.281 model_hp_shorcut_add: {"value": null, "metadata": {"file": "symbols/resnet-v1b-normconv-fl.py", "lineno": 192}}
[1,0]<stdout>:WARNING: Log validation: Key "model_hp_shorcut_add" is not in known resnet keys.
[1,0]<stdout>::::MLL 1585247584.282 model_hp_shorcut_add: {"value": null, "metadata": {"file": "symbols/resnet-v1b-normconv-fl.py", "lineno": 192}}
[1,0]<stdout>:WARNING: Log validation: Key "model_hp_shorcut_add" is not in known resnet keys.
[1,0]<stdout>::::MLL 1585247584.283 model_hp_shorcut_add: {"value": null, "metadata": {"file": "symbols/resnet-v1b-normconv-fl.py", "lineno": 192}}
[1,0]<stdout>:WARNING: Log validation: Key "model_hp_shorcut_add" is not in known resnet keys.
[1,0]<stdout>::::MLL 1585247584.284 model_hp_shorcut_add: {"value": null, "metadata": {"file": "symbols/resnet-v1b-normconv-fl.py", "lineno": 192}}
[1,0]<stdout>:WARNING: Log validation: Key "model_hp_shorcut_add" is not in known resnet keys.
[1,0]<stdout>::::MLL 1585247584.285 model_hp_shorcut_add: {"value": null, "metadata": {"file": "symbols/resnet-v1b-normconv-fl.py", "lineno": 192}}
[1,0]<stdout>:WARNING: Log validation: Key "model_hp_shorcut_add" is not in known resnet keys.
[1,0]<stdout>::::MLL 1585247584.286 model_hp_shorcut_add: {"value": null, "metadata": {"file": "symbols/resnet-v1b-normconv-fl.py", "lineno": 192}}
[1,0]<stdout>:WARNING: Log validation: Key "model_hp_shorcut_add" is not in known resnet keys.
[1,0]<stdout>::::MLL 1585247584.287 model_hp_shorcut_add: {"value": null, "metadata": {"file": "symbols/resnet-v1b-normconv-fl.py", "lineno": 192}}
[1,0]<stdout>:WARNING: Log validation: Key "model_hp_final_shape" is not in known resnet keys.
[1,0]<stdout>::::MLL 1585247584.287 model_hp_final_shape: {"value": 1000, "metadata": {"file": "symbols/resnet-v1b-normconv-fl.py", "lineno": 309}}
[1,0]<stdout>:WARNING: Log validation: Key "model_hp_loss_fn" is not in known resnet keys.
[1,0]<stdout>::::MLL 1585247584.288 model_hp_loss_fn: {"value": "categorical_cross_entropy", "metadata": {"file": "symbols/resnet-v1b-normconv-fl.py", "lineno": 320}}
[1,0]<stdout>::::MLL 1585247584.290 model_bn_span: {"value": 208, "metadata": {"file": "common/dali.py", "lineno": 229}}