IBM/FfDL

Learner pod stuck at training step 100 using custom image with TF Object Detection

falkmatt opened this issue · 5 comments

System information

BAZEL_VERSION=0.15.0
CUDA_VERSION=9.0.176
TensorFlow version: tensorflow:1.10.0-devel-gpu
GPU_COUNT=1.000000
GPU model and memory: Tesla K80 12GB
Kubernetes version: 1.10.11_1536

Problem

I am submitting training jobs with the FfDL CLI using a custom image that contains the TF Object Detection API. After 100-200 steps of training the Faster R-CNN model, the training run gets stuck while the learner pod keeps running. I am using 1 learner and request 1 GPU and 9 GB of memory in the manifest.yaml.
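For context, the resource request in the manifest looks roughly like the sketch below. The field names follow the FfDL sample manifests; the name, cpus value, framework version, and command are placeholders, not the actual values from this setup.

```
# Sketch of a manifest.yml requesting 1 learner with 1 GPU and 9 GB of memory.
# Field names follow the FfDL sample manifests; name, cpus, framework version,
# and command are placeholders.
cat > manifest.yml <<'EOF'
name: tf-object-detection
description: Faster R-CNN training with the TF Object Detection API
version: "1.0"
gpus: 1
cpus: 4
memory: 9Gb
learners: 1
framework:
  name: tensorflow
  version: "1.10.0-devel-gpu"
  command: >
    python object_detection/model_main.py
      --model_dir=$RESULT_DIR/training
      --pipeline_config_path=faster_rcnn.config
EOF
```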

  1. Do you know what could cause the learner to get stuck?
  2. What is the best way to stop the training run?
    So far, just deleting the pod leaves the learner stuck in status "Terminating". Deleting the learner pod with --grace-period=0 puts the worker node into status "Critical" and I have to reboot the cluster.
  3. When I delete the learner pod, the corresponding lhelper and jobmonitor pods continue to run and fill up the allocated pods in the cluster. Is it intended that the helper and jobmonitor pods keep running even though the learner pod is deleted?
    I appreciate any help!

Maybe a resource problem of some kind? Are you monitoring the logs?

The CLI tool has a delete command; that would be the best way to stop a job. The second-best way is to try something like kubectl delete pod,pvc,deploy,svc,statefulset,secrets,configmap --selector training_id=training-BzIb89qzR.
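For example, assuming the CLI binary is invoked as $CLI_CMD (as in the FfDL setup instructions) and using the training id from above:

```
# Preferred: let FfDL tear down the job and its resources itself
$CLI_CMD delete training-BzIb89qzR

# Fallback: delete everything labeled with the training id directly
kubectl delete pod,pvc,deploy,svc,statefulset,secrets,configmap \
  --selector training_id=training-BzIb89qzR
```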

Can you shell into the learner container (kubectl exec -it <name-of-learner-pod> bash) and look around? Maybe do an ls -l $JOB_STATE_DIR? And the LOG_DIR too.
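Something along these lines (the pod name is a placeholder; $JOB_STATE_DIR and $LOG_DIR are the environment variables set in the learner):

```
# From outside the cluster: open a shell in the learner container
kubectl exec -it <name-of-learner-pod> -- bash

# Inside the container: check the job-state and log directories
ls -l $JOB_STATE_DIR
ls -l $LOG_DIR
```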

I shelled into the learner container and found nothing suspicious there. kubectl describe pod <name-of-learner-pod> doesn't really show anything unusual either.

I am constantly monitoring the logs and the training still hangs at a certain step (now 1100, because I lowered the frequency of saving .ckpt files) and has not progressed for hours.
0.021782178, DetectionBoxes_Precision/mAP@.50IOU = 0.070287526, DetectionBoxes_Precision/mAP@.75IOU = 0.01808199, DetectionBoxes_Recall/AR@1 = 0.064615384, DetectionBoxes_Recall/AR@10 = 0.124615386, DetectionBoxes_Recall/AR@100 = 0.16307692, DetectionBoxes_Recall/AR@100 (large) = 0.29642856, DetectionBoxes_Recall/AR@100 (medium) = 0.06666667, DetectionBoxes_Recall/AR@100 (small) = 0.05, Loss/BoxClassifierLoss/classification_loss = 0.048917063, Loss/BoxClassifierLoss/localization_loss = 0.047601078, Loss/RPNLoss/localization_loss = 0.19058931, Loss/RPNLoss/objectness_loss = 0.21819682, Loss/total_loss = 0.5053042, global_step = 1100, learning_rate = 0.0003, loss = 0.5053042
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 1100: /mnt/results/tuev-od-output/training-LU2Xd7PmR/training/model.ckpt-1100
I1206 21:27:24.838608 140339569706752 tf_logging.py:115] Saving 'checkpoint_path' summary for global step 1100: /mnt/results/tuev-od-output/training-LU2Xd7PmR/training/model.ckpt-1100
INFO:tensorflow:global_step/sec: 0.199728
I1206 21:27:25.806301 140339569706752 tf_logging.py:115] global_step/sec: 0.199728
INFO:tensorflow:loss = 0.12553665, step = 1100 (125.171 sec)
I1206 21:27:25.807365 140339569706752 tf_logging.py:115] loss = 0.12553665, step = 1100 (125.171 sec)

It could be because our default open source version of FfDL sets very small resource requests for the helper pods. You can modify the helm chart values at https://github.com/IBM/FfDL/blob/master/values.yaml#L30-L32 to set milli_cpu = 500 and mem_in_mb = 1500.

Then run helm upgrade <your ffdl chart name> .
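Concretely, the change looks something like this, assuming you run the upgrade from the chart directory (the release name below is a placeholder):

```
# In values.yaml (around lines 30-32), raise the helper pod resources, e.g.:
#   milli_cpu: 500
#   mem_in_mb: 1500
# then roll the change out to the running release:
helm upgrade <your-ffdl-release-name> .
```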

I updated the helm charts and it seems to work; my jobs are running through again. However, if I increase the save_checkpoint_step (anything > every 50th step), the pod seems to require too many resources and the job gets killed, so the setup is not optimal. The same thing happens if I increase milli_cpu and mem_in_mb further.

We are running the following bare metal server from IBM Cloud, so I doubt it's a resource problem of the cluster itself; it looks more like a resource allocation problem in the configs.
`16 Cores 128GB RAM
Bare Metal
mg1c.16x128

1 K80 GPU cards
2TB SATA primary disk
960GB SSD secondary disk
10Gbps bonded network speed`

Any thoughts that could help?

Closing this issue, since it was an IKS issue, not a FfDL issue. Thanks for your help!