google/caliban

Abrupt exit when training a model

dthiagarajan opened this issue · 7 comments

Hi all,

Thanks for putting this package up! I really love the idea behind it and can't wait to integrate it more tightly with my workflow!

I'm trying to integrate Caliban with one of the smaller projects I'm working on here, but I'm having some trouble getting things to run. I added the requirements.txt file as instructed, but when I run the training script, the process exits abruptly with no visible error.

I'm using a Mac, and my data is stored at /Users/dilip.thiagarajan/data. Here's exactly what I did:

  • In that repository, I first tried running:
caliban run --nogpu --docker_run_args "--volume /Users/dilip.thiagarajan/data:/data" train.py -- --model_name resnet18 --projection_dim 64 --fast_dev_run True --download --data_dir /data

When I run this from the terminal, I see the following output:

dilip.thiagarajan simclr_pytorch % caliban run --nogpu --docker_run_args "--volume /Users/dilip.thiagarajan/data:/data" train.py -- --model_name resnet18 --projection_dim 64 --fast_dev_run True --download --data_dir /data                    
I0624 22:07:53.246673 4578139584 docker.py:614] Running command: docker build --rm -f- /Users/dilip.thiagarajan/code/simclr_pytorch
Sending build context to Docker daemon  110.6kB

Step 1/11 : FROM gcr.io/blueshift-playground/blueshift:cpu
 ---> fafdb20241ad
Step 2/11 : RUN [ $(getent group 20) ] || groupadd --gid 20 20
 ---> Using cache
 ---> 6b724e6c1e38
Step 3/11 : RUN useradd --no-log-init --no-create-home -u 502 -g 20 --shell /bin/bash dilip.thiagarajan
 ---> Using cache
 ---> 251bdcb68ec9
Step 4/11 : RUN mkdir -m 777 /usr/app /.creds /home/dilip.thiagarajan
 ---> Using cache
 ---> d2952e2052e3
Step 5/11 : ENV HOME=/home/dilip.thiagarajan
 ---> Using cache
 ---> d8c700640045
Step 6/11 : WORKDIR /usr/app
 ---> Using cache
 ---> 8d6fd0c9f3f4
Step 7/11 : USER 502:20
 ---> Using cache
 ---> 293fcdb3733f
Step 8/11 : COPY --chown=502:20 requirements.txt /usr/app
 ---> Using cache
 ---> 9074b050a5de
Step 9/11 : RUN /bin/bash -c "pip install --no-cache-dir -r requirements.txt"
 ---> Using cache
 ---> 60f28d41deb9
Step 10/11 : COPY --chown=502:20 . /usr/app/.
 ---> 74b6d6b6d42f
Step 11/11 : ENTRYPOINT ["python", "train.py"]
 ---> Running in 54a219fe9826
Removing intermediate container 54a219fe9826
 ---> 081b2c362108
Successfully built 081b2c362108
I0624 22:07:54.054889 4578139584 util.py:710] Restoring pure python logging
I0624 22:07:54.057392 4578139584 docker.py:707]                                                                                                                                                
I0624 22:07:54.057760 4578139584 docker.py:708] Job 1 - Experiment args: ['--model_name', 'resnet18', '--projection_dim', '64', '--fast_dev_run', 'True', '--download', '--data_dir', '/data'] 
I0624 22:07:54.057989 4578139584 docker.py:787] Running command: docker run --ipc host --volume /Users/dilip.thiagarajan/data:/data 081b2c362108 --model_name resnet18 --projection_dim 64 --fast_dev_run True --download --data_dir /data
Executing:   0%|                                                                                                                                                 | 0/1 [00:00<?, ?experiment/s]Downloading: "https://download.pytorch.org/models/resnet18-5c106cde.pth" to /home/dilip.thiagarajan/.cache/torch/checkpoints/resnet18-5c106cde.pth
100%|██████████| 44.7M/44.7M [00:00<00:00, 52.8MB/s]
Running in fast_dev_run mode: will run a full train, val and test loop using a single batch
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
/opt/conda/envs/caliban/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:25: RuntimeWarning: You have defined a `val_dataloader()` and have defined a `validation_step()`, you may also want to define `validation_epoch_end()` for accumulating stats.
  warnings.warn(*args, **kwargs)

  | Name            | Type            | Params
----------------------------------------------------
0 | model           | Sequential      | 11 M  
1 | projection_head | Linear          | 32 K  
2 | loss            | NTXEntCriterion | 0     
Files already downloaded and verified                                                                                                                                                          
Files already downloaded and verified                                                                                                                                                          
Training: 0it [00:00, ?it/s]                                                                                                                                                                   
Training:   0%|          | 0/2 [00:00<?, ?it/s]                                                                                                                                                
E0624 22:08:09.984529 4578139584 docker.py:747] Job 1 failed with return code 137.                                                                                                             
E0624 22:08:09.984878 4578139584 docker.py:750] Failing args for job 1: ['--model_name', 'resnet18', '--projection_dim', '64', '--fast_dev_run', 'True', '--download', '--data_dir', '/data']  
Executing: 100%|#########################################################################################################################################| 1/1 [00:15<00:00, 15.93s/experiment]

whereas when I redirect the output to a log file with

caliban run --nogpu --docker_run_args "--volume /Users/dilip.thiagarajan/data:/data" train.py -- --model_name resnet18 --projection_dim 64 --fast_dev_run True --download --data_dir /data &> caliban_run.log &

I see the following in my trace:

Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/Users/dilip.thiagarajan/.pyenv/versions/3.7.3/lib/python3.7/logging/__init__.py", line 2039, in shutdown
    h.close()
  File "/Users/dilip.thiagarajan/.pyenv/versions/3.7.3/lib/python3.7/site-packages/absl/logging/__init__.py", line 864, in close
    self.stream.close()
AttributeError: 'TqdmFile' object has no attribute 'close'

Is this a problem with some interaction between logging and tqdm? Or is it something I'm doing incorrectly when mounting my data directory?

The following works properly for me locally:
python3 train.py --model_name resnet18 --projection_dim 64 --fast_dev_run True --data_dir ~/data --download

Thanks for your help!

Thanks for the report, @dthiagarajan! It looks like there are two issues occurring here.

The first, simpler one is that you've discovered a bug in the way we were handling the stdout dance when we shell out to build the Docker container. I wasn't explicitly flushing stdout, which caused the build steps to appear at the end of your caliban_run.log. I also, as you can see, needed to implement a close() method on TqdmFile. That's all covered in #30, and we should have a new release out today.
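To make that second part concrete: Caliban redirects stdout through a tqdm-aware, file-like wrapper so that ordinary prints don't clobber the experiment progress bar, and absl's logging handler calls close() on that stream at interpreter shutdown. A simplified sketch of the shape of the fix (not the exact code in #30) looks like this:

import tqdm

class TqdmFile(object):
    """File-like wrapper that routes writes through tqdm so printed lines
    don't clobber an active progress bar. Simplified sketch, not the exact
    Caliban implementation."""

    def __init__(self, file):
        self.file = file

    def write(self, x):
        # Skip bare newlines so we don't emit empty tqdm lines.
        if len(x.rstrip()) > 0:
            tqdm.tqdm.write(x, file=self.file)

    def flush(self):
        return getattr(self.file, "flush", lambda: None)()

    def close(self):
        # absl's log handler calls close() on its stream at exit; without
        # this method you get the AttributeError in the traceback above.
        return getattr(self.file, "close", lambda: None)()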

But that's not what's causing the problem in your training job. Looking around a bit, it seems that "return code 137" is Docker's way of signaling that the container ran out of memory (see moby/moby#21083, for example).

I think this may be a Mac-only problem, and solvable this way:

"Repeating what's said above, this happens on OSX because of Docker 4 Mac's hard memory cap. You can increase your memory limit in Docker App > Preferences > Advanced."

On a Mac, you can click the "Docker Desktop" menu in the menu bar, click "Preferences" and increase the available memory in the "Resources" tab:

[Screenshot: Docker Desktop Preferences, Resources tab, with the memory limit slider]

I think this is going to be the cleanest solution. I'll poke around and see if there is some setting we can enable by default that will allow Docker to access more memory, or at least catch this error and make it clearer to the user what's going on.
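As a sketch of what "catch this error" could look like (hypothetical code, not what Caliban does today): check the container's exit status for 137 (128 + 9, i.e. the process was SIGKILLed, which on Docker for Mac usually means it hit the memory cap) and print a hint instead of failing silently.

import subprocess

def run_container(cmd):
    # Hypothetical helper, not Caliban's actual code: surface a clearer
    # message when Docker kills the job for exceeding its memory limit.
    result = subprocess.run(cmd)
    if result.returncode == 137:
        # 137 = 128 + SIGKILL(9); on macOS this usually means the container
        # hit Docker Desktop's memory cap.
        print("Job was killed (exit 137). Try raising the memory limit "
              "under Docker Desktop > Preferences > Resources.")
    return result.returncode

# e.g. run_container(["docker", "run", "--ipc", "host", "<image-id>", "--fast_dev_run", "True"])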

Please let me know if this helps and gets you unblocked! Thanks again for the report, @dthiagarajan , and for testing out Caliban.

This was a world-class bug report, by the way! Thanks for the care you took in writing it.

Ah, I hadn't noticed that error code - that seems to fix the memory issue, thanks!

On another note (and more nitpicky), here's the kind of output I see from the tqdm progress bar when running with caliban:

Training: 0it [00:00, ?it/s]
Training:   0%|          | 0/2 [00:00<?, ?it/s]
Epoch 1:   0%|          | 0/2 [00:00<?, ?it/s]
Epoch 1:  50%|█████     | 1/2 [00:03<00:03,  3.29s/it]
Epoch 1:  50%|█████     | 1/2 [00:03<00:03,  3.29s/it, loss=3.435, v_num=5]
Epoch 1: 100%|██████████| 2/2 [00:04<00:00,  2.22s/it, loss=3.435, v_num=5]
Epoch 1: 100%|██████████| 2/2 [00:04<00:00,  2.27s/it, loss=3.435, v_num=5]
Executing:   0%|                                                                                 | 0/1 [00:11<?, ?experiment/s]

whereas when I run locally, I see the following:

Epoch 1: 100%|██████████████████████████████████████████████████████████████| 2/2 [00:20<00:00, 10.12s/it, loss=3.306, v_num=5]

Do I need to specify something when logging in my script? I'm wondering 1) why the progress bar is much longer in the latter compared to the former, and 2) why it's printing duplicate lines.

@dthiagarajan , I knew that this was a problem and I'd tried to fix it before and failed... but you've successfully motivated me to tackle the issue. Progress bars are too awesome to have to give up inside Caliban jobs. (Especially when I'm using tqdm myself to show how many jobs you've completed!)

I've solved this problem in #31. Once I get this merged today, I'll release 0.2.6 and let you know here on this ticket.

Incidentally it makes our tutorial much prettier!

Thanks again for the nudge. Here's what the tutorial output looks like with the fix in place:

I0626 09:27:58.373739 4605930944 docker.py:708] Job 1 - Experiment args: []
I0626 09:27:58.374180 4605930944 docker.py:810] Running command: docker run --ipc host -ePYTHONUNBUFFERED=1 -e COLUMNS=254 -e LINES=25 d74bc27fcf48
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 2s 0us/step
2020-06-26 15:28:03.516755: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-06-26 15:28:03.522778: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2592000000 Hz
2020-06-26 15:28:03.524190: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55b5bad7c3c0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-06-26 15:28:03.524249: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
Training model with learning rate=0.1 for 3 epochs.
Epoch 1/3
1875/1875 [==============================] - 4s 2ms/step - loss: 2.0499 - accuracy: 0.2110
Epoch 2/3
1875/1875 [==============================] - 5s 2ms/step - loss: 1.9334 - accuracy: 0.1909
Epoch 3/3
1875/1875 [==============================] - 6s 3ms/step - loss: 1.9116 - accuracy: 0.1931
Model performance:
313/313 - 0s - loss: 1.8834 - accuracy: 0.2098
I0626 09:28:20.245232 4605930944 docker.py:744] Job 1 succeeded!
Executing: 100%|########################################################################################################################################################################################################| 1/1 [00:21<00:00, 21.88s/experiment]

@dthiagarajan The details here are:

  • tqdm uses carriage returns (\r) to rewrite the current line, and Python doesn't pass those through without some work when you're running another Python job in a subprocess.
  • Python buffers its output, which gets messy here because tqdm writes to both stdout and stderr.
  • Docker doesn't set COLUMNS or LINES environment variables inside the container when you run it in non-interactive mode!

#31 tackles each of these. It's not perfect; I suspect that nested progress bars may still run into trouble, but maybe not. And if you have a tqdm loop and write a bunch of other output inside it, that might trigger a stray newline as well.

But this solves most of the issues we'd seen, and I think you'll be happier with the result for sure.
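You can see the resulting approach in the docker run line above: PYTHONUNBUFFERED=1 turns off output buffering inside the container, and COLUMNS/LINES forward the host terminal's size so tqdm can draw a full-width bar. A rough Python sketch of that idea (an illustration of the approach, not the code that landed in #31):

import shutil
import subprocess

def docker_run(image, args):
    # Illustrative sketch: forward the host terminal size and disable Python
    # output buffering so tqdm's carriage-return updates render correctly
    # inside the container.
    cols, lines = shutil.get_terminal_size()
    cmd = [
        "docker", "run", "--ipc", "host",
        "-e", "PYTHONUNBUFFERED=1",        # flush stdout/stderr immediately
        "-e", "COLUMNS={}".format(cols),   # tqdm sizes its bar from COLUMNS...
        "-e", "LINES={}".format(lines),    # ...and LINES, which Docker won't
        image,                             # set in non-interactive mode.
    ] + list(args)
    return subprocess.run(cmd).returncode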

Okay @dthiagarajan, I've just cut the 0.2.6 release with these changes: https://github.com/google/caliban/releases/tag/0.2.6

The build should finish shortly and deploy the release to PyPI. Upgrade with:

pip install -U caliban

and please let us know if this fixes the issue. I'm going to go ahead and close this now, but feel free to re-open if you run into trouble. Thank you!

Awesome work @sritchie, thank you so much!