Memory error
lvdengwei13 opened this issue · 3 comments
Using TensorFlow backend.
100%|██████████| 9201/9201 [00:00<00:00, 211451.44it/s]
2019-08-18 11:18:02.451158: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-08-18 11:18:02.769927: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-18 11:18:02.770403: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: GeForce GTX 1060 major: 6 minor: 1 memoryClockRate(GHz): 1.733
pciBusID: 0000:01:00.0
totalMemory: 5.94GiB freeMemory: 5.64GiB
2019-08-18 11:18:02.770422: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-08-18 11:18:09.616736: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-18 11:18:09.616806: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-08-18 11:18:09.616827: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-08-18 11:18:09.617285: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5408 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060, pci bus id: 0000:01:00.0, compute capability: 6.1)
Experiment name: Cloud-Net_trained_on_38-Cloud_training_patches
Prediction started...
Input image size = (384, 384)
Number of input spectral bands = 4
Batch size = 1
/home/joe/.local/lib/python3.6/site-packages/skimage/transform/_warps.py:110: UserWarning: Anti-aliasing will be enabled by default in skimage 0.15 to avoid aliasing artifacts when down-sampling images.
  warn("Anti-aliasing will be enabled by default in skimage 0.15 to "
Traceback (most recent call last):
  File "/home/joe/PycharmProjects/Cloud_Net/Cloud-Net/main_test.py", line 60, in <module>
    prediction()
  File "/home/joe/PycharmProjects/Cloud_Net/Cloud-Net/main_test.py", line 26, in prediction
    steps=np.ceil(len(test_img) / batch_sz))
  File "/usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 1522, in predict_generator
    verbose=verbose)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/training_generator.py", line 474, in predict_generator
    return np.concatenate(all_outs[0])
MemoryError
Process finished with exit code 1
My OS is Ubuntu 18.04 with CUDA 9.0 and an NVIDIA GTX 1060. Is my GPU memory not enough?
Yeah. It seems your GPU (with 6 GB of memory) cannot handle 384*384 images even with batch size = 1. If you do not have access to a GPU with more memory, you should reduce the image size for prediction. I would set the input size here to 300, 256, or 192 (whatever fits your GPU). If you do so, you will need to resize the predicted masks back to 384*384 for the evaluation. I have updated evaluation.m to take care of the size by itself; please use the updated one for evaluation.
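If it helps, here is a minimal sketch of the suggested mask-resizing step using scikit-image (the same library that emitted the warning above). It assumes the predictions come out as a NumPy array of shape (n, h, w); the function name upscale_masks is hypothetical, not part of the repository:

```python
import numpy as np
from skimage.transform import resize

def upscale_masks(pred_masks, target_size=(384, 384)):
    """Resize predicted cloud masks back to the original 384*384 patch size.

    pred_masks: float array of shape (n, h, w), predicted at a reduced size.
    Returns a float array of shape (n, 384, 384).
    """
    out = np.empty((pred_masks.shape[0],) + target_size, dtype=pred_masks.dtype)
    for i, mask in enumerate(pred_masks):
        # order=1 (bilinear) keeps soft probability maps smooth, and
        # preserve_range=True avoids skimage's automatic intensity rescaling.
        out[i] = resize(mask, target_size, order=1, mode='reflect',
                        preserve_range=True, anti_aliasing=False)
    return out
```

Alternatively, since you are using the updated evaluation.m, you can skip this in Python and let the MATLAB script handle the resizing.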
You are welcome. Yes, some features might be lost, unless you find a GPU with more memory (at least 8 GB).
Evaluation.m has already been updated; it automatically resizes the patches to 384.
Hope this helps.