Script fails with MemoryError during model creation
When I run create_chainer_model.py, I get a MemoryError while loading the model with CaffeFunction.
Has anybody experienced something similar?
C:\Dev\Sandbox\OpenCV\image_art>C:\Python27\python.exe create_chainer_model.py -g -1
load VGG16 caffemodel
Traceback (most recent call last):
File "create_chainer_model.py", line 32, in <module>
ref = CaffeFunction('VGG_ILSVRC_16_layers.caffemodel')
File "C:\Python27\lib\site-packages\chainer\links\caffe\caffe_function.py", line 127, in __init__
net.MergeFromString(model_file.read())
File "C:\Python27\lib\site-packages\google\protobuf\internal\python_message.py", line 1082, in MergeFromString
if self._InternalParse(serialized, 0, length) != length:
File "C:\Python27\lib\site-packages\google\protobuf\internal\python_message.py", line 1118, in _InternalParse
pos = field_decoder(buffer, new_pos, end, self, field_dict)
File "C:\Python27\lib\site-packages\google\protobuf\internal\decoder.py", line 612, in DecodeRepeatedField
if value.add()._InternalParse(buffer, pos, new_pos) != new_pos:
File "C:\Python27\lib\site-packages\google\protobuf\internal\python_message.py", line 1118, in _InternalParse
pos = field_decoder(buffer, new_pos, end, self, field_dict)
File "C:\Python27\lib\site-packages\google\protobuf\internal\decoder.py", line 612, in DecodeRepeatedField
if value.add()._InternalParse(buffer, pos, new_pos) != new_pos:
File "C:\Python27\lib\site-packages\google\protobuf\internal\python_message.py", line 1118, in _InternalParse
pos = field_decoder(buffer, new_pos, end, self, field_dict)
File "C:\Python27\lib\site-packages\google\protobuf\internal\decoder.py", line 212, in DecodePackedField
value.append(element)
File "C:\Python27\lib\site-packages\google\protobuf\internal\containers.py", line 251, in append
self._values.append(self._type_checker.CheckValue(value))
MemoryError
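For what it's worth, the pure-Python protobuf backend in the traceback builds the entire decoded message in memory on top of the raw file bytes, so parsing the roughly 0.5 GB VGG16 caffemodel can need several times that in free RAM. A rough lower-bound sketch (the overhead factor is a guess, not a measured value):

```python
import os

# Same file name the script loads; adjust the path to your download.
PATH = 'VGG_ILSVRC_16_layers.caffemodel'

def rough_ram_needed(model_path, overhead=4):
    """Rough lower bound, in GB, on the free RAM needed to parse a
    caffemodel with the pure-Python protobuf backend, which holds the
    raw bytes plus the fully decoded message at once."""
    size_gb = os.path.getsize(model_path) / 1e9
    return size_gb * overhead

if os.path.exists(PATH):
    print('need roughly %.1f GB of free RAM' % rough_ram_needed(PATH))
```

On a 4 GB machine that estimate alone leaves almost no headroom, which matches the failure above.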
Yes, your image is too large. Try to reduce it.
For Amazon GPU instances:
If you're using the GPU, the max size is approx. 1200-1300 pixels per side (running time ~3 s).
If you're using the CPU, the max size is approx. 1500-1700 pixels per side (running time ~30 s).
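To stay under those limits, you can downscale so the longer side fits the budget before running the script. A minimal sketch (`fit_within` is a hypothetical helper, not part of this repo; the actual resize would be done with e.g. Pillow):

```python
def fit_within(width, height, max_side=1200):
    """Scale (width, height) down, preserving aspect ratio, so the
    longer side is at most max_side. 1200 matches the GPU limit quoted
    above; pass e.g. max_side=1500 for CPU runs."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height  # already small enough, leave untouched
    scale = float(max_side) / longest
    return int(width * scale), int(height * scale)

# e.g. with Pillow: img = img.resize(fit_within(*img.size))
print(fit_within(4000, 3000))  # -> (1200, 900)
```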
I found that the issue is my PC's memory: my 4 GB is not enough.
What do you mean by Amazon GPU instances?
Should I run the code on Amazon S3?
@mungobungo BTW, I managed to create the models by using a smaller Caffe model, but now the training is failing with the same error. So my guess is that I'm running the code in the wrong environment.
@JackTheHack I mentioned the AWS instances just for reference.
g2.2xlarge has 15 GB of RAM and 4 GB of VRAM.
So if your configuration is different, the size should be adjusted accordingly.
It does not matter which environment you run it in; for bigger pictures, RAM is the main limit.
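To see why the limit scales with image size, here is a back-of-envelope estimate of VGG16 activation memory in float32 for a given input, assuming the standard VGG16 conv/pool layout; this is a sketch only, since a real run also holds the weights (~0.5 GB) and gradients for the optimization:

```python
# VGG16 conv channel counts; 'M' marks a 2x2 max-pooling layer.
CFG = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
       512, 512, 512, 'M', 512, 512, 512, 'M']

def activation_gb(h, w):
    """Approximate float32 activation memory, in GB, for one forward
    pass of VGG16's conv stack on an h x w input image."""
    total = h * w * 3  # the input image itself (3 channels)
    for layer in CFG:
        if layer == 'M':
            h, w = h // 2, w // 2  # pooling halves each spatial side
        else:
            total += h * w * layer  # one feature map per channel
    return total * 4 / 1e9  # 4 bytes per float32 element

print('%.2f GB of activations at 1200x1200' % activation_gb(1200, 1200))
```

Activations grow with the square of the side length, so the per-side limits above drop quickly as available RAM or VRAM shrinks.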
@JackTheHack How did you use a smaller Caffe model? BTW, I'm having the same issue with 8 GB of RAM.