lovasoa/pff-extract

ERROR: image is too large, cannot allocate 2.56 Gb.

Closed this issue · 22 comments

Hello lovasoa,
I am trying to extract a picture. Unfortunately it fails every time with the following error: "ERROR: image is too large, cannot allocate 2.56 Gb." I have enough disk space. Is there a workaround?

version: 0x6a
size: 32821 x 25951
number of tiles: 17640
ERROR: image is too large, cannot allocate 2.56 Gb.

I have several images and it works with many of them flawlessly. It's just the bigger ones that cause problems. You'd have to modify the source code and allocate more memory. I just started learning C++ and don't know which edits to make. Thank you in advance :)

You'd have to modify the source code and allocate more memory.

No, the code is trying to allocate exactly the amount of memory that it needs. The problem is that you do not have enough free RAM on your computer. Your operating system is not allowing pff-extract to allocate 32821 x 25951 x 3 bytes of memory.
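
For reference, here is where that 2.56 Gb figure comes from (3 bytes per RGB pixel). This is just a back-of-the-envelope check, not the actual pff-extract code:

#include <stdio.h>

int main(void) {
    unsigned long long width = 32821, height = 25951;
    unsigned long long bytes = width * height * 3ULL;      /* 3 bytes per RGB pixel */
    printf("%llu bytes (~%.2f GB)\n", bytes, bytes / 1e9);  /* 2555213313 bytes, ~2.56 GB */
    return 0;
}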

I actually have 16GB of RAM on my system and right now more than 12GB free... and 32821 x 25951 x 3 bytes of memory would be something around 2.56GB.
Thank you for your fast answer.

You can have a lot of RAM available, but your operating system is not allowing pff-extract to allocate a single chunk of 2.56 contiguous gigabytes.

dezoomify-rs handles PFF images when they are available over HTTP, and it supports files larger than the available RAM when exporting to IIIF or PNG.

Which operating system are you using ?

I am on 5.8.18-1-MANJARO x86_64 GNU/Linux.

If you have a large enough swap space available, you can tell Linux to use it by tuning overcommit_memory.

Using dezoomify-rs gets me an error because the content is hidden behind a login page on the website; I'd have to add the cookies somehow. I already checked out https://ophir.alwaysdata.net/dezoomify/dezoomify.html and added the cookies there, but still got an error. Your tool seems to work best for converting .pff files; it's just the bigger ones that are a problem. I'll try to find out how to follow your suggestion (overcommit_memory). Thank you!

It should be as simple as

echo 1 | sudo tee /proc/sys/vm/overcommit_memory

I haven't tested it, though.

To use dezoomify-rs, use your browser's network inspector to get the list of headers sent in the pff request, and reproduce them in dezoomify-rs with -H "Header: Value"
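
For example, assuming all the site needs is your session cookie (the header value and URL below are placeholders; copy the real ones from the inspector), the command would look something like:

dezoomify-rs -H "Cookie: <value copied from the inspector>" "<URL of the .pff file>"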

I checked and I don't seem to have a memory limit for any process.
ulimit -a

core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 62456
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 62456
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

This is unrelated to memory overcommitting.

This is unrelated to memory overcommitting.

Ok I didn't know that. Thx for the note.

I tried echo 1 | sudo tee /proc/sys/vm/overcommit_memory but got the same error.
I will try the header technique.

You can have a lot of RAM available, but your operating system is not allowing pff-extract to allocate a single chunk of 2.56 contiguous gigabytes.

Are you sure this is the case? I have researched a bit, and that's not how memory management works on Linux/Unix. A process requests more memory as it needs it, and is given it unless there's a limit. You can limit certain processes with separate tools, but since I haven't done that and I'm the admin, that shouldn't be the problem. There are several posts on Stack Overflow about RAM management. Are you sure it's not an issue with the code?

Are you sure this is the case?

Yes.

There is always a limit to the maximum amount of memory you can malloc, even if you haven't set a ulimit for this process.

To get back to the main issue here, one solution would be to stream the file to disk, tile line by tile line. This is a complex process, and I'm not sure it is worth implementing, because most image readers would encounter the exact same issue when trying to open the resulting file: they wouldn't be able to allocate enough contiguous memory to read the huge JPEG.
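
To illustrate the idea anyway (a rough sketch only, not something pff-extract actually does; decode_tile_row() is a stand-in for the real tile-decoding logic, the 256-pixel tile height is an assumption, and the final partial row is not handled): keep a buffer for a single row of tiles and append it to the output file before decoding the next row, so memory use stays around width x tile_height x 3 bytes instead of width x height x 3:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the real per-row decoding; here it just fills the buffer. */
static void decode_tile_row(unsigned char *row, size_t row_bytes, size_t row_index) {
    memset(row, (int)(row_index & 0xFF), row_bytes);
}

int main(void) {
    const size_t width = 32821, height = 25951, tile_height = 256;
    const size_t row_bytes = width * tile_height * 3;  /* one row of tiles, RGB */
    unsigned char *row = malloc(row_bytes);            /* ~25 MB instead of ~2.5 GB */
    if (row == NULL) return 1;

    FILE *out = fopen("output.raw", "wb");
    if (out == NULL) { free(row); return 1; }

    for (size_t y = 0; y < height; y += tile_height) {
        decode_tile_row(row, row_bytes, y / tile_height);
        fwrite(row, 1, row_bytes, out);                /* stream each row straight to disk */
    }

    fclose(out);
    free(row);
    return 0;
}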

Oh, I'm sorry. I just read the related code, and the error is not with memory allocation:

https://github.com/lovasoa/pff-extract/blob/master/pff-extract.c#L264-L273

The error is that the size to be allocated is larger than the largest value an int can hold, and tjAlloc takes an int. This, on the contrary, should be easy to work around.
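
One possible workaround (just a sketch of the idea, not the actual fix, and alloc_image_buffer is a hypothetical helper): compute the size in a size_t, and when it does not fit in an int, allocate the buffer with plain malloc instead of tjAlloc (remembering to release it with free instead of tjFree):

#include <limits.h>
#include <stdlib.h>
#include <turbojpeg.h>

/* tjAlloc() takes an int, so for images that need more than INT_MAX bytes
 * (here 32821 * 25951 * 3 = 2,555,213,313 > 2,147,483,647)
 * fall back to malloc(), which takes a size_t. */
static unsigned char *alloc_image_buffer(size_t width, size_t height) {
    size_t needed = width * height * 3;  /* 3 bytes per RGB pixel */
    if (needed > (size_t)INT_MAX)
        return malloc(needed);           /* caller must release this with free() */
    return tjAlloc((int)needed);         /* caller must release this with tjFree() */
}

The fiddly part is keeping track of which allocator was used when freeing the buffer.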

The error is that the size to be allocated is larger than the largest value an int can hold, and tjAlloc takes an int. This, on the contrary, should be easy to work around.

Good to hear the error turned out to be easier to fix than expected. Could you therefore provide a solution to the problem? I can see you have outlined the code in question, but I don't really know how to tackle the issue.
I tried removing parts of that highlighted code before, but it didn't work out.

I will do that when I have some free time.

Sure take your time! Thank you and have a nice day/night.

v0.8 should fix this issue: https://github.com/lovasoa/pff-extract/releases
Sorry again for the initial misunderstanding.

Fabulous! It works flawlessly. Thank you for your work and I'd also like to make a small donation. Do you have PayPal?

Good that it works!

And thanks for the donation!

https://paypal.me/olojkine