justinmeza/lci

Failed realloc during program buffering causes segfault


Large program files and/or tight memory constraints can cause lci to segfault during program buffering. Expected behavior: an exit with an error message.

A sparse file containing a LOLCODE program and trailing NULs will usually run correctly. For example, the following set of commands creates a sparse 1MB file (using something like 4kB on disk) and runs it.

printf 'HAI 1.3\nVISIBLE "hello world"\nKTHXBYE\n' > program.lol
dd if=/dev/zero of=program.lol bs=1 count=0 seek=1M
lci program.lol

However, making the file 1TB causes lci to segfault.

printf 'HAI 1.3\nVISIBLE "hello world"\nKTHXBYE\n' > program.lol
dd if=/dev/zero of=program.lol bs=1 count=0 seek=1T
lci program.lol

Here's the variable that keeps track of the buffer size.

lci/main.c

Line 144 in 6762b72

unsigned int size = 0;

And here's the loop that reads the input file.

lci/main.c

Lines 193 to 200 in 6762b72

while (!feof(file)) {
size += READSIZE;
buffer = realloc(buffer, sizeof(char) * size);
length += fread((buffer + size) - READSIZE,
1,
READSIZE,
file);
}

From what I can tell, two issues contribute to this behavior.

  1. The variable size is declared as unsigned int instead of as size_t. This makes it wrap around at 2^32 on most systems, even when more than 4GB of RAM is available. At that point, realloc is asked to resize the buffer to 0 bytes.
  2. The return value of realloc isn't checked. If the reallocation is unsuccessful, buffer is replaced with a null pointer.

If you're testing this on your own machine, you may want to limit the memory usage to avoid thrashing. I ran ulimit -v 6291456 to enforce a hard 6GB limit.