kyz/libmspack

memory exhausted in oabd_decompress()

JsHuang opened this issue · 2 comments

Description:

The function oabd_decompress() in libmspack has a memory exhaustion problem.

Affected version:

libmspack 0.9.1 alpha

Details:

Critical code: oabd.c, lines 132-149:

  block_max   = EndGetI32(&hdrbuf[oabhead_BlockMax]);
  // block_max comes straight from the input file; an attacker can choose
  // a huge value to cause memory exhaustion
  target_size = EndGetI32(&hdrbuf[oabhead_TargetSize]);

  /* We use it for reading block headers too */
  if (block_max < oabblk_SIZEOF)
    block_max = oabblk_SIZEOF;

  outfh = sys->open(sys, output, MSPACK_SYS_OPEN_WRITE);
  if (!outfh) {
    ret = MSPACK_ERR_OPEN;
    goto out;
  }

  buf = sys->alloc(sys, block_max);			// block_max not checked
  if (!buf) {
    ret = MSPACK_ERR_NOMEMORY;
    goto out;
  }

In oabd_decompress() (oabd.c, lines 132-149), block_max is read from the OAB file and later used as the size of a memory allocation, without any check that the value is sane. A carefully constructed OAB file therefore leads to memory exhaustion.

block_max is a 32-bit field, so it can be as large as 0xffffffff. oabd_decompress() can thus be made to allocate up to 4 GB of RAM, even when the input file itself is only a few bytes long.
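For illustration, here is a hedged sketch of the kind of input described above: a 16-byte file whose header claims a maximal block size. The field order and the version values (hi = 3, lo = 1) follow the oabhead_* constants and signature check in oabd.c; the output name poc.oab is arbitrary, and the PoC linked below may differ in detail.

  #include <stdio.h>

  /* write a 32-bit little-endian value, matching EndGetI32() on the read side */
  static void put_le32(FILE *f, unsigned long v) {
    fputc((int)( v        & 0xff), f);
    fputc((int)((v >>  8) & 0xff), f);
    fputc((int)((v >> 16) & 0xff), f);
    fputc((int)((v >> 24) & 0xff), f);
  }

  int main(void) {
    FILE *f = fopen("poc.oab", "wb");
    if (!f) return 1;
    put_le32(f, 3);            /* version hi: must be 3 to pass the signature check */
    put_le32(f, 1);            /* version lo: must be 1 */
    put_le32(f, 0xffffffffUL); /* ulBlockMax: attacker-chosen; requests a ~4 GB buffer */
    put_le32(f, 0);            /* ulTargetSize */
    fclose(f);
    return 0;                  /* 16 bytes total; the alloc happens before any block is read */
  }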

PoC file:

https://github.com/JsHuang/pocs/blob/master/libmspack/oom-oab

Credit: ADLab of Venustech

kyz commented

Thank you for reporting this, but I'm not sure this is a bug. Treating it like one, and putting an arbitrary limit on the size of this field, would violate the MS-OXOAB specification being implemented.

It's up to the code controlling libmspack to set hard memory usage limits, if such limits are desired. Use a custom mspack_system.alloc() function that returns NULL to libmspack when an allocation would exceed your chosen memory limit, instead of allocating the memory. libmspack will follow through and return MSPACK_ERR_NOMEMORY to the client.
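For anyone wanting a concrete starting point, here is a minimal sketch of that approach: a custom mspack_system whose alloc() refuses any request over a caller-chosen cap. The my_* names and the 64 MiB cap are illustrative, the I/O callbacks are ordinary stdio boilerplate, and only alloc() differs from a default implementation.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <mspack.h>

  #define MEM_CAP (64UL << 20)   /* example cap: 64 MiB; pick your own */

  struct my_file { FILE *fh; };

  static struct mspack_file *my_open(struct mspack_system *self,
                                     const char *filename, int mode) {
    const char *fmode;
    struct my_file *f;
    (void) self;
    switch (mode) {
      case MSPACK_SYS_OPEN_READ:   fmode = "rb";  break;
      case MSPACK_SYS_OPEN_WRITE:  fmode = "wb";  break;
      case MSPACK_SYS_OPEN_UPDATE: fmode = "r+b"; break;
      case MSPACK_SYS_OPEN_APPEND: fmode = "ab";  break;
      default: return NULL;
    }
    if ((f = malloc(sizeof *f)) && (f->fh = fopen(filename, fmode)))
      return (struct mspack_file *) f;
    free(f);
    return NULL;
  }

  static void my_close(struct mspack_file *file) {
    fclose(((struct my_file *) file)->fh);
    free(file);
  }

  static int my_read(struct mspack_file *file, void *buffer, int bytes) {
    FILE *fh = ((struct my_file *) file)->fh;
    size_t n = fread(buffer, 1, (size_t) bytes, fh);
    return ferror(fh) ? -1 : (int) n;
  }

  static int my_write(struct mspack_file *file, void *buffer, int bytes) {
    FILE *fh = ((struct my_file *) file)->fh;
    size_t n = fwrite(buffer, 1, (size_t) bytes, fh);
    return ferror(fh) ? -1 : (int) n;
  }

  static int my_seek(struct mspack_file *file, off_t offset, int mode) {
    switch (mode) {
      case MSPACK_SYS_SEEK_START: mode = SEEK_SET; break;
      case MSPACK_SYS_SEEK_CUR:   mode = SEEK_CUR; break;
      case MSPACK_SYS_SEEK_END:   mode = SEEK_END; break;
      default: return -1;
    }
    return fseek(((struct my_file *) file)->fh, (long) offset, mode) ? -1 : 0;
  }

  static off_t my_tell(struct mspack_file *file) {
    return (off_t) ftell(((struct my_file *) file)->fh);
  }

  static void my_message(struct mspack_file *file, const char *format, ...) {
    (void) file; (void) format;   /* this sketch discards diagnostics */
  }

  /* the only interesting callback: over-cap requests get NULL, which
     libmspack turns into MSPACK_ERR_NOMEMORY */
  static void *my_alloc(struct mspack_system *self, size_t bytes) {
    (void) self;
    return (bytes > MEM_CAP) ? NULL : malloc(bytes);
  }

  static void my_free(void *ptr) { free(ptr); }

  static void my_copy(void *src, void *dest, size_t bytes) {
    memcpy(dest, src, bytes);
  }

  /* member order as declared in mspack.h */
  static struct mspack_system capped_system = {
    &my_open, &my_close, &my_read, &my_write, &my_seek, &my_tell,
    &my_message, &my_alloc, &my_free, &my_copy, NULL
  };

  int main(int argc, char *argv[]) {
    struct msoab_decompressor *oabd;
    int err = MSPACK_ERR_ARGS;
    if (argc == 3 && (oabd = mspack_create_oab_decompressor(&capped_system))) {
      err = oabd->decompress(oabd, argv[1], argv[2]);
      mspack_destroy_oab_decompressor(oabd);
    }
    return err;   /* a file like the PoC above now fails with MSPACK_ERR_NOMEMORY */
  }

Note the cap limits each individual allocation; tracking a running total in the same hook is a straightforward extension.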

The definition of the ulBlockMax field in the MS-OXOAB specification does not impose any limit on the value:

32-bit unsigned integer value that specifies in bytes the largest size of a block that will be read from the source OAB Details input file, written to the target OAB details output file, or read from the Differential Patch file. This field is here so that the client can pre-allocate required buffers.

Putting an arbitrary limit in place is an exercise in futility - it's completely arbitrary what counts as a "too large" buffer size for a "small" file, so any size I choose might again be called "too large" until you're happy. And from the other direction, the smaller I make the arbitrary limit, the more likely it is that real OAB files exceed it; those files would be permanently prevented from unpacking by such a change, infuriating their users.

The right thing for people who think this allocation is "too large" is to customise their mspack_system.alloc() to refuse allocations over their preferred upper limit.

I can accept this as a feature enhancement request to rewrite oabd to work with any buffer size, and only use the header field as a hint.

kyz commented

I've implemented the new feature; OAB decompression now uses a user-controllable fixed-size buffer for copying uncompressed blocks, rather than needing a memory allocation that's as large as the largest block (ulBlockMax). The buffer size is also passed to the LZX decompressor, instead of the fixed value 4096 being used. The default buffer size is 4096 bytes. See commit aaf8a05
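For client code, here is a short usage sketch assuming the set_param() method and MSOABD_PARAM_DECOMPBUF parameter that this commit introduces; the 64 KiB value is an arbitrary example, not a recommendation.

  #include <mspack.h>

  int decompress_oab(const char *input, const char *output) {
    struct msoab_decompressor *oabd = mspack_create_oab_decompressor(NULL);
    int err;
    if (!oabd) return MSPACK_ERR_NOMEMORY;
    /* optional: raise the fixed-size copy/LZX buffer above its 4096-byte default */
    oabd->set_param(oabd, MSOABD_PARAM_DECOMPBUF, 1 << 16);
    err = oabd->decompress(oabd, input, output);
    mspack_destroy_oab_decompressor(oabd);
    return err;
  }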