Memory access out of bounds
Closed this issue · 7 comments
I'm getting a memory access out of bounds when I sub in talc as the global allocator for wasm (not sure if it is restricted to wasm).
at talc::Talc<O>::free::h4f5d820d5cec2749 (zstd_write_bg.d866f786.wasm)
at __rust_dealloc (zstd_write_bg.d866f786.wasm)
at rust_zstd_wasm_shim_free (zstd_write_bg.d866f786.wasm)
at ZSTD_freeCCtx (zstd_write_bg.d866f786.wasm)
at <zstd_safe::CCtx as core::ops::drop::Drop>::drop::hfea2fa39f259d191 (zstd_write_bg.d866f786.wasm)
at zstd_write::compress::h2ad992b8ec6d71d4 (zstd_write_bg.d866f786.wasm)
The code is pretty minimal. Simplifying and adding in talc results in:
use wasm_bindgen::prelude::*;
#[global_allocator]
static ALLOCATOR: talc::TalckWasm = unsafe { talc::TalckWasm::new_global() };
#[wasm_bindgen]
pub fn compress(data: &[u8], level: i32) -> Vec<u8> {
zstd::bulk::compress(data, level).unwrap()
}
dlmalloc and lol_alloc do not exhibit this issue.
The amount of data that is passed has been irrelevant so far.
Let me know if you want more information. Seems a bit related to what zstd is doing under the covers as I haven't been able to reproduce without it (yet).
Curious. The random actions tests on my WASM project are running fine.
Thanks for the minimal reproduction. I'd like to look into it. However, I'm struggling to build the zstd crate. The clang invocation is failing: apparently it can't find some #include headers (e.g. stdio.h). Have you experienced any similar problems or know of any fixes? (Compiling a simple program that uses stdio.h with clang causes no issues.)
This is quite low priority for me at the moment, but I trust it isn't blocking your development. I'll see what I can do when I have time: hopefully sooner rather than later.
Ah yeah, I believe you need to disable default features. This is how I include zstd in my wasm projects:
zstd = { version = "0.12.3", default-features = false, features = ["fat-lto"] }
Low priority is fine, as I'm just in exploratory mode, and figured you might be interested in feedback. I have an app where this triggers without running zstd code, so if I determine another edge case I'll post it here.
Thanks for the crate import incantation :)
TL;DR the zstd-rs crate breaks GlobalAlloc trait's safety contract.
Finally got some time to work on this. This debugging session has been long and bizarre. I won't be able to do anything about this, I think.
The zstd C code seems to be calling the Rust allocator, but issues deallocations with a size of 1 (breaking the contract with GlobalAlloc implementers https://doc.rust-lang.org/stable/std/alloc/trait.GlobalAlloc.html#tymethod.dealloc).
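For reference, the contract in question is that dealloc must receive the same layout the block was allocated with. A minimal sketch of the correct pattern against the std allocator (not talc specifically; the roundtrip helper is just for illustration):

```rust
use std::alloc::{alloc, dealloc, Layout};

/// Allocate and free a block while honoring GlobalAlloc's contract:
/// `dealloc` receives the exact layout used for the allocation.
/// Returns the size that was (correctly) passed back to `dealloc`.
fn roundtrip(size: usize, align: usize) -> usize {
    let layout = Layout::from_size_align(size, align).unwrap();
    unsafe {
        let ptr = alloc(layout);
        assert!(!ptr.is_null());
        // Correct: the same layout as the allocation. Passing a layout
        // with size 1 here instead (as the shim effectively did) would
        // be undefined behavior.
        dealloc(ptr, layout);
    }
    layout.size()
}

fn main() {
    assert_eq!(roundtrip(64, 8), 64);
}
```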
- The normal C free function doesn't take a size, which is why dlmalloc doesn't care about receiving an incorrect size: it stores that information at the base of the allocation. lol_alloc's FreeListAllocator is (unintentionally?) robust when creating holes (its simplicity means it doesn't rely on as many invariants as talc), but predictably, memory gets leaked fast once you run the compress routine in a loop, dropping the results.
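The dlmalloc trick, storing the size at the base of the allocation so free can recover it, is something any C-style malloc/free shim over a Rust allocator can imitate. A hedged sketch of that approach (shim_malloc/shim_free are hypothetical names, not the zstd crate's actual shim):

```rust
use std::alloc::{alloc, dealloc, Layout};
use std::mem::{align_of, size_of};

const HEADER: usize = size_of::<usize>();

/// C-style malloc: stash the requested size just below the returned pointer.
unsafe fn shim_malloc(size: usize) -> *mut u8 {
    let layout = Layout::from_size_align(HEADER + size, align_of::<usize>()).unwrap();
    let base = alloc(layout);
    if base.is_null() {
        return base;
    }
    (base as *mut usize).write(size); // remember the size for shim_free
    base.add(HEADER)
}

/// C-style free: read the stored size back, so `dealloc` is given the
/// same layout the block was allocated with.
unsafe fn shim_free(ptr: *mut u8) -> usize {
    let base = ptr.sub(HEADER);
    let size = (base as *const usize).read();
    let layout = Layout::from_size_align(HEADER + size, align_of::<usize>()).unwrap();
    dealloc(base, layout);
    size // returned only so the sketch is easy to check
}

fn main() {
    unsafe {
        let p = shim_malloc(100);
        assert_eq!(shim_free(p), 100);
    }
}
```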
talc is really sensitive to this sort of thing. It stores a little metadata at the top of each allocation, so it offsets the allocation pointer by size=1, rounds up, and reads there, expecting a pointer. Because that read lands inside the allocation, the value is often zero; zero then gets offset backwards, wrapping around to the top of the address space, and the subsequent read at the top of the address space causes the error.
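To illustrate why the wrong size sends the metadata read to the wrong place, here is a toy pointer-arithmetic sketch (not talc's actual code; the base address, sizes, and metadata placement are made up for illustration):

```rust
/// Round `x` up to a multiple of `align` (a power of two).
fn round_up(x: usize, align: usize) -> usize {
    (x + align - 1) & !(align - 1)
}

fn main() {
    let base = 0x1000; // pretend allocation base address
    let real_size = 64; // the size the block was actually allocated with
    let align = 8;

    // Where end-of-allocation metadata would live for the real size...
    let metadata_at = round_up(base + real_size, align);
    // ...versus where a deallocation with size=1 would look for it.
    let bogus_at = round_up(base + 1, align);

    assert_eq!(metadata_at, 0x1040);
    assert_eq!(bogus_at, 0x1008); // well inside the allocation: reads garbage
}
```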
I'm putting together a minimal reproduction for the zstd-rs maintainer(s); perhaps they can do something about it, but I'm not sure, given the deallocations (if I'm not mistaken) originate in the ZSTD_freeCCtx C function, which they'd have to patch or work around somehow.
The crate maintainer has (more or less) confirmed the problem on their end, so I'm closing this issue for now.
Thanks for bringing this to my attention!
For now, don't combine the zstd crate with talc or lol_alloc, but hopefully something can be figured out.
The issue with zstd-rs has been fixed; hopefully that should fully resolve this once the new version comes out.
Wow, I greatly appreciate you championing this bug fix. Thank you so much!
@nickbabcock New version of zstd-rs is out with the fix gyscos/zstd-rs#248 (comment) :)