gnzlbg/slice_deque

Segfault at end of mirrored page

Closed this issue · 5 comments

Discovered by a user of buf_redux in abonander/buf_redux#8

I was able to pare it down to the following repro:

extern crate slice_deque;
use slice_deque::SliceDeque;

fn main() {
    let mut deque = SliceDeque::<u8>::with_capacity(4096);

    let slice = unsafe {
        deque.move_tail(4096);
        deque.move_head(4000);
        deque.move_tail(4000);
        deque.move_head(4000);
        // head = 8000, tail = 8096
        deque.tail_head_slice()
    };

    for i in 0..slice.len() {
        // segfault at i = 96
        slice[i] = 0;
    }
}

Using GDB, I captured the program state at the time of the segfault:

(gdb) run
Starting program: /home/austin/slice-deque-repro/target/debug/slice-deque-repro
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".

Program received signal SIGSEGV, Segmentation fault.
0x0000000008008392 in slice_deque_repro::main () at src/main.rs:18
18              slice[i] = 0;
(gdb) info locals
i = 96
__next = 96
iter = core::ops::range::Range<usize> {start: 97, end: 4000}
slice = &mut [u8] {data_ptr: 0x7fffff791fa0 "\000", length: 4000}
deque = slice_deque::SliceDeque<u8> {head: 8000, tail: 8096, buf: slice_deque::mirrored::buffer::Buffer<u8> {ptr: slice_deque::mirrored::buffer::NonZero<*mut u8> {ptr: 0x7fffff790000 "\000"}, len: 8192}}

It looks like it happens when the tail pointer reaches the end of the mirrored page, at which point it should wrap back around to the original page, no?

Thanks a lot for the report,

> It looks like it happens when the tail pointer reaches the end of the mirrored page, at which point it should wrap back around to the original page, no?

Yes. I'll see if I can fix this today. This should have been covered by the tests, but that does not appear to be the case :/
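The wrap-around behavior described above can be sketched as a standalone invariant check (a minimal illustration under stated assumptions, not slice_deque's actual internals; `wrap_indices` is a hypothetical helper name):

```rust
// Sketch of the wrap-around invariant for a mirrored buffer, assuming a
// buffer of `cap` elements whose physical pages are mapped twice, back to
// back, so virtual indices [0, 2*cap) are valid and index i aliases i + cap.
// Illustrative only; the real fix is in slice_deque PR #43.
fn wrap_indices(head: usize, tail: usize, cap: usize) -> (usize, usize) {
    if head >= cap {
        // Both indices point into the second (mirrored) mapping; shift them
        // back by `cap` so that slices taken afterwards never run off the
        // end of the virtual mapping.
        (head - cap, tail - cap)
    } else {
        (head, tail)
    }
}

fn main() {
    // The repro's state: cap = 4096, head = 8000, tail = 8096. Left
    // unwrapped, a slice starting at tail hits the end of the mapping
    // after 96 bytes, matching the observed segfault at i = 96.
    assert_eq!(wrap_indices(8000, 8096, 4096), (3904, 4000));
    // Indices already inside the first mapping are left alone.
    assert_eq!(wrap_indices(100, 196, 4096), (100, 196));
}
```

Since index `i` and index `i + cap` alias the same physical memory, shifting both indices down by `cap` preserves the deque's contents while keeping every subsequent slice inside the mapped region.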

This may well affect Windows as well if you change the numbers to use up the 64KB default capacity.

Edit: it does.

So I have a fix for this in #43.

I am going to let the PR run CI without the fix to see if the test reproduces on appveyor. Then I'll uncomment the fix and see if it fixes it everywhere.

@abonander it seems that the fix worked

I'll try to do a release later today once I polish CI again.

Version 0.1.10 has been released with the fix.