Trouble with fmt::Debug?
Opened this issue · 3 comments
So I noticed that a test like

```rust
#[test]
fn delete_test_8() {
    let r = Rope::from("this is not fine".to_string());
    assert_eq!(&r, "this is not");
}
```
fails with

```
$ RUST_BACKTRACE=1 cargo test delete_test_8
    Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs
     Running target/debug/deps/an_rope-609884e6301c7df8

running 1 test
thread 'tests::delete_test_8' has overflowed its stack
fatal runtime error: stack overflow
error: process didn't exit successfully: `/home/rustdev/an-editor/an-rope/target/debug/deps/an_rope-609884e6301c7df8 delete_test_8` (signal: 6, SIGABRT: process abort signal)
To learn more, run the command again with --verbose.
```
I also don't get a backtrace. I tracked the problem down to something like `panic!("whatever: {:?}", r);`, which is what `assert_eq!` does when it fails. That `panic!` produces the same error message. I'm assuming we somehow overflow the stack while building the `fmt::Debug` message, but I don't know where yet.
So from what I can tell, the problem is that we always use the custom `fmt::Debug` on `Node`: we call it for every node in the tree, it in turn calls the same function for every subtree, and so we overflow the stack.
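To illustrate the pattern being described (with a hypothetical stand-in tree type, not the actual an-rope definitions): a hand-written `fmt::Debug` that formats each child with `{:?}` recurses once per tree level, so every level costs at least one stack frame and a sufficiently deep tree overflows the stack.

```rust
use std::fmt;

// Hypothetical stand-in for the rope's internal tree; the real
// an-rope types are more involved.
enum Node {
    Leaf(String),
    Branch(Box<Node>, Box<Node>),
}

// A custom Debug impl in this shape recurses into both children:
// each `{:?}` on a child re-enters fmt(), adding a stack frame per
// tree level, which is how a deep tree can blow the stack.
impl fmt::Debug for Node {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Node::Leaf(s) => write!(f, "Leaf({:?})", s),
            Node::Branch(l, r) => write!(f, "Branch({:?}, {:?})", l, r),
        }
    }
}

fn main() {
    // A shallow tree formats fine; the failure mode only appears at depth.
    let tree = Node::Branch(
        Box::new(Node::Leaf("this is".to_string())),
        Box::new(Node::Leaf(" not fine".to_string())),
    );
    println!("{:?}", tree);
}
```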
My proposed fix is in the `fmt_fix` branch. I removed the custom `fmt::Debug` impl on `Node` and replaced it with `#[derive(Debug)]`, so we use the derived `Debug` for `LeafRepr` and `BranchNode` as needed.
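Roughly, the derive-based approach looks like this (the field layouts here are guesses for illustration; the real an-rope definitions of `Node`, `LeafRepr`, and `BranchNode` may differ). Note that the derived impl still walks the tree recursively; it just replaces the hand-written formatting code.

```rust
// Hypothetical shapes for the types named above; not the real
// an-rope definitions.
#[derive(Debug)]
struct BranchNode {
    left: Box<Node>,
    right: Box<Node>,
}

#[derive(Debug)]
enum LeafRepr {
    Str(String),
}

// Deriving Debug here delegates to the derived impls on LeafRepr
// and BranchNode instead of a hand-written fmt::Debug.
#[derive(Debug)]
enum Node {
    Leaf(LeafRepr),
    Branch(BranchNode),
}

fn main() {
    let n = Node::Leaf(LeafRepr::Str("hi".to_string()));
    println!("{:?}", n);
}
```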
I have no idea if this is correct or breaks something else, but the tests pass. I also added some tests that would trigger this bug.
@twisted-pear that sounds good, yeah. I had initially written a custom `fmt::Debug` implementation rather than deriving, since the derived format was a little too wordy, but deriving is fine. I suspect making the custom `fmt::Debug` implementation `#[inline]` would also fix this?
I checked out your branch and it looks good to merge. That said, before we close this issue we should make sure this is definitely localised to the `fmt::Debug` implementation - I'm pretty sure you're right, but we ought to throw together some tests just to be on the safe side.