jojojames/fussy

Errors when attempting to benchmark

Opened this issue · 3 comments

Heyo!

First off, I tried to write a benchmark for fuzzy Emacs completions styles at axelf4/emacs-completion-bench. It would be great if you could take a look and see whether there are any glaring mistakes in how the fussy performance is measured.

Second, I ran into panics with two of the backends on the benchmark instances. With flx-rs:

thread '<unnamed>' panicked at 'called `Option::unwrap()` on a `None` value', /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/flx-rs-0.1.4/src/search.rs:72:53

and with sublime_fuzzy:

thread '<unnamed>' panicked at 'called `Option::unwrap()` on a `None` value', src/dynmod.rs:15:46

It should be enough to uncomment those lines here and run the benchmark to reproduce.
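For context on what the panic message means (this is an illustrative sketch only, not the actual flx-rs or sublime_fuzzy source): `Option::unwrap` aborts the thread with exactly that message when the value is `None`, which usually indicates the scorer hit a candidate it could not handle.

```rust
// Hypothetical scorer: returns None when the query does not match at all.
// (Illustration only; not the code at flx-rs search.rs:72.)
fn score(query: &str, candidate: &str) -> Option<i32> {
    if candidate.contains(query) {
        // Placeholder score: just the query length.
        Some(query.len() as i32)
    } else {
        None
    }
}

// Calling score("zzz", "hello").unwrap() would panic with
// "called `Option::unwrap()` on a `None` value"; matching on the
// Option (or using unwrap_or / ?) handles the no-match case safely.
```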

How are you running the benchmark? (commands/etc?)

Yes, compile hotfuzz-module.so, put it in the directory, and then run ./runbench (which just launches Emacs with some arguments). There are some instructions in the README.

I haven't had much time to look into this (with 2022 closing in), so no timelines, but the benchmark looks pretty extensive to me.

Unlike other dynamic modules, which are consulted for each individual candidate to compute its score, the Hotfuzz Lisp code only calls out to the native code once, passing along the entire completions list. This reduces overhead and enables multithreaded filtering and scoring.
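The single-call, multithreaded shape described above can be sketched in Rust (a hypothetical scoring function and thread layout for illustration, not Hotfuzz's actual code):

```rust
use std::thread;

// Placeholder per-candidate scorer: counts query chars that appear
// in order in the candidate. Any real fuzzy scorer would go here.
fn score_one(needle: &str, haystack: &str) -> usize {
    let mut it = haystack.chars();
    needle.chars().filter(|&c| it.any(|h| h == c)).count()
}

// The batch entry point: the Lisp side crosses the module boundary
// once with the whole candidate list, and the native side splits the
// work across threads, instead of one Lisp->module call per candidate.
fn score_all(needle: &str, candidates: &[String]) -> Vec<usize> {
    if candidates.is_empty() {
        return Vec::new();
    }
    let n_threads = 4;
    let chunk = ((candidates.len() + n_threads - 1) / n_threads).max(1);
    thread::scope(|s| {
        let mut handles = Vec::new();
        for part in candidates.chunks(chunk) {
            handles.push(s.spawn(move || {
                part.iter().map(|c| score_one(needle, c)).collect::<Vec<_>>()
            }));
        }
        // Chunks are scored in parallel; collecting in handle order
        // keeps the results aligned with the input candidates.
        handles.into_iter().flat_map(|h| h.join().unwrap()).collect()
    })
}
```

The key design point is that the per-call overhead (argument conversion, crossing the module boundary) is paid once per completion request rather than once per candidate, which is also what makes threading the scorer worthwhile.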

This was something I wanted to do for fussy too but haven't had the time. I think it's definitely the right approach though, if it's possible to work with fussy/emacs' completion system.