dataset sizes for benchmarks
amueller opened this issue · 5 comments
It would be great if you could run the benchmarks with different dataset sizes and with tall, wide, and sparse data, where possible, and report where these are not supported by your solvers.
It would also be great to have the absolute times, not only the relative times. Some of these algorithms take .5s; in that case our input validation overhead is probably dominating the work.
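A minimal sketch of what such a benchmark could look like, reporting absolute fit times on tall, wide, and sparse inputs. The shapes, densities, and the choice of `KMeans` are illustrative assumptions, not the layout of the actual benchmark suite:

```python
import time

import numpy as np
import scipy.sparse as sp
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)

# Hypothetical dataset shapes covering the tall / wide / sparse cases.
datasets = {
    "tall (1_000_000 x 20)": rng.rand(1_000_000, 20),
    "wide (2_000 x 10_000)": rng.rand(2_000, 10_000),
    "sparse (100_000 x 5_000, 1% nnz)": sp.random(
        100_000, 5_000, density=0.01, format="csr", random_state=rng
    ),
}

for name, X in datasets.items():
    est = KMeans(n_clusters=8, n_init=1, random_state=0)
    start = time.perf_counter()
    est.fit(X)
    elapsed = time.perf_counter() - start
    # Report the absolute wall-clock time, not only a relative speedup.
    print(f"{name}: fit took {elapsed:.3f}s")
```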
Hi @amueller,
We can definitely try both tall and wide data and report absolute timing. As for input validation, we disable it entirely here. That basically calls sklearn.set_config(assume_finite=True).
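For reference, disabling the finiteness checks is standard scikit-learn API and can be done globally or scoped to a block; the `LogisticRegression` call here is just an illustrative workload:

```python
import sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=10_000, n_features=50, random_state=0)

# Global switch: skip the finiteness validation for all subsequent estimator calls.
sklearn.set_config(assume_finite=True)
LogisticRegression(max_iter=200).fit(X, y)

# Or scope it to a block so the setting does not leak into other code.
with sklearn.config_context(assume_finite=True):
    LogisticRegression(max_iter=200).fit(X, y)
```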
Currently, sparse inputs will always cause our patches to fall back to scikit-learn or convert the sparse matrix to a dense one.
@bibikar enabling assume_finite is definitely the right way to go. Still, I don't expect anything that takes .5s to be optimized in sklearn. Can you run something that takes like 10s or 1m?
And again, it's also an issue of how you display the results. I'm much more likely to believe a speedup of 20x from 1h to 3m than from .1s to 0.005s. If something is instantaneous, we don't usually try to optimize it much further.
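One way to present results along these lines is to print the raw seconds for both implementations next to the speedup, so readers can judge whether the workload is large enough to be meaningful. This helper and its numbers are purely illustrative:

```python
def report(name, baseline_s, optimized_s):
    # Show absolute times for both runs alongside the relative speedup.
    speedup = baseline_s / optimized_s
    print(f"{name:<20} baseline={baseline_s:9.3f}s  "
          f"optimized={optimized_s:8.3f}s  speedup={speedup:5.1f}x")

report("kmeans.fit", baseline_s=3600.0, optimized_s=180.0)      # 1 h -> 3 min
report("kmeans.fit (tiny)", baseline_s=0.1, optimized_s=0.005)  # likely noise
```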
In the last several years, dataset sizes have become more varied, and we are working on including more datasets along with the introduction of GPU support.