What will be affected by adjusting MAX_BLOCK_NUM in H2_ALSH? The default value of MAX_BLOCK_NUM in the code is 4000; what happens if I change it to 1000?
eggchai opened this issue · 1 comment
Hi, thank you for your interest!
For your issue, MAX_BLOCK_NUM is used to control the maximum number of data points in a block; please refer to lines 44-47 of h2_alsh.cc. By default, I set it to 5,000, considering that the commonly used datasets comprise millions of points.
If you tune it down to 1,000, the number of blocks will increase, and hence the indexing time and index size might increase, because we have to build more indexes (i.e., QALSH) for more blocks. However, the query time might decrease, as you may check fewer points in the first few blocks and stop early if some suitable answers can be found.
Nevertheless, you might not want to keep decreasing MAX_BLOCK_NUM, because when the algorithm does not stop early, the index can only offer a little acceleration (as the number of points in each block is small). In the worst case, it degenerates into an exhaustive linear scan of the data points from the largest norm to the smallest.
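To make the block structure concrete, here is a simplified, self-contained sketch of how points could be sorted by norm and partitioned into blocks of at most MAX_BLOCK_NUM points, with one QALSH index built per block. The names (`Block`, `build_blocks`) and the fixed-size splitting rule are illustrative only; they do not reproduce the actual splitting logic in h2_alsh.cc.

```cpp
#include <algorithm>
#include <vector>

struct Block {
    std::vector<int> ids;  // ids of the points assigned to this block
    // In H2-ALSH, each block would additionally hold its own QALSH index.
};

// Partition points (given by their Euclidean norms) into blocks of at most
// max_block_num points each, ordered from the largest norm to the smallest.
std::vector<Block> build_blocks(const std::vector<double>& norms,
                                int max_block_num) {
    std::vector<int> order(norms.size());
    for (size_t i = 0; i < order.size(); ++i) order[i] = static_cast<int>(i);
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return norms[a] > norms[b]; });

    std::vector<Block> blocks;
    for (size_t start = 0; start < order.size(); start += max_block_num) {
        size_t end = std::min(order.size(), start + max_block_num);
        Block blk;
        blk.ids.assign(order.begin() + start, order.begin() + end);
        blocks.push_back(std::move(blk));
    }
    return blocks;
}
```

With a smaller max_block_num, this partitioning produces more (and smaller) blocks, which is exactly why the indexing overhead grows while early termination can kick in after checking fewer points.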
My suggestion is to (1) extract a small fraction of the data points (e.g., 100 or 200) with the largest norms and (2) directly scan all of them for every query to increase the chance of early termination (and to obtain a good candidate with a large inner product value for pruning); see the sketch below. Then, tune MAX_BLOCK_NUM (via grid search) to trade off indexing overhead against query efficiency. For example, some empirical values of MAX_BLOCK_NUM are in the range n/200 to n/50.
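Here is a minimal sketch of that suggestion, assuming the ids of the largest-norm points have already been extracted into `top_ids`: scan them exhaustively for each query and keep the best inner product as an initial lower bound for pruning. `prescan_top_norm`, `top_ids`, and `inner_product` are hypothetical names for illustration, not part of the h2_alsh code base.

```cpp
#include <limits>
#include <vector>

// Plain inner product between two dim-dimensional vectors.
double inner_product(const float* a, const float* b, int dim) {
    double ip = 0.0;
    for (int i = 0; i < dim; ++i) ip += static_cast<double>(a[i]) * b[i];
    return ip;
}

// Scan the pre-extracted largest-norm points (e.g., 100-200 ids) and return
// the best inner product found. This value can be passed to the index search
// as an initial candidate, increasing the chance of early termination.
double prescan_top_norm(const std::vector<int>& top_ids,
                        const std::vector<const float*>& data,
                        const float* query, int dim) {
    double best = std::numeric_limits<double>::lowest();
    for (int id : top_ids) {
        double ip = inner_product(data[id], query, dim);
        if (ip > best) best = ip;
    }
    return best;
}
```

For the grid search itself, you could simply rebuild the index for a few candidate values (e.g., n/200, n/100, n/50) and compare indexing time, index size, and average query time on a held-out set of queries.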