zama-ai/tfhe-rs

Correctness bug in wopbs for CRT and bivariate wopbs for native CRT

SlurmsMacKenzie opened this issue · 7 comments

Describe the bug
A correctness bug in wopbs for CRT and in bivariate wopbs for native CRT, when the LUT is generated from a closure containing a comparison.

To Reproduce
Code example in the repository: https://github.com/SlurmsMacKenzie/tfhe-wopbs-compare

Expected behaviour
The comparison LUT should return 1 for 42 > 30 with CRT moduli [13, 14, 15].
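The expectation above can be checked in the clear. The sketch below is independent of tfhe-rs (the `crt_decompose` and `crt_reconstruct` helpers are illustrative, not part of the library's API): it decomposes 42 and 30 into residues modulo the basis [13, 14, 15], reconstructs them, and evaluates the comparison the LUT is supposed to implement.

```rust
// Decompose a value into its residues modulo each element of the CRT basis.
fn crt_decompose(x: u64, basis: &[u64]) -> Vec<u64> {
    basis.iter().map(|&m| x % m).collect()
}

// Naive CRT reconstruction: scan the message space 0..prod(basis) for the
// unique value matching all residues (fine for a basis this small).
fn crt_reconstruct(residues: &[u64], basis: &[u64]) -> u64 {
    let space: u64 = basis.iter().product();
    (0..space)
        .find(|&x| basis.iter().zip(residues).all(|(&m, &r)| x % m == r))
        .expect("residues must come from a value in the message space")
}

fn main() {
    let basis = [13u64, 14, 15];
    let (a, b) = (42u64, 30u64);
    let ra = crt_decompose(a, &basis); // [3, 0, 12]
    let rb = crt_decompose(b, &basis); // [4, 2, 0]
    // The bivariate comparison LUT is expected to output 1 here, since 42 > 30.
    let expected = (crt_reconstruct(&ra, &basis) > crt_reconstruct(&rb, &basis)) as u64;
    println!("{expected}"); // prints 1
}
```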

Evidence
See the code example in the repository linked above.

Configuration (please complete the following information):
OS: Ubuntu 20.04
tfhe-version: 0.5.0

Additional context
Follow up for the corresponding question on Discord

Thanks for following up here!

I don't remember if I shared this already, but even with the noise set to 0 everywhere the computation still fails, so something might be going wrong during the keyswitch to/from the wopbs parameters.

Hello @SlurmsMacKenzie,

For the first example with the 4-bit, 5-block parameter set, you are going to run into an issue with the bivariate PBS. If I trust the name of the parameters, you can only process 5 blocks of 4 bits, but with a basis containing 3 moduli the bivariate operation gives 6 blocks to process, so this is outside what that parameter set supports for the bivariate part.
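The block-count arithmetic behind this limit can be sketched as follows (the `bivariate_blocks` helper is a hypothetical illustration, not a tfhe-rs function): each modulus in the CRT basis occupies one block, and a bivariate operation concatenates the blocks of both operands.

```rust
// Blocks a bivariate wopbs has to process: one block per modulus, per operand.
fn bivariate_blocks(basis_len: usize) -> usize {
    2 * basis_len
}

fn main() {
    let basis_len = 3; // moduli [13, 14, 15]
    let max_blocks = 5; // limit suggested by the "4 bits 5 blocks" parameter name
    let needed = bivariate_blocks(basis_len);
    // 6 blocks needed vs. 5 supported: out of range for this parameter set.
    println!("needed {needed}, supported {max_blocks}"); // prints "needed 6, supported 5"
    assert!(needed > max_blocks);
}
```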

We are aware that the wopbs needs some love to be more usable; it's one of our tasks for the current quarter.

For the other parameter set you tried, however, it is much less clear what the maximum number of blocks is.

On a different note, the 4-bit, 5-block parameter set was an experimental parameter set that we forgot to remove; it should not be used and has been removed for the next release. We did not remove it from 0.5, as that would be a breaking change, but we'll update the docstrings.

We'll keep looking into the other case you reported.

Cheers

Hello, this is taking longer than expected as we are looking at parameter updates; we'll let you know once that work has been done.

@SlurmsMacKenzie We are currently looking at this. It looks like it may be a lookup table generation bug that does not get caught by our tests; we are investigating.

@SlurmsMacKenzie We think we have identified a bug that does not trigger for the LUTs used in our tests but does for yours. We will check that the fix works properly and add more checks to our test suite, with your example as a non-regression test.

Sorry for the delay in fixing this.