brain-research/tensorfuzz

What is a spurious disagreement in the quantization example?

vv-ss opened this issue · 0 comments

vv-ss commented

Hi,

I am interested in the accuracy loss due to quantization and was running the quantized_fuzzer.py example. In the script, I see that we first get a "result" when the objective function is not met, namely when the argmax of logits and quantized_logits differ. Then we check whether the disagreement is genuine or spurious. Is this check meant to capture non-determinism in floating-point operations? I also see that the loop re-runs the same input 10 times. Is that intentional?
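To make sure I am reading the check correctly, here is roughly what I understand it to be doing (a paraphrase, not the actual quantized_fuzzer.py code; the function names and the `NUM_RECHECKS` constant are placeholders of mine):

```python
import numpy as np

NUM_RECHECKS = 10  # the loop count I am asking about


def is_disagreement(x, full_precision_logits, quantized_logits_fn):
    """True when the two inference paths predict different classes for input x."""
    return np.argmax(full_precision_logits(x)) != np.argmax(quantized_logits_fn(x))


def classify_disagreement(x, full_precision_logits, quantized_logits_fn):
    """Re-run the same input several times; call the disagreement spurious
    if it ever fails to reproduce."""
    for _ in range(NUM_RECHECKS):
        if not is_disagreement(x, full_precision_logits, quantized_logits_fn):
            return "spurious"
    return "genuine"
```

Is that the intended semantics, i.e. a disagreement only counts if it reproduces on every re-run?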

Thanks!