kraiskil/onnx2c

Setting floating-point precision for cout

robinvanemden opened this issue · 4 comments

Hi @kraiskil ,

Thanks for making onnx2c available! The onnx-to-c conversion works like a charm 😄

Because of cout's limited default floating-point precision, some of my test models initially failed the backend runner's result comparisons.

I was able to resolve this by simply raising cout's floating-point precision in main.cc and onnx_backend_tests_runner.cc using

std::cout.precision(20);
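
To illustrate, here is a minimal standalone C++ sketch (not onnx2c code; the weight value is an arbitrary example) showing that the default stream precision of 6 significant digits does not round-trip a float exactly, while a raised precision does:

// Minimal standalone demonstration of the precision problem.
// The default stream precision (6 significant digits) truncates float
// values, so parsing the printed text back yields a different float.
#include <iostream>
#include <sstream>

int main() {
    const float weight = 0.123456789f;  // arbitrary example value

    std::ostringstream low;             // default precision (6 digits)
    low << weight;

    std::ostringstream high;            // raised precision, as in the fix
    high.precision(20);
    high << weight;

    // Parse the printed text back and check whether the value survived.
    float from_low = 0.0f, from_high = 0.0f;
    std::istringstream low_in(low.str()), high_in(high.str());
    low_in >> from_low;
    high_in >> from_high;

    std::cout << std::boolalpha;
    std::cout << "default : " << low.str()
              << "  round-trips exactly: " << (from_low == weight) << '\n';
    std::cout << "prec(20): " << high.str()
              << "  round-trips exactly: " << (from_high == weight) << '\n';
    return 0;
}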

Hello Robin, glad you like it.

Thanks for the heads-up about the precision. I hadn't considered the possibility that it's the printing that limits the testing accuracy. At some point I had to loosen the testing accuracy so that tests, especially for bigger networks, would match the reference from TensorFlow. I assumed that was just errors accumulating in the calculations, but it could be that this same printing error got those calculations off to a bad start.

I can of course just add that line to main and the test runner, but do you have a simple test showcasing this error that you could submit as a pull request?

Sure! See pull request #2. I left out "std::cout.precision(20)" in the test runner and main code to make sure the showcase test still fails.

The matmul test in the pull request fails (for me) unless std::cout.precision(20) is also added to onnx_backend_tests_runner.cc (I added it to main.cc as well).

The data is generated by a basic Python script that I use to create simple models together with their inputs and onnxruntime-generated outputs.

It is probably a good idea to double-check whether a precision of 20 offers the right balance between C file size and model precision.
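
For what it's worth, the C++ standard library already defines the minimum number of significant digits needed for a lossless text round-trip: std::numeric_limits<float>::max_digits10 is 9 (17 for double), so a precision of 20 should be more than enough for correctness, and the only cost is somewhat longer literals in the generated C file. A tiny standalone sketch (not part of onnx2c):

// Print the minimum precision that guarantees a lossless text round-trip.
#include <iostream>
#include <limits>

int main() {
    std::cout << "float  needs " << std::numeric_limits<float>::max_digits10
              << " significant digits\n";
    std::cout << "double needs " << std::numeric_limits<double>::max_digits10
              << " significant digits\n";
    return 0;
}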

Thanks.
I'd say 20 is good for now. If some future user has problems with the generated file size, maybe they will contribute a command-line option to onnx2c that allows run-time selection :)
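
A rough sketch of what such an option could look like; the -p/--precision flag name and this hand-rolled argument loop are hypothetical, not how onnx2c currently parses its command line:

// Hypothetical sketch: select the output precision at run time.
#include <cstdlib>
#include <iostream>
#include <string>

int main(int argc, char *argv[]) {
    int precision = 20;  // default matching the currently hard-coded value
    for (int i = 1; i + 1 < argc; ++i) {
        std::string arg = argv[i];
        if (arg == "-p" || arg == "--precision")
            precision = std::atoi(argv[i + 1]);
    }
    std::cout.precision(precision);
    // ... emit the generated C source through std::cout as before ...
    std::cout << 0.123456789f << '\n';  // example output at the chosen precision
    return 0;
}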

#2 merged. Thanks!