deepRacin yields wrong results on AMD graphics cards
Opened this issue · 3 comments
BjoernLange commented
I compared the results of a deepRacin run with the expected results produced by the TensorFlow training. On an NVIDIA GTX 1080 Ti everything is fine; the results look like this:
Required result: [-3.32278252 7.10063839]
Actual result: [array([-3.32278347, 7.10063887], dtype=float32)]
Squared difference: [[ 9.09494702e-13 2.27373675e-13]]
(a squared difference on the order of e-13 seems to be acceptable)
However, on an AMD Radeon RX 580 the results look like this:
Required result: [-3.32278252 7.10063839]
Actual result: [array([-1.49906969, 5.0553546 ], dtype=float32)]
Squared difference: [[ 3.32592847 4.1831858 ]]
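For reference, the comparison above can be reproduced with a few lines of NumPy. This is a minimal sketch, not the actual test script; the variable names `required` and `actual` are placeholders for the TensorFlow reference output and the deepRacin output, and the `1e-6` tolerance is an assumed threshold, not one defined by deepRacin:

```python
import numpy as np

# Hypothetical names: `required` holds the reference values from the
# TensorFlow training, `actual` holds the deepRacin output (float32),
# using the AMD numbers reported above.
required = np.array([-3.32278252, 7.10063839])
actual = np.array([-1.49906969, 5.0553546], dtype=np.float32)

# Element-wise squared difference, as printed in the report.
squared_diff = (required - actual) ** 2
print(squared_diff)

# An absolute tolerance around 1e-6 comfortably covers float32
# round-off (the NVIDIA run differs by ~1e-13 squared); the AMD
# run fails such a check by a wide margin.
print(np.allclose(required, actual, atol=1e-6))
```

On the NVIDIA numbers the same check passes, which is what makes the AMD deviation stand out as a correctness bug rather than precision noise.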
janericlenssen commented
Thank you for reporting. Can you send the graph.dr file so that I can see which nodes are used?
Maybe it makes sense to wait until I have implemented the tests. Then one could see directly which node is not accurate.
BjoernLange commented
Here is the graph.dr file.
BjoernLange commented
I also tested on an Intel CPU, which works fine as well.