KhronosGroup/glTF-Validator

float comparison and precision setting

atteneder opened this issue · 3 comments

Hi,

I frequently run into errors like this:

{
    "code": "ACCESSOR_MAX_MISMATCH",
    "message": "Declared maximum value for this component (0.40000003576278687) does not match actual maximum (0.4000000059604645).",
    "severity": 0,
    "pointer": "/accessors/0/max/2"
}

Although there is a numerical error, it is very small. In my case it was introduced because some calculations were done on the bounds.

Would it be possible to provide a precision setting where one can supply a relative epsilon value (a maximum error threshold)?

It may not be that straightforward, considering scaling and the actual size of meshes in the scene.

Thanks for your consideration!

calculations on the bounds were done

I'd like to learn more about why this affects final asset serialization. glTF's floats are single-precision (32-bit) IEEE-754, while JSON numbers are arbitrary-precision decimals.

In the example above:

  • 0.4000000059604645 (actual maximum) corresponds to the bytes 0xCD 0xCC 0xCC 0x3E stored in the asset.
  • 0.40000003576278687 (declared maximum) would correspond to 0xCE 0xCC 0xCC 0x3E instead.

My guess would be that the DCC-internal bound is simply 0.4, which is not representable as float32. This value is then probably processed by two independent code paths, one of which generates 0.4000000059604645 (the closest representable value) and writes it in binary, while the other ends up with a slightly different value.
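The two values in the error differ by exactly one float32 ULP, which can be checked with a small sketch (Python used purely for illustration):

```python
import struct

# Pack 0.4 as a 32-bit IEEE-754 float; the closest representable
# value is 0.4000000059604645 (bytes CD CC CC 3E, little-endian).
packed = struct.pack('<f', 0.4)
print(packed.hex(' '))                      # cd cc cc 3e
actual = struct.unpack('<f', packed)[0]
print(actual)                               # 0.4000000059604645

# The declared maximum from the error corresponds to the next
# representable float32 up (bytes CE CC CC 3E).
declared = struct.unpack('<f', bytes.fromhex('cecccc3e'))[0]
print(declared)                             # 0.40000003576278687
```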

Given that accessor.min and accessor.max reflect the bounds of the stored values, the validation is intentionally strict to catch issues like this one.

I get the argument about intentional strictness.

Here's how the error is introduced:

  • I'm doing a roundtrip original glTF -> Unity Mesh -> exported glTF
  • Unity's bounds are not represented as min/max, but as center and extents (half-size) vectors (also with 32-bit float precision):

center = (max + min) / 2
extents = (max - min) / 2

max′ = center + extents
min′ = center - extents

I presume these intermediate calculations each lose a tiny bit of precision, which causes the error.
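This roundtrip can be sketched as follows (Python for illustration; the bound values are hypothetical). Rounding each intermediate to float32 can shift a reconstructed bound by one ULP:

```python
import struct

def f32(x: float) -> float:
    """Round a Python double to the nearest 32-bit float."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# Hypothetical mesh bounds on one axis (values chosen for illustration).
mn, mx = f32(-0.2), f32(0.4)

# Unity-style representation, each intermediate rounded to float32:
center  = f32((mx + mn) / 2)
extents = f32((mx - mn) / 2)

# Bounds reconstructed from center/extents at export time:
mn2, mx2 = f32(center - extents), f32(center + extents)

print(mn, mn2)   # with these values the reconstructed min is off by one ULP
print(mx, mx2)
```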

The solution I'll probably have to implement is to re-calculate the correct min/max at export by iterating all vertices. I wanted to avoid this extra work, hence the question.
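That recomputation could look roughly like this sketch, assuming a hypothetical helper over a raw little-endian float32 vertex buffer (the function name and layout assumptions are made up for illustration):

```python
import struct

def accessor_min_max(vertex_data: bytes, component_count: int = 3):
    """Recompute per-component min/max from the raw float32 buffer,
    so the declared bounds exactly match the stored values."""
    floats = struct.unpack(f'<{len(vertex_data) // 4}f', vertex_data)
    mins = list(floats[:component_count])
    maxs = list(floats[:component_count])
    for i, v in enumerate(floats[component_count:], start=component_count):
        c = i % component_count
        mins[c] = min(mins[c], v)
        maxs[c] = max(maxs[c], v)
    return mins, maxs

# Two VEC3 vertices: (0, 1, 2) and (-1, 0.5, 3)
data = struct.pack('<6f', 0.0, 1.0, 2.0, -1.0, 0.5, 3.0)
print(accessor_min_max(data))   # ([-1.0, 0.5, 2.0], [0.0, 1.0, 3.0])
```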

mosra commented

@lexaknyazev Hello, apologies for reviving old threads, but I'm regularly running into issues with min/max precision similar to what is described here and in #79.

I completely get that the min/max bounds should never be stored with such an imprecision that causes the actual data to be outside of those bounds. That would defeat the main purpose of this property, and it's the right thing for the validator to check.

However, exceeding the bounds is one validation error; a different error is raised when the values don't match exactly, and I think some one-directional fuzziness could be a good thing there. In particular, allowing min to be slightly less than the actual min element in the data, and max to be slightly more than the actual max element. Let's say, by 1.0e-6 * magnitude, with the epsilon being small enough but still practical to work with in 32-bit float calculations. This would still fail if the min/max bounds are too tight (as it should), but allow for more headroom in the float-to-string conversion.

With such fuzziness, exporters that don't (or can't) use a float-to-string implementation matching the one used by the validator could simply pad the min/max values slightly in order to pass. It would also open up the possibility of optimizing for shorter output: for example, storing "min": 0.4 instead of "min": 0.4000000059604645, and, for the max, "max": 0.4000001 instead of "max": 0.4000000059604645.
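The proposed one-directional check could look roughly like this sketch (function name and signature are hypothetical, not the validator's actual API; a degenerate all-zero accessor would need an absolute floor on the tolerance):

```python
def check_bounds(declared_min, declared_max,
                 actual_min, actual_max, rel_eps=1e-6):
    """One-directional fuzzy bounds check as proposed: the declared min
    may sit slightly below the actual min, and the declared max slightly
    above the actual max, but never inside the data range."""
    tol = rel_eps * max(abs(actual_min), abs(actual_max))
    ok_min = actual_min - tol <= declared_min <= actual_min
    ok_max = actual_max <= declared_max <= actual_max + tol
    return ok_min and ok_max

# Shortened min and slightly padded max would pass:
print(check_bounds(0.4, 0.4000001,
                   0.4000000059604645, 0.4000000059604645))   # True
# Bounds tighter than the data still fail, as they should:
print(check_bounds(0.41, 0.4000001,
                   0.4000000059604645, 0.4000000059604645))   # False
```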

Would this be something to consider? Thank you for your response.