WebAssembly/simd

Trunc Sat Conversion Test Looks Wrong

jlb6740 opened this issue · 2 comments

Hi,

I was looking at the implementation of trunc_sat in V8 (https://github.com/v8/v8/blob/c74e9e2cbf7c23753ab331011b9ce5e9572daf38/src/compiler/backend/x64/code-generator-x64.cc#L2924) and I keep thinking something is wrong with the algorithm: it seems to be off by one for values between INT_MAX+1 and UINT_MAX. Instead of subtracting tmp = src - max_signed, I was thinking it should be tmp = src - (max_signed + 1). However, it passes the spec test, so I thought it must be right. Then I looked at the spec test and I don't understand the expected result there either:

(assert_return (invoke "i32x4.trunc_sat_f32x4_u" (v128.const f32x4 2147483647.0 2147483647.0 2147483647.0 2147483647.0))
(v128.const i32x4 2147483648 2147483648 2147483648 2147483648))

Why is FLT 2147483647.0 expected to convert to UINT 2147483648? Why not 2147483647?
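
For context, here is a rough scalar sketch of the subtract-based approach under discussion (a hypothetical illustration, not V8's actual code generation), assuming only a signed float-to-int32 truncating conversion is available, as with cvttps2dq on x86:

```cpp
#include <cstdint>
#include <cmath>

// Hypothetical scalar sketch of the subtract-and-convert idea discussed above
// (not V8's actual code). Assumes only a signed float->int32 truncating
// conversion is available.
uint32_t trunc_sat_f32_to_u32_sketch(float src) {
  if (std::isnan(src) || src <= -1.0f) return 0;   // NaN and negatives saturate to 0
  if (src >= 4294967296.0f) return UINT32_MAX;     // saturate at 2^32 and above
  const float kTwoPow31 = 2147483648.0f;           // 2^31 is exactly representable
  if (src < kTwoPow31) {
    // Low half: fits in the signed range, convert directly.
    return static_cast<uint32_t>(static_cast<int32_t>(src));
  }
  // High half: subtract 2^31, convert signed, then set the top bit again.
  // The off-by-one question above is whether the subtracted constant should be
  // max_signed (2147483647.0) or max_signed + 1 (2147483648.0).
  int32_t low = static_cast<int32_t>(src - kTwoPow31);
  return static_cast<uint32_t>(low) | 0x80000000u;
}
```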

Looking at the definition for trunc_sat:

Lane-wise saturating conversion from floating point to integer using the IEEE convertToIntegerTowardZero function. If any input lane is a NaN, the resulting lane is 0. If the rounded integer value of a lane is outside the range of the destination type, the result is saturated to the nearest representable integer value.
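
A minimal scalar sketch of that quoted lane behaviour (reference semantics only, not an efficient lowering; the helper name is made up):

```cpp
#include <cstdint>
#include <cmath>

// Minimal sketch of the quoted lane semantics for i32x4.trunc_sat_f32x4_u:
// NaN becomes 0, everything else is truncated toward zero and saturated to
// the unsigned 32-bit range.
uint32_t trunc_sat_lane_u32(float x) {
  if (std::isnan(x)) return 0;                      // NaN lanes produce 0
  double t = std::trunc(static_cast<double>(x));    // convertToIntegerTowardZero
  if (t < 0.0) return 0;                            // saturate below the range
  if (t > 4294967295.0) return UINT32_MAX;          // saturate above the range
  return static_cast<uint32_t>(t);
}
```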

I can't imagine why this value should not convert to 2147483647.

I don't know the details of the instruction, but 2147483647.0 is not representable as a single-precision floating-point number.

Hi @Maratyszcza ... yeah, I think that is where I went wrong: 2147483647.0 is stored as 2147483648.0. I was focusing on the int range and not the float range. Perfect, thanks!
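
For anyone following along, the rounding is easy to confirm with a small stand-alone C++ check (hypothetical snippet):

```cpp
#include <cstdio>

int main() {
  // 2147483647.0 has no exact float32 representation; the nearest float is 2^31.
  float f = 2147483647.0f;   // rounds to 2147483648.0f
  std::printf("%.1f\n", f);  // prints 2147483648.0
  std::printf("%s\n", f == 2147483648.0f ? "equal to 2^31" : "not equal");
}
```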