Unclear semantics of integer and float constants
There are basically two possible approaches for C2:
- Distinguish between integer types, so that an integer constant may have a specific bit width. To effect this, we must either cast explicitly or use suffixes: `1.0f`, `10uL`, etc.
- Say that integer constants are handled as BigInts (we already borrow that functionality from LLVM), and that floating-point constants are always folded at the highest available precision (f128 or f64). At the final step, the value is rounded to the correct precision/size.
For the second alternative, if we have the following code:
```
i32 a = 2 * 1000;
f32 b = 2 * 1000.0;
i8 c = 3;
i32 d = 600 + a + 10000 * c;
```
Expressing the above in C++:
```cpp
int32_t a = (BigInt(2) * BigInt(1000)).toSigned32();
float b = (float)(2.0 * 1000.0);
int8_t c = 3;
int32_t d = 600 + a + 10000 * (int32_t)c;
```
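As a minimal runnable sketch of what the narrowing step could look like (here `int64_t` stands in for the arbitrary-precision APInt, and `toSigned32` plus the throw are hypothetical stand-ins for a compile-time diagnostic):

```cpp
#include <cstdint>
#include <limits>
#include <stdexcept>

// Sketch only: int64_t stands in for an arbitrary-precision integer (APInt),
// and the throw stands in for a compile-time diagnostic.
int32_t toSigned32(int64_t v) {
    if (v < std::numeric_limits<int32_t>::min() ||
        v > std::numeric_limits<int32_t>::max())
        throw std::range_error("constant does not fit in i32");
    return static_cast<int32_t>(v);
}

int main() {
    // Fold at full width, then narrow with a range check:
    int32_t a = toSigned32(int64_t{2} * int64_t{1000});  // ok: 2000
    return a == 2000 ? 0 : 1;
}
```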
An overflow, for example:
```
i8 a = 2 * 127;
u8 b = 255 + 1;
```
Both would be detected and flagged as errors at compile time.
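For comparison, C++ list-initialization already rejects a folded constant that does not fit the destination type at compile time, which mirrors the behavior sketched here:

```cpp
#include <cstdint>

int8_t  ok{2 * 63};      // fine: 126 fits in int8_t
// int8_t  bad{2 * 127};  // ill-formed: 254 does not fit in int8_t
// uint8_t bad2{255 + 1}; // ill-formed: 256 does not fit in uint8_t
```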
However, this would not be flagged:
```
i8 b = 2;
i8 a = b * 127;
```
Nor would this:
```
i8 a;
....
u16 b = 65535 + a + 65535;
```
Only directly folded constant expressions produce the error.
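A sketch of why the variable cases slip through, assuming the folding rule is "fold only when every operand is a literal": as soon as a variable participates, ordinary typed arithmetic applies and overflow silently wraps instead of being diagnosed.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // All-literal: a folding compiler would reject this at compile time.
    // i8 a = 2 * 127;   // error: constant 254 out of range for i8

    // Variable operand: no folding, so the same arithmetic wraps silently.
    int8_t b = 2;
    int8_t a = static_cast<int8_t>(b * 127);  // wraps to -2, no diagnostic
    std::printf("%d\n", a);
    return 0;
}
```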
I think we should go for (2), since tagging constants with suffixes is a drag and creates those incredibly annoying errors when you turn on strict type checking.
For integers: when parsed, numbers are arbitrary-size integers. Currently these (APInt) numbers can be added/multiplied etc. at analysis time. When assigned, they must fit the type or an error occurs.
For floats, we could use the same principle: just use Float64 for the calculations and round to Float32 if needed.
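A sketch of that rounding step, where a finite f64 constant that overflows f32 can be caught (`roundToF32` is a hypothetical helper; the throw stands in for a compile-time diagnostic):

```cpp
#include <cmath>
#include <stdexcept>

// Sketch: fold floating constants at f64 precision, then round to f32.
float roundToF32(double v) {
    float r = static_cast<float>(v);
    // A finite f64 value that becomes infinite in f32 is out of range.
    if (std::isinf(r) && !std::isinf(v))
        throw std::range_error("constant out of range for f32");
    return r;
}

int main() {
    float b = roundToF32(2.0 * 1000.0);  // folds to 2000.0, rounds exactly
    // roundToF32(1e300);                // would be flagged: overflows f32
    return b == 2000.0f ? 0 : 1;
}
```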
I don't want to annoy developers with suffixes; they can be removed completely.
What about f128? Should we support that where available?
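If f128 support ends up target-dependent, one option is to pick the folding type at build time; as a sketch, GCC and Clang predefine `__SIZEOF_FLOAT128__` on targets where `__float128` is available:

```cpp
// Sketch: choose the widest folding type available on the target.
#if defined(__SIZEOF_FLOAT128__)
using fold_float = __float128;  // fold constants at 128-bit precision
#else
using fold_float = double;      // fall back to f64 otherwise
#endif
```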