BigDecimal literal syntax
littledan opened this issue Β· 22 comments
Following the rough consensus of several programming languages as well as TC39 tradition, this proposal uses the syntax `123.456m` for BigDecimal literals. (Maybe m stands for Money?) Is this a reasonable syntax, or are there any issues?
Should `123.m` be a valid literal?
I like the `d` suffix more, because `123n` and `123m` look pretty similar, especially in monospaced fonts.
After some googling (correct me if I got something wrong):
- Java - doesn't have a literal for BigDecimal, uses the `d` suffix for doubles;
- Python - doesn't have a literal for Decimal, doesn't use `d` and `m` suffixes;
- PHP - no builtin BigDecimal type, doesn't use `d` and `m` suffixes;
- C# - uses the suffix `m` for its 128-bit decimal type and the suffix `d` for doubles;
- Ruby - doesn't have a literal for BigDecimal, doesn't use `d` and `m` suffixes, uses the `0d` prefix for decimal notation;
- C++ - no builtin BigDecimal type, doesn't use `d` and `m` suffixes;
- Go - doesn't have literals for math/big types, doesn't use `d` and `m` suffixes;
- Rust - no builtin BigDecimal type, doesn't use `d` and `m` suffixes.
I don't see a strong consensus here - most of the popular languages are missing this feature, so there would be little harm in choosing a different suffix than `m`. Personally I find it confusing; I think `d` is much more intuitive.
I propose we just use `n`. It would be easier to remember, and also easier to learn: "when a number has an n at the end, it's not a float, it's an arbitrary-precision number".
Even if their implementations are different, their semantics are very much the same.
It's also not a big deal, in terms of parser writing, to support both decimals and integers with similar syntax. I suppose I could change a few lines in the Terser parser and this would go right in.
@fabiosantoscode `n` is already used for BigInt. It would be exceedingly confusing if `5.0n` produced a BigInt and `5.01n` produced a BigDecimal.
I was imagining that it'd be possible to have BigDecimal literals that don't have a literal decimal point in them, e.g., `5m` would be meaningful. If we use `n`, that'd be lost due to the BigInt conflict.
@ljharb What about extending BigInt to BigDecimal? I mean, let BigDecimal be a supertype of BigInt; then we could easily solve the problem of ambiguous and confusing syntax.
I was wondering why we should separate BigInt and BigDecimal. If we do so, we can't simply write functions that are parametrically polymorphic over BigInt & BigDecimal; we would have to use a function like `matchBigNumbers(BigN1, BigN2)`, which makes code long.
@ENvironmentSet I think that would lead to certain mismatches. For one, division in BigInt is integer division (which many developers told us was an important use case), but in BigDecimal, we may want to give a more precise answer, or trigger an error (discussed further in #13).
How about an `x` suffix, from the Roman numeral for ten?
@wheresrhys It would make `0x0x` a valid literal :)
I can imagine cases where `1.d` being invalid syntax could cause problems. Let's say we have some code generator and we try to `eval` a template string containing `${integer}.${fraction}${isBigDecimal ? 'd' : ''}`. If `fraction` is an empty string, then for Numbers this will work fine, but for BigDecimals it will throw a SyntaxError. So I'd prefer to make a trailing dot before `d` allowed syntax. @ljharb WDYT?
I don't like how a `0.d` primitive value or `0.d.toString()` looks, but I think that making it invalid syntax would lead to a worse developer experience.
@chicoxyzzy `fraction || 0` seems like a better choice in that example than allowing the worst part of Number syntax to propagate.
Due to #36, I think we should avoid `d` as the suffix: even if we don't want hexadecimal decimals now, it feels like it backs us into an unfortunate corner (which may get worse depending on how extended numeric literals go). For now, I think I'll switch back to `m`, but if you have other ideas here, please post them.
About a trailing `.`: my feeling is that we should follow the Number grammar here and allow it. Divergences from Number should have a particular reason, in my opinion (like omitting legacy octal literals from BigInt because they are legacy).
Maybe the trailing decimal point warrants a separate issue; it feels like a mistake nobody wants, one that would be a shame to replicate.
I think `"0."` should be allowed only when a string is being parsed, not as a literal in code, such that any `BigDecimal` parser implementation should allow at most one trailing dot without throwing an error or returning `NaN`.
It seems that this issue has been resolved. Shall we close it?
Based on feedback received at plenary about this issue, it looks like adding new literal syntax for decimals is, as of today, too heavy a lift for some of the engine implementors. (That's not to say that it's a bad idea, or that we couldn't return to the issue sometime down the road.)
I don't think the pushback was about new syntax as much as it was about "a new primitive" and `===`, but maybe I'm remembering incorrectly.
> I don't think the pushback was about new syntax as much as it was about "a new primitive" and `===`, but maybe I'm remembering incorrectly.
IIRC, the argument for "no new syntax" did indeed involve `===`. The argument went something like this.

Assumptions:
1. no new built-in datatype;
2. no overloading; but:
3. new literal syntax (it would basically be just a macro for invocations of `new Decimal(...)`, given assumption (1)).

Given those assumptions, we would be led to the awkward situation where `1.2m === 1.2m` would be `false` because of (2).
In other words, adding new literals without overloading would lead to confusion. That was my takeaway, anyway. Maybe I'm getting the argument wrong.