CertainLach/jrsonnet

Support for big integers

LorisFriedel opened this issue · 1 comment

Hello!

Would it be possible to improve the current integer precision? I understand that all numbers in Jsonnet are represented as IEEE 754 doubles, but for very big integers (larger than 2^53 IIRC) that gives pretty poor precision.
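For instance, a quick illustration of the rounding, using the standard std.pow:

// Integers above 2^53 are no longer exactly representable as doubles,
// so adding 1 to 2^53 is lost to rounding and this evaluates to true.
std.pow(2, 53) + 1 == std.pow(2, 53)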

If sticking to the language specification is the priority, maybe adding a "big number" feature to the standard library could be considered? Or something else?

WDYT?

There is an exp-bigint experimental feature available on the master branch.
I am not sure how it should be exposed to the end user. I am considering not implementing it in Jrsonnet itself, but instead letting users of Jrsonnet as a library implement such a type themselves via Val::Custom, so don't expect it to stay forever; it exists only for experimentation. Or should there be a proposal for upstream jsonnet to support this feature?

For now, the syntax is

// https://github.com/CertainLach/jrsonnet/issues/103
// Equality expressed through >= and <= on bigint values.
local bigintEq(a, b) = a >= b && a <= b;

// std.bigint constructs a bigint value; std.type reports it as 'bigint'.
std.type(std.bigint(123)) == 'bigint' &&
bigintEq(std.bigint(20) * std.bigint(30), std.bigint(600))

For now, it works only with the basic operators, not with std.pow or other standard library functions.
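For example, something along these lines is expected to work, while the std.pow variant is not (a sketch, not a tested case):

local bigintEq(a, b) = a >= b && a <= b;
local b = std.bigint(12345);

// Repeated multiplication with the basic * operator works...
bigintEq(b * b, std.bigint(152399025))
// ...while e.g. std.pow(b, 2) is not expected to work yet.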