purescript/purescript-numbers

Is Number always 64-bit across compilers?

JamieBallingall opened this issue · 2 comments

We have a PR (#19) that's been outstanding for a while, which wants to add additional constants to Data.Number. Technically, it will be easy to resolve. We either:

  1. Put the relevant constants in Number.js and import them into Number.purs
  2. Hardcode the relevant constants in Number.purs

The current PR takes approach (1), but it would be very easy to change to (2). The two approaches, however, imply different things about what Number means in PureScript (as opposed to JavaScript).
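For illustration, here's a minimal sketch of the two approaches, using maxValue as an example constant (the exact constants in #19 may differ):

```purescript
-- Approach (1): the value comes from the foreign module, so each
-- backend supplies whatever its runtime provides.
--
-- Number.js (ES-module-style FFI):
--   export const maxValue = Number.MAX_VALUE;
--
-- Number.purs:
foreign import maxValue :: Number
```

```purescript
-- Approach (2): the value is hardcoded in PureScript, so every backend
-- sees the same IEEE 754 double constant.
maxValue :: Number
maxValue = 1.7976931348623157e308
```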

  1. Means that Number is a floating-point number of some kind, which could be 64-bit, 32-bit, or possibly another size. The JavaScript backend chooses to implement it as 64-bit, but other compilers can make other choices. Every compiler must then specify values like maxValue for its own representation
  2. Means that Number is a 64-bit floating-point number across all compilers. Each compiler still needs to provide implementations of functions like sin, but any constant can be hardcoded directly in PureScript

I don't really have an opinion on this, since I only use the JavaScript compiler.

For what it's worth, Wikipedia says that "In most implementations of PostScript, and some embedded systems, the only supported precision is single."

This relates to another open PR (purescript/purescript-integers#48), which is looking to define minimum and maximum values for Int.
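Approach (2) applied to Int would look similar; a sketch, assuming a 32-bit signed representation (the names here are illustrative, not necessarily what that PR proposes):

```purescript
-- Hardcoded bounds of a 32-bit signed integer; no FFI file required.
maxInt :: Int
maxInt = 2147483647

minInt :: Int
minInt = -2147483648
```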

I didn't notice before, but the module Prim states specific representations for both Int and Number. Specifically:

  • Number: "A double precision floating point number (IEEE 754)."
  • Int: "A 32-bit signed integer."

That's fairly dispositive, so let's go with approach (2): defining all constants in PureScript, placing no additional load on compiler writers, but constraining what Number and Int mean.
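Concretely, that would mean Data.Number carries plain definitions like the following (the values are facts of the IEEE 754 double format; the names are illustrative):

```purescript
-- Difference between 1.0 and the next representable double (2^-52).
epsilon :: Number
epsilon = 2.220446049250313e-16

-- Smallest positive (denormal) double.
minValue :: Number
minValue = 5.0e-324
```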

Any objections?

I just wanted to say that my lack of a response is mainly because I'm not sure how to respond to this, not because I'm ignoring this.

The questions asked here are similar to 'what should the runtime representation of Char be?' In some backends one encoding may be better, or even the only option possible, whereas in another it is problematic or impossible.