cockroachdb/apd

How to use Quantize with limited precision context

alexykot opened this issue · 5 comments

I'm building a financial application that deals with multiple currencies, with a separate Context whose settings are appropriate to each currency. The DB storage supports the widest possible range of values so that any currency can be stored; the column type is Decimal(78,18), i.e. MaxExponent=60 and MinExponent=-18 in apd terms.
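To make that mapping concrete, a context mirroring the column definition might look roughly like this (a sketch only; the Precision of 78 is my assumption from the column's 78 total digits):

dbCon := &apd.Context{
	Precision:   78,  // Decimal(78,18): 78 significant digits in total (assumed)
	MaxExponent: 60,  // 78 - 18 = 60 digits before the decimal point
	MinExponent: -18, // 18 digits after the decimal point
	Traps:       apd.DefaultTraps,
}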

I'm trying to quantize a value that comes from the DB at the highest possible precision down to the precision appropriate for a given currency. The value itself is not large, but it has many trailing decimal zeroes that I want to remove to bring it to the correct precision.

For whatever reason Quantize() does not produce the result I expect, but returns an Overflow condition instead. Below is sample code that reproduces it:

package main

import (
	"fmt"
	"github.com/cockroachdb/apd/v2"
)

func main() {
	dStr := "6410.000000000000000000"    // small value with many trailing zeroes as seen in DB 
	wideCon := &apd.Context{         // widest supported context
		MaxExponent: 78,                
		MinExponent: -18,
		Traps:       apd.DefaultTraps,
	}
	d, _, _ := wideCon.SetString(&apd.Decimal{}, dStr)

	limitCon := &apd.Context{         // limited context suitable for given currency
		Precision:   17,
		MaxExponent: 9,
		MinExponent: -8,
		Traps:      apd.DefaultTraps,
	}
	scratch := &apd.Decimal{}
	con, err := limitCon.Quantize(scratch, d, limitCon.MinExponent)   // attempt to quantize to correct precision, expect to work fine 
	fmt.Println(con, err, scratch)      // fails with `overflow` condition
}

In this piece of code wideCon is the widest supported context in the system, i.e. the DB default context, while limitCon is the target currency context. If I understand the context settings correctly, limitCon effectively forces values in this context to stay within the 999999999.99999999 range, which is exactly what I need.

The sample value is way below the MaxExponent limit, and I expect limitCon.Quantize() to produce the same value with the decimal places adjusted to fit, but instead it returns a NaN value and an Overflow condition.

The error goes away if I increase MaxExponent (it works at MaxExponent=12 for this value), but working around it this way means I have to forfeit the ceiling check, which I'd rather not do.

What is a better way to achieve what I need: rounding a value of arbitrarily long precision into constrained per-currency precision settings, with the ceiling constraint enforced correctly?

I think the problem here is that limitCon can't fit the original value with all of its trailing zeros, so it fails. It's been a long time since I knew how this worked, so it's hard for me to give a more detailed explanation. However, you should consider using the Reduce method, which removes trailing zeros.
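Something along these lines, for example (a rough sketch reusing d, scratch and limitCon from your repro; I haven't verified it against this exact case):

reduced := &apd.Decimal{}
reduced.Reduce(d) // strip the trailing zeros from the coefficient
cond, err := limitCon.Quantize(scratch, reduced, limitCon.MinExponent) // quantize the reduced value; should now fit within Precision 17
fmt.Println(cond, err, scratch)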

Well, the problem with using Reduce here is that I cannot guarantee that the trailing figures are actually zeroes. So I need the value to be correctly rounded, not just truncated.

I worked around this by doing a double parse, like this:

d, _, _ = limitCon.SetString(&apd.Decimal{}, d.String()) // re-parse through the limited context; this enforces the ceiling
_, err := wideCon.Quantize(d, d, limitCon.MinExponent)   // round to the target number of decimal places with the wide context

I have to render the value as a string and parse it again to enforce the ceiling, which is not ideal but does what I need: the SetString call returns an overflow for values above limitCon's MaxExponent. The quantizing is then done with wideCon, which rounds correctly to the needed precision but does not check the ceiling, since it is the wide context.
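Put together, the whole flow looks roughly like this (a sketch assembled from the snippets above, using the same context values as my repro; not polished):

package main

import (
	"fmt"
	"github.com/cockroachdb/apd/v2"
)

func main() {
	wideCon := &apd.Context{ // widest supported context, i.e. the DB default
		MaxExponent: 78,
		MinExponent: -18,
		Traps:       apd.DefaultTraps,
	}
	limitCon := &apd.Context{ // limited context for the given currency
		Precision:   17,
		MaxExponent: 9,
		MinExponent: -8,
		Traps:       apd.DefaultTraps,
	}

	// Value as it comes out of the DB, with the full 18 decimal places.
	d, _, err := wideCon.SetString(&apd.Decimal{}, "6410.000000000000000000")
	if err != nil {
		panic(err)
	}

	// Step 1: re-parse through the limited context. This is the part that
	// enforces the ceiling: values above limitCon's MaxExponent overflow here.
	d, _, err = limitCon.SetString(&apd.Decimal{}, d.String())
	if err != nil {
		panic(err)
	}

	// Step 2: quantize with the wide context, which rounds to the target
	// number of decimal places without tripping over the ceiling.
	if _, err := wideCon.Quantize(d, d, limitCon.MinExponent); err != nil {
		panic(err)
	}

	fmt.Println(d) // expected: 6410.00000000
}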

You might try comparing how this works in Java or Python. It is possible they exhibit the same error cases. If so, that suggests the exact method you are following is not covered by the common decimal spec.

Yep, tried it in Python; here it is:

Python 3.7.4 (default, Jul 16 2019, 07:12:58) 
[GCC 9.1.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from decimal import *
>>> limit_con = getcontext()
>>> limit_con.prec = 16
>>> limit_con.Emin = -8
>>> limit_con.Emax = 8
>>> dec = Decimal("6410.000000000000000000")
>>> limit_con.quantize(dec, Decimal('0.00000000'))
Decimal('6410.00000000')

Works as expected.

And the overflow happens as expected as well:

>>> dec_big = Decimal("64100000000000000000000000000000.000000000000000000")
>>> limit_con.quantize(dec_big, Decimal('0.00000000'))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.7/_pydecimal.py", line 2567, in quantize
    'exponent of quantize result too large for current context')
  File "/usr/lib/python3.7/_pydecimal.py", line 4040, in _raise_error
    raise error(explanation)
decimal.InvalidOperation: exponent of quantize result too large for current context