Consistency issue when trying to encode long numbers
avnerbarr opened this issue · 0 comments
Hi,
I am trying to use the msgpack-lite library together with the int64-buffer library (https://github.com/kawanet/int64-buffer). I get mismatches when I encode a number wrapped in an Int64BE versus encoding the same (small) number directly:
var Int64BE = require("int64-buffer").Int64BE;
var msgpack = require("msgpack-lite");
var boxed = new Int64BE("1530");
msgpack.encode(boxed); // "\xd3\x00\x00\x00\x00\x00\x00\x05\xfa"
var msg = msgpack.encode(1530); // "\xcd\x05\xfa"
Both encoded values represent 1530, but their byte representations differ.
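To sanity-check that claim without either library (a plain-Node sketch; `payloadValue` is just a hypothetical helper for this illustration): skip the one-byte type tag and fold the remaining payload bytes big-endian:

```javascript
// 0xd3 = int64, 0xcd = uint16 (MessagePack format tags).
// Skip the 1-byte tag, then fold the payload bytes big-endian.
function payloadValue(bytes) {
  return bytes.slice(1).reduce((acc, b) => acc * 256 + b, 0);
}

const int64Encoding  = [0xd3, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0xfa];
const uint16Encoding = [0xcd, 0x05, 0xfa];

console.log(payloadValue(int64Encoding));  // 1530
console.log(payloadValue(uint16Encoding)); // 1530
```

So the two buffers carry the same integer; only the type tag and payload width differ.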
We use the encoded value as a key in a database, and this causes a discrepancy with our Scala and Python code, which always uses the compact representation of the encoded value.
For example, in the Scala msgpack (msgpack-java) code there is logic that chooses the most compact encoding for a Long (same for Python):
public MessagePacker packLong(long v)
        throws IOException
{
    if (v < -(1L << 5)) {
        if (v < -(1L << 15)) {
            if (v < -(1L << 31)) {
                writeByteAndLong(INT64, v);
            }
            else {
                writeByteAndInt(INT32, (int) v);
            }
        }
        else {
            if (v < -(1 << 7)) {
                writeByteAndShort(INT16, (short) v);
            }
            else {
                writeByteAndByte(INT8, (byte) v);
            }
        }
    }
    else if (v < (1 << 7)) {
        // fixnum
        writeByte((byte) v);
    }
    else {
        if (v < (1L << 16)) {
            if (v < (1 << 8)) {
                writeByteAndByte(UINT8, (byte) v);
            }
            else {
                writeByteAndShort(UINT16, (short) v);
            }
        }
        else {
            if (v < (1L << 32)) {
                writeByteAndInt(UINT32, (int) v);
            }
            else {
                writeByteAndLong(UINT64, v);
            }
        }
    }
    return this;
}
Is there a way to match this behavior while still always using int64-buffer, even for small numbers (i.e., without having to figure out the shortest representation myself)?
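One workaround I'm considering, sketched below (this is not msgpack-lite API; `encodeCompactUint` is a hypothetical helper, and it only covers non-negative values): mirror the selection logic from `packLong` above and emit the compact form directly. Alternatively, unwrapping with int64-buffer's `toNumber()` before encoding should let msgpack-lite pick the compact form itself, at least for values within Number's safe integer range.

```javascript
// Sketch: choose the most compact MessagePack encoding for a
// non-negative integer, mirroring msgpack-java's packLong().
function encodeCompactUint(v) {
  if (v < 0x80) {
    // positive fixint: the value is its own encoding
    return Buffer.from([v]);
  }
  if (v < 0x100) {
    return Buffer.from([0xcc, v]); // uint8
  }
  if (v < 0x10000) {
    const b = Buffer.alloc(3);     // uint16
    b[0] = 0xcd;
    b.writeUInt16BE(v, 1);
    return b;
  }
  if (v < 0x100000000) {
    const b = Buffer.alloc(5);     // uint32
    b[0] = 0xce;
    b.writeUInt32BE(v, 1);
    return b;
  }
  const b = Buffer.alloc(9);       // uint64
  b[0] = 0xcf;
  b.writeBigUInt64BE(BigInt(v), 1);
  return b;
}

console.log(encodeCompactUint(1530)); // <Buffer cd 05 fa>
```

For 1530 this produces the same bytes as `msgpack.encode(1530)` above (`\xcd\x05\xfa`), so keys generated this way should match the Scala/Python side.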