Proposal: Represent Uint128 as two uint64 instead of big.Int
Fryuni opened this issue · 2 comments
A big.Int value is 32 bytes in size:
- 1 byte for a bool
- 7 padding bytes (for pointer alignment)
- 8 bytes for a slice pointer (one word)
- 8 bytes for the slice length (one word)
- 8 bytes for the slice capacity (one word)
The number it holds is only 16 bytes, so using a big.Int adds an extra allocation and an indirection, and triples the memory use.
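This is easy to verify with unsafe.Sizeof, which reports the struct header alone (the separately allocated digit storage is extra):

```go
package main

import (
	"fmt"
	"math/big"
	"unsafe"
)

func main() {
	// Prints 32 on 64-bit platforms: bool + padding + slice header,
	// not counting the separately allocated backing array of words.
	fmt.Println(unsafe.Sizeof(big.Int{}))
}
```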
I propose changing the Uint128 type to:

```go
type Uint128 struct {
	High, Low uint64
}
```
There are libraries in the community that define a Uint128 together with its arithmetic operations. I don't think any operations are needed here, so the serialization and deserialization could be hand-coded (using the encoding/binary package). If we would rather not maintain that code, we could depend on one of those libraries instead.
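Roughly what I have in mind for the hand-coded path, as a sketch: it assumes the Uint128 struct proposed above and big-endian byte order, and the names putUint128/getUint128 are placeholders, not part of any existing API:

```go
import "encoding/binary"

// putUint128 writes v into dst[:16] in big-endian order.
func putUint128(dst []byte, v Uint128) {
	binary.BigEndian.PutUint64(dst[0:8], v.High)
	binary.BigEndian.PutUint64(dst[8:16], v.Low)
}

// getUint128 reads a big-endian 128-bit value from b[:16].
func getUint128(b []byte) Uint128 {
	return Uint128{
		High: binary.BigEndian.Uint64(b[0:8]),
		Low:  binary.BigEndian.Uint64(b[8:16]),
	}
}
```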
For consumers that already have a big.Int, we could provide a Uint128FromBig(*big.Int) Uint128 function.
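A sketch of that conversion, assuming the value is non-negative and fits in 128 bits:

```go
import (
	"encoding/binary"
	"math/big"
)

// Uint128FromBig converts v to the two-word representation.
// FillBytes writes the absolute value of v in big-endian order,
// zero-padded, and panics if it needs more than 16 bytes.
func Uint128FromBig(v *big.Int) Uint128 {
	var buf [16]byte
	v.FillBytes(buf[:])
	return Uint128{
		High: binary.BigEndian.Uint64(buf[0:8]),
		Low:  binary.BigEndian.Uint64(buf[8:16]),
	}
}
```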
This would be a breaking change to the API, but that is OK given the stability status noted in the README.
I am curious what your use case for the uint128 data type is. Currently, we don't actually use it and it hasn't received much thought. big.Int was chosen because that is what [github.com/oschwald/maxminddb-golang](https://github.com/oschwald/maxminddb-golang) uses for deserialization. If this change is made, additional changes will likely be needed for loading an existing database containing these values.
When I wrote the reader, I was hoping that something would eventually happen with golang/go#9455, but it has been almost 8 years without movement.
Currently, it is just for including an IPv6 address in the data (for custom aliasing). It is not a huge deal, since there are only a few hundred of those, but using a pointer-based type that is larger than the data it points to felt very odd to me.
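For reference, packing an IPv6 address into the proposed representation could look like this (uint128FromIP is a hypothetical helper, and callers would need to handle the nil that To16 returns for invalid addresses):

```go
import (
	"encoding/binary"
	"net"
)

// uint128FromIP packs the 16 bytes of an IPv6 address into
// the proposed two-word Uint128.
func uint128FromIP(ip net.IP) Uint128 {
	b := ip.To16() // nil if ip is not a valid IP address
	return Uint128{
		High: binary.BigEndian.Uint64(b[0:8]),
		Low:  binary.BigEndian.Uint64(b[8:16]),
	}
}
```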