Better chunk size calculation
Currently, the data chunks sent to memcached / L1 (in chunked mode) are fixed at 1024 bytes, minus 16 bytes for the token, leaving 1008 bytes of data. This is a rough calculation that does not capture enough information to reliably keep each chunk within a single slab. The overhead of the key and metadata should be subtracted from the default slab size we target (1184 bytes) to find the optimal chunk size for the data. Currently, if the key is over ~80 bytes long we will spill over into the next slab. Fortunately for us, keys tend to be fairly consistent in size, so we likely won't see bad behavior from this. Nevertheless, it should be done properly so that we have a real guarantee of single-slab-ness.
Fortunately for me, this was already figured out by @smadappa in the EVCache client (for its client-side chunking support): https://github.com/Netflix/EVCache/blob/master/evcache-client/src/main/java/com/netflix/evcache/pool/EVCacheClient.java#L654
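For context, here is a minimal Go sketch of the per-key calculation described above. The `metadataOverhead` value and the function names are illustrative assumptions, not the actual constants rend or memcached use:

```go
package main

import "fmt"

// Illustrative constants; real values depend on the memcached slab
// configuration and the per-item metadata the server stores.
const (
	targetSlabSize   = 1184 // slab class each chunk should land in
	tokenSize        = 16   // per-chunk token prepended to the data
	metadataOverhead = 48   // assumed item-header overhead (hypothetical)
)

// chunkDataSize returns how many bytes of payload fit in one chunk so
// that token + key + metadata + data stays within the target slab.
func chunkDataSize(keyLen int) int {
	size := targetSlabSize - tokenSize - metadataOverhead - keyLen
	if size < 0 {
		return 0
	}
	return size
}

func main() {
	// With a longer key, the payload per chunk shrinks instead of the
	// item spilling into the next slab class.
	fmt.Println(chunkDataSize(40))
	fmt.Println(chunkDataSize(80))
}
```

The point is simply that the chunk size becomes a function of the key length rather than a fixed 1008 bytes.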