maxSize is not correctly enforced
BTOdell opened this issue · 14 comments
The implementation of this LRU cache does not enforce the documented behavior:
> **maxSize**
>
> Required
> Type: `number`
> The maximum number of items before evicting the least recently used items.
Running this code snippet prints `true` in the console:

```js
const m = new QuickLRU({
  maxSize: 2
});
m.set("key1", "value1");
m.set("key2", "value2");
m.set("key3", "value3");
console.log(m.has("key1"));
```
The cache should only be able to store 2 items, so when a third item is added, the first key-value pair (`key1`/`value1`) should be evicted. However, as the result of executing the code shows, `key1` is still in the cache.
It's confusing, but this is expected behavior. The trick described in the readme uses an old/new cache and swaps items out of old if they're picked up before the entire old cache is thrown away.

The `has` method checks whether the key is in old OR new, so `maxSize` will end up using 2x the memory: in this case, 2 values permissible in new and 2 values permissible in old.
Example code to illustrate what I mean:

```js
const QuickLRU = require("quick-lru");

const m = new QuickLRU({
  maxSize: 2
});
// New cache has 0 values. Old cache has 0 values.
m.set("key1", "value1");
// New cache has 1 value. Old cache has 0 values.
m.set("key2", "value2");
// New cache has 2 values and becomes the old cache.
m.set("key3", "value3");
// New cache has 1 value. Old cache has 2 values.
console.log(`has value? ${m.has("key1")}`); // <-- This will be true
m.set("key4", "value4");
// New cache now has 2 values; it becomes the old cache
// and the previous old values are thrown away.
console.log(`has value? ${m.has("key1")}`); // <-- This will be false
```
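For readers who haven't clicked through to the algorithm description, here is a minimal sketch of the two-segment idea (my simplification for illustration; quick-lru's actual implementation also handles updates, deletion, `onEviction`, and more):

```js
class TwoSegmentCache {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.oldCache = new Map();
    this.cache = new Map(); // the "new" segment
  }

  set(key, value) {
    this.cache.set(key, value);
    if (this.cache.size >= this.maxSize) {
      // Swap: the new segment becomes the old one, and everything
      // previously in the old segment is discarded in one go.
      this.oldCache = this.cache;
      this.cache = new Map();
    }
  }

  has(key) {
    // A key counts as present if it's in EITHER segment, which is why
    // up to 2 * maxSize entries can be alive at once.
    return this.cache.has(key) || this.oldCache.has(key);
  }

  get(key) {
    if (this.cache.has(key)) {
      return this.cache.get(key);
    }
    if (this.oldCache.has(key)) {
      // Promote the entry back into the new segment.
      const value = this.oldCache.get(key);
      this.oldCache.delete(key);
      this.set(key, value);
      return value;
    }
    return undefined;
  }
}
```

Running the trace above against this sketch reproduces the same true/false results.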
It might be "expected behavior" if you've read the code and know that it's going to behave that way, but general users of this NPM package should only have to read the public documentation to understand what the package will do for them. If the implementation fails to produce the documented behavior, then the implementation is buggy and should be fixed.
The interaction I am describing is implied by the readme.
But another choice is to just make it clearer in the documentation rather than changing the implementation, and thereby affecting the performance characteristics that drew many of us to quick-lru in the first place.
Are we looking at the same readme? https://github.com/sindresorhus/quick-lru/blob/master/readme.md
Nowhere in the quick-lru readme does it describe the behavior you're describing.
Anyway, I don't really care if this is fixed; I implemented my own fast LRU that only uses one Map and runs in O(1) time. I left this issue open for the benefit of your 2.4 million users, in the hope that you might either fix the implementation or improve the documentation. It's up to you whether you want to close it.
Ah, I misremembered. I had clicked through to the description of the algorithm, but that's on another page linked from the readme.
I'm not the developer of quick-lru. That would be @sindresorhus.
I'm a normal user of the library, just like you. I just happened to be doing a review of our installed libraries when I noticed your issue and thought to give an answer based on what I had read months ago.
@BTOdell it would be great if you could share your Map-based LRU cache implementation. Maybe we can take inspiration from it and fix this one?
It looks like you already merged a fix for this issue. If my original code snippet runs and prints `false`, then feel free to close this issue.

As of today, on 5.1.1, the original code snippet still prints `true`.
@adityapatadia A `Map` can be iterated in insertion order according to the MDN docs. This can be exploited to evict only the least recently accessed item. I'm not sure how fragile relying on this enumeration-order promise is. Otherwise, just use a doubly linked list to keep track of age.
@aureooms I ended up using a single `Map` to implement my own LRU using the technique you described. It's fast and uses a minimal amount of memory. The iteration order is defined in the ES6 spec, so it's okay to rely on it.
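For reference, here is a minimal sketch of that single-Map technique (my own illustration, not BTOdell's actual code): on every access, delete and re-insert the key so insertion order doubles as recency order, then evict the first key when over capacity.

```js
class SingleMapLRU {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.map = new Map(); // iterates in insertion order per the ES spec
  }

  get(key) {
    if (!this.map.has(key)) {
      return undefined;
    }
    // Delete and re-insert so the key moves to the "most recent" end.
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key, value) {
    this.map.delete(key); // refresh recency if the key already exists
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // The first key in iteration order is the least recently used.
      this.map.delete(this.map.keys().next().value);
    }
  }

  has(key) {
    return this.map.has(key);
  }
}
```

With `maxSize: 2`, inserting three keys into this cache evicts the first one, so the snippet from the top of this issue would print `false`.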
@BTOdell I see three issues with the current implementation of `quick-lru`:

- Doubled memory usage to guarantee that all `maxSize` most recently used elements stay in cache.
- Harder to predict when entries get evicted from the cache.
- Hard-to-predict, bad insertion performance when using `onEviction` (see the sketch after this list).
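To make the last point concrete, here is a rough illustration, assuming the two-segment behavior described earlier in this thread (5.x-era; exact timing may vary by version):

```js
const QuickLRU = require("quick-lru");

const lru = new QuickLRU({
  maxSize: 1000,
  onEviction: (key, value) => console.log(`evicted ${key}`)
});

// Most set() calls are cheap, but the call that fills the "new" segment
// discards the entire old segment in one go, so onEviction can fire up to
// maxSize times inside a single set() call.
for (let i = 0; i < 3000; i++) {
  lru.set(`key${i}`, i);
}
```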
The current implementation has the advantage of being somewhat simple.
Do you see other issues or advantages?
Ran into the same issue with the latest 5.1.1. It took me hours to debug.
Arguably, this is a blocker, especially when an LRU cache does not enforce its size.
Version 7, four years after the latest comment, and it's still doing it :)