New storage mechanism with better concurrency characteristics
GoogleCodeExporter opened this issue · 2 comments
GoogleCodeExporter commented
The current storage relies on One Big Lock around the entire cache. This strategy caps concurrent access to the cache: a single write blocks all readers.
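To illustrate the contrast (class and method names here are hypothetical, not from the project): a cache guarded by one `synchronized` lock serializes every operation, while a `ConcurrentHashMap`-backed cache lets reads proceed without blocking and only makes writes contend per hash bin.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheSketch {
    // One Big Lock: every get and set takes the same monitor,
    // so a slow write stalls all readers.
    static class LockedCache {
        private final Map<String, byte[]> map = new HashMap<>();
        public synchronized byte[] get(String key) { return map.get(key); }
        public synchronized void set(String key, byte[] value) { map.put(key, value); }
    }

    // ConcurrentHashMap: reads never block, and writes only contend
    // with other writes to the same bin, not the whole table.
    static class ConcurrentCache {
        private final ConcurrentHashMap<String, byte[]> map = new ConcurrentHashMap<>();
        public byte[] get(String key) { return map.get(key); }
        public void set(String key, byte[] value) { map.put(key, value); }
    }

    public static void main(String[] args) {
        ConcurrentCache cache = new ConcurrentCache();
        cache.set("k", "v".getBytes());
        System.out.println(new String(cache.get("k"))); // prints "v"
    }
}
```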
There are better data structures that could be used; for example, the ConcurrentLinkedHashMap project here on Google Code. However, that code would require extensive modification to support the locked/delayed-delete functionality of memcache.
This task is quite far-reaching: it means refactoring the call chain from the protocol handler down, pushing the locking/CAS/delayed-delete support further down the stack.
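One way the CAS support could be pushed into the storage layer is to attach a cas unique to each entry and map memcache's `cas` command onto the per-key atomic `ConcurrentMap.replace(key, expected, updated)`, avoiding any cache-wide lock. This is a sketch of that idea, not the project's actual implementation; all names are hypothetical.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class CasStore {
    // Value plus the cas unique that memcache hands back to clients.
    static final class Entry {
        final byte[] data;
        final long casId;
        Entry(byte[] data, long casId) { this.data = data; this.casId = casId; }
    }

    private final ConcurrentHashMap<String, Entry> map = new ConcurrentHashMap<>();
    private final AtomicLong casCounter = new AtomicLong();

    public Entry get(String key) { return map.get(key); }

    public void set(String key, byte[] data) {
        map.put(key, new Entry(data, casCounter.incrementAndGet()));
    }

    // Succeeds only if the stored entry still carries the expected cas id.
    // replace() is atomic per key, so no global lock is needed.
    public boolean cas(String key, long expectedCasId, byte[] newData) {
        Entry current = map.get(key);
        if (current == null || current.casId != expectedCasId) return false;
        return map.replace(key, current, new Entry(newData, casCounter.incrementAndGet()));
    }
}
```

Since `Entry` does not override `equals`, `replace` compares by identity against the exact object just read, which is the intended check here.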
Original issue reported on code.google.com by ryan.daum
on 11 Sep 2009 at 3:46
GoogleCodeExporter commented
Original comment by ryan.daum
on 11 Sep 2009 at 8:12
- Changed state: Started
GoogleCodeExporter commented
I made use of an early version of the ConcurrentLinkedHashMap project, modified to support sizing of elements.
This is still being tested, but early results are promising.
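The modified ConcurrentLinkedHashMap itself is not shown in this issue, but as a rough, single-threaded illustration of what "sizing of elements" means, here is an LRU map that evicts by total byte weight rather than by entry count (all names hypothetical, built on `LinkedHashMap` in access order).

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class SizedLruCache {
    private final long maxBytes;
    private long currentBytes = 0;
    // Access-order LinkedHashMap keeps the least-recently-used entry first.
    private final LinkedHashMap<String, byte[]> map =
        new LinkedHashMap<>(16, 0.75f, true);

    public SizedLruCache(long maxBytes) { this.maxBytes = maxBytes; }

    public byte[] get(String key) { return map.get(key); }

    public void set(String key, byte[] value) {
        byte[] old = map.put(key, value);
        if (old != null) currentBytes -= old.length;
        currentBytes += value.length;
        // Evict least-recently-used entries until under the byte budget.
        Iterator<Map.Entry<String, byte[]>> it = map.entrySet().iterator();
        while (currentBytes > maxBytes && it.hasNext()) {
            Map.Entry<String, byte[]> eldest = it.next();
            currentBytes -= eldest.getValue().length;
            it.remove();
        }
    }
}
```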
Original comment by ryan.daum
on 8 Nov 2009 at 6:54
- Changed state: Fixed