Support high degree of parallelism/concurrency in Store operations
Closed this issue · 1 comment
grecinto commented
<some rough draft, plans on increasing parallelism>
Scenario: multiple client apps simultaneously submit CRUD operation requests to the same SOP Store.
Each client connection/session is allocated its own Store "wrapper" instance, so each client can do work in parallel with the other clients' Stores.
The following enhancements need to be supported:
"Single Writer, Multiple Reader pattern"
- Lock on Reads
- Lock on Writes
- Multiple Readers will each be given a Reader Store, a "wrapper" that allows parallelism in disk I/O (no locking used) and shares the same MRU cache with the other Readers of the same Store. Each MRU operation requires locking, but since the MRU is a fast, in-memory data structure, the locking latency is very small/negligible.
- The Writer Store's "update" operation will use a single, exclusive lock, as only one update operation will be allowed for the same Store, globally. Allowing multiple concurrent "read" operations is being considered for an additional boost.
The Writer Store is a thinner wrapper than the Reader's because the two have different "locking" stories.
grecinto commented
This feature is now supported; all code and automated tests/simulations have been checked in. Documentation and "publicity" to follow. :)