Optimize Cache Settings
The settings available for the cache are "total segments in cache" and "size of cache". The size of the cache is limited by what the system can provide, so two checks will need to be implemented: are we exceeding available memory at initialization, and are we exceeding available memory during runtime? The first check would be fairly simple to implement, because we already restrict the cache size at initialization. The second check requires more thought, since we are left with two options: immediately halt the process and return an error, or attempt to resize the cache according to some rule. A rough sketch of both checks is below.
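This is only a minimal sketch of the two checks, not an existing API: the `SegmentedCache` class, its method names, and the use of `psutil` to query available memory are all assumptions for illustration.

```python
import psutil


class SegmentedCache:
    def __init__(self, cache_size_bytes, num_segments):
        # Check 1: at initialization, refuse a cache larger than what the
        # system can currently provide.
        available = psutil.virtual_memory().available
        if cache_size_bytes > available:
            raise MemoryError(
                f"requested cache of {cache_size_bytes} bytes exceeds "
                f"available memory ({available} bytes)"
            )
        self.cache_size_bytes = cache_size_bytes
        self.num_segments = num_segments

    def check_runtime_memory(self, halt_on_pressure=True):
        # Check 2: at runtime, either halt with an error or attempt to
        # shrink the cache according to some resizing rule.
        available = psutil.virtual_memory().available
        if self.cache_size_bytes <= available:
            return
        if halt_on_pressure:
            raise MemoryError("cache no longer fits in available memory")
        # Example resizing rule (an assumption, not decided in this issue):
        # shrink to a fraction of what is currently available.
        self.cache_size_bytes = int(available * 0.8)
```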
Either way, this raises the question of how to handle the total number of segments in the cache. There needs to be a balance between reducing file accesses and reducing the number of objects to transfer when a cache miss occurs. A simple benchmark at initialization could automatically determine the total number of segments from runtime results on the current system, but performance may change if available memory becomes restricted at runtime. This means the number of segments is either: a) kept static, or b) reevaluated when memory runs short and we do not halt the process (see the sketch after this paragraph).
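A minimal sketch of the two segment-count policies; the policy names and the `reevaluate_segments` callback are hypothetical and only illustrate where the decision would hook into the runtime memory check.

```python
from enum import Enum


class SegmentPolicy(Enum):
    STATIC = "static"          # option (a): keep the count chosen at init
    REEVALUATE = "reevaluate"  # option (b): recompute when memory runs short


def on_memory_shortage(cache, policy, reevaluate_segments):
    # Called when the runtime check detects memory pressure and we chose not
    # to halt the process. `reevaluate_segments` is a hypothetical callback
    # that reruns the initialization-time benchmark under the reduced size.
    if policy is SegmentPolicy.REEVALUATE:
        cache.num_segments = reevaluate_segments(cache.cache_size_bytes)
    # Under SegmentPolicy.STATIC the segment count is left untouched.
```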
Evaluating the number of segments that optimizes cache performance could be done in two ways: a) sample several segment counts, time accesses to random data objects for each, and run linear regression over the results, or b) use PSO (particle swarm optimization), randomly accessing data objects and optimizing runtime. The cost of either approach needs to be evaluated with the expectation that it may run more than once per program.
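A sketch of option (a) under stated assumptions: the `build_cache` and `access_random_objects` callbacks are hypothetical, and the sketch fits a quadratic curve rather than a straight line (an assumption beyond the linear regression mentioned above) so the fitted runtime curve can express an interior minimum.

```python
import time
import numpy as np


def estimate_best_segment_count(candidate_counts, build_cache,
                                access_random_objects, n_accesses=10_000):
    runtimes = []
    for count in candidate_counts:
        cache = build_cache(num_segments=count)
        start = time.perf_counter()
        # Random accesses exercise both hits and misses for this segment count.
        access_random_objects(cache, n_accesses)
        runtimes.append(time.perf_counter() - start)

    # Fit runtime as a function of segment count and pick the candidate
    # closest to the fitted minimum.
    coeffs = np.polyfit(candidate_counts, runtimes, deg=2)
    fitted = np.polyval(coeffs, candidate_counts)
    return candidate_counts[int(np.argmin(fitted))]
```

Since this benchmark may run more than once per program (e.g. after a memory-pressure resize), keeping `candidate_counts` and `n_accesses` small is part of the cost evaluation mentioned above.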