Wrong cache eviction order in branch v2.0.0-preview
sgx_protected_fs uses an LRUCache to reduce IO cost. When pfs needs a new node, it creates one and pushes it to the front of the list. But when pfs needs to evict nodes, it calls cache.iter() to collect all dirty nodes, which walks the list from front to back, i.e. from the most recently created node to the oldest one.
In the C++ version, the file protected by pfs is backed by mmap, so writes go through the OS page cache and do not trigger real IO immediately; evicting nodes in reverse order is therefore harmless. But in the v2.0.0-preview branch, this Rust SDK rewrites pfs in Rust, and node.write_to_disk() writes data to the file directly. So when we perform sequential writes to a file, e.g. block[1,2,3,4,5], the actual write order becomes block[5,4,3,2,1]; the sequential workload turns into a reversed, effectively random write pattern, causing terrible performance.
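To make the ordering concrete, here is a minimal, self-contained sketch of the effect; the VecDeque is only a stand-in for the SDK's LRUCache, and the block numbers are illustrative:

```rust
use std::collections::VecDeque;

fn main() {
    // Stand-in for the pfs node cache: new nodes are pushed to the front.
    let mut cache: VecDeque<u32> = VecDeque::new();

    // Sequential writes create nodes for blocks 1..=5.
    for block in 1..=5 {
        cache.push_front(block);
    }

    // Front-to-back iteration (what cache.iter() does) flushes the blocks
    // in the reverse of the order they were written.
    let flush_order: Vec<u32> = cache.iter().copied().collect();
    assert_eq!(flush_order, vec![5, 4, 3, 2, 1]);

    // Reversed iteration restores the original sequential order.
    let sequential: Vec<u32> = cache.iter().rev().copied().collect();
    assert_eq!(sequential, vec![1, 2, 3, 4, 5]);
}
```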
So we should either use cache.iter().rev(), or add a rev_iter, as follows:
```rust
pub fn write_to_disk(&mut self, flush: bool) -> FsResult {
    if self.is_need_write_node() {
        // Iterate the cache in reverse, so dirty nodes are flushed in
        // the same order the blocks were written.
        for mut node in self.cache.iter().rev().filter_map(|node| {
            // ... yield only the dirty nodes ...
        }) {
            // ... node.write_to_disk() to write the node back ...
        }
    }
    // ...
}
```