Backend Latency for KVSSDs
jaeseanpark opened this issue · 4 comments
Hi. If I understood correctly, conventional and ZNS SSDs actually reflect the latency of the NAND flash cells: the NAND latency is added to "completed_time" in ssd_advance_nand(), passed to "nsecs_target" via "__insert_req_sorted", and finally recorded in an nvme_proc_table entry. However, there seems to be no such behavior for Optane SSDs or KVSSDs; the only values that get added appear to be the read/write time, delay, and trailing defined in main.c. Did I miss anything?
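Roughly, my understanding of the conventional/ZNS path is something like the sketch below (illustrative names only, not the actual nvmevirt code):

```c
/* Illustrative sketch only -- not the actual nvmevirt code.
 * The NAND cell latency is folded into the completion time,
 * which eventually becomes the request's nsecs_target. */
#include <stdint.h>

struct nand_lun {
	uint64_t next_avail_ns;	/* when this LUN becomes free */
};

static uint64_t advance_nand(struct nand_lun *lun, uint64_t now_ns,
			     uint64_t cell_lat_ns)
{
	/* the operation starts once the LUN is free */
	uint64_t start_ns = lun->next_avail_ns > now_ns ?
			    lun->next_avail_ns : now_ns;
	uint64_t completed_ns = start_ns + cell_lat_ns;

	lun->next_avail_ns = completed_ns;
	return completed_ns;	/* used as nsecs_target for the request */
}
```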
Also, for KVSSDs, the paper says:
"For KVSSD, the size of key-value pairs is small; hence its performance tends to be bound by the host-device interfacing performance and the key indexing time, rather than the performance of the storage media."
Does this mean that even with a proper backend for the KVSSD (instead of the simple kv_ftl), the performance of the KVSSD would not change?
Optane SSDs and KVSSDs use a very simple performance model that only considers the simple read/write latencies. The completion time is calculated in __schedule_io_units() in simple_ftl.c.
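Conceptually, the calculation boils down to something like the sketch below. This is a simplified illustration, not the exact __schedule_io_units() code; the unit size, parameter names, and the handling of parallel I/O units are placeholders:

```c
/* Simplified sketch of a per-request completion-time calculation
 * using only fixed read/write latencies (delay + per-unit latency
 * + trailing). Parallelism is modeled very coarsely here. */
#include <stdint.h>

#define IO_UNIT_SIZE	4096ULL

struct perf_params {
	uint64_t delay_ns;	/* fixed per-command overhead */
	uint64_t latency_ns;	/* per I/O unit (read or write) */
	uint64_t trailing_ns;	/* added after the last unit */
};

static uint64_t schedule_io(const struct perf_params *p,
			    uint64_t now_ns, uint64_t length,
			    unsigned int nr_io_units)
{
	uint64_t units = (length + IO_UNIT_SIZE - 1) / IO_UNIT_SIZE;
	/* the units are spread across the available parallel I/O units */
	uint64_t rounds = (units + nr_io_units - 1) / nr_io_units;

	return now_ns + p->delay_ns + rounds * p->latency_ns + p->trailing_ns;
}
```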
Regarding the quoted statement, I meant that for the target key-value sizes, the interface overhead outweighs the other factors, so we focused on modeling the interface overhead in simple_ftl. Of course, the performance trends would be different with a "proper" KVSSD backend.
Thank you for your reply.
In that case, in the __schedule_io_units() function, it seems that the delay, latency, and trailing are 1 ns, 1 ns, and 0 ns, respectively. How do these values relate to the KVSSD parameters in Table 2 of the paper?
You can set the parameters through the procfs entries exported at /proc/nvmev/..., or check set_perf.py in the common directory of the github.com/snu-csl/nvmevirt-evaluation project.
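As a minimal sketch of what such a script does, you could write the values from userspace as below. Note that the procfs entry names and the value format are assumptions here (check the files actually present under /proc/nvmev/ and set_perf.py for the real names and syntax), and the numbers are placeholders to be replaced with the Table 2 values:

```c
/* Hypothetical sketch: write performance parameters to the procfs
 * entries exported by nvmevirt. Entry names and value format are
 * assumed, and the values are placeholders. */
#include <stdio.h>

static int write_param(const char *path, const char *value)
{
	FILE *fp = fopen(path, "w");

	if (!fp) {
		perror(path);
		return -1;
	}
	fprintf(fp, "%s\n", value);
	fclose(fp);
	return 0;
}

int main(void)
{
	/* hypothetical entry names; example values in nanoseconds */
	write_param("/proc/nvmev/read_times", "1000");
	write_param("/proc/nvmev/write_times", "2000");
	return 0;
}
```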
Thanks again for the support. I clearly understand now.