CMU-CORGI/LHD

Slow reconfiguration


LHD/lhd.hpp

Line 92 in 806ef46

static constexpr timestamp_t ACCS_PER_RECONFIGURATION = (1 << 20);

This constant requires ~1M accesses before a reconfiguration occurs. That value is too high for some workloads, causing LHD to perform poorly. Take, for example, a synthetic loop trace from the LIRS authors. This trace has 505,500 events, which should be enough to dynamically tune towards an MRU configuration. At a cache size of 512, LHD has a 4% hit rate. This increases to 49.7% if LHD is set to reconfigure every 1024 accesses (though the simulation time grows from 2s to 26s). A real loopy workload is LIRS' glimpse, below.

Unfortunately, tuning these parameters doesn't help much on the ARC traces. DS1 and S3 have MRU/LFU characteristics due to scans and are very large. OLTP appears to be the event log, so it is fairly recency-biased. I generally use Loop+Corda+Loop, where Corda is a blockchain trace (LRU-biased), to show adaptivity. In that case, LHD didn't adapt well, which may be due to the reconfiguration setting.
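For reference, the loop segments of that workload are just a cyclic scan over a fixed key set. A minimal sketch (my own generator, not the LIRS authors' tooling): with a cache smaller than the loop, an LRU-like configuration evicts every key just before its reuse, while an MRU-like one retains a useful fraction of the loop.

```cpp
#include <cstdint>
#include <vector>

// Sketch of a synthetic loop trace: cycle over `universe` distinct
// keys until `events` accesses have been emitted. In my runs the
// appId and size of each event are fixed to 1 elsewhere.
std::vector<uint64_t> makeLoopTrace(uint64_t universe, uint64_t events) {
    std::vector<uint64_t> trace;
    trace.reserve(events);
    for (uint64_t i = 0; i < events; ++i) {
        trace.push_back(i % universe);  // key id wraps around the loop
    }
    return trace;
}
```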

These results were produced by using Caffeine's Java simulator to rewrite the events into your binary format, then running them through LHD's simulator. All events have the appId and size set to 1. I can try variable-sized traces if you are interested. I have not observed a trace where LHD has the highest hit rate, and only a few where it matches the best policy after manual tuning of the parameters. In general, LHD seems to be on par with ARC, but with higher runtime overhead.

(Hit-rate charts attached: glimpse, ds1, s3, oltp)