Inaccurate time measurement in test.h
cppascalinux commented
In test.h, we use the following macros to measure the update and query time for each sketch:
#define DEFINE_TIMERS \
auto timer = std::chrono::microseconds::zero(); \
auto tick = std::chrono::steady_clock::now(); \
auto tock = std::chrono::steady_clock::now();
#define START_TIMER tick = std::chrono::steady_clock::now();
#define STOP_TIMER \
tock = std::chrono::steady_clock::now(); \
timer += std::chrono::duration_cast<std::chrono::microseconds>(tock - tick);
#define TIMER_RESULT static_cast<int64_t>(timer.count())
Here the timer is measured in microseconds. However, in testUpdate, we measure the time for each update separately:
DEFINE_TIMERS;
for (auto ptr = begin; ptr != end; ptr++) {
START_TIMER;
ptr_sketch->update(ptr->flowkey,
cnt_method == Data::InLength ? ptr->length : 1);
STOP_TIMER;
}
if (metric_vec.in(Metric::RATE))
update[Metric::RATE] = 1.0 * (end - begin) / TIMER_RESULT * 1e6;
For many sketch algorithms (e.g. CMSketch), a single update operation takes far less than 1us (~200ns on my machine), so duration_cast truncates each per-update duration to 0us and the accumulated timer stays near zero. This causes significant inaccuracy in the time measurement. The same problem exists in testInsert, testQuery, etc.
Changing microseconds to nanoseconds in these macros would be a solution.
Also note that std::chrono::steady_clock measures wall-clock time rather than CPU time. We might want to switch to std::clock(), which measures CPU time and produces a more accurate result when the process is preempted.