oxidecomputer/opte

Benchmark userland/kernel performance


We need a consistent (and easily runnable) set of benchmarks so that we can accurately characterise various aspects of OPTE's performance.

  • Userland microbenchmarks [criterion] (see the sketch after this list)
    • Packet parsing
      • Per-protocol
      • Scaling with options length, possibly.
    • Hairpin packet generation time for various choices of VpcCfg
    • Slow/fast-path classification, header transform (ht) execution
  • Kernel/XDE [dtrace-driven]
    • VM-VM workloads
    • VM-external workloads

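For the userland side, a criterion harness could look roughly like the minimal sketch below, assuming criterion is added as a dev-dependency. The `parse_packet` helper, the benchmark name, and the zeroed 1514-byte frame are placeholders to keep the sketch self-contained; the real benchmark would call OPTE's actual parsing entry point on representative per-protocol packets.

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Placeholder parse routine. In the real benchmark this would call into
// OPTE's packet-parsing code; it exists only so the sketch compiles and runs.
fn parse_packet(bytes: &[u8]) -> usize {
    bytes.iter().map(|b| usize::from(*b)).sum()
}

fn packet_parsing(c: &mut Criterion) {
    // Stand-in for a real TCP/IPv4 test frame; per-protocol and
    // options-length variants would build different frames here.
    let frame = vec![0u8; 1514];

    c.bench_function("parse/tcp-ipv4", |b| {
        b.iter(|| parse_packet(black_box(&frame)))
    });
}

criterion_group!(benches, packet_parsing);
criterion_main!(benches);
```

Running `cargo bench` from the owning crate would produce the criterion reports; additional cases (per-protocol parsing, hairpin generation, classification) would just be further `bench_function` entries or a parameterised group.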
Naturally, userland benchmarks should be runnable on as many host OSes as possible, much like our current integration tests. These aren't necessarily meant to be definitive, but should give us some more directed results. In the kernel case we should aim to use falcon at first, before moving toward simulated and real sleds.

I'm currently noodling on some of these under the instrument branch.