The Benchmark working group's purpose is to gain consensus on a set of benchmarks that can be used to:
- Track and evangelize performance gains made between Node.js releases
- Avoid performance regressions between releases
Its responsibilities are:
- Identify one or more benchmarks that reflect customer usage. More than one will likely be needed to cover typical Node.js use cases, including low latency and high concurrency
- Work to get community consensus on the chosen list
- Add regular execution of the chosen benchmarks to Node.js builds
- Track and publicize performance between builds and releases (a minimal tracking sketch follows this list)
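As a minimal sketch of what per-build tracking could look like, a benchmark script can record its result keyed by the Node.js version that ran it, so successive builds can be compared. The workload, output file name, and CSV layout here are illustrative assumptions, not the group's chosen setup:

```js
'use strict';
const fs = require('fs');

function workload() {
  // Stand-in workload (JSON round-trip), chosen only for illustration.
  const obj = { a: 1, b: [1, 2, 3], c: 'hello' };
  for (let i = 0; i < 1e5; i++) JSON.parse(JSON.stringify(obj));
}

const start = process.hrtime.bigint();
workload();
const ms = Number(process.hrtime.bigint() - start) / 1e6;

// One row per run: Node.js version, timestamp, elapsed milliseconds.
fs.appendFileSync('results.csv', `${process.version},${Date.now()},${ms.toFixed(2)}\n`);
console.log(`${process.version}: ${ms.toFixed(2)} ms`);
```

Running the same script under each build being compared appends comparable rows to the same file.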
The path forward is to:
- Define the important use cases
- Define the key runtime attributes
- Find or create benchmarks that provide good coverage of those use cases and attributes (current table); a sketch of one such benchmark follows this list
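To make the attributes concrete, here is a hedged sketch of a micro-benchmark exercising the low-latency and high-concurrency attributes named above. The request count, concurrency level, and reported percentile are illustrative assumptions, not one of the group's chosen benchmarks:

```js
'use strict';
const http = require('http');

const REQUESTS = 1000;   // assumed total workload size
const CONCURRENCY = 50;  // assumed number of parallel in-flight requests

const server = http.createServer((req, res) => res.end('hello'));

server.listen(0, () => {
  const { port } = server.address();
  const latencies = [];
  let sent = 0, done = 0, inFlight = 0;

  function fire() {
    // Keep CONCURRENCY requests in flight until REQUESTS have been sent.
    while (inFlight < CONCURRENCY && sent < REQUESTS) {
      sent++;
      inFlight++;
      const start = process.hrtime.bigint();
      // agent: false forces one connection per request so the server
      // can close cleanly once all responses have been received.
      http.get({ port, path: '/', agent: false }, (res) => {
        res.resume();
        res.on('end', () => {
          latencies.push(Number(process.hrtime.bigint() - start) / 1e6);
          inFlight--;
          if (++done === REQUESTS) report();
          else fire();
        });
      });
    }
  }

  function report() {
    latencies.sort((a, b) => a - b);
    const mean = latencies.reduce((a, b) => a + b, 0) / latencies.length;
    const p99 = latencies[Math.floor(latencies.length * 0.99)];
    console.log(`mean ${mean.toFixed(3)} ms, p99 ${p99.toFixed(3)} ms`);
    server.close();
  }

  fire();
});
```

Reporting a tail percentile alongside the mean is what ties this to the low-latency attribute; the concurrency cap ties it to high concurrency.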
See here for information about the infrastructure in place so far: https://github.com/nodejs/benchmarking/blob/master/benchmarks/README.md
Current members:
- Michael Dawson (@mhdawson) - Facilitator
- Trevor Norris (@trevnorris)
- Ali Sheikh (@ofrobots)
- Yosuke Furukawa (@yosuke-furukawa)
- Yunong Xiao (@yunong)
- Mark Leitch (@m-leitch)
- Surya V Duggirala (@suryadu)
- Uttam Pawar (@uttampawar)
- Michael Paulson (@michaelbpaulson)
- Gareth Ellis (@gareth-ellis)
- Wayne Andrews (@CurryKitten)
- Kyle Farnung (@kfarnung)
- Kunal Pathak (@kunalspathak)
- Benedikt Meurer (@bmeurer)
- Sathvik Laxminarayan (@sathvikl)