The project aims to build a repository of systems that implement effect handlers, benchmarks implemented in those systems, and scripts to build the systems, run the benchmarks, and produce the results. A system may either be a programming language that has native support for effect handlers, or a library that embeds effect handlers in another programming language.
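
For readers who have not seen the feature before, the following is a minimal sketch of what native support for effect handlers looks like, written in OCaml 5. It is purely illustrative and not part of the repository; the effect name `Ask` and the handler are invented for the example.

```ocaml
(* Minimal illustration of native effect handlers in OCaml 5.
   The effect [Ask] is invented for this example; it is not part of the repository. *)
open Effect
open Effect.Deep

type _ Effect.t += Ask : int Effect.t

(* Run [f], answering every [Ask] it performs with 42 and resuming it. *)
let with_answer (f : unit -> int) : int =
  try_with f ()
    { effc = (fun (type a) (eff : a Effect.t) ->
        match eff with
        | Ask -> Some (fun (k : (a, _) continuation) -> continue k 42)
        | _ -> None) }

let () = Printf.printf "%d\n" (with_answer (fun () -> perform Ask + 1)) (* prints 43 *)
```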
Ensure that Docker is installed on your system. Then,

    $ make bench_ocaml

runs the Multicore OCaml benchmarks and produces `_results/ocaml.csv`, which contains the results.
| Benchmark | Eff | Handlers in Action | Koka | libhandler | libmpeff | Links | Multicore OCaml | Effekt |
|---|---|---|---|---|---|---|---|---|
| **N-queens**<br>Counts the number of solutions to the N queens problem for board size N x N | ❌ | ❌ | ❌ | | | | | |
| **Generator**<br>Counts the sum of elements in a complete binary tree using a generator | ❌ | ❌ | ❌ | ❌ | | | | |
| **Tree explore**<br>Nondeterministically explores a complete binary tree with additional state | ❌ | ❌ | ❌ | ❌ | ❌ | | | |
| **Triples**<br>Nondeterministically calculates triples that sum up to a specified number | ❌ | ❌ | ❌ | ❌ | ❌ | | | |
| **Simple counter**<br>Repeatedly applies an operation in a non-tail position | ❌ | ❌ | ❌ | ❌ | ❌ | | | |
Legend:
- ✅ : Benchmark is implemented
- ❌ : Benchmark is not implemented
- ➖ : Benchmark is unsuitable for this system, so there is no sense in implementing it (e.g. benchmarking the speed of file transfer in a language that does not support networking)
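
To give a flavour of what the benchmarks involve, here is a rough sketch of the Generator benchmark from the table above, in OCaml 5: an effect is used to yield each element of a complete binary tree to a handler that accumulates the sum. This is only an illustration written for this README under assumed names (`Yield`, `make_tree`, a default depth of 20); the actual sources live under `benchmarks/<system_name>/`.

```ocaml
(* Sketch of the Generator benchmark: sum the elements of a complete
   binary tree by yielding them through an effect. Illustrative only;
   not the repository's actual source. *)
open Effect
open Effect.Deep

type tree = Leaf | Node of tree * int * tree

type _ Effect.t += Yield : int -> unit Effect.t

(* A complete binary tree of the given depth, with 1 stored at every node. *)
let rec make_tree depth =
  if depth = 0 then Leaf
  else Node (make_tree (depth - 1), 1, make_tree (depth - 1))

(* The generator: walk the tree and yield every element. *)
let rec iter = function
  | Leaf -> ()
  | Node (l, v, r) -> iter l; perform (Yield v); iter r

(* The consumer: handle each [Yield] by adding the element to a running
   sum and resuming the generator. *)
let sum tree =
  let total = ref 0 in
  try_with iter tree
    { effc = (fun (type a) (eff : a Effect.t) ->
        match eff with
        | Yield v ->
            Some (fun (k : (a, _) continuation) ->
              total := !total + v;
              continue k ())
        | _ -> None) };
  !total

let () =
  (* The default depth of 20 is an assumption made for this sketch. *)
  let depth = try int_of_string Sys.argv.(1) with _ -> 20 in
  Printf.printf "%d\n" (sum (make_tree depth))
```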
The repository is organised as follows:

- `systems/<system_name>/Dockerfile` is the Dockerfile used to build the system.
- `benchmarks/<system_name>/NNN_<benchmark_name>/` contains the source of the benchmark `<benchmark_name>` for the system `<system_name>`.
- `benchmark_descriptions/NNN_<benchmark_name>/` contains the description of the benchmark, its inputs and outputs, and any special considerations.
- `Makefile` is used to build the systems and benchmarks, and to run the benchmarks. For each system, the Makefile has the following rules:
  - `sys_<system_name>`: builds the `<system_name>` Docker image.
  - `bench_<system_name>`: runs the benchmarks using the Docker image for `<system_name>`.
- `LABELS.md` contains a list of available benchmark labels. Each benchmark can be assigned multiple labels.
The role of the benchmarking chairs is to curate the repository, monitor the quality of benchmarks, and solicit new benchmarks and fixes to existing benchmarks. Each benchmarking chair serves two consecutive terms; each term is 6 months.
The current co-chairs are
- Philipp Schuster (2022/09/21 - 2023/03/21 - 2023/09/21)
- Filip Koprivec (2022/01/21 - 2022/07/22 - 2023/03/21)
Past co-chairs:
- Daniel Hillerström (Inaugural chair, 2021/07/23 - 2022/01/22 - 2022/09/20)
If you wish to add a new benchmark `goat_benchmark` for system `awesome_system`:

- Pick the next serial number `NNN` for the benchmark.
- Add the benchmark sources under `benchmarks/<awesome_system>/NNN_<goat_benchmark>`, using the template provided in `benchmark_descriptions/000_template/` (see the sketch after this list for what a minimal entry point might look like).
- Update the `Makefile` to build and run the benchmark.
- Add a benchmark description under `benchmark_descriptions/NNN_<goat_benchmark>/readme.md`, clearly stating the input, the output, and what is expected from the benchmark. Make sure you mention the default input argument for the benchmark, and add the benchmark inputs and outputs (with their default values) to the input/output files.
- Update this `README.md` file to add the new benchmark to the table of benchmarks and to the benchmark availability table.
- Add the benchmark to the CI testing script.
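
As an example of the shape such a contribution might take (the sketch referred to in the list above), here is a hypothetical OCaml entry point for a new benchmark. The file path, the default input value, and the placeholder computation are all assumptions made for illustration; take the real ones from the benchmark description.

```ocaml
(* Hypothetical skeleton for benchmarks/<awesome_system>/NNN_<goat_benchmark>/main.ml.
   The default input (10) and the computation are placeholders: the real default
   and expected output belong in benchmark_descriptions/NNN_<goat_benchmark>/readme.md. *)
let default_input = 10

(* Placeholder for the actual benchmark computation. *)
let run n = n * n

let () =
  (* Accept the input size on the command line, falling back to the default
     documented in the benchmark description. *)
  let n =
    if Array.length Sys.argv > 1 then int_of_string Sys.argv.(1)
    else default_input
  in
  (* Print the result so it can be checked against the expected output. *)
  Printf.printf "%d\n" (run n)
```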
If you wish to add a benchmark `leet_benchmark` that is not yet available for a system `awesome_system` but is available for another system:

- Use the same serial number `NNN` for the benchmark that is used by the existing system.
- Add the benchmark sources under `benchmarks/<awesome_system>/NNN_<leet_benchmark>`.
- Update the `Makefile` to build and run the benchmark, using the same parameters as suggested in the benchmark description.
If you wish to contribute a system `awesome_system`, please:

- Add a new Dockerfile at `systems/<awesome_system>/Dockerfile`.
- Add a new workflow under `.github/workflows/system_<awesome_system>.yml`.
- Create a status badge for the new workflow and add it to the availability table in lexicographic order.
- Update the top-level `Makefile` with commands that build the system and run the benchmarks (if applicable).
Ideally, you will also add benchmarks to go with the new system and update the benchmark availability table.
Having a dockerfile aids reproducibility and ensures that we can build the system from scratch natively on a machine if needed. The benchmarking chair will push the image to Docker Hub so that systems are easily available for wider use.
We use Ubuntu 20.04 as the base image for building the systems and hyperfine to run the benchmarks.
We curate software artifacts from papers related to effect handlers. If you wish to contribute your artifacts, then please place your artifact as-is under a suitable directory in `artifacts/`.
There is no review process for artifacts (other than that they must be related to work on effect handlers). Whilst we do not enforce any standards on artifacts, we do recommend that artifacts conform with the artifacts evaluation packaging guidelines used by various programming language conferences.