RamiAwar/fastabm

Create a manually triggered performance benchmark pipeline to compare FastABM against other ABM libraries

Description

We need to automate benchmarking, result aggregation, and report generation.

This can be done by creating several Python scripts that generate timing CSVs. The scripts would execute the same model written in FastABM as well as in other ABM libraries, giving us a reference benchmark. A sketch of such a timing harness is shown below.
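
A minimal sketch of one such script, assuming we time each model run with `time.perf_counter` and append rows to a CSV. The names `time_run`, `write_timings`, and the placeholder `run_model` are illustrative, not part of any library's API; the real scripts would swap in the actual FastABM and competitor model setups.

```python
# Sketch of a benchmark timing script. The model under test is a placeholder;
# each real script would construct the same model in FastABM or another library.
import csv
import time
from pathlib import Path


def time_run(run_model, steps: int, repeats: int = 5) -> list[float]:
    """Run the model `repeats` times and return wall-clock timings in seconds."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        run_model(steps)
        timings.append(time.perf_counter() - start)
    return timings


def write_timings(library: str, steps: int, timings: list[float], out: Path) -> None:
    """Append one row per run to a CSV: library, steps, run index, seconds."""
    new_file = not out.exists()
    with out.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["library", "steps", "run", "seconds"])
        for i, t in enumerate(timings):
            writer.writerow([library, steps, i, f"{t:.6f}"])


if __name__ == "__main__":
    def run_model(steps: int) -> None:
        # Placeholder workload; replace with the actual model's step loop.
        total = 0
        for _ in range(steps):
            total += 1

    timings = time_run(run_model, steps=100_000)
    write_timings("fastabm", 100_000, timings, Path("timings_fastabm.csv"))
```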

Once those are done, another script would parse the CSVs, compare the timings, and generate a Markdown timings report that we can display on GitHub Pages. A sketch of that step follows.
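
A rough sketch of the aggregation step, assuming the CSV layout from the harness above (columns `library`, `steps`, `run`, `seconds`) and a `timings_*.csv` naming scheme; both are my assumptions, not decided yet.

```python
# Sketch of the report generator: collect per-library timings from CSVs and
# render a Markdown table, fastest library first.
import csv
from collections import defaultdict
from pathlib import Path
from statistics import mean


def load_timings(csv_dir: Path) -> dict[str, list[float]]:
    """Collect per-library timings from every timings_*.csv in a directory."""
    timings: dict[str, list[float]] = defaultdict(list)
    for path in csv_dir.glob("timings_*.csv"):
        with path.open(newline="") as f:
            for row in csv.DictReader(f):
                timings[row["library"]].append(float(row["seconds"]))
    return timings


def render_markdown(timings: dict[str, list[float]]) -> str:
    """Render a Markdown table of mean timings, sorted fastest first."""
    lines = ["| Library | Runs | Mean seconds |", "| --- | --- | --- |"]
    for lib, ts in sorted(timings.items(), key=lambda kv: mean(kv[1])):
        lines.append(f"| {lib} | {len(ts)} | {mean(ts):.4f} |")
    return "\n".join(lines) + "\n"


if __name__ == "__main__":
    report = render_markdown(load_timings(Path(".")))
    Path("timings_report.md").write_text(report)
```

The resulting `timings_report.md` could then be committed to the branch that GitHub Pages serves, so each manual pipeline run refreshes the published comparison.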