GreenBench: A Greener Fuzzer Benchmarking Platform

This repository contains the implementation of GreenBench, a greener benchmarking platform built on FuzzBench. GreenBench drastically increases the number of benchmarks while drastically decreasing the duration of fuzzing campaigns. In our evaluation, the fuzzer rankings generated by GreenBench are almost as accurate as those generated by FuzzBench (very high correlation), while GreenBench is up to 61 times faster (see our paper for details).

Paper

@inproceedings{ounjai2023green,
 author = {Ounjai, Jiradet and Christakis, Maria and W{\"u}stholz, Valentin},
 title = {Green Fuzzer Benchmarking},
 booktitle = {Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis},
 series = {ISSTA},
 year = {2023}
}

Implementation

We implemented GreenBench by extending FuzzBench (commit e816b71). You can see the changes here.

Prerequisites

To run GreenBench experiments, you must provide a custom seed corpus for your benchmarks. Alternatively, you can download our large seed corpus for the benchmarks we used in our evaluation.
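The layout below is a sketch of how such a corpus directory can be organized, assuming FuzzBench's custom-seed-corpus convention of one subdirectory per benchmark (named exactly like the benchmark) holding that benchmark's seed files; the file names are illustrative:

```shell
# One subdirectory per benchmark, named after the benchmark.
mkdir -p custom_corpus/freetype2-2017 custom_corpus/bloaty_fuzz_target

# Place your seed inputs inside the per-benchmark directories, e.g.:
printf 'seed-input' > custom_corpus/freetype2-2017/seed_0
printf 'seed-input' > custom_corpus/bloaty_fuzz_target/seed_0
```

The top-level directory (custom_corpus here) is what you later pass via --custom-seed-corpus-dir.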

Running Experiments

For a general introduction to running FuzzBench experiments, we suggest reviewing the FuzzBench documentation.

To start a GreenBench experiment, you need to use additional parameters (snapshot_period and target_fuzzing) in the FuzzBench experiment config file:

experiment-config.yaml

docker_registry: gcr.io/fuzzbench
experiment_filestore: /tmp/experiment-data
report_filestore: /tmp/web-reports
local_experiment: true
max_total_time: 900     # use a time limit of 15 minutes for each campaign
trials: 100             # use 100 campaigns per benchmark program
target_fuzzing: true    # use target edges as the performance measure
snapshot_period: 60     # make it measure coverage every minute
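Before launching a long experiment, it can be worth sanity-checking the GreenBench-specific fields. The helper below is a minimal sketch (not part of GreenBench itself); the key names come from the config above, and it expects the dict you get from loading the YAML file:

```python
def check_greenbench_config(config):
    """Sanity-check the GreenBench-specific fields of a parsed experiment
    config (a dict loaded from the YAML file). Returns the number of
    coverage snapshots taken per campaign."""
    for key in ("snapshot_period", "target_fuzzing", "max_total_time", "trials"):
        if key not in config:
            raise ValueError(f"missing config key: {key}")
    if not isinstance(config["target_fuzzing"], bool):
        raise ValueError("target_fuzzing must be a boolean")
    # Both times are in seconds; a snapshot period longer than the
    # campaign would mean no coverage measurement is ever taken.
    if config["snapshot_period"] > config["max_total_time"]:
        raise ValueError("snapshot_period exceeds max_total_time")
    return config["max_total_time"] // config["snapshot_period"]
```

With the example config above (900-second campaigns, 60-second snapshot period), this reports 15 coverage measurements per campaign.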

Then start the experiment with the regular FuzzBench command, adding the extra --custom-seed-corpus-dir flag to point at the directory where the custom seed corpus is stored:

PYTHONPATH=. python3 experiment/run_experiment.py \
--experiment-config experiment-config.yaml \
--benchmarks freetype2-2017 bloaty_fuzz_target \
--experiment-name $EXPERIMENT_NAME \
--fuzzers afl libfuzzer \
--allow-uncommitted-changes \
--custom-seed-corpus-dir $PATH_TO_CUSTOM_CORPUS

Generating a GreenBench report

To see the results of an experiment, run the following command to generate a GreenBench report. The script requires the path of the FuzzBench local database file (local.db), which you can find in the experiment_filestore directory set in the experiment configuration.

PYTHONPATH=. python3 analysis/greenbench_report.py $PATH_TO_FUZZBENCH_LOCAL_DB

The script will then show the fuzzer ranking table on standard output and generate a simple web-based report in the generated report directory.
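If you want to inspect the raw data behind the report yourself, local.db is an SQLite file you can query directly. The snippet below is an illustrative sketch; the table and column names (trial.fuzzer, trial.benchmark, snapshot.trial_id, snapshot.time, snapshot.edges_covered) are assumptions based on FuzzBench's database schema, so check your local.db with sqlite3's .schema command if they differ:

```python
import sqlite3

def final_coverage_per_fuzzer(db_path):
    """Return {(fuzzer, benchmark): average edges covered at each
    trial's last coverage snapshot}."""
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            """
            SELECT t.fuzzer, t.benchmark, AVG(s.edges_covered)
            FROM snapshot s JOIN trial t ON s.trial_id = t.id
            WHERE s.time = (SELECT MAX(time) FROM snapshot
                            WHERE trial_id = s.trial_id)
            GROUP BY t.fuzzer, t.benchmark
            """).fetchall()
    finally:
        con.close()
    return {(fuzzer, benchmark): avg for fuzzer, benchmark, avg in rows}
```

This averages, per fuzzer and benchmark, the edge coverage reached at the final snapshot of every campaign, which is a rough proxy for the numbers the report script aggregates.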