If you want to add new fuzz targets, remove --depth 1 from the Dockerfile of each new target project.
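As a minimal sketch, this edit can be scripted with sed. The project path, repository URL, and Dockerfile contents below are illustrative placeholders, not real targets:

```shell
# Sketch: strip the shallow-clone flag from a target project's Dockerfile
# so the full git history is cloned. projects/example/Dockerfile and the
# repository URL are hypothetical placeholders.
mkdir -p projects/example
printf 'RUN git clone --depth 1 https://github.com/example/project.git project\n' \
  > projects/example/Dockerfile
sed -i 's/ --depth 1//' projects/example/Dockerfile
cat projects/example/Dockerfile
```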
Regressed:
Non-regressed:
- unicorn_fuzz_emu_arm_armbe
- muparser_set_eval_fuzzer
- ndpi_fuzz_process_packet
- harfbuzz_hb-shape-fuzzer
- ndpi_fuzz_ndpi_reader
- htslib_hts_open_fuzzer
Others:
- libcbor_cbor_load_fuzzer
- keystone_fuzz_asm_arm_armv8be
- keystone_fuzz_asm_mips
- keystone_fuzz_asm_systemz
- keystone_fuzz_asm_ppc64be
- Docker
- python >= 3.8.0
make install-dependencies
source .venv/bin/activate
python3 -m pip install --upgrade pip
python3 -m pip install -r requirements.txt
make install-dependencies
deactivate
oss-fuzz is a git submodule. To keep it up to date:
git pull
git submodule update --init
git submodule update --remote --merge
sudo make presubmit
- afl
- aflchurn
example:
sudo make run-aflchurn-file_magic_fuzzer
For more details, see the FuzzBench guide.
CAUTION: This will remove all images created by FuzzBench. Skip this step if you want to keep them.
sudo docker rm $(sudo docker ps -qa --no-trunc --filter "status=exited")
sudo docker rmi -f $(sudo docker images | grep -e gcr -e none | sed 's/ */ /g' | cut -d" " -f3 | sort | uniq)
sudo docker builder prune
Change crash_plotdata_filestore accordingly in experiment-config.yaml. crash_plotdata_filestore is the folder that contains the experiment results:
- crash test cases: e.g., $crash_plotdata_filestore/openssl_x509-afl/trial-753301/corpus/crashes/id*
- plot_data: e.g., $crash_plotdata_filestore/openssl_x509-afl/trial-753301/corpus/plot_data
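As a sketch of how to enumerate crash inputs under this layout, the snippet below creates a dummy result tree and lists it; the filestore path, benchmark name, and trial id are placeholder values:

```shell
# Sketch: enumerate crash inputs under the filestore layout described above.
# ./results, openssl_x509-afl, and trial-753301 are placeholder values.
crash_plotdata_filestore=./results
mkdir -p "$crash_plotdata_filestore/openssl_x509-afl/trial-753301/corpus/crashes"
touch "$crash_plotdata_filestore/openssl_x509-afl/trial-753301/corpus/crashes/id:000000"
find "$crash_plotdata_filestore" -path '*/corpus/crashes/id*' -type f
```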
If you change crash_plotdata_filestore in experiment-config.yaml, remove docker/generated.mk so that the change takes effect. docker/generated.mk will be auto-generated again after it is removed.
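A minimal sketch of that step (rm -f is safe even if the file has already been removed):

```shell
# After editing experiment-config.yaml, delete the generated makefile so it
# is rebuilt with the new crash_plotdata_filestore value on the next make.
rm -f docker/generated.mk
```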
Get bug logs
sudo make churn-debug-afl_debug-[subject]
Calculate time to bug
./time2bug.sh
Currently, aflgo can only run on a few subjects:
- libgit2_objects_fuzzer
- libhtp_fuzz_htp
example:
sudo make run-aflgo-libgit2_objects_fuzzer
FuzzBench is a free service that evaluates fuzzers on a wide variety of real-world benchmarks, at Google scale. The goal of FuzzBench is to make it painless to rigorously evaluate fuzzing research and make fuzzing research easier for the community to adopt. We invite members of the research community to contribute their fuzzers and give us feedback on improving our evaluation techniques.
FuzzBench provides:
- An easy API for integrating fuzzers.
- Benchmarks from real-world projects. FuzzBench can use any OSS-Fuzz project as a benchmark.
- A reporting library that produces reports with graphs and statistical tests to help you understand the significance of results.
To participate, submit your fuzzer to run on the FuzzBench platform by following our simple guide. After your integration is accepted, we will run a large-scale experiment using your fuzzer and generate a report comparing your fuzzer to others. See a sample report.
You can view our sample report here and our periodically generated reports here. The sample report is generated using 10 fuzzers against 24 real-world benchmarks, with 20 trials each and over a duration of 24 hours. The raw data in compressed CSV format can be found at the end of the report.
When analyzing reports, we recommend:
- Checking the strengths and weaknesses of a fuzzer against various benchmarks.
- Looking at aggregate results to understand the overall significance of the result.
Please provide feedback on any inaccuracies and potential improvements (such as integration changes, new benchmarks, etc.) by opening a GitHub issue here.
Read our detailed documentation to learn how to use FuzzBench.
Join our mailing list for discussions and announcements, or send us a private email at fuzzbench@google.com.