
Binsec/Rel Benchmark

This repository contains the benchmarks used for the experimental evaluation of the Binsec/Rel tool.

This README details the scalability experiments for constant-time analysis. For results on the preservation of constant-time and secret-erasure under multiple compiler setups, see the studies on constant-time preservation by compilers and secret-erasure preservation by compilers in ./properties_vs_compilers.

Prerequisites

Set up a Python virtual environment to run the Python scripts:

python3 -m venv ./venv
source venv/bin/activate
pip install -r requirements.txt

Files

  • Programs to analyze are located in ./src/[program_name].
  • The setup for individual programs can be found in experiments.py.
  • The script to run experiments is run.py.
  • The Makefile contains presets to run Binsec/Rel on programs in ./src with different experimental setups.
  • The folder ./properties_vs_compilers contains our experimental study on constant-time and secret-erasure preservation under multiple compiler setups.

Run run.py --help for help.

Running Binsec/Rel experiments

First, make sure that Binsec/Rel is in your PATH.
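
As a quick sanity check (a minimal sketch; the binary name binsec and the install path are assumptions, adjust them to your build):

# binary name and install path are assumptions; adjust to your build
command -v binsec || export PATH="$PATH:/path/to/binsec/bin"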

Running the tests

Toy examples to test the tool can be found in src/toys. To run Binsec/Rel on these toy examples, just run make tests.

This will run experiments for the toy examples and check that the result of the analysis is correct.
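
For example, from a fresh shell (activating the virtual environment first is an assumption, only needed if the Makefile targets invoke the Python scripts):

source venv/bin/activate   # see Prerequisites above
make tests                 # run and check the toy examples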

Presets defined in the Makefile

This section describes how to quickly run Binsec/Rel on a set of cryptographic primitives. More details on how to use the script run.py can be found in the next section.

Running experiments with the best options

Run the command corresponding to the file you want to analyze:

Program                       Command
ct-select                     make ct_select
ct-sort                       make ct_sort
Hacl*_utility                 make utility_hacl
OpenSSL_utility               make utility_openssl
tea                           make tea
donna                         make donna
libsodium_salsa20             make nacl_salsa20
libsodium_chacha20            make nacl_chacha20
libsodium_sha256              make nacl_sha256
libsodium_sha512              make nacl_sha512
Hacl*_chacha20                make hacl_chacha20
Hacl*_curve25519              make hacl_curve25519
Hacl*_sha256                  make hacl_sha256
Hacl*_sha512                  make hacl_sha512
Bearssl_aes_ct                make aes_ct
Bearssl_des_ct                make des_ct
Bearssl_aes_big               make aes_big
Bearssl_des_tab               make des_tab
OpenSSL_tls-remove-padding    make openssl_almeida

Results can be found in ./__results__/[program_name]_[date].csv.
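
For example, to analyze the tea implementation and locate its results (the exact name of the result file is an assumption based on the pattern above):

make tea
ls ./__results__/   # e.g. tea_[date].csv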

Scalability

To run all scalability experiments for a program, run make scale_[program_name].

Results can be found in ./__results__/[program_name]_[date].csv.
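
For instance, for the tea program (assuming the scale_* targets follow the program names from the table above):

make scale_tea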

Using run.py to run other experiments

Using the script run.py:

./run.py -d [dataset] -o [output_file] -e [experiment_to_run]

where [dataset] is the dataset you want to analyze, as defined in experiments.py (e.g. ct-sort, tea_0). A concrete example follows the list of experiments below.

Experiments to run (values for -e):

  • sc: self-composition
  • relse: standard relational symbolic execution
  • relse_unt: standard relse + untainting
  • relse_unt_fp: standard relse + untainting + fault-packing
  • relse_flyrow: relse + flyrow
  • relse_flyrow_unt: relse + flyrow + untainting
  • binsecrel: relse + flyrow + untainting + fault-packing
  • se: standard symbolic execution
  • se_postrow: standard symbolic execution + read-over-write (not on-the-fly, but as a formula pre-processing)
  • se_flyrow: standard symbolic execution + flyrow
  • relse_postrow: standard relse + read-over-write (not on-the-fly, but as a formula pre-processing)
  • insecure: get a verbose counterexample for insecure experiments
  • best: best setup (same as binsecrel)
  • test: output SMT formulas in /tmp/SMTDIR for debugging
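
A minimal end-to-end sketch, assuming the virtual environment is active and that -o takes the path of the output CSV file (an assumption based on the results locations above); the dataset ct-sort and the experiment best are taken from the lists above:

./run.py -d ct-sort -o ./__results__/ct-sort.csv -e best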

Generate LaTeX tables

Run python3 stats.py -bv -bf -sc -sse to generate the LaTeX tables presented in the paper.