Infer-no

Benchmark suite for Infer Quandary

Structure

  • /testcode test code used to build the confusion matrix, taken directly from the OWASP Benchmark
  • /csv/actual the benchmark results, exported as csv files directly inside this folder and ready to be used to generate the confusion matrix
  • /csv/expected expected benchmark results, generated by OWASP

Setup

First of all, you must install Infer.

To install the project, simply run python setup.py install

To install only the dependencies, run pip install -r requirements.txt

Run the benchmark

You can run Infer Quandary against all the tests with the following command: python run_tests.py

Print the results

If you want to generate the confusion matrix and print it to your terminal, use:

python confusion_builder.py

For each .csv file inside csv/actual, the script will print:

  • The confusion matrix
  • The top 3 False Positive/False Negative misclassification errors by vulnerability category.
    • The relative incorrect classification % is the misclassification error percentage with respect to the total number of tests for that vulnerability type.
    • The absolute incorrect classification % is the misclassification error percentage with respect to the total number of False Positives and False Negatives.
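The counting behind such a confusion matrix can be sketched as follows (the data structures here are illustrative assumptions, not the actual code of confusion_builder.py):

```python
from collections import Counter

def confusion_matrix(expected, actual):
    """Compare expected labels with tool verdicts.

    expected/actual: dicts mapping test name -> (category, is_vulnerable).
    Returns per-category counts of TP, FP, TN, FN.
    """
    counts = Counter()
    for test, (category, truly_vulnerable) in expected.items():
        # A test missing from the tool's output counts as "not flagged".
        flagged = actual.get(test, (category, False))[1]
        if truly_vulnerable and flagged:
            counts[(category, "TP")] += 1
        elif truly_vulnerable and not flagged:
            counts[(category, "FN")] += 1
        elif not truly_vulnerable and flagged:
            counts[(category, "FP")] += 1
        else:
            counts[(category, "TN")] += 1
    return counts
```

The relative percentages described above would then divide each FP/FN count by the number of tests in that category, while the absolute percentages divide by the total FP + FN count across all categories.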

Note that each csv file placed into csv/actual must match the structure of the official OWASP expectedresults-1.2.csv.
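Assuming the common OWASP Benchmark layout of test name, category, real vulnerability, CWE (an assumption here; verify against the actual file header), such a csv file can be parsed with the standard library:

```python
import csv

def load_results(path):
    """Parse an OWASP-style results CSV into a dict:
    test name -> (category, is_real_vulnerability).
    Lines starting with '#' are treated as comments and skipped."""
    results = {}
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row or row[0].startswith("#"):
                continue
            name, category, real = row[0], row[1], row[2]
            results[name.strip()] = (category.strip(),
                                     real.strip().lower() == "true")
    return results
```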

Set up an Infer configuration

It is possible to tune the Infer Quandary configuration by modifying the .inferconfig file.
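As an illustration, custom taint sources and sinks for Quandary can be declared in .inferconfig as JSON entries along these lines (the procedure names below are placeholders, and the exact keys supported depend on your Infer version; consult the Infer documentation):

```json
{
  "quandary-sources": [
    { "procedure": "com.example.Request.getParameter", "kind": "Other" }
  ],
  "quandary-sinks": [
    { "procedure": "com.example.Db.rawQuery", "kind": "Other" }
  ]
}
```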