SecurityEval

Update

We have updated the dataset with a new version. It addresses the following issues:

  1. Typos in the prompts.
  2. Removal of prompts that deliberately ask to generate vulnerable code.

This version contains 121 prompts covering 69 CWEs. The old results and model evaluations are unchanged. The new dataset is available in the dataset.jsonl file.

You can find the old dataset and the evaluation results for the MSR4P&S workshop in the v1.0 release.

Introduction

This repository contains the source code for the paper titled SecurityEval Dataset: Mining Vulnerability Examples to Evaluate Machine Learning-Based Code Generation Techniques. The paper was accepted at the first edition of the International Workshop on Mining Software Repositories Applications for Privacy and Security (MSR4P&S '22). It describes a dataset for evaluating the output of machine learning-based code generation models and demonstrates the application of the dataset to two code generation tools.

Project Structure

  • dataset.jsonl: dataset file in JSON Lines format. Every line contains a JSON object with the following fields (see the loading sketch after this list):
    • ID: unique identifier of the sample.
    • Prompt: prompt for the code generation model.
    • Insecure_code: example of vulnerable code that may be generated from the prompt.
  • DatasetCreator.py: script to create the dataset from the folders: Testcases_Prompt and Testcases_Insecure_Code.
  • Testcases_Prompt: folder containing the prompt files.
  • Testcases_Insecure_Code: folder containing the insecure code files.
  • Testcases_Copilot: folder containing the code generated by GitHub Copilot.
  • Testcases_InCoder: folder containing the code generated by InCoder.
  • Databases: folder containing the databases for the CodeQL analysis.
    • job_{copilot,incoder}.sh: scripts to run the CodeQL analysis.
  • Result: folder containing the results of the evaluation.
    • DataTable.{csv,xlsx}: table of the CWE list with their sources.
    • testcases_copilot: folder containing the results of running CodeQL on Testcases_Copilot.
    • testcases_copilot.json: results of running Bandit on Testcases_Copilot.
    • testcases_copilot.csv: results of the manual analysis of Testcases_Copilot.
    • testcases_incoder: folder containing the results of running CodeQL on Testcases_InCoder.
    • testcases_incoder.json: results of running Bandit on Testcases_InCoder.
    • testcases_incoder.csv: results of the manual analysis of Testcases_InCoder.
    • testcases.json: list of the files and folders in Testcases_Prompt.
    • CSVConvertor.py: script to convert the JSON file (i.e., testcases.json) to CSV files.
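
For quick local inspection, the prompts can also be read directly from dataset.jsonl. Below is a minimal sketch (assuming the file sits in the repository root and uses the fields listed above):

import json

# Read every sample from dataset.jsonl (one JSON object per line).
with open("dataset.jsonl") as f:
    samples = [json.loads(line) for line in f if line.strip()]

# Each sample exposes the fields ID, Prompt, and Insecure_code.
print(len(samples))
print(samples[0]["ID"])
print(samples[0]["Prompt"])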

Loading the dataset of prompts from HuggingFace

The dataset is now published on HuggingFace. You can load it as follows:

from datasets import load_dataset
dataset = load_dataset("s2e-lab/SecurityEval")
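
To peek at a sample after loading, you can index into the returned dataset. This is a small sketch; it assumes the data is exposed under the default train split, which may differ on the Hugging Face hub:

# pip install datasets  (if not already installed)
from datasets import load_dataset

dataset = load_dataset("s2e-lab/SecurityEval")
sample = dataset["train"][0]  # "train" split name is an assumption
print(sample["ID"])
print(sample["Prompt"])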

Usage of the Analyzer

Dependencies:

  • Python: 3.9.4
  • CodeQL command-line toolchain: 2.10.0
  • Bandit: 1.7.4

Bandit

# Create and activate a virtual environment, then install Bandit.
python3 -m venv bandit-env
source bandit-env/bin/activate
pip install bandit

# Run Bandit recursively on the generated code and write JSON reports.
bandit -r Testcases_Copilot -f json -o Result/testcases_copilot.json
bandit -r Testcases_InCoder -f json -o Result/testcases_incoder.json
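
To get a quick summary of the findings, the JSON reports can be post-processed. The following is a minimal sketch assuming Bandit's standard report layout, where findings are listed under a top-level "results" key:

import json
from collections import Counter

# Count Bandit findings per test ID in the Copilot report.
with open("Result/testcases_copilot.json") as f:
    report = json.load(f)

counts = Counter(finding["test_id"] for finding in report["results"])
for test_id, n in counts.most_common():
    print(test_id, n)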

CodeQL

Install CodeQL from here: https://codeql.github.com/docs/codeql-cli/getting-started-with-the-codeql-cli/

cd Testcases_Copilot
codeql database create --language=python 'ROOT_PATH/SecurityEval/Databases/Testcases_Copilot_DB' # Use your path to the database
cd ../Databases
sh job_copilot.sh

cd ../Testcases_InCoder
codeql database create --language=python 'ROOT_PATH/SecurityEval/Databases/Testcases_Incoder_DB' # Use your path to the database
cd ../Databases
sh job_incoder.sh
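
If you prefer to run the analysis by hand rather than through the job scripts, a command along the following lines can be used. This is a sketch only: the query suite and output format that job_copilot.sh actually uses may differ, and the suite name below assumes the standard CodeQL Python query pack is available on your setup.

# Sketch: analyze the Copilot database with the standard Python security suite.
codeql database analyze 'ROOT_PATH/SecurityEval/Databases/Testcases_Copilot_DB' \
    python-security-and-quality.qls \
    --format=sarif-latest --output=Result/testcases_copilot.sarif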

Abstract

Automated source code generation is currently a popular machine learning-based task. It can be helpful for software developers to write functionally correct code from a given context. However, just like human developers, a code generation model can produce vulnerable code, which the developers can mistakenly use. For this reason, evaluating the security of a code generation model is a must. In this paper, we describe SecurityEval, an evaluation dataset to fulfill this purpose. It contains 130 samples for 75 vulnerability types, which are mapped to the Common Weakness Enumeration (CWE). We also demonstrate using our dataset to evaluate one open-source (i.e., InCoder) and one closed-source code generation model (i.e., GitHub Copilot).

Citation

@inproceedings{siddiq2022seceval,
  author={Siddiq, Mohammed Latif and Santos, Joanna C. S.},
  booktitle={Proceedings of the 1st International Workshop on Mining Software Repositories Applications for Privacy and Security (MSR4P\&S22)},
  title={SecurityEval Dataset: Mining Vulnerability Examples to Evaluate Machine Learning-Based Code Generation Techniques}, 
  year={2022},
  doi={10.1145/3549035.3561184}
}