Detect Llama -- Finding Vulnerabilities in Smart Contracts using Large Language Models

Evaluation results and evaluation set

This repository contains the evaluation set and the results from testing the various models in our paper, Detect Llama -- Finding Vulnerabilities in Smart Contracts using Large Language Models.

The evaluation results, i.e. the output of the ScrawlD (https://github.com/sujeetc/ScrawlD) testing and majority-vote process, can be found in the evaluation_results directory.
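
For reference, a minimal sketch of such a majority vote over per-run findings for a single contract is shown below; the function, threshold, and label names are illustrative assumptions and not the exact pipeline used to produce these results.

    from collections import Counter

    def majority_vote(findings_per_run, threshold=None):
        # findings_per_run: one set of vulnerability labels per tool/model run.
        # threshold: number of runs that must agree; defaults to a strict majority.
        if threshold is None:
            threshold = len(findings_per_run) // 2 + 1
        counts = Counter(label for findings in findings_per_run for label in findings)
        return {label for label, count in counts.items() if count >= threshold}

    # Example: three runs report findings for one contract.
    runs = [{"reentrancy", "tx-origin"}, {"reentrancy"}, {"unchecked-call", "reentrancy"}]
    print(majority_vote(runs))  # {'reentrancy'}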

The prompts used and the set of smart contracts used for evaluation, along with the address and compiler version of each deployed contract, can be found in the evaluation_set directory.
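
As an illustration only, the per-contract entries could be read as in the sketch below; the file name and column names here are hypothetical, so check the files in the evaluation_set directory for the actual layout.

    import csv

    # Hypothetical path and column names; inspect evaluation_set/ for the real files.
    with open("evaluation_set/contracts.csv", newline="") as f:
        for row in csv.DictReader(f):
            print(row["address"], row["compiler_version"])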

If you use this data, please cite our paper using the citation below.

@misc{ince2024detectllamafinding,
      title={Detect Llama -- Finding Vulnerabilities in Smart Contracts using Large Language Models}, 
      author={Peter Ince and Xiapu Luo and Jiangshan Yu and Joseph K. Liu and Xiaoning Du},
      year={2024},
      eprint={2407.08969},
      archivePrefix={arXiv},
      primaryClass={cs.CR},
      url={https://arxiv.org/abs/2407.08969}, 
}