Pinned Repositories
cti-to-mitre-with-nlp
Replication package for the paper "Automatic Mapping of Unstructured Cyber Threat Intelligence: An Experimental Study" published at the IEEE International Symposium on Software Reliability Engineering (ISSRE) 2022
DeepEST
EVIL
EVIL (Exploiting software VIa natural Language) is an approach to automatically generate software exploits in assembly/Python language from descriptions in natural language. The approach leverages Neural Machine Translation (NMT) techniques and a dataset that we developed for this work.
Failure-Dataset-OpenStack
Failure dataset containing information on the events collected in the OpenStack cloud computing platform during three different campaigns of fault-injection experiments performed with three different workloads.
fantastic_beasts
The Fantastic Beasts Framework is a collection of tools for fuzzing the Android OS.
Fault-Injection-Dataset
Failure dataset accompanying the paper "How Bad Can a Bug Get? An Empirical Analysis of Software Failures in the OpenStack Cloud Computing Platform"
OpenStack-Fault-Injection-Environment
Tools to repeat the fault-injection experiments presented in the paper "How Bad Can a Bug Get? An Empirical Analysis of Software Failures in the OpenStack Cloud Computing Platform" (ESEC/FSE '19).
powershell-offensive-code-generation
Shellcode_IA32
Shellcode_IA32 is a dataset consisting of challenging but common assembly instructions, collected from real shellcodes, with their natural language descriptions. The dataset can be used for neural machine translation tasks to automatically generate software exploits from natural language.
thorfi
ThorFI: A Novel Approach for Network Fault Injection as a Service
DESSERT Research Lab (University of Naples Federico II, Italy)'s Repositories
dessertlab/cti-to-mitre-with-nlp
Replication package for the paper "Automatic Mapping of Unstructured Cyber Threat Intelligence: An Experimental Study" published at the IEEE International Symposium on Software Reliability Engineering (ISSRE) 2022
dessertlab/Shellcode_IA32
Shellcode_IA32 is a dataset consisting of challenging but common assembly instructions, collected from real shellcodes, with their natural language descriptions. The dataset can be used for neural machine translation tasks to automatically generate software exploits from natural language.
dessertlab/EVIL
EVIL (Exploiting software VIa natural Language) is an approach to automatically generate software exploits in assembly/Python language from descriptions in natural language. The approach leverages Neural Machine Translation (NMT) techniques and a dataset that we developed for this work.
dessertlab/Failure-Dataset-OpenStack
Failure dataset containing information on the events collected in the OpenStack cloud computing platform during three different campaigns of fault-injection experiments performed with three different workloads.
dessertlab/OpenStack-Fault-Injection-Environment
Tools to repeat the fault-injection experiments presented in the paper "How Bad Can a Bug Get? An Empirical Analysis of Software Failures in the OpenStack Cloud Computing Platform" (ESEC/FSE '19).
dessertlab/thorfi
ThorFI: A Novel Approach for Network Fault Injection as a Service
dessertlab/powershell-offensive-code-generation
dessertlab/DeepEST
dessertlab/iris
Repository linked to DSN 2023 paper "IRIS: a Record and Replay Framework to Enable Hardware-assisted Virtualization Fuzzing"
dessertlab/Targeted-Data-Poisoning-Attacks
This repository contains the code, the dataset and the experimental results related to the paper "Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks" accepted for publication at The 32nd IEEE/ACM International Conference on Program Comprehension (ICPC 2024).
dessertlab/violent-python
This repo contains a manually curated dataset, where a sample contains a piece of Python code from an offensive software, and its corresponding description in natural language (plain English).
dessertlab/DeepSample
dessertlab/k4.0s
This repo refers to the paper "Introducing k4.0s: a Model for Mixed-Criticality Container Orchestration in Industry 4.0" @ ADSN 2022
dessertlab/CTI-Document-Analyzer
dessertlab/Laccolith
dessertlab/powershell-offensive-code-generation-Artifact
dessertlab/programmazione2-2020
dessertlab/PySA2
PySA2 is the Python Source-code Analyzer for Python 2.7 that exploits Python AST
dessertlab/train-ticket
Train Ticket - A Benchmark Microservice System
dessertlab/ACCA
Automating the Correctness Assessment of AI-generated Assembly Code for Security Contexts
dessertlab/DAIC
dessertlab/DeVAIC
DeVAIC (Detection of Vulnerabilities in AI-generated Code) is a tool that works on code snippets written in Python language with the aim of detecting vulnerabilities belonging to the OWASP categories listed in the Top 10 of 2021.
dessertlab/DevOpsTesting
dessertlab/generative-ai-cybersecurity
This repository contains the materials and scripts for the talk titled "Generative AI in Cybersecurity: Generating Offensive Code from Natural Language" by Pietro Liguori, University of Naples Federico II, DESSERT group. The talk is part of ARTISAN 2024: Summer School on the role and effects of ARTificial Intelligence in Secure ApplicatioNs.
dessertlab/JSSOFTWARE_Android_rejuvenation
This public repository includes raw data used in the experimental analysis provided in the article "Software Micro-Rejuvenation for Android Mobile Systems".
dessertlab/OpenStack-multi-tenant-workload
dessertlab/PureEdgeSim
PureEdgeSim: A simulation framework for performance evaluation of cloud, fog, and pure edge computing environments.
dessertlab/SEEliDS
dessertlab/Software-Exploits-with-Contextual-Information
dessertlab/xen_temporal_isolation_data