⚠️ This is an experimental project. It has nothing to do with any existing or future real project; its name is fictitious and unrelated to any existing product or company. It is for educational purposes only.
A research project on the usefulness of AI in DevSecOps, focused on the design phase and using OpenAI GPT-3.5.
The most important DevSecOps goals are:
- shift security left
- live in the developer ecosystem (IDE, code hosting, PRs, etc.)
- provide fast feedback and guidance
For the coding phase, we already have tools like Semgrep that can benefit from AI and LLMs. What about the design phase? Typical manual activities there are:
- security design review
- threat modelling
The aim of this research is to answer whether the current state of LLMs can bring meaningful value to these security activities.
Each time the input data is updated, a GitHub Actions workflow runs, a query is sent to GPT-3.5, and the results are committed back to the repository as output.
The workflow can either push directly to the repository or open a pull request. User stories can also be created as issues, and the bot will reply with the output in a comment.
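The query step of that workflow can be sketched roughly as follows. This is a minimal sketch, not the project's actual code: the prompt wording, model choice, and file names are illustrative assumptions, and running it requires the `openai` package plus an `OPENAI_API_KEY` in the environment.

```python
# Sketch: build a security-review prompt for an input document and send it
# to GPT-3.5. Prompt text and model name are illustrative assumptions.
from pathlib import Path


def build_messages(document: str) -> list[dict]:
    """Build the chat-completion messages for a security design review."""
    return [
        {"role": "system",
         "content": "You are a security architect performing a security design review."},
        {"role": "user",
         "content": "Review the following project description and list security "
                    "risks and recommendations:\n\n" + document},
    ]


if __name__ == "__main__":
    # Requires OPENAI_API_KEY in the environment (a repository secret in CI).
    from openai import OpenAI  # pip install openai
    client = OpenAI()
    document = Path("PROJECT.md").read_text()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=build_messages(document),
    )
    Path("PROJECT_SECURITY.md").write_text(response.choices[0].message.content)
```

In CI, a workflow step would run this script on pushes that touch the input files and commit (or open a pull request with) the generated `*_SECURITY.md` file.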
Name | File | Description | Security artefact | Output |
---|---|---|---|---|
Project description | PROJECT.md | High-level description of the project with a business explanation and a list of core features | High-level security design review | PROJECT_SECURITY.md, also as a pull request |
Architecture | ARCHITECTURE.md | Architecture of the solution | Threat modelling | ARCHITECTURE_SECURITY.md |
User stories | user-stories/* (also in issues) | Technical and user stories to implement | Security-related acceptance criteria | user-stories/*_SECURITY.md (also as issue comments) |
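The naming convention in the table above can be captured in a small helper. This is a sketch: the rule is inferred from the file names listed in the table, not taken from the project's code.

```python
# Sketch: derive the security-artefact output path from an input file path,
# following the naming convention inferred from the table above.
from pathlib import Path


def security_output_path(input_path: str) -> str:
    """E.g. PROJECT.md -> PROJECT_SECURITY.md,
    user-stories/login.md -> user-stories/login_SECURITY.md."""
    p = Path(input_path)
    return str(p.with_name(f"{p.stem}_SECURITY{p.suffix}"))
```

The same rule applies to every row: the output keeps the input's directory and extension and appends `_SECURITY` to the file name.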
Check my blog post if you want to learn how I approached this research and interpreted the results.
If you want to talk, I'm on X/Twitter.
If you would like to try this experiment on your own:
- fork the repository
- set `OPENAI_API_KEY` in the repository secrets