Ask questions
Guodafeng-tl opened this issue · 5 comments
How do you add a vulnerability case and generate a scorecard?
Scorecard generation is described here: https://owasp.org/www-project-benchmark/#div-scoring.
Vulnerability test cases cannot be added directly by users, but they can be added by the project team. If there is a type of vulnerability you want added (a CWE #), or a specific framework, data source, data flow, or data sink that introduces a CWE, let us know what you are interested in adding. We haven't released a new version in ages, but we are planning a 1.3 release in the next month or two that adds Hibernate Injection test cases and some web-services-style test cases as well. If you have other questions or follow-up questions, feel free to email me directly at: dave dot wichers at owasp dot org.
> Vulnerability test cases cannot be added by users but they can be added by the project team.
What's the reasoning behind this?
I can understand that adding new test cases could be complicated and require interaction with the project team, but why disallow it completely?
I think OWASP projects should be open for everyone to contribute to. That is a personal opinion, of course :)
Good questions! And I totally agree. I didn't mean to imply they weren't allowed to. We want contributions!
We just haven't made the generator for OWASP Benchmark public; what is public is the generated benchmark. We have invited a few people/teams to contribute directly, but after being invited in, they take a look, see it's a bit complex, and walk away. The project would LOVE for some people/teams to contribute in this way, but no one has picked up a shovel and dug in.

Before we make the generator public, we'd want to document how it works and how to contribute (e.g., how to change it, and how to verify the changes work, which is a bit complex). We've documented some of that to help those who offered to help, but there is much more in that area that needs to be done. We'd be happy to do more if more volunteers who would actually contribute emerged.
We've also volunteered to do the heavy lifting ourselves: contributors can just send us code snippets of the types of things they want added to Benchmark. Some contributions like that have already been incorporated into the 1.2 release, and some additional ones are going to be included in 1.3.
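As a hedged sketch of the kind of snippet a contributor might send, here is a minimal source-to-sink SQL injection pattern (CWE-89). The names and structure here are purely illustrative, not the Benchmark's actual test-case format (real Benchmark cases are Java servlets); the point is just to show a tainted data source flowing into a SQL sink, which is the source/data-flow/sink pattern mentioned above.

```java
// Illustrative only: a tainted "source" flowing into a SQL "sink" (CWE-89).
// This is NOT the OWASP Benchmark test-case format, just the shape of a
// snippet someone might contribute for the team to turn into test cases.
public class SqlInjectionSnippet {

    // Simulated untrusted source (in a real test case this would be
    // something like an HTTP request parameter).
    static String taintedSource() {
        return "' OR '1'='1";
    }

    // Vulnerable sink: untrusted input concatenated into a SQL string.
    static String buildQueryUnsafe(String userInput) {
        return "SELECT * FROM users WHERE name = '" + userInput + "'";
    }

    // Safe variant: a placeholder that a PreparedStatement would bind,
    // keeping the untrusted data out of the query text entirely.
    static String buildQuerySafe() {
        return "SELECT * FROM users WHERE name = ?";
    }

    public static void main(String[] args) {
        // prints: SELECT * FROM users WHERE name = '' OR '1'='1'
        System.out.println(buildQueryUnsafe(taintedSource()));
        // prints: SELECT * FROM users WHERE name = ?
        System.out.println(buildQuerySafe());
    }
}
```

A scanner run against the benchmark is expected to flag the unsafe flow (a true positive) and stay quiet on the parameterized variant (avoiding a false positive), which is exactly what the scorecard measures.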
Cool - that sounds very reasonable 👍