Support for passing the --audit-level flag to npm audit
archfz opened this issue · 6 comments
It would be nice to only consider vulnerabilities at or above a given level.
Hmm, it would be hard to implement with the current method, which reads the vulnerability IDs from the generated report using a simple regex:
const SPLIT_REGEX = /(https:\/\/(nodesecurity.io|npmjs.com)\/advisories\/)/;
...
const rawIds = data.split(SPLIT_REGEX).map(str => str.substring(0, 4).trim());
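For illustration (the sample line here is just an assumption of what a line of the text report looks like, not an exact quote), the split only carries the IDs forward and drops any severity information:
const data = 'More info  https://npmjs.com/advisories/1500';
data.split(SPLIT_REGEX);
// => [ 'More info  ', 'https://npmjs.com/advisories/', 'npmjs.com', '1500' ]
// after the .map(...) above, rawIds ends up as something like:
// [ 'More', 'http', 'npmj', '1500' ]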
One way I can think of doing this is to collect all the severity levels (low, high, critical, etc.) as well as all the vulnerability IDs and map them together, but this might not be 100% accurate. I could add the --audit-level flag and pass it through to the command, but it seems the report would still be generated with all of the vulnerabilities.
I'll try to think of something; meanwhile, any ideas / PRs are welcome :)
I've put a pull request together that adds support for this, as well as accommodating a production flag that I've raised another issue for.
The simplest approach was to use the JSON output for the processing.
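Roughly, the filtering idea looks like this (a minimal sketch, not the actual PR code; the advisories and severity fields assume the npm 6 --json report shape):
const SEVERITY_LEVELS = ['info', 'low', 'moderate', 'high', 'critical'];

// keep only advisories at or above the requested --audit-level
function filterByAuditLevel(report, auditLevel) {
  const threshold = SEVERITY_LEVELS.indexOf(auditLevel);
  return Object.values(report.advisories || {})
    .filter(advisory => SEVERITY_LEVELS.indexOf(advisory.severity) >= threshold);
}

// e.g. only report high and critical vulnerabilities
// const ids = filterByAuditLevel(JSON.parse(jsonBuffer), 'high').map(a => a.id);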
JSON approach looks great! Thank you for contributing @IPWright83
I tested with a few repositories. Some of my older repositories have many vulnerability warnings, which I think made the buffer exceed its limit and caused the child process to be terminated.
Here I get the incomplete JSON buffer:
{
"id": 1490,
"path": "jest>jest-cli>@jest/core>jest-haste-map>sane>micromatch>extglob>snapdragon>base>define-property>is-descriptor>is-accessor-descriptor>kind-of",
"dev": true,
"optional": false,
"bundled": false
},
{
"id": 1490,
"path": "babel-jest>@jest/transform>micromatch>extg
undefined:22005
"path": "babel-jest>@jest/transform>micromatch>extg
SyntaxError: Unexpected end of JSON input
at JSON.parse (<anonymous>)
So I added some logging around the buffer and got this:
- Each data chunk is 8192 bytes (8 kilobytes)
- The total JSON buffer is 22,316,959 bytes (about 22 megabytes)
8192 bytes is the maximum size of each chunk and I don't think we can increase it (we don't actually need to deal with the chunk size anyway). We only have to increase the maxBuffer size from the default. The current approach works great; I'll add the code to increase the maximum size.
Thanks again!
Mok
@jeemok just to check after re-reading this.
The buffer size shouldn't be a problem with the JSON approach, as it just keeps adding to the jsonBuffer string via stdout. I think that's going to keep streaming regardless of buffer size, or have I misunderstood there? That's specifically why I ended up listening for the end of the stream before parsing the JSON: it was invalid unless you'd obtained the whole lot.
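The pattern I mean is roughly this (a sketch only; the use of exec and the variable names are assumptions based on the description above, not the exact PR code):
const { exec } = require('child_process');

const audit = exec('npm audit --json');
let jsonBuffer = '';

// stdout arrives in chunks, so keep appending until the stream ends
audit.stdout.on('data', chunk => {
  jsonBuffer += chunk.toString();
});

// only parse once the child process has closed; before that the JSON is incomplete
audit.on('close', () => {
  const report = JSON.parse(jsonBuffer);
  // ...process the report
});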
@IPWright83 so I was testing with my old repository that has lots of vulnerabilities, and the JSON was invalid because the child process closed before receiving all the chunks from the stream. The default maximum size allowed for maxBuffer is 1,048,576 bytes (1024 * 1024), as stated in the documentation:
maxBuffer <number>
Largest amount of data in bytes allowed on stdout or stderr. If exceeded, the child process is terminated and any output is truncated. See caveat at maxBuffer and Unicode. Default size: 1024 * 1024.
So what I did was just increase that to 50 MB, which should work for most cases. Actually, maybe we should add handling for the case where it ever exceeds 50 MB and throw a warning or something.
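Something along these lines (just a sketch of the intent; the maxBuffer error-message check is an assumption about Node's behaviour, and the real code may differ):
const { exec } = require('child_process');

const MAX_BUFFER_SIZE = 50 * 1024 * 1024; // 50 MB instead of the default 1024 * 1024

exec('npm audit --json', { maxBuffer: MAX_BUFFER_SIZE }, (err, stdout) => {
  // npm audit exits non-zero when vulnerabilities are found, so an error by itself
  // is not a failure; we only warn on the truncation case here
  if (err && /maxBuffer/i.test(err.message)) {
    console.warn(`npm audit output exceeded ${MAX_BUFFER_SIZE} bytes and was truncated`);
    return;
  }

  const report = JSON.parse(stdout);
  // ...process the report
});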