nodejs/security-wg

Exploring security policies in Node core

Closed · 45 comments

In the security-wg Slack, we've been discussing what policies in Node core might look like: https://nodejs-security-wg.slack.com/archives/C9KTR110F/p1529928028000444

This discussion largely started due to Ryan Dahl's talk at JSConf EU 2018, where he gives the example that "your linter shouldn't get complete access to your computer and network".

In the Slack discussion, we talked about different kinds of policies, and different attacker models. @brycebaril from NodeSource talked about some of the policies they offer, and mentioned that he'd be interested in these coarse-grained policies being implemented in core.

I also chimed in with some thoughts about policies and attacker models, since this is mostly what we do at Intrinsic.

This is all very speculative, but moving forward, there's been interest from other members of the group in further exploring this concept.

I think it's reasonable to start the discussion with very coarse-grained policies (e.g., does this Node process get to use the network or not?). We'll need to decide the list of policies we'd like to support. We'll need to decide if we're defending against well-meaning, yet buggy code, or actively malicious code. And depending on that answer, we'll have lots of details to work through (e.g., if you turn off networking, are you still allowed to spawn child processes that might use the network?). And finally, once we know what we'd like to build, we can figure out if it's feasible.

This sounds like a great idea to me, and good to get the discussion into an issue where everybody can contribute.

We might want to involve folks from the Module Team here.
I think having process-level policies is the best first step to iterate from.

I have to check if I can free some time to spend on this issue. But that's super exciting.

bmeck commented

I also chimed in with concerns about exceptions to policies and resource integrity checks. In particular, I want to evaluate the granularity of permissions and sharing. Node's core is not incredibly robust and can easily be mutated in ways that make it less robust; leaking permissions seems realistic. So I think these measures are better suited to guarding against accidental misuse; real security is going to keep relying on auditing of code, which we also cannot currently enforce on code being evaluated within Node.

What do I need to do to get into this conversation?

I've been working on how to reserve privileges to some modules but not others.

That seems to require some notion of module identity.
module-keys provides a notion of module identity that resists impersonation.

Being able to open a sink like eval to some modules but not others could benefit from letting some modules mint inputs that would be trusted by eval; node-sec-patterns builds on module-keys to provide configurable whitelists of modules allowed to create values of a particular contract type.

Finally, gating access to commonly misused sources of authority like child_process requires module loader hooks. I have a runtime patch that redoes @bmeck's module resolver hooks for require and am working on a Babel plugin that does the same for an unpatched runtime.

@mikesamuel: I'm trying to get the conversation to be in the issue tracker as much as possible so that it's accessible and easy to look at the history as we move forward exploring these topics. So consider yourself part of the conversation!

On Slack, we've started to talk about maybe meeting up at Node Summit. Will you be around by any chance? (by the way, to join our Slack, go to https://nodejs-security-wg.herokuapp.com/, but I'd prefer to move as much of the conversation to this issue tracker as possible).

Thanks for linking to module-keys! I looked at it really briefly, but from the README, it looks like you're leaning towards protecting well-meaning code, rather than malicious code:

Module keys allow code written in good faith to cooperate while avoiding lowest-common-denominator security problems. It does not allow safely running malicious code within the same process. Potentially malicious code should be sandboxed if it needs to run at all.

Do you think that's a good model to aim for with initial policies in core (I think this is one of the first things we need to decide)? I'll take a closer look at your module in the coming days.

The worst thing we could do would be to ship a mechanism that gives users a false sense of security.
As a first step, preventing a whole Node process from loading a set of modules altogether (like net, http(s|2), dns, and udp), or preventing the process from writing any file while still allowing it to read some, would be a great PoC IMHO.
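To make that PoC shape concrete, here is a minimal userland sketch of denying a set of built-in modules (Module._load is an undocumented internal, and dgram is the actual name of the UDP module; this is illustrative only, since genuinely malicious code could bypass it via process.binding or native addons):

// deny-net.js -- preload with: node -r ./deny-net.js app.js
const Module = require('module');
const denied = new Set(['net', 'http', 'https', 'http2', 'dns', 'dgram']);
const originalLoad = Module._load;
Module._load = function (request, parent, isMain) {
  if (denied.has(request)) {
    throw new Error(`policy: loading '${request}' is not allowed`);
  }
  return originalLoad.call(this, request, parent, isMain);
};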

I believe a fine-grained per-module permission system would require at least long stack traces everywhere, which is not guaranteed even with Async Hooks.

I'll be staying in SF a few days after Node.js Summit and would be available for a physical + hangout work session on that topic.

@vdeturckheim: if I'm understanding you correctly, you're saying that we should only consider policies resilient against malicious attackers?

I think that's worth exploring (and in fact is what we do at Intrinsic, though the other details are quite different: we have very fine-grained policies and many isolation contexts), but that implies a lot of other complications. For example, for the PoC you described, would you disallow all native modules and child processes (otherwise a malicious attacker can just reimplement that functionality themselves)? We'd also need to make changes to Buffer: consider that you can turn off bounds checking in many operations and also allocate memory without zero-filling. A malicious attacker model would also mean that we'd probably need to fix all of the binding issues that so far have been out of scope (discussed a bit in #18, with some more background in issues like nodejs/node#9821).
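To make the Buffer point concrete: the zero-filling half of that concern is directly observable with a documented API, which a malicious-code model would have to gate or remove:

// Buffer.allocUnsafe skips zero-filling, so the returned buffer may
// contain stale process memory -- possibly fragments of secrets:
const b = Buffer.allocUnsafe(64);
console.log(b); // prints whatever bytes were previously in that memory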

In my opinion, I think per-module policies don't make sense combined with a malicious attacker model (without massive semantic changes): modules need to interact too much with each other and it will be very difficult for users to think about the effect of the policies.

(btw, I'm out of town starting July 27, and unavailable on the 25th, so I'd prefer to meet on the 24th or 26th if possible)

deian commented

An in person meeting would be very useful. I'll also be speaking about binding bugs at Node Summit FWIW.

My 2c on the above discussion(s): I think scoping the attacker model and kinds of policies (at a high level) we want is the most important thing to try to tackle first. I'm worried about introducing mechanisms before we figure these things out.

@drifkin said

On Slack, we've started to talk about maybe meeting up at Node Summit. Will you be around by any chance?

Yep. I'm talking at 11:35 on day 1 about "Improving Security by Improving the Framework." </plug shameless>

Thanks for linking to module-keys! I looked at it really briefly, but from the README, it looks like you're leaning towards protecting well-meaning code, rather than malicious code:

Yes. I think that running user code in the same realm as process.binding (absent Intrinsic's membrane) is a non-starter which means that nothing short of frozen realms will allow true mutual suspicion. I also think true mutual suspicion is not the right model for mitigating damage due to bugs in code produced by trusted developers or third-party modules chosen by them.

Do you think that's a good model to aim for with initial policies in core (I think this is one of the first things we need to decide)? I'll take a closer look at your module in the coming days.

Without access to the prior discussion I'm not sure I can answer that question.

In my experience, in-realm language based enforcement mechanisms and boundary mechanisms like Intrinsic's filtering membranes or syscall filters are often complementary.

+1 to what @deian said. Expanding on the different models and exploring how they fit together would be a good use of time at the summit.

@vdeturckheim

The worst thing we could do would be to ship a mechanism that gives users a false sense of security.

This is a good point. I think we need to clearly communicate what we provide w.r.t. confidentiality.
Someone wrote a fictional account of an npm module that exfiltrated secrets, which I'm having trouble finding a link for. Solving this fully requires solving side channels, and Spectre means that even if we can limit the side effects that injected code can cause, we can't prevent exfiltration. So unless we provide a way to move secrets out of process, we can provide few hard confidentiality guarantees.

I think we can provide plenty of integrity improvements though.

As a first step, preventing a whole Node process from loading a set of modules altogether (like net, http(s|2), dns, and udp), or preventing the process from writing any file while still allowing it to read some, would be a great PoC IMHO.

I presented a demo of some of this in my recent jsconf.eu talk.
Using a combination of module resolver hooks and contract types, I dynamically enforce several properties:

  • XSS: that an HTTP response body is the concatenation of strings from whitelisted source modules.
  • Shell injection: that a shell command comes from a whitelisted source known to properly escape, sh-template-tag.
  • Attacker-controlled strings reaching require: that only files recognized as production sources, with recognized hashes, load in production.

https://github.com/mikesamuel/jsconf-eu-2018
I'm happy to recap if we meet up at the Node summit.

<side_node>@mikesamuel I think we have to grab a drink at Node Summit to discuss your last presentation and compare how Sqreen works with that.</side_node>

I will be in SF from Sunday July 22nd to Saturday July 28 (mid day) and I am mostly needed at the conference on day 2.

  • What day would work best for everyone to meet (keeping in mind that we might have a few people through Hangout (@bmeck)) ?
    • What about Monday in the afternoon?
  • If we meet outside of conference time, do we have a place to meet? (Sqreen's office in SF is still too small for us to be able to host this.)
  • If we meet during the conference, should I ask the organizer if we can have a room?

@vdeturckheim
Sounds lovely.
I tend to sit in a corner with a laptop mumbling to myself for several hours before I present so anything but morning of day 1 works for me.

I'm around on the 26th. I'm pretty busy the three days of the conference.

My first 2 cents is that we should include thinking about what "hooks" in Node core would allow additional controls to be added, as opposed to everything being part of core itself (in keeping with the small-core philosophy). That might not be possible due to overhead, but it's worth including as part of the discussion/thinking.

@mhdawson definitely. I don't want us to end up with a domain-like feature that has impacts everywhere in the codebase. However, that might be an optimistic wish.

From this thread, it sounds like the 26th (the day after Node Summit) would work the best to meet up to discuss policies.

/cc @mhdawson @mikesamuel @vdeturckheim

Who else is interested? We'd be happy to host at the Intrinsic offices (we're in the financial district in SF).

@bmeck are you still interested in joining remotely?

LGTM, I have a meeting at 10 in SoMa but I should be able to re-schedule it if needed.

bmeck commented

Yes I would like to attend, that is the last day of TC39 and I'm not sure but there might be a fly on the wall or 2 that want to listen in from their end.

It was an awesome experience attending the security working group meeting at Intrinsic; thank you very much to the organizers!

After listening to the discussion, I have these questions / observations.

Is the purview of the working group limited to malicious code injection and vulnerabilities thereon? Doesn't it cover application and platform security in general in Node's context? Or was this sitting focusing only on the malicious code injection topic?

[Context from another platform, say Java] Java treats the SDK's own Java APIs as trusted, and everything else as untrusted. A security manager is defined that is programmable and tunable, to define policies on Subjects, Principals, and Users, with policy granularity going as far down as property-access restrictions on objects. A JVM-wide master object anchors all the security operations in the application. A common target of attackers is to nullify this object, which shuts down the security manager in the system completely.

Within Node's context of protection from malicious code, I believe it is important to define the scope and set the premise before we examine the implementation details:

for example:

  • do we treat user code (application and modules) as untrusted?
  • do we treat only modules as untrusted?
  • do we treat only dynamic code (eval) as untrusted?

I guess a consensus from the meeting was to treat built-in APIs and application as trusted, and everything else (modules, dynamic code) as untrusted?

I guess a consensus from the meeting was to treat built-in APIs and application as trusted, and everything else (modules, dynamic code) as untrusted?

My recollection was that no consensus was reached on that question, but that a consensus was reached that resource integrity is within security-wg's purview.

I think there was a consensus that built-in modules are, and will continue to be, confusable -- meaning that built-in modules do not maintain invariants in the face of things like malicious prototype monkeypatching and stack alignment attacks. No one seemed skeptical when it was claimed that changing that would require a large & ongoing effort by maintainers.

So builtin modules do currently trust application code.

My argument throughout is that trusted/untrusted need not be a binary distinction.

I think we can make progress on many fronts by, during production,

  • assuming that module code (that passes resource integrity) is written in good faith, but is confusable by untrusted inputs
  • using mechanisms to better approximate POLA by limiting access to abusable authority, based on need, to small, identifiable sets of modules

and that we ought to independently pursue efforts to limit ecosystem-level threats due to abuse of developer commit privileges.


do we treat only dynamic code (eval) as untrusted?

No consensus. I answer no, and believe there are use cases for eval which could be secured via contract types -- eval, Function, and vm would check that the input is a value of type TrustedScript, and we control how TrustedScript values come to exist.
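A minimal sketch of that gating, with hypothetical names (mintTrustedScript and safeEval are illustrations, not a real API; a real design would also restrict which modules can reach the minter):

const minted = new WeakSet();

// Only whitelisted modules would be handed a reference to this minter.
function mintTrustedScript(text) {
  const value = Object.freeze({ text: String(text) });
  minted.add(value);
  return value;
}

// A gated eval that rejects anything not minted above.
function safeEval(script) {
  if (!minted.has(script)) {
    throw new TypeError('safeEval requires a TrustedScript value');
  }
  return (0, eval)(script.text); // indirect eval, global scope
}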

My understanding is that the Node.js model has always been to assume trusted code. It does not have the equivalent to a security manager so any code can use any of the available APIs. Unlike the browser, you actively install code locally as opposed to executing code that is dynamically pulled from external sources.

A change to this assumption would be a fundamental change, and during the meeting we circled a few times, coming back (at least in my understanding) to a consensus that it's not really feasible to change this in Node core (which matches up with what @mikesamuel said above).

As stated above, though, it does not mean that we can't still do things that will improve the security posture when running Node.js.

thanks @mikesamuel - that clarifies many things. However, I should admit that (not being a security expert) I do not follow a few of the terms.

built-in modules are, and will continue to be confusable

Yes, I think it is deeply rooted in the language semantics itself: JS being dynamically typed implies objects are dynamic, and regulating object access and transformation under the pretext of security does not seem optimal or maintainable.

module code is written in good faith, but is confusable by untrusted inputs

Can you please provide an example for this? (confusable inputs)

using mechanisms to better approximate POLA by limiting access to abusable authority, based on need, to small, identifiable sets of modules

This is where I see a practical difficulty: assume I have to use a module m in my app; by contract, m provides me a sophisticated stream experience, for example.

  • As an end user, I cannot truly compute the restrictions that I want to apply on m.
  • Similarly, I wouldn't know the POLA characteristics of the modules that m depends on: n, o, and p.
  • On the contrary, going by the face of the API contract, if I restrict m from accessing disk and network, m may be abstracting an fs or http stream underneath, so inhibiting it from doing so will break the function.

So from a consumer point of view, defining access control may be painful. It would be great if this were achieved through a trusted software authority.

we ought to independently pursue efforts to limit ecosystem-level threats due to abuse of developer commit privileges.

Agree, makes perfect sense! Also how about defining security adherence policies, best practices (and certifying thereon) for modules?

@mhdawson - thanks, agree: defining and applying security policies at the Node core is not i) feasible, ii) bulletproof, iii) maintainable.

So leaving the core as an efficient JavaScript execution platform, with security policies defined, scoped, and implemented at the module level (source, load, production), looks like the path forward.

@gireeshpunathil

built-in modules are, and will continue to be confusable

Yes, I think it is deeply rooted in the language semantics itself: JS being dynamically typed implies objects are dynamic, and regulating object access and transformation under the pretext of security does not seem optimal or maintainable.

@erights did explain how Frozen Realms can address prototype poisoning. I was stating my sense of the room, though Mark and I might be more optimistic than that. Putting builtin code in a separate realm is not a trivial change though.

module code is written in good faith, but is confusable by untrusted inputs

Can you please provide an example for this? (confusable inputs)

Sorry. I didn't mean to talk about "confusable inputs."
I meant untrusted inputs reaching confusable code.
For an example of the latter, consider a very simple piece of code that uses a lookup table to find a capital city given (country, state, year) and is deeply vulnerable.
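One plausible shape of such code (a reconstruction, not the actual example from the talk): a pure lookup function with no I/O is still confusable once untrusted strings reach its bracket lookups, because bracket lookups walk the prototype chain:

// Looks harmless: a pure table lookup, no I/O.
const capitals = { US: { CA: 'Sacramento' /* ... */ } };
function capitalOf(country, state) {
  return capitals[country][state];
}

// Untrusted keys reach inherited properties instead of table entries:
capitalOf('hasOwnProperty', 'call'); // returns Function.prototype.call
capitalOf('US', '__proto__');        // returns Object.prototype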

So from a consumer point of view, defining access control may be painful.

I don't think this is the case. I've worked on a team that has managed these kinds of access controls for a much larger application group.

If a module doesn't explicitly require a dependency, then we can assume, absent evidence to the contrary, that it doesn't require it.

We have pretty reliable ways to find contrary evidence -- run the tests and see what the module does.

We can recommend ways for library developers who get reports that access was denied in production -- add more tests.

On the contrary, by the face of the API contract, If I restrict m from accessing disk and network, m may be abstracting an fs or http stream underneath, so inhibiting it from doing so will break the function

Does m in your scenario directly require or import fs, or does this use happen via an input or one of m's dependencies?

Also how about defining security adherence policies, best practices (and certifying thereon) for modules?

Is the best practices badge project aiming towards some of these goals?

@mhdawson - thanks, agree: defining and applying security policies at the Node core is not i) feasible, ii) bulletproof, iii) maintainable.

I think @mhdawson was talking specifically about whether we treat module code as malicious. That's a separate issue from whether we define and apply security policies in core.

For example, resource integrity -- making sure that only code that should load actually loads -- could be done in core.

I think it is feasible to get bulletproof, maintainable resource integrity checks.
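A userland sketch of the core of such a check (the manifest format here is hypothetical; a real design would hook the loader itself):

const crypto = require('crypto');
const fs = require('fs');

// manifest maps file paths to expected digests, e.g.
// { "lib/a.js": "sha256-..." } (hypothetical format)
function loadVerified(filePath, manifest) {
  const source = fs.readFileSync(filePath);
  const digest = 'sha256-' +
    crypto.createHash('sha256').update(source).digest('base64');
  if (manifest[filePath] !== digest) {
    throw new Error(`integrity check failed for ${filePath}`);
  }
  return source.toString('utf8');
}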

And without resource integrity, there's no clear relationship between the code loaded by core and the module code that we're debating whether we trust or not.

Features that are necessary for many application-specific security stories are, IMO, good candidates for support in core where feasible & maintainable.

@erights did explain how Frozen Realms can address prototype poisoning. I was stating my sense of the room, though Mark and I might be more optimistic than that. Putting builtin code in a separate realm is not a trivial change though.

@deian did point out in his talk that polymorphic values are an oft-overlooked problem in both builtin module code and in C++ binding code. Frozen realms would not address that.

// f means to return a valid identifier: validate, then concatenate.
function f(string) {
  if (!/^\d+$/.test(string)) { throw new Error('...'); }
  return 'foo_' + string;
}

// A polymorphic value defeats it: toString() answers '123' the first
// time (passing the regex test) and ', evil()' on the second call
// (during the '+' concatenation), so f returns 'foo_, evil()'.
f({ i: 0, toString() { return this.i++ ? ', evil()' : '123' } })
bengl commented

Hey folks, just a heads up that @addaleax has made an implementation of access-control policies: nodejs/node#22112

In terms of:

I think @mhdawson was talking specifically about whether we treat module code as malicious. That's a separate issue from whether we define and apply security policies in core.

Yes, I mean introducing controls with the aim of getting to the point where we can treat module code as malicious. Controls may still be useful for other reasons.

Moving this here from nodejs/node#24908

Problem: Malicious packages

With the recent news of the event-stream/flatmap-stream attack (summary), it seems like now would be a good time to discuss defending against these kinds of attacks.

Presently available defences:

1. use a lockfile

2. fully audit the published code of entire dependency tree

While a lockfile is generally good practice, it would require auditing to be effective. Therein lies the issue: auditing is not feasible. Dependency graphs in most modern projects are too large to audit manually.

Suggestion

One defence is to introduce permissions for node core modules such as fs, http, process, and others.
A package.json would need to specify which core modules it uses, e.g.:

{
  "name": "package-name",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "permissions": [
    "fs"
  ]
}

Restrict import/require such that requiring http in this package would throw an error, but fs would be permitted.

Each package would be provided a uniquely restricted import/require.
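A rough sketch of how such a per-package require could be built inside the loader (all names here are hypothetical):

// Built once per package from the "permissions" array in its package.json.
function makeRestrictedRequire(realRequire, permissions, coreModules) {
  const allowed = new Set(permissions);
  return function restrictedRequire(request) {
    if (coreModules.has(request) && !allowed.has(request)) {
      throw new Error(
        `package has no permission for core module '${request}'`);
    }
    return realRequire(request);
  };
}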

For backwards compatibility, packages without a permissions field would be considered to have all permissions. This could be deprecated in favour of always requiring a permissions field.

Additionally, tooling could be developed for package managers such as yarn and npm. Users could be alerted when permissions change anywhere in their dependency graph. Upon install of a dependency, the user could be prompted to accept the permissions of all packages added to the dependency graph.

This is not intended to eliminate the need for auditing, but it could reduce the number of packages needing audits to a reasonable level.

Outstanding Issues

* What to do with C++ addons? Being able to identify them as such may be enough; during install the user could be warned that the module has full access to their system (equivalent to all permissions).

* Permissions of the main package during development, build scripts, etc.

Other Defences

* Content security policy / sandboxing (restrict access to white-listed directories, and domains)

This seems in line with the goals of Constraining APIs.

Concerns were raised in the original issue about adding more details to package.json. Personally, I don't see a problem with this, since package.json is unlikely to ever go away and the addition is a rather small change to the API. If that does turn out to be a blocker, an alternative would be for Node to add a runtime option that disables require of core modules and only allows use of import, but not import(). This would at least allow static analysis of core module usage.

The context of require in the REPL and node -e was also mentioned; this is a case similar to "Permissions of the main package during development, build scripts, etc.". My suggestion for these contexts would be that all permissions are enabled. For testing purposes, though, it would be ideal to be able to pass a package.json and use its permissions, something like node --perm=package.json main.js.

@robbiespeed Are you familiar with the sensitive modules hooks previously discussed on this thread?

The attack-review-testbed's package.json defines

  "sensitiveModules": {
    "child_process": {
      "advice": "Use safe/child_process.js instead.",
      "ids": [
        "main.js",
        "lib/safe/child_process.js"
      ]
    },

which wires into sensitive-module-hook.js which vetoes unapproved loads of sensitive modules.

(The attack-review-testbed would have prevented exfiltration by flatmap-stream because http is a sensitive module, but solving this kind of attack by focusing solely on preventing exfiltration is probably not the way to go, because of side channels.)
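The veto logic in such a hook reduces to roughly this check (a simplification, not the actual sensitive-module-hook.js):

// config is the "sensitiveModules" object from package.json above.
function checkSensitiveLoad(config, request, requesterPath) {
  const entry = config[request];
  if (!entry) return; // not a sensitive module
  if (entry.ids.includes(requesterPath)) return; // whitelisted importer
  throw new Error(
    `'${requesterPath}' may not load sensitive module '${request}'. ` +
    entry.advice);
}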

A good high level summary of the issue and what's needed to address it is

POLA Would Have Prevented the Event-Stream Incident by @katelynsills

Several of us are now involved in designing such a module system for SES, for providing libraries --- including many legacy libraries --- least authority across JS hosting environments (Node, browsers, IoT, blockchain). We keep coming back to this incident as a revealing test case.

@mikesamuel Just looking at that now; my understanding could be wrong, but does it require that the user explicitly define the whitelist for each of the dependencies it uses? This seems like an issue to me, as it would require a lot of manual work. Would packaged modules be able to define which core modules they require?

@erights Great article; it's basically what I was trying to achieve with my proposal.

Is this example syntax from the article the current direction being explored?

const addHeader = require('./addHeader', {fs, https});

I like the idea that in JS you directly pass the dependencies at the use site. However, it wouldn't play nicely with import syntax. I guess in an ideal world, no dependencies would have access to core modules, which would force library authors to write their APIs to be used like:

// application code can import core modules
import http from 'http';
import { startServer } from 'server';

startServer({ http });

That combined with access control policies would probably cover the bases pretty well.

Is this example syntax from the article the current direction being explored?

Not literally. It is meant to be suggestive of the elements that need to somehow be present in any solution. The hard problem we are currently wrestling with is the conflict between aspects of current widespread coding patterns:

Module-to-module imports, and package-to-package dependencies, come in graphs, not trees. A module can be imported by many other modules, and a package can be depended upon by many other packages. This raises the issue of where policy --- of what authority should be granted to the module or package --- should be expressed.

The example code from the paper suggests that the authority be provided at the importing site. However, in order for multiple importers to share the instance they are jointly importing, these separate grants would somehow need to be merged. Or, each import site that expresses such a grant could get its own instance. Neither of these works well for JS. Or, the enclosing container --- the app as a whole --- could express what authority is granted to each of the packages in the app as a whole. This requires the app author to have global knowledge of all the packages being linked together to form the app.

Or, we can introduce more structure into the expression of inter-package dependencies, so that the locality of policy expression can follow the natural locality of knowledge as programmers separately develop packages that get linked together. We expect to have something readable soon on our design.
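To see the conflict in miniature, using the article's suggestive syntax (hypothetical, not real Node):

const fs = require('fs');

// Two importers of the same module express different grants
// (hypothetical require signature from the article, not real Node):
const logA = require('./logger', { fs });  // importer A grants fs
const logB = require('./logger', {});      // importer B grants nothing

// If logA === logB (one shared instance), whose grant governs it?
// If logA !== logB (one instance per grant), module-level state
// silently stops being shared between the two importers.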

@mikesamuel Just looking at that now; my understanding could be wrong, but does it require that the user explicitly define the whitelist for each of the dependencies it uses? This seems like an issue to me, as it would require a lot of manual work. Would packaged modules be able to define which core modules they require?

The dev team has to whitelist modules that may use sensitive modules. Uses of non-sensitive modules need not be recorded anywhere.

For minters, the dev team can use a combination of whitelists and self-nominate and second:

"""
Library code may also suggest grants. It may self nominate for certain privileges, and then an application may second those privileges.
"""

In teaching people about Node, we always run into the issue of explaining why require('fs') works as-is, but for require('express') you need to run npm i express before it works.

What if Node.js could take advantage of this? What if require('fs') only worked if ./node_modules/fs/index.js (relative to the package.json) existed?

{ Error: Cannot find module 'fs' - run: "npm install fs" to add it to your project
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:580:15)
    at Function.Module._load (internal/modules/cjs/loader.js:506:25)
    at Module.require (internal/modules/cjs/loader.js:636:17)
    at require (internal/modules/cjs/helpers.js:20:18) code: 'MODULE_NOT_FOUND' }

And if a submodule has fs in its package.json, as users we know that this package wants to access fs. npm could report a warning if a package with 'fs' in its dependencies is about to be installed.

Note: I am not suggesting that the whole fs module is installed with that package -- just a marker file that specifies that fs is supposed to work.
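A sketch of the check the loader would perform under this proposal (hypothetical):

const fs = require('fs');
const path = require('path');

// Honor require('fs') only if the package ships the marker file.
function coreModuleDeclared(name, packageDir) {
  return fs.existsSync(
    path.join(packageDir, 'node_modules', name, 'index.js'));
}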

@martinheidegger I feel like this would solve the per-process limitation use case more than the per-dependency one. Actually, if fs is authorized, couldn't someone dynamically add new files in the node_modules directories to get the authorizations?

@vdeturckheim I believe that with a little tinkering a per-package solution (not per-module) could work with this, which I generally think is slightly more practical than per-module.

Yes, the fs authorization gives a module super access, by the simple fact that it could theoretically rewrite the Node.js binary and replace it with a hacked one. Node could put limitations in place, though.

@martinheidegger The target app's sensitive modules config restricts access to fs.

@vdeturckheim The target app's resource integrity checks prevent loading of modified source files. An attacker would need to be able to abuse write access and generate a SHA-256 hash collision before the app would consider loading their modified source file.

Since the target app locks down dynamic code loaders like eval and new Function, and provides a proxy over Function, we can

  • be confident that the vast majority of code cannot be tricked into loading attacker-controlled code,
  • still allow legacy modules that need new Function to run,
  • focus reviewers' attention on the latter.
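A crude userland approximation of that lockdown (the testbed's actual mechanism differs, and direct eval(...) syntax is handled by the engine, so it cannot be intercepted this way):

const deny = (name) => function () {
  throw new EvalError(name + ' is disabled by policy');
};
global.eval = deny('eval');         // blocks indirect eval by name
global.Function = deny('Function'); // blocks `new Function(...)`
// Function is still reachable via (function () {}).constructor:
Object.defineProperty(Function.prototype, 'constructor', {
  value: deny('Function'),
});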

@mikesamuel the target app uses the package.json, which my approach tried to avoid. The target app also requires additional setup steps from users (which IMO makes it harder to teach/explain).

@martinheidegger
Interesting, what prompted you to avoid using package.json?

Where would configuration related to granting privileges to modules go ideally?

You're right that the setup is tricky at present. I hope to bundle a lot of the setup so that a blue-teamer can integrate it by choosing à la carte, but that level of ease of use is not there yet.

@mikesamuel In one-on-one conversations I heard before from Node.js maintainers that the package.json is something they don't want to rely on, as it is owned by npm and not Node. Also, it is possible to write Node projects entirely without a package.json; security might be relevant in those cases as well.

Where would configuration related to granting privileges to modules go ideally?

I would distinguish between two different users: the "package developers" (devs) and the "package users" (users). Preferably they would share the effort. The devs need a good way to specify that their package requires a certain permission, while the users need to grant the permissions to the packages on/after install.

To me, asking the devs to do an npm i fs --save if they want to use fs in their package seems like a reasonable solution. If npm encounters a package that wants to use fs on install, the user could simply get a check-box that enables/disables it by writing different things in the node_modules folder.

In one-on-one conversations I heard before from Node.js maintainers that the package.json is something they don't want to rely on, as it is owned by npm and not Node.

So … “Node.js maintainers” are not a homogeneous group, and we have a lot of different opinions.

If we’re talking about adding real per-package or per-module config, I wouldn’t discard package.json as an option; the biggest difficulty might be the fact that Node.js modules and npm packages don’t map 1:1.

So … “Node.js maintainers” are not a homogeneous group, and we have a lot of different opinions.

Oh, totally. I just stated the reason why I thought about a solution outside the package.json. I don't remember the person's name, just the context -- it's been a long while, and that person may have changed their opinion by now.

I personally also tended to go directly to the package.json for configuration, but the title of this issue is "Exploring security policies", so I tried thinking outside the box. 😉

@martinheidegger Thanks for explaining. I'll keep an ear out for arguments about which configuration is best placed where.

I am looking for red-teamers to help stress-test that. If the biggest problem with that code is that blue-team-maintained configuration is in a sub-ideal place, I'll be very happy indeed :)

Any follow-up?

There's a discussion going on in the Node.js WG Slack in #experimental-policies that you might want to join.

Closing the issue since there is ongoing permission-model work here: #791

Note: There is also a discussion on the OpenJS Slack: #nodejs-discussion-security-model