w3c/webcrypto

Please require a secure origin (Bug 25972)

mwatson2 opened this issue · 84 comments

Bug 25972 from bugzilla:

As with service workers, implementations want to require a secure origin in order to grant access to cryptographic functionality. We should make that a requirement in the specification so that implementations do not have to reverse engineer each other.

In particular you want to refer to the origin of http://www.whatwg.org/specs/web-apps/current-work/multipage/webappapis.html#entry-settings-object

Secure origin is defined by https://w3c.github.io/webappsec/specs/mixedcontent/

Ryan probably knows which flavor of secure origin is to be used here.

Add text saying "Developers should use Secure Origins, although this is not strictly required."

Discussion on the 7/11 call suggested requiring a secure origin, in the stronger sense of requiring the top-level browsing context to be secure.

I assume that the secure origin test should be performed as the first step in each WebCrypto method. This would mean that errors due to WebIDL type conversion would still occur on insecure origins, but then you would get the secure origin test failure.

IIUC, Chrome's current implementation does algorithm normalization and possibly other parameter validation before checking for the secure origin.

-1 for requiring a secure origin (though I'm okay with it being recommended).

The "Secure Messaging" use case, for example, doesn't require TLS in order to preserve the privacy of the exchange, so long as only ciphertext is transmitted. Simply put, we don't want the extra roundtrips associated with establishing a TLS session when they are superfluous to the actual security solution, since this leads to a worse end-user experience with no tangible benefit.

Whatever you do, please do not use the incumbent or entry settings objects here. Current settings object is more correct. See https://readable-email.org/list/public-script-coord/topic/multiple-globals-and-you

Looking at https://w3c.github.io/webappsec-secure-contexts/#new, would it be simpler to add [SecureContext] to GlobalCrypto, Crypto, and/or SubtleCrypto?

That would probably work, although it would have different normative effects than adding a guard. (Adding a guard causes errors at call time, whereas [SecureContext] prevents the interfaces from existing at all.)

@bdhess Without a secure origin, you can only make security arguments with respect to passive attackers. An active attacker - one who can modify the pages served to the client - has full access.

Privacy against passive observation is not without value, and this is the argument I have used in the past against requiring a secure origin. However, we'd need some evidence that there were users with that use case as a requirement (my company was one of those, but no longer).

@ddorwin Yes, that would seem to me to be simpler and clearer. However the secure context specification recommends both. I am not sure why, since if the interface is only exposed in secure contexts the guard could never fail. @mikewest ?

@mwatson2 In our operating environment, there's no sensitive data or PII emanating from the client. Ultimately, our use of Web Crypto is entirely to provide security around server-provided assets. The protection of client-side key material is therefore a concern for us but, as I understand it, out of scope for Web Crypto.

Yes, that would seem to me to be simpler and clearer. However the secure context specification recommends both.

Adding the [SecureContext] IDL attribute to the relevant interfaces should indeed be enough. The API won't be exposed in non-secure contexts, and since the state of a context can't change, the explicit guard in the algorithm is redundant (though probably worth noting next to the algorithm in any event). If you're ok with the implications (hiding the interface rather than throwing), then that sounds like a totally reasonable way to address the issue.

Is it web-compatible to hide the API from insecure contexts? That's the main tradeoff solution-wise.

Safari and IE both ship prefixed variants of the crypto.subtle APIs. I imagine developers would have to be feature-detecting anyway if they don't want to error out on those browsers.
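
A minimal feature-detection sketch (my own illustration, not from the spec), assuming the usual prefixed names (IE exposes window.msCrypto, older Safari exposed crypto.webkitSubtle), falling back to a bundled JS library when nothing native is found:

// Feature-detect native SubtleCrypto, including prefixed variants.
var subtle =
  (window.crypto && (window.crypto.subtle || window.crypto.webkitSubtle)) ||
  (window.msCrypto && window.msCrypto.subtle);
if (!subtle) {
  // e.g. load a pure-JS fallback such as a bundled SHA-256 implementation
}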

Marking with [SecureContext] SGTM.

-1 for requiring a secure origin (though I'm okay with it being recommended).

Note that Chrome already doesn't allow Web Crypto over non-secure origins. So clients needing to support Chrome will have to address this in their implementation regardless.

IIUC, Chrome's current implementation does algorithm normalization and possibly other parameter validation before checking for the secure origin.

Chrome (currently) does the origin check after (most) WebIDL checks on the parameters, but before things like algorithm normalization, and rejects with NotSupportedError in that case. Hence it is not very consistent about which parameters have been processed before the check. Marking with [SecureContext] avoids that problem.

What should happen when structured (de)cloning a CryptoKey into an insecure context? Under the model where crypto.subtle is hidden, I presume you could still postMessage a CryptoKey to an insecure context. Is that relevant?

@ericroman920

Note that Chrome already doesn't allow Web Crypto over non-secure origins. So clients needing to support Chrome will have to address this in their implementation regardless.

So the contention is, since Chrome has already defined a behavior, there's no point in discussing changes that would make the spec inconsistent with Chrome's present behavior? This seems like a really backwards way of developing community standards. I'm not the first person to object to this behavior:

https://www.w3.org/Bugs/Public/show_bug.cgi?id=25972#c6
https://www.w3.org/Bugs/Public/show_bug.cgi?id=25972#c20
https://www.w3.org/Bugs/Public/show_bug.cgi?id=25972#c22

From the record, it's pretty clear that Chrome came to this decision based on the assumption that there exists no scenario in which there is value in being able to use the Web Crypto API in an insecure context. Including myself, there are at least four commenters who've disputed that.

I suspect that if you were to ask the people who objected to this in the past, they would probably be more agreeable to it now. E.g., Mark commented in this very thread saying so.

Also, the concerns Ehsan raised there were mostly about the original solution being easily bypassed. Secure contexts address those concerns. He didn't really state any use cases for Web Crypto in insecure contexts, and I don't think you have clearly done so either.

New PR using [SecureContext]: #131

A problem with PR #131 is that it marks window.crypto as [SecureContext].

This means that window.crypto.getRandomValues() is no longer accessible on insecure contexts.

I don't believe we want that change -- getRandomValues() has always been available on insecure contexts.

Instead I propose marking only SubtleCrypto (i.e. window.crypto.subtle) as [SecureContext].

I'd lean towards requiring a secure origin here. I feel a little sad about losing hash capability in insecure origins (since that is not sensitive in any way, similar to getRandomValues) but overall given the current threat landscape I believe it is better to strongly protect key material even if we take this bit of collateral damage.

The CfC for this issue was approved.

Hi,
I don't think that enforcing HTTPS everywhere is a good solution, especially with that API.

I am currently developing a distributed app that works on a local network, without internet access. Each node has a little "admin" interface. Due to the cost of having and maintaining a PKI, especially for small clients, I don't want this admin interface to be on HTTPS.
Having a self-signed certificate is not an option either, because of all the security warnings it would cause; users would click through the warnings without even looking at them, making the whole thing as vulnerable as plain HTTP.

For the solution I designed, I implemented a key exchange mechanism over HTTP and encrypt only sensitive data as needed. But on the first try with Chrome on another computer, I realized that the whole thing is entirely broken.

I want to use new APIs, but the work of having HTTPS on each node is a no-go for me, and at the end of the day, I can't make my site any safer by using this API... that's really too bad. It's like telling a homeless person to give his address when he wants to buy a blanket to cover himself...

That being said, I suppose I need to find a crypto library... which is just the same as using WebCrypto from an insecure origin, but with more JS weight, less-maintained code, and worse performance.

This pisses me off greatly. I just want to calculate my SHA-1 with your super secure Web Crypto, and I can't. I don't care about being all cryptic, I just need my SHA-1, please. It looks like you got lost in definitions; not everything named "secur..." needs to be turned off.

m5x commented

I do not want to be rude, but whoever came up with the idea that crypto should not be available in a non-secure context was probably only thinking about their own narrow set of usage scenarios. We have an intranet exam server for schools, which they use separately in every classroom for computer-based final exams. We cannot use HTTPS because schools cannot buy a certificate for every intranet computer they want the exam server to run on, and we do not want to use a self-signed certificate because browsers would issue warnings, which we want to avoid. So we use window.crypto.subtle to implement our own asymmetric cryptography to protect sensitive information on the wire. Now we will have to use some JS crypto library, which will only increase bundle size and decrease performance.

bin-y commented

Can someone stop this environmentally unfriendly idea? Everyone can still calculate hashes and encrypt data without this API, just with bigger code size and lower speed.

Please reopen this. There is no justification given for the secure origin limitation. The use case of using this over HTTP is too important not to account for.
The most common use cases:

  1. A website not connected to the internet, where there is no way to order or maintain a valid certificate.
  2. The server cannot serve a valid SSL certificate (for example, a small microcontroller that can't embed certbot).
  3. A system without maintenance: certificates tend to be more and more time-limited, and providers don't update them. WebCrypto could be used to store a secret or key pair in localStorage while HTTPS is still valid, and that could be used to ensure everything is still OK even after HTTPS starts failing.
  4. During development, it's much more difficult to use this code since we might not be exposed to the internet, and thus might not have a valid certificate.

This bug actually worsens security because:

  1. One could use a JS library for what's implemented here, and serve it over HTTP.
  2. Doing so, they might use less-verified code (more prone to bugs than a highly used implementation).
  3. A MITM could capture the library and implement a backdoor in it (with WebCrypto this does not happen, since it's embedded in the browser).
  4. It's possible with subresource integrity (SRI) to ensure the JS is not tampered with. It's not possible to ensure the integrity of the main HTML, but if an attacker is able to intercept the main HTML page, they can also set up a phishing site over HTTPS with a look-alike URL.
  5. Provided that interception of the very first page load is unlikely, the JS code can use localStorage to store a key pair and use DH to derive a secret from it and the server's key (see the sketch after this list). Then, in any later session, the server can send a JS file or content encrypted with this secret. Any interception will fail, since the MITM has no way to know the secret and decrypt the file/content. They cannot gain access to the server without it, making the attack very unlikely. The only advantage would be learning the client's password, but that's not enough to do anything with.
  6. In all cases, using SRP is enough to ensure that a MITM cannot capture the password or an auth token. SRP is implementable in JS too.
  7. This increases the storage required on the server (to store equivalent JS files for all required features), and also adds battery/energy consumption on the client (since the crypto is now done in JS instead of optimized binary/hardware).
  8. This forces us to reinvent the wheel.
  9. This makes Chrome less attractive, since all other browsers allow WebCrypto over HTTP. Again, this leads to a bad web experience.
  10. HTTPS does not prevent XSS attacks, and enforcing this does not make them harder since, nowadays, anyone can have HTTPS. It's still possible to steal keys whenever malicious cross-site JavaScript runs on your fine HTTPS page with WebCrypto and hooks the methods/password field/whatever.
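
To make point 5 concrete, here is a rough sketch (my own illustration, not anything from the spec; names like getSessionKey and clientEcdhPair are invented) of keeping an ECDH key pair in localStorage and deriving a shared AES key from it and the server's public key:

// Derive a shared AES-GCM key from a persistent client ECDH key pair
// and a server-provided public key (both as JWK).
async function getSessionKey(serverPublicJwk) {
  let jwkPair = JSON.parse(localStorage.getItem('clientEcdhPair') || 'null');
  let privateKey;
  if (!jwkPair) {
    // First visit: generate and persist the client key pair.
    const pair = await crypto.subtle.generateKey(
      { name: 'ECDH', namedCurve: 'P-256' }, true, ['deriveKey']);
    jwkPair = {
      priv: await crypto.subtle.exportKey('jwk', pair.privateKey),
      pub: await crypto.subtle.exportKey('jwk', pair.publicKey),
    };
    localStorage.setItem('clientEcdhPair', JSON.stringify(jwkPair));
    privateKey = pair.privateKey;
  } else {
    privateKey = await crypto.subtle.importKey(
      'jwk', jwkPair.priv, { name: 'ECDH', namedCurve: 'P-256' }, false, ['deriveKey']);
  }
  const serverKey = await crypto.subtle.importKey(
    'jwk', serverPublicJwk, { name: 'ECDH', namedCurve: 'P-256' }, false, []);
  // The derived key is then used to decrypt content sent in later sessions.
  return crypto.subtle.deriveKey(
    { name: 'ECDH', public: serverKey }, privateKey,
    { name: 'AES-GCM', length: 256 }, false, ['encrypt', 'decrypt']);
}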

Currently, the only reasons one might want webcrypto.subtle to be restricted to HTTPS are:

  1. To force developers to use HTTPS (but developers are not stupid; they sometimes can't avoid using HTTP). This is a Chrome crusade based on heresy, with mainly faith-based justification. Political or religious beliefs should not pollute the standard.
  2. To avoid MITM/phishing (that's more a reason why HTTPS is required, not WebCrypto). Again, a developer knows that, when using HTTP, it's impossible to ensure the page displayed is the correct one. Yet there are many cases where we don't care (local network, secured internal network, etc.), or where there are countermeasures as explained above.
  3. It restricts the API to "serious" users only. Honestly, I hate this argument, but I can understand it.

I'd like to add my voice to this chorus as well. Hiding the interface is pretty developer unfriendly. I can make do without it, but only by fetching a crypto library through a CDN (or copy-paste), which adds new potential points of failure (not to mention slowing down the page).

My use-case is just that I want to avoid sending out plaintext e-mail addresses in URLs over HTTP (which must then be decryptable on the other side). The context is not secure and can't be made secure. Not being able to access a standard library function because of the context in which it runs seems to violate the principle of least astonishment.

For us as well this limitation is unjustified. HTTPS is not a good default option for configurable enterprise intranet oriented IoT devices like IP cameras etc.

@X-Ryl669 put together a wonderful post detailing a plethora of reasons this was an incredibly stupid decision. I'd like to add to the list:

  • Implementation should be based on specs, not specs based on implementation

It seems like the only reason you decided to limit WebCrypto to HTTPS was because "Chrome did it first" -- that's bunk. Unless Chrome is now 100% of the World Wide Web Consortium and holds total sway over the future of web standards, they deserve a swift kick in the crotch and need to sit down.

Here's my use case that you're buggering with: I'm developing a video player. The video player will load ads using the IMA SDK, and it will have analytics reporting playback statistics to AWS. Calls to AWS must be signed securely, so I wish to use WebCrypto for SHA-256 and HMAC generation. This player works when pushed to production, where we have a valid SSL certificate and can use WebCrypto, but for local development it fails because either:

  • The IMA SDK does not load ads on http://localhost (in an attempt to avoid scripts spamming ad servers to generate fake ad views)
  • WebCrypto will not work if I create a hosts entry to point http://test.local to 127.0.0.1 (because now I'm suddenly in an insecure context)

Now I'm forced to either generate an SSL certificate and run a web server just to test my player locally, push my code to production in order to test it (nope.jpg), or add some bulky and potentially insecure SHA256/HMAC library to my code (also-nope.jpg)

This asinine "requirement" (which makes the web slower, less secure, and less user friendly) has now turned my new developer onboarding steps from:

  • Clone repo
  • Run npm install

To:

  • Clone repo
  • Run npm install
  • Generate an SSL certificate
  • Install nginx
  • Configure nginx correctly
  • Set up a hosts file

Running into this absurdity as well: internal use, can't do HTTPS/SSL/etc. There should be exceptions at the very least. No option = anti-open.

+1

I have a scientific application running offline that uses the web browser for rendering. I simply call crypto.subtle.digest to calculate a hash of function arguments that is used for memoizing an expensive function. I was confused about why my app stopped working once I left localhost, until I found this issue. I can write my own hashing function, but it's insane to think that this standard doesn't trust developers to call a hash function in an insecure context. A hash function has many uses besides crypto.
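
For what it's worth, the pattern in question is roughly this (a sketch with made-up names, not my actual code): hash the JSON-serialized arguments with SHA-256 and use the hex digest as the cache key.

const cache = new Map();
async function memoized(expensiveFn, ...args) {
  // Serialize the arguments and digest them with the native API.
  const data = new TextEncoder().encode(JSON.stringify(args));
  const digest = await crypto.subtle.digest('SHA-256', data);
  const key = Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, '0')).join('');
  if (!cache.has(key)) cache.set(key, await expensiveFn(...args));
  return cache.get(key);
}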

I'm using a third-party library that itself relies on certain hash functions provided in window.crypto.subtle, so I really need the global defined.

I was able to shim this in webpack for local development on an insecure host like http://my-virtualbox-vm like so:

npm install --save-dev @peculiar/webcrypto
// webpack.config.dev.js
var webpack = require('webpack');

module.exports = {
  // ...
  plugins: [
    new webpack.ProvidePlugin({
      'CryptoShim': '@peculiar/webcrypto',
    }),
  ]
}
// index.js (executes in browser context)
if (window.crypto.subtle === undefined) {
    console.warn(`Shimming window.crypto.subtle using a Node.js library!`);
    // CryptoShim is injected by webpack.ProvidePlugin above.
    const shim = new CryptoShim.Crypto();
    window.crypto.subtle = shim.subtle;
}

I hope this can help any other webpack users for local development.

That said, in my case, I am actually consuming a cross-domain API secured by TLS, so it is pretty silly not to be able to generate hashes, signatures, etc., as required for API clients. In production, this client will also be served over TLS.

Another option for individuals just working locally over HTTP is to enable the chrome://flags/#unsafely-treat-insecure-origin-as-secure flag for Chrome.

Can the digest functionality be made accessible? Even on a different API to distinguish it. I just want to create a digest of some text. There is nothing security critical about my use case, otherwise I have to store a far larger blob of text for comparisons.

In case anyone else is looking for a simple digest, I'm trying https://github.com/indutny/hash.js and it seems to work well. For my purposes SHA-1 works just fine. It's a pity I'm not allowed to access the perfectly serviceable native solution, though.

This is absurd. For what goddamn reason shouldn't I be able to get the checksum of a file just because I'm not using HTTPS? What were you thinking? Why shouldn't I be able to use any feature at all unless I'm on HTTPS?

If anyone is looking for HMAC-SHA256 support (and not just a checksum or simple digest), I wrote the following for my project:

https://gist.github.com/stevendesu/2d52f7b5e1f1184af3b667c0b5e054b8

The code was designed for efficient minification, and intended to be loaded via CommonJS - but it can be pretty trivially wrapped to support other use cases. For instance:

var hmac = {};
(function (module) {
// ... paste code here ...
})(hmac);
// can utilize hmac here without requiring CommonJS

The module exports 3 methods:

  • sign: Takes as input a key and data. If the key or data are strings, they are converted to Uint8Arrays. Otherwise, expects Uint8Arrays. Returns a Uint8Array of the HMAC signature
  • hash: Takes as input a string. Returns a hexadecimal string of the SHA-256 hash. Convenience method since many services (like AWS) include SHA-256 hashes of strings in the data to be signed
  • hex: Takes as input a Uint8Array. Returns a hexadecimal string representation

Usage:

var hmac = require("./hmac");
var key = "mySecretKey";
var data = "myData";
var signature = hmac.hex(hmac.sign(key, data));

Also, what's the deal with "Chrome did it, so it's standard now"? Am I the only one who thinks a single corporation's whims shouldn't dictate technical standards? If they want to do weird stuff and require HTTPS, that's their problem.

This is absurd. For what goddamn reason shouldn't I be able to get the checksum of a file just because I'm not using HTTPS? What were you thinking? Why shouldn't I be able to use any feature at all unless I'm on HTTPS?

This is covered in the comments, but admittedly quite some time ago.

If you are on HTTP you are vulnerable to a MITM attack which could replace the webcrypto API (or indeed any of your code) with an attackers version. So, then, all security bets are off. For example, the attacker's webcrypto could return the checksum you expect for a file, but the file could still be the attacker's version.
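
To illustrate (purely hypothetically, assuming crypto.subtle were exposed over HTTP): a script injected by an active attacker and run before your code could transparently replace the native method, something like this invented example:

// Hypothetical attacker-injected code on an HTTP page (if the API existed there).
const realDigest = SubtleCrypto.prototype.digest;
SubtleCrypto.prototype.digest = function (alg, data) {
  // Return the digest of attacker-chosen bytes instead of the caller's input,
  // so a tampered file still "verifies".
  return realDigest.call(this, alg, new TextEncoder().encode('attacker payload'));
};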

At the time of writing, there were large, well-known, supposedly reputable ISPs conducting MITM attacks on the Internet to modify the Javascript code on websites. So the argument that MITM attacks are hard or rare or only applicable in particular network scenarios didn't carry much weight.

There are trade-offs, of course, as noted above: there could be applications concerned only about passive attacks, and those could work over HTTP with WebCrypto if it were not restricted to secure contexts.

@mwatson2 Out of curiosity, if malicious JavaScript has been executed in an insecure context, what is to stop someone from shimming window.crypto.subtle anyway as I had in this comment? It doesn't seem like removing the API accomplishes anything?

@mwatson2 that's a reason to use https, not a reason to FORCE people to use https. Why should people be FORCED to use it?

@StephenLynx The reason is that webcrypto over http is not functional - in the practical sense of being able to rely on the results - except in very limited circumstances. So it would be misleading for the standard to suggest that it was.

@patrickjrm Sure, an attacker performing a downgrade attack from HTTPS to HTTP could shim WebCrypto. So the existence of the WebCrypto API is not proof that the site has loaded over HTTPS. Nor is any other check you do in JS that the attacker could spoof or disable. I don't see the relevance to the question of whether WebCrypto should be available over HTTP, though.

@mwatson2 What does that even mean? You can't rely on the result of ANYTHING that comes from I/O. Are you going to force XMLHttpRequest to also be usable only under HTTPS? And if it's so important, why did no one touch on it until Chrome started requiring HTTPS?

And @mwatson2, it's not for people to justify crypto being available over HTTP; it's for people to justify why it shouldn't be. Imagine needing a justification for every single tool in every single context. It's the same as justifying a liberty: it's for the people denying the liberty to justify why it shouldn't be allowed. Otherwise it's a very authoritarian scenario. This is how I feel: this decision was the fruit of pure authoritarianism. If people want to run insecure websites, that's their problem, not the standard's.

If the standard for correctness of an API is that it cannot be corrupted by an attacker, what is the argument for enabling any API whatsoever, on HTTP or HTTPS?

HTTPS may protect against MITM attacks, but it does not protect against all forms of code injection. WebCrypto and other libraries are still vulnerable to attacks in an HTTPS environment, they're just vulnerable to a narrower range of attacks that are in principle easier to guard against.

What's more, all other libraries are just as vulnerable. Do the other libraries not carry significant risks too if they are corrupted? Is their correctness not also impossible to guarantee?

Is it within an API's jurisdiction to decide whether the caller is taking too great a risk by assuming that it is what it says it is? I personally do not think so.

I understand the need to balance between security and usability concerns, but this seems like an opaque, arbitrary decision that does not at all take the user's experience into account.

Remember that the error the user gets is that the specified API does not exist, not that its correctness "cannot be guaranteed".

These are not unreasonable points. I'm just describing how the issue was decided because that context is way way back up the thread.

Different considerations apply to a cryptography API than to other APIs that don't purport to enable implementation of secure protocols. End users have a particular interest in site developers not being given security footguns, and end users' interests come above site developers' in the priority of constituencies. That's not to say there are no security footguns in the web platform, but the existence of others would not be an argument for introducing another one.

Nevertheless, there are trade-offs, as I said; this is how this one was decided.

No, it was decided because google decided on it and you decided that papa google knows best. People were OVERWHELMINGLY against it but it went forward anyway.

Speaking of which, who are you and who do you work for? I can't seem to find anything about you.

I've been following this issue for over a year. It's been quite saddening to see the complaints coming in over that time. I guess I hoped that the weight of the complaints would eventually add up to some kind of meaningful change.

@mwatson2 you say it's "decided" so I guess there's no reason to keep following the issue. I just wasn't aware that W3C worked that way.

All the arguments given here for forcing HTTPS are dumb. There are so many cases where you can't have HTTPS and yet want minimal security, for example having at least some security on a home private network (or a business local network) that can't have a certificate.
With WebCrypto, you can secure what's on the wire to prevent anyone with Wireshark from getting all your passwords for free.

You can implement SSH-like security (that is, store keys in local storage that are fetched only on the first connection) and reject any tampering with the JS code. That way a MITM would be useless, since they wouldn't know secrets they couldn't see.

The result is why we have such "high" security in IoT: it's currently impossible to have a device on your network with HTTPS, thanks to that kind of decision (there are also other security issues not related to this one, but this is one of them).

The decision could have been useful if, for example, it would have been possible to:

  1. Sign a JS code with a certificate that's verifiable on the Internet.
  2. Load such code and ask the user to accept it
  3. Enable webcrypto in that case for this file.

If a MITM kicks in, the signature will break and the browser pops up one of the famous red warnings.
No HTTPS required for the JS source. No Google authoritarianism, just a smart decision instead of this one.

@mwatson2 you say it's "decided" so I guess there's no reason to keep following the issue. I just wasn't aware that W3C worked that way.

It was decided in the sense that the WG made a decision by consensus as to what to put in the Proposed Recommendation back in 2016, and then there were no objections to the progression to Recommendation. That is how the W3C works (decisions by consensus). If the consensus changes, of course it can be reopened, but I don't see any sign of that.

Speaking of which, who are you and who do you work for? I can't seem to find anything about you.

Not that it should matter, but I work for Netflix and I am one of the editors of the WebCrypto specification, though to be honest I have done very little to deserve that title recently.

A consensus that clearly didn't involve that many people consenting to it. How publicized was the process that led to the decision? I see mostly opposition in this issue's comments.

bobf commented

@StephenLynx Although I don't disagree with some of your points I do think that everyone would benefit from you being a lot more polite. Your attitude is very discouraging for open source developers and people who might participate in the discussion.

Hi
can you help me?
ERROR Error: Uncaught (in promise): TypeError: Cannot read property 'importKey' of undefined
TypeError: Cannot read property 'importKey' of undefined

This is not on a local server.

Hi Folks,

I use the React Native WebView component to load my login screen in the mobile application. With the latest Safari update in iOS 15, there is a need to make the WebView load in a secure context. I use https://<hostname> to load the WebView.

But when I try to use window.crypto.subtle.generateKey function, the method throws an exception saying undefined is not an object (evaluating 'a.subtle()[f]')

Is there some way I can load the webview in a secure context other than just having a https session?

Link from Apple team: https://trac.webkit.org/changeset/279628/webkit

Any help appreciated. Thanks.

twiss commented

@varunnaagaraj This sounds like a webkit-specific question or issue, so I would start there. In any case, it's off-topic for this thread.

(Not totally off topic, IMHO, since WebKit is the new W3C, as far as I'm observing.) Anyway, @varunnaagaraj, you are doomed to find another solution without WebCrypto; as this issue points out, there is no WebCrypto over HTTP. And there is no HTTPS for localhost, so no WebCrypto for you.

twiss commented

@X-Ryl669 localhost is in fact considered a potentially secure origin, see https://w3c.github.io/webappsec-secure-contexts/. I believe Web Crypto works on localhost in most browsers.

Well, sorry, my mistake. I meant a local host on your LAN (not 127.0.0.1 or ::1).

Yeah, @twiss is right. localhost is considered secure. I also checked the originWhitelist and it has been set to allow all content in WebCrypto.
@twiss do you happen to know if any polyfill changes can help make it into a secure context?

Different considerations apply to a cryptography API than to other APIs that don't purport to enable implementation of secure protocols. End users have a particular interest in site developers not being given security footguns, and end users' interests come above site developers' in the priority of constituencies. That's not to say there are no security footguns in the web platform, but the existence of others would not be an argument for introducing another one.

Nevertheless, there are trade-offs, as I said; this is how this one was decided.

I've been monitoring this thread for a long time... I feel like a happy compromise that could resolve many of the complaints in this thread satisfactorily would be to white-list class A, B, C private subnets to use this module (e.g. if you are developing on a VM or other computer on the same LAN network). So allow crypto on local networks like 192.168.0.x, 10.0.0.x, 172.16.0.x and similar.

@pztrick you can't make assumptions about how routers are configured. Even given that the 192.168.0.0/16 range is reserved by (ICANN? ARIN? I forget which entities control what), a compromised router may deliver traffic destined to these addresses to any server on the web.

I did say compromise and not perfection. :)

@pztrick you can't make assumptions about how routers are configured. Even given that the 192.168.0.0/16 range is reserved by (ICANN? ARIN? I forget which entities control what), a compromised router may deliver traffic destined to these addresses to any server on the web.

My understanding, and correct me if I'm wrong, is that the rationale for removing the crypto API on non-secure hosts is so developers catch this footgun if, for example, they develop on localhost and then deploy to a production server serving HTTP rather than HTTPS. Removing the usefulness of crypto API methods for other legitimate uses on private networks seems like overkill: the use cases I imagine for delivering apps to users on the wide internet would still hit this limitation when deploying over HTTP on public IPs, so we aren't handing anyone a footgun. A development team will catch that limitation on the public IPs and alter their code/deployment.

On the other hand, if your hostname is accessed over HTTP without HSTS, any usage of the crypto API is moot, since a malicious party already has control. You're already MITM'd.

twiss commented

@twiss do you happen to know if any polyfill changes can help make it into a secure context?

@varunnaagaraj Sorry, I'm not sure I understand your question. There's nothing a polyfill or script can do to change whether a context is considered to be secure/trustworthy, no.

Is there any way that I can load a React Native application in a secure context itself? I guess I am hitting the scenario below, where the app loads in an insecure context, I open an https URL, and it is still considered insecure.
[Screenshot of the scenario attached]

@twiss do you happen to know if any polyfill changes can help make it into a secure context?

@varunnaagaraj Sorry, I'm not sure I understand your question. There's nothing a polyfill or script can do to change whether a context is considered to be secure/trustworthy, no.

Well, what I meant was: PolyfillCrypto basically creates a Proxy reference to all the SubtleCrypto APIs. I'm guessing whether this WindowProxy is the reason the session is deemed insecure.
Reference:
https://www.w3.org/TR/html51/browsers.html#the-windowproxy-object
https://www.w3.org/TR/secure-contexts/#is-origin-trustworthy

twiss commented

The two things are unrelated. A Web Crypto polyfill can provide fallback implementations of crypto operations, but it can't access the native implementation of Web Crypto if the context is insecure. The context being insecure because of an insecure opener that holds a reference via a WindowProxy can be fixed, as explained further down in the example you saw.

end users interests come above site developers in the priority of constituencies

The doctrine you refer to ends: "Of course, it is preferred to make things better for multiple constituencies at once." Let's check on all named constituents:

  1. Users - βœ… more faith in web's security
  2. Authors - ❌ add me to the list of people architecting around this in my own web app
  3. Implementors - ❌ see above comment about added PKI energy and costs
  4. Specifiers - ❌ it just makes w3c look so subservient to Big Tech, it's a bit disgusting
  5. Theoretical Purity - ❌ breaking separation of concerns as this comment and others note

To follow the priority of constituencies would be to help all but one group at once by not requiring secure origin.

twiss commented

Of course, if it was possible to make both users and authors happy, then we should do that. But that's why the first sentence applies: there's a conflict here, and users are more important than authors (and everything else on the list). It doesn't say that authors + implementors + specifiers + theoretical purity outweigh users. And anyway I don't agree with the last three crosses on that list. IMO more accurate would be:

  1. Users - βœ…
  2. Authors - ❌
  3. Implementors - βœ… implementors have indicated they prefer to make Web Crypto available on secure contexts only
  4. Specifiers - βœ… the previous editor and the current editor (me) are in this thread :) And personally I agree with this stance, because...
  5. Theoretical Purity - βœ… ...using Web Crypto on an insecure context is almost always insecure. Allowing its use would give a false sense of security in most cases. And, just because there are other APIs available on insecure contexts, doesn't mean that new ones should be.

The web in general is moving to being HTTPS-only. It's even easier and cheaper today to request a TLS certificate for any website than it was in 2016, thanks to initiatives like Let's Encrypt. For local applications, something like Electron can be used. For IoT devices, rather than hosting the entire web app on the device, I would recommend hosting a web app (or creating an Electron app), and then communicating from that web app with the IoT devices.

If you have some specific use case where you can't use HTTPS, but you believe you nevertheless have a secure application, I would recommend proposing an exception to the Secure Contexts specification instead; then, you can use all APIs that require a secure context in your application, not just Web Crypto. I'm not sure how such an exception would look, as it depends on the exact use case. However, I think this has a better chance of succeeding, if you can somehow show the user (agent) that the origin is secure, as this has the user's experience in mind. Shipping a web application over HTTP (which is nowadays shown by many browsers as insecure) and then somehow trying to convince the user that the application is secure anyway, will always lead to a suboptimal user experience.

So your proposal is... no solution?
Right now, the security model on the web is based on a central service that must be contacted to assert validity. Said differently, with current web standards you can't provide a secure solution for an item in your house that's connected only to your LAN and not to the internet. That's not the internet (since you can't have a secure local network with this scheme); it's Skynet: kill the head (the Let's Encrypt servers) and everything collapses.

Fortunately, other people have solved this; OpenSSH, for example, uses a different scheme: security is asserted on first connection, and then a cryptographic signature is stored locally for subsequent connections.

Right now, by forbidding browsers from offering a secure WebCrypto implementation (one that can be relied upon in JS), you're providing exactly zero value to the world. Over HTTPS, you can send whatever crypto code you want as JS and it'll be secure; you don't need WebCrypto. In that situation, WebCrypto's only benefit is speed, and its main downside is complexity.

If WebCrypto were allowed in insecure contexts, but with security guarantees (for example, being able to store a signature of a server's main script in local storage and have the signature verified before running the script), you could provide a solution for the local-network issue that many of us are trying to work around.

Take the following scenario:

  1. Some gizmo is being powered on.
  2. You connect to it via its access point over HTTP (since it is not connected to the internet).
  3. The server sends a JS script asking to access WebCrypto, along with a token.
  4. The user is prompted to confirm she trusts the device/server.
  5. WebCrypto signs the script and stores the signature and the server's token.
  6. Optionally, the user changes the device's WiFi connection to join her network, but doesn't provide a gateway.
  7. The device connects to the user's WiFi network and gets a local IP address.
  8. Upon a later connection from the user, when the server sends its WebCrypto script-access request again, the token is checked in localStorage and no prompt is shown. Everything is secure from then on; the script is certified. Obviously, the script should contain some code to store and retrieve a private key pair in local storage, encrypted with a WebCrypto random salt, and use that to initiate a Diffie-Hellman session key negotiation.
  9. If a MITM tries to impersonate the server, they can't send another script, since the signature check will fail. They can't get the private key, since the private key in local storage is encrypted, and to decrypt it they would need to pass the initial signature check.
  10. This scheme's weakness is the first connection, where a MITM can impersonate the server's initial response. Since WiFi has used this scheme in WPS for years, I would say it's an acceptable security risk: a lot better than no security at all and probably about as safe as HTTPS currently is.

Your position, as I understand it, is not to allow any kind of innovation in security schemes.
"If something is working, don't try another way to do it" — I'm not convinced...

twiss commented

No, my proposal is to solve this problem in w3c/webappsec-secure-contexts#60, instead of here. In that issue, someone proposed to consider local devices as secure origins if the user has indicated that they're on a private network. To me, that seems like a much simpler solution than what you're proposing, and is not specific to Web Crypto.

I understand your proposal. But I don't see how it would solve the MITM issue if, suddenly, the LAN address space became a secure context. Since HTTPS security is based on verification against an external server (the CA), and on a local address space you can't have a CA, allowing 192.168.1.0/24 to be secure without a CA means that any device could replace another device's web service and scripts just by impersonating/spoofing that device's IP address.
The MITM device could simply spoof the initial server with its own scripts to capture credentials or data or whatever, and it would go completely unnoticed.

IMHO, I don't think it can be solved without changing the paradigm for local network: go from a central authority system to a decentralized trust system.

Secure contexts do not solve the former (I'm probably missing something here?), and you can use Web Crypto to solve the latter.
I feel like the defect in WebCrypto access is being patched externally instead of with the more natural solution: allow it to run everywhere, don't you think?

But you are right, it's probably not the right place to discuss this proposal.

twiss commented

But I don't see how it would solve the MITM issue if, suddenly, the LAN address space became a secure context. Since HTTPS security is based on verification against an external server (the CA), and on a local address space you can't have a CA, allowing 192.168.1.0/24 to be secure without a CA means that any device could replace another device's web service and scripts just by impersonating/spoofing that device's IP address.

In the subclause "if the user has indicated that they're on a private network" is hidden the assumption that the user trusts their local network and the devices in it. For example, if you don't trust your own device, HTTPS is also not secure, as malware on your device can MITM that as well.

Considering local devices as secure origins (if the user trusts them) addresses the use case that was brought up, as you can then use Web Crypto (and other APIs that require secure origins). At the very least, it's not less secure than allowing Web Crypto to be used on HTTP in general ;)

If you want more security than that, indeed a separate proposal would be needed.

At the very least, it's not less secure than allowing Web Crypto to be used on HTTP in general ;)

That's true. So why deny it in the first place?

twiss commented

What did I deny, exactly? I never argued against allowing Web Crypto on LAN. I was merely arguing against allowing Web Crypto on HTTP in general.

I've been following this topic for years and am astounded at the few supporting the requirement for HTTPS, regardless of how prevalent it is.

A good encryption library should not rely on having certificates from centralised issuing authorities, when it isn't needed for that library's internal logic. That's insane.

Ok, so that's where I disagree with you. WebCrypto should be allowed over HTTP, maybe only on a LAN, but it should be accessible. On a LAN it's not possible to have HTTPS (except with horrible DNS tricks and an internet connection, so a WAN).
Maybe getting the LAN recognized as a secure context is enough for this (and I'm with you if that's your opinion).

However, having WebCrypto on HTTP anywhere else (WAN) doesn't decrease security (I think it's the opposite). When something needs to be done, it'll be done, and we don't live in a world of unicorns and fairies. Without WebCrypto, the result is sending crypto code as insecure JS, unauthenticated and uncertified. With WebCrypto, at least the crypto toolbox is standard, secure, and not tampered with. The code that uses these WebCrypto primitives can still be modified by a MITM over HTTP unless something is added to the proposal/specification.

With few modifications (like I've described above), it can even be made secure against MITM after first contact.

It's not really complex to add, since we already have script signature checking in browsers (with the Subresource Integrity specification). Adding a secret per-browser salt token and a dictionary of token => HMAC(script, secret||token) in local storage, and checking it before allowing the script to run, is a few lines of code (see the sketch below). The most difficult part is adding a dialog to ask the user whether she wants to trust the server initially (but again, we already have those for setting up bad HTTPS certificate exceptions).
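
For illustration only, the verification step being described could look roughly like this (names such as browserSecret and scriptRegistry are invented; in reality this logic would live inside the browser, not in page script):

// Sketch: verify a previously trusted script before running it.
async function isScriptTrusted(token, scriptText, browserSecret) {
  const enc = new TextEncoder();
  // Use secret || token as the HMAC key, per the scheme described above.
  const key = await crypto.subtle.importKey(
    'raw', enc.encode(browserSecret + token),
    { name: 'HMAC', hash: 'SHA-256' }, false, ['sign']);
  const mac = await crypto.subtle.sign('HMAC', key, enc.encode(scriptText));
  const hex = Array.from(new Uint8Array(mac))
    .map(b => b.toString(16).padStart(2, '0')).join('');
  // scriptRegistry maps token -> HMAC recorded on the first (trusted) load.
  const scriptRegistry = JSON.parse(localStorage.getItem('scriptRegistry') || '{}');
  return scriptRegistry[token] === hex;
}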

As you said, it can be a new proposal. I think it's too small to make a separate proposal, and I don't think it'll be backed by Google; that's why I think it should be in this specification.

A good encryption library should not rely on having certificates from centralised issuing authorities

A crypto toolbox that's only doing what's already done/required in HTTPS is useless anyway, since you can already use HTTPS to send one to the browser. A crypto toolbox should allow building crypto algorithms and applications with either a centralized or a decentralized architecture. Right now, the specification is centralized only: if the Let's Encrypt servers are DDoSed, the whole web will collapse (and no, the fact that there are other CAs is not a justification for decentralization; when one actor is free versus paying competitors, it is the main actor).

To allow decentralization of the web, one should not rely on a technology that requires a single host (or a few hosts) to be alive, but instead provide a way to trust any other host, and a crypto toolbox is mandatory to achieve this. It's a shame that WebCrypto doesn't run over HTTP, because this large area of possibilities is denied to us.

twiss commented

Ok, so that's where I disagree with you. WebCrypto should be allowed over HTTP, maybe only on a LAN, but it should be accessible. On a LAN it's not possible to have HTTPS (except with horrible DNS tricks and an internet connection, so a WAN).
Maybe getting the LAN recognized as a secure context is enough for this (and I'm with you if that's your opinion).

Yes, that's what I meant. And that's what w3c/webappsec-secure-contexts#60 is :)

As you said, it can be a new proposal. I think it's too small to make a separate proposal, and I don't think it'll be backed by Google; that's why I think it should be in this specification.

Well - at the very least it should be outside of this issue :) As you'll notice, this issue isn't even about allowing Web Crypto on insecure origins; it's about allowing Web Crypto on secure origins only. As that has been done, this issue is closed and will never be reopened. So if you want an open issue to discuss this stuff in, you should create a new one. But, apart from that, whether Google implements something doesn't depend on whether it's included in a big specification or a small one. And in fact, additions to this standard have to go through incubation anyway, so you'd have to write a (small) specification regardless. But that's a good thing, not a bad thing: if you just sneak something into an existing specification without gaining consensus on it, nobody will notice or implement it.

A crypto toolbox that's only doing what's already done/required in HTTPS is useless anyway, since you can already use HTTPS to send one to the browser.

That's not entirely true. Web Crypto can be used to implement end-to-end encryption, which HTTPS can't offer. There are some challenges around trusting the code if you don't trust the origin, but that's again orthogonal to this issue. Additionally, in a web extension (which is also a secure origin), you can use Web Crypto to send encrypted data to a server if you trust the web extension but not the server, for example.
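
As a trivial illustration of what I mean (a sketch, not part of the spec; sharedKey is assumed to be an AES-GCM CryptoKey, e.g. derived via ECDH), data can be encrypted client-side so the server only ever sees ciphertext:

// Encrypt a message with AES-GCM before sending it to the server.
async function encryptForPeer(sharedKey, plaintext) {
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv }, sharedKey, new TextEncoder().encode(plaintext));
  return { iv, ciphertext }; // only these ever leave the client
}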

A crypto toolbox should allow making crypto algorithms and applications ever with a centralized architecture or a decentralized architecture. Right now, the specification is centralized only, since if Let's Encrypt server is DDOS, the whole web will collapse (and no, the fact that there are other CA is not a justification for decentralization, when one actor is free vs paying competitors, it's the main actor).

If it's your position that it should be possible to make secure decentralized web apps, then that's reasonable; however, for that you need secure code distribution, which comes before Web Crypto, not after. If you want a mechanism where the code is signed and trusted by the user with a trust-on-first-use model, that's potentially possible, but then the "index.html" should be signed as well (and the signature verified by the browser). If that's implemented, then the origin could be considered secure, and then Web Crypto could be used there. However, using Web Crypto (which is run by the code of the web app) to verify the code of the web app is not possible.

That's probably the main point here.

You don't need secure code distribution, unless you consider the browser's internal code a distribution. Secure code distribution is what HTTPS does, and we can safely say it works well, but that doesn't mean other ways to distribute code over an unsafe channel won't work. Trust-on-first-use is an example of a secure code assertion over an insecure channel. It requires a secure, non-falsifiable tool, and Web Crypto can provide it, since it resides in the web browser, not in the code that's sent over the wire. There are other methods to send secure code over an insecure channel (TLS is typically one of them too). They all rely on a trusted implementation of some primitive, and Web Crypto should be that implementation.

It's not possible to verify the web app right now on HTTP but it can be made possible with 2 changes to the specifications:

  1. Allowing web crypto everywhere
  2. Allowing a taint mode for security. Once a web page tries to instantiate WebCrypto, a secret (kept from the JavaScript code and the server) is generated that's used to verify/sign/assert the content of the web page against the web page's own token. I think you've got that part, so I won't repeat how (that should be part of a new proposal).

So if you think it's worth it, I'll bootstrap another issue, but I don't know whether it should be in this repo or another one.

Why was this even accepted :/