Simple WebCrypto exploit in an untrusted `window` environment
Using `window.crypto` is not safe unless we have assurances about the browser environment. The following exploit could be mounted by any plugin or script that has access to the browser window in which libsignal is running.
Steps to reproduce:

- Come up with a bogus `myFakeGetRandomValues(typedArray)` method that generates non-random output.
- Set `window.crypto.getRandomValues = myFakeGetRandomValues`.
- Import signal as normal.
Conclusion
I believe this exploit speaks to the absolute necessity of keeping the execution environment isolated from the ordinary userland browser, which is probably a madhouse of unsafe code running, reading the page, and so on. If you must run the Signal protocol in-browser, run it in Electron or as a Chrome app (Signal Desktop currently does the latter).
In the future, we may think about moving toward Emscripten-compiled dependencies for cryptographic primitives. At the end of the day, `window.crypto` can be absolutely anything. If we can bundle all primitives with the rest of the application code, we can verify the integrity of that one JS bundle, e.g. with subresource integrity.
See discussion on HN here. Copying AgentME's excellent response for posterity.
> If the attacker is running code within the same javascript context, within the browser's process, or within the user's operating system kernel... then you're hosed. Anything can be anything. Other javascript within the same context could redefine global functions, intercept objects passed through them, and mutate function references in your JS bundle. Or it could just log the DOM! A browser plugin or a kernel rootkit can keylog the user. The only defense an application has against the user's own machine being compromised is obfuscation, and that's a losing battle.
>
> There are real issues with doing cryptography with users' keys in web pages, but it's not "their machine might be compromised" (and Electron doesn't solve that anyway). Even if the page javascript correctly stores user keys in localStorage where the server can't see them, nothing stops the server from serving you some backdoored javascript tomorrow which silently uploads your localStorage to the server. This might be correctly solvable with ServiceWorkers, though you'd want users to have some way to verify that they have the correct and peer-reviewed ServiceWorker source running. The easiest way to do that would be some kind of local application or browser plugin, but at that point, having involved a local application, you've missed some of the original goal of keeping it all in a browser, and it would probably be easier for everyone involved if the crypto just happened in the local application to begin with.
> Interestingly, some `window.crypto` functions actually solve some of the problems with running cryptography in web pages. You can create and use a crypto key that is handled by the browser and never has its key material exposed to javascript. Even if an attacker injects javascript into the page or the server serves malicious javascript the next day, there's no way to steal the key material. `window.crypto` can effectively provide a virtual HSM from the web page's perspective.