meganz/webclient

very weak cipher for *.userstorage.mega.co.nz subdomains

rezad1393 opened this issue · 14 comments

I see that the web client on the mega.nz website uses only TLS_RSA_WITH_AES_128_CBC_SHA.
This is a very weak cipher suite, which is a bad fit for a security-oriented website.

Also, when I use a Firefox add-on to download files in Firefox, I get:

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at "https://gfs262n351.userstorage.mega.co.nz/dl/xxxxxxxxxxxxxxxx. (Reason: CORS request did not succeed). Status code: (null)."

In general you might be right. But if I'm not mistaken, this URL only serves already encrypted chunks, so from a confidentiality perspective the communication could just as well be conducted over an unencrypted, plain socket or HTTP connection. However, that would break the web site's use of HTTPS. So I presume this was a strategic move: reduce the security parameters for the HTTPS (TLS) negotiation to the lowest possible level to cut the encryption overhead and speed up the TLS handshake.

I believe that while I was still working for Mega, RC4 may still have been permitted (and was used for those hosts). I wouldn't normally go there, but for the purpose of carrying pre-encrypted chunks it's fully sufficient.

I understand that, and I respect the technical compromises made for speed.
The only reason I even noticed this is that I have disabled the non-FS (forward secrecy) ciphers in Firefox.
Maybe using a forward-secrecy cipher could be considered?
Non-FS ciphers are at risk from traffic being recorded now and decrypted later (especially with very weak ciphers), while FS ones don't have that issue.

As for the overhead: if it's about bandwidth, that is the same for all ciphers; if it's about CPU usage for the user and the server, I think that concern was reviewed about 10 years ago and found not to be an issue (hardware acceleration in CPUs mostly takes care of it), because these ciphers only affect the handshake, and the actual data transfer uses a symmetric cipher that costs roughly the same across TLS 1.2 suites.

Now, if MEGA decided to support TLS 1.3 on top of TLS 1.2, many users would at least be able to use it securely.

Using SSL Labs to scan these subdomains as well as MEGA's main domains shows that they get a low score, because they either still support the deprecated TLS 1.0 and TLS 1.1 or they offer (as in this issue) very weak ciphers. Even Microsoft and Firefox have removed TLS 1.0, and that was after postponing the removal for compatibility with some official health websites during COVID.

https://www.ssllabs.com/ssltest/analyze.html?d=gfs214n109.userstorage.mega.co.nz
https://www.ssllabs.com/ssltest/analyze.html?d=mega.co.nz
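
For reference, the same observation can be reproduced without SSL Labs. The sketch below (not part of the original report, just an illustration using only the Python standard library) prints the TLS version and cipher suite each host actually negotiates against your local OpenSSL defaults:

```python
# Print the negotiated TLS protocol version and cipher suite for the hosts
# from the SSL Labs links above. Results depend on what your local OpenSSL
# build offers; with strict client defaults the handshake may simply fail.
import socket
import ssl

def negotiated_params(host: str, port: int = 443) -> tuple[str, str]:
    """Open a TLS connection and return (protocol version, cipher suite name)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cipher_name, _proto, _bits = tls.cipher()
            return tls.version(), cipher_name

for host in ("gfs214n109.userstorage.mega.co.nz", "mega.co.nz"):
    try:
        version, cipher = negotiated_params(host)
        print(f"{host}: {version} {cipher}")
    except OSError as exc:  # covers SSL handshake failures and connection errors
        print(f"{host}: connection/handshake failed ({exc})")
```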

Maybe the whole website can be upgraded to the recommended TLS practice here:
https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29
and keep the old compatibility for the Android and desktop apps? That would be much better for most users.

I would have suggested the Modern compatibility profile, but maybe some people in the world don't have access to a Firefox released in the last year (a big if); the Intermediate compatibility (recommended) profile, however, is accessible everywhere.

I don't work there any more, so I'm predominantly interested in this discussion from a cryptography/security point of view.

Looking at the cipher suite used, I believe you're right that it is a relatively cheap "throw away" to use FS negotiation schemes, but only if they use elliptic-curve cryptography for the Diffie-Hellman key agreement, as that requires far less entropy and computational overhead to generate new key pairs. As this is pretty much a one-off cost for establishing a connection, it won't have a detrimental impact on ongoing throughput. Even though a public-key algorithm is involved (which is 4-5 orders of magnitude slower than symmetric ciphers), it's relatively painless and quick.
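
To put a rough number on how cheap ephemeral EC key agreement is, here is a small sketch using the third-party pyca/cryptography package (an assumption; nothing in this thread uses it) that times a full X25519 generate-and-exchange round:

```python
# Time a complete ephemeral X25519 exchange: both sides generate throwaway
# key pairs and derive the shared secret. This is the per-connection cost
# that buys forward secrecy; it does not affect bulk transfer throughput.
import timeit

from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

def ephemeral_exchange() -> bytes:
    client_priv = X25519PrivateKey.generate()
    server_priv = X25519PrivateKey.generate()
    # Each side combines its own private key with the peer's public key.
    shared_client = client_priv.exchange(server_priv.public_key())
    shared_server = server_priv.exchange(client_priv.public_key())
    assert shared_client == shared_server
    return shared_client

runs = 1000
total = timeit.timeit(ephemeral_exchange, number=runs)
print(f"~{total / runs * 1e6:.0f} µs per full ephemeral exchange")
```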

AES-NI significantly improves encryption performance in hardware, though there is still a noticeable impact on resources (especially if the server's I/O is fully saturated by that kind of activity, as those servers would be). So tuning down the symmetric load can have a significant impact on throughput, particularly as it's not the block cipher (AES) alone: it also involves continuous computation of the authentication tag. Moving from CBC-SHA to GCM would probably be beneficial, but would still add to the overall processing effort. In another case (AES payload encryption in Parquet files using Hadoop on Java 9), the overhead of AES-GCM (stream encryption using Galois/Counter Mode with authentication) over AES-CTR alone (stream encryption in counter mode, no authentication) was a factor of 3 to 4.5.
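
A rough way to see that CTR-vs-GCM gap on your own hardware, again sketched with pyca/cryptography (an assumption; the 3-4.5x figure above came from a Java/Hadoop setting, so expect a different ratio here depending on AES-NI/CLMUL support):

```python
# Micro-benchmark: bare AES-CTR keystream encryption vs. AES-GCM, which also
# computes a GHASH authentication tag over every byte of the payload.
import os
import timeit

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = os.urandom(16)
chunk = os.urandom(1 << 20)  # 1 MiB payload, roughly chunk-sized

def ctr_only() -> bytes:
    nonce = os.urandom(16)  # CTR nonce is one block (16 bytes)
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(chunk) + enc.finalize()

def gcm() -> bytes:
    nonce = os.urandom(12)  # standard 96-bit GCM nonce
    return AESGCM(key).encrypt(nonce, chunk, None)

for name, fn in (("AES-128-CTR", ctr_only), ("AES-128-GCM", gcm)):
    secs = timeit.timeit(fn, number=50)
    print(f"{name}: {50 * len(chunk) / secs / 1e6:.0f} MB/s")
```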

So, this is all purely theoretical reasoning. I can see and appreciate both sides of the argument, and any decisions or advice on what should be done with Mega's servers are well out of my realm of influence.

Having said all that, I believe that due to the nature of how Mega handles data encryption and transfers, there is no gain or loss of security possible here: all data is pre-encrypted (with completely different keys), so FS or not, weak cipher suites or not ... the data won't be at risk. The big difference is (IMHO) what the browsers report and what the customers see, mainly because the browser vendors are (gladly) tightening their cryptography requirements, which is highly beneficial for the "average" web site, but less of an issue when dealing with Mega.
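
For readers unfamiliar with that argument, here is a conceptual sketch (not MEGA's actual chunk format, just an illustration with pyca/cryptography) of why the TLS layer only ever sees ciphertext:

```python
# The file chunk is encrypted client-side under its own key before it touches
# the wire, so whatever TLS suite (weak or strong) wraps the transfer carries
# only ciphertext. NOTE: illustrative only; not MEGA's real scheme or key handling.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

file_key = AESGCM.generate_key(bit_length=128)  # known only to the client
plaintext_chunk = b"contents of one upload chunk"

nonce = os.urandom(12)
wire_payload = AESGCM(file_key).encrypt(nonce, plaintext_chunk, None)

# `wire_payload` is what the HTTPS (or even plain HTTP) request would carry;
# a passive recorder of the TLS traffic still needs `file_key` to read it.
recovered = AESGCM(file_key).decrypt(nonce, wire_payload, None)
assert recovered == plaintext_chunk
```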

Great answer, even if many of the technical terms flew over my head.
About browser support: I would not be surprised if browsers removed non-FS ciphers in the intermediate future.
Right now some of the web still doesn't support that, but according to this sample, https://www.ssllabs.com/ssl-pulse/,
the unsupported domains are less than 1%. Unfortunately, the MEGA subdomains seem to be in that 0.7%.

Maybe in the near future MEGA moves to that on the web (if their servers allow it), and moves the more performant options (less load, faster speed) to the desktop and mobile apps, which can implement whatever library they want: maybe plain HTTP, or even a non-standard, faster protocol that combines transfer and encryption in one and doesn't need an HTTPS layer on top.

It's a really good thing that browsers are tightening their minimum requirements for cipher suites.

And I'm pretty sure that Mega will "move with them". I saw it happen back then with the removal of the RC4 config from the servers, and I'm pretty sure that (eventually) it will happen here, too. After all, if the browsers choke, a good part of the value proposition goes.

Lastly, I believe that the non-browser clients already are using unencrypted connections to the servers to enhance transfer capabilities. At least that's how things were back then, and I'd be surprised if that has changed.

Hi,
Thank you guys for the feedback.

As @pohutukawa mentioned, there's no security gain in enforcing longer (i.e. more secure) RSA keys for the TLS used in HTTPS communications.

Generally, this is an important aspect of keeping the communication between the client and the server secure: the longer the keys used, the harder it is (or the closer to being practically impossible with currently available hardware and CPUs) to break the encryption, i.e. to find or figure out the private key.

With the MEGA service, HTTPS encryption is not necessary to maintain the security of the communication between the client and the server, since the payload (the data) is already encrypted using symmetric AES encryption, with the client being the only party that has the key for it.
In fact, in our Desktop client there's an option to use HTTP instead of HTTPS in order to get some performance gain.

Therefore, we value the lighter RSA key length for TLS in the web client, to reduce the load on the server and on the client during communication.
And we will probably always use the minimum key length acceptable to browsers.

@khaleddaifallah-mega Thanks for weighing in here from the side of Mega.

As for the keys, I'd still suggest ditching RSA keys (for TLS certificates or key negotiation) altogether and adopting ECDHE (elliptic curve Diffie-Hellman ephemeral). This should still speed up the process (key negotiation only happens once at the beginning of a session, and EC keys are usually a bit faster than clunky RSA), as well as allow the browser or SSL Labs to report forward secrecy (which is technically a non-issue here, but practically shows the user that things are more "straight").

As for the stream encryption: Go for the lowest symmetric cipher suite possible that still keeps the modern browsers happy.

@pohutukawa
Very much yes to the first part, but no to the second part.

First part: absolutely. The hardware the user has and the hardware MEGA's servers run (if they are not older than 15 years) don't lose much performance by moving to a modern/intermediate protocol for the start of the handshake, because, as @khaleddaifallah-mega said, the main part is the data stream, and that (as all TLS in browsers does) uses symmetric encryption. The connection start is asymmetric, so it is slow but short in terms of time and resources spent, and the data transfer is symmetric, so it is fast and almost always hardware accelerated (the only one I know of that is not is the relatively new ChaCha20-Poly1305 in TLS 1.3).

Second part: no.
MEGA is different from the usual HTTPS website in that it hosts (usually big) files and is more similar to video hosting sites (though those can split video via HLS), so the main part is not the handshake but the data stream.
And because that part is hardware accelerated, I think MEGA can use a strong but not heavy symmetric cipher for it in standard HTTPS. Using the "lowest symmetric cipher suite possible" is not OK, because that would mean MAC-then-Encrypt, which is vulnerable to (or contributes to the vulnerability of) "[BEAST], as well as a slew of padding oracle vulnerabilities such as Lucky 13 and Lucky Microseconds" (taken from https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/).

My suggestion is to drop everything pre-TLS 1.2 and the unsafe TLS 1.2 suites, and basically use the Intermediate profile from https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29.
It covers 99 percent of users, and even the 1 percent who don't have that can use the desktop app (see the sketch below for what that profile amounts to).
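
For concreteness, here is roughly what the Intermediate recommendation looks like when expressed as a Python ssl.SSLContext (MEGA's front ends are obviously not Python; the equivalent settings would go into their web server or load balancer config):

```python
# A server-side TLS context restricted to TLS 1.2+ with only forward-secret
# AEAD suites, which is the core of the Mozilla "Intermediate" profile.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# Drop everything below TLS 1.2; TLS 1.3 suites are enabled automatically.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
# For TLS 1.2, allow only ECDHE key exchange with AES-GCM or ChaCha20-Poly1305.
ctx.set_ciphers(
    "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:"
    "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:"
    "ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305"
)
# ctx.load_cert_chain("fullchain.pem", "privkey.pem")  # hypothetical file names
```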

So basically go from TLS_RSA_WITH_AES_128_CBC_SHA (which is weak because of CBC with MAC-then-Encrypt, SHA-1, and the non-forward-secret RSA key exchange) to at least ECDHE-ECDSA-AES128-GCM-SHA256.
This moves from:
RSA key exchange to ECDHE with ECDSA,
MAC-then-Encrypt to AEAD,
and from CBC to GCM (not sure if that is a one-to-one mapping).

Since Tor Browser 12 has already disabled some cipher suites, we cannot use MEGA unless those TLS cipher suites are upgraded.

And Python has also disabled some cipher suites since version 3.10.

It looks like a SHA256 cipher suite is now supported, so we can use MEGA on Tor Browser 12.

But it still doesn't satisfy Python's requirements, because ciphers without forward secrecy are disabled. (See python/cpython#88164 for details.)
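
You can see why directly from Python's default client configuration (a small illustration, not from the linked issue): on 3.10+ the default context simply no longer offers any static-RSA key exchange suites, so a server limited to TLS_RSA_* has nothing in common with it.

```python
# Inspect the cipher suites a default Python 3.10+ client offers. Every TLS 1.2
# entry uses ECDHE/DHE key exchange (forward secret); TLS 1.3 names start with
# "TLS_". The static-RSA TLS_RSA_WITH_AES_128_*_SHA* family is absent, so a
# handshake with a server that only offers those suites cannot succeed.
import ssl

ctx = ssl.create_default_context()
names = [c["name"] for c in ctx.get_ciphers()]

print(f"{len(names)} suites offered by default")
# Anything not starting with ECDHE-/DHE-/TLS_ would be a non-FS leftover;
# on 3.10+ this prints an empty list.
print([n for n in names if not n.startswith(("ECDHE-", "DHE-", "TLS_"))])
```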

I still don't get why MEGA uses non-FS ciphers.
When MEGA uses HTTPS, the server load and overhead and everything else are already there, so it can't be that.

Please consider using a secure, modern TLS practice instead of the current one.