uNetworking/uWebSockets.js

TechEmpower ban 2024; the lore

uNetworkingAB opened this issue · 15 comments

Sour critique leads to ban

His ego couldn't take me pointing out that his quality control is zero (which it is), so he permanently banned me on GitHub and Twitter, removed uWebSockets.js from the top of the rankings, banned it, and added a code of conduct. Yes, the project itself is now banned for life.

The tweet:

[Screenshot 2024-07-18 at 19:17:05]

The response:

"[...] Happy to remove your framework. Seeing your tweets now as well. You're not interested in reviewing code and making things better; you're interested in being right and you're too toxic about it. [...]"

[Screenshot 2024-07-18 at 19:10:39]

The ban:
[Screenshot 2024-07-18 at 19:51:40]

The block:
[Screenshot 2024-07-18 at 19:49:12]

TechEmpower's 1st place performs at half the rate of uWS.js

TechEmpower is a flawed benchmark for two main reasons:

  1. Fake Participants: Many top participants use obvious hacks, like simply splitting the TCP data on "\r\n\r\n" and doing no real parsing or standards-compliant interpretation whatsoever. There is no quality control to catch these fakes. For example, "gnet", which won Round 22, uses such a hack, making the entire test meaningless. "faf" uses the same hack.
  2. Misleading HTTP Pipelining: By enabling the most extreme level of HTTP pipelining ever seen in a benchmark, the test only measures how fast the parser can chew through tightly packed requests, not the server's total I/O performance. And since the top participants fake the HTTP parsing step as explained above, the entire TechEmpower benchmark is meaningless.
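
To make point 1 concrete, here is a hypothetical sketch (not any participant's actual code) of the "split on \r\n\r\n" hack being criticized. It happily accepts a message with conflicting Content-Length headers, a classic request-smuggling vector that a compliant parser must reject:

```javascript
// Hypothetical sketch of the "split on \r\n\r\n" hack, not any
// participant's actual code. No method, version, or header validation
// is performed at all.
function naiveSplit(buffer) {
  return buffer.split("\r\n\r\n").filter((r) => r.length > 0);
}

// Two conflicting Content-Length headers: a request-smuggling vector
// that a standards-compliant server is required to reject.
const smuggled =
  "GET / HTTP/1.1\r\n" +
  "Host: example.com\r\n" +
  "Content-Length: 0\r\n" +
  "Content-Length: 44\r\n" +
  "\r\n";

// The hack "parses" it as one perfectly fine request, no error raised.
console.log(naiveSplit(smuggled).length); // 1
```

A compliant parser following RFC 9110/9112 would reject this message outright instead of serving it.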

We can very easily see this by running a regular, non-pipelined benchmark: uWS.js massively outperforms "mrhttp", which sits in 1st place on TechEmpower.
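
As a rough illustration of what "pipelining" means here (assumed request text, not TFB's actual load generator): the benchmark client packs many requests into a single TCP write, so parser throughput, rather than I/O, dominates the measurement:

```javascript
// Hypothetical illustration of a pipelined workload on the wire.
// One plain request, repeated 16 times in a single TCP segment.
const one = "GET /plaintext HTTP/1.1\r\nHost: tfb\r\n\r\n";
const pipelined = one.repeat(16);

// A pipelining benchmark measures how fast the parser chews through
// this buffer; a regular benchmark sends `one` per round trip instead.
console.log(pipelined.match(/\r\n\r\n/g).length); // 16 request boundaries
```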

[Benchmark screenshot]

Most recent run before ban

TechEmpower is not a credible benchmark.

Either way, uWS.js's score is not significantly different from 1st place. Excluding the obvious fakes and non-compliant participants, it actually places 1st (though even this should be taken with a grain of salt, given how statistically insignificant the margin is).

[Screenshot: complete TechEmpower run]

Some important nitpicks (elaborating the critique)

  • We send more data per request: the "Server: uWebSockets.js" and "uWebSockets: 20" headers, versus FaF, which sends only "Server: F". That means we occupy more bandwidth per request. This could be fixed in a second run, but we are banned and can't participate any longer. TE does not track CPU usage, so we can only guess whether this single point makes all the difference; most likely, it has the biggest impact on the score.
  • We are entirely standards-compliant (RFC 9110), with request-smuggling checks, invalid-field-name checks, conflicting body-length checks, missing "Host" header checks, and the PROXYv2 header check all performed according to spec. No one else in the top 5 comes even close. Most of them simply scan until "\r\n\r\n" and assume that's a valid request; most can't even receive a body, because they never check "Content-Length".
  • We do dynamic URL routing with wildcard, static, and parameter routes. They (gnet, mrhttp, faf, pico.v) do just a hardcoded if (memcmp(buf, "GET", 3) == 0) [[likely]] kind of "routing" and copy a preformatted, hardcoded buffer back.
  • Pipelining benchmarks only stress the HTTP parser and URL router, which conveniently are the very parts that are seriously non-compliant, outright broken, or entirely missing in the TE top performers (gnet, mrhttp, faf). Anyone can make a fast HTTP parser that just skips to "\r\n\r\n" using two lines of SIMD intrinsics. Comparing a fully standards-compliant parser, with all the required header validation, against such a solution is meaningless.
  • uWebSockets.js is a dynamic module that links at runtime against Node.js's libuv. Native uWS does not use libuv; it links statically against its own epoll implementation, built with LTO and -O3, producing even better results. But only uWS.js was added to TE, not native uWS. There is also no need for uWS.DeclarativeResponse in native uWS, cutting even more overhead. Likewise, the PROXYv2 parser pass can be disabled in native uWS, eliminating still more overhead.
  • Results among the top 5 are not significantly different. There is a plateau where the ordering is very tight and depends on essentially random noise: a 7th place can become a 2nd place in the next run, as the variance is enough to scramble the list between runs.
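
To illustrate the routing gap described above, here is a hypothetical sketch (not uWS.js's actual router) of parameter and wildcard matching, next to the hardcoded prefix check the criticized entries use instead:

```javascript
// Hypothetical parameter/wildcard route matcher, illustrating the kind
// of work a real dynamic router performs per request.
function matchRoute(pattern, path) {
  const p = pattern.split("/"), u = path.split("/");
  if (p.length !== u.length) return null;
  const params = {};
  for (let i = 0; i < p.length; i++) {
    if (p[i].startsWith(":")) params[p[i].slice(1)] = u[i]; // parameter
    else if (p[i] === "*") continue;                        // wildcard
    else if (p[i] !== u[i]) return null;                    // static mismatch
  }
  return params;
}

console.log(matchRoute("/users/:id", "/users/42")); // { id: '42' }
console.log(matchRoute("/users/:id", "/posts/42")); // null

// The "fake" alternative: one hardcoded comparison, no routing at all.
const isPlaintext = (req) => req.startsWith("GET /plaintext ");
```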

Ok, looks like the project can come back if someone other than me reverts the removal:

git clone https://github.com/TechEmpower/FrameworkBenchmarks
cd FrameworkBenchmarks
git checkout -b add_uwebsocketsjs
git revert 4b0a91f07147386c8f11b36b1410f00b34c7611c
git push origin add_uwebsocketsjs

and make a PR of that 👌

If you're fine with it, I'll add it?

Absolutely, it's good PR regardless of credibility

If you want, you can also remove the MySQL test, since it's obvious that all the top entries use PostgreSQL; why even have MySQL there at all?

@uNetworkingAB that's a pity. Your obsession (in a good way) with following best practices and building high-quality code is admirable, but sometimes when I read your comments in discussions, you're maybe "too rough" in how you say things. It's your way of expressing yourself, but you know... some people are a little bit sensitive.

I think we can help in future cases where a benchmark is not transparent and fair, in the same way you've exposed this one here with many arguments.

The hassles of public relations 🙄

So, nobody added it back?

Done ;) TechEmpower/FrameworkBenchmarks#9189

Got delayed because I wanted to make it run myself first, but that was unfortunately not as straightforward as it could have been. Also, I'm on vacation.

Ok. Thanks, guess that puts an end to this thing

Aha, vacation. Pfft, no time for vacation

@porsager aha, it is uWebSockets._cfg

Already changed it ;)

It needs to be uWebSockets._cfg; it can't be webserver._cfg

TechEmpower/FrameworkBenchmarks@389ed04 ;)

The first fix was from cfg_ to _cfg.