# Naive performance tests for new-world HTTP implementations

System: https://gist.github.com/bjconlan/9ff7683d15d101f8f015827bb8244785

userver: `make start-release` (note: when using rusage, the release binary is used directly)
  Uses HTTP/1.1. Very slow when enabling TLS but very fast when not (also note it's using GMT by default, not the
  local timezone - the code could use improvement). I really like this framework: dynamic config & tracing out of the
  box and with good defaults (arguably having Yandex trace-id keys isn't perfect, perhaps a PR can fix this though ;)
kestrel: `dotnet run --launch-profile https`
  Attempts to use more advanced HTTP protocols, but ab isn't compatible so it falls back to HTTP/1.1. I would love to
  evolve these tests to include HTTP/3/QUIC performance. Surprisingly fast (the advertisements are true).
go: `go run .` (note: when using rusage this is being compiled and then executed, so probably not ideal)
  By far the neatest implementation (in my opinion) thanks to the 1.22 clean routing APIs etc.; uses HTTP/1.1 with
  really good performance and memory usage, which is probably why Go is becoming the de facto tool in this space
  (a rough sketch of the routing shape appears after this list).
redbean: `./redbean-2.2.com -s -D srv -p 7027`
  Lua is nice. I'm not sure how to squeeze more performance out of this either (not that I've really tried yet,
  although I want to give a raw C based 'greenbean' server a shot). This also doesn't have the best datetime
  formatting (but at least it was performing similar ops to the others).
  When not using SSL termination it is much more competitive, but it doesn't perform well under stress: benchmarking
  100k requests over 1k concurrent connections, redbean has issues when not using TLS, which is strange - at about
  15k requests (at 1k concurrency) it stops handling requests. Below that it averages 40ms!
  And wow, look at the memory usage of this thing! (Need to look into whether this is the issue with the limited requests.)
toit: `toit.run http.toit`
  I think the implementation details of the Toit code are perhaps the most elegant. It's by far the simplest to
  read or understand and was developed faster than the other implementations, with which I was only slightly familiar.
  Unfortunately it also looks like the slowest (wrk approximates 3k rps (1.7MB transferred) vs Go's 123k
  rps (82MB)).
  NOTE: no ab benchmarks here yet; it looks like ab doesn't like keep-alive (and the Toit HTTP server doesn't force a
        connection close when ab sends the `-H "Connection: close"` header)
spring: `./gradlew bootRun`
  This is currently using the default Tomcat based HTTP server (as this is the default used by Spring). I would
  prefer to use Netty for this performance test at some point, as it has been the best performing web server for
  Java for some time now (but it is only included as part of the 'reactive' Spring WebFlux set of dependencies).
greenbean: `x86_64-unknown-cosmo-cc server.c -o server && ./server`
  I'm hoping this is the fastest, as it does the simplest JSON serialisation possible and no formatted logging.
  Currently it isn't implemented correctly (i.e. dates aren't rolling, nor are the float formats in line with the
  others); my C-fu is not the best, but this has been a nice refresher. (No TLS, and it doesn't like ab... but wrk puts
  it at Requests/sec: 188566, Transfer/sec: 143.34MB vs Spring's Requests/sec: 125976, Transfer/sec: 79.29MB or
  Go's amazing Requests/sec: 308733, Transfer/sec: 204.70MB.)
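
As referenced in the Go entry above, here is a minimal sketch of the shape that implementation takes. The real code
lives in the gist linked at the top; the handler body, summary list, and temperature range below are assumptions
inferred from the sample responses. It shows a Go 1.22 `http.ServeMux` method+path pattern serving
`/weatherforecast` over plain HTTP/1.1 on port 7027.
```go
package main

import (
	"encoding/json"
	"log"
	"math/rand"
	"net/http"
	"time"
)

// forecast mirrors the field names seen in the sample responses below.
type forecast struct {
	Date         time.Time `json:"date"`
	TemperatureC int       `json:"temperatureC"`
	Summary      string    `json:"summary"`
	TemperatureF float64   `json:"temperatureF"`
}

// Summaries seen in the samples; the full list is an assumption.
var summaries = []string{"Freezing", "Bracing", "Cool", "Mild", "Warm", "Hot", "Scorching"}

func main() {
	mux := http.NewServeMux()
	// Go 1.22 ServeMux patterns allow a method plus a path in one string.
	mux.HandleFunc("GET /weatherforecast", func(w http.ResponseWriter, r *http.Request) {
		out := make([]forecast, 5)
		for i := range out {
			c := rand.Intn(75) - 20 // roughly the -20..54 range seen in the samples
			out[i] = forecast{
				Date:         time.Now().AddDate(0, 0, i+1),
				TemperatureC: c,
				Summary:      summaries[rand.Intn(len(summaries))],
				TemperatureF: 32 + float64(c)/0.5556, // conversion the samples appear to use
			}
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(out)
	})
	// Plain HTTP/1.1; the TLS variant used for the benchmarks below is omitted here.
	log.Fatal(http.ListenAndServe(":7027", mux))
}
```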

sample responses:
```
[
 {"date":"2024-02-06T16:41:28.6222126+10:00","temperatureC":16,"summary":"Scorching","temperatureF":60.79769618430525},
 {"date":"2024-02-07T16:41:28.6222147+10:00","temperatureC":11,"summary":"Bracing","temperatureF":51.79841612670987},
 {"date":"2024-02-08T16:41:28.6222149+10:00","temperatureC":-14,"summary":"Bracing","temperatureF":6.802015838732899},
 {"date":"2024-02-09T16:41:28.6222182+10:00","temperatureC":39,"summary":"Warm","temperatureF":102.19438444924407},
 {"date":"2024-02-10T16:41:28.6222186+10:00","temperatureC":2,"summary":"Scorching","temperatureF":35.59971202303816}
]
```
```
[
 {"date":"2024-02-06T16:41:29.8949227+10:00","temperatureC":12,"summary":"Mild","temperatureF":53.59827213822894},
 {"date":"2024-02-07T16:41:29.8949252+10:00","temperatureC":-20,"summary":"Freezing","temperatureF":-3.9971202303815687},
 {"date":"2024-02-08T16:41:29.8949254+10:00","temperatureC":-17,"summary":"Cool","temperatureF":1.4024478041756652},
 {"date":"2024-02-09T16:41:29.8949268+10:00","temperatureC":48,"summary":"Hot","temperatureF":118.39308855291577},
 {"date":"2024-02-10T16:41:29.8949272+10:00","temperatureC":47,"summary":"Scorching","temperatureF":116.59323254139669}
]
```
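
One detail worth noting about these payloads: the temperatureF values match 32 + temperatureC / 0.5556 (the 0.5556
approximation used by the stock ASP.NET weather-forecast template) rather than the exact 32 + temperatureC × 9/5,
which is why the fractional parts run so long. A quick check of the first sample row:
```go
package main

import "fmt"

func main() {
	c := 16.0
	fmt.Println(32 + c/0.5556) // ~60.797696..., matching the first sample row
	fmt.Println(32 + c*9/5)    // 60.8, the exact conversion, which the samples do not use
}
```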

# TLS
naive benchmarking via `ab -l -n 100000 -c 1000 -E https.pem https://localhost:7027/weatherforecast`

## kestrel (numerous SSL errors; probably due to ab not supporting HTTP/3)
...
SSL read failed (1) - closing connection
80778EB3D67F0000:error:0A000197:SSL routines:SSL_shutdown:shutdown while in init:ssl/ssl_lib.c:2260:
80778EB3D67F0000:error:0A000126:SSL routines:ssl3_read_n:unexpected eof while reading:ssl/record/rec_layer_s3.c:303:
SSL read failed (1) - closing connection
80778EB3D67F0000:error:0A000197:SSL routines:SSL_shutdown:shutdown while in init:ssl/ssl_lib.c:2260:
80778EB3D67F0000:error:0A000126:SSL routines:ssl3_read_n:unexpected eof while reading:ssl/record/rec_layer_s3.c:303:
Completed 100000 requests
Finished 100000 requests


Server Software:        Kestrel
Server Hostname:        localhost
Server Port:            7027
SSL/TLS Protocol:       TLSv1.3,TLS_AES_256_GCM_SHA384,2048,256
Server Temp Key:        X25519 253 bits
TLS Server Name:        localhost

Document Path:          /weatherforecast
Document Length:        Variable

Concurrency Level:      1000
Time taken for tests:   43.192 seconds
Complete requests:      100000
Failed requests:        0
Total transferred:      71527155 bytes
HTML transferred:       57627155 bytes
Requests per second:    2315.23 [#/sec] (mean)
Time per request:       431.923 [ms] (mean)
Time per request:       0.432 [ms] (mean, across all concurrent requests)
Transfer rate:          1617.20 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       14  268  38.0    267     403
Processing:    39  162  38.6    162     292
Waiting:        1   93  48.1     90     255
Total:        127  431  20.2    426     528

Percentage of the requests served within a certain time (ms)
  50%    426
  66%    433
  75%    439
  80%    442
  90%    450
  95%    470
  98%    498
  99%    507
 100%    528 (longest request)

### rusage
RL: ballooned to 633,376kb in size
RL: needed 201,928,100µs cpu (29% kernel)
RL: caused 189,122 page faults (100% memcpy)
RL: 1,403,892 context switches (96% consensual)
RL: performed 0 reads and 240 write i/o operations

## redbean
Server Software:        redbean/2.2.0
Server Hostname:        localhost
Server Port:            7027
SSL/TLS Protocol:       TLSv1.2,ECDHE-ECDSA-AES256-GCM-SHA384,256,256
Server Temp Key:        ECDH P-256 256 bits
TLS Server Name:        localhost

Document Path:          /weatherforecast
Document Length:        Variable

Concurrency Level:      1000
Time taken for tests:   53.677 seconds
Complete requests:      100000
Failed requests:        0
Total transferred:      71850284 bytes
HTML transferred:       55250284 bytes
Requests per second:    1862.98 [#/sec] (mean)
Time per request:       536.773 [ms] (mean)
Time per request:       0.537 [ms] (mean, across all concurrent requests)
Transfer rate:          1307.19 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        2  296 132.6    284    2351
Processing:    21  239  35.4    236     328
Waiting:        0   72  29.4     59     167
Total:         74  534 137.7    523    2593

Percentage of the requests served within a certain time (ms)
  50%    523
  66%    530
  75%    535
  80%    538
  90%    547
  95%    559
  98%    582
  99%   1521
 100%   2593 (longest request)

### rusage
RL: ballooned to 2,688kb in size
RL: needed 106,731,644µs cpu (27% kernel)
RL: caused 7,157,442 page faults (100% memcpy)
RL: 485,889 context switches (98% consensual)

## go
Server Software:
Server Hostname:        localhost
Server Port:            7027
SSL/TLS Protocol:       TLSv1.3,TLS_AES_128_GCM_SHA256,2048,128
Server Temp Key:        X25519 253 bits
TLS Server Name:        localhost

Document Path:          /weatherforecast
Document Length:        Variable

Concurrency Level:      1000
Time taken for tests:   40.980 seconds
Complete requests:      100000
Failed requests:        0
Total transferred:      69524098 bytes
HTML transferred:       58624098 bytes
Requests per second:    2440.24 [#/sec] (mean)
Time per request:       409.796 [ms] (mean)
Time per request:       0.410 [ms] (mean, across all concurrent requests)
Transfer rate:          1656.79 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        2  250  38.0    246     390
Processing:    63  157  38.9    157     379
Waiting:        6   98  40.6     95     328
Total:         89  408  31.2    400     664

Percentage of the requests served within a certain time (ms)
  50%    400
  66%    406
  75%    411
  80%    415
  90%    437
  95%    489
  98%    516
  99%    525
 100%    664 (longest request)

### rusage
RL: ballooned to 155,392kb in size
RL: needed 196,261,046µs cpu (4% kernel)
RL: caused 663,294 page faults (99% memcpy)
RL: 999,197 context switches (98% consensual)
RL: performed 70,760 reads and 37,920 write i/o operations

## userver
Server Software:        userver/1.0.0
Server Hostname:        localhost
Server Port:            7027
SSL/TLS Protocol:       TLSv1.3,TLS_AES_256_GCM_SHA384,2048,256
Server Temp Key:        X25519 253 bits
TLS Server Name:        localhost

Document Path:          /weatherforecast
Document Length:        Variable

Concurrency Level:      1000
Time taken for tests:   221.551 seconds
Complete requests:      100000
Failed requests:        0
Total transferred:      92041355 bytes
HTML transferred:       58141355 bytes
Requests per second:    451.36 [#/sec] (mean)
Time per request:       2215.506 [ms] (mean)
Time per request:       2.216 [ms] (mean, across all concurrent requests)
Transfer rate:          405.70 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       10  801 400.5    772    2292
Processing:     6 1410 516.3   1422    4375
Waiting:        0 1407 516.5   1420    4373
Total:        135 2210 575.5   2176    5574

Percentage of the requests served within a certain time (ms)
  50%   2176
  66%   2389
  75%   2561
  80%   2674
  90%   2965
  95%   3213
  98%   3501
  99%   3681
 100%   5574 (longest request)

### rusage
RL: ballooned to 898,608kb in size
RL: needed 878,878,665µs cpu (2% kernel)
RL: caused 549,542 page faults (100% memcpy)
RL: 212,331 context switches (98% consensual)

# NON TLS
naive benchmarking via `ab -l -n 100000 -c 1000 http://localhost:7027/weatherforecast`

## kestrel
Server Software:        Kestrel
Server Hostname:        localhost
Server Port:            5101

Document Path:          /weatherforecast
Document Length:        Variable

Concurrency Level:      1000
Time taken for tests:   7.288 seconds
Complete requests:      100000
Failed requests:        0
Total transferred:      71520138 bytes
HTML transferred:       57620138 bytes
Requests per second:    13721.64 [#/sec] (mean)
Time per request:       72.878 [ms] (mean)
Time per request:       0.073 [ms] (mean, across all concurrent requests)
Transfer rate:          9583.73 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   36  50.2     34    1089
Processing:     5   37   9.0     37      74
Waiting:        2   22   8.7     20      64
Total:         28   73  50.3     71    1142

Percentage of the requests served within a certain time (ms)
  50%     71
  66%     73
  75%     75
  80%     76
  90%     78
  95%     80
  98%     82
  99%     84
 100%   1142 (longest request)

### rusage
RL: ballooned to 442,080kb in size
RL: needed 42,658,508µs cpu (43% kernel)
RL: caused 134,479 page faults (100% memcpy)
RL: 588,522 context switches (73% consensual)
RL: performed 0 reads and 240 write i/o operations

## redbean (had strange issues so omitting)

## go
Server Software:
Server Hostname:        localhost
Server Port:            7027

Document Path:          /weatherforecast
Document Length:        Variable

Concurrency Level:      1000
Time taken for tests:   6.375 seconds
Complete requests:      100000
Failed requests:        0
Total transferred:      69524760 bytes
HTML transferred:       58624760 bytes
Requests per second:    15686.83 [#/sec] (mean)
Time per request:       63.748 [ms] (mean)
Time per request:       0.064 [ms] (mean, across all concurrent requests)
Transfer rate:          10650.62 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   31   5.5     31      46
Processing:     9   32   5.7     32      60
Waiting:        0   23   5.7     21      45
Total:         33   63   2.2     63      73

Percentage of the requests served within a certain time (ms)
  50%     63
  66%     64
  75%     65
  80%     65
  90%     66
  95%     67
  98%     67
  99%     68
 100%     73 (longest request)

### rusage
RL: ballooned to 96,640kb in size
RL: needed 9,581,460µs cpu (57% kernel)
RL: caused 42,249 page faults (99% memcpy)
RL: 552,518 context switches (99% consensual)
RL: performed 0 reads and 336 write i/o operations

## userver
Server Software:        userver/1.0.0
Server Hostname:        localhost
Server Port:            7027

Document Path:          /weatherforecast
Document Length:        Variable

Concurrency Level:      1000
Time taken for tests:   6.323 seconds
Complete requests:      100000
Failed requests:        0
Total transferred:      92043841 bytes
HTML transferred:       58143841 bytes
Requests per second:    15816.42 [#/sec] (mean)
Time per request:       63.225 [ms] (mean)
Time per request:       0.063 [ms] (mean, across all concurrent requests)
Transfer rate:          14216.84 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   31   8.5     31      56
Processing:     9   32   8.6     32      59
Waiting:        0   21   8.9     16      38
Total:         31   63   3.7     62      78

Percentage of the requests served within a certain time (ms)
  50%     62
  66%     63
  75%     65
  80%     65
  90%     68
  95%     69
  98%     73
  99%     75
 100%     78 (longest request)

### rusage
RL: ballooned to 137,236kb in size
RL: needed 12,024,268µs cpu (47% kernel)
RL: caused 78,977 page faults (100% memcpy)
RL: 815,280 context switches (99% consensual)

## spring (tomcat http server)
Server Software:
Server Hostname:        localhost
Server Port:            8080

Document Path:          /weatherforecast
Document Length:        Variable

Concurrency Level:      1000
Time taken for tests:   7.199 seconds
Complete requests:      100000
Failed requests:        0
Total transferred:      63876813 bytes
HTML transferred:       53376813 bytes
Requests per second:    13890.46 [#/sec] (mean)
Time per request:       71.992 [ms] (mean)
Time per request:       0.072 [ms] (mean, across all concurrent requests)
Transfer rate:          8664.83 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   40 108.5     31    1090
Processing:     1   31  20.0     32     268
Waiting:        1   23  18.8     23     253
Total:          3   71 122.4     66    1289

Percentage of the requests served within a certain time (ms)
  50%     66
  66%     67
  75%     68
  80%     68
  90%     70
  95%     71
  98%     73
  99%   1056
 100%   1289 (longest request)

### rusage
RL: ballooned to 370,504kb in size
RL: needed 27,466,171µs cpu (19% kernel)
RL: caused 124,810 page faults (100% memcpy)
RL: 403,385 context switches (98% consensual)