luvit/luvit

Question: basic HTTP benchmark performs badly with the built-in http module and HTTP keep-alive enabled

fallenwood opened this issue · 7 comments

I am testing the performance of luvit against nodejs with the most basic usage of the http module, on Linux amd64, and luvit's performance looks pretty bad.

# 5001: luvit
→ ./go-wrk -c 125 http://127.0.0.1:5001
Running 10s test @ http://127.0.0.1:5001
  125 goroutine(s) running concurrently
30120 requests in 10.017535652s, 2.15MB read
Requests/sec:           3006.73
Transfer/sec:           220.22KB
Avg Req Time:           41.573438ms
Fastest Request:        13.393418ms
Slowest Request:        59.379088ms
Number of Errors:       0

# 5002: nodejs
→ ./go-wrk -c 125 http://127.0.0.1:5002
Running 10s test @ http://127.0.0.1:5002
  125 goroutine(s) running concurrently
280177 requests in 9.954598306s, 31.53MB read
Requests/sec:           28145.49
Transfer/sec:           3.17MB
Avg Req Time:           4.441209ms
Fastest Request:        262.193µs
Slowest Request:        30.861007ms
Number of Errors:       0

The QPS of luvit is about 3k, while nodejs reaches 28k.

The server sources are as follows, luvit first (port 5001), then nodejs (port 5002):

local http = require("http")

local server = http.createServer(function (req, res)
  res:finish("Hello world!")
end)

server:listen(5001)

const http = require("http");

const server = http.createServer(function (req, res) {
  res.end("Hello world!")
});

server.listen(5002);
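
For reference, assuming the two snippets above are saved as server.lua and server.js (filenames chosen here purely for illustration), the servers are started with:

→ luvit server.lua    # luvit server on port 5001
→ node server.js      # nodejs server on port 5002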

The runtime versions are:

→ ./luvit --version
luvit version: 2.18.1
luvi version: v2.14.0
libuv version: 1.44.2
ssl version: OpenSSL 1.1.1m  14 Dec 2021, lua-openssl 0.8.2
rex version: 8.37 2015-04-28

→ node --version
v18.12.1

→ uname -ar
Linux 6.1.14-200.fc37.x86_64 #1 SMP PREEMPT_DYNAMIC Sun Feb 26 00:13:26 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

This is a copy-paste of a benchmark @truemedian has done before regarding this:

nodejs http (keep-alive on):
Requests/sec:  10708.38
Transfer/sec:      1.64MB

luvit http (keep-alive on):
Requests/sec:  13926.38
Transfer/sec:      2.14MB

coro-http (keep-alive on):
Requests/sec:    104.72
Transfer/sec:     12.68KB

nodejs http (keep-alive off):
Requests/sec:   5134.01
Transfer/sec:    666.82KB

luvit http (keep-alive off):
Requests/sec:   8735.01
Transfer/sec:      1.11MB

coro-http (keep-alive off):
Requests/sec:  15382.50
Transfer/sec:      1.41MB

(can be found on Discord here)

As you can see, the way Luvit handles keep-alive seems to be causing the performance issue.

@Bilal2453, thanks for the info. I tested nodejs with HTTP keep-alive disabled, but it still looks much better than luvit (7k vs 3k):

const http = require("http");

const server = http.createServer(function (req, res) {
  res.setHeader("Connection", "Close");
  res.end("Hello world!");
});

server.listen(5002);

→ ./go-wrk -c 125 http://127.0.0.1:5002
Running 10s test @ http://127.0.0.1:5002
  125 goroutine(s) running concurrently
71000 requests in 9.988334696s, 4.81MB read
Requests/sec:           7108.29
Transfer/sec:           492.86KB
Avg Req Time:           17.585096ms
Fastest Request:        12.587222ms
Slowest Request:        126.353709ms
Number of Errors:       0

And setting res:setHeader('Connection', 'keep-alive') in the Lua code does help.
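
For reference, a minimal sketch of that workaround (the same server as above, with the Connection header set explicitly):

local http = require("http")

local server = http.createServer(function (req, res)
  -- Setting the header explicitly sidesteps luvit's implicit
  -- keep-alive handling, which appears to be the slow path measured above.
  res:setHeader("Connection", "keep-alive")
  res:finish("Hello world!")
end)

server:listen(5001)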

Well @fallenwood, that is kinda weird. On my machine I get the following:

Nodejs with keep-alive off (your provided code):

❯ wrk -d 10 -c 125 http://localhost:5001/
Running 10s test @ http://localhost:5001/
  2 threads and 125 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    41.68ms    1.52ms  59.48ms   94.76%
    Req/Sec     1.49k    94.82     1.66k    90.00%
  29669 requests in 10.03s, 3.69MB read
Requests/sec:   2959.02
Transfer/sec:    376.93KB

Luvit with keep-alive off (same as your code with :setHeader added):

❯ wrk -d 10 -c 125 http://localhost:5002/
Running 10s test @ http://localhost:5002/
  2 threads and 125 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    30.51ms    3.16ms  51.97ms   75.33%
    Req/Sec     2.03k   216.17     2.51k    77.50%
  40424 requests in 10.03s, 4.12MB read
Requests/sec:   4030.79
Transfer/sec:    421.19KB

And coro-http with keep-alive off (the read errors reported below are most likely just wrk counting the server-initiated connection closes, not failed requests):

❯ wrk -d 10 -c 125 http://localhost:5003/
Running 10s test @ http://localhost:5003/
  2 threads and 125 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    18.50ms    5.50ms  50.05ms   68.66%
    Req/Sec     2.71k   296.06     3.44k    67.68%
  53868 requests in 10.02s, 2.57MB read
  Socket errors: connect 0, read 53868, write 0, timeout 0
Requests/sec:   5376.48
Transfer/sec:    262.77KB

with the code:

local http = require 'coro-http'

http.createServer("127.0.0.1", 5003, function(req, res)
  return {{"Connection", "Close"}, code = 200, keepAlive = false}, "Hello world!"
end)

With Nodejs v18.15.0 and Luvit 2.18.1, on the following machine:

❯ uname -ar
Linux ts-bs 6.2.10-200.fc37.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Apr  6 23:30:41 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

I also tried with 125 threads + 125 connections with similar results. Could this perhaps be related to your wrk implementation?
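
As an aside (untested in this thread): keep-alive can also be disabled from the client side, which takes the server code out of the equation entirely. The C wrk accepts custom request headers via -H:

❯ wrk -d 10 -c 125 -H "Connection: close" http://localhost:5001/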

@Bilal2453 here is the result for coro-http with your code; it's better than nodejs (10k vs 7k).

# coro-http
→ ./go-wrk -c 125 http://127.0.0.1:5003
Running 10s test @ http://127.0.0.1:5003
  125 goroutine(s) running concurrently
100015 requests in 9.86320719s, 1.34MB read
Requests/sec:           10140.21
Transfer/sec:           138.64KB
Avg Req Time:           12.327159ms
Fastest Request:        1.214479ms
Slowest Request:        32.270675ms
Number of Errors:       0

With Connection: close added, luvit with the built-in http module does better than with no custom headers (6k vs 3k):

local http = require("http")

local server = http.createServer(function (req, res)
  res:setHeader('Connection', 'close')
  res:finish("Hello world!")
end)

server:listen(5004)

# connection: close
→ ./go-wrk -c 125 http://127.0.0.1:5004
Running 10s test @ http://127.0.0.1:5004
  125 goroutine(s) running concurrently
59000 requests in 9.8310881s, 2.87MB read
Requests/sec:           6001.37
Transfer/sec:           298.90KB
Avg Req Time:           20.828576ms
Fastest Request:        9.237134ms
Slowest Request:        29.67284ms
Number of Errors:       0

I'll try the C wrk later

Summary of the C wrk results (installed with nix):

→ wrk --version
wrk 4.2.0 [epoll] Copyright (C) 2012 Will Glozer

# nodejs connection close
→ wrk -c 125 http://127.0.0.1:5002
Running 10s test @ http://127.0.0.1:5002
  2 threads and 125 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    14.18ms    3.58ms  79.28ms   92.75%
    Req/Sec     4.13k   483.89     5.00k    88.50%
  82326 requests in 10.01s, 8.40MB read
Requests/sec:   8221.71
Transfer/sec:    859.10KB

# luvit http connection close
→ wrk -c 125 http://127.0.0.1:5004
Running 10s test @ http://127.0.0.1:5004
  2 threads and 125 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    17.49ms    2.57ms  42.76ms   74.32%
    Req/Sec     3.22k   280.37     3.76k    73.50%
  64139 requests in 10.02s, 7.65MB read
Requests/sec:   6402.52
Transfer/sec:    782.06KB

# coro-http connection close
→ wrk -c 125 http://127.0.0.1:5003
Running 10s test @ http://127.0.0.1:5003
  2 threads and 125 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    12.84ms    3.65ms  49.82ms   71.59%
    Req/Sec     3.77k   332.53     4.81k    80.00%
  75455 requests in 10.08s, 3.60MB read
  Socket errors: connect 0, read 75455, write 0, timeout 0
Requests/sec:   7485.19
Transfer/sec:    365.60KB

# nodejs default
→ wrk -c 125 http://127.0.0.1:5006
Running 10s test @ http://127.0.0.1:5006
  2 threads and 125 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.58ms    2.01ms  57.62ms   97.80%
    Req/Sec    13.94k     1.77k   15.70k    96.00%
  277230 requests in 10.00s, 35.69MB read
Requests/sec:  27711.37
Transfer/sec:      3.57MB

# luvit default
→ wrk -c 125 http://127.0.0.1:5001
Running 10s test @ http://127.0.0.1:5001
  2 threads and 125 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    41.23ms    2.06ms  50.16ms   97.12%
    Req/Sec     1.51k    51.53     1.64k    65.50%
  29989 requests in 10.01s, 3.73MB read
Requests/sec:   2996.05
Transfer/sec:    381.61KB
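
Putting the C wrk numbers side by side:

Server                          Requests/sec   Transfer/sec
nodejs (Connection: close)           8221.71       859.10KB
luvit (Connection: close)            6402.52       782.06KB
coro-http (Connection: close)        7485.19       365.60KB
nodejs (default keep-alive)         27711.37         3.57MB
luvit (default keep-alive)           2996.05       381.61KB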

I think my questions are answered, so I'll close this. Thanks for your help :)

I think it is worth keeping this issue open, or at least one issue referencing this. It is definitely something that needs to be solved. It would be really nice if you could re-open this and change the title to indicate the actual problem: the built-in http module performing terribly with keep-alive requests.

Luvit is in fact doing something wrong here. I don't currently have the time to dig into it, but this issue will serve as a reminder.