test_pingpong performance test: why does the test frequently hit socket send timeouts at around 80% of gigabit NIC bandwidth?
Closed this issue · 5 comments
Hello, I'm testing the latest master branch, specifically the test_pingpong.cpp file in the ZLToolKit directory. I have some questions I'd like to ask.
Testing on different machines within the intranet:
tcpserver and tcpclient are used for performance testing.
The server runs test_pingpongServer -s 192.168.5.27:10000
By default there are 10 TCP clients, each sending 1 MB of data every 100 ms: 10 clients × 10 MB/s = 100 MB/s total.
The client runs test_pingpongClient -s 192.168.5.27
Is a combined send and receive rate of 100 MB/s on a gigabit network card normal? In this situation, however, "socket send timeout" and "socket unwritable" errors occur.
The client runs test_pingpongClient -s 192.168.5.27 -c 1 -b 10000000
A single client sends 10 MB ten times per second (again 100 MB/s), and the same issue occurs. Testing shows the problem is not significantly related to the number of clients or the packet size.
In summary, when the total send rate approaches or exceeds 80% of the bandwidth, most of the time all sockets hit a send timeout, and sometimes a random few survive. Is this normal?
Even if the bandwidth is exceeded, why do all of the sockets time out? The clients then disconnect.
If I want to transfer two large files, say 10 GB each, using two clients, how should I send them so that bandwidth utilization stays high while the sockets neither time out nor disconnect?
A socket send timeout generally indicates that bandwidth has become the bottleneck.
The current issue is that approaching the bandwidth bottleneck causes all sockets to time out. Is that reasonable? It ultimately drives bandwidth utilization to 0%, instead of holding steady near the bottleneck. The effect I want is that even when the send rate approaches or exceeds the bottleneck, data keeps flowing at close to the bottleneck rate. I can understand some sends timing out, but why do all of them time out? Can I achieve this effect with this network library?
Of course it's reasonable. TCP data cannot be dropped; it must be delivered reliably. When you call the ZLToolKit interface upstream, you produce a constant amount of data per second.
But the bandwidth is limited. What can ZLToolKit do? Drop part of your data? That would break TCP's reliability guarantee.
And what about the data that can't be sent? You can't let ZLToolKit keep buffering it until memory overflows. So its only option is to cut the connection and stop sending.
So you need to control the sending speed, using the onFlush callback. ZLMediaKit uses this callback to control the download speed of HTTP files, which lets it run at exactly the maximum send rate of the network.
Thank you for taking the time to answer my question.