CLOSE-WAIT bug
f4nff opened this issue · 26 comments
./ssserver -s "[::]:1" -m "aes-256-gcm" -k "pass" -U --tcp-no-delay --tcp-keep-alive 120 --udp-timeout 60
./sslocal --protocol tun -s "[2a01:1700:2::7:18b]:1" -m "aes-256-gcm" -k "pass" --outbound-bind-interface pppoe-wan --tun-interface-name tun0 -U --tcp-no-delay --tcp-keep-alive 120 --udp-timeout 60
Checking with `netstat -an`:
The server side of the connections shows CLOSE-WAIT.
The program clearly has a bug.
tcp CLOSE-WAIT 0 0 [2a01:1700:2::7:18b]:1 [2a08:a220:9a17:bcb0::1]:57902
For reference: realm had the same problem before, and its author later fixed it.
https://docs.rs/tokio/latest/tokio/net/struct.TcpStream.html
This thing has a bug; someone has reported it before, but it was never fixed.
Instead of obsessing with stats you don't understand and opening useless issues, you should take some time to learn some networking basics.
@database64128
A CLOSE-WAIT that disappears in a flash is normal, but one that lingers indefinitely is not.
You should go study this some more yourself.
Go and C projects don't have this problem.
FIN-WAIT-1 and FIN-WAIT-2 existing for a while is normal,
but a CLOSE-WAIT that persists indefinitely is a program bug.
cargo build --release --features 'brutal-shutdown'
After building with the brutal-shutdown feature, testing shows the problem is gone.
Its underlying socket doesn't use tokio's; it was written by hand, calling the native C interfaces directly, so there is no CLOSE-WAIT.
As for whether there's a problem, go read some books yourself.
I don't think the solution in zhboner/realm#87 is the final solution; it is just a temporary one.
We should discuss further why so many connections were left in the CLOSE-WAIT state.
The problem is in the bidirectional I/O copy: it doesn't correctly release the other end. Go has defer, which fires unconditionally,
so Go projects don't get CLOSE-WAIT.
Early versions of realm also had the CLOSE-WAIT problem; later versions fixed it.
> The problem is in the bidirectional I/O copy: it doesn't correctly release the other end. Go has defer, which fires unconditionally,
This behavior is wrong. It will lose data if something is still left unread in the other transfer direction.
> so Go projects don't get CLOSE-WAIT.
> Early versions of realm also had the CLOSE-WAIT problem; later versions fixed it.
Yes, and they will definitely lose data in some situations. I don't think it is "solved".
I won't do the same unless there is no correct solution to this issue.
If one side has already actively closed the connection, will keeping yours open still transfer data?
When one side actively closes the connection, the other end should be released proactively.
For reference, in go-shadowsocks2: it calls io.Copy in both directions and sets a read timeout when one of them ends.
A proxy is essentially just data forwarding; there is no fundamental difference from realm.
In my testing I found that if a forwarding tunnel is established but no data is sent, and the server dies, the tunnel doesn't notice.
go-shadowsocks2 doesn't have this problem, but shadowsocks-rust does.
I haven't fully figured out tokio's internals either.
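The "idle tunnel doesn't notice a dead peer" observation is standard TCP behavior rather than a tokio quirk: with no data in flight, a vanished peer is only detected via keepalive probes or an application-level timeout, which is what the `--tcp-keep-alive 120` flag in the commands above configures. A minimal Go sketch of enabling keepalive on a dialed connection (our illustration only; the idle server is a stand-in):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	go func() {
		peer, _ := ln.Accept()
		_ = peer   // idle peer: accepts and then sends nothing
		select {}  // block forever
	}()

	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	tcp := conn.(*net.TCPConn)

	// Without keepalive, an idle connection to a crashed peer can block
	// a Read indefinitely. With keepalive enabled, the kernel probes the
	// peer and the blocked Read eventually fails.
	if err := tcp.SetKeepAlive(true); err != nil {
		panic(err)
	}
	if err := tcp.SetKeepAlivePeriod(120 * time.Second); err != nil {
		panic(err)
	}

	fmt.Println("keepalive enabled")
	tcp.Close()
}
```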
> In my testing I found that if a forwarding tunnel is established but no data is sent, and the server dies, the tunnel doesn't notice.
If that is correct:
- Why doesn't your server send an RST to local clients?
- Why couldn't your local client receive the RST?
> I haven't fully figured out tokio's internals either.
This has nothing to do with tokio. It is Linux's network behavior.
Then in my testing I found that if a forwarding tunnel is established but no data is sent, and the server dies, the tunnel doesn't notice. I feel that's where the problem is,
and the fix addressed exactly that case.
You don't need to repeat that solution. I know exactly what it aims to and what it solved.
If large numbers of CLOSE-WAIT connections were correct,
why do those C projects and Go projects not have them,
while only your project does?
I have never said that. If you continue attacking, discussion ends here.
Fine, I'll fix it myself in a few days.
The connection is already closed, so where would that data even come from?