shadowsocks/shadowsocks-rust

CLOSE-WAIT bug

f4nff opened this issue · 26 comments

f4nff commented

./ssserver -s "[::]:1" -m "aes-256-gcm" -k "pass" -U --tcp-no-delay --tcp-keep-alive 120 --udp-timeout 60

./sslocal --protocol tun -s "[2a01:1700:2::7:18b]:1" -m "aes-256-gcm" -k "pass" --outbound-bind-interface pppoe-wan --tun-interface-name tun0 -U --tcp-no-delay --tcp-keep-alive 120 --udp-timeout 60

Check with netstat -an:

On the server side, the accepted connections are left in CLOSE-WAIT.

This clearly indicates a bug in the program.

tcp CLOSE-WAIT 0 0 [2a01:1700:2::7:18b]:1 [2a08:a220:9a17:bcb0::1]:57902

f4nff commented

zhboner/realm#87

See this for reference: realm used to have the same problem, and its author later fixed it.

f4nff commented

https://docs.rs/tokio/latest/tokio/net/struct.TcpStream.html

This thing has a bug; someone reported it before, but it was never fixed.

Instead of obsessing with stats you don't understand and opening useless issues, you should take some time to learn some networking basics.

f4nff commented

@database64128
A CLOSE-WAIT that flashes by is normal, but one that lingers indefinitely is not.
You're the one who should go do some studying.

Go and C projects don't have this problem.

f4nff commented

FIN-WAIT-1
FIN-WAIT-2
It's normal for these to exist for a while,
but a CLOSE-WAIT that persists indefinitely means a bug in the program.
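For background, here is a minimal std-only Rust sketch (not from this thread) of what CLOSE-WAIT means: the peer has sent its FIN, but the local application still holds the socket open. The state persists exactly as long as the application keeps the descriptor:

```rust
use std::io::Read;
use std::net::{TcpListener, TcpStream};
use std::thread;

// Accept a connection whose peer closes immediately. After read() returns
// Ok(0) (the peer's FIN), the kernel reports this socket as CLOSE-WAIT
// until we drop it; dropping the TcpStream closes the fd and clears it.
fn demo_close_wait() -> std::io::Result<usize> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    let client = thread::spawn(move || {
        let stream = TcpStream::connect(addr).unwrap();
        drop(stream); // client sends its FIN here
    });

    let (mut conn, _) = listener.accept()?;
    client.join().unwrap();

    let mut buf = [0u8; 16];
    let n = conn.read(&mut buf)?; // Ok(0) == peer FIN; we are now in CLOSE-WAIT
    drop(conn); // closing our side: CLOSE-WAIT -> LAST-ACK -> gone
    Ok(n)
}
```

So a CLOSE-WAIT that never goes away means some task is still holding the accepted socket after the peer has closed.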

f4nff commented

zhboner/realm#87

cargo build --release --features 'brutal-shutdown'
After building with the brutal-shutdown feature, my tests show no problem.

Its underlying sockets don't use tokio; the author wrote his own layer that calls the native C interfaces, and there is no CLOSE-WAIT.

As for whether there's a problem, go read some books yourself.
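For illustration, here is what a "brutal shutdown" amounts to, as a hypothetical std-only Rust sketch (not realm's actual code): as soon as either direction hits EOF, both sockets are torn down. This prevents lingering CLOSE-WAIT sockets, but can discard data still in flight in the other direction:

```rust
use std::io::copy;
use std::net::{Shutdown, TcpStream};
use std::thread;

// Hypothetical "brutal" relay: copy both directions, and as soon as either
// copy ends (EOF or error), shut down BOTH sockets. shutdown() also wakes
// the read blocked in the other copy, so neither socket can linger in
// CLOSE-WAIT -- at the cost of possibly dropping unread data.
fn brutal_relay(a: TcpStream, b: TcpStream) -> std::io::Result<()> {
    let (mut ar, mut aw) = (a.try_clone()?, a);
    let (mut br, mut bw) = (b.try_clone()?, b);

    let a2b = thread::spawn(move || {
        let _ = copy(&mut ar, &mut bw); // a -> b until EOF/error
        let _ = ar.shutdown(Shutdown::Both); // tear down a ...
        let _ = bw.shutdown(Shutdown::Both); // ... and b, waking b -> a
    });

    let _ = copy(&mut br, &mut aw); // b -> a
    let _ = br.shutdown(Shutdown::Both); // may already be shut down; ignored
    let _ = aw.shutdown(Shutdown::Both);
    let _ = a2b.join();
    Ok(()) // both TcpStreams dropped: fds closed, no CLOSE-WAIT left behind
}
```

The trade-off the maintainer objects to below is visible here: the first EOF kills the opposite direction outright, even if it still has data queued.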

f4nff commented


The client side also has many CLOSE-WAIT connections.
My client runs on OpenWrt.

I don't think the solution in zhboner/realm#87 is the final solution; it is only a temporary one.

We should focus the discussion on why so many connections were left in the CLOSE-WAIT state.

f4nff commented

The problem comes from the bidirectional io copy: one end is not properly released when the other closes. Go has defer, which always runs unconditionally,
so Go projects don't have CLOSE-WAIT.
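For comparison, a std-only sketch (assuming nothing about shadowsocks-rust's actual code): Rust does have an unconditional equivalent of Go's `defer conn.Close()`. A `TcpStream` closes its file descriptor when dropped, even if the handler panics, so a lingering CLOSE-WAIT points at a task that is still alive and holding the socket, not at a missing `defer`:

```rust
use std::io::Read;
use std::net::{TcpListener, TcpStream};
use std::thread;

// Show that the fd is closed (FIN sent) even when the owning thread panics:
// Drop runs during unwinding, just as a Go defer runs on return or panic.
fn fin_received_despite_panic() -> std::io::Result<usize> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    let handler = thread::spawn(move || {
        let _stream = TcpStream::connect(addr).unwrap();
        panic!("handler died"); // _stream is still dropped -> close(fd) -> FIN
    });

    let (mut conn, _) = listener.accept()?;
    let _ = handler.join(); // Err(panic payload); ignore it

    let mut buf = [0u8; 4];
    conn.read(&mut buf) // Ok(0): we saw the peer's FIN despite the panic
}
```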

f4nff commented

Early versions of realm also had the CLOSE-WAIT problem; later versions solved it.

> The problem comes from the bidirectional io copy: one end is not properly released when the other closes. Go has defer, which always runs unconditionally,

This behavior is wrong. It will lose data if there is something still unread in the other transfer direction.

> so Go projects don't have CLOSE-WAIT.

> Early versions of realm also had the CLOSE-WAIT problem; later versions solved it.

Yes, and they will definitely lose data in some situations. I don't think it is "solved".

I won't do the same unless there is no correct solution to this issue.

f4nff commented

If one side has already actively closed the connection, will keeping the other open still transfer data?

f4nff commented

When one side actively closes the connection, the other end should be actively released.

For reference, in go-shadowsocks2:

https://github.com/shadowsocks/go-shadowsocks2/blob/828576df0a9415d1f49e9433577389821e862f3c/tcp.go#L145-L166

It calls io.Copy in both directions and sets a read timeout when one of them ends.
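A relay can avoid both lingering CLOSE-WAIT sockets and the data loss discussed above by forwarding each EOF as a write-side shutdown and letting the opposite direction finish on its own. A std-only Rust sketch, purely illustrative (threads instead of tokio tasks, and not what go-shadowsocks2 or shadowsocks-rust literally do):

```rust
use std::io::copy;
use std::net::{Shutdown, TcpStream};
use std::thread;

// Half-close relay: when a -> b hits EOF, forward the FIN with a write-side
// shutdown on b, but keep copying b -> a until that side ends too. Nothing
// still in flight is discarded, and neither socket sits in CLOSE-WAIT,
// because both fds are dropped as soon as both directions have ended.
fn relay_half_close(a: TcpStream, b: TcpStream) -> std::io::Result<()> {
    let (mut ar, mut aw) = (a.try_clone()?, a);
    let (mut br, mut bw) = (b.try_clone()?, b);

    let a2b = thread::spawn(move || {
        let _ = copy(&mut ar, &mut bw); // a -> b until a's EOF
        let _ = bw.shutdown(Shutdown::Write); // forward a's FIN to b
    });

    let _ = copy(&mut br, &mut aw); // b -> a until b's EOF
    let _ = aw.shutdown(Shutdown::Write); // forward b's FIN to a
    let _ = a2b.join();
    Ok(()) // both streams dropped here: fds fully closed
}
```

This mirrors TCP's half-close semantics: each FIN is propagated immediately, so the peer that closed first is released, while the other direction drains normally.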

f4nff commented

A proxy is essentially data forwarding; there is no fundamental difference from realm.

f4nff commented

In my tests I found that if the forwarding channel is established but no data is sent, and the server dies, the channel doesn't notice.

f4nff commented

go-shadowsocks2 doesn't have this problem, but shadowsocks-rust does.

f4nff commented

I haven't figured out tokio's internals either.

> In my tests I found that if the forwarding channel is established but no data is sent, and the server dies, the channel doesn't notice.

If that is correct:

  1. Why doesn't your server send an RST to the local clients?
  2. Why can't your local client receive the RST?

> I haven't figured out tokio's internals either.

This has nothing to do with tokio. It is Linux's network behavior.

f4nff commented

And then in my tests I found that if the forwarding channel is established but no data is sent, and the server dies, the channel doesn't notice. I feel that's where the problem is.

f4nff commented

And the fix took exactly this situation into account.

You don't need to repeat that solution. I know exactly what it aims at and what it solved.

f4nff commented


So a large number of CLOSE-WAIT connections is correct?
Fine, if you think it's correct, then it's correct. I'll fix it myself later.

f4nff commented

If a large number of CLOSE-WAIT connections is correct,
why don't those C projects and Go projects have any?
Why does only your project have them?

I have never said that. If you continue attacking, discussion ends here.

f4nff commented

Fine, that's it then; I'll fix it myself in a few days.
The connection is already closed, so where would any more data come from...