[BUG] Rate-limiting policy has no effect on multi-NIC machines
juilletVent commented
Environment
System information
Linux BKVM4275716 5.15.0-71-generic #78-Ubuntu SMP Tue Apr 18 09:00:29 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
NIC information (addresses redacted for this report)
eth0 is the domestic (China-side) NIC; eth1 is the overseas-side NIC
br-84ee77784cda: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.21.0.1 netmask 255.255.0.0 broadcast 172.21.255.255
ether 02:42:89:75:38:99 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:1b:6d:77:c5 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 111.111.111.111 netmask 255.255.255.0 broadcast 111.111.111.255
inet6 fe80::216:3eff:fe07:3c04 prefixlen 64 scopeid 0x20<link>
ether 00:00:00:00:3c:04 txqueuelen 1000 (Ethernet)
RX packets 52104916 bytes 8011852583 (8.0 GB)
RX errors 0 dropped 13900 overruns 0 frame 0
TX packets 18316065 bytes 36121122223 (36.1 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 222.222.222.222 netmask 255.255.255.0 broadcast 222.222.222.255
inet6 fe80::216:3eff:feb4:cdb8 prefixlen 64 scopeid 0x20<link>
ether 00:00:00:b4:cd:b8 txqueuelen 1000 (Ethernet)
RX packets 18882336 bytes 35622782661 (35.6 GB)
RX errors 0 dropped 97256 overruns 0 frame 0
TX packets 15052288 bytes 4005707959 (4.0 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ifb0: flags=195<UP,BROADCAST,RUNNING,NOARP> mtu 1500
inet6 fe80::4c02:4dff:fe79:435c prefixlen 64 scopeid 0x20<link>
ether 4e:02:4d:79:43:5c txqueuelen 32 (Ethernet)
RX packets 18177530 bytes 35509288006 (35.5 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 18177530 bytes 35509288006 (35.5 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 15148 bytes 4793634 (4.7 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 15148 bytes 4793634 (4.7 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Routing information
default via 222.222.222.1 dev eth1
222.222.222.0/24 dev eth1 proto kernel scope link src 222.222.222.222
111.111.111.0/24 dev eth0 proto kernel scope link src 111.111.111.111
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.21.0.0/16 dev br-84ee77784cda proto kernel scope link src 172.21.0.1 linkdown
Scenario
Dedicated-line dual-IP machine: the host has both a domestic-side IP and an overseas-side IP, and the default route points at the overseas side.
Suspected cause
The root cause appears to be that the NIC-detection logic in the backend's tc traffic-control script doesn't cover enough scenarios. The current script is written for single-NIC machines, where it works fine and rate limiting behaves correctly. On a machine with multiple NICs, however, the detection logic picks the NIC that holds the default route (i.e., the external egress NIC) as the target of the tc policy, which renders the rate limit ineffective.
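For illustration, the usual shell idiom for "pick the default-route interface" looks like the sketch below. This is my assumption about what the linked tc.sh lines do, not the verbatim code; on this machine it selects eth1 (the overseas egress) rather than the domestic-side eth0:

```sh
# Common "pick the default-route NIC" idiom (assumption, not verbatim tc.sh):
# "ip route show default" prints "default via 222.222.222.1 dev eth1",
# so field 5 is the device name.
IFACE=$(ip route show default | awk '{print $5; exit}')
echo "$IFACE"   # -> eth1 on this machine, not the domestic-side eth0
```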
Generally, a dual-IP machine is configured with the overseas NIC as the system's default traffic egress and the domestic NIC as the ingress. I don't know tc in depth and have only done some simple testing, so here is my guess: since the ingress-side NIC carries no tc policy, traffic has already entered the machine on the domestic side and is then forwarded via the routing table to the overseas-side NIC; because the data is already in, the rate limit has no effect.
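A quick way to see this, under the same assumption: ask the kernel which device a non-local destination resolves to. With the default route on eth1, anything not on a directly connected subnet goes out eth1:

```sh
# "ip route get" asks the kernel which device/source address a destination
# would use; 8.8.8.8 stands in for an arbitrary non-local destination.
ip route get 8.8.8.8
# -> 8.8.8.8 via 222.222.222.1 dev eth1 src 222.222.222.222
```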
Manually adding a tc policy on the ingress-side NIC does take effect; combined with an ifb device, both inbound and outbound rates can be controlled correctly.
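For reference, a rough sketch of that manual setup, assuming eth0 is the ingress-side NIC (the 100mbit rate is a placeholder, not a value from the panel):

```sh
# Load the ifb module and bring up ifb0.
modprobe ifb
ip link set dev ifb0 up

# Attach an ingress qdisc to the domestic-side NIC and redirect all
# inbound packets to ifb0, where a normal egress qdisc can shape them.
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0

# Shape "inbound" traffic on ifb0 and outbound traffic on eth0 itself.
tc qdisc add dev ifb0 root tbf rate 100mbit burst 32kbit latency 400ms
tc qdisc add dev eth0 root tbf rate 100mbit burst 32kbit latency 400ms
```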
Configuring rate limiting from the panel on a single-NIC machine works correctly.
Code location
https://github.com/Aurora-Admin-Panel/backend/blob/main/ansible/project/files/tc.sh#L6-L7
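As a purely illustrative direction for a fix (TARGET_IP and this lookup are hypothetical, not existing tc.sh code): resolve the interface that owns the IP being rate-limited, rather than taking the default-route interface:

```sh
# Hypothetical sketch: pick the NIC that owns the managed IP.
# "ip -o -4 addr show" prints one line per address, e.g.
# "2: eth0    inet 111.111.111.111/24 ...", so field 4 is the CIDR address.
TARGET_IP=111.111.111.111
IFACE=$(ip -o -4 addr show | awk -v ip="$TARGET_IP" '$4 ~ "^"ip"/" {print $2; exit}')
echo "$IFACE"   # -> eth0 in this environment
```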