DPDK: unable to ping DPDK-kni-captured NIC port
pen9u1nlee opened this issue · 2 comments
Hi,
I followed the instructions here to run the KNI sample application with DPDK v20.11.8. Everything seems to work, but I cannot ping the KNI interface from a remote host.
The NIC status before binding is shown below; at this point, a remote ping to its IP (10.11.140.8) succeeds.
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.11.140.8 netmask 255.255.252.0 broadcast 10.11.143.255
inet6 fe80::ea61:1fff:????:???? prefixlen 64 scopeid 0x20<link>
ether e8:61:1f:??:??:?? txqueuelen 1000 (Ethernet)
RX packets 1652015 bytes 119549045 (119.5 MB)
RX errors 0 dropped 661719 overruns 0 frame 0
TX packets 18400 bytes 951902 (951.9 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
First, I bind enp3s0 to the uio_pci_generic driver (the same problem occurs when binding to vfio-pci) and load the rte_kni module with carrier=off.
???@??????:~/dpdk/usertools$ dpdk-devbind.py --status
Network devices using DPDK-compatible driver
============================================
0000:03:00.0 'I210 Gigabit Network Connection 1533' drv=uio_pci_generic unused=igb,vfio-pci
Network devices using kernel driver
===================================
......
???@??????:~/dpdk/build/kernel/linux/kni$ sudo rmmod rte_kni
???@??????:~/dpdk/build/kernel/linux/kni$ sudo insmod rte_kni.ko carrier=off
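For reference, the bind step itself is not shown above; assuming the PCI address from the dpdk-devbind.py --status output, it would typically look like this:

```shell
# Load the generic UIO driver, then bind the NIC to it.
# (0000:03:00.0 is the I210 address from the --status output.)
sudo modprobe uio_pci_generic
sudo dpdk-devbind.py --bind=uio_pci_generic 0000:03:00.0

# Confirm the device now appears under
# "Network devices using DPDK-compatible driver".
dpdk-devbind.py --status
```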
Then I launch the sample application, and it appears to start correctly.
???@??????:~/dpdk/build/examples$ sudo ./dpdk-kni -l 4-7 -n 4 -- -P -p 0x1 --config="(0,4,6,8)"
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Probe PCI driver: net_e1000_igb (8086:1533) device: 0000:03:00.0 (socket 0)
EAL: No legacy callbacks, legacy socket not created
APP: Initialising port 0 ...
Port 0, MAC address: E8:61:1F:??:??:??
Checking link status
............................done
Port 0 Link up at 1 Gbps FDX Autoneg
APP: ========================
APP: KNI Running
APP: kill -SIGUSR1 243150
APP: Show KNI Statistics.
APP: kill -SIGUSR2 243150
APP: Zero KNI Statistics.
APP: ========================
APP: Lcore 5 has nothing to do
APP: Lcore 6 is writing to port 0
APP: Lcore 4 is reading from port 0
APP: Lcore 7 has nothing to do
Finally, the IP address, netmask, and gateway are added; the ifconfig and ip addr results are shown below:
vEth0_0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.11.140.8 netmask 255.255.252.0 broadcast 10.11.143.255
ether e8:61:1f:??:??:?? txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vEth0_0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
link/ether e8:61:1f:??:??:?? brd ff:ff:ff:ff:ff:ff
inet 10.11.140.8/22 brd 10.11.143.255 scope global vEth0_0
valid_lft forever preferred_lft forever
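The configuration step described above can be sketched with iproute2 as follows; the gateway address does not appear in the output, so it is left as a placeholder:

```shell
# Assign the address/netmask shown in the ip addr output and bring the link up.
sudo ip addr add 10.11.140.8/22 dev vEth0_0
sudo ip link set vEth0_0 up

# Add the default route; replace <gateway> with the actual gateway address.
sudo ip route add default via <gateway> dev vEth0_0
```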
The KNI veth is UP but not RUNNING (NO-CARRIER), and remote pings to 10.11.140.8 fail.
Consequently, my questions are as follows:
- What else needs to be configured so that the DPDK-captured NIC can be pinged remotely?
- Is it possible to set the IP of the DPDK-captured NIC directly? I found that the MAC, MTU, and IP can be configured on the KNI veth, but I cannot find a way to configure an IP on the DPDK-captured physical NIC.
Thanks a lot in advance!
Try exec echo 1 > /sys/class/net/vEth0_0/carrier
or reload the module with insmod rte_kni.ko carrier=on
?
Thanks, the problem was solved after a restart... and cat /sys/class/net/vEth0_0/carrier
now shows 1.