Bug 88161
| Summary: | High traffic causes a lot of softirqs | | |
|---|---|---|---|
| Product: | Networking | Reporter: | Mike Zupan (mike) |
| Component: | Other | Assignee: | Stephen Hemminger (stephen) |
| Status: | NEEDINFO | | |
| Severity: | high | CC: | alan, asilva, dmitry.samsonov, szg00000 |
| Priority: | P1 | | |
| Hardware: | Intel | | |
| OS: | Linux | | |
| Kernel Version: | 3.17.2 | Subsystem: | |
| Regression: | No | Bisected commit-id: | |
Description
Mike Zupan
2014-11-13 14:18:19 UTC
Sorry, here are the NICs we have on the system:

```
06:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
06:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
06:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
06:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
```

kworker just handles offloaded work, so if the box is being hammered then it's not unreasonable for it to be high. What makes you think it's doing lots of ACPI calls?

Same issue here with all CentOS 7 and elrepo kernels. NET_RX is many times bigger than NET_TX:

```
NET_TX:     94345     94868     94714     94441     96972     97641
NET_RX: 466312374 484706991 484924300 494927859 500039928 499807940
```

I'm seeing similar behavior with the same network card. After updating from kernel 3.10.58 to 4.4.38, under higher traffic I start to see a lot of packet loss, with one or two of my CPU cores pegged (per htop). I recently updated to kernel 4.9.20, but got the same results. I've tried some options (queue size, GRO, ...) to improve network performance, but the issue is still at the softirq level. In top, what I see is that the traffic seems to get pinned to a single softirq thread, which didn't happen on the previous kernel:

```
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
   29 root      20   0       0      0      0 R  99.9  0.0   0:21.30 ksoftirqd/3
  114 root      20   0       0      0      0 R  99.6  0.0   0:19.02 ksoftirqd/17
```

In /proc/softirqs and /proc/interrupts I didn't find anything strange. Please tell me if there is any more info I can provide to help. My server is just doing routing with iptables. Regards,

Some extra info: after reading issue #109581 I set the qdisc to prio_fast, and there is no more CPU usage in softirq.
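The NET_TX/NET_RX numbers quoted above are rows from /proc/softirqs, where each column is a per-CPU counter. As a minimal sketch (the `parse_softirqs` helper is hypothetical, not from this thread), here is how one could sum those per-CPU counts to compare softirq totals the way the commenter did:

```python
def parse_softirqs(text):
    """Parse /proc/softirqs-style text into {softirq_name: total_across_cpus}.

    Assumes the standard layout: a header line of CPU columns, then one
    row per softirq of the form 'NAME:  count count ...'.
    """
    totals = {}
    lines = text.strip().splitlines()
    # Skip the CPU header line (CPU0 CPU1 ...).
    for line in lines[1:]:
        name, _, counts = line.partition(":")
        totals[name.strip()] = sum(int(c) for c in counts.split())
    return totals


# Sample using two of the columns reported in the thread.
sample = """\
            CPU0        CPU1
   NET_TX:  94345       94868
   NET_RX:  466312374   484706991
"""
totals = parse_softirqs(sample)
# On a receive-heavy router, NET_RX dwarfing NET_TX (as here) is expected.
```

Taking two snapshots a second apart and diffing the totals gives a per-second softirq rate, which is more informative than the raw cumulative counters.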
Rules that I have installed:

```
ip link set dev eth2.24 txqlen 1000
tc qdisc del dev eth2.24 root
tc qdisc add dev eth2.24 root handle 1: prio bands 3
tc qdisc add dev eth2.24 parent 1:1 handle 10: pfifo limit 50
tc qdisc add dev eth2.24 parent 1:2 handle 2: hfsc default 2
tc class add dev eth2.24 parent 2: classid 2:1 hfsc sc rate 300000kbit ul rate 300000kbit
tc class add dev eth2.24 parent 2: classid 2:2 hfsc sc rate 300000kbit ul rate 300000kbit
```

Side note: I notice this issue is more than a year old... did you manage to solve your issue? Can you share how? Regards,
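For reference, the workaround the commenter describes ("set the qdisc to prio_fast", presumably the kernel's default pfifo_fast qdisc) would look roughly like the following. This is a sketch of the reported fix, not a confirmed resolution of the bug, and the interface name is taken from the rules above:

```shell
# Drop the custom prio/hfsc tree and fall back to the default qdisc.
tc qdisc del dev eth2.24 root
tc qdisc add dev eth2.24 root pfifo_fast

# Alternatively, set the default qdisc system-wide (kernel >= 3.12),
# so newly configured interfaces get it automatically:
sysctl -w net.core.default_qdisc=pfifo_fast
```

Note that this trades the hfsc rate limiting for lower per-packet qdisc overhead, which is consistent with the softirq CPU usage disappearing.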