Kernel Bug Tracker – Bug 16525
unexpected high load since 2.6.35
Last modified: 2011-03-30 23:45:08 UTC
Subject : unexpected high load since 2.6.35
Submitter : "MadLoisae@gmx.net" <MadLoisae@gmx.net>
Date : 2010-08-02 20:53
Message-ID : 4C573041.email@example.com
References : http://marc.info/?l=linux-kernel&m=128078243726655&w=2
This entry is being used for tracking a regression from 2.6.34. Please don't
close it until the problem is fixed in the mainline.
*** Bug 16563 has been marked as a duplicate of this bug. ***
I marked my bug as a duplicate, but I use x86-64. I hope it doesn't matter, because this bug is marked as i386. If you need any testing, don't hesitate to ask me; I will help as much as I can. Thanks.
I'm not an expert on the kernel, but is the new Google patch (receive packet/flow steering) enabled by default in 2.6.35? If so, how can I disable it? I suspect it causes the high load, because the high load happens every time I watch a video on YouTube or listen to last.fm streaming. Maybe it's just a coincidence, I don't know.
Same here on Arch, Core 2 Duo (x86).
Usually, when doing simple tasks, I have load averages of 0.4 (where kernel 2.6.34 was around 0.15!).
Also, the Big Man Linus noticed this on his macbook (http://lkml.org/lkml/2010/8/15/111)
Same... Arch 2.6.35 i686 kernel
Core 2 duo / Nvidia 9500
0.40 / 0.70 Idle
still the same with 126.96.36.199
Seeing something similar with 2.6.35 on x86_64 NO_HZ. Tried the drm_crtc_helper patch to disable polling with no effect. Under 100% (spinning) CPU, load is very nearly 1.00 as expected, but with less than 1% CPU (as per /proc/stat, ps, top, vmstat, etc.) load oscillates wildly between 0.2 and 1.0, averaging around 0.55 (always below 0.02 under identical conditions/config on <=188.8.131.52).
Can someone try linus tree as of now? He just pulled a bunch of intel-gfx regression fixes and reverts which probably fix the SDVO issue he was seeing.
This might not be the only cause of this as there is also some Nvidia machines affected here, it seems?
If the issue persists:
Can someone please try to bisect this issue?
I have ATI, no intel at all, and the issue happens without any drm/gpu/fb stuff loaded. I'll try what I can.
I have this problem too since 2.6.35 ish, I'm on ati drm.
I cannot find what causes it. vmstat io is quiet, cpu is low
Disabling NO_HZ seems to fix it here
For me as well: without NO_HZ but with an otherwise identical config (184.108.40.206), my load is back to 0.01 when idle. Could this have something to do with sampling: if calc_load_fold_active is only called when we wake up, then it's biased to sample when something is runnable.
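That sampling-bias hypothesis can be sketched numerically. The following is a toy model, not kernel code: it uses the classic loadavg recurrence load = load*e + n_active*(1-e) with e = exp(-5/60) (the 1-minute figure, updated every 5 s), and compares a sampler that runs on every tick against one that only folds in a sample when the task happens to be runnable, as a NO_HZ CPU waking up would.

```shell
# Toy model of loadavg sampling bias (illustration only, not kernel code).
# The simulated task is runnable 1 tick in 10, so true utilization is ~10%.
result=$(awk 'BEGIN {
    e = exp(-5.0 / 60.0)          # per-5s decay for the 1-minute average
    fair = 0; biased = 0
    for (i = 0; i < 1000; i++) {
        active = (i % 10 == 0) ? 1 : 0
        # fair sampler: updated on every tick, idle ticks included
        fair = fair * e + active * (1 - e)
        # biased sampler: only updated at wakeups, so it never sees idle
        if (active)
            biased = biased * e + active * (1 - e)
    }
    printf "fair=%.2f biased=%.2f", fair, biased
}')
echo "$result"
```

The fair estimate settles near the true ~0.1, while the wakeup-only sampler drifts toward 1.00 for a task that is 90% idle, which is the kind of inflation reported in the comments above.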
Can you try with NOHZ enabled and the following on the kernel command line:
Make sure that CONFIG_ACPI_PROCESSOR is set to "y" and not to "m"
If that works, then try
Sorry, I screwed up the quoting; the above is a response from Thomas Gleixner on a thread on LKML, which might (or might not) be related.
Disabling NO_HZ also fixed the problem for me here.
Any ideas what is causing it?
Can you retest with 220.127.116.11 ?
If the issue persists, could someone test if processor.max_cstate=1 on the kernel commandline (with NOHZ enabled) fixes the issue as per Comment #13 ?
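For anyone trying that suggestion, here is a hedged sketch of how to verify the setup. The /proc/acpi path is the 2.6.3x-era interface and the config file locations vary by distro, so treat the exact paths as examples, not as guaranteed to exist on every build.

```shell
# What was the running kernel booted with?  After adding
# processor.max_cstate=1 to the bootloader entry and rebooting,
# it should show up here.
cmdline=$(cat /proc/cmdline 2>/dev/null || echo "unknown")
echo "cmdline: $cmdline"

# CONFIG_ACPI_PROCESSOR must be =y (built in), not =m, for the
# boot-time max_cstate limit to take effect.
{ zcat /proc/config.gz 2>/dev/null || cat "/boot/config-$(uname -r)" 2>/dev/null; } \
    | grep CONFIG_ACPI_PROCESSOR || echo "kernel config not exposed"

# Per-CPU C-state residency as exposed by 2.6.3x ACPI (absent on newer
# kernels, which use /sys/devices/system/cpu/*/cpuidle instead).
cat /proc/acpi/processor/CPU0/power 2>/dev/null \
    || echo "/proc/acpi C-state info not available here"
```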
I already tested with 18.104.22.168; same issue.
I am testing now comment 13, will take a while.
According to /proc/acpi/processor/CPU0/power, my processor only has C1 state (no power management) and throttling is disabled too. The problem also persists on 22.214.171.124.
After running for a while without NOHZ (HZ=300) I believe the problem isn't totally gone, just much better, at least on my machine. After a couple hours of <1% CPU on 2.6.34, 15m load is 0.00. On 2.6.35 without NOHZ, it varies around 0.07-0.10 (mean 0.08 std 0.01, and 1m is mean 0.10 std 0.06). With NOHZ, it's 0.50-0.60. That is, it still looks like there's some random noise added to the load samples, but much less.
processor.max_cstate=1 shows the exact same behaviour as without.
I tried HZ=250 with NOHZ=1 (I do not know if this matters).
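To make "same behaviour" comparisons like these less subjective, a small sampler of this sort (nothing kernel-specific assumed; it just reads /proc/loadavg) reduces each boot to one number that can be compared across kernels:

```shell
# Average a handful of 1-minute loadavg samples; run this while idle on a
# good and on a bad kernel and compare the two means.
samples=5
sum=0
i=0
while [ "$i" -lt "$samples" ]; do
    read load1 _ < /proc/loadavg    # first field is the 1-minute average
    sum=$(awk -v s="$sum" -v l="$load1" 'BEGIN { print s + l }')
    i=$((i + 1))
    sleep 1
done
mean=$(awk -v s="$sum" -v n="$samples" 'BEGIN { printf "%.2f", s / n }')
echo "mean 1m load over $samples samples: $mean"
```

With longer runs (more samples, longer sleep) this also captures the slow oscillation several reporters describe.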
Thanks for testing. So we can rule out the hpet issue that is already fixed in 126.96.36.199.
Also the polling issue of i915 drm driver can be ruled out by:
1. not having any intel gfx
2. the issue persists while polling in the intel drm is disabled. 
Can (some of) you find similarities with these reports:
(kworker: high CPU usage -> system sluggish)
(Why is kslowd accumulating so much CPU time?)
I don't have the description of the intel gfx polling issue handy atm; maybe someone can give a pointer? It was some module parameter, maybe not in mainline yet.
Using x64 here on 188.8.131.52. I'm experiencing the same high load since .35.1; currently I am only typing this message and the load is 0.80 0.65 0.56. This is on a Core 2 Duo 2.26GHz (MacBook Pro - Nvidia), which had an average load of 0.00 - 0.05 on kernel .34.
It's important to note that CPU usage is low, around 5% on average, and the temperature is more or less normal. There is no visible high IO in iotop etc. I don't experience obvious sluggishness, but I'm using an Intel X25 SSD, so it could take a while before this becomes really perceptible.
nohz=off and highres=off have no effect on my load.
I do not see anything interesting for my usage scenario in the threads you link to above, unfortunately (low cpu, kslowd time is close to zero).
How is it with interrupts? Does powertop show any significant changes between a good and a bad kernel?
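A poor man's alternative to powertop for that comparison is to diff two /proc/interrupts snapshots; the sleep interval here is arbitrary.

```shell
# Count how many interrupt sources fired during a short idle window; on a
# "bad" kernel the timer/hrtimer lines would be the ones to watch grow.
before=$(mktemp)
after=$(mktemp)
cat /proc/interrupts > "$before"
sleep 2
cat /proc/interrupts > "$after"
grown=$(diff "$before" "$after" | grep -c '^>')
echo "$grown interrupt lines changed in 2s"
rm -f "$before" "$after"
```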
Have you already narrowed the regression down to between 2.6.35 and 184.108.40.206?
What was the last good kernel?
If it is indeed narrowed down to the 39 patches that build 220.127.116.11, a bisection on the commits that affect your .config might be the quickest approach. Can you reliably tell a good kernel apart from a bad kernel?
I'm sorry for the confusion, Florian: 18.104.22.168 was the first revision of kernel 2.6.35 that I tested; I didn't mean that 22.214.171.124 was good for me.
I will compare powertop readings as soon as I have some time available.
Ok, thanks for the clarification. What kernel version was the last good one for you?
My last good kernel was 126.96.36.199, which was the last 34-kernel I used before going to 35. Here is a first comparison of top interrupt users when idle in X:
188.8.131.52 - load 0.01 0.03 0.02 - C4 mwait 99.5%
43% Load balancing tick
10% Extra timer interrupt
184.108.40.206 - load 0.66 0.50 0.55 - C4 mwait 99.7%
40% Load balancing tick
9% Hrtimer_start (tick_sched_timer)
I'm trying to bisect this; so far I have it narrowed down to about 42 commits. Here is the current status:
# good: [3aaf51ace5975050ab43c7d4d7e439e0ae7d13d7] Merge branch 'oprofile-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
git bisect good 3aaf51ace5975050ab43c7d4d7e439e0ae7d13d7
# bad: [b8ae30ee26d379db436b0b8c8c3ff1b52f69e5d1] Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
git bisect bad b8ae30ee26d379db436b0b8c8c3ff1b52f69e5d1
I believe i have found the offending merge, which is:
"Merge branch 'cpu_stop' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc into sched/core"
It consists of 10 commits, including load average and NO_HZ changes, which I cannot bisect since they result in panics here. It was merged right after 2.6.34-rc6.
Last good is: 371fd7e7a56a5c136d31aa980011bd2f131c3ef5
If there's anything else I can provide, please let me know.
An addition: the actual last good is:
Which is the next commit after the one in my previous post.
Awesome. Let's cc Tejun for a little bit of know-how.
Oh, and can you post your .config and a good and a bad dmesg?
Created attachment 31562 [details]
Created attachment 31572 [details]
Created attachment 31582 [details]
Could it be possible that the 'high load' is a consequence of
Author: Peter Zijlstra <firstname.lastname@example.org> 2010-04-22 21:50:19
Committer: Ingo Molnar <email@example.com> 2010-04-23 11:02:02
sched: Cure load average vs NO_HZ woes
which definitely changes the load numbers for your .config.
(And probably for most other users as well)
Does that commit contradict your bisection results?
I tried bisecting the commits in the cpu_stop branch, of which the above NO_HZ commit is one, yet every kernel produced resulted in a panic. How can I test this?
Not easy. I tried reverting commit 74f5187 on top of current mainline, but it conflicts with some later change. I will dig into the code later and see if I can reverse the change manually, but I don't know...
On lkml there is a report about a similar problem.
Do you also see performance differences between good and bad kernels?
(for example doing timed kernel compiles)
good: 30:22 minutes
bad: 30:57 minutes
Ok. Turns out that in that other case (which was bisected to the above commit), there also was no performance impact in the end.
Can you boot a kernel with the low load averages with nohz=off on the kernel command line, and then compare the load averages with newer kernels?
The theory behind that experiment is that the new load averages are the un-broken ones; i.e. before, for all nohz=on kernels, the load averages were too low.
That theory would be somewhat backed by the changelog of the above commit.
Maybe related: about two years ago I changed from 2.6.25.x to 2.6.26. Since 2.6.26 I had the "low" load; before that, the load was comparable to now.
But why is the load lower as soon as I change to a non-tickless system? If the load is now calculated correctly for NOHZ, is it then calculated wrongly for HZ=250?
Florian, i don't think i can confirm your theory.
Good kernel with nohz=off gives around 0.30 load after boot, then slowly (seemingly a bit slower than without nohz=off) falls back to 0.00 and stays there.
Bad kernel gives around 0.30 load after boot, then starts to jump up and down between 0.30 and 0.80. After 10 minutes the average load is still 0.30, which would be a lot for a C2D/SSD system with no X or network running. It never reaches 0.00.
I also feel, but am not entirely certain, that a bad kernel gives about 4-5 degrees higher CPU temperature after 10 minutes idle. Don't take this as an objective fact at this point, though.
processor : 0
vendor_id : GenuineIntel
cpu family : 15
model : 4
model name : Intel(R) Pentium(R) 4 CPU 3.20GHz
stepping : 1
cpu MHz : 3191.913
cache size : 1024 KB
No GUI, Slackware 13.1, local file server.
The only things that run are NFS and MySQL. The load never exceeded 0.00 with earlier kernels, but since 2.6.35 and subsequently tested versions including .7, like others here, the idle load avg is between 0.20 and 0.40.
Last known good kernel 220.127.116.11 (which I'll be booting back into shortly)
Tested nohz=off: the load is lower, avg 0.05, but since this box does nothing, idling all the time, that is still regarded as a bit high.
Likely related to fdf3e95d3916f18bf8703fb065499fdbc4dfe34c (search in
A test patch has been proposed by the author in this discussion thread:
Hopefully we'll see the final version make its way into subsequent 2.6.36.x patches.
This patch has now been queued by Ingo:
merged for v2.6.37-rc7:
Author: Peter Zijlstra <firstname.lastname@example.org>
Date: Tue Nov 30 19:48:45 2010 +0100
sched: Cure more NO_HZ load average woes