Summary:    unexpected high load since 2.6.35
Product:    Platform Specific/Hardware
Reporter:   Maciej Rutecki (maciej.rutecki)
Severity:   normal
CC:         a.p.zijlstra, adrien, bugs, dylan-kernel, florian, fragabr, goeran, julio90001, kernel.org, krcroft, maciej.rutecki, MadLoisae, mishu, nospam, ole, orion, oskarw85.spam, pheldens, rissko, rjw, tj, umbertotozzato
Bug Depends on:
Description Maciej Rutecki 2010-08-05 17:55:34 UTC
Subject    : unexpected high load since 2.6.35
Submitter  : "MadLoisae@gmx.net" <MadLoisae@gmx.net>
Date       : 2010-08-02 20:53
Message-ID : 4C573041.firstname.lastname@example.org
References : http://marc.info/?l=linux-kernel&m=128078243726655&w=2

This entry is being used for tracking a regression from 2.6.34. Please don't close it until the problem is fixed in the mainline.
Comment 1 Dâniel Fraga 2010-08-11 16:44:50 UTC
*** Bug 16563 has been marked as a duplicate of this bug. ***
Comment 2 Dâniel Fraga 2010-08-11 16:47:41 UTC
I marked my bug as a duplicate, but I use x86-64. I hope that doesn't matter, since this bug is filed as i386. If you need any testing, don't hesitate to ask me; I will help as much as I can. Thanks.
Comment 3 Dâniel Fraga 2010-08-11 18:39:20 UTC
I'm not a kernel expert, but is the new Google patch (receive packet/flow steering) enabled by default in 2.6.35? If so, how can I disable it? I suspect it causes the high load, because every time I watch a video on YouTube or listen to last.fm streaming, the high load appears. Maybe it's just a coincidence, I don't know. Thanks.
Comment 4 Umberto Tozzato 2010-08-28 11:47:15 UTC
Same here on Arch, Core 2 Duo (x86). When doing simple tasks I usually see load averages around 0.4 (where kernel 2.6.34 was around 0.15!). Linus himself also noticed this on his MacBook (http://lkml.org/lkml/2010/8/15/111).
Comment 5 Andrew 2010-08-28 13:05:20 UTC
Same here... Arch 2.6.35 i686 kernel, Core 2 Duo / Nvidia 9500. Load 0.40 / 0.70 when idle.
Comment 6 Cyber 2010-08-30 21:13:38 UTC
Still the same with 2.6.35.x.
Comment 7 Dylan Simon 2010-09-06 22:50:22 UTC
Seeing something similar with 2.6.35 on x86_64 with NO_HZ. Tried the drm_crtc_helper patch to disable polling, with no effect. Under 100% (spinning) CPU, load is very nearly 1.00 as expected, but with less than 1% CPU (per /proc/stat, ps, top, vmstat, etc.) load oscillates wildly between 0.2 and 1.0, averaging around 0.55 (it was always below 0.02 under identical conditions/config on <= 2.6.34).
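The comparison described here, a near-idle CPU next to a nonzero load average, can be reproduced with a short script that samples /proc/loadavg against /proc/stat. This is a generic sketch, not part of the original report; both files are standard Linux procfs.

```shell
#!/bin/sh
# Sample the 1-minute load average alongside actual CPU busy time, so the
# two can be compared directly (a high load1 with ~0% cpu_busy is the bug).
prev_busy=0; prev_total=0
for i in 1 2 3; do
    read load1 rest < /proc/loadavg        # first field: 1-minute average
    set -- $(grep '^cpu ' /proc/stat)      # aggregate line: user nice system idle ...
    shift                                  # drop the "cpu" label
    idle=$4
    total=0
    for v in "$@"; do total=$((total + v)); done
    busy=$((total - idle))
    if [ "$prev_total" -ne 0 ]; then
        dt=$((total - prev_total)); db=$((busy - prev_busy))
        if [ "$dt" -gt 0 ]; then
            echo "load1=$load1 cpu_busy=$((100 * db / dt))%"
        fi
    fi
    prev_busy=$busy; prev_total=$total
    sleep 1
done
```

On an affected kernel this prints lines like "load1=0.55 cpu_busy=0%" while idle.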
Comment 8 Florian Mickler 2010-09-08 12:36:59 UTC
Can someone try Linus' tree as of now? He just pulled a bunch of intel-gfx regression fixes and reverts which probably fix the SDVO issue he was seeing. This might not be the only cause, though, as some Nvidia machines seem to be affected here as well. If the issue persists: can someone please try to bisect it?
Comment 9 Dylan Simon 2010-09-08 12:59:02 UTC
I have ATI, no intel at all, and the issue happens without any drm/gpu/fb stuff loaded. I'll try what I can.
Comment 10 Paul Heldens 2010-09-11 14:33:22 UTC
I have this problem too since 2.6.35-ish; I'm on ATI drm. I cannot find what causes it. vmstat shows I/O is quiet and CPU is low.
Comment 11 Paul Heldens 2010-09-12 12:17:09 UTC
Disabling NO_HZ seems to fix it here
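For anyone wanting to try the same experiment, the tickless setting can be inspected and overridden without rebuilding. A sketch; the config file path varies by distribution:

```shell
# See whether the running kernel was built tickless (CONFIG_NO_HZ=y).
grep 'CONFIG_NO_HZ' "/boot/config-$(uname -r)" 2>/dev/null || \
    echo "config for $(uname -r) not found in /boot"
# A NO_HZ=y kernel still honours "nohz=off" on the kernel command line,
# which restores the periodic tick without a rebuild; check what was booted:
cat /proc/cmdline
```

Adding nohz=off to the bootloader's kernel line and rebooting is then enough to compare the two modes.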
Comment 12 Dylan Simon 2010-09-12 20:16:28 UTC
For me as well: with NO_HZ disabled but an otherwise identical config (2.6.35.x), my load is back to 0.01 when idle. Could this have something to do with sampling: if calc_load_fold_active runs only when we wake up, then it's biased to sample when something is runnable.
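The sampling-bias hypothesis above can be illustrated with a toy model. This is purely illustrative, not the kernel's actual calc_load code, and the decay constant is arbitrary: an averager updated on every tick converges to the true ~1% utilization, while one updated only on ticks where something is runnable gets pinned near 1.0.

```shell
# Toy exponential average of a run queue that holds 1 task for 1 tick out
# of every 100 (1% utilization). "fair" is updated every tick; "biased" is
# updated only when the CPU is awake, i.e. exactly when nrun > 0.
out=$(LC_ALL=C awk 'BEGIN {
    d = exp(-1.0 / 12)                     # arbitrary per-tick decay factor
    fair = 0; biased = 0; sum = 0; n = 0
    for (t = 0; t < 10000; t++) {
        nrun = (t % 100 == 0) ? 1 : 0
        fair = fair * d + nrun * (1 - d)   # sees the idle ticks
        if (nrun > 0)
            biased = biased * d + nrun * (1 - d)   # never sees them
        if (t >= 5000) { sum += fair; n++ }        # steady-state average
    }
    printf "fair_avg=%.2f biased=%.2f", sum / n, biased
}')
echo "$out"
```

The fair average settles at the true utilization (0.01) while the biased one reads 1.00, which matches idle load averages hovering far above actual CPU use.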
Comment 13 Florian Mickler 2010-09-16 08:20:24 UTC
Can you try with NOHZ enabled and the following on the kernel command line:

processor.max_cstate=1

Make sure that CONFIG_ACPI_PROCESSOR is set to "y" and not to "m". If that works, then try processor.max_cstate=2.

Thanks, tglx
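To follow these instructions, the config option and the C-states currently in use can be checked like this. A sketch; file locations vary by distribution and kernel version:

```shell
# Confirm the ACPI processor driver is built in (=y, not =m).
cfg=$(grep '^CONFIG_ACPI_PROCESSOR=' "/boot/config-$(uname -r)" 2>/dev/null)
echo "${cfg:-CONFIG_ACPI_PROCESSOR not found in /boot config}"
# On 2.6.x kernels the C-states actually in use are listed per CPU here:
cat /proc/acpi/processor/CPU0/power 2>/dev/null || \
    echo "no /proc/acpi C-state info on this machine"
# If CONFIG_ACPI_PROCESSOR=y, append to the kernel command line and reboot:
#   processor.max_cstate=1
```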
Comment 14 Florian Mickler 2010-09-16 08:23:56 UTC
Sorry, screwed up the quoting: the above is a response from Thomas Gleixner to a thread on LKML which might (or might not) be related. References: http://lkml.org/lkml/2010/9/14/105
Comment 15 Cyber 2010-09-22 10:27:19 UTC
Disabling NO_HZ also fixed the problem for me here. Any ideas what is causing it?
Comment 16 Florian Mickler 2010-09-22 10:36:08 UTC
Can you retest with 2.6.35.5? If the issue persists, could someone test whether processor.max_cstate=1 on the kernel command line (with NOHZ enabled) fixes the issue, as per comment #13?
Comment 17 Cyber 2010-09-22 12:10:59 UTC
I already tested with 2.6.35.5: same issue. I am now testing comment 13; it will take a while.
Comment 18 Dylan Simon 2010-09-22 15:18:57 UTC
According to /proc/acpi/processor/CPU0/power, my processor only has the C1 state (no power management), and throttling is disabled too. The problem also persists on 2.6.35.5. After running for a while without NOHZ (HZ=300), I believe the problem isn't totally gone, just much better, at least on my machine. After a couple of hours of <1% CPU on 2.6.34, the 15m load is 0.00. On 2.6.35 without NOHZ, it varies around 0.07-0.10 (mean 0.08, std 0.01; the 1m load has mean 0.10, std 0.06). With NOHZ, it's 0.50-0.60. That is, it still looks like some random noise is added to the load samples, but much less.
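Mean/std figures like those quoted above are easy to reproduce from a series of samples. A hedged sketch: samples.txt is a hypothetical file with one load reading per line, e.g. collected once a minute from /proc/loadavg.

```shell
# Collect samples like this (run for a while on the machine under test):
#   while sleep 60; do cut -d' ' -f1 /proc/loadavg; done >> samples.txt
# Demo input so the script below is runnable as-is:
printf '0.07\n0.08\n0.10\n' > samples.txt
# One-pass mean and (population) standard deviation:
stats=$(LC_ALL=C awk '{ n++; s += $1; q += $1 * $1 }
    END { m = s / n; printf "n=%d mean=%.2f std=%.2f", n, m, sqrt(q / n - m * m) }' samples.txt)
echo "$stats"
```

For the demo input this prints "n=3 mean=0.08 std=0.01".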
Comment 19 Cyber 2010-09-22 19:37:33 UTC
processor.max_cstate=1 shows exactly the same behaviour as without it. I tried HZ=250 with NOHZ=1 (I do not know if this matters).
Comment 20 Florian Mickler 2010-09-22 20:33:45 UTC
Thanks for testing. So we can rule out the hpet issue that is already fixed in 2.6.35.5. The polling issue of the i915 drm driver can also be ruled out, because: 1. some affected machines have no Intel gfx at all, OR 2. the issue persists while polling in the intel drm is disabled.

Can (some of) you find similarities with these reports?
https://bugzilla.kernel.org/show_bug.cgi?id=18802 (kworker: high CPU usage -> system sluggish)
https://bugzilla.kernel.org/show_bug.cgi?id=16265 (Why is kslowd accumulating so much CPU time?)
http://lkml.org/lkml/2010/9/22/71

Regards, Flo

P.S.: I don't have the description of the intel gfx polling issue handy at the moment, maybe someone can give a pointer? It was some module parameter, maybe not in mainline yet.
Comment 21 Olivier Keun 2010-09-23 12:04:28 UTC
Florian, using x86-64 here on 2.6.35.1. I'm experiencing the same high load since .35.1: currently I am only typing this message and load is 0.80 0.65 0.56. This is on a Core 2 Duo 2.26GHz (MacBook Pro - Nvidia), which had an average load of 0.00-0.05 on kernel .34. It's important to note that CPU usage is low, around 5% on average, and temperature is more or less normal. There is no visible high I/O in iotop etc. I don't experience obvious sluggishness, but I'm using an Intel X25 SSD, so it could take a while before that becomes really perceivable. nohz=off and highres=off have no effect on my load. Unfortunately, I do not see anything matching my usage scenario in the threads you link to above (low CPU, kslowd time close to zero). Best regards, Olivier.
Comment 22 Florian Mickler 2010-09-23 19:38:02 UTC
Hi Olivier! How is it with interrupts? Does powertop show any significant changes between a good and a bad kernel? Have you already narrowed the regression down to between 2.6.35 and 2.6.35.1? What was the last good kernel? If it is indeed narrowed down to the 39 patches that build 2.6.35.1, a bisection on the commits that affect your .config might be the quickest approach. Can you reliably tell a good kernel apart from a bad kernel?
Comment 23 Olivier Keun 2010-09-23 19:55:09 UTC
I'm sorry for the confusion, Florian: 2.6.35.1 was the first revision of kernel 2.6.35 that I tested; I didn't mean 2.6.35 was good for me. I will compare powertop readings as soon as I have some time available.
Comment 24 Florian Mickler 2010-09-24 05:00:32 UTC
Ok, thx for the clarification. What kernel version was the last good one for you?
Comment 25 Olivier Keun 2010-09-24 07:46:14 UTC
My last good kernel was 2.6.34.x, the last 34-kernel I used before going to 35. Here is a first comparison of the top interrupt users when idle in X:

2.6.34.x - load 0.01 0.03 0.02 - C4 mwait 99.5%
43% Load balancing tick
17% wlan0
10% Extra timer interrupt
10% ACPI

2.6.35.1 - load 0.66 0.50 0.55 - C4 mwait 99.7%
40% Load balancing tick
9% hrtimer_start (tick_sched_timer)
8% wlan0
8% Card reader USB
7% ehci_hcd:usb2
7% sleepd
Comment 26 Olivier Keun 2010-09-25 18:36:39 UTC
I'm trying to bisect this; so far I have it narrowed down to about 42 commits. Here is the current status:

# good: [3aaf51ace5975050ab43c7d4d7e439e0ae7d13d7] Merge branch 'oprofile-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
git bisect good 3aaf51ace5975050ab43c7d4d7e439e0ae7d13d7
# bad: [b8ae30ee26d379db436b0b8c8c3ff1b52f69e5d1] Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
git bisect bad b8ae30ee26d379db436b0b8c8c3ff1b52f69e5d1
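For reference, a session like the one above is driven by a handful of commands. The sketch below demonstrates the mechanics on a throwaway repository; the real bisection ran against Linus' linux-2.6 tree, with a build-boot-observe cycle instead of the scripted check used here.

```shell
# Build a toy history of 10 commits where the "regression" lands at commit 7,
# then let git bisect find it automatically.
workdir=$(mktemp -d) && cd "$workdir"
git init -q demo && cd demo
git config user.email "you@example.com" && git config user.name "demo"
for i in 1 2 3 4 5 6 7 8 9 10; do
    if [ "$i" -ge 7 ]; then touch regression; fi   # the bug appears here
    echo "$i" > file && git add -A && git commit -qm "commit $i"
done
git bisect start
git bisect bad HEAD          # current kernel shows the high idle load
git bisect good HEAD~9       # an old kernel behaved normally
# In the real case each step means: build, boot, watch the idle load, then
# run "git bisect good" or "git bisect bad" by hand. Here a script stands
# in for that test (exit 0 = good, nonzero = bad):
git bisect run sh -c '! test -f regression'
first_bad=$(git show -s --format=%s refs/bisect/bad)
git bisect reset
echo "first bad: $first_bad"
```

git halves the candidate range at every step, which is why 42 commits need only about six build-and-boot cycles.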
Comment 27 Olivier Keun 2010-09-26 16:06:33 UTC
I believe I have found the offending merge: e7858f52a5cb868289a72264534a1f05f3340c6c "Merge branch 'cpu_stop' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc into sched/core". It consists of 10 commits, including load average and NO_HZ changes, which I cannot bisect since they result in panics here. It was merged right after 2.6.34-rc6. Last good is: 371fd7e7a56a5c136d31aa980011bd2f131c3ef5. If there's anything else I can provide, please let me know. -Olivier.
Comment 28 Olivier Keun 2010-09-26 16:11:12 UTC
Addendum: the actual last good commit is 66f41d4c5c8a5deed66fdcc84509376c9a0bf9d8 (Linux 2.6.34-rc6), which is the next commit after the one in my previous post.
Comment 29 Florian Mickler 2010-09-27 05:13:02 UTC
Awesome. Let's cc Tejun for a little bit of know-how. Regards, Flo
Comment 30 Florian Mickler 2010-09-27 05:28:48 UTC
Oh, and can you post your .config and a good and a bad dmesg?
Comment 34 Florian Mickler 2010-09-27 10:19:13 UTC
Could it be possible that the 'high load' is a consequence of commit 74f5187ac873042f502227701ed1727e7c5fbfa9 ("sched: Cure load average vs NO_HZ woes", author Peter Zijlstra, 2010-04-22; committed by Ingo Molnar, 2010-04-23), which definitely changes the load numbers for your .config (and probably for most other users as well)? Does that commit contradict your bisection results?
Comment 35 Olivier Keun 2010-09-27 10:58:55 UTC
I tried bisecting the commits in the cpu_stop branch, of which the NO_HZ commit above is one, yet every kernel produced resulted in a panic. How can I test this?
Comment 36 Florian Mickler 2010-09-29 12:34:57 UTC
Not easy. I tried reverting commit 74f5187 on top of current mainline, but it conflicts with some later change. I will dig into the code later and see if I can reverse the change manually, but I don't know... There is a report on LKML about a similar problem. References: http://lkml.org/lkml/2010/9/27/328 Do you also see performance differences between good and bad kernels (for example, doing timed kernel compiles)?
Comment 37 Olivier Keun 2010-09-29 15:29:47 UTC
Compiling kernels: good: 30:22 minutes, bad: 30:57 minutes.
Comment 38 Florian Mickler 2010-10-03 17:39:39 UTC
Ok. It turns out that in the other case (which was bisected to the above commit), there was also no performance impact in the end. Can you boot a kernel with the low load averages with nohz=off on the kernel command line, and then compare the load averages with newer kernels? The theory behind that experiment is that the new load averages are the unbroken ones; i.e. before this change, the load averages of all nohz=on kernels were too low. That theory would be somewhat backed by the changelog of the above commit.
Comment 39 Cyber 2010-10-03 17:56:54 UTC
Maybe related: about two years ago I changed from 2.6.25.x to 2.6.26, and since 2.6.26 I had the "low" load; before that, the load was comparable to now. But why is the load lower as soon as I change to a non-tickless system? If the load is now calculated correctly for NOHZ, is it then calculated wrongly for HZ=250?
Comment 40 Olivier Keun 2010-10-04 07:55:23 UTC
Florian, I don't think I can confirm your theory. A good kernel with nohz=off gives around 0.30 load after boot, then slowly (seemingly a bit more slowly than without nohz=off) falls back to 0.00 and stays there. A bad kernel gives around 0.30 load after boot, then starts to jump up and down between 0.30 and 0.80. After 10 minutes the average load is still 0.30, which would be a lot for a C2D/SSD system with no X or network running; it never reaches 0.00. I also feel, but am not entirely certain, that a bad kernel gives about 4-5 degrees higher CPU temperature after 10 minutes idle. Don't take this as an objective fact at this point, though.
Comment 41 bugs 2010-10-07 00:24:52 UTC
/proc/cpuinfo:

processor  : 0
vendor_id  : GenuineIntel
cpu family : 15
model      : 4
model name : Intel(R) Pentium(R) 4 CPU 3.20GHz
stepping   : 1
cpu MHz    : 3191.913
cache size : 1024 KB

1G RAM, no GUI, Slackware 13.1, local file server. The only things that run are NFS and MySQL. The load never exceeded 0.00 with earlier kernels, but since 2.6.35 and subsequent tested versions including .7, like others here, the idle load average is between 0.20 and 0.40. Last known good kernel: 2.6.34.x (which I'll be booting back into shortly). Tested nohz=off: load is lower, average 0.05, but since this box does nothing, idling all the time, that is still a bit high.
Comment 42 krcroft 2010-10-23 13:15:20 UTC
Likely related to fdf3e95d3916f18bf8703fb065499fdbc4dfe34c (search in http://www.kernel.org/pub/linux/kernel/v2.6/ChangeLog-2.6.36). A test patch has been proposed by the author in this discussion thread: http://groups.google.com/group/linux.kernel/browse_thread/thread/eb5cb488b7404dd2/bb30bf1aa1758f27?lnk=raot&pli=1 Hopefully the final version will make its way into subsequent 2.6.36.x patches.
Comment 43 Florian Mickler 2010-11-30 20:15:33 UTC
Comment 44 Florian Mickler 2010-12-09 10:17:58 UTC
Ignore-Patch: https://patchwork.kernel.org/patch/367101/
This patch has now been queued by Ingo:
Patch: https://patchwork.kernel.org/patch/391982/