Bug 13493 - cpufreq: INFO: possible circular locking dependency detected
Summary: cpufreq: INFO: possible circular locking dependency detected
Status: REJECTED UNREPRODUCIBLE
Alias: None
Product: Power Management
Classification: Unclassified
Component: cpufreq
Hardware: All
OS: Linux
Importance: P1 normal
Assignee: cpufreq
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2009-06-09 19:15 UTC by Márton Németh
Modified: 2011-04-19 07:48 UTC
7 users

See Also:
Kernel Version: 2.6.30-rc8
Subsystem:
Regression: Yes
Bisected commit-id: b14893a62c73af0eca414cfed505b8c09efc613c (see comment #6)


Attachments
dmesg 2.6.30-rc8 (56.02 KB, text/plain)
2009-06-09 19:15 UTC, Márton Németh
2.6.30-rc8 .config on EeePC 901 (82.59 KB, application/octet-stream)
2009-06-09 19:16 UTC, Márton Németh
git bisect log (1.76 KB, text/plain)
2009-06-12 04:19 UTC, Márton Németh

Description Márton Németh 2009-06-09 19:15:42 UTC
Created attachment 21832 [details]
dmesg 2.6.30-rc8

Trying to set the "powersave" governor on an EeePC 901 causes a "possible circular locking dependency" message in dmesg.

Steps to reproduce:
1. boot the system
2. modprobe -k acpi-cpufreq
3. modprobe -k cpufreq-ondemand
4. echo powersave >/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

Actual result:
[   77.802623] =======================================================
[   77.802807] [ INFO: possible circular locking dependency detected ]
[   77.802913] 2.6.30-rc8 #1
[   77.802999] -------------------------------------------------------
[   77.803102] bash/2317 is trying to acquire lock:
[   77.803197]  (&(&dbs_info->work)->work){+.+...}, at: [<c013abd8>] __cancel_work_timer+0x8f/0x190
[   77.803502] 
[   77.803506] but task is already holding lock:
[   77.803673]  (dbs_mutex){+.+.+.}, at: [<f94dea44>] cpufreq_governor_dbs+0x296/0x322 [cpufreq_ondemand]
[   77.803981] 
[   77.803985] which lock already depends on the new lock.
[   77.803990] 
[   77.804231] 
[   77.804235] the existing dependency chain (in reverse order) is:
[   77.804408] 
[   77.804412] -> #2 (dbs_mutex){+.+.+.}:
[   77.804733]        [<c014d3b5>] __lock_acquire+0xf85/0x128d
[   77.804890]        [<c014d770>] lock_acquire+0xb3/0xd6
[   77.804928]        [<c034c432>] mutex_lock_nested+0x45/0x2b0
[   77.804928]        [<f94de828>] cpufreq_governor_dbs+0x7a/0x322 [cpufreq_ondemand]
[   77.804928]        [<c02cabfd>] __cpufreq_governor+0x9d/0xd3
[   77.804928]        [<c02cae43>] __cpufreq_set_policy+0xe7/0x11f
[   77.804928]        [<c02cb7bc>] store_scaling_governor+0x197/0x1bf
[   77.804928]        [<c02cc19e>] store+0x48/0x61
[   77.804928]        [<c01de7f8>] sysfs_write_file+0xb9/0xe4
[   77.804928]        [<c01a1954>] vfs_write+0x8a/0x12e
[   77.804928]        [<c01a1a91>] sys_write+0x3b/0x60
[   77.804928]        [<c01031a4>] sysenter_do_call+0x12/0x38
[   77.804928]        [<ffffffff>] 0xffffffff
[   77.804928] 
[   77.804928] -> #1 (&per_cpu(cpu_policy_rwsem, cpu)){+++++.}:
[   77.804928]        [<c014d3b5>] __lock_acquire+0xf85/0x128d
[   77.804928]        [<c014d770>] lock_acquire+0xb3/0xd6
[   77.804928]        [<c034ccae>] down_write+0x2a/0x46
[   77.804928]        [<c02cbb7d>] lock_policy_rwsem_write+0x33/0x5b
[   77.804928]        [<f94de527>] do_dbs_timer+0x4b/0x2d2 [cpufreq_ondemand]
[   77.804928]        [<c013a50f>] worker_thread+0x1ad/0x28a
[   77.804928]        [<c013d8ae>] kthread+0x45/0x6b
[   77.804928]        [<c0103d63>] kernel_thread_helper+0x7/0x10
[   77.804928]        [<ffffffff>] 0xffffffff
[   77.804928] 
[   77.804928] -> #0 (&(&dbs_info->work)->work){+.+...}:
[   77.804928]        [<c014d145>] __lock_acquire+0xd15/0x128d
[   77.804928]        [<c014d770>] lock_acquire+0xb3/0xd6
[   77.804928]        [<c013abfe>] __cancel_work_timer+0xb5/0x190
[   77.804928]        [<c013ace4>] cancel_delayed_work_sync+0xb/0xd
[   77.804928]        [<f94dea58>] cpufreq_governor_dbs+0x2aa/0x322 [cpufreq_ondemand]
[   77.804928]        [<c02cabfd>] __cpufreq_governor+0x9d/0xd3
[   77.804928]        [<c02cae2d>] __cpufreq_set_policy+0xd1/0x11f
[   77.804928]        [<c02cb7bc>] store_scaling_governor+0x197/0x1bf
[   77.804928]        [<c02cc19e>] store+0x48/0x61
[   77.804928]        [<c01de7f8>] sysfs_write_file+0xb9/0xe4
[   77.804928]        [<c01a1954>] vfs_write+0x8a/0x12e
[   77.804928]        [<c01a1a91>] sys_write+0x3b/0x60
[   77.804928]        [<c01031a4>] sysenter_do_call+0x12/0x38
[   77.804928]        [<ffffffff>] 0xffffffff
[   77.804928] 
[   77.804928] other info that might help us debug this:
[   77.804928] 
[   77.804928] 3 locks held by bash/2317:
[   77.804928]  #0:  (&buffer->mutex){+.+.+.}, at: [<c01de764>] sysfs_write_file+0x25/0xe4
[   77.804928]  #1:  (&per_cpu(cpu_policy_rwsem, cpu)){+++++.}, at: [<c02cbb7d>] lock_policy_rwsem_write+0x33/0x5b
[   77.804928]  #2:  (dbs_mutex){+.+.+.}, at: [<f94dea44>] cpufreq_governor_dbs+0x296/0x322 [cpufreq_ondemand]
[   77.804928] 
[   77.804928] stack backtrace:
[   77.804928] Pid: 2317, comm: bash Not tainted 2.6.30-rc8 #1
[   77.804928] Call Trace:
[   77.804928]  [<c034ac15>] ? printk+0xf/0x12
[   77.804928]  [<c014c061>] print_circular_bug_tail+0xa3/0xae
[   77.804928]  [<c014d145>] __lock_acquire+0xd15/0x128d
[   77.804928]  [<c014d770>] lock_acquire+0xb3/0xd6
[   77.804928]  [<c013abd8>] ? __cancel_work_timer+0x8f/0x190
[   77.804928]  [<c013abfe>] __cancel_work_timer+0xb5/0x190
[   77.804928]  [<c013abd8>] ? __cancel_work_timer+0x8f/0x190
[   77.804928]  [<c034c657>] ? mutex_lock_nested+0x26a/0x2b0
[   77.804928]  [<c014b985>] ? trace_hardirqs_on_caller+0x103/0x124
[   77.804928]  [<c034c683>] ? mutex_lock_nested+0x296/0x2b0
[   77.804928]  [<c013ace4>] cancel_delayed_work_sync+0xb/0xd
[   77.804928]  [<f94dea58>] cpufreq_governor_dbs+0x2aa/0x322 [cpufreq_ondemand]
[   77.804928]  [<c014180c>] ? __blocking_notifier_call_chain+0x40/0x4c
[   77.804928]  [<c02cabfd>] __cpufreq_governor+0x9d/0xd3
[   77.804928]  [<c02cae2d>] __cpufreq_set_policy+0xd1/0x11f
[   77.804928]  [<c02cb7bc>] store_scaling_governor+0x197/0x1bf
[   77.804928]  [<c02cc28d>] ? handle_update+0x0/0xd
[   77.804928]  [<c02cbb7d>] ? lock_policy_rwsem_write+0x33/0x5b
[   77.804928]  [<c02cb625>] ? store_scaling_governor+0x0/0x1bf
[   77.804928]  [<c02cc19e>] store+0x48/0x61
[   77.804928]  [<c01de7f8>] sysfs_write_file+0xb9/0xe4
[   77.804928]  [<c01de73f>] ? sysfs_write_file+0x0/0xe4
[   77.804928]  [<c01a1954>] vfs_write+0x8a/0x12e
[   77.804928]  [<c01a1a91>] sys_write+0x3b/0x60
[   77.804928]  [<c01031a4>] sysenter_do_call+0x12/0x38
Comment 1 Márton Németh 2009-06-09 19:16:50 UTC
Created attachment 21833 [details]
2.6.30-rc8 .config on EeePC 901
Comment 2 Márton Németh 2009-06-09 19:17:48 UTC
$ cat /proc/cpuinfo 
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 28
model name      : Intel(R) Atom(TM) CPU N270   @ 1.60GHz
stepping        : 2
cpu MHz         : 800.000
cache size      : 512 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx constant_tsc arch_perfmon pebs bts pni dtes64 monitor ds_cpl est tm2 ssse3 xtpr pdcm lahf_lm
bogomips        : 3200.22
clflush size    : 64
power management:

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 28
model name      : Intel(R) Atom(TM) CPU N270   @ 1.60GHz
stepping        : 2
cpu MHz         : 800.000
cache size      : 512 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 1
apicid          : 1
initial apicid  : 1
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx constant_tsc arch_perfmon pebs bts pni dtes64 monitor ds_cpl est tm2 ssse3 xtpr pdcm lahf_lm
bogomips        : 3199.91
clflush size    : 64
power management:
Comment 3 Márton Németh 2009-06-09 19:43:04 UTC
This problem does not exist with 2.6.30-rc7.
Comment 4 yury 2009-06-10 16:50:36 UTC
Also seen in 2.6.30:

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.30 #1
-------------------------------------------------------
hald-addon-cpuf/2938 is trying to acquire lock:
 (&(&dbs_info->work)->work){+.+...}, at: [<c012d218>] __cancel_work_timer+0x8f/0x131

but task is already holding lock:
 (dbs_mutex){+.+.+.}, at: [<c02aebe1>] cpufreq_governor_dbs+0x231/0x2bc

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (dbs_mutex){+.+.+.}:
       [<c013e76c>] __lock_acquire+0xf9b/0x12a4
       [<c013eb07>] lock_acquire+0x92/0xb4
       [<c0341a5c>] mutex_lock_nested+0x39/0x25d
       [<c02aea02>] cpufreq_governor_dbs+0x52/0x2bc
       [<c02ace63>] __cpufreq_governor+0x5d/0x91
       [<c02acf97>] __cpufreq_set_policy+0xe7/0x11f
       [<c02adcd7>] cpufreq_add_dev+0x22a/0x2bc
       [<c02772c8>] sysdev_driver_register+0x96/0xe5
       [<c02ad353>] cpufreq_register_driver+0x7c/0xd6
       [<f8b2e080>] 0xf8b2e080
       [<c0101131>] _stext+0x49/0x119
       [<c01460ef>] sys_init_module+0x89/0x192
       [<c01030a8>] sysenter_do_call+0x12/0x3c
       [<ffffffff>] 0xffffffff

-> #1 (&per_cpu(cpu_policy_rwsem, cpu)){+++++.}:
       [<c013e76c>] __lock_acquire+0xf9b/0x12a4
       [<c013eb07>] lock_acquire+0x92/0xb4
       [<c03421d4>] down_write+0x29/0x63
       [<c02ad9da>] lock_policy_rwsem_write+0x1d/0x33
       [<c02ae78e>] do_dbs_timer+0x36/0x258
       [<c012cef0>] worker_thread+0x189/0x256
       [<c012fbac>] kthread+0x42/0x67
       [<c01038ff>] kernel_thread_helper+0x7/0x10
       [<ffffffff>] 0xffffffff

-> #0 (&(&dbs_info->work)->work){+.+...}:
       [<c013e4fc>] __lock_acquire+0xd2b/0x12a4
       [<c013eb07>] lock_acquire+0x92/0xb4
       [<c012d234>] __cancel_work_timer+0xab/0x131
       [<c012d2c5>] cancel_delayed_work_sync+0xb/0xd
       [<c02aebf2>] cpufreq_governor_dbs+0x242/0x2bc
       [<c02ace63>] __cpufreq_governor+0x5d/0x91
       [<c02acf81>] __cpufreq_set_policy+0xd1/0x11f
       [<c02ad822>] store_scaling_governor+0x197/0x1bf
       [<c02addb1>] store+0x48/0x61
       [<c01a96f2>] sysfs_write_file+0xb9/0xe4
       [<c0175884>] vfs_write+0x8a/0x11c
       [<c01759af>] sys_write+0x3b/0x60
       [<c01030a8>] sysenter_do_call+0x12/0x3c
       [<ffffffff>] 0xffffffff

other info that might help us debug this:

3 locks held by hald-addon-cpuf/2938:
 #0:  (&buffer->mutex){+.+.+.}, at: [<c01a965e>] sysfs_write_file+0x25/0xe4
 #1:  (&per_cpu(cpu_policy_rwsem, cpu)){+++++.}, at: [<c02ad9da>] lock_policy_rwsem_write+0x1d/0x33
 #2:  (dbs_mutex){+.+.+.}, at: [<c02aebe1>] cpufreq_governor_dbs+0x231/0x2bc

stack backtrace:
Pid: 2938, comm: hald-addon-cpuf Not tainted 2.6.30 #1
Call Trace:
 [<c03409ec>] ? printk+0xf/0x13
 [<c013d3fe>] print_circular_bug_tail+0xa2/0xad
 [<c013e4fc>] __lock_acquire+0xd2b/0x12a4
 [<c013eb07>] lock_acquire+0x92/0xb4
 [<c012d218>] ? __cancel_work_timer+0x8f/0x131
 [<c012d234>] __cancel_work_timer+0xab/0x131
 [<c012d218>] ? __cancel_work_timer+0x8f/0x131
 [<c013cae1>] ? mark_held_locks+0x43/0x5b
 [<c0341c68>] ? mutex_lock_nested+0x245/0x25d
 [<c013cd37>] ? trace_hardirqs_on_caller+0x101/0x122
 [<c0341c78>] ? mutex_lock_nested+0x255/0x25d
 [<c02aebe1>] ? cpufreq_governor_dbs+0x231/0x2bc
 [<c012d2c5>] cancel_delayed_work_sync+0xb/0xd
 [<c02aebf2>] cpufreq_governor_dbs+0x242/0x2bc
 [<c02ace63>] __cpufreq_governor+0x5d/0x91
 [<c02acf81>] __cpufreq_set_policy+0xd1/0x11f
 [<c02ad822>] store_scaling_governor+0x197/0x1bf
 [<c02adea0>] ? handle_update+0x0/0xd
 [<c02ad68b>] ? store_scaling_governor+0x0/0x1bf
 [<c02addb1>] store+0x48/0x61
 [<c01a96f2>] sysfs_write_file+0xb9/0xe4
 [<c01a9639>] ? sysfs_write_file+0x0/0xe4
 [<c0175884>] vfs_write+0x8a/0x11c
 [<c01759af>] sys_write+0x3b/0x60
 [<c01030a8>] sysenter_do_call+0x12/0x3c
Comment 5 Simon Holm Thøgersen 2009-06-11 12:37:15 UTC
This is either the same as bug #13424 or closely related to it.
Comment 6 Márton Németh 2009-06-12 04:19:58 UTC
Created attachment 21864 [details]
git bisect log

I bisected this problem on an EeePC 901. The result is:

b14893a62c73af0eca414cfed505b8c09efc613c is first bad commit
commit b14893a62c73af0eca414cfed505b8c09efc613c
Author: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Date:   Sun May 17 10:30:45 2009 -0400

    [CPUFREQ] fix timer teardown in ondemand governor
    
    * Rafael J. Wysocki (rjw@sisk.pl) wrote:
    > This message has been generated automatically as a part of a report
    > of regressions introduced between 2.6.28 and 2.6.29.
    >
    > The following bug entry is on the current list of known regressions
    > introduced between 2.6.28 and 2.6.29.  Please verify if it still should
    > be listed and let me know (either way).
    >
    >
    > Bug-Entry : http://bugzilla.kernel.org/show_bug.cgi?id=13186
    > Subject           : cpufreq timer teardown problem
    > Submitter : Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
    > Date              : 2009-04-23 14:00 (24 days old)
    > References        : http://marc.info/?l=linux-kernel&m=124049523515036&w=4
    > Handled-By        : Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
    > Patch             : http://patchwork.kernel.org/patch/19754/
    >             http://patchwork.kernel.org/patch/19753/
    >

    (updated changelog)
    
    cpufreq fix timer teardown in ondemand governor
    
    The problem is that dbs_timer_exit() uses cancel_delayed_work() when it should
    use cancel_delayed_work_sync(). cancel_delayed_work() does not wait for the
    workqueue handler to exit.
    
    The ondemand governor does not seem to be affected because the
    "if (!dbs_info->enable)" check at the beginning of the workqueue handler returns
    immediately without rescheduling the work. The conservative governor in
    2.6.30-rc has the same check as the ondemand governor, which makes things
    usually run smoothly. However, if the governor is quickly stopped and then
    started, this could lead to the following race :
    
    dbs_enable could be reenabled and multiple do_dbs_timer handlers would run.
    This is why a synchronized teardown is required.
    
    The following patch applies to, at least, 2.6.28.x, 2.6.29.1, 2.6.30-rc2.
    
    Depends on patch
    cpufreq: remove rwsem lock from CPUFREQ_GOV_STOP call
    
    Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
    CC: Andrew Morton <akpm@linux-foundation.org>
    CC: gregkh@suse.de
    CC: stable@kernel.org
    CC: cpufreq@vger.kernel.org
    CC: Ingo Molnar <mingo@elte.hu>
    CC: rjw@sisk.pl
    CC: Ben Slusky <sluskyb@paranoiacs.org>
    Signed-off-by: Dave Jones <davej@redhat.com>

:040000 040000 d7fb0f04e2d13be36ff8a5fba2e6bbcff4406996 482e48e8a15065bf48e28a98db06433ced382551 M      drivers
Comment 7 Márton Németh 2009-06-12 04:25:43 UTC
(In reply to comment #6)
> b14893a62c73af0eca414cfed505b8c09efc613c is first bad commit

This is the same commit that Simon already pointed out in comment #5. Bug #13424 points to an email at http://www.spinics.net/lists/cpufreq/msg00711.html which describes the problem.
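The inversion lockdep reports above boils down to two paths taking the same locks in opposite order: the sysfs write path holds cpu_policy_rwsem and dbs_mutex and then waits on the work item (cancel_delayed_work_sync), while the work handler (do_dbs_timer) takes cpu_policy_rwsem from inside the work item. A minimal lockdep-style cycle detector makes the chain visible; the lock names below mirror the report, but this is illustrative Python, not kernel code:

```python
from collections import defaultdict

# deps[a] holds every lock observed to be taken while 'a' was held.
deps = defaultdict(set)

def reaches(src, dst):
    """True if 'dst' is reachable from 'src' through recorded orderings."""
    stack, seen = [src], set()
    while stack:
        n = stack.pop()
        if n == dst:
            return True
        if n in seen:
            continue
        seen.add(n)
        stack.extend(deps[n])
    return False

def acquire(held, lock):
    """Record orderings for 'lock'; return the inversions this closes."""
    inversions = [(outer, lock) for outer in held if reaches(lock, outer)]
    for outer in held:
        deps[outer].add(lock)
    held.append(lock)
    return inversions

# sysfs write path (store_scaling_governor): rwsem -> dbs_mutex
writer = []
acquire(writer, "cpu_policy_rwsem")
acquire(writer, "dbs_mutex")

# work handler path (do_dbs_timer): work -> rwsem
worker = []
acquire(worker, "work")
acquire(worker, "cpu_policy_rwsem")

# cancel_delayed_work_sync() is modelled as acquiring "work" while the
# writer still holds the other two locks -- this closes the cycle:
report = acquire(writer, "work")
print(report)
```

Real lockdep tracks these orderings per lock class across the whole system, which is how it can print the three-way dependency chain (#0/#1/#2) seen in the traces above even though no single task ever takes all three locks in one path.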
Comment 8 Martin Bammer 2009-06-13 19:17:24 UTC
Having the same problem with the latest stable kernel 2.6.30:

[   97.231797] =======================================================
[   97.234189] [ INFO: possible circular locking dependency detected ]
[   97.234189] 2.6.30-ep1 #3
[   97.234189] -------------------------------------------------------
[   97.234189] ondemand/2848 is trying to acquire lock:
[   97.234189]  (&(&dbs_info->work)->work){+.+...}, at: [<c013abb6>] __cancel_work_timer+0x8f/0x17c
[   97.234189] 
[   97.234189] but task is already holding lock:
[   97.234189]  (dbs_mutex){+.+.+.}, at: [<c0404c78>] cpufreq_governor_dbs+0x1f3/0x28b
[   97.234189] 
[   97.234189] which lock already depends on the new lock.
[   97.234189] 
[   97.234189] 
[   97.234189] the existing dependency chain (in reverse order) is:
[   97.234189] 
[   97.234189] -> #2 (dbs_mutex){+.+.+.}:
[   97.234189]        [<c014bdc3>] __lock_acquire+0x95b/0xac2
[   97.234189]        [<c014bfda>] lock_acquire+0xb0/0xcd
[   97.234189]        [<c04c774c>] __mutex_lock_common+0x3e/0x392
[   97.234189]        [<c04c7ada>] mutex_lock_nested+0x12/0x15
[   97.234189]        [<c0404af3>] cpufreq_governor_dbs+0x6e/0x28b
[   97.234189]        [<c0401c6b>] __cpufreq_governor+0x51/0x85
[   97.234189]        [<c0401eab>] __cpufreq_set_policy+0xe7/0x11f
[   97.234189]        [<c04027ce>] store_scaling_governor+0x197/0x1c0
[   97.234189]        [<c040317b>] store+0x48/0x61
[   97.234189]        [<c01e16d6>] sysfs_write_file+0xb5/0xe0
[   97.234189]        [<c01a23b3>] vfs_write+0x84/0xdf
[   97.234189]        [<c01a24a7>] sys_write+0x3b/0x60
[   97.234189]        [<c0102da5>] syscall_call+0x7/0xb
[   97.234189]        [<ffffffff>] 0xffffffff
[   97.234189] 
[   97.234189] -> #1 (&per_cpu(cpu_policy_rwsem, cpu)){+.+.+.}:
[   97.234189]        [<c014bdc3>] __lock_acquire+0x95b/0xac2
[   97.234189]        [<c014bfda>] lock_acquire+0xb0/0xcd
[   97.234189]        [<c04c7d27>] down_write+0x24/0x60
[   97.234189]        [<c0402b8e>] lock_policy_rwsem_write+0x33/0x5b
[   97.234189]        [<c0404894>] do_dbs_timer+0x36/0x227
[   97.234189]        [<c013a398>] worker_thread+0x1c4/0x2a1
[   97.234189]        [<c013d70a>] kthread+0x45/0x6b
[   97.234189]        [<c01038c7>] kernel_thread_helper+0x7/0x10
[   97.234189]        [<ffffffff>] 0xffffffff
[   97.234189] 
[   97.234189] -> #0 (&(&dbs_info->work)->work){+.+...}:
[   97.234189]        [<c014bcd3>] __lock_acquire+0x86b/0xac2
[   97.234189]        [<c014bfda>] lock_acquire+0xb0/0xcd
[   97.234189]        [<c013abcd>] __cancel_work_timer+0xa6/0x17c
[   97.234189]        [<c013acae>] cancel_delayed_work_sync+0xb/0xd
[   97.234189]        [<c0404c87>] cpufreq_governor_dbs+0x202/0x28b
[   97.234189]        [<c0401c6b>] __cpufreq_governor+0x51/0x85
[   97.234189]        [<c0401e95>] __cpufreq_set_policy+0xd1/0x11f
[   97.234189]        [<c04027ce>] store_scaling_governor+0x197/0x1c0
[   97.234189]        [<c040317b>] store+0x48/0x61
[   97.234189]        [<c01e16d6>] sysfs_write_file+0xb5/0xe0
[   97.234189]        [<c01a23b3>] vfs_write+0x84/0xdf
[   97.234189]        [<c01a24a7>] sys_write+0x3b/0x60
[   97.234189]        [<c0102da5>] syscall_call+0x7/0xb
[   97.234189]        [<ffffffff>] 0xffffffff
[   97.489281] 
[   97.489281] other info that might help us debug this:
[   97.489281] 
[   97.489281] 3 locks held by ondemand/2848:
[   97.489281]  #0:  (&buffer->mutex){+.+.+.}, at: [<c01e1645>] sysfs_write_file+0x24/0xe0
[   97.489281]  #1:  (&per_cpu(cpu_policy_rwsem, cpu)){+.+.+.}, at: [<c0402b8e>] lock_policy_rwsem_write+0x33/0x5b
[   97.489281]  #2:  (dbs_mutex){+.+.+.}, at: [<c0404c78>] cpufreq_governor_dbs+0x1f3/0x28b
[   97.489281] 
[   97.489281] stack backtrace:
[   97.489281] Pid: 2848, comm: ondemand Not tainted 2.6.30-ep1 #3
[   97.489281] Call Trace:
[   97.489281]  [<c04c6580>] ? printk+0xf/0x11
[   97.489281]  [<c014b1a4>] print_circular_bug_tail+0x5d/0x68
[   97.489281]  [<c014bcd3>] __lock_acquire+0x86b/0xac2
[   97.489281]  [<c014bfda>] lock_acquire+0xb0/0xcd
[   97.489281]  [<c013abb6>] ? __cancel_work_timer+0x8f/0x17c
[   97.489281]  [<c013abcd>] __cancel_work_timer+0xa6/0x17c
[   97.489281]  [<c013abb6>] ? __cancel_work_timer+0x8f/0x17c
[   97.489281]  [<c014aa27>] ? mark_held_locks+0x47/0x5f
[   97.489281]  [<c04c7a13>] ? __mutex_lock_common+0x305/0x392
[   97.489281]  [<c014ac6b>] ? trace_hardirqs_on_caller+0xff/0x120
[   97.489281]  [<c04c7a52>] ? __mutex_lock_common+0x344/0x392
[   97.489281]  [<c013acae>] cancel_delayed_work_sync+0xb/0xd
[   97.489281]  [<c0404c87>] cpufreq_governor_dbs+0x202/0x28b
[   97.489281]  [<c014164e>] ? __blocking_notifier_call_chain+0x40/0x4c
[   97.489281]  [<c0401c6b>] __cpufreq_governor+0x51/0x85
[   97.489281]  [<c0401e95>] __cpufreq_set_policy+0xd1/0x11f
[   97.489281]  [<c04027ce>] store_scaling_governor+0x197/0x1c0
[   97.489281]  [<c040326a>] ? handle_update+0x0/0xd
[   97.489281]  [<c0402b8e>] ? lock_policy_rwsem_write+0x33/0x5b
[   97.489281]  [<c0402637>] ? store_scaling_governor+0x0/0x1c0
[   97.489281]  [<c040317b>] store+0x48/0x61
[   97.489281]  [<c01e16d6>] sysfs_write_file+0xb5/0xe0
[   97.489281]  [<c01e1621>] ? sysfs_write_file+0x0/0xe0
[   97.489281]  [<c01a23b3>] vfs_write+0x84/0xdf
[   97.489281]  [<c01a24a7>] sys_write+0x3b/0x60
[   97.489281]  [<c0102da5>] syscall_call+0x7/0xb
Comment 9 Len Brown 2011-01-18 06:05:39 UTC
Is this still a problem with a modern kernel?
Comment 10 Martin Bammer 2011-01-18 06:45:48 UTC
Currently I'm using 2.6.35 and I haven't seen this problem yet.
Comment 11 Zhang Rui 2011-04-19 07:48:48 UTC
Please re-open this bug once you can reproduce the problem in the latest upstream kernel. :)
