Bug 58461 - lockdep warning
Summary: lockdep warning
Status: NEW
Alias: None
Product: Drivers
Classification: Unclassified
Component: Console/Framebuffers
Hardware: All
OS: Linux
Importance: P1 normal
Assignee: Alan
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2013-05-18 12:10 UTC by Stas Sergeev
Modified: 2013-11-14 08:50 UTC
CC List: 2 users

See Also:
Kernel Version: git of may 2013
Subsystem:
Regression: No
Bisected commit-id:


Attachments
rename local var to not shadow semaphore func (6.40 KB, patch)
2013-05-18 12:13 UTC, Stas Sergeev
use separate sem for vc alloc/dealloc (9.05 KB, patch)
2013-05-18 12:14 UTC, Stas Sergeev

Description Stas Sergeev 2013-05-18 12:10:41 UTC
I got the lockdep warning below.
The problem appears to be as follows:
vt_ioctl() protects the vc_deallocate() call with
console_lock, but vc_deallocate() eventually
triggers a workqueue flush, which needs the buf->work lock.
The workqueue, in turn, can do console output, which
requires taking console_lock again.
The workqueue flush can also be triggered by other
means, so the buf->work --> console_lock dependency looks
valid, but the opposite ordering does not.

I will also attach a patch that attempts to fix this,
but it is definitely broken.
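The circular dependency described above can be sketched as a small lock-order graph. This is only an illustration of what lockdep's cycle detection reports (the lock names are taken from the trace below; the code is not lockdep itself): an edge A -> B means "B was acquired while A was held", and the three edges from the trace form a cycle.

```python
# Sketch of the lock-order cycle from the lockdep report.
# Edge A -> B means "B was acquired while A was held".
edges = {
    "ldata->output_lock": {"console_lock"},  # n_tty_write: con_write takes console_lock
    "console_lock": {"buf->work"},           # vt_ioctl: vc_deallocate flushes buf->work
    "buf->work": {"ldata->output_lock"},     # worker: process_echoes takes output_lock
}

def has_cycle(graph):
    """Depth-first search for a cycle in the lock-order graph."""
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:   # back edge: cycle found
            return True
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(n) for n in graph.get(node, ())):
            return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(n) for n in graph)

print(has_cycle(edges))  # True: the three edges close a circular dependency
```

Removing any one of the three edges (e.g. by not flushing buf->work under console_lock) would make the graph acyclic, which is what a fix needs to achieve.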

======================================================
[ INFO: possible circular locking dependency detected ]
3.9.0+ #48 Tainted: G        W
-------------------------------------------------------
kworker/3:2/1204 is trying to acquire lock:
 (&ldata->output_lock){+.+...}, at: [<ffffffff8130992d>] process_echoes+0x4d/0x2f0

but task is already holding lock:
 ((&buf->work)){+.+...}, at: [<ffffffff810582e1>] process_one_work+0x171/0x480

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 ((&buf->work)){+.+...}:
       [<ffffffff810958b5>] lock_acquire+0x65/0x90
       [<ffffffff81058b47>] flush_work+0x47/0x290
       [<ffffffff81059d92>] __cancel_work_timer+0x82/0x110
       [<ffffffff81059e3b>] cancel_work_sync+0xb/0x10
       [<ffffffff813106b1>] tty_port_destroy+0x11/0x20
       [<ffffffff8131ec38>] vc_deallocate+0xf8/0x110
       [<ffffffff81314459>] vt_ioctl+0x469/0x1310       <-- console_lock()
       [<ffffffff81308a38>] tty_ioctl+0x278/0xbd0
       [<ffffffff811438f7>] do_vfs_ioctl+0x87/0x570
       [<ffffffff81143e2b>] SyS_ioctl+0x4b/0x90
       [<ffffffff814b9510>] tracesys+0xdd/0xe2

-> #1 (console_lock){+.+.+.}:
       [<ffffffff810958b5>] lock_acquire+0x65/0x90
       [<ffffffff8103d84f>] console_lock+0x6f/0x80
       [<ffffffff81320936>] do_con_write.part.24+0x36/0xa40
       [<ffffffff813213af>] con_write+0x2f/0x50
       [<ffffffff81309f4d>] n_tty_write+0x1cd/0x460     <-- ldata->output_lock
       [<ffffffff81306f61>] tty_write+0x151/0x2d0
       [<ffffffff8130718d>] redirected_tty_write+0xad/0xb0
       [<ffffffff8113238c>] vfs_write+0xbc/0x1b0
       [<ffffffff81132800>] SyS_write+0x50/0xa0
       [<ffffffff814b9510>] tracesys+0xdd/0xe2
       
-> #0 (&ldata->output_lock){+.+...}:
       [<ffffffff81095042>] __lock_acquire+0x1922/0x1c50
       [<ffffffff810958b5>] lock_acquire+0x65/0x90
       [<ffffffff814afa69>] __mutex_lock_common+0x59/0x4f0
       [<ffffffff814b002f>] mutex_lock_nested+0x3f/0x50
       [<ffffffff8130992d>] process_echoes+0x4d/0x2f0
       [<ffffffff8130bccc>] n_tty_receive_char+0x43c/0xe90
       [<ffffffff8130c94c>] n_tty_receive_buf+0x22c/0x460
       [<ffffffff81310169>] flush_to_ldisc+0x119/0x170
       [<ffffffff8105833e>] process_one_work+0x1ce/0x480        <-- buf->work
       [<ffffffff81059778>] worker_thread+0x118/0x370
       [<ffffffff81060735>] kthread+0xe5/0xf0
       [<ffffffff814b923c>] ret_from_fork+0x7c/0xb0

other info that might help us debug this:

Chain exists of:
  &ldata->output_lock --> console_lock --> (&buf->work)

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock((&buf->work));
                               lock(console_lock);
                               lock((&buf->work));
  lock(&ldata->output_lock);

 *** DEADLOCK ***

2 locks held by kworker/3:2/1204:
 #0:  (events){.+.+.+}, at: [<ffffffff810582e1>] process_one_work+0x171/0x480
 #1:  ((&buf->work)){+.+...}, at: [<ffffffff810582e1>] process_one_work+0x171/0x480

stack backtrace:
CPU: 3 PID: 1204 Comm: kworker/3:2 Tainted: G        W    3.9.0+ #48
Hardware name: MSI MS-7599/870A-G54 (MS-7599), BIOS V17.3 04/30/2010
Workqueue: events flush_to_ldisc
 ffffffff81ced220 ffff880125775918 ffffffff814ac5a6 ffff880125775968
 ffffffff814a7336 ffff88012fcc3fc0 ffff8801257759f8 ffff880125752cc8
 ffff880125752cf0 ffff880125752cc8 ffff880125752460 000000042c324208
Call Trace:
 [<ffffffff814ac5a6>] dump_stack+0x19/0x1b
 [<ffffffff814a7336>] print_circular_bug+0x1fe/0x20f
 [<ffffffff81095042>] __lock_acquire+0x1922/0x1c50
 [<ffffffff81094439>] ? __lock_acquire+0xd19/0x1c50
 [<ffffffff810958b5>] lock_acquire+0x65/0x90
 [<ffffffff8130992d>] ? process_echoes+0x4d/0x2f0
 [<ffffffff814afa69>] __mutex_lock_common+0x59/0x4f0
 [<ffffffff8130992d>] ? process_echoes+0x4d/0x2f0
 [<ffffffff81096345>] ? trace_hardirqs_on_caller+0x105/0x1d0
 [<ffffffff8130992d>] ? process_echoes+0x4d/0x2f0
 [<ffffffff810961c6>] ? mark_held_locks+0xb6/0x130
 [<ffffffff8130a3f2>] ? echo_char.isra.11+0x32/0xb0
 [<ffffffff814b0155>] ? __mutex_unlock_slowpath+0x115/0x1b0
 [<ffffffff814b002f>] mutex_lock_nested+0x3f/0x50
 [<ffffffff8130992d>] process_echoes+0x4d/0x2f0
 [<ffffffff814b01f9>] ? mutex_unlock+0x9/0x10
 [<ffffffff8130bccc>] n_tty_receive_char+0x43c/0xe90
 [<ffffffff8130c94c>] n_tty_receive_buf+0x22c/0x460
 [<ffffffff810961c6>] ? mark_held_locks+0xb6/0x130
 [<ffffffff814b31a5>] ? _raw_spin_unlock_irqrestore+0x65/0x80
 [<ffffffff81096345>] ? trace_hardirqs_on_caller+0x105/0x1d0
 [<ffffffff81310169>] flush_to_ldisc+0x119/0x170
 [<ffffffff8105833e>] process_one_work+0x1ce/0x480
 [<ffffffff810582e1>] ? process_one_work+0x171/0x480
 [<ffffffff8127be5c>] ? do_raw_spin_lock+0x4c/0x120
 [<ffffffff81059778>] worker_thread+0x118/0x370
 [<ffffffff81059660>] ? manage_workers.isra.25+0x290/0x290
 [<ffffffff81060735>] kthread+0xe5/0xf0
 [<ffffffff81060650>] ? kthread_create_on_node+0x160/0x160
 [<ffffffff814b923c>] ret_from_fork+0x7c/0xb0
 [<ffffffff81060650>] ? kthread_create_on_node+0x160/0x160
Comment 1 Stas Sergeev 2013-05-18 12:13:15 UTC
Created attachment 101881
rename local var to not shadow semaphore func
Comment 2 Stas Sergeev 2013-05-18 12:14:10 UTC
Created attachment 101891
use separate sem for vc alloc/dealloc

A broken patch attempting to fix the problem.
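The idea behind this second attachment ("use separate sem for vc alloc/dealloc") can be sketched as follows. This is a toy model, not the patch itself: the lock names are hypothetical stand-ins, and the point is only that if vc allocation/deallocation is serialized by its own lock, the workqueue flush no longer runs with console_lock held, breaking the console_lock --> buf->work edge of the cycle.

```python
import threading

console_lock = threading.Lock()
vc_alloc_lock = threading.Lock()  # hypothetical separate lock for vc alloc/dealloc

def flush_buf_work():
    # Stand-in for cancel_work_sync()/flush_work(): the worker being waited
    # for may itself need console_lock, so console_lock must be free here.
    with console_lock:  # simulates the worker doing console output
        pass

def vc_deallocate():
    # Deallocation is serialized by its own lock instead of console_lock,
    # so the flush below runs with console_lock not held.
    with vc_alloc_lock:
        flush_buf_work()

vc_deallocate()
print("no deadlock")
```

With the original scheme (holding console_lock across the flush) the same sequence would self-deadlock, which is exactly the inversion lockdep warned about.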
Comment 3 Stas Sergeev 2013-11-14 08:50:25 UTC
Hello Alan Cox, good to see you come back. :)