Bug 154041 - DOM Worker/4206 is trying to acquire lock xfs_file_buffered_aio_write but task is already holding lock
Summary: DOM Worker/4206 is trying to acquire lock xfs_file_buffered_aio_write but task is already holding lock
Status: NEW
Alias: None
Product: File System
Classification: Unclassified
Component: XFS
Hardware: All
OS: Linux
Importance: P1 normal
Assignee: XFS Guru
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2016-08-27 09:05 UTC by Mikhail
Modified: 2016-08-27 09:05 UTC

See Also:
Kernel Version: 4.7.2-200.fc24.x86_64+debug
Tree: Fedora
Regression: No


Attachments
dmesg (141.44 KB, text/plain)
2016-08-27 09:05 UTC, Mikhail

Description Mikhail 2016-08-27 09:05:54 UTC
Created attachment 230441 [details]
dmesg

[38053.200301] ======================================================
[38053.200302] [ INFO: possible circular locking dependency detected ]
[38053.200304] 4.7.2-200.fc24.x86_64+debug #1 Not tainted
[38053.200305] -------------------------------------------------------
[38053.200306] DOM Worker/4206 is trying to acquire lock:
[38053.200307]  (&sb->s_type->i_mutex_key#20){+.+.+.}, at: [<ffffffffc094159b>] xfs_file_buffered_aio_write+0x6b/0x350 [xfs]
[38053.200362] 
               but task is already holding lock:
[38053.200363]  (&pipe->mutex/1){+.+.+.}, at: [<ffffffff8129e89e>] pipe_lock+0x1e/0x20
[38053.200368] 
               which lock already depends on the new lock.

[38053.200369] 
               the existing dependency chain (in reverse order) is:
[38053.200370] 
               -> #2 (&pipe->mutex/1){+.+.+.}:
[38053.200373]        [<ffffffff8110d1ee>] lock_acquire+0xfe/0x1f0
[38053.200376]        [<ffffffff818d9576>] mutex_lock_nested+0x86/0x3f0
[38053.200380]        [<ffffffff8129e89e>] pipe_lock+0x1e/0x20
[38053.200381]        [<ffffffff812cf9d0>] splice_to_pipe+0x40/0x260
[38053.200384]        [<ffffffff812d13a3>] __generic_file_splice_read+0x633/0x710
[38053.200386]        [<ffffffff812d1855>] generic_file_splice_read+0x45/0x90
[38053.200387]        [<ffffffffc093ed2c>] xfs_file_splice_read+0x11c/0x2a0 [xfs]
[38053.200407]        [<ffffffff812cfd49>] do_splice_to+0x79/0x90
[38053.200408]        [<ffffffff812d26e4>] SyS_splice+0x7f4/0x830
[38053.200410]        [<ffffffff818dd13c>] entry_SYSCALL_64_fastpath+0x1f/0xbd
[38053.200413] 
               -> #1 (&(&ip->i_iolock)->mr_lock#2){++++++}:
[38053.200415]        [<ffffffff8110d1ee>] lock_acquire+0xfe/0x1f0
[38053.200417]        [<ffffffff8110656e>] down_write_nested+0x5e/0xc0
[38053.200421]        [<ffffffffc094fcb5>] xfs_ilock+0x215/0x2c0 [xfs]
[38053.200439]        [<ffffffffc09415a8>] xfs_file_buffered_aio_write+0x78/0x350 [xfs]
[38053.200456]        [<ffffffffc094199e>] xfs_file_write_iter+0x11e/0x130 [xfs]
[38053.200471]        [<ffffffff81295218>] __vfs_write+0xe8/0x160
[38053.200473]        [<ffffffff81295af8>] vfs_write+0xb8/0x1a0
[38053.200475]        [<ffffffff81296fa8>] SyS_write+0x58/0xc0
[38053.200476]        [<ffffffff818dd13c>] entry_SYSCALL_64_fastpath+0x1f/0xbd
[38053.200478] 
               -> #0 (&sb->s_type->i_mutex_key#20){+.+.+.}:
[38053.200481]        [<ffffffff8110cd45>] __lock_acquire+0x11d5/0x1210
[38053.200483]        [<ffffffff8110d1ee>] lock_acquire+0xfe/0x1f0
[38053.200484]        [<ffffffff818da81a>] down_write+0x5a/0xc0
[38053.200486]        [<ffffffffc094159b>] xfs_file_buffered_aio_write+0x6b/0x350 [xfs]
[38053.200502]        [<ffffffffc094199e>] xfs_file_write_iter+0x11e/0x130 [xfs]
[38053.200517]        [<ffffffff81294f75>] vfs_iter_write+0x95/0x110
[38053.200518]        [<ffffffff812d08d0>] iter_file_splice_write+0x270/0x3c0
[38053.200520]        [<ffffffff812d2259>] SyS_splice+0x369/0x830
[38053.200522]        [<ffffffff818dd13c>] entry_SYSCALL_64_fastpath+0x1f/0xbd
[38053.200524] 
               other info that might help us debug this:

[38053.200525] Chain exists of:
                 &sb->s_type->i_mutex_key#20 --> &(&ip->i_iolock)->mr_lock#2 --> &pipe->mutex/1

[38053.200530]  Possible unsafe locking scenario:

[38053.200531]        CPU0                    CPU1
[38053.200532]        ----                    ----
[38053.200532]   lock(&pipe->mutex/1);
[38053.200534]                                lock(&(&ip->i_iolock)->mr_lock#2);
[38053.200536]                                lock(&pipe->mutex/1);
[38053.200538]   lock(&sb->s_type->i_mutex_key#20);
[38053.200540] 
                *** DEADLOCK ***

[38053.200541] 2 locks held by DOM Worker/4206:
[38053.200542]  #0:  (sb_writers#16){.+.+.+}, at: [<ffffffff81299024>] __sb_start_write+0xb4/0xf0
[38053.200547]  #1:  (&pipe->mutex/1){+.+.+.}, at: [<ffffffff8129e89e>] pipe_lock+0x1e/0x20
[38053.200550] 
               stack backtrace:
[38053.200552] CPU: 7 PID: 4206 Comm: DOM Worker Not tainted 4.7.2-200.fc24.x86_64+debug #1
[38053.200553] Hardware name: Gigabyte Technology Co., Ltd. Z87M-D3H/Z87M-D3H, BIOS F11 08/12/2014
[38053.200555]  0000000000000086 00000000c9c3e5d4 ffff880600983b30 ffffffff81457a05
[38053.200558]  ffffffff82b1c1e0 ffffffff82b18f80 ffff880600983b70 ffffffff8110a2de
[38053.200560]  000000000094c000 ffff88060094c998 ffff88060094c000 000000155c5d062b
[38053.200562] Call Trace:
[38053.200565]  [<ffffffff81457a05>] dump_stack+0x86/0xc1
[38053.200567]  [<ffffffff8110a2de>] print_circular_bug+0x1be/0x210
[38053.200569]  [<ffffffff8110cd45>] __lock_acquire+0x11d5/0x1210
[38053.200571]  [<ffffffff8110d1ee>] lock_acquire+0xfe/0x1f0
[38053.200587]  [<ffffffffc094159b>] ? xfs_file_buffered_aio_write+0x6b/0x350 [xfs]
[38053.200590]  [<ffffffff818da81a>] down_write+0x5a/0xc0
[38053.200604]  [<ffffffffc094159b>] ? xfs_file_buffered_aio_write+0x6b/0x350 [xfs]
[38053.200619]  [<ffffffffc094159b>] xfs_file_buffered_aio_write+0x6b/0x350 [xfs]
[38053.200621]  [<ffffffff818d9777>] ? mutex_lock_nested+0x287/0x3f0
[38053.200623]  [<ffffffff8110b81d>] ? trace_hardirqs_on+0xd/0x10
[38053.200637]  [<ffffffffc094199e>] xfs_file_write_iter+0x11e/0x130 [xfs]
[38053.200639]  [<ffffffff81294f75>] vfs_iter_write+0x95/0x110
[38053.200641]  [<ffffffff812d08d0>] iter_file_splice_write+0x270/0x3c0
[38053.200643]  [<ffffffff812d2259>] SyS_splice+0x369/0x830
[38053.200644]  [<ffffffff8110b755>] ? trace_hardirqs_on_caller+0xf5/0x1b0
[38053.200646]  [<ffffffff818dd13c>] entry_SYSCALL_64_fastpath+0x1f/0xbd
