Using a CubieTruck with the latest Fedora rawhide armv7hl image, with /home on XFS on an SSD, I'm seeing the following circular locking dependency reported in dmesg.

Kernel: 3.15.0-0.rc7.git4.2.fc21.armv7hl

[ 5727.299355] ======================================================
[ 5727.305540] [ INFO: possible circular locking dependency detected ]
[ 5727.311820] 3.15.0-0.rc7.git4.2.fc21.armv7hl+lpae #1 Tainted: G        W
[ 5727.318870] -------------------------------------------------------
[ 5727.325143] bash/1075 is trying to acquire lock:
[ 5727.329767]  (&mm->mmap_sem){++++++}, at: [<c014fa8c>] might_fault+0x54/0xa8
[ 5727.336923] but task is already holding lock:
[ 5727.342765]  (&xfs_dir_ilock_class){.+.+..}, at: [<bf31eb10>] xfs_ilock_data_map_shared+0x34/0x3c [xfs]
[ 5727.352955] which lock already depends on the new lock.
[ 5727.361146] the existing dependency chain (in reverse order) is:
[ 5727.368635] -> #2 (&xfs_dir_ilock_class){.+.+..}:
[ 5727.373598]        [<c0092fc0>] lock_acquire+0xe4/0x1ec
[ 5727.378865]        [<c008cc68>] down_read_nested+0x48/0x5c
[ 5727.384385]        [<bf31eb50>] xfs_ilock_attr_map_shared+0x38/0x40 [xfs]
[ 5727.391827]        [<bf2ef524>] xfs_attr_get+0x80/0xc4 [xfs]
[ 5727.398023]        [<bf2e67e8>] xfs_xattr_get+0x44/0x58 [xfs]
[ 5727.404277]        [<c01b1928>] generic_getxattr+0x5c/0x64
[ 5727.409803]        [<c02e7194>] inode_doinit_with_dentry+0x134/0x508
[ 5727.416192]        [<c02e76e4>] sb_finish_set_opts+0x160/0x214
[ 5727.422058]        [<c02ecc44>] selinux_set_mnt_opts+0x458/0x4a8
[ 5727.428101]        [<c02eccd4>] superblock_doinit+0x40/0xc0
[ 5727.433706]        [<c02ed3ac>] selinux_sb_kern_mount+0x40/0x7c
[ 5727.439659]        [<c019074c>] mount_fs+0xe4/0x16c
[ 5727.444571]        [<c01ac4b8>] vfs_kern_mount+0x60/0x128
[ 5727.450004]        [<c01af67c>] do_mount+0x900/0x9f8
[ 5727.455003]        [<c01af9b0>] SyS_mount+0x8c/0xc0
[ 5727.459914]        [<c001f980>] ret_fast_syscall+0x0/0x48
[ 5727.465347] -> #1 (&isec->lock){+.+.+.}:
[ 5727.469527]        [<c0092fc0>] lock_acquire+0xe4/0x1ec
[ 5727.474789]        [<c066bc18>] mutex_lock_nested+0x70/0x3c0
[ 5727.480484]        [<c02e7098>] inode_doinit_with_dentry+0x38/0x508
[ 5727.486784]        [<c0142fbc>] __shmem_file_setup.part.30+0xfc/0x198
[ 5727.493258]        [<c01430a4>] shmem_file_setup+0x4c/0x58
[ 5727.498775]        [<c014360c>] shmem_zero_setup+0x2c/0x68
[ 5727.504292]        [<c015986c>] mmap_region+0x2f8/0x510
[ 5727.509554]        [<c0159d98>] do_mmap_pgoff+0x314/0x36c
[ 5727.514985]        [<c0144afc>] vm_mmap_pgoff+0x80/0xb0
[ 5727.520244]        [<c0158460>] SyS_mmap_pgoff+0x168/0x1dc
[ 5727.525756]        [<c001fb34>] __sys_trace_return+0x0/0x2c
[ 5727.531360]        [<00000003>] 0x3
[ 5727.534898] -> #0 (&mm->mmap_sem){++++++}:
[ 5727.539252]        [<c0092114>] __lock_acquire+0x1330/0x1a88
[ 5727.544946]        [<c0092fc0>] lock_acquire+0xe4/0x1ec
[ 5727.550205]        [<c014faac>] might_fault+0x74/0xa8
[ 5727.555289]        [<c019ee18>] filldir64+0x4c/0x184
[ 5727.560288]        [<bf2cc264>] xfs_dir2_sf_getdents+0x128/0x300 [xfs]
[ 5727.567266]        [<bf2ccaec>] xfs_readdir+0xec/0x214 [xfs]
[ 5727.573370]        [<bf2d08f0>] xfs_file_readdir+0x30/0x3c [xfs]
[ 5727.579826]        [<c019ec1c>] iterate_dir+0x80/0xb0
[ 5727.584913]        [<c019f0c4>] SyS_getdents64+0x84/0xf0
[ 5727.590258]        [<c001fb34>] __sys_trace_return+0x0/0x2c
[ 5727.595863]        [<00008000>] 0x8000
[ 5727.599657] other info that might help us debug this:
[ 5727.607672] Chain exists of: &mm->mmap_sem --> &isec->lock --> &xfs_dir_ilock_class
[ 5727.615882]  Possible unsafe locking scenario:
[ 5727.621811]        CPU0                    CPU1
[ 5727.626347]        ----                    ----
[ 5727.630881]   lock(&xfs_dir_ilock_class);
[ 5727.634933]                                lock(&isec->lock);
[ 5727.640711]                                lock(&xfs_dir_ilock_class);
[ 5727.647276]   lock(&mm->mmap_sem);
[ 5727.650721]  *** DEADLOCK ***
[ 5727.656660] 2 locks held by bash/1075:
[ 5727.660417]  #0:  (&type->i_mutex_dir_key#6){+.+.+.}, at: [<c019ebe8>] iterate_dir+0x4c/0xb0
[ 5727.668985]  #1:  (&xfs_dir_ilock_class){.+.+..}, at: [<bf31eb10>] xfs_ilock_data_map_shared+0x34/0x3c [xfs]
[ 5727.679523] stack backtrace:
[ 5727.683908] CPU: 1 PID: 1075 Comm: bash Tainted: G        W     3.15.0-0.rc7.git4.2.fc21.armv7hl+lpae #1
[ 5727.693425] [<c00284e8>] (unwind_backtrace) from [<c00236b8>] (show_stack+0x18/0x1c)
[ 5727.701197] [<c00236b8>] (show_stack) from [<c06670a4>] (dump_stack+0x84/0xb0)
[ 5727.708448] [<c06670a4>] (dump_stack) from [<c008f2b8>] (print_circular_bug+0x26c/0x2c0)
[ 5727.716565] [<c008f2b8>] (print_circular_bug) from [<c0092114>] (__lock_acquire+0x1330/0x1a88)
[ 5727.725199] [<c0092114>] (__lock_acquire) from [<c0092fc0>] (lock_acquire+0xe4/0x1ec)
[ 5727.733054] [<c0092fc0>] (lock_acquire) from [<c014faac>] (might_fault+0x74/0xa8)
[ 5727.740561] [<c014faac>] (might_fault) from [<c019ee18>] (filldir64+0x4c/0x184)
[ 5727.748309] [<c019ee18>] (filldir64) from [<bf2cc264>] (xfs_dir2_sf_getdents+0x128/0x300 [xfs])
[ 5727.757854] [<bf2cc264>] (xfs_dir2_sf_getdents [xfs]) from [<bf2ccaec>] (xfs_readdir+0xec/0x214 [xfs])
[ 5727.768006] [<bf2ccaec>] (xfs_readdir [xfs]) from [<bf2d08f0>] (xfs_file_readdir+0x30/0x3c [xfs])
[ 5727.777326] [<bf2d08f0>] (xfs_file_readdir [xfs]) from [<c019ec1c>] (iterate_dir+0x80/0xb0)
[ 5727.785705] [<c019ec1c>] (iterate_dir) from [<c019f0c4>] (SyS_getdents64+0x84/0xf0)
[ 5727.793387] [<c019f0c4>] (SyS_getdents64) from [<c001fb34>] (__sys_trace_return+0x0/0x2c)
[ 5727.801596] [<c001fb34>] (__sys_trace_return) from [<00008000>] (0x8000)
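For reference, the splat was triggered by an ordinary directory listing from bash. Below is a minimal userspace sketch that exercises the same getdents64 path shown in the #0 trace (SyS_getdents64 -> iterate_dir -> xfs_readdir -> filldir64 -> might_fault). The "/home" path is an assumption; any directory on the XFS filesystem should do, and the report only appears on a kernel built with lockdep (CONFIG_PROVE_LOCKING). This is not a real deadlock reproducer, just the warning trigger.

/*
 * Sketch: list a directory on XFS via the raw getdents64 syscall.
 * getdents64 copies entries into user memory (filldir64 -> might_fault)
 * while XFS holds the directory ilock shared, which is the acquisition
 * order lockdep complains about above. Path is assumed to be on XFS.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	int fd = open("/home", O_RDONLY | O_DIRECTORY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Read directory entries until exhausted. */
	for (;;) {
		long n = syscall(SYS_getdents64, fd, buf, sizeof(buf));
		if (n <= 0)
			break;
	}

	close(fd);
	return 0;
}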
Known false positive. Work is in progress to fix. Please close.

In future, please report bugs to xfs@oss.sgi.com, not the kernel bugzilla.

-Dave.
(In reply to Dave Chinner from comment #1)
> Known false positive. Work is in progress to fix. Please close.

Documented where?

> In future, please report bugs to xfs@oss.sgi.com, not the kernel bugzilla.

I just won't bother. The whole point of this BZ is to make it easy for users to report bugs; if I have to sign up to yet another mailing list, it'll be easier to use another filesystem.
(In reply to Peter Robinson from comment #2)
> (In reply to Dave Chinner from comment #1)
> > Known false positive. Work is in progress to fix. Please close.
>
> Documented where?

In the mailing list archives. Google finds lots of similar reports and the conversations explaining it.

> > In future, please report bugs to xfs@oss.sgi.com, not the kernel bugzilla.
>
> I just won't bother. The whole point of this BZ is to make it easy for users
> to report bugs; if I have to sign up to yet another mailing list, it'll be
> easier to use another filesystem.

You don't have to sign up to the mailing list to post to it.

-Dave.