Bug 203947 - [xfstests generic/475]: general protection fault: 0000 [#1] RIP: 0010:xfs_setfilesize_ioend+0xb1/0x220 [xfs]
Summary: [xfstests generic/475]: general protection fault: 0000 [#1] RIP: 0010:xfs_set...
Status: NEW
Alias: None
Product: File System
Classification: Unclassified
Component: XFS (show other bugs)
Hardware: All Linux
Importance: P1 normal
Assignee: FileSystem/XFS Default Virtual Assignee
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2019-06-21 08:32 UTC by Zorro Lang
Modified: 2019-07-12 13:18 UTC (History)
2 users (show)

See Also:
Kernel Version: xfs-linux xfs-5.3-merge-1
Subsystem:
Regression: No
Bisected commit-id:


Attachments
console log about panic on xfs_bmapi_read (771.72 KB, text/plain)
2019-06-29 03:46 UTC, Zorro Lang
Details

Description Zorro Lang 2019-06-21 08:32:54 UTC
Description of problem:
generic/475 hit a kernel panic on x86_64; the xfs_info output is:

meta-data=/dev/sda2              isize=512    agcount=16, agsize=245696 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=3931136, imaxpct=25
         =                       sunit=64     swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Part of the panic log:
....
[29158.142556] XFS (dm-0): writeback error on sector 19720192 
[29158.167263] XFS (dm-0): writeback error on sector 29562736 
[29158.194303] XFS (dm-0): xfs_do_force_shutdown(0x2) called from line 1272 of file fs/xfs/xfs_log.c. Return address = 00000000025e6ad7 
[29158.248165] XFS (dm-0): Log I/O Error Detected. Shutting down filesystem 
[29158.278321] XFS (dm-0): Please unmount the filesystem and rectify the problem(s) 
[29158.647121] XFS (dm-0): Unmounting Filesystem 
[29159.265101] XFS (dm-0): Mounting V5 Filesystem 
[29159.590476] XFS (dm-0): Starting recovery (logdev: internal) 
[29161.495439] XFS (dm-0): Ending recovery (logdev: internal) 
[29163.269463] kasan: CONFIG_KASAN_INLINE enabled 
[29163.291984] kasan: GPF could be caused by NULL-ptr deref or user memory access 
[29163.328565] general protection fault: 0000 [#1] SMP KASAN PTI 
[29163.354186] CPU: 4 PID: 1049 Comm: kworker/4:3 Not tainted 5.2.0-rc4 #1 
[29163.383882] Hardware name: HP ProLiant DL360 Gen9, BIOS P89 05/06/2015 
[29163.413366] Workqueue: xfs-conv/dm-0 xfs_end_io [xfs] 
[29163.436225] RIP: 0010:xfs_setfilesize_ioend+0xb1/0x220 [xfs] 
[29163.461648] Code: 03 38 d0 7c 08 84 d2 0f 85 3c 01 00 00 49 8d bc 24 f8 00 00 00 45 8b 6d 24 48 b8 00 00 00 00 00 fc ff df 48 89 fa 48 c1 ea 03 <80> 3c 02 00 0f 85 33 01 00 00 4d 89 ac 24 f8 00 00 00 48 b8 00 00 
[29163.546149] RSP: 0018:ffff888070f37c28 EFLAGS: 00010202 
[29163.569758] RAX: dffffc0000000000 RBX: ffff8880069632c0 RCX: ffff8880069632e0 
[29163.601781] RDX: 000000000000001f RSI: 0000000000000001 RDI: 00000000000000f8 
[29163.636304] RBP: ffff8880471c6f00 R08: dffffc0000000000 R09: ffffed1008e38e61 
[29163.669587] R10: 1ffff11008e38dd7 R11: ffff88806f85a8c8 R12: 0000000000000000 
[29163.702129] R13: 0000000004208060 R14: 0000000000000001 R15: dffffc0000000000 
[29163.734261] FS:  0000000000000000(0000) GS:ffff88810e400000(0000) knlGS:0000000000000000 
[29163.770758] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033 
[29163.797513] CR2: 000055569b8a2000 CR3: 0000000138816002 CR4: 00000000001606e0 
[29163.832418] Call Trace: 
[29163.844440]  xfs_ioend_try_merge+0x42d/0x610 [xfs] 
[29163.867530]  xfs_end_io+0x217/0x380 [xfs] 
[29163.885689]  ? xfs_setfilesize+0xe0/0xe0 [xfs] 
[29163.905876]  process_one_work+0x8f4/0x1760 
[29163.924473]  ? pwq_dec_nr_in_flight+0x2d0/0x2d0 
[29163.944767]  worker_thread+0x87/0xb50 
[29163.961526]  ? __kthread_parkme+0xb6/0x180 
[29163.979926]  ? process_one_work+0x1760/0x1760 
[29163.999701]  kthread+0x326/0x3f0 
[29164.014194]  ? kthread_create_on_node+0xc0/0xc0 
[29164.034154]  ret_from_fork+0x3a/0x50 
[29164.050229] Modules linked in: dm_mod iTCO_wdt iTCO_vendor_support intel_rapl sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel intel_cstate intel_uncore intel_rapl_perf pcspkr dax_pmem_compat device_dax nd_pmem dax_pmem_core ipmi_ssif sunrpc i2c_i801 lpc_ich ipmi_si hpwdt hpilo sg ipmi_devintf ipmi_msghandler acpi_tad ioatdma acpi_power_meter dca xfs libcrc32c mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops sd_mod ttm drm crc32c_intel serio_raw tg3 hpsa scsi_transport_sas wmi 
[29164.284974] ---[ end trace 185128643cc7ea23 ]--- 
...
...

Version-Release number of selected component (if applicable):
xfs-linux:
 f5b999c03f4c (HEAD -> for-next, tag: xfs-5.3-merge-1, origin/xfs-5.3-merge, origin/for-next) xfs: remove unused flag arguments

How reproducible:
Once so far; still trying to reproduce it.

Steps to Reproduce:
Run generic/475 in a loop.
Comment 1 Zorro Lang 2019-06-27 08:35:13 UTC
Hit this panic again on xfs-linux xfs-5.3-merge-2:

[38857.886348] run fstests generic/475 at 2019-06-27 00:41:30 
[38858.311084] XFS (vda5): Unmounting Filesystem 
[38858.510461] device-mapper: uevent: version 1.0.3 
[38858.511208] device-mapper: ioctl: 4.40.0-ioctl (2019-01-18) initialised: dm-devel@redhat.com 
[38859.308289] XFS (dm-0): Mounting V5 Filesystem 
[38859.398726] XFS (dm-0): Ending clean mount 
[38860.580486] XFS (dm-0): writeback error on sector 15751000 
[38860.581500] XFS (dm-0): writeback error on sector 23597576 
[38860.581919] XFS (dm-0): log I/O error -5 
[38860.583029] XFS (dm-0): xfs_do_force_shutdown(0x2) called from line 1235 of file fs/xfs/xfs_log.c. Return address = 00000000a62cc036 
[38860.583272] XFS (dm-0): Log I/O Error Detected. Shutting down filesystem 
[38860.583391] XFS (dm-0): Please unmount the filesystem and rectify the problem(s) 
[38860.583595] XFS (dm-0): log I/O error -5 
[38860.583791] XFS (dm-0): writeback error on sector 7883120 
[38860.583977] XFS (dm-0): writeback error on sector 15765912 
[38860.584092] XFS (dm-0): writeback error on sector 15765960 
[38860.584294] XFS (dm-0): writeback error on sector 15781104 
[38860.672123] Buffer I/O error on dev dm-0, logical block 31457152, async page read 
[38860.672288] Buffer I/O error on dev dm-0, logical block 31457153, async page read 
[38860.672420] Buffer I/O error on dev dm-0, logical block 31457154, async page read 
[38860.672547] Buffer I/O error on dev dm-0, logical block 31457155, async page read 
[38860.672676] Buffer I/O error on dev dm-0, logical block 31457156, async page read 
[38860.672802] Buffer I/O error on dev dm-0, logical block 31457157, async page read 
[38860.672927] Buffer I/O error on dev dm-0, logical block 31457158, async page read 
[38860.673053] Buffer I/O error on dev dm-0, logical block 31457159, async page read 
[38860.673406] Buffer I/O error on dev dm-0, logical block 31457160, async page read 
[38860.673532] Buffer I/O error on dev dm-0, logical block 31457161, async page read 
[38861.034014] XFS (dm-0): Unmounting Filesystem 
[38862.090150] XFS (dm-0): Mounting V5 Filesystem 
[38863.214360] XFS (dm-0): Starting recovery (logdev: internal) 
[38863.679809] XFS (dm-0): Ending recovery (logdev: internal) 
[38865.810147] XFS (dm-0): writeback error on sector 15793776 
[38865.811088] XFS (dm-0): writeback error on sector 15779744 
[38865.811237] XFS (dm-0): writeback error on sector 15813904 
[38865.811417] XFS (dm-0): writeback error on sector 23629648 
[38865.811568] XFS (dm-0): writeback error on sector 23634192 
[38865.811730] XFS (dm-0): writeback error on sector 15802768 
[38865.811861] XFS (dm-0): writeback error on sector 15802944 
[38865.812295] XFS (dm-0): writeback error on sector 23629584 
[38865.812441] XFS (dm-0): writeback error on sector 27032 
[38865.812628] XFS (dm-0): writeback error on sector 38560
[38865.812797] XFS (dm-0): log I/O error -5 
[38865.813411] XFS (dm-0): xfs_do_force_shutdown(0x2) called from line 1235 of file fs/xfs/xfs_log.c. Return address = 00000000a62cc036 
[38865.813700] XFS (dm-0): Log I/O Error Detected. Shutting down filesystem 
[38865.813822] XFS (dm-0): Please unmount the filesystem and rectify the problem(s) 
[38865.813992] XFS (dm-0): log I/O error -5 
[38865.814076] XFS (dm-0): log I/O error -5 
[38865.935923] buffer_io_error: 118 callbacks suppressed 
[38865.935930] Buffer I/O error on dev dm-0, logical block 31457152, async page read 
[38865.936236] Buffer I/O error on dev dm-0, logical block 31457153, async page read 
[38865.936394] Buffer I/O error on dev dm-0, logical block 31457154, async page read 
[38865.936549] Buffer I/O error on dev dm-0, logical block 31457155, async page read 
[38865.936697] Buffer I/O error on dev dm-0, logical block 31457156, async page read 
[38865.936853] Buffer I/O error on dev dm-0, logical block 31457157, async page read 
[38865.937021] Buffer I/O error on dev dm-0, logical block 31457158, async page read 
[38865.937170] Buffer I/O error on dev dm-0, logical block 31457159, async page read 
[38865.937321] Buffer I/O error on dev dm-0, logical block 31457160, async page read 
[38865.937475] Buffer I/O error on dev dm-0, logical block 31457161, async page read 
[38866.282227] XFS (dm-0): Unmounting Filesystem 
[38867.000037] XFS (dm-0): Mounting V5 Filesystem 
[38868.488578] XFS (dm-0): Starting recovery (logdev: internal) 
[38870.047897] XFS (dm-0): Ending recovery (logdev: internal) 
[38871.200554] xfs_destroy_ioend: 3 callbacks suppressed 
[38871.200576] XFS (dm-0): writeback error on sector 15846656 
[38871.200931] XFS (dm-0): log I/O error -5 
[38871.201306] XFS (dm-0): xfs_do_force_shutdown(0x2) called from line 1235 of file fs/xfs/xfs_log.c. Return address = 00000000a62cc036 
[38871.201537] XFS (dm-0): Log I/O Error Detected. Shutting down filesystem 
[38871.201650] XFS (dm-0): Please unmount the filesystem and rectify the problem(s) 
[38871.201828] XFS (dm-0): writeback error on sector 23645728 
[38871.202155] XFS (dm-0): writeback error on sector 23661208 
[38871.202491] XFS (dm-0): writeback error on sector 7938296 
[38871.202588] XFS (dm-0): writeback error on sector 7938552 
[38871.202680] XFS (dm-0): writeback error on sector 7938648 
[38871.202796] XFS (dm-0): writeback error on sector 7938664 
[38871.202962] XFS (dm-0): writeback error on sector 7936016 
[38871.203068] XFS (dm-0): writeback error on sector 7936392 
[38871.203217] XFS (dm-0): writeback error on sector 7936456 
[38871.287223] buffer_io_error: 118 callbacks suppressed
[38871.287246] Buffer I/O error on dev dm-0, logical block 31457152, async page read 
[38871.288884] Buffer I/O error on dev dm-0, logical block 31457153, async page read 
[38871.289067] Buffer I/O error on dev dm-0, logical block 31457154, async page read 
[38871.289317] Buffer I/O error on dev dm-0, logical block 31457155, async page read 
[38871.336902] Buffer I/O error on dev dm-0, logical block 31457156, async page read 
[38871.337150] Buffer I/O error on dev dm-0, logical block 31457157, async page read 
[38871.337369] Buffer I/O error on dev dm-0, logical block 31457158, async page read 
[38871.337578] Buffer I/O error on dev dm-0, logical block 31457159, async page read 
[38871.337795] Buffer I/O error on dev dm-0, logical block 31457160, async page read 
[38871.337990] Buffer I/O error on dev dm-0, logical block 31457161, async page read 
[38871.728914] XFS (dm-0): Unmounting Filesystem 
[38872.419930] XFS (dm-0): Mounting V5 Filesystem 
[38874.050592] restraintd[3933]: *** Current Time: Thu Jun 27 00:41:46 2019 Localwatchdog at: Sat Jun 29 00:40:45 2019 
[38874.227008] XFS (dm-0): Starting recovery (logdev: internal) 
[38875.000418] XFS (dm-0): Ending recovery (logdev: internal) 
[38876.444568] BUG: Kernel NULL pointer dereference at 0x000000e0 
[38876.444723] Faulting instruction address: 0xc008000001ada168 
[38876.444855] Oops: Kernel access of bad area, sig: 11 [#1] 
[38876.444941] LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries 
[38876.445079] Modules linked in: dm_mod sunrpc pseries_rng xts virtio_balloon vmx_crypto xfs libcrc32c virtio_net net_failover virtio_console virtio_blk failover 
[38876.445436] CPU: 6 PID: 19513 Comm: kworker/6:2 Not tainted 5.2.0-rc4 #1 
[38876.445688] Workqueue: xfs-conv/dm-0 xfs_end_io [xfs] 
[38876.445774] NIP:  c008000001ada168 LR: c008000001ada3fc CTR: c00000000068cc90 
[38876.445898] REGS: c0000000046c7930 TRAP: 0380   Not tainted  (5.2.0-rc4) 
[38876.446004] MSR:  800000000280b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE>  CR: 24002488  XER: 20000000 
[38876.446157] CFAR: c008000001ada3f8 IRQMASK: 0  
[38876.446157] GPR00: c008000001ada3fc c0000000046c7bc0 c008000001bc7500 c00000018c93af00  
[38876.446157] GPR04: 0000000000000001 c00000021d326100 000000302d6d642f c00000017b547140  
[38876.446157] GPR08: 0000000000007000 0000000004208060 0000000000000000 c008000001b53b90  
[38876.446157] GPR12: c00000000068cc90 c00000003fff7800 c00000000015f9f8 c0000001a5349f40  
[38876.446157] GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000  
[38876.446157] GPR20: 0000000000000000 0000000000000000 fffffffffffffef7 0000000000000402  
[38876.446157] GPR24: 0000000000000000 0000000000000000 c00000017b547390 c00000021d326100  
[38876.446157] GPR28: 0000000000000000 c0000000046c7c38 c00000021d326100 c00000018c93af00  
[38876.447263] NIP [c008000001ada168] xfs_setfilesize_ioend+0x20/0xa0 [xfs] 
[38876.447456] LR [c008000001ada3fc] xfs_ioend_try_merge+0x214/0x230 [xfs]
[38876.447559] Call Trace: 
[38876.447651] [c0000000046c7c10] [c008000001adca2c] xfs_end_io+0xe4/0x140 [xfs] 
[38876.447846] [c0000000046c7c70] [c000000000156250] process_one_work+0x260/0x520 
[38876.447972] [c0000000046c7d10] [c000000000156598] worker_thread+0x88/0x5f0 
[38876.448078] [c0000000046c7db0] [c00000000015fba8] kthread+0x1b8/0x1c0 
[38876.448187] [c0000000046c7e20] [c00000000000ba54] ret_from_kernel_thread+0x5c/0x68 
[38876.448312] Instruction dump: 
[38876.448376] 4bfffed0 60000000 000ed3b8 00000000 3c4c000f 3842d3b8 7c0802a6 60000000  
[38876.448504] e8e30018 e9430030 e92d0968 81290114 <f92a00e0> e90d0968 81280114 2fa40000  
[38876.448634] ---[ end trace 2479158bc75365a8 ]--- 
[38876.451282]  
[38877.451364] Kernel panic - not syncing: Fatal exception
Comment 2 Darrick J. Wong 2019-06-27 18:35:30 UTC
Hmm... so we're clearly in a situation where we have ioend A -> ioend B and we're trying to merge A and B.  A has a setfilesize transaction and B does not, but current code assumes that if A has one then B must have one and that it must cancel B's.  Then we crash trying to cancel the transaction that B doesn't have.

How do we end up in this situation?  I can't trigger it on my systems, but I guess this sounds plausible:

1. Dirty pages 0, 1, and 2 of an empty file.

2. Writeback gets scheduled for pages 0 and 2, creating ioends A and C.  Both ioends describe writes past the on-disk isize so we allocate transactions.

3. ioend C completes immediately, sets the ondisk isize to (3 * PAGESIZE).

4. Writeback gets scheduled for page 1, creating ioend B.  ioend B describes a write within the on-disk isize so we do not allocate setfilesize transaction.

5. ioend A and B complete and are sorted into the per-inode ioend completion list.  xfs_ioend_try_merge looks at ioend A, sees that ioend A has a setfilesize transaction and that there's an ioend B that can be merged with A.

6. _try_merge tries to call xfs_setfilesize_ioend(ioend B, -1) to cancel ioend B's transaction, but as we saw in (4), ioend B has no transaction and crashes.

I wonder how hard it will be to write a regression test for this, since it requires fairly tight timing?

Coincidentally, Christoph just posted "xfs: allow merging ioends over append boundaries" which I think fixes this problem.  Zorro, can you apply it and retry?
Comment 3 Darrick J. Wong 2019-06-27 18:35:50 UTC
Meant to say "...can you apply it and retest generic/475?"
Comment 4 Zorro Lang 2019-06-29 03:30:43 UTC
After merging "[PATCH] xfs: allow merging ioends over append boundaries" into xfs-linux xfs-5.3-merge-3 and looping generic/475 again, I hit another panic, shown in [1] below.

More assertion and internal-error output from before this panic is in [2].
Because the test job panicked like this, I have to rerun the test to verify this bug.

[1]
[23759.890740] XFS (dm-0): Unmounting Filesystem 
[23760.224109] XFS (dm-0): Mounting V5 Filesystem 
[23760.468142] XFS (dm-0): Starting recovery (logdev: internal) 
[23764.766209] XFS (dm-0): Ending recovery (logdev: internal) 
[23760.241613] restraintd[1378]: *** Current Time: Fri Jun 28 12:57:05 2019 Localwatchdog at: Sun Jun 30 06:28:04 2019 
[23765.948214] XFS (dm-0): writeback error on sector 5942384 
[23765.951464] XFS (dm-0): writeback error on sector 3955064 
[23765.952027] XFS (dm-0): metadata I/O error in "xfs_trans_read_buf_map" at daddr 0x1dfff0 len 8 error 5 
[23765.957388] kasan: CONFIG_KASAN_INLINE enabled 
[23765.957391] kasan: GPF could be caused by NULL-ptr deref or user memory access 
[23765.957400] general protection fault: 0000 [#1] SMP KASAN PTI 
[23765.957408] CPU: 5 PID: 29727 Comm: fsstress Tainted: G    B   W         5.2.0-rc4+ #1 
[23765.957411] Hardware name: HP ProLiant DL360 Gen9, BIOS P89 05/06/2015 
[23765.957512] RIP: 0010:xfs_bmapi_read+0x311/0xb00 [xfs] 
[23765.957519] Code: 45 85 ff 0f 85 8b 02 00 00 48 8d 45 48 48 89 04 24 48 8b 04 24 48 8d 78 12 48 b8 00 00 00 00 00 fc ff df 48 89 fa 48 c1 ea 03 <0f> b6 04 02 48 89 fa 83 e2 07 38 d0 7f 08 84 c0 0f 85 a9 07 00 00 
[23765.957522] RSP: 0018:ffff888047f9ed68 EFLAGS: 00010202 
[23765.957528] RAX: dffffc0000000000 RBX: ffff888047f9f038 RCX: 1ffffffff5f99f51 
[23765.957532] RDX: 0000000000000002 RSI: 0000000000000008 RDI: 0000000000000012 
[23765.957535] RBP: ffff888002a41f00 R08: ffffed10005483f0 R09: ffffed10005483ef 
[23765.957539] R10: ffffed10005483ef R11: ffff888002a41f7f R12: 0000000000000004 
[23765.957542] R13: ffffe8fff53b5768 R14: 0000000000000005 R15: 0000000000000001 
[23765.957546] FS:  00007f11d44b5b80(0000) GS:ffff888114200000(0000) knlGS:0000000000000000 
[23765.957550] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033 
[23765.957553] CR2: 0000000000ef6000 CR3: 000000002e176003 CR4: 00000000001606e0 
[23765.957556] Call Trace: 
[23765.957646]  ? xfs_bmapi_convert_delalloc+0xcf0/0xcf0 [xfs] 
[23765.957662]  ? save_stack+0x4d/0x80 
[23765.957666]  ? save_stack+0x19/0x80 
[23765.957672]  ? __kasan_kmalloc.constprop.6+0xc1/0xd0 
[23765.957676]  ? kmem_cache_alloc+0xf4/0x320 
[23765.957770]  ? kmem_zone_alloc+0x6c/0x120 [xfs] 
[23765.957869]  ? xlog_ticket_alloc+0x33/0x3d0 [xfs] 
[23765.957876] XFS (dm-0): writeback error on sector 2064384 
[23765.957968]  ? xfs_trans_reserve+0x6d0/0xd80 [xfs] 
[23765.958060]  ? xfs_trans_alloc+0x299/0x630 [xfs] 
[23765.958149]  ? xfs_attr_inactive+0x1f3/0x5e0 [xfs] 
[23765.958242]  ? xfs_inactive+0x4c8/0x5b0 [xfs] 
[23765.958335]  ? xfs_fs_destroy_inode+0x31b/0x8e0 [xfs] 
[23765.958342]  ? destroy_inode+0xbc/0x190 
[23765.958434]  ? xfs_bulkstat_one_int+0xa8c/0x1200 [xfs] 
[23765.958526]  ? xfs_bulkstat+0x6fa/0xf20 [xfs] 
[23765.958617]  ? xfs_ioc_bulkstat+0x182/0x2b0 [xfs] 
[23765.958709]  ? xfs_file_ioctl+0xee0/0x12a0 [xfs] 
[23765.958716]  ? do_vfs_ioctl+0x193/0x1000 
[23765.958721]  ? ksys_ioctl+0x60/0x90 
[23765.958726]  ? __x64_sys_ioctl+0x6f/0xb0 
[23765.958734]  ? do_syscall_64+0x9f/0x4d0 
[23765.958741]  ? entry_SYSCALL_64_after_hwframe+0x49/0xbe 
[23765.958756]  ? stack_depot_save+0x260/0x430 
[23765.958843]  xfs_dabuf_map.constprop.18+0x696/0xe50 [xfs] 
[23765.958942]  ? xfs_fs_destroy_inode+0x31b/0x8e0 [xfs] 
[23765.958948]  ? destroy_inode+0xbc/0x190 
[23765.959040]  ? xfs_bulkstat_one_int+0xa8c/0x1200 [xfs] 
[23765.959136]  ? xfs_bulkstat_one+0x16/0x20 [xfs] 
[23765.959222]  ? xfs_da3_node_order.isra.10+0x3a0/0x3a0 [xfs] 
[23765.959320]  ? xlog_space_left+0x52/0x250 [xfs] 
[23765.959415]  ? xlog_grant_head_check+0x187/0x430 [xfs] 
[23765.959512]  ? xlog_grant_head_wait+0xaa0/0xaa0 [xfs] 
[23765.959599]  xfs_da_read_buf+0xf5/0x2c0 [xfs] 
[23765.959684]  ? xfs_da3_root_split.isra.13+0xf40/0xf40 [xfs] 
[23765.959782]  ? xlog_ticket_alloc+0x3d0/0x3d0 [xfs] 
[23765.959795]  ? lock_acquire+0x142/0x380 
[23765.959803]  ? lock_contended+0xd50/0xd50 
[23765.959949]  xfs_da3_node_read+0x1d/0x230 [xfs] 
[23765.960043]  xfs_attr_inactive+0x3cc/0x5e0 [xfs] 
[23765.960133]  ? xfs_attr3_node_inactive+0x760/0x760 [xfs] 
[23765.960143]  ? lock_downgrade+0x620/0x620 
[23765.960148]  ? lock_contended+0xd50/0xd50 
[23765.960158]  ? fsnotify_destroy_marks+0x62/0x1c0 
[23765.960256]  xfs_inactive+0x4c8/0x5b0 [xfs] 
[23765.960355]  xfs_fs_destroy_inode+0x31b/0x8e0 [xfs] 
[23765.960365]  destroy_inode+0xbc/0x190 
[23765.960459]  xfs_bulkstat_one_int+0xa8c/0x1200 [xfs] 
[23765.960552]  ? xfs_irele+0x270/0x270 [xfs] 
[23765.960647]  ? xfs_bulkstat_ichunk_ra.isra.1+0x340/0x340 [xfs] 
[23765.960746]  xfs_bulkstat_one+0x16/0x20 [xfs] 
[23765.960837]  xfs_bulkstat+0x6fa/0xf20 [xfs] 
[23765.960934]  ? xfs_bulkstat_one_int+0x1200/0x1200 [xfs] 
[23765.961033]  ? xfs_bulkstat_one+0x20/0x20 [xfs] 
[23765.961047]  ? cred_has_capability+0x125/0x240 
[23765.961054]  ? selinux_sb_eat_lsm_opts+0x550/0x550 
[23765.961144]  ? xfs_buf_find+0x1068/0x20d0 [xfs] 
[23765.961155]  ? lock_acquire+0x142/0x380 
[23765.961161]  ? lock_downgrade+0x620/0x620 
[23765.961263]  xfs_ioc_bulkstat+0x182/0x2b0 [xfs] 
[23765.961356]  ? copy_overflow+0x20/0x20 [xfs] 
[23765.961368]  ? do_raw_spin_unlock+0x54/0x220 
[23765.961375]  ? _raw_spin_unlock+0x24/0x30 
[23765.961463]  ? xfs_buf_rele+0x5a2/0xc70 [xfs] 
[23765.961551]  ? xfs_buf_read_map+0x471/0x5f0 [xfs] 
[23765.961647]  ? xfs_buf_unlock+0x1ea/0x2c0 [xfs] 
[23765.961745]  xfs_file_ioctl+0xee0/0x12a0 [xfs] 
[23765.961839]  ? xfs_ioc_swapext+0x4c0/0x4c0 [xfs] 
[23765.961854]  ? unwind_next_frame+0xff8/0x1c00 
[23765.961859]  ? arch_stack_walk+0x5f/0xe0 
[23765.961867]  ? deref_stack_reg+0xb0/0xf0 
[23765.961875]  ? __read_once_size_nocheck.constprop.8+0x10/0x10 
[23765.961883]  ? deref_stack_reg+0xf0/0xf0 
[23765.961894]  ? lock_downgrade+0x620/0x620 
[23765.961901]  ? is_bpf_text_address+0x5/0xf0 
[23765.961911]  ? lock_downgrade+0x620/0x620 
[23765.961918]  ? avc_has_extended_perms+0xd6/0x11a0 
[23765.961927]  ? kernel_text_address+0x125/0x140 
[23765.961937]  ? avc_has_extended_perms+0x4e4/0x11a0 
[23765.961950]  ? avc_ss_reset+0x140/0x140 
[23765.961962]  ? stack_trace_consume_entry+0x160/0x160 
[23765.961973]  ? save_stack+0x4d/0x80 
[23765.961977]  ? save_stack+0x19/0x80 
[23765.961982]  ? __kasan_slab_free+0x125/0x170 
[23765.961986]  ? kmem_cache_free+0xc3/0x310 
[23765.961992]  ? do_sys_open+0x169/0x360 
[23765.961998]  ? do_syscall_64+0x9f/0x4d0 
[23765.962003]  ? entry_SYSCALL_64_after_hwframe+0x49/0xbe 
[23765.962015]  ? trace_hardirqs_on_thunk+0x1a/0x1c 
[23765.962029]  do_vfs_ioctl+0x193/0x1000 
[23765.962040]  ? ioctl_preallocate+0x1b0/0x1b0 
[23765.962045]  ? selinux_file_ioctl+0x3c9/0x550 
[23765.962054]  ? selinux_file_mprotect+0x5b0/0x5b0 
[23765.962067]  ? syscall_trace_enter+0x5b2/0xe30 
[23765.962073]  ? __kasan_slab_free+0x13a/0x170 
[23765.962080]  ? do_sys_open+0x169/0x360 
[23765.962094]  ksys_ioctl+0x60/0x90 
[23765.962103]  __x64_sys_ioctl+0x6f/0xb0 
[23765.962110]  do_syscall_64+0x9f/0x4d0 
[23765.962117]  entry_SYSCALL_64_after_hwframe+0x49/0xbe 
[23765.962123] RIP: 0033:0x7f11d39a3e5b 
[23765.962128] Code: 0f 1e fa 48 8b 05 2d a0 2c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d fd 9f 2c 00 f7 d8 64 89 01 48 
[23765.962131] RSP: 002b:00007fff16fb7a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 
[23765.962137] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f11d39a3e5b 
[23765.962141] RDX: 00007fff16fb7a80 RSI: ffffffffc0205865 RDI: 0000000000000003 
[23765.962143] RBP: 0000000000000003 R08: 0000000000000000 R09: 0000000000000003 
[23765.962147] R10: 0000000000000000 R11: 0000000000000246 R12: ffffffffc0205865 
[23765.962150] R13: 0000000000000351 R14: 0000000000ed9220 R15: 000000000000006b 
[23765.962165] Modules linked in: dm_mod iTCO_wdt iTCO_vendor_support intel_rapl sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel intel_cstate intel_uncore intel_rapl_perf pcspkr sunrpc dax_pmem_compat device_dax dax_pmem_core nd_pmem i2c_i801 lpc_ich ipmi_ssif ipmi_si hpilo ext4 ipmi_devintf sg hpwdt ipmi_msghandler ioatdma acpi_tad acpi_power_meter dca mbcache jbd2 xfs libcrc32c mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops sd_mod ttm drm uas crc32c_intel usb_storage serio_raw tg3 hpsa scsi_transport_sas wmi 
[23765.962274] ---[ end trace 51bdc5e7fcb8f571 ]--- 
[23765.970394] XFS (dm-0): writeback error on sector 2064888 
[23765.974614] XFS (dm-0): writeback error on sector 6544 
[23765.974700] XFS (dm-0): writeback error on sector 67648 
[23765.987115] RIP: 0010:xfs_bmapi_read+0x311/0xb00 [xfs] 
[23765.991942] Buffer I/O error on dev dm-0, logical block 31457152, async page read 
[23765.992006] Buffer I/O error on dev dm-0, logical block 31457153, async page read 

[2]
[ 3666.774066] XFS (dm-0): log I/O error -5 
[ 3666.781644] XFS (dm-0): xfs_do_force_shutdown(0x2) called from line 1235 of file fs/xfs/xfs_log.c. Return address = 0000000056321ec9 
[ 3666.781649] XFS (dm-0): Log I/O Error Detected. Shutting down filesystem 
[ 3666.781653] XFS (dm-0): Please unmount the filesystem and rectify the problem(s) 
[ 3670.161316] XFS (dm-0): Unmounting Filesystem 
[ 3670.493415] XFS (dm-0): Mounting V5 Filesystem 
[ 3671.532800] XFS (dm-0): Starting recovery (logdev: internal) 
[ 3673.605082] XFS (dm-0): Ending recovery (logdev: internal) 
[ 3673.785156] XFS: Assertion failed: fs_is_ok, file: fs/xfs/libxfs/xfs_ialloc.c, line: 1529 
[ 3673.826539] WARNING: CPU: 8 PID: 19166 at fs/xfs/xfs_message.c:94 asswarn+0x1c/0x1f [xfs] 
[ 3673.867150] Modules linked in: dm_mod iTCO_wdt iTCO_vendor_support intel_rapl sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel intel_cstate intel_uncore intel_rapl_perf pcspkr sunrpc dax_pmem_compat device_dax dax_pmem_core nd_pmem i2c_i801 lpc_ich ipmi_ssif ipmi_si hpilo ext4 ipmi_devintf sg hpwdt ipmi_msghandler ioatdma acpi_tad acpi_power_meter dca mbcache jbd2 xfs libcrc32c mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops sd_mod ttm drm uas crc32c_intel usb_storage serio_raw tg3 hpsa scsi_transport_sas wmi 
[ 3674.127363] CPU: 8 PID: 19166 Comm: fsstress Tainted: G    B   W         5.2.0-rc4+ #1 
[ 3674.164732] Hardware name: HP ProLiant DL360 Gen9, BIOS P89 05/06/2015 
[ 3674.195470] RIP: 0010:asswarn+0x1c/0x1f [xfs] 
[ 3674.215962] Code: 00 00 00 5b 41 5c 41 5d 41 5e 41 5f 5d c3 0f 1f 44 00 00 48 89 f1 41 89 d0 48 c7 c6 40 24 b2 c0 48 89 fa 31 ff e8 06 fa ff ff <0f> 0b c3 0f 1f 44 00 00 48 89 f1 41 89 d0 48 c7 c6 40 24 b2 c0 48 
[ 3674.304237] RSP: 0018:ffff8881158c7030 EFLAGS: 00010282 
[ 3674.329633] RAX: 0000000000000000 RBX: ffff888046e15440 RCX: 0000000000000000 
[ 3674.364835] RDX: dffffc0000000000 RSI: 000000000000000a RDI: ffffed1022b18df8 
[ 3674.399965] RBP: 1ffff11022b18e09 R08: ffffed10229fdfb1 R09: ffffed10229fdfb0 
[ 3674.434264] R10: ffffed10229fdfb0 R11: ffff888114fefd87 R12: fe00000000000000 
[ 3674.467992] R13: ffff8881158c71f0 R14: ffff8881158c70a8 R15: ffff888046e15448 
[ 3674.501551] FS:  00007fab884d2b80(0000) GS:ffff888114e00000(0000) knlGS:0000000000000000 
[ 3674.539731] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033 
[ 3674.566832] CR2: 00007fa94b16a8c0 CR3: 000000011519a003 CR4: 00000000001606e0 
[ 3674.600370] Call Trace: 
[ 3674.611931]  xfs_dialloc_ag_update_inobt+0x2a3/0x500 [xfs] 
[ 3674.637835]  ? xfs_dialloc_ag_finobt_newino.isra.12+0x560/0x560 [xfs] 
[ 3674.668283]  ? kmem_zone_alloc+0x6c/0x120 [xfs] 
[ 3674.689685]  ? xfs_inobt_init_cursor+0x7e/0x520 [xfs] 
[ 3674.713562]  xfs_dialloc_ag+0x3ff/0x6a0 [xfs] 
[ 3674.734055]  ? xfs_dialloc_ag_inobt+0x1420/0x1420 [xfs] 
[ 3674.758645]  ? xfs_perag_put+0x26e/0x390 [xfs] 
[ 3674.779206]  ? xfs_ialloc_read_agi+0x1fb/0x560 [xfs] 
[ 3674.802651]  ? xfs_perag_put+0x26e/0x390 [xfs] 
[ 3674.823621]  xfs_dialloc+0xf4/0x6a0 [xfs] 
[ 3674.843583]  ? xfs_ialloc_ag_select+0x4a0/0x4a0 [xfs] 
[ 3674.869046]  ? save_stack+0x4d/0x80 
[ 3674.886821]  ? save_stack+0x19/0x80 
[ 3674.903911]  ? __kasan_kmalloc.constprop.6+0xc1/0xd0 
[ 3674.928691]  ? kmem_cache_alloc+0xf4/0x320 
[ 3674.948118]  ? kmem_zone_alloc+0x6c/0x120 [xfs] 
[ 3674.969312]  ? xfs_trans_alloc+0x44/0x630 [xfs] 
[ 3674.990319]  ? xfs_generic_create+0x38e/0x4b0 [xfs] 
[ 3675.013030]  ? lookup_open+0xed9/0x1990 
[ 3675.030904]  ? path_openat+0xb33/0x2a60 
[ 3675.048478]  ? do_filp_open+0x17c/0x250 
[ 3675.066205]  ? do_sys_open+0x1d9/0x360 
[ 3675.083526]  xfs_ialloc+0xfa/0x1c10 [xfs] 
[ 3675.102396]  ? xfs_iunlink_free_item+0x60/0x60 [xfs] 
[ 3675.125858]  ? xlog_grant_head_check+0x187/0x430 [xfs] 
[ 3675.150110]  ? xlog_grant_head_wait+0xaa0/0xaa0 [xfs] 
[ 3675.173980]  ? xlog_grant_add_space.isra.14+0x85/0x100 [xfs] 
[ 3675.200628]  xfs_dir_ialloc+0x135/0x630 [xfs] 
[ 3675.221134]  ? __percpu_counter_compare+0x86/0xe0 
[ 3675.243349]  ? xfs_lookup+0x490/0x490 [xfs] 
[ 3675.263055]  ? lock_acquire+0x142/0x380 
[ 3675.281175]  ? lock_contended+0xd50/0xd50 
[ 3675.300078]  xfs_create+0x5bc/0x1320 [xfs] 
[ 3675.319636]  ? xfs_dir_ialloc+0x630/0x630 [xfs] 
[ 3675.340956]  ? get_cached_acl+0x23a/0x390 
[ 3675.360223]  ? set_posix_acl+0x250/0x250 
[ 3675.379875]  ? get_acl+0x18/0x1f0 
[ 3675.396228]  xfs_generic_create+0x38e/0x4b0 [xfs] 
[ 3675.419343]  ? lock_downgrade+0x620/0x620 
[ 3675.438996]  ? lock_contended+0xd50/0xd50 
[ 3675.458080]  ? xfs_setup_iops+0x420/0x420 [xfs] 
[ 3675.479498]  ? do_raw_spin_unlock+0x54/0x220 
[ 3675.499493]  ? _raw_spin_unlock+0x24/0x30 
[ 3675.518304]  ? d_splice_alias+0x417/0xb50 
[ 3675.537345]  ? xfs_vn_lookup+0x156/0x190 [xfs] 
[ 3675.558262]  ? selinux_capable+0x20/0x20 
[ 3675.576337]  ? from_kgid+0x83/0xc0 
[ 3675.592075]  lookup_open+0xed9/0x1990 
[ 3675.609036]  ? path_init+0x10f0/0x10f0 
[ 3675.626336]  ? lock_downgrade+0x620/0x620 
[ 3675.644483]  path_openat+0xb33/0x2a60 
[ 3675.661432]  ? getname_flags+0xba/0x510 
[ 3675.679087]  ? path_mountpoint+0xab0/0xab0 
[ 3675.697892]  ? kasan_init_slab_obj+0x20/0x30 
[ 3675.717904]  ? new_slab+0x326/0x630 
[ 3675.734621]  ? ___slab_alloc+0x3e3/0x5e0 
[ 3675.753045]  do_filp_open+0x17c/0x250 
[ 3675.770273]  ? may_open_dev+0xc0/0xc0 
[ 3675.787452]  ? __check_object_size+0x25f/0x346 
[ 3675.808348]  ? do_raw_spin_unlock+0x54/0x220 
[ 3675.828459]  ? _raw_spin_unlock+0x24/0x30 
[ 3675.847272]  do_sys_open+0x1d9/0x360 
[ 3675.864040]  ? filp_open+0x50/0x50 
[ 3675.880894]  do_syscall_64+0x9f/0x4d0 
[ 3675.899047]  entry_SYSCALL_64_after_hwframe+0x49/0xbe 
[ 3675.924132] RIP: 0033:0x7fab879bb5e8 
[ 3675.941751] Code: 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 8d 05 35 51 2d 00 8b 00 85 c0 75 17 b8 55 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 48 c3 0f 1f 80 00 00 00 00 55 89 f5 53 48 89 
[ 3676.030090] RSP: 002b:00007ffe5e603e98 EFLAGS: 00000246 ORIG_RAX: 0000000000000055 
[ 3676.065926] RAX: ffffffffffffffda RBX: 00007ffe5e607298 RCX: 00007fab879bb5e8 
[ 3676.099605] RDX: 000000000040ce46 RSI: 00000000000001b6 RDI: 00007ffe5e603eb6 
[ 3676.133161] RBP: 0000000000000007 R08: 0000000000000000 R09: 00007ffe5e603c44 
[ 3676.168328] R10: fffffffffffffe05 R11: 0000000000000246 R12: 0000000000000001 
[ 3676.201669] R13: 00007ffe5e608608 R14: 0000000000000000 R15: 0000000000000000 
[ 3676.235285] irq event stamp: 0 
[ 3676.249630] hardirqs last  enabled at (0): [<0000000000000000>] 0x0 
[ 3676.279149] hardirqs last disabled at (0): [<ffffffffad5b045b>] copy_process.part.33+0x187b/0x5e40 
[ 3676.321474] softirqs last  enabled at (0): [<ffffffffad5b04f7>] copy_process.part.33+0x1917/0x5e40 
[ 3676.363493] softirqs last disabled at (0): [<0000000000000000>] 0x0 
[ 3676.393505] ---[ end trace 51bdc5e7fcb8f553 ]--- 
[ 3676.416166] XFS (dm-0): Internal error XFS_WANT_CORRUPTED_RETURN at line 1529 of file fs/xfs/libxfs/xfs_ialloc.c.  Caller xfs_dialloc_ag+0x3ff/0x6a0 [xfs] 
[ 3676.484269] CPU: 8 PID: 19166 Comm: fsstress Tainted: G    B   W         5.2.0-rc4+ #1 
[ 3676.521635] Hardware name: HP ProLiant DL360 Gen9, BIOS P89 05/06/2015 
[ 3676.552486] Call Trace: 
[ 3676.563989]  dump_stack+0x7c/0xc0 
[ 3676.579620]  xfs_dialloc_ag_update_inobt+0x2e8/0x500 [xfs] 
[ 3676.605548]  ? xfs_dialloc_ag_finobt_newino.isra.12+0x560/0x560 [xfs] 
[ 3676.635924]  ? kmem_zone_alloc+0x6c/0x120 [xfs] 
[ 3676.657042]  ? xfs_inobt_init_cursor+0x7e/0x520 [xfs] 
[ 3676.680967]  xfs_dialloc_ag+0x3ff/0x6a0 [xfs] 
[ 3676.701420]  ? xfs_dialloc_ag_inobt+0x1420/0x1420 [xfs] 
[ 3676.725958]  ? xfs_perag_put+0x26e/0x390 [xfs] 
[ 3676.746890]  ? xfs_ialloc_read_agi+0x1fb/0x560 [xfs] 
[ 3676.770395]  ? xfs_perag_put+0x26e/0x390 [xfs] 
[ 3676.791359]  xfs_dialloc+0xf4/0x6a0 [xfs] 
[ 3676.810064]  ? xfs_ialloc_ag_select+0x4a0/0x4a0 [xfs] 
[ 3676.833401]  ? save_stack+0x4d/0x80 
[ 3676.849513]  ? save_stack+0x19/0x80 
[ 3676.865512]  ? __kasan_kmalloc.constprop.6+0xc1/0xd0 
[ 3676.888939]  ? kmem_cache_alloc+0xf4/0x320 
[ 3676.909702]  ? kmem_zone_alloc+0x6c/0x120 [xfs] 
[ 3676.934091]  ? xfs_trans_alloc+0x44/0x630 [xfs] 
[ 3676.956527]  ? xfs_generic_create+0x38e/0x4b0 [xfs] 
[ 3676.980830]  ? lookup_open+0xed9/0x1990 
[ 3676.998959]  ? path_openat+0xb33/0x2a60 
[ 3677.017132]  ? do_filp_open+0x17c/0x250 
[ 3677.035327]  ? do_sys_open+0x1d9/0x360 
[ 3677.053125]  xfs_ialloc+0xfa/0x1c10 [xfs] 
[ 3677.072140]  ? xfs_iunlink_free_item+0x60/0x60 [xfs] 
[ 3677.096825]  ? xlog_grant_head_check+0x187/0x430 [xfs] 
[ 3677.120704]  ? xlog_grant_head_wait+0xaa0/0xaa0 [xfs] 
[ 3677.144190]  ? xlog_grant_add_space.isra.14+0x85/0x100 [xfs] 
[ 3677.170450]  xfs_dir_ialloc+0x135/0x630 [xfs] 
[ 3677.192303]  ? __percpu_counter_compare+0x86/0xe0 
[ 3677.214719]  ? xfs_lookup+0x490/0x490 [xfs] 
[ 3677.234578]  ? lock_acquire+0x142/0x380 
[ 3677.252628]  ? lock_contended+0xd50/0xd50 
[ 3677.271669]  xfs_create+0x5bc/0x1320 [xfs] 
[ 3677.291990]  ? xfs_dir_ialloc+0x630/0x630 [xfs] 
[ 3677.313421]  ? get_cached_acl+0x23a/0x390 
[ 3677.332536]  ? set_posix_acl+0x250/0x250 
[ 3677.351094]  ? get_acl+0x18/0x1f0 
[ 3677.366715]  xfs_generic_create+0x38e/0x4b0 [xfs] 
[ 3677.388901]  ? lock_downgrade+0x620/0x620 
[ 3677.407885]  ? lock_contended+0xd50/0xd50 
[ 3677.427663]  ? xfs_setup_iops+0x420/0x420 [xfs] 
[ 3677.450633]  ? do_raw_spin_unlock+0x54/0x220 
[ 3677.471607]  ? _raw_spin_unlock+0x24/0x30 
[ 3677.491544]  ? d_splice_alias+0x417/0xb50 
[ 3677.511952]  ? xfs_vn_lookup+0x156/0x190 [xfs] 
[ 3677.532896]  ? selinux_capable+0x20/0x20 
[ 3677.551382]  ? from_kgid+0x83/0xc0 
[ 3677.567407]  lookup_open+0xed9/0x1990 
[ 3677.584749]  ? path_init+0x10f0/0x10f0 
[ 3677.602514]  ? lock_downgrade+0x620/0x620 
[ 3677.621511]  path_openat+0xb33/0x2a60 
[ 3677.638854]  ? getname_flags+0xba/0x510 
[ 3677.657089]  ? path_mountpoint+0xab0/0xab0 
[ 3677.676373]  ? kasan_init_slab_obj+0x20/0x30 
[ 3677.696714]  ? new_slab+0x326/0x630 
[ 3677.713198]  ? ___slab_alloc+0x3e3/0x5e0 
[ 3677.731229]  do_filp_open+0x17c/0x250 
[ 3677.747756]  ? may_open_dev+0xc0/0xc0 
[ 3677.765138]  ? __check_object_size+0x25f/0x346 
[ 3677.785750]  ? do_raw_spin_unlock+0x54/0x220 
[ 3677.805102]  ? _raw_spin_unlock+0x24/0x30 
[ 3677.823094]  do_sys_open+0x1d9/0x360 
[ 3677.839645]  ? filp_open+0x50/0x50 
[ 3677.855954]  do_syscall_64+0x9f/0x4d0 
[ 3677.873228]  entry_SYSCALL_64_after_hwframe+0x49/0xbe 
[ 3677.897383] RIP: 0033:0x7fab879bb5e8 
[ 3677.914515] Code: 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 8d 05 35 51 2d 00 8b 00 85 c0 75 17 b8 55 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 48 c3 0f 1f 80 00 00 00 00 55 89 f5 53 48 89 
[ 3678.005389] RSP: 002b:00007ffe5e603e98 EFLAGS: 00000246 ORIG_RAX: 0000000000000055 
[ 3678.041628] RAX: ffffffffffffffda RBX: 00007ffe5e607298 RCX: 00007fab879bb5e8 
[ 3678.075514] RDX: 000000000040ce46 RSI: 00000000000001b6 RDI: 00007ffe5e603eb6 
[ 3678.109193] RBP: 0000000000000007 R08: 0000000000000000 R09: 00007ffe5e603c44 
[ 3678.142727] R10: fffffffffffffe05 R11: 0000000000000246 R12: 0000000000000001 
[ 3678.176442] R13: 00007ffe5e608608 R14: 0000000000000000 R15: 0000000000000000 
[ 3678.210096] XFS (dm-0): Internal error xfs_trans_cancel at line 1051 of file fs/xfs/xfs_trans.c.  Caller xfs_create+0x5db/0x1320 [xfs] 
[ 3678.267017] CPU: 8 PID: 19166 Comm: fsstress Tainted: G    B   W         5.2.0-rc4+ #1 
[ 3678.304486] Hardware name: HP ProLiant DL360 Gen9, BIOS P89 05/06/2015 
[ 3678.335484] Call Trace: 
[ 3678.347211]  dump_stack+0x7c/0xc0 
[ 3678.362941]  xfs_trans_cancel+0x404/0x540 [xfs] 
[ 3678.384517]  ? xfs_create+0x5db/0x1320 [xfs] 
[ 3678.404810]  xfs_create+0x5db/0x1320 [xfs] 
[ 3678.424039]  ? xfs_dir_ialloc+0x630/0x630 [xfs] 
[ 3678.445491]  ? get_cached_acl+0x23a/0x390 
[ 3678.465373]  ? set_posix_acl+0x250/0x250 
[ 3678.484604]  ? get_acl+0x18/0x1f0 
[ 3678.501131]  xfs_generic_create+0x38e/0x4b0 [xfs] 
[ 3678.524394]  ? lock_downgrade+0x620/0x620 
[ 3678.544411]  ? lock_contended+0xd50/0xd50 
[ 3678.563513]  ? xfs_setup_iops+0x420/0x420 [xfs] 
[ 3678.585377]  ? do_raw_spin_unlock+0x54/0x220 
[ 3678.605561]  ? _raw_spin_unlock+0x24/0x30 
[ 3678.624534]  ? d_splice_alias+0x417/0xb50 
[ 3678.643473]  ? xfs_vn_lookup+0x156/0x190 [xfs] 
[ 3678.664364]  ? selinux_capable+0x20/0x20 
[ 3678.683024]  ? from_kgid+0x83/0xc0 
[ 3678.699042]  lookup_open+0xed9/0x1990 
[ 3678.716242]  ? path_init+0x10f0/0x10f0 
[ 3678.733905]  ? lock_downgrade+0x620/0x620 
[ 3678.752998]  path_openat+0xb33/0x2a60 
[ 3678.770354]  ? getname_flags+0xba/0x510 
[ 3678.788441]  ? path_mountpoint+0xab0/0xab0 
[ 3678.807602]  ? kasan_init_slab_obj+0x20/0x30 
[ 3678.827795]  ? new_slab+0x326/0x630 
[ 3678.844374]  ? ___slab_alloc+0x3e3/0x5e0 
[ 3678.862756]  do_filp_open+0x17c/0x250 
[ 3678.880089]  ? may_open_dev+0xc0/0xc0 
[ 3678.897937]  ? __check_object_size+0x25f/0x346 
[ 3678.918993]  ? do_raw_spin_unlock+0x54/0x220 
[ 3678.939234]  ? _raw_spin_unlock+0x24/0x30 
[ 3678.958169]  do_sys_open+0x1d9/0x360 
[ 3678.974662]  ? filp_open+0x50/0x50 
[ 3678.991695]  do_syscall_64+0x9f/0x4d0 
[ 3679.009953]  entry_SYSCALL_64_after_hwframe+0x49/0xbe 
[ 3679.035227] RIP: 0033:0x7fab879bb5e8 
[ 3679.053006] Code: 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 8d 05 35 51 2d 00 8b 00 85 c0 75 17 b8 55 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 48 c3 0f 1f 80 00 00 00 00 55 89 f5 53 48 89 
[ 3679.142262] RSP: 002b:00007ffe5e603e98 EFLAGS: 00000246 ORIG_RAX: 0000000000000055 
[ 3679.178078] RAX: ffffffffffffffda RBX: 00007ffe5e607298 RCX: 00007fab879bb5e8 
[ 3679.212063] RDX: 000000000040ce46 RSI: 00000000000001b6 RDI: 00007ffe5e603eb6 
[ 3679.245500] RBP: 0000000000000007 R08: 0000000000000000 R09: 00007ffe5e603c44 
[ 3679.278839] R10: fffffffffffffe05 R11: 0000000000000246 R12: 0000000000000001 
[ 3679.312393] R13: 00007ffe5e608608 R14: 0000000000000000 R15: 0000000000000000 
[ 3679.346558] XFS (dm-0): log I/O error -5 
[ 3679.366619] XFS (dm-0): xfs_do_force_shutdown(0x2) called from line 1235 of file fs/xfs/xfs_log.c. Return address = 0000000056321ec9 
[ 3679.378219] Buffer I/O error on dev dm-0, logical block 31457152, async page read 
[ 3679.422619] XFS (dm-0): Log I/O Error Detected. Shutting down filesystem 
[ 3679.457704] Buffer I/O error on dev dm-0, logical block 31457153, async page read 
[ 3679.489606] XFS (dm-0): Please unmount the filesystem and rectify the problem(s) 
[ 3679.563351] Buffer I/O error on dev dm-0, logical block 31457154, async page read 
[ 3679.600812] Buffer I/O error on dev dm-0, logical block 31457155, async page read 
[ 3679.636355] Buffer I/O error on dev dm-0, logical block 31457156, async page read 
[ 3679.671882] Buffer I/O error on dev dm-0, logical block 31457157, async page read 
[ 3679.707456] Buffer I/O error on dev dm-0, logical block 31457158, async page read 
[ 3679.743268] Buffer I/O error on dev dm-0, logical block 31457159, async page read 
[ 3679.957782] XFS (dm-0): Unmounting Filesystem
Comment 5 Zorro Lang 2019-06-29 03:46:28 UTC
Created attachment 283473 [details]
console log about panic on xfs_bmapi_read
Comment 6 Zorro Lang 2019-06-29 03:48:07 UTC
Please check the attachment for more details about comment 4. Maybe I should file a separate bug to track this issue.
Comment 7 Darrick J. Wong 2019-06-29 16:07:45 UTC
This is a different issue.  It would help to know where "xfs_bmapi_read+0x311" points to in your kernel, though it looks like inode inactivation crashed while trying to tear down an attr fork, and maybe the attr fork wasn't loaded in memory?  (Possibly because the fs went down while the inode was being set up?)

Ahh generic/475, bringer of much bug report. :)
Comment 8 Darrick J. Wong 2019-06-29 17:31:56 UTC
Ok, so I reproduced it locally and tracked the crash to this part of xfs_bmapi_read() where we dereference *ifp:

        if (!(ifp->if_flags & XFS_IFEXTENTS)) {
                error = xfs_iread_extents(NULL, ip, whichfork);
                if (error)
                        return error;
        }
                                                                                
Looking at xfs_iformat_fork(), it seems that if there's any kind of error formatting the attr fork it'll free ip->i_afp and set it to NULL, so I think the fix is to add an "if (!afp) return -EIO;" somewhere.

Not sure how we actually get to this place, though.  fsstress is running bulkstat, which is inactivating an inode with i_nlink == 0 and a corrupt attr fork that won't load.  Maybe we hit an inode that had previously gone through unlinked processing after log recovery but was lurking on the mru waiting to be inactivated, but then bulkstat showed up (with its IGET_DONTCACHE) which forced immediate inactivation?
Comment 9 Darrick J. Wong 2019-06-29 17:35:14 UTC
Zorro,

If you get a chance, can you try this debugging patch, please?

diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
index baf0b72c0a37..1bf408255349 100644
--- a/fs/xfs/libxfs/xfs_bmap.c
+++ b/fs/xfs/libxfs/xfs_bmap.c
@@ -3846,6 +3846,12 @@ xfs_bmapi_read(
                return 0;
        }
 
+       if (!ifp) {
+               xfs_err(mp, "NULL FORK, inode x%llx fork %d??",
+                               ip->i_ino, whichfork);
+               return -EFSCORRUPTED;
+       }
+
        if (!(ifp->if_flags & XFS_IFEXTENTS)) {
                error = xfs_iread_extents(NULL, ip, whichfork);
                if (error)
Comment 10 Zorro Lang 2019-06-30 13:52:43 UTC
(In reply to Darrick J. Wong from comment #9)
> Zorro,
> 
> If you get a chance, can you try this debugging patch, please?

Sure, I'll give it a try. Together with this bug ... they were both triggered by g/475. You really wrote a nice test case :)

Both of these bugs are very hard to reproduce, so all I can do is test as thoroughly as possible; I can't 100% verify they're fixed even if all tests pass, but I'll try to get to 99% :-P

BTW, if this is a separate bug, I'd like to file a new report to track it, to avoid confusion.

Thanks,
Zorro


Comment 11 Zorro Lang 2019-07-09 04:30:11 UTC
(In reply to Zorro Lang from comment #10)
> Sure, I'll give it a try.

Update: with the patches from comment 2 and comment 9 merged, I can't reproduce this bug or https://bugzilla.kernel.org/show_bug.cgi?id=204031 after running generic/475 on six different machines for 3 days.
Comment 12 Luis Chamberlain 2019-07-11 22:59:33 UTC
(In reply to Zorro Lang from comment #11)
> Update: with the patches from comment 2 and comment 9 merged, I can't
> reproduce this bug or https://bugzilla.kernel.org/show_bug.cgi?id=204031
> after running generic/475 on six different machines for 3 days.

Can you try with just the patch from comment 2? Also, it isn't clear to me yet whether this is a regression. Did this used to work? If you're not sure, can you try with v4.19 and see whether the issue also appears there?
Comment 13 Zorro Lang 2019-07-12 04:27:08 UTC
(In reply to Luis Chamberlain from comment #12)
> Can you try with just the patch from comment 2? Also, it isn't clear to me
> yet whether this is a regression. Did this used to work? If you're not
> sure, can you try with v4.19 and see whether the issue also appears there?

I haven't tried to determine whether it's a regression, because:
1. This bug is very hard to reproduce.
2. The reproducer (generic/475) can easily trigger other issues on older kernels.

Anyway, I'll give it a try on v4.19, but I can't promise I can find out whether it's a regression :)
Comment 14 Zorro Lang 2019-07-12 13:18:09 UTC
(In reply to Zorro Lang from comment #13)
> Anyway, I'll give it a try on v4.19, but I can't promise I can find out
> whether it's a regression :)

v4.19 reproduced two different panics, [1] and [2], but neither is the same as the issues above:

[1]
[  591.414884] kasan: GPF could be caused by NULL-ptr deref or user memory access 
[  591.422117] general protection fault: 0000 [#1] SMP KASAN PTI 
[  591.427869] CPU: 19 PID: 17963 Comm: xfsaild/dm-0 Tainted: G    B   W         4.19.0 #1 
[  591.435865] Hardware name: Dell Inc. PowerEdge R740/00WGD1, BIOS 1.3.7 02/08/2018 
[  591.443394] RIP: 0010:xfs_buf_free+0x20f/0x5b0 [xfs] 
[  591.448364] Code: 80 02 00 00 0f 86 60 01 00 00 41 80 7d 00 00 0f 85 ff 02 00 00 48 8b 8b 40 02 00 00 44 89 fa 48 8d 14 d1 48 89 d1 48 c1 e9 03 <42> 80 3c 21 00 0f 85 c8 02 00 00 48 8b 3a 31 f6 41 83 c7 01 e8 d8 
[  591.467108] RSP: 0018:ffff880101e3fab0 EFLAGS: 00010246 
[  591.472334] RAX: 0000000000000000 RBX: ffff880001980680 RCX: 0000000000000000 
[  591.479466] RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffff8800019808c0 
[  591.486602] RBP: ffffed0000330120 R08: ffffed0005a681a1 R09: ffffed0005a681a0 
[  591.493730] R10: ffffed0005a681a0 R11: ffff88002d340d03 R12: dffffc0000000000 
[  591.500862] R13: ffffed0000330118 R14: ffff880001980900 R15: 0000000000000000 
[  591.507997] FS:  0000000000000000(0000) GS:ffff880119e00000(0000) knlGS:0000000000000000 
[  591.516081] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033 
[  591.521827] CR2: 00007f39c4ebd228 CR3: 0000000013216002 CR4: 00000000007606e0 
[  591.528961] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 
[  591.536094] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 
[  591.543225] PKRU: 55555554 
[  591.545936] Call Trace: 
[  591.548438]  xfs_buf_rele+0x2f8/0xc80 [xfs] 
[  591.552672]  ? xfs_buf_hold+0x280/0x280 [xfs] 
[  591.557072]  ? xfs_buf_unlock+0x1ea/0x2d0 [xfs] 
[  591.561647]  ? __xfs_buf_submit+0x687/0x730 [xfs] 
[  591.566395]  ? xfs_buf_ioend+0x446/0x6f0 [xfs] 
[  591.570886]  __xfs_buf_submit+0x687/0x730 [xfs] 
[  591.575460]  ? xfs_buf_delwri_submit_buffers+0x36b/0xaa0 [xfs] 
[  591.581328]  xfs_buf_delwri_submit_buffers+0x36b/0xaa0 [xfs] 
[  591.587036]  ? xfsaild+0x94a/0x25a0 [xfs] 
[  591.591088]  ? xfs_bwrite+0x150/0x150 [xfs] 
[  591.595319]  ? xfs_iunlock+0x310/0x480 [xfs] 
[  591.599637]  ? xfsaild+0x940/0x25a0 [xfs] 
[  591.603697]  xfsaild+0x94a/0x25a0 [xfs] 
[  591.607542]  ? finish_task_switch+0xfc/0x6c0 
[  591.611860]  ? xfs_trans_ail_cursor_first+0x180/0x180 [xfs] 
[  591.617439]  ? lock_downgrade+0x5e0/0x5e0 
[  591.621450]  ? __kthread_parkme+0x59/0x180 
[  591.625553]  ? __kthread_parkme+0xb6/0x180 
[  591.629699]  ? xfs_trans_ail_cursor_first+0x180/0x180 [xfs] 
[  591.635273]  kthread+0x31a/0x3e0 
[  591.638505]  ? kthread_create_worker_on_cpu+0xc0/0xc0 
[  591.643558]  ret_from_fork+0x3a/0x50 
[  591.647140] Modules linked in: dm_mod iTCO_wdt iTCO_vendor_support dcdbas intel_rapl skx_edac nfit x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel intel_cstate intel_uncore intel_rapl_perf dax_pmem device_dax nd_pmem pcspkr sg i2c_i801 ipmi_ssif mei_me lpc_ich mei ipmi_si ipmi_devintf ipmi_msghandler acpi_power_meter sunrpc ext4 mbcache vfat jbd2 fat xfs libcrc32c sr_mod cdrom sd_mod crc32c_intel mgag200 i2c_algo_bit drm_kms_helper mlx5_core syscopyarea sysfillrect sysimgblt fb_sys_fops ttm ahci ixgbe libahci mdio i40e drm megaraid_sas libata dca tg3 mlxfw 
[  591.702756] ---[ end trace e658cafc4d56a43f ]--- 

[2]
[  852.300021] kasan: CONFIG_KASAN_INLINE enabled 
[  852.304474] kasan: GPF could be caused by NULL-ptr deref or user memory access 
[  852.311699] general protection fault: 0000 [#1] SMP KASAN PTI 
[  852.317444] CPU: 3 PID: 18390 Comm: fsstress Tainted: G        W         4.19.0 #1 
[  852.325010] Hardware name: Supermicro SYS-1018R-WR/X10SRW-F, BIOS 2.0 12/17/2015 
[  852.332464] RIP: 0010:xfs_trans_brelse+0xe7/0x560 [xfs] 
[  852.337690] Code: 83 7e 03 00 00 89 ed 48 0f a3 2d 74 d0 44 f2 0f 82 bc 02 00 00 48 b8 00 00 00 00 00 fc ff df 48 8d 7b 38 48 89 fa 48 c1 ea 03 <0f> b6 04 02 84 c0 74 08 3c 03 0f 8e f2 03 00 00 81 7b 38 3c 12 00 
[  852.356434] RSP: 0018:ffff8800402b78d8 EFLAGS: 00010202 
[  852.361660] RAX: dffffc0000000000 RBX: 0000000000000000 RCX: 0000000000000000 
[  852.368786] RDX: 0000000000000007 RSI: 000000000000000a RDI: 0000000000000038 
[  852.375910] RBP: 0000000000000003 R08: ffffed002307d0e1 R09: ffffed002307d0e0 
[  852.383042] R10: ffffed002307d0e0 R11: ffff8801183e8707 R12: ffff880015e74ac0 
[  852.390165] R13: 0000000000000007 R14: ffff880015e74cf8 R15: ffff8800073e3a00 
[  852.397292] FS:  00007fe0d67d7b80(0000) GS:ffff880118200000(0000) knlGS:0000000000000000 
[  852.405376] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033 
[  852.411115] CR2: 0000000000dfd000 CR3: 0000000009860002 CR4: 00000000003606e0 
[  852.418238] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 
[  852.425363] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 
[  852.432495] Call Trace: 
[  852.434996]  xfs_attr_set+0x81c/0xa90 [xfs] 
[  852.439214]  ? xfs_attr_get+0x530/0x530 [xfs] 
[  852.443576]  ? avc_has_perm+0xa5/0x4b0 
[  852.447330]  ? mark_held_locks+0x140/0x140 
[  852.451427]  ? avc_has_perm+0x283/0x4b0 
[  852.455267]  ? avc_has_perm_noaudit+0x480/0x480 
[  852.459800]  ? lock_downgrade+0x5e0/0x5e0 
[  852.463816]  ? creds_are_invalid+0x43/0xd0 
[  852.467974]  xfs_xattr_set+0x75/0xe0 [xfs] 
[  852.472074]  __vfs_setxattr+0xd0/0x130 
[  852.475824]  ? xattr_resolve_name+0x4e0/0x4e0 
[  852.480186]  __vfs_setxattr_noperm+0xe7/0x390 
[  852.484543]  vfs_setxattr+0xa3/0xd0 
[  852.488035]  setxattr+0x182/0x240 
[  852.491355]  ? vfs_setxattr+0xd0/0xd0 
[  852.495022]  ? filename_lookup.part.31+0x1f1/0x360 
[  852.499815]  ? mark_held_locks+0x140/0x140 
[  852.503916]  ? filename_parentat.part.30+0x3e0/0x3e0 
[  852.508882]  ? lock_acquire+0x14f/0x3b0 
[  852.512718]  ? mnt_want_write+0x3c/0xa0 
[  852.516560]  ? rcu_sync_lockdep_assert+0xf/0x110 
[  852.521178]  ? __sb_start_write+0x1b2/0x260 
[  852.525366]  path_setxattr+0x11b/0x130 
[  852.529118]  ? setxattr+0x240/0x240 
[  852.532610]  ? __audit_syscall_exit+0x76b/0xa90 
[  852.537143]  __x64_sys_setxattr+0xc0/0x160 
[  852.541244]  do_syscall_64+0xa5/0x4a0 
[  852.544908]  entry_SYSCALL_64_after_hwframe+0x49/0xbe 
[  852.549959] RIP: 0033:0x7fe0d5ccd38e 
[  852.553530] Code: 48 8b 0d fd 2a 2c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 49 89 ca b8 bc 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ca 2a 2c 00 f7 d8 64 89 01 48 
[  852.572277] RSP: 002b:00007ffd90fb1ce8 EFLAGS: 00000206 ORIG_RAX: 00000000000000bc 
[  852.579843] RAX: ffffffffffffffda RBX: 0000000000000050 RCX: 00007fe0d5ccd38e 
[  852.586968] RDX: 0000000000d3da40 RSI: 00007ffd90fb1d10 RDI: 0000000000d2d3e0 
[  852.594100] RBP: 0000000000000001 R08: 0000000000000002 R09: 00007ffd90fb1a57 
[  852.601224] R10: 0000000000000050 R11: 0000000000000206 R12: 0000000000000002 
[  852.608348] R13: 00000000000001ab R14: 0000000000d3da40 R15: 0000000000000050 
[  852.615477] Modules linked in: sunrpc ext4 mbcache jbd2 iTCO_wdt iTCO_vendor_support intel_rapl sb_edac x86_pkg_temp_thermal coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel intel_cstate intel_uncore intel_rapl_perf pcspkr dax_pmem nd_pmem device_dax i2c_i801 lpc_ich ioatdma joydev mei_me sg mei wmi ipmi_ssif ipmi_si ipmi_devintf ipmi_msghandler acpi_power_meter xfs libcrc32c dm_service_time sd_mod ast drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm crc32c_intel ahci libahci igb mpt3sas libata raid_class dca scsi_transport_sas i2c_algo_bit dm_multipath dm_mirror dm_region_hash dm_log dm_mod 
[  852.672554] ---[ end trace 10bad956ab83813f ]---
