Bug 217030 - [Syzkaller & bisect] There is "xfs_bmapi_convert_delalloc" WARNING in v6.2-rc7 kernel
Summary: [Syzkaller & bisect] There is "xfs_bmapi_convert_delalloc" WARNING in v6.2-rc...
Status: NEW
Alias: None
Product: File System
Classification: Unclassified
Component: XFS
Hardware: All
OS: Linux
Importance: P1 normal
Assignee: FileSystem/XFS Default Virtual Assignee
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2023-02-13 10:57 UTC by xupengfe
Modified: 2023-02-14 01:23 UTC
CC List: 0 users

See Also:
Kernel Version: v6.2-rc7
Subsystem:
Regression: No
Bisected commit-id:


Attachments

Description xupengfe 2023-02-13 10:57:31 UTC
userspace arch: x86

There is "xfs_bmapi_convert_delalloc" WARNING in v6.2-rc7 kernel in guest:
[   31.180821] ------------[ cut here ]------------
[   31.181101] WARNING: CPU: 0 PID: 550 at fs/xfs/libxfs/xfs_bmap.c:4544 xfs_bmapi_convert_delalloc+0x793/0xf20
[   31.181692] Modules linked in:
[   31.181886] CPU: 0 PID: 550 Comm: repro Not tainted 6.2.0-rc7-4ec5183ec486 #1
[   31.182310] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
[   31.182989] RIP: 0010:xfs_bmapi_convert_delalloc+0x793/0xf20
[   31.183342] Code: f9 ff ff e8 0f fd 08 ff 44 89 e6 bf 02 00 00 00 41 bf f5 ff ff ff e8 dc fe 08 ff 41 83 fc 02 0f 84 eb fe ff ff e8 ed fc 08 ff <0f> 0b e9 df 0
[   31.184391] RSP: 0018:ff1100000d03ef20 EFLAGS: 00010246
[   31.184703] RAX: 0000000000000000 RBX: ff1100001eb47800 RCX: ffffffff82521274
[   31.185118] RDX: 0000000000000000 RSI: ff1100000efc8000 RDI: 0000000000000002
[   31.185531] RBP: ff1100000d03f120 R08: ff1100000d03f048 R09: fffffbfff0eb4055
[   31.185946] R10: ffffffff875a02a7 R11: fffffbfff0eb4054 R12: 0000000000000000
[   31.186363] R13: ff1100000d03f0f8 R14: ff1100000e544000 R15: 00000000fffffff5
[   31.186774] FS:  00007fd840a14700(0000) GS:ff1100006ca00000(0000) knlGS:0000000000000000
[   31.187306] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   31.187648] CR2: 00007f6e90b65000 CR3: 0000000013fa2004 CR4: 0000000000771ef0
[   31.188069] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   31.188482] DR3: 0000000000000000 DR6: 00000000fffe07f0 DR7: 0000000000000400
[   31.188897] PKRU: 55555554
[   31.189069] Call Trace:
[   31.189222]  <TASK>
[   31.189370]  ? __pfx_xfs_bmapi_convert_delalloc+0x10/0x10
[   31.189707]  ? __pfx_rwsem_wake+0x10/0x10
[   31.189996]  ? write_comp_data+0x2f/0x90
[   31.190260]  xfs_map_blocks+0x562/0xe40
[   31.190527]  ? __pfx_xfs_map_blocks+0x10/0x10
[   31.190800]  ? __this_cpu_preempt_check+0x20/0x30
[   31.191113]  ? lock_is_held_type+0xe6/0x140
[   31.191371]  ? rcu_lockdep_current_cpu_online+0x4b/0x160
[   31.191710]  ? __sanitizer_cov_trace_pc+0x25/0x60
[   31.192012]  ? write_comp_data+0x2f/0x90
[   31.192270]  iomap_do_writepage+0xc6f/0x2ba0
[   31.192537]  ? write_comp_data+0x2f/0x90
[   31.192805]  ? __pfx_iomap_do_writepage+0x10/0x10
[   31.193093]  ? rcu_read_lock_sched_held+0xa9/0xd0
[   31.193389]  ? __pfx_rcu_read_lock_sched_held+0x10/0x10
[   31.193714]  ? write_comp_data+0x2f/0x90
[   31.193970]  write_cache_pages+0x573/0x12c0
[   31.194247]  ? __pfx_iomap_do_writepage+0x10/0x10
[   31.194543]  ? __pfx_write_cache_pages+0x10/0x10
[   31.194831]  ? __pfx_rcu_read_lock_sched_held+0x10/0x10
[   31.195173]  ? ncsi_channel_monitor.cold.17+0x204/0x3f7
[   31.195488]  ? lock_release+0x3f8/0x7d0
[   31.195729]  ? __this_cpu_preempt_check+0x20/0x30
[   31.196026]  ? xfs_vm_writepages+0x123/0x1c0
[   31.196298]  ? __pfx_lock_release+0x10/0x10
[   31.196561]  ? do_raw_spin_lock+0x132/0x2a0
[   31.196824]  ? __pfx_do_raw_spin_lock+0x10/0x10
[   31.197117]  ? __kasan_check_read+0x15/0x20
[   31.197381]  iomap_writepages+0x5b/0xc0
[   31.197626]  xfs_vm_writepages+0x13c/0x1c0
[   31.197885]  ? __pfx_xfs_vm_writepages+0x10/0x10
[   31.198189]  ? lock_release+0x3f8/0x7d0
[   31.198429]  ? __pfx_rcu_read_lock_sched_held+0x10/0x10
[   31.198752]  ? write_comp_data+0x2f/0x90
[   31.199021]  ? __pfx_xfs_vm_writepages+0x10/0x10
[   31.199310]  do_writepages+0x1c5/0x690
[   31.199550]  ? __pfx_do_writepages+0x10/0x10
[   31.199816]  ? do_raw_spin_unlock+0x154/0x230
[   31.200096]  ? __sanitizer_cov_trace_pc+0x25/0x60
[   31.200393]  ? wbc_attach_and_unlock_inode+0x585/0x8f0
[   31.200714]  filemap_fdatawrite_wbc+0x163/0x1d0
[   31.201005]  __filemap_fdatawrite_range+0xc9/0x100
[   31.201305]  ? __pfx___filemap_fdatawrite_range+0x10/0x10
[   31.201651]  ? write_comp_data+0x2f/0x90
[   31.201908]  filemap_write_and_wait_range.part.72+0x93/0x100
[   31.202264]  filemap_write_and_wait_range+0x48/0x60
[   31.202575]  __iomap_dio_rw+0x65e/0x1c80
[   31.202819]  ? __sanitizer_cov_trace_pc+0x25/0x60
[   31.203131]  ? timestamp_truncate+0x1ee/0x2f0
[   31.203424]  ? __pfx___iomap_dio_rw+0x10/0x10
[   31.203697]  ? write_comp_data+0x2f/0x90
[   31.203952]  ? __sanitizer_cov_trace_pc+0x25/0x60
[   31.204251]  ? write_comp_data+0x2f/0x90
[   31.204537]  iomap_dio_rw+0x4d/0xb0
[   31.204765]  xfs_file_dio_write_aligned+0x1e9/0x450
[   31.205077]  ? __pfx_xfs_file_dio_write_aligned+0x10/0x10
[   31.205412]  ? lock_is_held_type+0xe6/0x140
[   31.205676]  ? write_comp_data+0x2f/0x90
[   31.205932]  xfs_file_write_iter+0x530/0x750
[   31.206208]  ? __sanitizer_cov_trace_pc+0x25/0x60
[   31.206513]  vfs_write+0xa65/0xe60
[   31.206736]  ? __pfx_vfs_write+0x10/0x10
[   31.206999]  ? __fget_files+0x270/0x400
[   31.207259]  ? write_comp_data+0x2f/0x90
[   31.207515]  ksys_write+0x143/0x280
[   31.207742]  ? __pfx_ksys_write+0x10/0x10
[   31.208013]  __x64_sys_write+0x7c/0xc0
[   31.208248]  ? syscall_enter_from_user_mode+0x51/0x60
[   31.208561]  do_syscall_64+0x3b/0x90
[   31.208791]  entry_SYSCALL_64_after_hwframe+0x72/0xdc
[   31.209103] RIP: 0033:0x7fd846f5b59d
[   31.209324] Code: 00 c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 8
[   31.210373] RSP: 002b:00007fd840a13e98 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[   31.210814] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fd846f5b59d
[   31.211248] RDX: 0000000000001400 RSI: 0000000020000000 RDI: 0000000000000005
[   31.211663] RBP: 00007fd840a13ec0 R08: 0000000000000005 R09: 0000000000000000
[   31.212077] R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffc9b897cce
[   31.212491] R13: 00007ffc9b897ccf R14: 00007ffc9b897d60 R15: 00007fd840a14700
[   31.212929]  </TASK>
[   31.213072] irq event stamp: 995
[   31.213269] hardirqs last  enabled at (1003): [<ffffffff813dd0a1>] __up_console_sem+0x91/0xb0
[   31.213769] hardirqs last disabled at (1010): [<ffffffff813dd086>] __up_console_sem+0x76/0xb0
[   31.214272] softirqs last  enabled at (286): [<ffffffff8536af3f>] __do_softirq+0x53f/0x836
[   31.214754] softirqs last disabled at (211): [<ffffffff812498b0>] irq_exit_rcu+0x100/0x140
[   31.215258] ---[ end trace 0000000000000000 ]---
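
For reference, the syscall register dump above decodes (assuming the standard x86-64 syscall calling convention; the file descriptor and buffer come from the attached reproducer) to a plain write(2) call:

    /* ORIG_RAX = 0x1 (sys_write), RDI = 0x5 (fd), RSI = 0x20000000 (buf),
     * RDX = 0x1400 (count); 0x1400 = 5120 bytes = five 1kB filesystem blocks. */
    write(5, (void *)0x20000000, 0x1400);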

Bisecting found the first bad commit to be:
7348b322332d8602a4133f0b861334ea021b134a
xfs: xfs_bmap_punch_delalloc_range() should take a byte range

After reverting the above commit on top of the v6.2-rc7 kernel, the issue was gone.

The reproducer code from syzkaller, the kconfig, and the dmesg from reproducing on the v6.2-rc7 kernel are attached.

More detailed bisect info is at:
https://github.com/xupengfe/syzkaller_logs/tree/main/230207_175127_xfs_bmapi_convert_delalloc

A previous syzbot report appears to describe the same issue, under the email subject:
"[syzbot] [xfs?] WARNING in xfs_bmapi_convert_delalloc"
I have added the above bisect information to that report as well.

I hope it's helpful.

---
If you need the virtual machine environment used to reproduce the issue:

git clone https://gitlab.com/xupengfe/repro_vm_env.git
cd repro_vm_env
tar -xvf repro_vm_env.tar.gz
cd repro_vm_env; ./start3.sh  // it needs qemu-system-x86_64; I used v7.1.0
   // start3.sh loads bzImage_2241ab53cbb5cdb08a6b2d4688feb13971058f65, a v6.2-rc5 kernel
   // You can change the bzImage_xxx to whichever kernel you want. In the VM, log in as root; there is no password for root.

After logging in to the VM successfully, you can build the reproducer, copy it to the VM as shown below, and run it to reproduce the problem:
gcc -pthread -o repro repro.c
scp -P 10023 repro root@localhost:/root/

Get the bzImage for the target kernel:
Copy the attached kconfig to kernel_src/.config, then:
make olddefconfig
make -jx bzImage           // x should be no more than the number of CPUs your machine has

Point start3.sh at the resulting bzImage to boot the target kernel in the VM.


Tips:
If you already have qemu-system-x86_64, please ignore the info below.
To install qemu v7.1.0:
git clone https://github.com/qemu/qemu.git
cd qemu
git checkout -f v7.1.0
mkdir build
cd build
yum install -y ninja-build.x86_64
../configure --target-list=x86_64-softmmu --enable-kvm --enable-vnc --enable-gtk --enable-sdl
make
make install

---
Thanks!
Comment 1 Dave Chinner 2023-02-13 23:01:58 UTC
On Mon, Feb 13, 2023 at 10:57:31AM +0000, bugzilla-daemon@kernel.org wrote:
> Bisected and found the first bad commit is:
> 7348b322332d8602a4133f0b861334ea021b134a
> xfs: xfs_bmap_punch_delalloc_range() should take a byte range

I don't see how that commit makes any material difference to the
code; the ranges being punched are the same, just being passed in
different units.

The test filesystem image has a 1kB block size, and it has a bad CRC
in the root bnobt block, hence any attempt to allocate fails
immediately with a corrupt root btree block. The first write done
by the reproducer is an O_SYNC write, so it writes 0x1400 bytes into
the page cache (5 1kB blocks) and then internally the write() runs
fsync() which then fails to allocate the blocks in the writeback
path.

This writeback failure then calls ->discard_page() on the write range,
which punches out the delalloc block underneath the failed write.

At the same time, the other thread is doing a direct IO write, and
it sees there are dirty pages in the page cache over the range it is
writing, so it attempts to write them back. It calls into ->map_blocks which
then sees a delalloc extent and tries to convert it. By the time
writeback has locked the folio, locked the ILOCK and runs the
delalloc extent conversion, the delalloc extent has been punched,
and it fails to find the delalloc extent to convert. It then issues
the WARN_ON() reported in this bug report.
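
To make the interleaving concrete, below is a minimal two-thread sketch of the sequence described above. It is an illustration only, not the attached syzkaller reproducer: the file path, open flags and buffer handling are assumptions, and triggering the warning also requires the corrupted 1kB-block-size filesystem image, so this sketch will not reproduce the problem on its own.

/* Hypothetical illustration of the race described above. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

static char buf[0x1400];	/* 0x1400 = 5120 bytes = five 1kB blocks */

/* Writer 1: buffered O_SYNC write.  The implied fsync() fails to allocate
 * blocks (corrupt bnobt), and ->discard_folio() punches out the delalloc
 * extent while the folio stays dirty in the page cache. */
static void *sync_writer(void *arg)
{
	int fd = open("/mnt/testfile", O_WRONLY | O_CREAT | O_SYNC, 0600);

	pwrite(fd, buf, sizeof(buf), 0);
	close(fd);
	return NULL;
}

/* Writer 2: direct IO write to the same range.  iomap_dio_rw() first writes
 * back the dirty page cache; ->map_blocks() then finds no delalloc extent
 * left to convert and xfs_bmapi_convert_delalloc() fires the WARN_ON(). */
static void *dio_writer(void *arg)
{
	void *dbuf;
	int fd = open("/mnt/testfile", O_WRONLY | O_CREAT | O_DIRECT, 0600);

	posix_memalign(&dbuf, 4096, sizeof(buf));
	pwrite(fd, dbuf, sizeof(buf), 0);
	close(fd);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, sync_writer, NULL);
	pthread_create(&t2, NULL, dio_writer, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}

Build with the same "gcc -pthread" invocation shown in the description.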

I don't think this failure is a result of the commit that the
bisect identified - I think the race condition has been there for a
lot longer, and for some reason we've perturbed the code enough
to now expose the race condition occasionally. i.e. the
->discard_folio path is not invalidating the folio or marking it
with an error when we punch the backing store from under it, so it's
remaining dirty in the page cache but without any backing store for
writeback.

Hence I suspect the problem here is that we aren't calling
folio_set_error() on folios that we fail the first delalloc block
conversion on (i.e. count is zero) so this isn't operating like an
IO error. i.e. page_endio() calls folio_set_error() when a
writeback IO fails, and we are not doing that when we call
->discard_folio so the folio remains dirty and uptodate.  Hence
after a failed writeback due to allocation failure, we have the
situation where a dirty page has no backing store, and it seems a
future writeback can trip over that state.
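
For illustration only, what is being described would amount to something like the following untested sketch in the same iomap_writepage_map() error path that is shown in the diff quoted below; it is a sketch of the idea, not a tested or proposed patch:

	if (wpc->ops->discard_folio)
		wpc->ops->discard_folio(folio, pos);
	if (!count) {
		folio_set_error(folio);	/* sketch: mark the folio like a failed writeback IO */
		folio_unlock(folio);
		goto done;
	}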

Hence I suspect this commit first exposed the issue being reported
here:

commit e9c3a8e820ed0eeb2be05072f29f80d1b79f053b
Author: Darrick J. Wong <djwong@kernel.org>
Date:   Mon May 16 15:27:38 2022 -0700

    iomap: don't invalidate folios after writeback errors
    
    XFS has the unique behavior (as compared to the other Linux filesystems)
    that on writeback errors it will completely invalidate the affected
    folio and force the page cache to reread the contents from disk.  All
    other filesystems leave the page mapped and up to date.
    
    This is a rude awakening for user programs, since (in the case where
    write fails but reread doesn't) file contents will appear to revert to
    old disk contents with no notification other than an EIO on fsync.  This
    might have been annoying back in the days when iomap dealt with one page
    at a time, but with multipage folios, we can now throw away *megabytes*
    worth of data for a single write error.
    
    On *most* Linux filesystems, a program can respond to an EIO on write by
    redirtying the entire file and scheduling it for writeback.  This isn't
    foolproof, since the page that failed writeback is no longer dirty and
    could be evicted, but programs that want to recover properly *also*
    have to detect XFS and regenerate every write they've made to the file.
    
    When running xfs/314 on arm64, I noticed a UAF when xfs_discard_folio
    invalidates multipage folios that could be undergoing writeback.  If,
    say, we have a 256K folio caching a mix of written and unwritten
    extents, it's possible that we could start writeback of the first (say)
    64K of the folio and then hit a writeback error on the next 64K.  We
    then free the iop attached to the folio, which is really bad because
    writeback completion on the first 64k will trip over the "blocks per
    folio > 1 && !iop" assertion.
    
    This can't be fixed by only invalidating the folio if writeback fails at
    the start of the folio, since the folio is marked !uptodate, which trips
    other assertions elsewhere.  Get rid of the whole behavior entirely.
    
    Signed-off-by: Darrick J. Wong <djwong@kernel.org>
    Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
    Reviewed-by: Jeff Layton <jlayton@kernel.org>
    Reviewed-by: Christoph Hellwig <hch@lst.de>

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 8fb9b2797fc5..94b53cbdefad 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1387,7 +1387,6 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
                if (wpc->ops->discard_folio)
                        wpc->ops->discard_folio(folio, pos);
                if (!count) {
-                       folio_clear_uptodate(folio);
                        folio_unlock(folio);
                        goto done;
                }

This commit removed the invalidation of the folio after discarding
its backing store via ->discard_folio, hence allowing writeback of
dirty folios that have no storage reservation or delalloc extent
backing the dirty folio.

This, to me, looks like the underlying problem here. If we are not
invalidating the cached data on writeback allocation failure, we must
not punch out the storage reservation backing the dirty cached data
as we are going to retry the write at some point in the future and
hence still need the storage reservation for the dirty folio....

..... and removing ->discard_folio() then results in delalloc
extents that cannot be converted by writeback and so when we go to
reclaim the inode during cache eviction we now have delalloc extents
on the inode, and that trips assert failures because inode reclaim
assumes that writeback will always resolve delalloc extents
either by conversion or discarding them.

Ok, the simple solution trips over other assumptions.  More
investigation required....

-Dave.
Comment 2 xupengfe 2023-02-14 01:23:01 UTC
Hi Dave,

Thanks a lot for your comments!
I just used bisect scripts to find the above commit; after reverting the commit, the issue was gone.

The bisected commit may not accurately reflect the root cause.

Your comments give more clues toward solving the problem.

Thanks!
