Bug 20902 - High IO wait when writing to ext4
Summary: High IO wait when writing to ext4
Status: RESOLVED OBSOLETE
Alias: None
Product: File System
Classification: Unclassified
Component: ext4
Hardware: x86-64 Linux
Importance: P1 normal
Assignee: fs_ext4@kernel-bugs.osdl.org
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2010-10-22 08:56 UTC by Martin Moerch
Modified: 2013-12-10 22:21 UTC
CC List: 5 users

See Also:
Kernel Version: 2.6.36-020636rc8-generic
Subsystem:
Regression: No
Bisected commit-id:


Attachments
dmesg output during problem (248.55 KB, text/plain) - 2010-10-22 08:56 UTC, Martin Moerch
LVM configuration (18.62 KB, text/plain) - 2010-10-22 08:57 UTC, Martin Moerch
tune2fs -l of filesystem (1.67 KB, text/plain) - 2010-10-22 08:57 UTC, Martin Moerch
VG backup file (4.85 KB, text/plain) - 2010-10-22 08:58 UTC, Martin Moerch
sysrq-w (68.72 KB, text/plain) - 2010-10-22 21:52 UTC, Martin Moerch
vmstat when problem starts (2.11 KB, text/plain) - 2010-10-22 21:53 UTC, Martin Moerch
dumpe2fs -h from LVMcrypto device (1.84 KB, text/plain) - 2010-10-22 21:55 UTC, Martin Moerch
blktrace showing initial write slowness (697.25 KB, application/x-bzip) - 2010-11-24 03:14 UTC, Justin Maggard
graph of IO (68.17 KB, image/png) - 2010-11-24 05:13 UTC, Eric Sandeen
tune2fs -l output (1.74 KB, text/plain) - 2010-11-24 18:41 UTC, Justin Maggard

Description Martin Moerch 2010-10-22 08:56:19 UTC
I'm running an ext4 filesystem on a LUKS device that sits on top of an LVM device. The filesystem is created with no special options and is used for storing files ranging in size from 1 KB to 20 GB.

I experience extremely high IO wait times when trying to write files to the filesystem, both locally and through Samba.

The problem can be reproduced by rebooting, unlocking with cryptsetup, mounting the filesystem, and then writing a file to any directory on it. The 'cp' process stalls for almost 15 minutes, then returns to normal and eventually completes the copy. If I then 'cp' the same file again, there's no problem at all and the copy goes through normally, without the 15-minute stall. Rebooting and doing it again reproduces the problem immediately.
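In outline, the reproduction looks something like this (device, volume, and path names below are placeholders, not the actual ones from this setup):

reboot
cryptsetup luksOpen /dev/vg0/somelv cryptvol     # unlock the LUKS device
mount /dev/mapper/cryptvol /mnt/data             # mount the ext4 filesystem
cp bigfile /mnt/data/somedir/                    # first write after boot: stalls for ~15 minutes
cp bigfile /mnt/data/somedir/bigfile.2           # same file again: completes normally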

I've attached everything that might be of interest to anyone who wants to look into this.
Comment 1 Martin Moerch 2010-10-22 08:56:51 UTC
Created attachment 34362 [details]
dmesg output during problem
Comment 2 Martin Moerch 2010-10-22 08:57:19 UTC
Created attachment 34372 [details]
LVM configuration
Comment 3 Martin Moerch 2010-10-22 08:57:58 UTC
Created attachment 34382 [details]
tune2fs -l of filesystem
Comment 4 Martin Moerch 2010-10-22 08:58:16 UTC
Created attachment 34392 [details]
VG backup file
Comment 5 Martin Moerch 2010-10-22 21:52:20 UTC
Created attachment 34432 [details]
sysrq-w
Comment 6 Martin Moerch 2010-10-22 21:53:35 UTC
Created attachment 34442 [details]
vmstat when problem starts
Comment 7 Martin Moerch 2010-10-22 21:55:17 UTC
Created attachment 34452 [details]
dumpe2fs -h from LVMcrypto device
Comment 8 Eric Sandeen 2010-10-29 16:10:59 UTC
Any chance you can test w/o LUKS to see how that affects things?

If I had to guess I'd say things are stuck behind here:

[  554.110652] flush-252:1   D ffff880001e14cc0     0  2186      2 0x00000000
[  554.110679] Call Trace:
[  554.110686]  [<ffffffff81586503>] io_schedule+0x73/0xc0
[  554.110692]  [<ffffffff81180ab5>] sync_buffer+0x45/0x50
[  554.110714]  [<ffffffff81586e8d>] out_of_line_wait_on_bit+0x7d/0x90
[  554.110725]  [<ffffffff811809f8>] __wait_on_buffer+0x28/0x30
[  554.110732]  [<ffffffff81217199>] ext4_mb_init_cache+0x389/0x900

I guess ext4_mb_init_cache() is in:

        /* wait for I/O completion */
        for (i = 0; i < groups_per_page && bh[i]; i++)
                wait_on_buffer(bh[i]);
Comment 9 Justin Maggard 2010-10-29 16:23:36 UTC
I just ran into something similar a few days ago.  It was a 4TB filesystem with about 50,000 files in a mixed data set.  I ran a test script that unmounted, remounted, and then started writing 64MB files in a loop using dd.  With delalloc on, the third 64MB write after umount/mount always took about 150 seconds.  With delalloc off, the first write always took ~150 seconds.  Writes after that took a fraction of a second.
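Roughly, the test amounts to something like the following (device, mount point, and file count are placeholders, not the exact script; the delalloc-off case just adds -o nodelalloc to the mount):

DEV=/dev/sdb1     # placeholder device
MNT=/mnt/test     # placeholder mount point

umount $MNT
mount -t ext4 $DEV $MNT          # or: mount -t ext4 -o nodelalloc $DEV $MNT

for i in $(seq 1 10); do
    # time each 64MB buffered write; the slow one stands out clearly
    time dd if=/dev/zero of=$MNT/file.$i bs=1M count=64
done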
Comment 10 Martin Moerch 2010-10-29 18:08:27 UTC
Unfortunately I cannot test the same filesystem without LUKS, since the problem only shows itself when writing, and I'd prefer to keep my files.

(In reply to comment #9)

Yes, after the initial write that takes ages, the problem seems to be more or less gone. On the odd occasion it happens again, but it's rare.
Comment 11 Theodore Tso 2010-10-29 21:13:50 UTC
Did things work OK with an earlier kernel version?   Is this something that has always troubled you, or is it something which recently started when you upgraded to the 2.6.36 kernel?

Once things are OK after "the first write", are they OK forever after that?  If the problem re-occurs after the first write, can you say something about when that might be?

How about replicating it on a USB-attached disk?  Can you try a couple of different scenarios with a USB-attached disk?  Say,

1) A USB-attached disk partitioned with an MBR partition table?
2) A USB-attached disk set up using LVM with a single LVM volume?
3) A USB-attached disk set up using LVM and a LUKS-encrypted volume?

etc.  (A rough setup sketch for scenario 3 follows below.)
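One way scenario 3 could be set up, assuming /dev/sdX1 is a partition on the USB disk (all names below are placeholders):

pvcreate /dev/sdX1                          # initialize the partition for LVM
vgcreate testvg /dev/sdX1                   # create a volume group on it
lvcreate -n testlv -l 100%FREE testvg       # one LV spanning the whole VG
cryptsetup luksFormat /dev/testvg/testlv    # set up LUKS on the LV
cryptsetup luksOpen /dev/testvg/testlv testcrypt
mkfs.ext4 /dev/mapper/testcrypt
mount /dev/mapper/testcrypt /mnt/usbtest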

And can you specify what mount options, if any, you were using?

Thanks,

-- Ted
Comment 12 Martin Moerch 2010-10-29 21:46:35 UTC
I remember first noticing it as far back as when Ubuntu 8.04 came out, and it has happened ever since. However, around that time my LVM also grew substantially over the following year, so I can't say whether it's 8.04 and later, or the growing size of the LVM.

By "first write" I mean the first write after a reboot. As soon as the system wants to flush data to the drives, it starts this cycle of slow and cumbersome I/O waiting.

I'm using the mount options as per the tune2fs attachment further up.
Comment 13 Eric Sandeen 2010-11-04 17:58:56 UTC
Have any of you with this problem tried other IO schedulers just to rule that in or out?

-Eric
Comment 14 Martin Moerch 2010-11-04 18:46:26 UTC
noop and deadline produce the exact same result for me.
Comment 15 Eric Sandeen 2010-11-04 18:52:22 UTC
Could you capture a blktrace when this happens?

just:

# mount -t debugfs none /sys/kernel/debug
# blktrace -d /dev/blah -o mytrace &
# do your test
# killall blktrace
# tar cvjf mytrace.tar.bz2 mytrace* 

and attach it.  Might offer clues.
Comment 16 Justin Maggard 2010-11-24 03:14:05 UTC
Created attachment 38002 [details]
blktrace showing initial write slowness

Here's mine.  My test case is: mount, sleep 5, then do 10 x 128MB writes using dd to the just-mounted filesystem.  The first 128MB write took over 20 seconds.  Unfortunately I no longer have access to the box where it took 150 seconds.
Comment 17 Eric Sandeen 2010-11-24 05:13:16 UTC
Created attachment 38012 [details]
graph of IO

Here's a graph of the IO.

Does the trace cover the entire test starting at mount?  Looks like it.

Were the 10x128MB writes in series or parallel?

Looks like it spends about 20s seekily sucking up 28MB of metadata off the disk (mostly done in the [flush-253:0] process) and then doing 1.3G of horrible writeback ;)

What exact commands did you use for the test?  I assume it was buffered IO, all in the same subdir?  Sync afterwards or not?

It might be interesting to see the tune2fs -l output to correlate the initial reads to disk metadata but I suppose we can have a pretty good guess at that.

Anyway looks like the first issue is the giant seeky read of disk metadata before any writing even gets underway.
Comment 18 Justin Maggard 2010-11-24 18:41:31 UTC
Created attachment 38102 [details]
tune2fs -l output

Yep, the trace covers the entire test, starting at mount.

The writes are done in series, buffered IO, same subdir, without a sync.  Here's all I'm doing:

for i in $(seq 1 10); do
  dd if=/dev/zero of=$testdir/test.$i bs=1M count=128
done
Comment 19 Andreas Dilger 2010-11-25 09:17:26 UTC
(In reply to comment #16)
> Here's mine.  My test case is mount, sleep 5, then do 10 x 128MB writes using
> dd to the just-mounted filesystem.  The first 128MB write took over 20
> seconds.
>  Unfortunately I don't have access any more to the box where it took 150
> seconds.

We've seen this problem with Lustre as well.  The root of the problem is that the initial write to a fairly full filesystem causes mballoc to scan all of the block groups, looking for one with enough free space to preallocate an 8MB chunk.  On an 8TB filesystem with 64k block groups, at 100 seeks/second, this could take up to 10 minutes to complete.

The patch from Curt committed in 8a57d9d61a6e361c7bb159dda797672c1df1a691 fixed this for small writes at mount time, but does not help for large writes.

We are starting to look at other solutions to this problem in our bugzilla:
https://bugzilla.lustre.org/show_bug.cgi?id=24183

with a patch (currently untested) in:
https://bugzilla.lustre.org/attachment.cgi?id=32320&action=edit


Increasing the flex_bg size is likely going to reduce the severity of this problem, by reducing the number of seeks needed to load the block bitmaps proportional to the flex_bg factor (32 by default today).  That would change the 8TB bitmap scan time from 10 minutes to about 20s.
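For reference, the current flex_bg factor can be checked with dumpe2fs, and a larger one chosen at mkfs time with the -G option (the device name and the factor of 256 below are only examples):

dumpe2fs -h /dev/sdX1 | grep -i "flex block group"   # shows the current flex_bg size
mkfs.ext4 -G 256 /dev/sdX1                           # pack 256 groups per flex_bg when creating the fs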

Other possibilities include starting the bitmap scan at some random group instead of always starting at group 0, storing some free extent information for each group in the group descriptor table, or storing some information in the superblock about which group to start allocations at.
Comment 20 Theodore Tso 2010-11-25 15:30:45 UTC
Curtw's patch (commit id 8a57d9d61) also fixes the problem in the steady state, once all of the block bitmap statistics are loaded into memory.  So one thing we could do is simply force the block bitmap scan at mount time.  It doesn't so much solve the problem as move it to a time when it might be less objectionable.  For 8TB filesystems, if it really does take 10 minutes, we would have to mitigate this by (a) having mke2fs use a larger flex_bg size automatically, and/or (b) loading the block bitmap statistics in parallel (which will help on RAID systems, but not when we have 8TB on a single spindle; given that 3 and 4 TB disks are on the horizon in the next couple of years, 8TB per spindle isn't that far out of reach).

Storing each block group's largest contiguous free extent on disk might be another way of solving this problem.  The reason I don't like this approach, though, is that it forces the implementation details of the buddy bitmap into the filesystem format.  It's possible that we might have N blocks free, where N is, say (for the sake of argument), 256 blocks.  But if those N blocks aren't aligned on a buddy bitmap allocation boundary, mballoc won't see them as a single free extent.  It might instead see free regions of 31, 32, 128, 64, and 1 blocks.
