Created attachment 140961 [details] dmesg output

I tried to install Gentoo Linux in a VirtualBox VM. When I create an XFS filesystem on the HDD and extract the stage3 tarball onto it, the tar process hangs for more than 120 seconds and the kernel reports it.

I used 66,846,680 sectors of the partition and the following command to create the filesystem (a 927-block log is the minimum size that mkfs.xfs currently accepts):

# mkfs.xfs -f -b size=512 -d agcount=1 -i maxpct=0 -l size=927b,version=1 /dev/dis/by-partlabel/HDD_GENTOO_ROOT

When mounting that partition, the kernel complains about the small log size:

XFS (sda3): Mounting Filesystem
XFS (sda3): Log size 927 blocks too small, minimum size is 4740 blocks
XFS (sda3): Log size out of supported range. Continuing onwards, but if log hangs are experienced then please report this message in the bug report.
XFS (sda3): Ending clean mount

Here I used Gentoo's default kernel, but vanilla kernels show the same behavior. It also occurs on a real machine (not a VM).

# uname -a
Linux livecd 3.12.13-gentoo #1 SMP Thu Apr 3 06:56:32 UTC 2014 x86_64 Intel(R) Core(TM) i5-2500S CPU @ 2.70GHz GenuineIntel GNU/Linux
# mkfs.xfs -V
mkfs.xfs version 3.1.10
There's a typo above: /dev/dis/by-partlabel/HDD_GENTOO_ROOT -> /dev/disk/by-partlabel/HDD_GENTOO_ROOT. It maps to /dev/sda3.
You've got an old xfsprogs that doesn't take filesystem geometry into account when configuring the log size. A current mkfs will take that into account (it uses the same code the kernel uses to generate that warning) and correctly size the log. Or, in your case, reject user-specified log sizes that are too small.

That kernel message is there basically to stop us having to waste time triaging a log hang caused by misconfiguration - we know logs this small violate fundamental design rules, so, as the message says, you need to make a larger log (of at least 4740 blocks).

And, ah, BTW, I hope you understand exactly what that mkfs command you are running will do. It will create a filesystem that has terrible performance and will be unrecoverable if something gets corrupted. It's about the worst thing you could possibly do....

Anyway, please close this bug, there's nothing for us to fix here....

-Dave.
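For a sense of scale, the log sizes being discussed can be converted to bytes with quick shell arithmetic (illustrative only; this uses the 512-byte filesystem block size from the `-b size=512` option in the original report):

```shell
# Log sizes in bytes, at the 512-byte block size from the report.
echo $((927 * 512))    # user-specified log: 474624 bytes (~464 KiB)
echo $((4740 * 512))   # kernel-reported minimum: 2426880 bytes (~2.3 MiB)
```

Even the kernel's minimum for this geometry is only a couple of megabytes, so the space saved by forcing a 927-block log is negligible.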
(In reply to Dave Chinner from comment #2)
> And, ah, BTW, I hope you understand exactly what that mkfs command you are
> running will do. It will create a filesystem that has terrible performance
> and be unrecoverable if something get corrupted. It's about the worst thing
> you could possibly do....

Yes, I know. That's a deliberately terrible setting, just for a test: minimum filesystem space overhead for storing the very large data I have. XFS has an interesting property here: data with a known format, written with some care, is recoverable even if the entire metadata is broken.

> Anyway, please close this bug, there's nothing for us to fix here....

- XFS (sda3): Log size out of supported range. Continuing onwards, but if log hangs are experienced then please report this message in the bug report.

The kernel says the log size is outside the officially supported range, BUT it also says "if log hangs are experienced then please report this message". So I reported this issue. (There's another issue that occurs even when the log size is over 4740 blocks, but that's not this one.)
Created attachment 141691 [details] dmesg output when extracting tarball during gentoo install

A similar problem occurs while compiling Gentoo system packages on an XFS filesystem that has a 4800-block log. It occurs randomly... I'll attach the dmesg output.
Two things: firstly, upgrade to a more recent kernel - there were fixes for the log grant head hang committed recently. Secondly, use a larger log - there is no benefit to having a tiny log; all it does is constrain performance and make you more susceptible to log space accounting issues. Things like untarring tarballs and compiling kernels will be significantly faster with a larger log...

-Dave.
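A minimal sketch of that advice, reusing the device path from the original report (not run here; mkfs.xfs destroys existing data on the target device):

```shell
# Let mkfs.xfs choose the log size and AG count itself; current versions
# apply the same geometry-based minimum the kernel enforces, so the
# "Log size out of supported range" warning never appears.
mkfs.xfs -f /dev/disk/by-partlabel/HDD_GENTOO_ROOT

# If a log size must be forced anyway, stay at or above the kernel's
# reported minimum for this geometry (4740 filesystem blocks):
mkfs.xfs -f -l size=4740b /dev/disk/by-partlabel/HDD_GENTOO_ROOT
```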
1. I checked with 3.15.3, and it seems there's no hang anymore. Sorry, and thanks. :)

2. I focused on minimal space overhead, not performance. What I'm doing now is just a stress test of filesystem stability under the conditions I use to store a few very large files; it is not a performance test. There's no throughput loss even with a small log, since I mostly read the data and rarely write it.