Bug 21562 - btrfs is dead slow due to fragmentation
Status: RESOLVED OBSOLETE
Product: File System
Classification: Unclassified
Component: btrfs
Hardware: All
OS: Linux
Priority: P1
Severity: normal
Assigned To: fs_btrfs@kernel-bugs.osdl.org
Reported: 2010-10-31 19:56 UTC by Felipe Contreras
Modified: 2013-04-30 16:33 UTC (History)
3 users

Kernel Version: 2.6.36
Tree: Mainline
Regression: No


Description Felipe Contreras 2010-10-31 19:56:26 UTC
Some files are constantly being modified; for btrfs to be useful, it should take that into consideration. Currently such files get fragmented to hell:

% cat History-old > History
% btrfs filesystem defragment /home
% echo 3 > /proc/sys/vm/drop_caches

% time dd if=History of=/dev/null && time dd if=History-old of=/dev/null
109664+0 records in
109664+0 records out
56147968 bytes (56 MB) copied, 1.90015 s, 29.5 MB/s
dd if=History of=/dev/null  0.08s user 0.29s system 15% cpu 2.458 total
109664+0 records in
109664+0 records out
56147968 bytes (56 MB) copied, 97.772 s, 574 kB/s
dd if=History-old of=/dev/null  0.07s user 0.80s system 0% cpu 1:37.79 total

Also, 'btrfs filesystem defragment' doesn't work.

Expected result: the filesystem doesn't become horribly slow over time, and, as with other filesystems, no special tools are needed to prevent that.
Comment 1 Felipe Contreras 2010-11-22 12:02:52 UTC
Actually 'btrfs filesystem defragment' doesn't defragment the filesystem... Talk about confusing.

Anyway, btrfs, like any other filesystem, should not become that slow due to fragmentation.

One solution would be to keep a counter while walking a file's fragments; if it exceeds a certain limit, mark the file somehow so that it gets
defragmented at least to some extent.
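The counter idea above can be sketched from userspace with filefrag, which prints a per-file summary line of the form "<name>: <N> extents found". This is only an illustration of the proposed heuristic, not btrfs internals; the threshold value and the parsing are assumptions, and flagged files would then be handed to 'btrfs filesystem defragment <file>'.

```shell
#!/bin/sh
# Hypothetical threshold; a real heuristic would likely scale with file size.
THRESHOLD=100

needs_defrag() {
  # $1 is one summary line from filefrag, e.g. "History: 2859 extents found";
  # field 2 is the extent count.
  extents=$(printf '%s\n' "$1" | awk '{print $2}')
  [ "$extents" -gt "$THRESHOLD" ]
}

# Example with the kind of extent count seen on a badly fragmented file:
if needs_defrag "History: 2859 extents found"; then
  echo "History: exceeds threshold, defragment it"
fi
```

In practice this would be driven by 'filefrag' output over the files of interest, defragmenting only those that trip the counter instead of the whole filesystem.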
Comment 2 Felipe Contreras 2010-11-22 12:06:38 UTC
(duplicate of comment 1)
Comment 3 Josef Bacik 2013-04-30 16:33:54 UTC
Closing, if this is still affecting you on a newer kernel please reopen.
