Some files are constantly being modified; for btrfs to be useful, it should take that into consideration. Currently, files get fragmented to hell:

% cat History-old > History
% btrfs filesystem defragment /home
% echo 3 > /proc/sys/vm/drop_caches
% time dd if=History of=/dev/null && time dd if=History-old of=/dev/null
109664+0 records in
109664+0 records out
56147968 bytes (56 MB) copied, 1.90015 s, 29.5 MB/s
dd if=History of=/dev/null  0.08s user 0.29s system 15% cpu 2.458 total
109664+0 records in
109664+0 records out
56147968 bytes (56 MB) copied, 97.772 s, 574 kB/s
dd if=History-old of=/dev/null  0.07s user 0.80s system 0% cpu 1:37.79 total

Also, 'btrfs filesystem defragment' doesn't work. Expected result: the filesystem doesn't become horribly slow over time, and, as with other filesystems, no special tools are needed for that.
Actually, 'btrfs filesystem defragment' doesn't defragment the filesystem... Talk about confusing. Anyway, btrfs, like any other filesystem, should not become that slow due to fragmentation. One solution: when going through all the fragments of a file, keep a counter, and if it exceeds a certain limit, mark the file somehow so that it gets defragmented, at least to some extent.
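The counter idea above can be sketched in userspace. A minimal sketch, assuming the extent count comes from filefrag-style output ("NAME: N extents found"); the threshold of 64 extents is an arbitrary placeholder, not a btrfs default:

```python
import re
import subprocess

# Arbitrary placeholder limit; btrfs defines no such default.
EXTENT_LIMIT = 64

def parse_extent_count(filefrag_output):
    """Extract the extent count from a line like 'History: 137 extents found'."""
    match = re.search(r"(\d+) extents? found", filefrag_output)
    if match is None:
        raise ValueError("unrecognized filefrag output")
    return int(match.group(1))

def needs_defrag(filefrag_output, limit=EXTENT_LIMIT):
    """The 'counter exceeds a limit' check: mark the file once it crosses it."""
    return parse_extent_count(filefrag_output) > limit

def defrag_if_needed(path, limit=EXTENT_LIMIT):
    """Defragment a single file when it is fragmented past the limit
    (requires filefrag from e2fsprogs and btrfs-progs)."""
    out = subprocess.run(["filefrag", path],
                         capture_output=True, text=True).stdout
    if needs_defrag(out, limit):
        subprocess.run(["btrfs", "filesystem", "defragment", path], check=True)
```

A kernel-side version would keep the counter while walking the file's extents instead of shelling out, but the decision logic would be the same.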
Closing; if this is still affecting you on a newer kernel, please reopen.