I'm new to this, so I don't know where I should post the bug. I have created a bug report on the GitHub repo: https://github.com/kdave/btrfs-progs/issues/332
Details from the GitHub ticket:

I have an old btrfs / filesystem on a single disk which I had outgrown. I added a new disk and wanted to convert metadata and system chunks from DUP to raid1. I wished to keep the data in single mode, as it is not so critical. So I ran this command:

    btrfs balance start -dusage=100 -mconvert=raid1 -sconvert=raid1 /

I got a kernel error and the filesystem went read-only. Kernel log attached.

% uname -a
Linux staropramen 5.10.9-arch1-1 #1 SMP PREEMPT Tue, 19 Jan 2021 22:06:06 +0000 x86_64 GNU/Linux

% btrfs --version
btrfs-progs v5.10

% sudo btrfs filesystem usage /
Overall:
    Device size:                 576.55GiB
    Device allocated:            134.61GiB
    Device unallocated:          441.94GiB
    Device missing:                  0.00B
    Used:                        132.72GiB
    Free (estimated):            442.17GiB      (min: 221.20GiB)
    Free (statfs, df):           442.16GiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              342.38MiB      (used: 0.00B)
    Multiple profiles:                  no

Data,single: Size:128.55GiB, Used:128.32GiB (99.82%)
   /dev/nvme1n1p2   93.55GiB
   /dev/nvme0n1     35.00GiB

Metadata,DUP: Size:3.00GiB, Used:2.20GiB (73.31%)
   /dev/nvme1n1p2    6.00GiB

System,DUP: Size:32.00MiB, Used:16.00KiB (0.05%)
   /dev/nvme1n1p2   64.00MiB

Unallocated:
   /dev/nvme1n1p2    1.00MiB
   /dev/nvme0n1    441.94GiB

% sudo btrfs device usage /
/dev/nvme1n1p2, ID: 1
   Device size:           119.14GiB
   Device slack:           19.53GiB
   Data,single:            93.55GiB
   Metadata,DUP:            6.00GiB
   System,DUP:             64.00MiB
   Unallocated:             1.00MiB

/dev/nvme0n1, ID: 2
   Device size:           476.94GiB
   Device slack:              0.00B
   Data,single:            35.00GiB
   Unallocated:           441.94GiB

% sudo btrfs device stats /
[/dev/nvme1n1p2].write_io_errs    0
[/dev/nvme1n1p2].read_io_errs     0
[/dev/nvme1n1p2].flush_io_errs    0
[/dev/nvme1n1p2].corruption_errs  0
[/dev/nvme1n1p2].generation_errs  0
[/dev/nvme0n1].write_io_errs      0
[/dev/nvme0n1].read_io_errs       0
[/dev/nvme0n1].flush_io_errs      0
[/dev/nvme0n1].corruption_errs    0
[/dev/nvme0n1].generation_errs    0
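For context on why the conversion hit ENOSPC right away: a raid1 chunk needs a stripe on two different devices, but the usage output above shows devid 1 with only 1.00MiB unallocated. A minimal Python sketch of that check, using the unallocated figures from the report (the 256MiB metadata chunk size is an assumption, not taken from the report):

```python
# Unallocated space per device, from the `btrfs filesystem usage /` output
# above (values in MiB; 1 GiB = 1024 MiB).
unallocated = {
    "/dev/nvme1n1p2": 1.0,            # devid 1: only 1.00MiB left to allocate
    "/dev/nvme0n1": 441.94 * 1024,    # devid 2: 441.94GiB
}

CHUNK_MIB = 256  # assumed metadata chunk size for illustration

# A raid1 chunk places one copy on each of two devices, so allocation
# needs at least two devices with room for a new chunk.
devices_with_room = [dev for dev, free in unallocated.items() if free >= CHUNK_MIB]
can_allocate_raid1 = len(devices_with_room) >= 2

print(devices_with_room)   # only devid 2 qualifies
print(can_allocate_raid1)  # False: allocation fails, filesystem goes read-only
```

This is only an illustration of the constraint; the kernel's actual allocator logic is more involved.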
Created attachment 294805: kernel log since boot
Issue resolved with a reply from the btrfs mailing list.

On 23/01/2021 23:57, Zygo Blaxell wrote:
> You don't have enough space to convert metadata yet, and you also don't
> have enough space to lock one of your 3 metadata block groups without
> running out of global reserve space, so this balance command forces the
> filesystem read-only due to lack of space.
>
>> <snip>
>
> First you need to get some unallocated space on devid 1, e.g.
>
>     btrfs balance start -dlimit=12 /
>
> I picked 12 chunks here because your new disk is about 4x the size of
> your old one, so you can expect metadata to expand from 3 GB to 15 GB.
> By moving 12 data chunks to the new disk (plus 3 more from converting
> from dup to raid1), we ensure that space is available for the metadata
> later on.
>
> Once you have some unallocated space on two devices, you can do the
> metadata conversion balance:
>
>     btrfs balance start -mconvert=raid1,soft /
>
> Every dup chunk converted to raid1 will release more space on devid 1,
> so the balance will complete (as long as you aren't writing hundreds
> of GB of new data at the same time).
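The 12-chunk figure in the reply can be sanity-checked with some rough arithmetic. This sketch assumes 1GiB data chunks and reads the metadata growth estimate straight from the reply; it is only an illustration of the reasoning, not btrfs behavior:

```python
# Rough space arithmetic behind the advice above (sizes in GiB, from the
# `btrfs device usage /` output in the report).
OLD_DISK = 119.14      # devid 1
NEW_DISK = 476.94      # devid 2, about 4x the old disk
metadata_now = 3.0     # current metadata allocation (DUP, on devid 1)

# Total capacity grows about 5x, so metadata can be expected to grow
# from ~3 GiB to ~15 GiB over time, as the reply says.
growth_factor = (OLD_DISK + NEW_DISK) / OLD_DISK   # ~5.0
metadata_expected = metadata_now * 5               # 15.0

# Moving 12 one-GiB data chunks off devid 1, plus the ~3 GiB released
# when DUP metadata converts to raid1, frees ~15 GiB on devid 1.
freed_on_devid1 = 12 + 3

print(round(growth_factor, 1))                # ~5.0
print(freed_on_devid1 >= metadata_expected)   # True: enough room for metadata
```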