BTRFS seems to randomly fail to write to my RAID1 partition built on two SSDs.

gdisk partition table on sda:

    Number  Start (sector)    End (sector)  Size       Code  Name
       1            2048         4100095   2.0 GiB    FD00  Linux RAID
       2         4100096       468860927   221.6 GiB  8300  Linux filesystem

gdisk partition table on sdb:

    Number  Start (sector)    End (sector)  Size       Code  Name
       1            2048         4100095   2.0 GiB    FD00  Linux RAID
       2         4100096       468860927   221.6 GiB  8300  Linux filesystem

df erroneously reports the size of that RAID to be 464 GB when it is really only about half of that:

    [root@www /]# df
    Filesystem     1K-blocks      Used Available Use% Mounted on
    /dev/sda2      464760832 239334192 222792584  52% /

However, btrfs fi df seems to be correct:

    # btrfs fi df /
    Data, RAID1: total=216.58GiB, used=110.34GiB
    System, RAID1: total=32.00MiB, used=36.00KiB
    System, single: total=4.00MiB, used=0.00
    Metadata, RAID1: total=5.00GiB, used=3.79GiB

However, btrfs fi show seems to report a contradicting story:

    # btrfs fi show
    Label: highdefinition_raid  uuid: f5c0b013-d6f8-457c-896d-fb7bcc37d514
            Total devices 2 FS bytes used 114.13GiB
            devid    1 size 221.62GiB used 221.62GiB path /dev/sda2
            devid    2 size 221.62GiB used 221.61GiB path /dev/sdb2

I was able to download another gigabyte of data:

    [root@www /]# wget http://mirror.karneval.cz/pub/linux/fedora/linux/releases/20/Live/x86_64/Fedora-Live-Desktop-x86_64-20-1.iso
    --2014-01-23 18:48:07--  http://mirror.karneval.cz/pub/linux/fedora/linux/releases/20/Live/x86_64/Fedora-Live-Desktop-x86_64-20-1.iso
    Resolving mirror.karneval.cz (mirror.karneval.cz)... 89.102.0.150, 2a02:8301:0:2::150
    Connecting to mirror.karneval.cz (mirror.karneval.cz)|89.102.0.150|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 999292928 (953M) [application/octet-stream]
    Saving to: ‘Fedora-Live-Desktop-x86_64-20-1.iso’

    100%[==========================================>] 999,292,928 4.57MB/s   in 1m 40s

    2014-01-23 18:49:47 (9.54 MB/s) - ‘Fedora-Live-Desktop-x86_64-20-1.iso’ saved [999292928/999292928]

So the disk cannot be full! After that additional gigabyte, df reports:

    # df
    Filesystem     1K-blocks      Used Available Use% Mounted on
    /dev/sda2      464760832 241295088 220828976  53% /

and

    # btrfs fi df /
    Data, RAID1: total=216.58GiB, used=111.28GiB
    System, RAID1: total=32.00MiB, used=36.00KiB
    System, single: total=4.00MiB, used=0.00
    Metadata, RAID1: total=5.00GiB, used=3.78GiB

which seems to be consistent.

Now, the error I got was this one (when using chown):

    # chown shop:shop /home/shop/www/img/91/88/918837.jpeg
    chown: changing ownership of ‘/home/shop/www/img/91/88/918837.jpeg’: No space left on device

When I retried, the operation went through. Furthermore, the file was already owned by shop:shop before; I am 99% sure of that.

This was not the only problem I had: I am also experiencing regular crashes of my MySQL database. I have opened a bug: http://bugs.mysql.com/bug.php?id=71366

The bug was declared "Not a bug" and the OS/filesystem was blamed, in this case BTRFS. If you do not have access to that bug, here is a short version of mysqld.log:

    2014-01-12 23:40:57 7f85c7fff700  InnoDB: Error: Write to file ./ibdata1 failed at offset 1048576.
    InnoDB: 16384 bytes should have been written, only -1 were written.
    InnoDB: Operating system error number 28.
    InnoDB: Check that your OS and file system support files of this size.
    InnoDB: Check also that the disk is not full or a disk quota exceeded.
    InnoDB: Error number 28 means 'No space left on device'.
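One thing I plan to try to get more detail the next time a write fails (a sketch, assuming my kernel's btrfs supports the enospc_debug mount option) is to remount with extra ENOSPC debugging:

    # Remount with ENOSPC debugging; on the next "No space left on
    # device" error, the kernel should log details about which space
    # reservation failed.
    mount -o remount,enospc_debug /

    # After the next failure, inspect the kernel log:
    dmesg | tail -n 50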
In your case, as you can see, there are two different pools: one for data and one for metadata. Your "Data" pool still has space, but your metadata pool is probably "full" (even though it misleadingly shows "Metadata, RAID1: total=5.00GiB, used=3.79GiB"). When btrfs tries to add more metadata, it attempts to grow the metadata pool, and that fails because your SSDs are fully allocated:

    devid 1 size 221.62GiB used 221.62GiB path /dev/sda2
    devid 2 size 221.62GiB used 221.61GiB path /dev/sdb2

So btrfs cannot make the metadata pool bigger, and it therefore reports "out of space". I would still consider it a bug, because btrfs:

a) shows a misleading metadata size/usage
b) should shrink the "data" pool to make space for metadata, or at least offer the ability to do so
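A common workaround here, which effectively performs the "shrink the data pool" step from b) above, is a filtered balance. This is a sketch, assuming the filesystem is mounted at / and that your kernel and btrfs-progs are recent enough to support balance filters (roughly kernel 3.3+):

    # Rewrite only data chunks that are at most 5% full; their space is
    # returned to the unallocated pool, from which btrfs can then
    # allocate new metadata chunks. Increase the percentage gradually
    # (e.g. -dusage=10, -dusage=20) if nothing gets relocated.
    btrfs balance start -dusage=5 /

    # Verify that the devices regained unallocated space:
    # "used" should now be smaller than "size".
    btrfs fi show

If the balance itself already fails with ENOSPC because there is no free chunk to relocate into, a commonly cited last resort is to temporarily add a small extra device with btrfs device add, run the balance, and then remove the device again with btrfs device delete.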