Bug 74101 - "Out of space" reported when there's lots of non-allocated space
Summary: "Out of space" reported when there's lots of non-allocated space
Status: NEW
Alias: None
Product: File System
Classification: Unclassified
Component: btrfs
Hardware: All Linux
Importance: P1 normal
Assignee: Josef Bacik
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2014-04-14 21:08 UTC by Jeff Mitchell
Modified: 2021-04-01 18:58 UTC
CC: 10 users

See Also:
Kernel Version: 3.13
Subsystem:
Regression: No
Bisected commit-id:


Description Jeff Mitchell 2014-04-14 21:08:05 UTC
Although the device has plenty of non-allocated space (after extending the partition and running a resize command), I am getting out of space errors. Rebalancing does not help. I was asked in #btrfs to report this on the bugtracker as it appears to be a legitimate bug.
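
For reference, the resize step mentioned above corresponds to something like this (a sketch; the tool used to grow the partition itself is omitted, and / is the mount point from the outputs below):

# btrfs filesystem resize max /
# btrfs filesystem show /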

# btrfs fi show
Label: none  uuid: c553ada1-031f-48f8-a497-cc5e1f913619
	Total devices 1 FS bytes used 2.79GiB
	devid    1 size 20.00GiB used 4.61GiB path /dev/sda2

Btrfs v3.12

# btrfs fi df /
Data, single: total=3.25GiB, used=2.54GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=664.12MiB, used=257.11MiB

# uname -a
Linux repo 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

I have a btrfs-image dump of the filesystem, but it's above the maximum file size limit for the bugtracker.
Comment 1 Justin Alan Ryan 2015-07-15 17:36:50 UTC
Have you tried restoring the btrfs-image dump to another volume, or seeing if you can reproduce this with a fresh filesystem by following the steps you recall led to it?
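
Restoring such a dump for inspection would look roughly like this (a sketch; dump.img and /dev/sdX are placeholders, and note that btrfs-image dumps contain metadata only, not file data):

# btrfs-image -r dump.img /dev/sdX
# btrfs check --readonly /dev/sdX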
Comment 2 Roman Kapusta 2016-06-14 09:22:08 UTC
I'm hitting the same problem here: lots of unallocated space, but it cannot be used.

First I created an LVM partition (l1) of similar size to a second physical disk (l2) and formatted them with btrfs using mirroring (RAID1):
devid    2 size 1465136512.00KiB used 1339064320.00KiB path /dev/mapper/l2

Then, when my free space was below 100 GB, I added a new physical disk (l3):
devid    3 size 244196544.00KiB used 119537664.00KiB path /dev/mapper/l3

and extended the LVM partition (l1) to roughly the size of l2+l3:
devid    1 size 1709701120.00KiB used 1458601984.00KiB path /dev/mapper/l1

Current state: disk is full

# df -k /media/storage/
Filesystem      1K-blocks       Used Available Use% Mounted on
/dev/mapper/l1 1709517096 1457855616 126644080  93% /media/storage

# btrfs fi show --kbytes /media/storage/
Label: 'storage'  uuid: 912c98c3-5f1f-4c92-b7c5-710d6761154d
	Total devices 3 FS bytes used 1457394816.00KiB
	devid    1 size 1709701120.00KiB used 1458601984.00KiB path /dev/mapper/l1
	devid    2 size 1465136512.00KiB used 1339064320.00KiB path /dev/mapper/l2
	devid    3 size 244196544.00KiB used 119537664.00KiB path /dev/mapper/l3

# btrfs fi df --kbytes /media/storage/
Data, RAID1: total=1455423488.00KiB, used=1454843392.00KiB
System, RAID1: total=32768.00KiB, used=224.00KiB
Metadata, RAID1: total=3145728.00KiB, used=2480528.00KiB

I tried to resize individual disks:
# btrfs fi resize 1:max /media/storage/
# btrfs fi resize 2:max /media/storage/
# btrfs fi resize 3:max /media/storage/
dmesg:
[171983.005478] BTRFS info (device dm-18): resizing devid 1
[171983.005492] BTRFS info (device dm-18): new size for /dev/mapper/l1 is 1750733946880
[171990.403990] BTRFS info (device dm-18): resizing devid 2
[171990.404003] BTRFS info (device dm-18): new size for /dev/mapper/l2 is 1500299812864
[171994.630144] BTRFS info (device dm-18): resizing devid 3
[171994.630156] BTRFS info (device dm-18): new size for /dev/mapper/l3 is 250057252864

I tried a rebalance and waited more than 24 hours for it to finish:
# btrfs balance start /media/storage/

Nothing helped. My kernel version is 4.4.12-200.fc22.x86_64, btrfs-progs version 4.3.1-1.fc22.x86_64.
It probably has no impact, but the partitions are encrypted with LUKS.
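
A usage-filtered balance, which only rewrites chunks that are mostly empty, is a commonly suggested and much faster alternative to a plain rebalance (a sketch; the 50% thresholds are arbitrary):

# btrfs balance start -dusage=50 -musage=50 /media/storage/
# btrfs balance status /media/storage/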
Comment 3 Kevin B. 2016-08-11 01:06:07 UTC
I've also come across this issue and it has started to become consistent (happened a few times over the course of weeks and now nearly once a day for the past few days).  I'll try deleting/compressing some files to see if that reduces the frequency.

I've tried balance and defrag. I'm not sure if either fully finished, because I started them at night, and when I check in the evening to see progress the computer is locked up/unusable.
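
One way to check whether a balance is still running or was only interrupted (defragment has no equivalent status command):

btrfs balance status /

An interrupted balance resumes automatically on the next mount unless the filesystem is mounted with -o skip_balance.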

There are four drives in a RAID 5 configuration using an "Adaptec Series 6 - ASR-6805" controller.

uname -a:
Linux localhost.localdomain 4.6.4-201.fc23.x86_64 #1 SMP Tue Jul 12 11:43:59 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

df:
Filesystem       1K-blocks       Used  Available Use% Mounted on
devtmpfs          16437844          0   16437844   0% /dev
tmpfs             16448452      54488   16393964   1% /dev/shm
tmpfs             16448452       1840   16446612   1% /run
tmpfs             16448452          0   16448452   0% /sys/fs/cgroup
/dev/sda4      11695209472 5525759712 6159218512  48% /
tmpfs             16448452      28052   16420400   1% /tmp
/dev/sda2          3966144     256604    3488356   7% /boot
/dev/sda4      11695209472 5525759712 6159218512  48% /home
/dev/sda1          2043984      15884    2028100   1% /boot/efi
tmpfs              3289692          0    3289692   0% /run/user/0
tmpfs              3289692         24    3289668   1% /run/user/1000

btrfs fi df /:
Data, single: total=5.19TiB, used=5.13TiB
System, DUP: total=64.00MiB, used=580.00KiB
Metadata, DUP: total=15.00GiB, used=9.93GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

btrfs fi show /dev/sda4:
Label: 'fedora'  uuid: 09cfb441-2242-40dc-999f-e50ec6cad82a
        Total devices 1 FS bytes used 5.14TiB
        devid    1 size 10.89TiB used 5.22TiB path /dev/sda4
Comment 4 suncuss.exe 2017-03-09 16:37:45 UTC
I also ran into this issue and it is becoming consistent.
btrfs balance will solve it temporarily, but it becomes full again shortly after.

uname -a:
Linux localhost.localdomain 4.5.0-040500rc6-generic #201602281230 SMP Sun Feb 28 17:33:02 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

btrfs fi df /home:
Data, RAID0: total=5.62TiB, used=5.61TiB
System, RAID1: total=32.00MiB, used=400.00KiB
Metadata, RAID1: total=195.00GiB, used=22.89GiB
GlobalReserve, single: total=512.00MiB, used=156.27MiB

btrfs fi show /home:
Label: none  uuid: 6ceeca45-8d5d-4c48-adc0-211b65b2807e
        Total devices 4 FS bytes used 5.63TiB
        devid    1 size 1.82TiB used 1.50TiB path /dev/sde
        devid    2 size 1.82TiB used 1.50TiB path /dev/sdf
        devid    3 size 1.82TiB used 1.50TiB path /dev/sdg
        devid    4 size 1.82TiB used 1.50TiB path /dev/sdi


btrfs fi usage /home:
Overall:
    Device size:                   7.28TiB
    Device allocated:              6.00TiB
    Device unallocated:            1.28TiB
    Device missing:                  0.00B
    Used:                          5.66TiB
    Free (estimated):              1.28TiB      (min: 658.13GiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 149.05MiB)

Data,RAID0: Size:5.62TiB, Used:5.61TiB
   /dev/sde        1.40TiB
   /dev/sdf        1.40TiB
   /dev/sdg        1.40TiB
   /dev/sdi        1.40TiB

Metadata,RAID1: Size:195.00GiB, Used:22.89GiB
   /dev/sde       98.00GiB
   /dev/sdf       97.00GiB
   /dev/sdg       97.00GiB
   /dev/sdi       98.00GiB

System,RAID1: Size:32.00MiB, Used:400.00KiB
   /dev/sde       32.00MiB
   /dev/sdi       32.00MiB

Unallocated:
   /dev/sde      326.99GiB
   /dev/sdf      328.02GiB
   /dev/sdg      328.02GiB
   /dev/sdi      326.99GiB
Comment 5 jojopost62 2020-02-21 10:29:17 UTC
I am experimenting with BTRFS on my NAS using OMV and I think I just encountered this bug.
I have two 16 TB drives in use (data striped, metadata mirrored).
What confuses me is that my Samba network drive under Windows suddenly did not show the correct total storage size. The OMV overview also no longer showed "Total: 32 TB, Used: 16 TB", but only "Total: 16 TB, Used: 16 TB". Yet, as you can see below, btrfs fi still showed a device size of 29.11 TiB.

After a complete rebalance, the displays were back to normal and OMV showed "Total: 32 TB" again.
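
The complete rebalance was presumably the plain, unfiltered form, roughly (a sketch; mount path as in the outputs below):

# btrfs balance start /srv/dev-disk-by-label-Storage
# btrfs balance status /srv/dev-disk-by-label-Storage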



df and usage output before rebalance:

# btrfs filesystem df -h /srv/dev-disk-by-label-Storage
Data, RAID0: total=14.28TiB, used=14.28TiB
System, RAID1: total=8.00MiB, used=1.00MiB
Metadata, RAID1: total=16.00GiB, used=15.92GiB
GlobalReserve, single: total=512.00MiB, used=0.00B


# btrfs fi usage /srv/dev-disk-by-label-Storage
Overall:
    Device size:                  29.11TiB
    Device allocated:             14.31TiB
    Device unallocated:           14.79TiB
    Device missing:                  0.00B
    Used:                         14.31TiB
    Free (estimated):             14.80TiB      (min: 7.40TiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,RAID0: Size:14.28TiB, Used:14.28TiB
   /dev/sdb        7.14TiB
   /dev/sdc        7.14TiB

Metadata,RAID1: Size:16.00GiB, Used:15.92GiB
   /dev/sdb       16.00GiB
   /dev/sdc       16.00GiB

System,RAID1: Size:8.00MiB, Used:1.00MiB
   /dev/sdb        8.00MiB
   /dev/sdc        8.00MiB

Unallocated:
   /dev/sdb        7.39TiB
   /dev/sdc        7.39TiB



df and usage output after rebalance:

# btrfs filesystem df -h /srv/dev-disk-by-label-Storage
Data, RAID0: total=14.32TiB, used=14.32TiB
System, RAID1: total=32.00MiB, used=1.02MiB
Metadata, RAID1: total=16.00GiB, used=15.28GiB
GlobalReserve, single: total=512.00MiB, used=0.00B


# btrfs fi usage /srv/dev-disk-by-label-Storage
Overall:
    Device size:                  29.11TiB
    Device allocated:             14.35TiB
    Device unallocated:           14.75TiB
    Device missing:                  0.00B
    Used:                         14.35TiB
    Free (estimated):             14.76TiB      (min: 7.38TiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,RAID0: Size:14.32TiB, Used:14.32TiB
   /dev/sdb        7.16TiB
   /dev/sdc        7.16TiB

Metadata,RAID1: Size:16.00GiB, Used:15.28GiB
   /dev/sdb       16.00GiB
   /dev/sdc       16.00GiB

System,RAID1: Size:32.00MiB, Used:1.02MiB
   /dev/sdb       32.00MiB
   /dev/sdc       32.00MiB

Unallocated:
   /dev/sdb        7.38TiB
   /dev/sdc        7.38TiB


# btrfs fi show /srv/dev-disk-by-label-Storage
Label: 'Storage'  uuid: 6ddb9a58-84bd-4d93-a513-f47d9cd71fd5
        Total devices 2 FS bytes used 14.33TiB
        devid    1 size 14.55TiB used 7.17TiB path /dev/sdb
        devid    2 size 14.55TiB used 7.17TiB path /dev/sdc
Comment 6 Ricardo Cescon 2020-03-19 07:50:54 UTC
Hello,

I can't use the 11 TB of unallocated space on /dev/sdf.
How can I fix it?

root@pm-arc:/home/ubuntu# btrfs fi usage /mnt/arc/
WARNING: RAID56 detected, not implemented
Overall:
    Device size:                  76.40TiB
    Device allocated:            180.16GiB
    Device unallocated:           76.22TiB
    Device missing:                  0.00B
    Used:                        179.11GiB
    Free (estimated):                0.00B      (min: 8.00EiB)
    Data ratio:                       0.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,RAID5: Size:51.69TiB, Used:51.66TiB
   /dev/sda       12.70TiB
   /dev/sdb       12.70TiB
   /dev/sdc       12.70TiB
   /dev/sdd       12.70TiB
   /dev/sde       12.70TiB
   /dev/sdf      912.02GiB

Metadata,RAID1: Size:90.06GiB, Used:89.55GiB
   /dev/sda       35.00GiB
   /dev/sdb       35.00GiB
   /dev/sdc       34.06GiB
   /dev/sdd       35.00GiB
   /dev/sde       35.00GiB
   /dev/sdf        6.06GiB

System,RAID1: Size:20.00MiB, Used:3.02MiB
   /dev/sdd       20.00MiB
   /dev/sdf       20.00MiB

Unallocated:
   /dev/sda        1.00MiB
   /dev/sdb       21.00MiB
   /dev/sdc        1.00MiB
   /dev/sdd       33.00MiB
   /dev/sde       65.00MiB
   /dev/sdf       11.84TiB
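
With RAID5 data, every new chunk needs unallocated space on at least two devices at once; here every device except /dev/sdf is down to a few MiB, which matches the error despite 11 TiB being free. A usage-filtered balance is the usual first attempt to compact existing chunks and free space on the full devices (a sketch; it may itself fail with ENOSPC if no new chunk can be allocated):

# btrfs device usage /mnt/arc/
# btrfs balance start -dusage=90 /mnt/arc/
# btrfs balance status /mnt/arc/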
Comment 7 John Freeman 2020-03-27 15:28:23 UTC
Got the same issue on 3.6 TB of storage: no RAID, no duplication except the default metadata DUP mode.

Overall:
    Device size:                   3.55TiB
    Device allocated:              2.50TiB
    Device unallocated:            1.05TiB
    Device missing:                  0.00B
    Used:                          2.50TiB
    Free (estimated):              1.05TiB      (min: 539.60GiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 32.00KiB)

Data,single: Size:2.48TiB, Used:2.48TiB
   /dev/md66       2.48TiB

Metadata,DUP: Size:7.50GiB, Used:6.99GiB
   /dev/md66      15.00GiB

System,DUP: Size:32.00MiB, Used:304.00KiB
   /dev/md66      64.00MiB

Unallocated:
   /dev/md66       1.05TiB

df : 

Data, single: total=2.48TiB, used=2.48TiB
System, DUP: total=32.00MiB, used=320.00KiB
Metadata, DUP: total=7.50GiB, used=6.99GiB
GlobalReserve, single: total=512.00MiB, used=32.00KiB

Stuck at 2.50 TB, then "no space left on device".

BUT that is the status after a rebalance; while the problem was occurring, the metadata status was the odd part:

Metadata, DUP: total=7.50GiB, used=7.49GiB

There is probably a flaw in the btrfs architecture, and it NEEDS TO BE REGULARLY REBALANCED from cron. Metadata full => no space left, even with tons of free space available.
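
If regular rebalancing is the workaround, a usage-filtered balance from cron keeps it cheap (a sketch; the schedule, thresholds, and mount point are placeholders):

# /etc/cron.d/btrfs-balance: reclaim mostly-empty data and metadata chunks weekly
0 3 * * 0  root  /usr/bin/btrfs balance start -dusage=50 -musage=50 /mnt >/dev/null 2>&1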
Comment 8 Fabrice Quenneville 2020-10-20 12:04:17 UTC
Hello

I am having a similar issue to everyone here: lots of unallocated space, with all the data filling 4 drives while leaving one unallocated. I have tried resizing to max and balancing, to no avail.

My next debugging step is to switch to a newer kernel and try a balance again, but this issue is giving me quite a problem, as the filesystem now claims 0 free space.


[root@stor1 ~]# btrfs fi usage /mnt/tmp/
Overall:
    Device size:		  25.47TiB
    Device allocated:		  19.96TiB
    Device unallocated:		   5.51TiB
    Device missing:		     0.00B
    Used:			  19.95TiB
    Free (estimated):		   2.76TiB	(min: 2.76TiB)
    Data ratio:			      2.00
    Metadata ratio:		      2.00
    Global reserve:		 512.00MiB	(used: 0.00B)
    Multiple profiles:		        no

Data,RAID1: Size:9.96TiB, Used:9.96TiB (100.00%)
   /dev/sdd	   3.63TiB
   /dev/sdf	   3.63TiB
   /dev/sdc	   7.28TiB
   /dev/sde	   3.63TiB
   /dev/sdb	   1.76TiB

Metadata,RAID1: Size:15.00GiB, Used:12.70GiB (84.64%)
   /dev/sdd	   8.00GiB
   /dev/sdf	  10.00GiB
   /dev/sdc	   1.00GiB
   /dev/sde	   9.00GiB
   /dev/sdb	   2.00GiB

System,RAID1: Size:32.00MiB, Used:1.41MiB (4.39%)
   /dev/sdd	  32.00MiB
   /dev/sdf	  32.00MiB

Unallocated:
   /dev/sdd	   1.02MiB
   /dev/sdf	   1.02MiB
   /dev/sdc	   1.02MiB
   /dev/sde	   1.02MiB
   /dev/sdb	   5.51TiB



[root@stor1 ~]# df -h /mnt/tmp/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd         13T   10T  1.9M 100% /mnt/tmp



[root@stor1 ~]# btrfs fi df /mnt/tmp/
Data, RAID1: total=9.96TiB, used=9.96TiB
System, RAID1: total=32.00MiB, used=1.41MiB
Metadata, RAID1: total=15.00GiB, used=12.70GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
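
Note that RAID1 needs unallocated space on at least two devices for every new chunk, and in the usage output above only /dev/sdb has any left, which matches the ENOSPC despite 5.5 TiB of free space. A small, limited balance is one way to test whether any chunks can still be rewritten (a sketch; the limit values are arbitrary):

btrfs balance start -dlimit=5 -mlimit=5 /mnt/tmp/
btrfs balance status /mnt/tmp/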


Thanks for the help
Comment 9 John Freeman 2020-10-20 16:02:05 UTC
Got more free space in metadata after scrub+balance; that takes a lot of time.
I did it using kernels 4.15/5.4.
Comment 11 Fabrice Quenneville 2020-10-21 22:45:09 UTC
Hi,

Thought I would add an update:

Ran a scrub, which detected some errors on really old data from back when my btrfs pool was a single drive. Since then I have moved to btrfs RAID1 and replaced that drive, so the same data was giving two errors; found that interesting :). Those errors were in data I can re-download, so I deleted it.

I then started a full balance, which I cancelled once there was enough space to rebalance the metadata only.

The commands looked like this:

btrfs scrub start /mnt/tmp/
btrfs balance start --full-balance /mnt/tmp
btrfs balance cancel /mnt/tmp
btrfs balance start -m /mnt/tmp
btrfs balance start --full-balance /mnt/tmp
...

I will do another update after a full balance and another scrub, now that I have corrected the erroneous data, but this will take a few days.
Comment 12 hasezoey 2021-04-01 17:22:28 UTC
Just encountered this problem:

```
hasezoey@sylvi /mnt $ sudo btrfs fi usage .
Overall:
    Device size:		   6.49TiB
    Device allocated:		   2.40TiB
    Device unallocated:		   4.09TiB
    Device missing:		     0.00B
    Used:			   2.40TiB
    Free (estimated):		   2.05TiB	(min: 2.05TiB)
    Data ratio:			      2.00
    Metadata ratio:		      2.00
    Global reserve:		 512.00MiB	(used: 0.00B)

Data,RAID10: Size:1.20TiB, Used:1.20TiB (99.92%)
   /dev/sdb	 297.68GiB
   /dev/sdc	 464.85GiB
   /dev/sda	 464.85GiB
   /dev/sdg	 464.85GiB
   /dev/sdf	 464.85GiB
   /dev/sde	 297.68GiB

Metadata,RAID10: Size:2.03GiB, Used:1.45GiB (71.42%)
   /dev/sdb	 352.00MiB
   /dev/sdc	 864.00MiB
   /dev/sda	 864.00MiB
   /dev/sdg	 864.00MiB
   /dev/sdf	 864.00MiB
   /dev/sde	 352.00MiB

System,RAID10: Size:96.00MiB, Used:160.00KiB (0.16%)
   /dev/sdb	  32.00MiB
   /dev/sdc	  32.00MiB
   /dev/sda	  32.00MiB
   /dev/sdg	  32.00MiB
   /dev/sdf	  32.00MiB
   /dev/sde	  32.00MiB

Unallocated:
   /dev/sdb	  33.02MiB
   /dev/sdc	 465.78GiB
   /dev/sda	  33.02MiB
   /dev/sdg	 465.78GiB
   /dev/sdf	   3.18TiB
   /dev/sde	  33.02MiB
```

```
hasezoey@sylvi /mnt $ sudo fdisk -l
Disk /dev/sdb: 298,9 GiB, 320072933376 bytes, 625142448 sectors
Disk model: FUJITSU MHZ2320B
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdc: 931,53 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000LM024 HN-M
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdg: 931,53 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST31000524AS    
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sde: 298,9 GiB, 320072933376 bytes, 625142448 sectors
Disk model: ST3320820AS     
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdd: 223,58 GiB, 240057409536 bytes, 468862128 sectors
Disk model: DREVO X1 SSD    
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D364106C-7FEE-4A83-AAE5-75CA9E60D6E5

Device       Start       End   Sectors   Size Type
/dev/sdd1     2048   1050623   1048576   512M EFI System
/dev/sdd2  1050624 468858879 467808256 223,1G Linux filesystem


Disk /dev/sdf: 3,65 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000VN008-2DR1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sda: 465,78 GiB, 500107862016 bytes, 976773168 sectors
Disk model: TOSHIBA MK5075GS
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
```

```
hasezoey@sylvi /mnt $ uname -a
Linux sylvi 5.8.0-48-generic #54~20.04.1-Ubuntu SMP Sat Mar 20 13:40:25 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
```

Before, I had `5.4.0-x`, but after reading that there was a problem with btrfs on that version, I tried upgrading to 5.8.0; no change.

I also ran a balance (d 70, m 70).
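
That balance was presumably the usage-filtered form, i.e. something like:

```
sudo btrfs balance start -dusage=70 -musage=70 /mnt
```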

I chose btrfs to have *one* file system across many devices (of different sizes), for data redundancy (disk failure) and some speed. I would have used RAID5/6, but from what I know the write hole still exists.

"dmesg" is showing no problem
"btrfs fi balance" did not help (because everything was already balanced)
"btrfs scrub" (somehow) added 1 more GB free (without any error reported)
"btrfs fi resize" did also do nothing

The disks listed in "btrfs fi usage" are all raw disks: no partition table, nothing between the device and the filesystem (nothing like LVM).
Comment 13 John Freeman 2021-04-01 18:58:17 UTC
(In reply to hasezoey from comment #12)

Again, you should run a FULL BALANCE. That takes a lot of time; let it run until you get enough space. NO FILTERS.
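
That is, something along these lines (the mount point is a placeholder):

btrfs balance start --full-balance /mnt
btrfs balance status /mnt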
