Bug 74101 - "Out of space" reported when there's lots of non-allocated space
Summary: "Out of space" reported when there's lots of non-allocated space
Status: NEW
Alias: None
Product: File System
Classification: Unclassified
Component: btrfs
Hardware: All
OS: Linux
Importance: P1 normal
Assignee: Josef Bacik
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2014-04-14 21:08 UTC by Jeff Mitchell
Modified: 2017-03-09 16:37 UTC
CC List: 5 users

See Also:
Kernel Version: 3.13
Tree: Mainline
Regression: No


Attachments

Description Jeff Mitchell 2014-04-14 21:08:05 UTC
Although the device has plenty of non-allocated space (after extending the partition and running a resize command), I am getting out of space errors. Rebalancing does not help. I was asked in #btrfs to report this on the bugtracker as it appears to be a legitimate bug.

# btrfs fi show
Label: none  uuid: c553ada1-031f-48f8-a497-cc5e1f913619
	Total devices 1 FS bytes used 2.79GiB
	devid    1 size 20.00GiB used 4.61GiB path /dev/sda2

Btrfs v3.12

# btrfs fi df /
Data, single: total=3.25GiB, used=2.54GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=664.12MiB, used=257.11MiB

# uname -a
Linux repo 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

I have a btrfs-image dump of the filesystem, but it's above the maximum file size limit for the bugtracker.
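
The gap the report describes can be made concrete with a little arithmetic on the figures above (the GiB values are hand-copied from the `btrfs fi show` / `btrfs fi df` output, so this is a sketch, not a measurement):

```python
GIB = 1024 ** 3

# Figures copied from the `btrfs fi show` / `btrfs fi df` output above.
device_size = 20.00 * GIB  # devid 1 size
allocated   = 4.61 * GIB   # devid 1 "used" = space already carved into chunks
data_total  = 3.25 * GIB   # size of the existing data chunks
data_used   = 2.54 * GIB   # data actually stored in them

unallocated = device_size - allocated  # raw space btrfs has not chunked yet
data_slack  = data_total - data_used   # free space inside existing data chunks

print(f"unallocated: {unallocated / GIB:.2f} GiB")  # ~15.39 GiB
print(f"data slack:  {data_slack / GIB:.2f} GiB")   # ~0.71 GiB
```

So roughly 15 GiB of raw space is still unallocated; an ENOSPC here means the allocator failed to turn that space into new chunks, which is why this looks like a genuine bug rather than a truly full disk.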
Comment 1 Justin Alan Ryan 2015-07-15 17:36:50 UTC
Have you tried restoring the btrfs-image dump to another volume, or seeing if you can reproduce this with a fresh filesystem by following steps you recall leading to it?
Comment 2 Roman Kapusta 2016-06-14 09:22:08 UTC
I'm hitting the same problem here: lots of unallocated space, but it cannot be used.

First I created an LVM partition (l1) of similar size to the second physical disk (l2) and formatted it with btrfs with mirroring (RAID1):
devid    2 size 1465136512.00KiB used 1339064320.00KiB path /dev/mapper/l2

Then, when my free space dropped below 100 GB, I added a new physical disk (l3):
devid    3 size 244196544.00KiB used 119537664.00KiB path /dev/mapper/l3

and extended the LVM partition (l1) to around the size of l2+l3:
devid    1 size 1709701120.00KiB used 1458601984.00KiB path /dev/mapper/l1

Current state: disk is full

# df -k /media/storage/
Filesystem      1K-blocks       Used Available Use% Mounted on
/dev/mapper/l1 1709517096 1457855616 126644080  93% /media/storage

# btrfs fi show --kbytes /media/storage/
Label: 'storage'  uuid: 912c98c3-5f1f-4c92-b7c5-710d6761154d
	Total devices 3 FS bytes used 1457394816.00KiB
	devid    1 size 1709701120.00KiB used 1458601984.00KiB path /dev/mapper/l1
	devid    2 size 1465136512.00KiB used 1339064320.00KiB path /dev/mapper/l2
	devid    3 size 244196544.00KiB used 119537664.00KiB path /dev/mapper/l3

# btrfs fi df --kbytes /media/storage/
Data, RAID1: total=1455423488.00KiB, used=1454843392.00KiB
System, RAID1: total=32768.00KiB, used=224.00KiB
Metadata, RAID1: total=3145728.00KiB, used=2480528.00KiB

I tried to resize individual disks:
# btrfs fi resize 1:max /media/storage/
# btrfs fi resize 2:max /media/storage/
# btrfs fi resize 3:max /media/storage/
dmesg:
[171983.005478] BTRFS info (device dm-18): resizing devid 1
[171983.005492] BTRFS info (device dm-18): new size for /dev/mapper/l1 is 1750733946880
[171990.403990] BTRFS info (device dm-18): resizing devid 2
[171990.404003] BTRFS info (device dm-18): new size for /dev/mapper/l2 is 1500299812864
[171994.630144] BTRFS info (device dm-18): resizing devid 3
[171994.630156] BTRFS info (device dm-18): new size for /dev/mapper/l3 is 250057252864

I tried a rebalance and waited more than 24 hours for it to finish:
# btrfs balance start /media/storage/

Nothing helped. My kernel version is 4.4.12-200.fc22.x86_64, btrfs-progs version 4.3.1-1.fc22.x86_64.
It probably has no impact, but the partitions are encrypted with LUKS.
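
For what it's worth, the per-device figures above can be checked with a short sketch (KiB values hand-copied from the `btrfs fi show --kbytes` output; the RAID1 bound below is the usual "every chunk needs two devices" rule applied by hand, not anything measured on this filesystem):

```python
KIB_PER_GIB = 1024 * 1024

# devid -> (size, used) in KiB, copied from `btrfs fi show --kbytes` above.
devices = {
    "l1": (1709701120, 1458601984),
    "l2": (1465136512, 1339064320),
    "l3": (244196544, 119537664),
}

unalloc = {d: size - used for d, (size, used) in devices.items()}
total = sum(unalloc.values())
largest = max(unalloc.values())

# RAID1 places each chunk on two different devices, so the allocatable data
# capacity is capped once one device holds more free space than all the
# others combined.
usable_data = min(total // 2, total - largest)
print(f"usable RAID1 data: {usable_data / KIB_PER_GIB:.1f} GiB")  # ~239.1 GiB
```

So on paper well over 200 GiB of mirrored data should still be allocatable, which matches the complaint that the unallocated space "cannot be used".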
Comment 3 Kevin B. 2016-08-11 01:06:07 UTC
I've also come across this issue, and it has started to become consistent (it happened a few times over the course of weeks, and now nearly once a day for the past few days). I'll try deleting/compressing some files to see if that reduces the frequency.

I've tried balance and defrag. I'm not sure if either fully finished, because I started them at night, and when I checked in the evening to see the progress, the computer was locked up/unusable.

There are four drives in a RAID 5 configuration using an "Adaptec Series 6 - ASR-6805" controller.

uname -a:
Linux localhost.localdomain 4.6.4-201.fc23.x86_64 #1 SMP Tue Jul 12 11:43:59 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

df:
Filesystem       1K-blocks       Used  Available Use% Mounted on
devtmpfs          16437844          0   16437844   0% /dev
tmpfs             16448452      54488   16393964   1% /dev/shm
tmpfs             16448452       1840   16446612   1% /run
tmpfs             16448452          0   16448452   0% /sys/fs/cgroup
/dev/sda4      11695209472 5525759712 6159218512  48% /
tmpfs             16448452      28052   16420400   1% /tmp
/dev/sda2          3966144     256604    3488356   7% /boot
/dev/sda4      11695209472 5525759712 6159218512  48% /home
/dev/sda1          2043984      15884    2028100   1% /boot/efi
tmpfs              3289692          0    3289692   0% /run/user/0
tmpfs              3289692         24    3289668   1% /run/user/1000

btrfs fi df /:
Data, single: total=5.19TiB, used=5.13TiB
System, DUP: total=64.00MiB, used=580.00KiB
Metadata, DUP: total=15.00GiB, used=9.93GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

btrfs fi show /dev/sda4:
Label: 'fedora'  uuid: 09cfb441-2242-40dc-999f-e50ec6cad82a
        Total devices 1 FS bytes used 5.14TiB
        devid    1 size 10.89TiB used 5.22TiB path /dev/sda4
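
As a sanity check, the `fi df` totals above reconcile with the 5.22 TiB that `fi show` reports as allocated once the DUP profiles are counted twice (values hand-copied from the rounded output, so this is only approximate):

```python
TIB = 1024 ** 4
GIB = 1024 ** 3
MIB = 1024 ** 2

data_total     = 5.19 * TIB   # Data, single  -> stored once
metadata_total = 15.00 * GIB  # Metadata, DUP -> two copies on the one disk
system_total   = 64.00 * MIB  # System, DUP   -> two copies

raw_allocated = data_total + 2 * metadata_total + 2 * system_total
print(f"raw allocated: {raw_allocated / TIB:.2f} TiB")  # ~5.22 TiB
```

Meanwhile the 10.89 TiB device size minus the 5.22 TiB allocated leaves roughly 5.67 TiB unallocated, so, as in the original report, the ENOSPC is not from the disk actually being full.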
Comment 4 suncuss.exe 2017-03-09 16:37:45 UTC
I also ran into this issue, and it is becoming consistent.
btrfs balance will solve it temporarily, but the filesystem will report full again shortly after.

uname -a:
Linux localhost.localdomain 4.5.0-040500rc6-generic #201602281230 SMP Sun Feb 28 17:33:02 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

btrfs fi df /home:
Data, RAID0: total=5.62TiB, used=5.61TiB
System, RAID1: total=32.00MiB, used=400.00KiB
Metadata, RAID1: total=195.00GiB, used=22.89GiB
GlobalReserve, single: total=512.00MiB, used=156.27MiB

btrfs fi show /home:
Label: none  uuid: 6ceeca45-8d5d-4c48-adc0-211b65b2807e
        Total devices 4 FS bytes used 5.63TiB
        devid    1 size 1.82TiB used 1.50TiB path /dev/sde
        devid    2 size 1.82TiB used 1.50TiB path /dev/sdf
        devid    3 size 1.82TiB used 1.50TiB path /dev/sdg
        devid    4 size 1.82TiB used 1.50TiB path /dev/sdi


btrfs fi usage /home:
Overall:
    Device size:                   7.28TiB
    Device allocated:              6.00TiB
    Device unallocated:            1.28TiB
    Device missing:                  0.00B
    Used:                          5.66TiB
    Free (estimated):              1.28TiB      (min: 658.13GiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 149.05MiB)

Data,RAID0: Size:5.62TiB, Used:5.61TiB
   /dev/sde        1.40TiB
   /dev/sdf        1.40TiB
   /dev/sdg        1.40TiB
   /dev/sdi        1.40TiB

Metadata,RAID1: Size:195.00GiB, Used:22.89GiB
   /dev/sde       98.00GiB
   /dev/sdf       97.00GiB
   /dev/sdg       97.00GiB
   /dev/sdi       98.00GiB

System,RAID1: Size:32.00MiB, Used:400.00KiB
   /dev/sde       32.00MiB
   /dev/sdi       32.00MiB

Unallocated:
   /dev/sde      326.99GiB
   /dev/sdf      328.02GiB
   /dev/sdg      328.02GiB
   /dev/sdi      326.99GiB
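
The `Free (estimated)` line above can be reproduced approximately from the other figures. A sketch, assuming the estimate is unallocated space divided by the profile's ratio plus the slack inside existing data chunks (values hand-copied from the rounded TiB output; the small discrepancy against the reported 658.13 GiB is display rounding):

```python
TIB = 1024 ** 4
GIB = 1024 ** 3

unallocated = 1.28 * TIB           # Device unallocated
data_slack  = (5.62 - 5.61) * TIB  # free space inside existing RAID0 data chunks
data_ratio, metadata_ratio = 1.00, 2.00  # RAID0 data, RAID1 metadata

# Best case: all unallocated space becomes data chunks (ratio 1).
free_est = unallocated / data_ratio + data_slack
# Worst case ("min"): all unallocated space becomes metadata (ratio 2).
free_min = unallocated / metadata_ratio + data_slack

print(f"estimated: {free_est / TIB:.2f} TiB")  # ~1.29 TiB
print(f"min:       {free_min / GIB:.2f} GiB")  # ~665.60 GiB
```

Either way, well over half a terabyte should still be usable, yet the data chunks (5.62 TiB total, 5.61 TiB used) are nearly full and new writes fail, the same pattern as in the earlier comments.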
