Bug 197681 - BTRFS qgroup limit problem with small files (Ubuntu 16.04 LTS)
Summary: BTRFS qgroup limit problem with small files (Ubuntu 16.04 LTS)
Status: NEW
Alias: None
Product: File System
Classification: Unclassified
Component: btrfs (show other bugs)
Hardware: x86-64 Linux
Importance: P1 normal
Assignee: Josef Bacik
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2017-11-02 09:55 UTC by Thomas
Modified: 2018-01-16 13:27 UTC (History)
2 users (show)

See Also:
Kernel Version: 4.4.0
Tree: Mainline
Regression: No


Attachments

Description Thomas 2017-11-02 09:55:32 UTC
> BTRFS Quota is exceeded before the limit is reached.
>
> Here is an example from my current machine that demonstrates the
> problem best:
> 
> btrfs quota enable '/data/input/kunden_loeschen'
> btrfs qgroup limit -e 50M '/data/input/kunden_loeschen'
> 
> root@dat2:/data/input# btrfs qgroup show -ref ./kunden_loeschen/
> qgroupid         rfer         excl     max_rfer     max_excl
> --------         ----         ----     --------     --------
> 0/265        16.00KiB     16.00KiB         none     50.00MiB
> 
> >>> The 50 MiB limit is set correctly
> 
> Test it with dd
> ================
> 
> root@dat2:/data/input/kunden_loeschen# dd if=/dev/zero bs=1024k of=test
> dd: error writing 'test': Disk quota exceeded
> 50+0 records in
> 49+0 records out
> 52183040 bytes (52 MB, 50 MiB) copied, 0.0207662 s, 2.5 GB/s
> 
> rm test
> 
> >>> Perfect, but now let's copy a lot of small files
> root@dat2:/data/input/kunden_loeschen# scp -qrp
> root@10.1.2.27:/mnt/input/kunden_loeschen/* .
> root@10.1.2.27's password:
> ./kunde_loeschen_20160729001052.txt: Disk quota exceeded
> ./kunde_loeschen_20160730001204.txt: Disk quota exceeded
> ...
> ...
> ...
> root@dat2:/data/input/kunden_loeschen# du -hs .
> 1.2M    .
> 
> >>> I would expect that I can store 50 MB.
> root@dat2:/data/input# btrfs qgroup show -ref ./kunden_loeschen/
> qgroupid         rfer         excl     max_rfer     max_excl
> --------         ----         ----     --------     --------
> 0/265       240.00KiB    240.00KiB         none     50.00MiB
> 
> 
> >>> What's wrong with small text files?
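One way to make sense of the numbers above is a reservation hypothesis: each in-flight small write may reserve quota space at a much coarser granularity than the 4 KiB it finally occupies, so the reserved total hits the limit long before real usage does. The arithmetic below is purely illustrative; the 64 KiB per-file reservation is an assumption for the sake of the example, not a value taken from the btrfs source:

```python
# Hypothetical reservation model -- the 64 KiB figure is an assumption.
QUOTA = 50 * 1024 * 1024       # 50 MiB qgroup limit (max_excl)
FILE_SIZE = 4 * 1024           # final on-disk size of each small file
RESERVATION = 64 * 1024        # assumed per-file reservation while the write is in flight

files_until_enospc = QUOTA // RESERVATION
actual_usage = files_until_enospc * FILE_SIZE

print(files_until_enospc)                # 800
print(actual_usage / 1024 / 1024)        # 3.125 (MiB actually on disk at that point)
```

Under that assumption roughly 800 rapid small writes would exhaust a 50 MiB limit while only a few MiB of data are actually on disk, which is in the same ballpark as the failures observed later in this report.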
Comment 1 lakshmipathi 2017-11-04 10:17:47 UTC
Hi Thomas, can you provide a script that reproduces this issue? I'll try to reproduce it in my test environment.
Comment 2 Thomas 2017-11-06 09:46:19 UTC
Hello lakshmipathi

The script is in the first post. I only create the subvolume and set the limit with the commands above. Then I try to copy 646 files with a total size of 2.6 MB to this subvolume. Each file has a size of 4.0 KiB. If I disable the quota, everything works fine.
Comment 3 lakshmipathi 2017-11-06 11:54:52 UTC
Thanks for the response. 

Just to confirm: if I run the script below on a newly formatted
drive, it should reproduce this problem, correct?

/data/input -> is btrfs mount point 

mkdir -p /data/input/kunden_loeschen
btrfs quota enable '/data/input/kunden_loeschen'
btrfs qgroup limit -e 50M '/data/input/kunden_loeschen'
btrfs qgroup show -ref /data/input/kunden_loeschen
dd if=/dev/zero bs=1024k of=/data/input/kunden_loeschen/test
rm /data/input/kunden_loeschen/test

for i in {1..700}; do
    dd if=/dev/urandom bs=4k count=1 of=/data/input/kunden_loeschen/file$i
done
Comment 4 Thomas 2017-11-06 12:32:43 UTC
Testscript:

Yes, that's right, /data/input is a btrfs filesystem mount point:

/dev/sde1          20G   20M   18G   1% /data/input


root@dat2:/data/input# cat do_test.bash
#!/bin/bash
#######################################################################

btrfs subvolume create /data/input/testvol
btrfs quota enable /data/input/testvol
btrfs qgroup limit -e 50M /data/input/testvol

cd /data/input/testvol

echo "Executing test dd bigfile..."
dd if=/dev/zero bs=1024k of=/data/input/testvol/test.dd
du -hs .
rm /data/input/testvol/test.dd


sync
echo

echo "Executing test dd 4k file..."
for i in {1..444}
do
   dd if=/dev/urandom bs=4k count=1 of=/data/input/testvol/file$i
done

du -hs .
#######################################################################

Output:
=======
root@dat2:/data/input# ./do_test.bash
Create subvolume '/data/input/testvol'
Executing test dd bigfile...
dd: error writing '/data/input/testvol/test.dd': Disk quota exceeded
50+0 records in
49+0 records out
51953664 bytes (52 MB, 50 MiB) copied, 0.0192422 s, 2.7 GB/s
50M     .

Executing test dd 4k file...
1+0 records in
1+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246645 s, 16.6 MB/s
1+0 records in
1+0 records out
...
...
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217569 s, 18.8 MB/s
1+0 records in
1+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217635 s, 18.8 MB/s
dd: failed to open '/data/input/testvol/file442': Disk quota exceeded
dd: failed to open '/data/input/testvol/file443': Disk quota exceeded
dd: failed to open '/data/input/testvol/file444': Disk quota exceeded
1.8M    .
Comment 5 lakshmipathi 2017-11-06 14:44:05 UTC
Thanks for the script. I can easily reproduce this issue on my local machine. Let me re-run the script with the latest btrfs-devel branch and check/update its status.
Comment 6 lakshmipathi 2017-11-08 05:21:14 UTC
(In reply to lakshmipathi from comment #5)
> I can easily reproduce this issue on local machine,
My bad. I was using the wrong btrfs volume.

If I use a newly formatted btrfs volume, I'm unable to reproduce this issue. Can you share the output of "btrfs fi show"?
Comment 7 lakshmipathi 2017-11-08 05:24:38 UTC
The outputs match expectations:

root@linuxbot:~# ls -ltr /data/input/testvol/ | wc -l
445

root@linuxbot:~# du -ksh  /data/input/testvol/ 
1.8M	/data/input/testvol/

root@linuxbot:~# python
Python 2.7.12 (default, Nov 19 2016, 06:48:10) 
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 444 * 4096  / 1024.0/ 1024.0
1.734375
Comment 8 Thomas 2017-11-08 13:15:57 UTC
Hi, 

here is the output. I only added the path to btrfs fi show:

root@dat2:/data# btrfs fi show /data/input/
Label: 'DaInputCache'  uuid: c62f300b-15c0-46ce-bea2-479895a16530
        Total devices 1 FS bytes used 2.93MiB
        devid    1 size 20.00GiB used 3.02GiB path /dev/sde1

root@dat2:/data# df -h /data/input/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sde1        20G   20M   18G   1% /data/input

root@dat2:/data# btrfs qgroup show /data/input/
qgroupid         rfer         excl
--------         ----         ----
0/5          24.00KiB     24.00KiB
0/257        16.00KiB     16.00KiB
0/259        16.00KiB     16.00KiB
0/265       488.00KiB    488.00KiB
0/266           0.00B        0.00B
0/267           0.00B        0.00B
0/268           0.00B        0.00B
0/269           0.00B        0.00B
0/270           0.00B        0.00B
0/272         1.97MiB      1.97MiB

root@dat2:/data# btrfs qgroup show -ercpf /data/input/testvol/
qgroupid         rfer         excl     max_rfer     max_excl parent  child
--------         ----         ----     --------     -------- ------  -----
0/272         1.97MiB      1.97MiB         none     50.00MiB ---     ---

I noticed that the problem seems to occur only if you create a lot of files in a short time. Can you try to create 900 files? Maybe it occurs later on your side.
Comment 9 lakshmipathi 2017-11-09 10:09:34 UTC
Yes, you are correct. I created up to 1000 files and received errors starting at file868.



4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000667368 s, 6.1 MB/s
1+0 records in
1+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653472 s, 6.3 MB/s
dd: error writing '/data/input/testvol/file868': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes copied, 0.000819251 s, 0.0 kB/s
dd: failed to open '/data/input/testvol/file869': Disk quota exceeded
dd: failed to open '/data/input/testvol/file870': Disk quota exceeded
dd: failed to open '/data/input/testvol/file871': Disk quota exceeded
dd: failed to open '/data/input/testvol/file872': Disk quota exceeded
dd: failed to open '/data/input/testvol/file873': Disk quota exceeded

After waiting a few seconds and creating files manually, they work now.

dd if=/dev/urandom bs=4k count=1 of=/data/input/testvol/w1
dd if=/dev/urandom bs=4k count=1 of=/data/input/testvol/w2

Both commands worked.

$
-rw-r--r-- 1 root root 4096 Nov  9 15:35 file863
-rw-r--r-- 1 root root 4096 Nov  9 15:35 file865
-rw-r--r-- 1 root root 4096 Nov  9 15:35 file864
-rw-r--r-- 1 root root 4096 Nov  9 15:35 file867
-rw-r--r-- 1 root root 4096 Nov  9 15:35 file866
-rw-r--r-- 1 root root    0 Nov  9 15:35 file868
-rw-r--r-- 1 root root 4096 Nov  9 15:36 w1
-rw-r--r-- 1 root root 4096 Nov  9 15:37 w2
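The pause-and-retry behaviour above suggests that reserved-but-unflushed quota space is released once pending writes are flushed. A workaround sketch under that assumption: write small files in batches and sync between batches so the delayed quota accounting can catch up. `DEST` here is a hypothetical scratch directory; in practice it would be the quota-limited subvolume.

```shell
#!/bin/sh
# Batched-write sketch (assumption: sync lets pending qgroup
# reservations drain before the next batch starts).
DEST="${DEST:-/tmp/qgroup_batch_test}"   # hypothetical target directory
BATCH=50
mkdir -p "$DEST"
i=1
while [ "$i" -le 100 ]; do
    dd if=/dev/urandom bs=4k count=1 of="$DEST/file$i" 2>/dev/null
    # Flush every $BATCH files so reserved-but-unused space is returned.
    if [ $((i % BATCH)) -eq 0 ]; then
        sync
    fi
    i=$((i + 1))
done
echo "wrote $((i - 1)) files to $DEST"
```

This is only a mitigation sketch, not a fix; whether the sync actually releases the reservations depends on the kernel's qgroup implementation.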
Comment 10 Thomas 2017-11-09 15:21:21 UTC
Yes, I can confirm this behavior.
Comment 11 lakshmipathi 2017-11-10 04:32:11 UTC
I checked this with the latest btrfs-devel branch (4.14.0-rc7) and around 5000 files. It seems this issue is fixed (maybe while fixing other qgroup issues).

bash-4.4# pwd
/data/input/testvol
bash-4.4# 
-rw-r--r--. 1 root root 4096 Nov 10 04:28 file5441
-rw-r--r--. 1 root root 4096 Nov 10 04:28 file5442
-rw-r--r--. 1 root root 4096 Nov 10 04:28 file5443
-rw-r--r--. 1 root root 4096 Nov 10 04:28 file5444
bash-4.4# ls | wc -l
5444
bash-4.4# du -ksh .
22M	.
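As a quick sanity check (mirroring the arithmetic in comment #7), 5444 files of 4 KiB each come to about 21.3 MiB, consistent with the 22M that du reports once directory metadata is included:

```python
files = 5444
size_bytes = files * 4096           # each file is exactly one 4 KiB block
print(size_bytes / 1024 / 1024)     # 21.265625
```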


Maybe you can upgrade the kernel and re-run the script?
Comment 12 Thomas 2017-11-13 12:48:50 UTC
I'm going to test the latest Ubuntu Kernel that I can get.

If I have any results I report them here.
Comment 13 lakshmipathi 2017-11-26 07:29:41 UTC
Hi Thomas,
Does the problem still exist after the update?
Comment 14 Thomas 2017-12-11 18:47:26 UTC
Hi,

currently I can't test this case because I'm not allowed to change the system environment. I can test this behavior after the Christmas business period, in the new year.

I'll report back as soon as possible with the latest updates applied.
