A system user can bypass the "disk quota limit" using the command `truncate -s 10T id`, which creates a file whose apparent size is 10T.
* Steps to Reproduce
1. Create a user and set up a disk quota for this user
create user "test"
[root@vm10-50-0-18 ~]# dd if=/dev/zero of=ext4 bs=1G count=1
[root@vm10-50-0-18 ~]# mkfs.ext4 ext4
[root@vm10-50-0-18 ~]# mkdir -p /tmp/test && chmod -R 777 /tmp/test && mount -o usrquota,grpquota ext4 /tmp/test
set up the disk quota
[root@vm10-50-0-18 ~]# quotacheck -u /tmp/test/ # create "aquota.user" file
[root@vm10-50-0-18 ~]# edquota -u test
[root@vm10-50-0-18 ~]# quotaon /tmp/test/ -u # turn quotas on
The quota settings are as below: user "test" cannot use more than 10K of disk space.
Disk quotas for user test (uid 1000):
Filesystem blocks soft hard inodes soft hard
/dev/loop0 0 10 10 0 0 0
2. verify the quota limit using "dd"
[root@vm10-50-0-18 ~]# su - test
Last login: Sat Oct  9 18:14:31 CST 2021 on pts/1
[test@vm10-50-0-18 ~]$ dd if=/dev/zero of=/tmp/test/id bs=20K count=1
loop0: write failed, user block limit reached. # yes, this limit works as expected
dd: error writing ‘/tmp/test/id’: Disk quota exceeded
1+0 records in
0+0 records out
8192 bytes (8.2 kB) copied, 0.000221445 s, 37.0 MB/s
This result is as expected: user "test" cannot write a file larger than 10K.
3. verify the quota limit using "truncate"
[test@vm10-50-0-18 test]$ truncate -s 10T id
[test@vm10-50-0-18 test]$ ll -h id
-rw-rw-r-- 1 test test 10T Oct 9 17:16 id
Actual result: user "test" can create a file whose size is 10T, far larger than 10K.
Expected result: as with the "dd" result above, user "test" cannot write a file larger than 10K.
This is not a bug, but rather things working as expected. This is because truncate does not actually allocate any disk blocks. It merely sets the i_size of the inode to be the specified quantity. If i_size is less than where blocks currently are allocated and assigned to the inode at those logical offsets, then those blocks will be deallocated. But truncate never allocates any additional data blocks.
Try running "du id", and see how much disk space the file takes. Or try using "ls -s", which will show the disk space used by the file --- which is different from the size of the file. If this puzzles you, look up the definition of "sparse file".
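The du / ls -s check suggested above can be reproduced against a freshly truncated file (a minimal sketch; the temp-dir path is illustrative, not from the report):

```shell
# Apparent size vs. allocated blocks for a truncated file.
tmp=$(mktemp -d)
truncate -s 10G "$tmp/id"               # sets i_size only; allocates no data blocks
ls -lh "$tmp/id"                        # apparent size: 10G
ls -s  "$tmp/id"                        # allocated size: 0 blocks
du -h  "$tmp/id"                        # disk usage: 0
stat -c 'size=%s blocks=%b' "$tmp/id"   # the two numbers quota actually sees
rm -r "$tmp"
```

The file reports a 10G size but zero allocated blocks, which is exactly why the block quota never triggers.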
I know a `truncate`d file does not take up disk space, but I still think this is a "design" problem with security implications.
* Why do I still think there is a problem?
Because developers are very likely to trust the "quota limit", they will not check whether a file is a sparse ("truncated") file before operating on it.
For example (an assumed scenario): a developer limits every FTP user's disk space using disk quotas, and a crontab job backs up each FTP user's files every day with "tar" or "zip". If that job does not check for sparse files, then when a malicious user creates a file with `truncate -s 100G id`, compressing that file makes the job read and store the 100G of zeros, so the machine's disk space can be consumed far beyond the user's quota.
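One possible mitigation for a backup job like the one described: skip files whose allocated blocks fall short of their apparent size. This is a hedged sketch; `is_sparse` is a name I made up for illustration, not a standard tool.

```shell
# Hypothetical pre-backup check: a file is sparse when the bytes actually
# allocated on disk (st_blocks * 512) are fewer than its apparent size.
is_sparse() {
    local size blocks
    size=$(stat -c %s "$1")
    blocks=$(stat -c %b "$1")       # allocated 512-byte blocks
    [ $((blocks * 512)) -lt "$size" ]
}

tmp=$(mktemp -d)
truncate -s 100G "$tmp/id"
if is_sparse "$tmp/id"; then
    echo "skipping sparse file: $tmp/id"
fi
rm -r "$tmp"
```

A heuristic like this can misfire on compressed or tail-packed filesystems, so a real backup job would more likely use tar's own sparse handling (see below in the thread).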
Quotas help to control the amount of space and number of inodes used. If the sparse file (created by truncate, or seek/write, or any other method available) does not actually consume the fs space, then it simply can't be accounted for by quota. So as Ted already said it is working as expected.
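The seek/write route mentioned above can be sketched with plain `dd` (illustrative paths; no `truncate` involved):

```shell
# Creating a sparse file without truncate: dd's seek= skips far into the
# file, writing one block and leaving an unallocated hole before it.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/hole" bs=1K count=1 seek=1048575 2>/dev/null
ls -lh "$tmp/hole"   # apparent size: 1.0G
du -h  "$tmp/hole"   # only the single written block occupies space
rm -r "$tmp"
```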
Back to your scenario. Quota has nothing to say about how the files are manipulated so if the program copying/decompressing or otherwise manipulating the sparse file decides to actually write the zeros and thus allocate the space, so be it. That's hardly a bug in quota or file system itself.
If your expectation is that while manipulating the sparse file, the file will remain sparse, you should make sure that the tools you're using will actually do what you want. Note that tar does have --sparse options which, if I understand your example correctly, should work as you expect.
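The difference tar's --sparse option makes can be demonstrated directly (a sketch assuming GNU tar; paths are illustrative):

```shell
# Without -S, tar reads the holes back as literal zeros and stores them;
# with -S (--sparse), GNU tar records the holes instead.
tmp=$(mktemp -d)
truncate -s 100M "$tmp/id"
tar -cf  "$tmp/plain.tar"  -C "$tmp" id
tar -cSf "$tmp/sparse.tar" -C "$tmp" id
ls -l "$tmp"/plain.tar "$tmp"/sparse.tar   # plain.tar ~100M, sparse.tar tiny
rm -r "$tmp"
```

This is exactly the scenario from the FTP-backup example: the plain archive balloons to the file's apparent size, while the sparse-aware archive stays small.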
Some basic information about sparse files can be found here: https://en.wikipedia.org/wiki/Sparse_file
As Lukas said, "truncate" is not the only way to create sparse files. And there are many Unix / Linux programs that depend on the ability to create sparse files, since Unix support for sparse files goes back roughly 50 years (half a century).
The fact that clueless users / sysadmins might not understand basic Unix/Linux behavior is not a bug in Linux. There are plenty of other ways that an experienced sysadmin might shoot themselves in the foot....
Correction to #4:
There are plenty of other ways that an *inexperienced* sysadmin might shoot themselves in the foot....
(In reply to Theodore Tso from comment #5)
> Correction to #4:
> There are plenty of other ways that an *inexperienced* sysadmin might shoot
> themselves in the foot....
I disagree, there are plenty of ways experienced sysadmins and kernel maintainers such as myself shoot themselves in the foot. ;)