I formatted an external 500 GB hard drive with UDF so I could use it with different operating systems and store files larger than 4 GB. After that I started copying files to the drive until I got errors telling me there's no space left:

$ touch /media/disk/foo
touch: cannot touch '/media/disk/foo': No space left on device

df told me that only 18% of the space and only 1% of the inodes are used. I can also create new files / copy files to the device on Windows without problems.
The number of inodes is irrelevant for UDF (it allocates inodes dynamically), but apparently the allocation routines overflow somewhere. How much did you manage to copy to the media (you can use du(1))? BTW I think you are the first one to try UDF on such a large medium :-)
Created attachment 92181 [details]
output of 'du /media/disk'

(In reply to comment #1)
> How much did you manage to copy to the media (you can use du(1))?

I don't really know which information you want, so hopefully one of these contains it:

$ df
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sdh1      488385295 86154513 402230782  18% /media/disk
$ df -h
Filesystem     Size  Used Avail Use% Mounted on
/dev/sdh1      466G   83G  384G  18% /media/disk
$ df -i
Filesystem        Inodes IUsed     IFree IUse% Mounted on
/dev/sdh1      804461563     0 804461563    0% /media/disk
$ du -c /media/disk
86006104 total

The full du output is also attached. It's not the same as before, because I keep my Eclipse workspace on this drive and need to be able to write to it, so I removed a lot of files (in fact I reformatted) beforehand.

The weird thing is that I can now create new files but can't add content to them: if I try to copy a file, the file is created but stays at 0 bytes, while creating an empty file with touch works.

> BTW I think you are the first one to try UDF of such a large medium :-)

I don't really have a choice. The only other cross-platform file system that allows files > 4 GB is NTFS, but ntfs-3g is way too slow (the disk is attached via USB 3). ;)
Hmm, strange. I tried to reproduce your problem but failed. I created an 800 GB UDF filesystem on my spare harddrive and filled it with 600 GB in 10000 files without problems. So a couple of questions: By any chance is your machine 32-bit? How did you initially create the UDF filesystem?
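In case it's useful for comparison, a fill test like the one above can be sketched roughly as follows. Note that the target directory, file count and file size below are placeholders for illustration, not the exact values used in this test:

```shell
#!/bin/sh
# Sketch of a fill test: write many fixed-size files into a target
# directory, stopping at the first write error (e.g. "No space left
# on device"). Target path and sizes are illustrative placeholders.
target=${1:-/tmp/udf-fill-test}
mkdir -p "$target"
for i in $(seq 1 5); do
    # Each dd writes one 1 MiB file of zeros; break on the first error.
    dd if=/dev/zero of="$target/file$i" bs=1M count=1 2>/dev/null || break
done
ls "$target"
```

On a filesystem with the reported bug, a loop like this would stop well before df shows the disk as full.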
(In reply to comment #3)
> By any chance is your machine 32-bit?

No, 64-bit.

> How did you initially create the UDF filesystem?

First I used dd to zero the start of the partition, then I did:

mkudffs --media-type=hd --blocksize=512 --vid=V10Disc --lvid=V10Disc /dev/sdh1

I also tried formatting the partition with Windows 7, but that didn't solve the problem.
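As an aside, instead of hard-coding --blocksize=512 one could derive it from the device's logical sector size (as reported by blockdev --getss). This is just a sketch; the helper below only prints the mkudffs command so it can be reviewed before running it against a real device, and the device path is the one from this report:

```shell
#!/bin/sh
# Build (but do not run) an mkudffs invocation whose block size matches
# the device's logical sector size. In practice the size would come
# from: blockdev --getss "$dev"  (needs root); here it is passed in.
build_cmd() {
    dev=$1
    bs=$2   # logical sector size in bytes, e.g. 512 on this drive
    echo "mkudffs --media-type=hd --blocksize=$bs --vid=V10Disc --lvid=V10Disc $dev"
}
build_cmd /dev/sdh1 512
```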
Ah, OK. These options did the trick for me. Now I also see ENOSPC messages. I'll investigate more tomorrow.
OK, I found it. Attached patches fix the issue for me. I'll push them in the next merge window.
Created attachment 92541 [details]
Fix premature ENOSPC on large filesystems with small blocksize

This is the patch fixing the problem. Can you please test it? Thanks!
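To illustrate the kind of arithmetic that goes wrong with "large filesystem, small blocksize" (this is a generic sketch of 32-bit overflow in block accounting, not the actual kernel code from the patch): with 512-byte blocks, a free-space figure expressed in bytes stops fitting in 32 bits past 4 GiB, and in a signed 32-bit type it can even come out negative, which naive code could interpret as "no space left" on a mostly empty disk. Shell arithmetic is 64-bit, so the truncation is emulated by masking:

```shell
#!/bin/sh
# Illustrative only: what happens if free bytes are computed in a
# 32-bit type. The block count matches the df output in this report.
free_blocks=804461563   # free 512-byte blocks on the ~500 GB drive
block_size=512

# Correct 64-bit result.
true_bytes=$(( free_blocks * block_size ))

# Emulate truncation to unsigned 32 bits...
u32=$(( true_bytes & 0xFFFFFFFF ))
# ...and sign-extension to a signed 32-bit value.
if [ "$u32" -ge 2147483648 ]; then
    s32=$(( u32 - 4294967296 ))
else
    s32=$u32
fi

echo "64-bit bytes:      $true_bytes"
echo "as signed 32-bit:  $s32"
```

The signed 32-bit result here is negative, so a check like "free_bytes <= 0" would fail long before the disk is actually full.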
The patch seems to work. This is how my drive looks atm:

Filesystem     Size  Used Avail Use% Mounted on
/dev/sdh1      466G  285G  182G  62% /media/disk

I hope this is enough testing, as I don't want to copy more files to the drive atm. :)
I can also confirm the patch works (I manually applied it to 3.5.7, as that's what Linux Mint currently uses). Can it be backported to the stable and longterm trees? The bug goes very far back... (the oldest longterm tree still maintained has it).
The patch has been merged into the stable tree on Feb 5. If it is missing from any particular stable tree, please ping the tree maintainer...