While extracting a tar backup of a filesystem containing a backuppc backup directory, I got errors telling me that the maximum number of hard links had been exceeded.
The backuppc directory structure is basically a pool of files named by their hashes, plus trees of the backed-up directories whose files are hard links into the pool.
The errors look like this:
tar: backuppc/pc/lap/2/f%2f/fhome/fsliedes/frec/fdbus-1.2.16/fdoc/fapi/fman/fman3dbus/fDBUS_MESSAGE_TYPE_INVALID.3dbus: Cannot hard link to `backuppc/cpool/4/4/d/44d6df5c255c9393cf647ecce2d1c8ba': Too many links
tar: backuppc/pc/lap/2/f%2f/fhome/fsliedes/frec/fdbus-1.2.16/fdoc/fapi/fman/fman3dbus/fDBUS_TYPE_SIGNATURE.3dbus: Cannot hard link to `backuppc/cpool/4/4/d/44d6df5c255c9393cf647ecce2d1c8ba': Too many links
tar: backuppc/pc/lap/2/f%2f/fhome/fsliedes/frec/fdbus-1.2.16/fdoc/fapi/fman/fman3dbus/fDBUS_ERROR_FILE_NOT_FOUND.3dbus: Cannot hard link to `backuppc/cpool/4/4/d/44d6df5c255c9393cf647ecce2d1c8ba': Too many links
As this bug apparently needs a disk format change (per other threads), do you think it will be fixed before 1.0?
I'd imagine fixing it after the format is finalized would be even harder.
This is obviously a real-world test case. I've tried backuppc with several different filesystems to figure out which is fastest for this workload, and btrfs just cannot handle it at all.
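The failure is straightforward to reproduce on any filesystem: hard-link a single file repeatedly until link() fails with EMLINK ("Too many links"). A minimal sketch; the paths and the loop cap are illustrative, not from the original report:

```shell
# Reproduce a hard-link limit: keep linking one "pool" file until the
# filesystem refuses with "Too many links". All names are examples.
set -u
dir=$(mktemp -d)
touch "$dir/pool-file"
i=0
# Cap the loop so this terminates on filesystems with large limits.
while [ "$i" -lt 5000 ] && ln "$dir/pool-file" "$dir/link-$i" 2>/dev/null; do
  i=$((i + 1))
done
echo "extra links created: $i"
stat -c %h "$dir/pool-file"   # current hard-link count (extra links + 1)
rm -rf "$dir"
```

On pre-extref btrfs the loop stops after a few dozen links (the exact number depends on the file name lengths, which is why different reporters below see different limits); on ext4 it runs to the cap.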
I'm hitting the same problem: I needed to migrate a filesystem used for backups (with backuppc) and I was unable to rsync the data onto the new partition; it failed with:
rsync: link "xxxx" => yyyy failed: Too many links (31)
I would like to add that 31 is a ridiculously low maximum for hard links, and I'm very disappointed that I can't use btrfs in a real-world scenario on my first attempt with it.
I have the same problem too, though for me the number of links is slightly higher (around 100 in my tests, but it varied from test to test).
I've read in another thread that Zheng says fixing this bug requires disk format changes.
The current hard-link limit is ridiculous; please fix it while btrfs is still in development!
That's really great news, thanks for the link.
Hope to see it merged ASAP!
(In reply to comment #5)
Mark's extended inode ref patches, which raise the limit on hard links to an inode, have been merged upstream and were first available in v3.7-rc1.
$ git log --oneline v3.7-rc1 fs/btrfs/ | grep "extended inode"
d24bec3 btrfs: extended inode ref iteration
f186373 btrfs: extended inode refs
Closing, if this is still affecting you on a newer kernel please reopen.
Please reopen this Bug.
It still affects me on a freshly installed Ubuntu 13.10 (GNU/Linux 3.11.0-13-generic x86_64).
I was trying to copy my backuppc pool from the previous ext4 filesystem to a disk with btrfs. The btrfs filesystem was freshly created with the mentioned kernel on a single 2TB disk.
I ran the following command:
rsync -caHAX --del /home-raid/backuppc/ /home-btrfs/
It exited with an error stating that there were too many hardlinks.
Given the age of the bug, I'd file a new one.
The extended refs have been available for a long time; you have to turn them on at mkfs time or via btrfstune on an unmounted filesystem. It will be possible to turn them on for a mounted filesystem as well, there's a patchset pending.
mkfs.btrfs -O extref
HTH, no need to open a new bug IMHO.
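For reference, the commands involved look roughly like this (the device name is a placeholder, and the dump-super check assumes a reasonably recent btrfs-progs):

```shell
# Create a new filesystem with extended inode refs enabled:
mkfs.btrfs -O extref /dev/sdX

# Or enable the feature on an existing, *unmounted* filesystem:
btrfstune -r /dev/sdX

# Verify: EXTENDED_IREF should appear among the incompat flags.
btrfs inspect-internal dump-super /dev/sdX | grep -i incompat
```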
Thanks for the hint, I'll try this one.
One question though: why is this only optional and has to be enabled manually when creating the filesystem? Is there any downside, any reason one would not want extended refs enabled?
The downside is backward compatibility: kernels older than 3.7 would not mount the filesystem. But it's 3.12 time now, and extrefs should be on by default.
JFYI, patch to enable extrefs by default has landed in recent btrfs-progs.
Thanks for the info.
Enabling extrefs through btrfstune did the job. Backuppc is up and running. :D