I'm trying to convert a 22TB ext4 filesystem with 850GB free and 6 million files to BTRFS using btrfs-convert from btrfs-progs amd64 4.9.1-1.

First I run e2fsck -f on the ext4 filesystem, and it reports no errors. About a minute after starting btrfs-convert, the conversion fails with this error:

ext2fs_read_block_bitmap: Unknown code ext2 137
ERROR: no file system found to convert
WARNING: an error occurred during conversion, filesystem is partially created but not finalized and not mountable

Googling around, I have found no info about this "ext2fs_read_block_bitmap" error. Will someone help me? Thanks in advance.
Do you still have this issue? If so, can you upload the error log along with the output of 'tune2fs -l /dev/<ext4-device>'?
Hi, and thanks for your reply. Yes, I still have this issue. This is the tune2fs -l output:

tune2fs 1.43.4 (31-Jan-2017)
Filesystem volume name:   <none>
Last mounted on:          /media/d0907577-5da1-4e6e-a5ae-846f94149e2f
Filesystem UUID:          d0907577-5da1-4e6e-a5ae-846f94149e2f
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr dir_index filetype extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              364892160
Block count:              5838253174
Reserved block count:     291912658
Free blocks:              403319520
Free inodes:              358493406
First block:              0
Block size:               4096
Fragment size:            4096
Group descriptor size:    64
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         2048
Inode blocks per group:   128
Flex block group size:    16
Filesystem created:       Sat Feb 27 06:57:06 2016
Last mount time:          Wed Apr 26 18:59:05 2017
Last write time:          Wed Apr 26 18:59:12 2017
Mount count:              7
Maximum mount count:      -1
Last checked:             Sun Mar 5 18:46:18 2017
Check interval:           0 (<none>)
Lifetime writes:          21 TB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      09d7e438-164c-4afa-b621-0aaa2c70f6e4
Journal backup:           inode blocks

Running btrfs-convert I get this output:

ext2fs_read_block_bitmap: Unknown code ext2 137
ERROR: no file system found to convert
WARNING: an error occurred during conversion, filesystem is partially created but not finalized and not mountable

I found no other info in the /var/log files. Thanks for your help.
Thanks for the outputs. I just wanted to check the ext4 flags; they seem fine. As you can see from

> ext2fs_read_block_bitmap: Unknown code ext2 137

btrfs-convert fails to read the ext4 block bitmap. I think btrfs-convert is possibly missing a flag (EXT2_FLAG_64BITS) when opening a 'large' ext4 device. Unfortunately, I don't have a multi-TB hard disk to test the above flag. If you are willing to test, please use the attached patch. It tries to open the ext4 device with the above flag, prints a message, and quits.

# ./btrfs-convert /dev/sda5
WARNING: ext2fs_open() using EXT2_FLAG_64BITS worked!
Created attachment 256077 [details] ext2 flag
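For context, here is a minimal standalone sketch of what such a test does, assuming the libext2fs development headers are installed. This is illustrative only, not the attached patch; the file name, build command, and error handling are assumptions:

/* open64.c - minimal sketch, assuming libext2fs is available.
 * Build (illustrative): gcc open64.c -o open64 -lext2fs -lcom_err */
#include <stdio.h>
#include <ext2fs/ext2fs.h>
#include <et/com_err.h>

int main(int argc, char *argv[])
{
        ext2_filsys fs;
        errcode_t ret;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <device>\n", argv[0]);
                return 1;
        }

        /* Without EXT2_FLAG_64BITS, bitmap operations on a filesystem
         * carrying the 64bit feature (see the tune2fs output above)
         * fail with errors like "Unknown code ext2 137". */
        ret = ext2fs_open(argv[1], EXT2_FLAG_64BITS, 0, 0,
                          unix_io_manager, &fs);
        if (ret) {
                com_err(argv[0], ret, "while opening %s", argv[1]);
                return 1;
        }
        printf("WARNING: ext2fs_open() using EXT2_FLAG_64BITS worked!\n");
        ext2fs_close(fs);
        return 0;
}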
I'll give it a try. Thanks for your help.
This is the result:

# ./btrfs-convert /dev/sda6
WARNING: ext2fs_open() using EXT2_FLAG_64BITS worked!
(In reply to Marco Aicardi from comment #6)
> This is the result:
>
> # ./btrfs-convert /dev/sda6
> WARNING: ext2fs_open() using EXT2_FLAG_64BITS worked!

Okay, thanks. Adding the above flag fixed the "ext2fs_read_block_bitmap: Unknown code ext2 137" error message. While testing with a ~5GB drive, btrfs-convert (+EXT2_FLAG_64BITS) fails at a later stage while doing bitmap-related tasks; those calls also need to be updated. I'm checking them now.
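For readers following along, a minimal sketch of the kind of change implied, assuming libext2fs. The helper name block_in_use64 is hypothetical; ext2fs_test_block_bitmap and ext2fs_test_block_bitmap2 are real libext2fs accessors, but this is not the actual btrfs-progs patch:

#include <ext2fs/ext2fs.h>

/* Legacy accessors such as ext2fs_test_block_bitmap() take a 32-bit
 * blk_t, so a block number past 2^32 is truncated at the call site on
 * a filesystem with the 64bit feature. Once the device is opened with
 * EXT2_FLAG_64BITS, the 64-bit variants should be used instead. */
static int block_in_use64(ext2_filsys fs, blk64_t blk)
{
        /* blk64_t-aware counterpart of ext2fs_test_block_bitmap() */
        return ext2fs_test_block_bitmap2(fs->block_map, blk);
}

The same 32-bit vs. 64-bit pattern applies to the other bitmap entry points, e.g. ext2fs_get_block_bitmap_range2() as the blk64_t-aware counterpart of ext2fs_get_block_bitmap_range().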
Hi Marco,

I tested the patch below with a smaller device and sent it for review. Let's wait for review comments from the devs who have already worked on this code path. I don't recommend running this patch on your 22TB drive unless it gets reviewed and accepted. Thanks.
Created attachment 256093 [details] large device support patch (under review)
Ok, thanks a lot for your help. I do not have a complete backup of the drive so I agree it's better for me to wait for comments from the devs. Thanks again to everyone who's working on this!
I'm adding the patch to devel, but we'll have to test it further. It's quite slow, unfortunately.
(In reply to David Sterba from comment #11)
> I'm adding the patch to devel, but we'll have to test it further. It's
> quite slow, unfortunately.

Are you able to reproduce this issue in your environment? On my local machine I tried with up to a 15TB sparse file (truncate fails with a "file too large" error at 22TB), but I was unable to reproduce the issue. (I also tried an AWS spot instance, but unfortunately they provide up to 1TB, not more.)
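For anyone else trying to reproduce this, a minimal sketch of how such a sparse backing file can be created. The path and file name are hypothetical, and as noted above, asking for 22TB can exceed the host filesystem's maximum file size, which is exactly where truncation fails:

/* mksparse.c - illustrative sketch: create a sparse file to use as a
 * backing image for mkfs.ext4/btrfs-convert testing.
 * Build (illustrative): gcc mksparse.c -o mksparse */
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        /* Hypothetical scratch path; 15 TiB worked in the test above,
         * while 22 TiB exceeded the host filesystem's file size limit. */
        const char *path = "/mnt/scratch/ext4-test.img";
        off_t size = 15LL << 40; /* 15 TiB */

        int fd = open(path, O_CREAT | O_WRONLY, 0644);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        /* ftruncate() extends the file without allocating data blocks,
         * so the image stays sparse; it fails with EFBIG when 'size'
         * exceeds what the underlying filesystem supports. */
        if (ftruncate(fd, size) != 0) {
                perror("ftruncate");
                return 1;
        }
        close(fd);
        return 0;
}

The image would then be formatted with the 64bit feature enabled (mkfs.ext4 -O 64bit, via a loop device) before pointing btrfs-convert at it.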
This is a semi-automated bugzilla cleanup, report is against an old kernel version. If the problem still happens, please open a new bug. Thanks.