Bug 194795 - Fail to convert EXT4 to BTRFS
Summary: Fail to convert EXT4 to BTRFS
Status: RESOLVED OBSOLETE
Alias: None
Product: File System
Classification: Unclassified
Component: btrfs
Hardware: x86-64 Linux
Importance: P1 blocking
Assignee: Josef Bacik
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2017-03-05 17:52 UTC by Marco Aicardi
Modified: 2022-10-06 17:45 UTC
3 users

See Also:
Kernel Version: 4.9.2
Subsystem:
Regression: No
Bisected commit-id:


Attachments
ext2 flag (985 bytes, patch)
2017-04-26 18:38 UTC, lakshmipathi
large device support patch (under review) (2.27 KB, application/mbox)
2017-04-27 10:13 UTC, lakshmipathi

Description Marco Aicardi 2017-03-05 17:52:54 UTC
I'm trying to convert a 22TB ext4 filesystem (850GB free, ~6 million files) to btrfs using btrfs-convert from btrfs-progs amd64 4.9.1-1.

First I run e2fsck -f on the ext4 filesystem, and it reports no errors.

Around 1 minute after starting btrfs-convert, the conversion fails with this error:

ext2fs_read_block_bitmap: Unknown code ext2 137
ERROR: no file system found to convert
WARNING: an error occurred during conversion, filesystem is partially created but not finalized and not mountable

Googling around, I found no information about this "ext2fs_read_block_bitmap" error.

Can someone help me?

Thanks in advance.
Comment 1 lakshmipathi 2017-04-26 09:17:33 UTC
Do you still have this issue? If so, can you upload the error log along with the output of 'tune2fs -l /ext/device/'?
Comment 2 Marco Aicardi 2017-04-26 17:38:33 UTC
Hi and thanks for your reply.

Yes, I still have this issue.

This is the tune2fs -l output:

tune2fs 1.43.4 (31-Jan-2017)
Filesystem volume name:   <none>
Last mounted on:          /media/d0907577-5da1-4e6e-a5ae-846f94149e2f
Filesystem UUID:          d0907577-5da1-4e6e-a5ae-846f94149e2f
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr dir_index filetype extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              364892160
Block count:              5838253174
Reserved block count:     291912658
Free blocks:              403319520
Free inodes:              358493406
First block:              0
Block size:               4096
Fragment size:            4096
Group descriptor size:    64
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         2048
Inode blocks per group:   128
Flex block group size:    16
Filesystem created:       Sat Feb 27 06:57:06 2016
Last mount time:          Wed Apr 26 18:59:05 2017
Last write time:          Wed Apr 26 18:59:12 2017
Mount count:              7
Maximum mount count:      -1
Last checked:             Sun Mar  5 18:46:18 2017
Check interval:           0 (<none>)
Lifetime writes:          21 TB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      09d7e438-164c-4afa-b621-0aaa2c70f6e4
Journal backup:           inode blocks

Running btrfs-convert, I get this output:

ext2fs_read_block_bitmap: Unknown code ext2 137
ERROR: no file system found to convert
WARNING: an error occurred during conversion, filesystem is partially created but not finalized and not mountable

I found no other info in /var/log files.

Thanks for your help.
Comment 3 lakshmipathi 2017-04-26 18:37:28 UTC
Thanks for the outputs. I just wanted to check the ext4 flags; they seem fine.

As you can see,
>ext2fs_read_block_bitmap: Unknown code ext2 137

btrfs-convert fails to read the ext4 block bitmap.

I think btrfs-convert is possibly missing a flag (EXT2_FLAG_64BITS) when opening a 'large' ext4 device. Unfortunately, I don't have a multi-TB hard disk to test this flag on.

If you are willing to test, please use the attached patch. It tries to open the ext4 device with the above flag, prints a message, and quits.

# ./btrfs-convert /dev/sda5
WARNING: ext2fs_open() using EXT2_FLAG_64BITS worked!
Comment 4 lakshmipathi 2017-04-26 18:38:55 UTC
Created attachment 256077 [details]
ext2 flag
Comment 5 Marco Aicardi 2017-04-26 21:46:36 UTC
I'll give it a try. Thanks for your help.
Comment 6 Marco Aicardi 2017-04-26 22:02:35 UTC
This is the result:

# ./btrfs-convert /dev/sda6
WARNING: ext2fs_open() using EXT2_FLAG_64BITS worked!
Comment 7 lakshmipathi 2017-04-27 09:13:15 UTC
(In reply to Marco Aicardi from comment #6)
> This is the result:
> 
> # ./btrfs-convert /dev/sda6
> WARNING: ext2fs_open() using EXT2_FLAG_64BITS worked!

Okay, thanks. Adding the above flag fixes the "ext2fs_read_block_bitmap: Unknown code ext2 137" error.

While testing with a ~5GB drive, btrfs-convert (with EXT2_FLAG_64BITS) fails at a later stage while doing bitmap-related tasks. Those calls also need to be updated; I'm checking them now.
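
A minimal sketch of what such an update involves, assuming the standard libext2fs API (blk64_t, the 64-bit *_bitmap2() accessors); this is an illustration of the technique, not the attached patch:

```c
/* Sketch only: opening a >2^32-block ext4 filesystem with libext2fs.
 * Assumes the standard libext2fs API; not the reviewed patch. */
#include <ext2fs/ext2fs.h>

static errcode_t scan_large_ext4(const char *devname)
{
	ext2_filsys fs;
	errcode_t ret;
	blk64_t blk;

	/* Without EXT2_FLAG_64BITS, open/bitmap calls fail on
	 * filesystems with more than 2^32 blocks. */
	ret = ext2fs_open(devname, EXT2_FLAG_64BITS, 0, 0,
			  unix_io_manager, &fs);
	if (ret)
		return ret;

	ret = ext2fs_read_block_bitmap(fs);
	if (ret)
		goto out;

	/* Block numbers must be blk64_t, and bitmap tests must use the
	 * 64-bit ext2fs_test_block_bitmap2() rather than the older
	 * 32-bit ext2fs_test_block_bitmap(). */
	for (blk = fs->super->s_first_data_block;
	     blk < ext2fs_blocks_count(fs->super); blk++) {
		if (ext2fs_test_block_bitmap2(fs->block_map, blk)) {
			/* block is in use */
		}
	}
out:
	ext2fs_close(fs);
	return ret;
}
```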
Comment 8 lakshmipathi 2017-04-27 10:10:43 UTC
Hi Marco,

I tested the patch below with a smaller device and sent it for review. Let's wait for review comments from the devs who have already worked on this code path.

I don't recommend running this patch on your 22TB drive until it gets reviewed and accepted. Thanks.
Comment 9 lakshmipathi 2017-04-27 10:13:33 UTC
Created attachment 256093 [details]
large device support patch (under review)
Comment 10 Marco Aicardi 2017-04-27 10:17:08 UTC
Ok, thanks a lot for your help.

I do not have a complete backup of the drive, so I agree it's better for me to wait for comments from the devs.

Thanks again to everyone who's working on this!
Comment 11 David Sterba 2017-05-02 15:06:14 UTC
I'm adding the patch to devel, but we'll have to test it further. It's quite slow, unfortunately.
Comment 12 lakshmipathi 2017-05-13 13:17:39 UTC
(In reply to David Sterba from comment #11)
> I'm adding the patch to devel, but we'll have to test it further. It's quite
> slow, unfortunately.

Are you able to reproduce this issue in your environment? On my local machine I tried with up to a 15TB sparse file (creating a 22TB one fails with a truncate "file too large" error) and was unable to reproduce the issue. (I also tried an AWS spot instance, but unfortunately they provide up to 1TB, not more.)
Comment 13 David Sterba 2022-10-06 17:45:04 UTC
This is a semi-automated bugzilla cleanup, report is against an old kernel version. If the problem still happens, please open a new bug. Thanks.
