Bug 9171
| Summary: | __bread and 2TiB problem | | |
| --- | --- | --- | --- |
| Product: | IO/Storage | Reporter: | Alexander Mamaev (shura_mam) |
| Component: | Block Layer | Assignee: | Jens Axboe (axboe) |
| Status: | CLOSED OBSOLETE | | |
| Severity: | normal | CC: | akpm, alan, erik.andren, pmkernelbugs |
| Priority: | P1 | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Kernel Version: | 2.6.22 | Subsystem: | |
| Regression: | No | Bisected commit-id: | |
Description
Alexander Mamaev
2007-10-16 08:47:02 UTC
Reply-To: akpm@linux-foundation.org

Well yes. I suppose this:

```diff
From: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 fs/buffer.c |   14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff -puN fs/buffer.c~a fs/buffer.c
--- a/fs/buffer.c~a
+++ a/fs/buffer.c
@@ -1113,6 +1113,20 @@ __getblk_slow(struct block_device *bdev,
 		return NULL;
 	}
 
+#if (BITS_PER_LONG == 32) && defined(CONFIG_LBD)
+	if ((block >> (PAGE_CACHE_SHIFT - bdev->bd_inode->i_blkbits)) &
+			0xffffffff00000000UL) {
+		/*
+		 * We'll fail because the block is outside the range
+		 * which a 32-bit pagecache index can address
+		 */
+		printk(KERN_ERR "getblk(): sector number too large for "
+			"32-bit machines\n");
+		dump_stack();
+		return NULL;
+	}
+#endif
+
 	for (;;) {
 		struct buffer_head * bh;
 		int ret;
_
```

That will stop it hanging, but I'm not sure that it justifies the additional overhead.

Let's look at my case:

    block = 0x100000000
    PAGE_CACHE_SHIFT = 12
    bdev->bd_inode->i_blkbits = 9   (because of set_blocksize(sb->s_bdev, 512);)

This means that:

    block >> (PAGE_CACHE_SHIFT - bdev->bd_inode->i_blkbits) = 0x100000000 >> 3 = 0x20000000

It fits into 32 bits, so the proposed check would not trigger in this case.

Is this still an issue with a recent kernel?

I don't have enough knowledge of this code, but I have one question out of curiosity: why do we do this check in __getblk_slow()? I think we could move this check, together with the block-size check that __getblk_slow() currently performs, into __getblk(), i.e. do the check before we even look the block up in the CPU's LRU list and the page cache.
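
To make the arithmetic in the counter-example above concrete, here is a minimal user-space sketch, assuming the values quoted in the thread (PAGE_CACHE_SHIFT = 12 for 4 KiB pages, i_blkbits = 9 for 512-byte blocks). The helper name check_block() is invented for illustration and is not a kernel function.

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_CACHE_SHIFT 12	/* 4 KiB pages, as assumed in the thread */

/*
 * Hypothetical helper mirroring the proposed __getblk_slow() check:
 * convert a block number to the page-cache index __getblk() would use,
 * then reject it only if that index needs more than 32 bits.
 */
static int check_block(uint64_t block, unsigned int i_blkbits)
{
	uint64_t index = block >> (PAGE_CACHE_SHIFT - i_blkbits);

	if (index & 0xffffffff00000000ULL) {
		printf("block 0x%llx rejected: page index 0x%llx needs > 32 bits\n",
		       (unsigned long long)block, (unsigned long long)index);
		return -1;
	}
	printf("block 0x%llx accepted: page index 0x%llx fits in 32 bits\n",
	       (unsigned long long)block, (unsigned long long)index);
	return 0;
}

int main(void)
{
	/*
	 * The reporter's case: block 0x100000000 with 512-byte blocks
	 * (i_blkbits = 9) maps to page index 0x20000000, so the proposed
	 * check does not fire even though the block number itself no
	 * longer fits in 32 bits.
	 */
	check_block(0x100000000ULL, 9);
	return 0;
}
```

Compiled and run, this takes the "accepted" branch for the reporter's block number, which is exactly the point of the counter-example.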
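
As for the question in the last comment, below is a rough, untested sketch of what moving the check into __getblk() could look like, based on the 2.6.22-era shape of that function (a __find_get_block() lookup followed by a fall-back to __getblk_slow()). It is meant in the context of fs/buffer.c, where the needed declarations are already available, and is an illustration of the suggestion, not a proposed patch.

```c
/*
 * Untested sketch: do the range check in __getblk() itself, before the
 * block is looked up in the per-CPU LRU and the page cache, instead of
 * deep inside __getblk_slow().
 */
struct buffer_head *
__getblk(struct block_device *bdev, sector_t block, int size)
{
	struct buffer_head *bh;

	might_sleep();

#if (BITS_PER_LONG == 32) && defined(CONFIG_LBD)
	/* same check as in the patch above, just performed earlier */
	if ((block >> (PAGE_CACHE_SHIFT - bdev->bd_inode->i_blkbits)) &
			0xffffffff00000000UL) {
		printk(KERN_ERR "getblk(): sector number too large for "
			"32-bit machines\n");
		dump_stack();
		return NULL;
	}
#endif

	bh = __find_get_block(bdev, block, size);
	if (bh == NULL)
		bh = __getblk_slow(bdev, block, size);
	return bh;
}
```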