Bug 15858
| Summary: | [2.6.34-rc5] bad page state copying to/from HFS+ filesystem... | | |
|---|---|---|---|
| Product: | File System | Reporter: | Maciej Rutecki (maciej.rutecki) |
| Component: | HFS/HFSPLUS | Assignee: | Roman Zippel (zippel) |
| Status: | CLOSED INSUFFICIENT_DATA | | |
| Severity: | normal | CC: | daniel.blueman, maciej.rutecki, rjw |
| Priority: | P1 | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Kernel Version: | 2.6.34-rc5 | Subsystem: | |
| Regression: | Yes | Bisected commit-id: | |
| Bug Depends on: | | | |
| Bug Blocks: | 15310 | | |
Description
Maciej Rutecki 2010-04-26 19:41:59 UTC
Original report inlined for clarity:

When copying data from an HFS+ filesystem to a freshly-created one, I experienced page state corruption [1]. I don't have access to the filesystem anymore, but can run some other filesystem tests if anyone is interested. Kernel is mainline 2.6.34-rc5 on x86-64.

Thanks,
Daniel

---

[1]

```
hfs: backup: 0,1953523120,244190389,3072
hfs: backup: 0,1953523120,244190389,3072
hfs: backup: 0,1953523120,244190389,3072
hfs: backup: 0,1953523120,244190389,3072
hfs: backup: 0,1953523120,244190389,3072
BUG: Bad page state in process cp  pfn:37654
page:ffffea0000c1e260 count:0 mapcount:0 mapping:(null) index:0x0
page flags: 0x100000000000004(referenced)
Pid: 3163, comm: cp Not tainted 2.6.34-020634rc5-generic #020634rc5
Call Trace:
 [<ffffffff810f6654>] ? dump_page+0x44/0x60
 [<ffffffff810f66ff>] bad_page+0x8f/0x110
 [<ffffffff8113a074>] ? try_get_mem_cgroup_from_mm+0x24/0x80
 [<ffffffff810f68b3>] prep_new_page+0x133/0x150
 [<ffffffff810f81a5>] get_page_from_freelist+0x1d5/0x460
 [<ffffffff810f8a4a>] __alloc_pages_nodemask+0xda/0x180
 [<ffffffff81127a9c>] alloc_pages_current+0x8c/0xd0
 [<ffffffff810f12bd>] __page_cache_alloc+0x6d/0x80
 [<ffffffff810f2edc>] grab_cache_page_write_begin+0x7c/0xc0
 [<ffffffff8116a61c>] block_write_begin+0x8c/0xf0
 [<ffffffff8116b89c>] cont_write_begin+0x8c/0xd0
 [<ffffffffa0398110>] ? hfsplus_get_block+0x0/0x1a0 [hfsplus]
 [<ffffffffa03964f6>] hfsplus_write_begin+0x36/0x40 [hfsplus]
 [<ffffffffa0398110>] ? hfsplus_get_block+0x0/0x1a0 [hfsplus]
 [<ffffffff810f2191>] generic_perform_write+0xc1/0x1a0
 [<ffffffff810f22d5>] generic_file_buffered_write+0x65/0xa0
 [<ffffffff810f3441>] __generic_file_aio_write+0x251/0x3f0
 [<ffffffff810f2ce9>] ? generic_file_aio_read+0xb9/0x1b0
 [<ffffffff810f363c>] generic_file_aio_write+0x5c/0xb0
 [<ffffffff81140a65>] do_sync_write+0xd5/0x120
 [<ffffffff81248aa6>] ? security_file_permission+0x16/0x20
 [<ffffffff81141c3c>] vfs_write+0xcc/0x1a0
 [<ffffffff81141e05>] sys_write+0x55/0x90
 [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Disabling lock debugging due to kernel taint
```

Generating hfsplus filesystems over loopback and copying data from HFS+ to HFS+ filesystems didn't reproduce the issue, so filesystem corruption on the source drive (even though the bad page state was observed while writing to the destination) may have been the trigger. I no longer have access to the source drive, but the problem should therefore be reproducible with filesystem fuzzing or similar.

I have no means of reproducing this, as I don't have access to the filesystem, and other checks were in vain.

Given the lack of changes in the HFS+ filesystem code, this bug will still be present and should be reproducible with a filesystem fuzzer. ...but close it if it just adds noise.

Closing.
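The loopback reproduction attempt described in the comments (create two HFS+ images, mount them via loop devices, copy between them while watching for "Bad page state") can be sketched roughly as below. This is a hypothetical sketch, not the reporter's actual commands: the image paths, sizes, and mount points are invented, it assumes `mkfs.hfsplus` from hfsprogs is installed, and it defaults to a dry run (printing the commands) because mounting loop devices requires root. Set `DO_RUN=1` to execute for real.

```shell
#!/bin/sh
# Hypothetical sketch of the loopback HFS+ copy test from the bug comments.
# Paths, sizes, and mount points below are illustrative, not from the report.
set -eu

SRC_IMG=/tmp/hfs_src.img   # assumed scratch image paths
DST_IMG=/tmp/hfs_dst.img
SRC_MNT=/mnt/hfs_src       # assumed mount points
DST_MNT=/mnt/hfs_dst

# Print each command unless DO_RUN=1, since mount/mkfs need root.
run() {
    if [ "${DO_RUN:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi
}

# Create two sparse 1 GiB images and format them as HFS+.
run dd if=/dev/zero of="$SRC_IMG" bs=1M count=0 seek=1024
run dd if=/dev/zero of="$DST_IMG" bs=1M count=0 seek=1024
run mkfs.hfsplus "$SRC_IMG"
run mkfs.hfsplus "$DST_IMG"

# Mount both over loopback, then copy source contents to the destination,
# which is the write path where the bad page state was reported.
run mkdir -p "$SRC_MNT" "$DST_MNT"
run mount -t hfsplus -o loop "$SRC_IMG" "$SRC_MNT"
run mount -t hfsplus -o loop "$DST_IMG" "$DST_MNT"
run cp -a "$SRC_MNT/." "$DST_MNT/"

# Check the kernel log for the symptom while/after the copy runs.
run sh -c 'dmesg | grep -i "bad page state"' || true
```

A real reproduction would additionally need corrupted or fuzzed source metadata, since the comments note that clean loopback filesystems did not trigger the bug.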