Ubuntu bug report is here: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/931371

Attaching a completely blank disk image to a virtual machine causes the following stack trace when loading the virtio block driver:

[ 1.106728] loop: module loaded
[ 1.125680] vda: unknown partition table
[ 1.789721] Switching to clocksource tsc
[ 8.373409] Clocksource tsc unstable (delta = 87849991 ns)
[ 8.374642] Switching to clocksource jiffies
[ 241.037694] INFO: task swapper/0:1 blocked for more than 120 seconds.
[ 241.037966] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 241.038424] swapper/0 D ffffffff81806240 0 1 0 0x00000000
[ 241.039028] ffff88001ed7d870 0000000000000046 0000000000000000 000000004dbb1e54
[ 241.039460] ffff88001ed7dfd8 ffff88001ed7dfd8 ffff88001ed7dfd8 0000000000013780
[ 241.039839] ffffffff81c0d020 ffff88001ed80000 ffff88001ed7d850 ffff88001f014040
[ 241.040220] Call Trace:
[ 241.041349] [<ffffffff811157a0>] ? __lock_page+0x70/0x70
[ 241.041712] [<ffffffff8165574f>] schedule+0x3f/0x60
[ 241.041798] [<ffffffff816557ff>] io_schedule+0x8f/0xd0
[ 241.041943] [<ffffffff811157ae>] sleep_on_page+0xe/0x20
[ 241.045386] [<ffffffff81655eca>] __wait_on_bit_lock+0x5a/0xc0
[ 241.045467] [<ffffffff81115797>] __lock_page+0x67/0x70
[ 241.049105] [<ffffffff81089e60>] ? autoremove_wake_function+0x40/0x40
[ 241.049416] [<ffffffff81116a20>] do_read_cache_page+0x160/0x180
[ 241.049656] [<ffffffff811ad630>] ? blkdev_write_begin+0x30/0x30
[ 241.049783] [<ffffffff81116a89>] read_cache_page_async+0x19/0x20
[ 241.049931] [<ffffffff81116a9e>] read_cache_page+0xe/0x20
[ 241.053417] [<ffffffff811e14bd>] read_dev_sector+0x2d/0x90
[ 241.053629] [<ffffffff811e2604>] adfspart_check_ICS+0x74/0x2d0
[ 241.053834] [<ffffffff81311f64>] ? snprintf+0x34/0x40
[ 241.053987] [<ffffffff811e2590>] ? rescan_partitions+0x300/0x300
[ 241.054210] [<ffffffff811e1c38>] check_partition+0xf8/0x200
[ 241.057501] [<ffffffff811e236a>] rescan_partitions+0xda/0x300
[ 241.057746] [<ffffffff811ae68c>] __blkdev_get+0x2bc/0x420
[ 241.057888] [<ffffffff810899f7>] ? bit_waitqueue+0x17/0xc0
[ 241.058000] [<ffffffff811ae84e>] blkdev_get+0x5e/0x1e0
[ 241.058080] [<ffffffff812f6b62>] register_disk+0x162/0x180
[ 241.058177] [<ffffffff812f6c24>] add_disk+0xa4/0x1b0
[ 241.061902] [<ffffffff8162925a>] virtblk_probe+0x43d/0x4e2
[ 241.064195] [<ffffffff814083f0>] ? virtblk_config_changed+0x30/0x30
[ 241.065436] [<ffffffff813a3020>] ? vp_find_vqs+0xc0/0xc0
[ 241.065529] [<ffffffff813a15e3>] virtio_dev_probe+0xe3/0x140
[ 241.065617] [<ffffffff813f08d8>] really_probe+0x68/0x190
[ 241.065701] [<ffffffff813f0b65>] driver_probe_device+0x45/0x70
[ 241.065789] [<ffffffff813f0c3b>] __driver_attach+0xab/0xb0
[ 241.065873] [<ffffffff813f0b90>] ? driver_probe_device+0x70/0x70
[ 241.065962] [<ffffffff813f0b90>] ? driver_probe_device+0x70/0x70
[ 241.066051] [<ffffffff813ef9cc>] bus_for_each_dev+0x5c/0x90
[ 241.066137] [<ffffffff813f069e>] driver_attach+0x1e/0x20
[ 241.069422] [<ffffffff813f02f0>] bus_add_driver+0x1a0/0x270
[ 241.073426] [<ffffffff81d2fcb8>] ? loop_init+0x12f/0x12f
[ 241.073641] [<ffffffff813f11a6>] driver_register+0x76/0x140
[ 241.073801] [<ffffffff81d2fcb8>] ? loop_init+0x12f/0x12f
[ 241.073951] [<ffffffff813a1840>] register_virtio_driver+0x20/0x30
[ 241.077413] [<ffffffff81d2fd0a>] init+0x52/0x7c
[ 241.077611] [<ffffffff81002040>] do_one_initcall+0x40/0x180
[ 241.077760] [<ffffffff81cf9ce9>] kernel_init+0xcf/0x14e
[ 241.077913] [<ffffffff81661d74>] kernel_thread_helper+0x4/0x10
[ 241.078061] [<ffffffff81cf9c1a>] ? start_kernel+0x3c7/0x3c7
[ 241.081352] [<ffffffff81661d70>] ? gs_change+0x13/0x13

This happens with 3.2.0 and a more recent 3.3 kernel.
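Reading the backtrace: virtblk_probe() registers the new disk, add_disk()/register_disk() triggers a partition rescan, check_partition() dispatches to adfspart_check_ICS(), and that parser's read of the first disk sector via read_dev_sector()/read_cache_page() ends up sleeping in __lock_page(), waiting for the page to be unlocked when the read completes. The hung task means that virtio read never completes. As a side note, if the blank image is attached to an already-running guest instead of at VM creation (an assumption; the report above attaches it at boot), the same stall can be checked from userspace by reading sector 0 of the device directly, independent of any partition parser. A minimal sketch, with the /dev/vdb device node assumed:

/*
 * Hypothetical userspace check (not from the original report): read the
 * first sector of the virtio disk directly.  If the virtio request never
 * completes, this read blocks the same way the in-kernel
 * read_dev_sector() call does in the trace above.
 * The device node name (/dev/vdb) is an assumption.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	unsigned char sector[512];
	int fd = open("/dev/vdb", O_RDONLY);

	if (fd < 0) {
		perror("open");
		return EXIT_FAILURE;
	}

	/* The same 512-byte read of sector 0 that the partition code issues. */
	ssize_t n = read(fd, sector, sizeof(sector));
	if (n < 0)
		perror("read");
	else
		printf("read %zd bytes from sector 0\n", n);

	close(fd);
	return 0;
}

If this read also blocks indefinitely, the problem is in request completion (virtio/QEMU side) rather than in any particular partition parser.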
Still happening with 3.4.0-rc2+. It happens when CONFIG_ACORN_PARTITION_ICS=y (not "ADFS" as previously stated); I've updated the summary to reflect this. I can reproduce it easily using my own kernel compiled from Linus's git tree. I can only reproduce it in an Ubuntu VM (not on bare-metal Fedora), although that is probably just a timing issue.
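For context on why the config option matters: check_partition() simply walks a table of all compiled-in partition parsers, and each parser starts by reading the first sector(s) of the disk, so whichever enabled parser comes first in the table is the one that shows up in the backtrace. Below is a simplified userspace model of that dispatch (not the actual kernel source; the parser bodies are stand-ins):

/*
 * Simplified model (not the real kernel code) of how the partition
 * scanner dispatches to the compiled-in parsers.  Each entry is tried in
 * turn, and every parser begins by reading sector 0 of the disk, so
 * CONFIG_ACORN_PARTITION_ICS only decides whether adfspart_check_ICS()
 * is present in the table at all.
 */
#include <stdio.h>

struct parsed_partitions { const char *disk; };

typedef int (*check_fn)(struct parsed_partitions *);

/* Stand-ins for the real parsers; each would call read_dev_sector(). */
static int adfspart_check_ICS(struct parsed_partitions *p)
{ printf("%s: trying Acorn/ICS\n", p->disk); return 0; }
static int efi_partition(struct parsed_partitions *p)
{ printf("%s: trying EFI/GPT\n", p->disk); return 0; }
static int msdos_partition(struct parsed_partitions *p)
{ printf("%s: trying MBR\n", p->disk); return 0; }

static check_fn check_part[] = {
	adfspart_check_ICS,	/* only if CONFIG_ACORN_PARTITION_ICS=y */
	efi_partition,
	msdos_partition,
	NULL,
};

int main(void)
{
	struct parsed_partitions state = { .disk = "vda" };

	/* 0 means "not my format"; keep trying the next parser. */
	for (int i = 0; check_part[i]; i++)
		if (check_part[i](&state) > 0)
			break;
	return 0;
}

So the config option only determines which parser issues the first sector read; any enabled parser will hit the same hang if that read never completes.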
More oddness. I disabled all CONFIG_ACORN_PARTITION settings. The bug still happens! But in another partition function.

[ 240.632690] INFO: task swapper/0:1 blocked for more than 120 seconds.
[ 240.632891] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 240.633222] swapper/0 D ffffffff8180ca80 0 1 0 0x00000000
[ 240.633623] ffff88001e911750 0000000000000046 0000000000000000 000000004e14c044
[ 240.633930] ffff88001e908000 ffff88001e911fd8 ffff88001e911fd8 ffff88001e911fd8
[ 240.634167] ffffffff81c13020 ffff88001e908000 ffff88001e911730 ffff88001ec140e8
[ 240.634461] Call Trace:
[ 240.635197] [<ffffffff81119840>] ? __lock_page+0x70/0x70
[ 240.635428] [<ffffffff81655979>] schedule+0x29/0x70
[ 240.635602] [<ffffffff81655a4f>] io_schedule+0x8f/0xd0
[ 240.635755] [<ffffffff8111984e>] sleep_on_page+0xe/0x20
[ 240.635907] [<ffffffff816540fa>] __wait_on_bit_lock+0x5a/0xc0
[ 240.636130] [<ffffffff81119837>] __lock_page+0x67/0x70
[ 240.636283] [<ffffffff81073840>] ? autoremove_wake_function+0x40/0x40
[ 240.636472] [<ffffffff8111aa70>] do_read_cache_page+0x160/0x180
[ 240.636486] [<ffffffff811b0170>] ? blkdev_write_begin+0x30/0x30
[ 240.636486] [<ffffffff8111aad9>] read_cache_page_async+0x19/0x20
[ 240.636486] [<ffffffff8111aaee>] read_cache_page+0xe/0x20
[ 240.636486] [<ffffffff812f832d>] read_dev_sector+0x2d/0x90
[ 240.636486] [<ffffffff812fe68c>] read_lba+0xbc/0x120
[ 240.636486] [<ffffffff812feaeb>] find_valid_gpt+0xbb/0x610
[ 240.636486] [<ffffffff81316d7e>] ? string.isra.4+0x3e/0xd0
[ 240.636486] [<ffffffff812ff040>] ? find_valid_gpt+0x610/0x610
[ 240.636486] [<ffffffff812ff0c2>] efi_partition+0x82/0x3b0
[ 240.636486] [<ffffffff813182c4>] ? snprintf+0x34/0x40
[ 240.636486] [<ffffffff812ff040>] ? find_valid_gpt+0x610/0x610
[ 240.636486] [<ffffffff812f93b8>] check_partition+0xf8/0x200
[ 240.636486] [<ffffffff812f9017>] rescan_partitions+0xb7/0x2b0
[ 240.636486] [<ffffffff811b13cb>] __blkdev_get+0x39b/0x480
[ 240.636486] [<ffffffff811937f2>] ? inode_init_always+0x102/0x1d0
[ 240.636486] [<ffffffff811b1503>] blkdev_get+0x53/0x310
[ 240.636486] [<ffffffff8119311c>] ? unlock_new_inode+0x5c/0x90
[ 240.636486] [<ffffffff811b0021>] ? bdget+0x121/0x140
[ 240.636486] [<ffffffff812f6885>] add_disk+0x3f5/0x490
[ 240.636486] [<ffffffff816343ff>] virtblk_probe+0x45f/0x503
[ 240.636486] [<ffffffff81413c20>] ? lo_release+0x90/0x90
[ 240.636486] [<ffffffff813a9af0>] ? vp_reset+0x90/0x90
[ 240.640804] [<ffffffff813a8053>] virtio_dev_probe+0xe3/0x140
[ 240.641851] [<ffffffff813fa30e>] driver_probe_device+0x7e/0x220
[ 240.642170] [<ffffffff813fa55b>] __driver_attach+0xab/0xb0
[ 240.642518] [<ffffffff813fa4b0>] ? driver_probe_device+0x220/0x220
[ 240.642821] [<ffffffff813f8746>] bus_for_each_dev+0x56/0x90
[ 240.643101] [<ffffffff813f9e2e>] driver_attach+0x1e/0x20
[ 240.643372] [<ffffffff813f99e0>] bus_add_driver+0x1a0/0x270
[ 240.643665] [<ffffffff81d28240>] ? loop_init+0x12e/0x12e
[ 240.643939] [<ffffffff81d28240>] ? loop_init+0x12e/0x12e
[ 240.644282] [<ffffffff813faab6>] driver_register+0x76/0x130
[ 240.644613] [<ffffffff81d28240>] ? loop_init+0x12e/0x12e
[ 240.644887] [<ffffffff813a82b0>] register_virtio_driver+0x20/0x30
[ 240.644989] [<ffffffff81d28294>] init+0x54/0x7e
[ 240.644989] [<ffffffff8100203f>] do_one_initcall+0x3f/0x170
[ 240.644989] [<ffffffff81cf2d53>] kernel_init+0x134/0x1b8
[ 240.644989] [<ffffffff81cf2549>] ? rdinit_setup+0x28/0x28
[ 240.644989] [<ffffffff8165fe64>] kernel_thread_helper+0x4/0x10
[ 240.644989] [<ffffffff81cf2c1f>] ? start_kernel+0x3ce/0x3ce
[ 240.644989] [<ffffffff8165fe60>] ? gs_change+0x13/0x13
I have seen the same condition as Richard, using the same applications, but on a raw virtual disk with multiple primary and extended partitions and a working MBR, running the Ubuntu kernel from the most recent Precise Pangolin release. So the problem is more general than just a blank disk: it also fails on a fully functional virtual disk.
I now believe this is a bug in the Ubuntu qemu package.