Bug 34662 - Warning at block/genhd.c:1556 disk_clear_events
Summary: Warning at block/genhd.c:1556 disk_clear_events
Status: CLOSED CODE_FIX
Alias: None
Product: IO/Storage
Classification: Unclassified
Component: Block Layer
Hardware: All Linux
Importance: P1 normal
Assignee: Tejun Heo
URL:
Keywords:
Depends on:
Blocks: 32012
Reported: 2011-05-07 18:17 UTC by Maciej Rutecki
Modified: 2011-06-28 07:52 UTC
8 users

See Also:
Kernel Version: 2.6.39-rc5
Subsystem:
Regression: Yes
Bisected commit-id:


Attachments
test-module.c (2.29 KB, text/plain)
2011-05-11 13:48 UTC, Tejun Heo
Details
fix-check_events-race.patch (657 bytes, patch)
2011-05-11 13:52 UTC, Tejun Heo
Details | Diff
Kernel configuration (56.41 KB, text/plain)
2011-05-13 16:09 UTC, Sitsofe Wheeler
Details
2.6.39-rc1-00220-g148086b dmesg (35.18 KB, text/plain)
2011-05-14 08:37 UTC, Sitsofe Wheeler
Details
fix-check_events-race-1.patch (2.33 KB, patch)
2011-05-16 15:38 UTC, Tejun Heo
Details | Diff
fix-check_events-race-2.patch (4.66 KB, patch)
2011-05-17 08:42 UTC, Tejun Heo
Details | Diff

Description Maciej Rutecki 2011-05-07 18:17:43 UTC
Subject    : Warning at block/genhd.c:1556 disk_clear_events
Submitter  : Meelis Roos <mroos@linux.ee>
Date       : 2011-05-02 9:59
Message-ID : alpine.SOC.1.00.1105021255400.6200@math.ut.ee
References : http://marc.info/?l=linux-kernel&m=130433037002610&w=2

This entry is being used for tracking a regression from 2.6.38. Please don't
close it until the problem is fixed in the mainline.
Comment 1 Florian Mickler 2011-05-09 09:35:04 UTC
Is the warning reproducible, or was it a one-off?
Comment 2 Meelis Roos 2011-05-09 10:45:20 UTC
Have not been able to reproduce with two later snapshots.
Comment 3 Florian Mickler 2011-05-09 18:19:33 UTC
Ok, I'm closing this for now as unreproducible.
Comment 4 Meelis Roos 2011-05-10 05:50:33 UTC
Got one more warning on the same machine with 2.6.39-rc6-00068-g8b06161, around the time (or up to 10 min later) when I left the computer for the evening and locked the screen. No USB insertions or removals at that time - the last USB disk event was a disk removal more than 5 hours before.

------------[ cut here ]------------
WARNING: at block/genhd.c:1556 disk_clear_events+0x111/0x120()
Hardware name: System Product Name
Modules linked in: nls_iso8859_1 nls_cp437 vfat fat ip6table_filter ip6_tables iptable_filter ip_tables powernow_k8 mperf cpufreq_powersave ebtable_nat cpufreq_stats ebtables x_tables cpufreq_conservative cpufreq_userspace cpufreq_ondemand freq_table parport_pc ppdev lp parport bnep rfcomm bluetooth rfkill crc16 snd_hrtimer uinput binfmt_misc fuse nfsd nfs lockd auth_rpcgss nfs_acl sunrpc bridge ipv6 stp llc w83627ehf hwmon_vid kvm_amd kvm firewire_sbp2 loop snd_hda_codec_realtek usb_storage usb_libusual uas option usbhid hid usb_wwan snd_hda_intel snd_hda_codec snd_hwdep snd_pcm_oss snd_mixer_oss snd_pcm snd_seq_dummy snd_seq_oss snd_seq_midi nouveau fbcon ttm tileblit snd_rawmidi snd_seq_midi_event sg snd_seq font bitblit fbcon_rotate fbcon_cw fbcon_ud fbcon_ccw softcursor drm_kms_helper usbserial sr_mod snd_timer snd_seq_device drm fb ohci_hcd fbdev ehci_hcd ssb nv_tco snd i2c_algo_bit cdrom k8temp ata_generic pata_acpi firewire_ohci i2c_nforce2 firewire_core soundcore hwmon usbcore i2c_core pcmcia pata_amd crc_itu_t snd_page_alloc cfbcopyarea cfbimgblt cfbfillrect edac_core pcmcia_core floppy evdev rtc_cmos rtc_core rtc_lib pcspkr processor fan thermal button
Pid: 8052, comm: udisks-daemon Not tainted 2.6.39-rc6-00068-g8b06161 #62
Call Trace:
 [<ffffffff81039d3b>] ? warn_slowpath_common+0x7b/0xc0
 [<ffffffff8118e5b1>] ? disk_clear_events+0x111/0x120
 [<ffffffff81106540>] ? check_disk_change+0x30/0x80
 [<ffffffff81238b73>] ? sd_open+0xc3/0x1e0
 [<ffffffff811077d3>] ? __blkdev_get+0x93/0x380
 [<ffffffff81107b1f>] ? blkdev_get+0x5f/0x250
 [<ffffffff81032e97>] ? sub_preempt_count.part.158+0x47/0x80
 [<ffffffff81340030>] ? _raw_spin_unlock+0x10/0x30
 [<ffffffff81107d10>] ? blkdev_get+0x250/0x250
 [<ffffffff810d4af1>] ? __dentry_open+0x191/0x340
 [<ffffffff81340030>] ? _raw_spin_unlock+0x10/0x30
 [<ffffffff810e4603>] ? do_last.isra.37+0x2a3/0x6c0
 [<ffffffff810e1fea>] ? path_init+0x30a/0x3d0
 [<ffffffff810e5084>] ? path_openat+0xc4/0x3b0
 [<ffffffff813407ce>] ? common_interrupt+0xe/0x13
 [<ffffffff810e53b5>] ? do_filp_open+0x45/0xb0
 [<ffffffff81032e39>] ? get_parent_ip+0x9/0x20
 [<ffffffff81032e97>] ? sub_preempt_count.part.158+0x47/0x80
 [<ffffffff81340030>] ? _raw_spin_unlock+0x10/0x30
 [<ffffffff810f1544>] ? alloc_fd+0xe4/0x130
 [<ffffffff810d5f05>] ? do_sys_open+0xf5/0x1d0
 [<ffffffff81340dfb>] ? system_call_fastpath+0x16/0x1b
---[ end trace 709acec9ff7b6b7a ]---
Comment 5 Meelis Roos 2011-05-10 06:54:29 UTC
> https://bugzilla.kernel.org/show_bug.cgi?id=34662

Some hours after closing it, it triggered again. Probably needs to be 
reopened?
Comment 6 Florian Mickler 2011-05-10 07:12:36 UTC
Yes, sorry. Normally I don't close quite that fast, but I misread the date as one month old... reopened. You might also want to reply to your original email with the trace, so people know about it.
Comment 7 Tejun Heo 2011-05-10 14:24:39 UTC
There was an earlier report of this problem, and debugging was in progress.  I'm waiting for test results but will also try to reproduce the problem again.

 http://thread.gmane.org/gmane.linux.kernel/1122448

Thank you.
Comment 8 Tejun Heo 2011-05-10 16:47:48 UTC
Now my test code reliably reproduces the condition.  I'll dig in.  Thanks.
Comment 9 Sitsofe Wheeler 2011-05-10 21:33:27 UTC
Tejun:
What do you use to reproduce this issue?
Comment 10 Tejun Heo 2011-05-11 13:48:37 UTC
Created attachment 57342 [details]
test-module.c

It's a workqueue flushing test program; I wrote it because I was a bit confused about what was going on.  It doesn't directly test the disk event code.
Comment 11 Tejun Heo 2011-05-11 13:52:57 UTC
Created attachment 57352 [details]
fix-check_events-race.patch

Sitsofe, can you please test whether the attached patch fixes the problem?  The problem was that the queue/flush in disk_clear_events() could race with a queue from other places, in which case the flush doesn't guarantee anything.

My first patch tried to fix this by modifying __disk_block_events() to always cancel the work item.  It was in the right direction but only decreased the frequency of the problem, because it sometimes ended up cancelling the work scheduled from disk_clear_events().  This patch removes the only non-syncing caller of __disk_block_events(), so that the work item is always guaranteed to be offline if ev->block is non-zero.

Thank you.
Comment 12 Sitsofe Wheeler 2011-05-12 22:38:33 UTC
Tejun:
I'm afraid the problem is still here with the above patch:

[  412.317941] ------------[ cut here ]------------
[  412.317960] WARNING: at block/genhd.c:1564 disk_clear_events+0xbc/0xd0()
[  412.317965] Hardware name: 900
[  412.317968] Modules linked in:
[  412.317977] Pid: 25927, comm: udisks-part-id Not tainted 2.6.39-rc1-00220-g148086b-dirty #441
[  412.317982] Call Trace:
[  412.317993]  [<b0121a68>] ? warn_slowpath_common+0x78/0xb0
[  412.318001]  [<b0238d5c>] ? disk_clear_events+0xbc/0xd0
[  412.318007]  [<b0238d5c>] ? disk_clear_events+0xbc/0xd0
[  412.318007]  [<b0121ab9>] ? warn_slowpath_null+0x19/0x20
[  412.318007]  [<b0238d5c>] ? disk_clear_events+0xbc/0xd0
[  412.318007]  [<b01ac07f>] ? check_disk_change+0x1f/0x70
[  412.318007]  [<b032d4a2>] ? sd_open+0x92/0x150
[  412.318007]  [<b032d410>] ? sd_release+0x60/0x60
[  412.318007]  [<b01ad160>] ? __blkdev_get+0x350/0x390
[  412.318007]  [<b01ad1e7>] ? blkdev_get+0x47/0x270
[  412.318007]  [<b018bccc>] ? do_lookup+0x12c/0x270
[  412.318007]  [<b01ac62e>] ? bd_acquire+0x1e/0x90
[  412.318007]  [<b0181533>] ? __dentry_open+0xf3/0x240
[  412.318007]  [<b0181770>] ? nameidata_to_filp+0x70/0x80
[  412.318007]  [<b01ad410>] ? blkdev_get+0x270/0x270
[  412.318007]  [<b018cfcc>] ? do_last+0x1ac/0x750
[  412.318007]  [<b018af4a>] ? path_get+0xa/0x20
[  412.318007]  [<b018e1cc>] ? path_openat+0x9c/0x340
[  412.318007]  [<b018e598>] ? do_filp_open+0x68/0x90
[  412.318007]  [<b017f345>] ? kmem_cache_alloc+0x95/0x120
[  412.318007]  [<b0199321>] ? alloc_fd+0x61/0xe0
[  412.318007]  [<b0182684>] ? do_sys_open+0xf4/0x1f0
[  412.318007]  [<b01827fc>] ? sys_open+0x2c/0x40
[  412.318007]  [<b04e99d7>] ? sysenter_do_call+0x12/0x26
[  412.318007] ---[ end trace 4d67050280c66439 ]---
Comment 13 Sitsofe Wheeler 2011-05-12 22:41:09 UTC
The above was caused by using:

for i in `seq 1 100`; do udevadm trigger --subsystem-match=block ; done

about 2 or 3 times.

(Additionally, test-module.c relies on a function, flush_work_dbg(), that is not in the mainline kernel but was in one of the test patches; the interface has since drifted slightly.)
Comment 14 Tejun Heo 2011-05-13 08:26:58 UTC
Can you please attach your .config and full kernel boot log?  flush_work_dbg() is something I added for debugging; it's not in the mainline.  The code just demonstrates certain behavior I was suspicious about.

Thanks.
Comment 15 Sitsofe Wheeler 2011-05-13 16:09:12 UTC
Created attachment 57752 [details]
Kernel configuration
Comment 16 Sitsofe Wheeler 2011-05-13 16:13:02 UTC
The benefit of flush_work_dbg() was that it sounded like you had developed a way to trigger the bug without hitting the disks really hard. I would have preferred to use your method to save wear and tear on the SSD...
Comment 17 Sitsofe Wheeler 2011-05-14 08:37:22 UTC
Created attachment 57792 [details]
2.6.39-rc1-00220-g148086b dmesg
Comment 18 Tejun Heo 2011-05-16 14:00:54 UTC
Heh, I built the kernel with the same config, connected an SSD and two hard disks, and did the udev thing, but it just won't reproduce for me.  :-(

I'll try a bit more.  Thanks for your patience.
Comment 19 Borislav Petkov 2011-05-16 14:14:53 UTC
(In reply to comment #18)
> Heh, I built the kernel with the same config, connected an SSD and two hard
> disks, and did the udev thing, but it just won't reproduce for me.  :-(
> 
> I'll try a bit more.  Thanks for your patience.

Same here, I cannot reproduce it either, meaning that it happens only sometimes - the only successful reproduction was when I first reported it. I'll keep trying though.

Thanks.
Comment 20 Tejun Heo 2011-05-16 15:38:02 UTC
Created attachment 58152 [details]
fix-check_events-race-1.patch

Sitsofe, can you please test this one?  I had another long look at the code and I _think_ this is the solution, but given my history with this bug I'm not feeling particularly confident. :-)

If your SSD dies during testing, please send me a private email.  I'll be happy to pay for the replacement.

Thank you.
Comment 21 Sitsofe Wheeler 2011-05-17 07:58:43 UTC
I will note that this problem seemingly doesn't trigger with newer kernels. The kernel that I am reproducing it on (and applying the patches to) is 148086bb64694cc91624bab2a550d50f800eb4b7 .

However, a quick test (8 or so rounds of the udevadm test) with the patch in comment #20 did not reproduce the problem. If anything, the drive seemed to remain busy for a shorter period of time during the test - normally, once the program stops, the disk light continues to flash for some time afterwards. This time it only continued flashing for about 10 or so seconds...
Comment 22 Tejun Heo 2011-05-17 08:13:52 UTC
The bug is likely to be highly timing dependent, so seemingly unrelated changes in the kernel may affect reproducibility.  That is probably also why only you seem to be able to reproduce the problem with some regularity.

That said, it is an obvious bug and I'm kinda embarrassed I didn't see that before.  You know, one of those "What was I thinking?  Not much apparently" moments.  I'll push the fix upstream.

Thank you.
Comment 23 Tejun Heo 2011-05-17 08:42:31 UTC
Created attachment 58262 [details]
fix-check_events-race-2.patch

This is the final version, which will be sent upstream.  It's basically the same patch, but I would really appreciate it if you could verify, just in case.

Thank you.
Comment 24 Florian Mickler 2011-06-06 16:41:05 UTC
I can't see the patch from comment #23 or comment #20 in Linus' tree... is this fixed some other way in the meantime?
Comment 25 Tejun Heo 2011-06-09 16:31:58 UTC
Hmmm... I thought it was in Linus' tree.  I'll check with Jens.

Thanks.
Comment 26 Sitsofe Wheeler 2011-06-09 22:14:58 UTC
Sorry for the massive delay on this. I'll try to get some testing done on the patch in comment #23 tomorrow...
Comment 27 Sitsofe Wheeler 2011-06-11 12:36:26 UTC
OK, I've tested with the patch in comment #23 and, as previously mentioned in comment #20, things felt smoother while the test was running; the test seems to finish faster too.  No error messages were printed across six runs on six different boots, so things look fixed with that patch.

Tested-by: Sitsofe Wheeler <sitsofe@yahoo.com>
Comment 28 Florian Mickler 2011-06-11 15:59:03 UTC
Patch: https://bugzilla.kernel.org/attachment.cgi?id=58262
Comment 29 Rafael J. Wysocki 2011-06-26 21:48:58 UTC
Fixed by commit fdd514e16bb2531c0c61ae8a1f87740ce217f630 .
Comment 30 Florian Mickler 2011-06-28 07:52:03 UTC
A patch referencing this bug report has been merged in v3.0-rc5:

commit a9dce2a3b4f0686dd66cb44d4826a59508bce969
Author: Tejun Heo <tj@kernel.org>
Date:   Thu Jun 9 20:43:54 2011 +0200

    block: don't use non-syncing event blocking in disk_check_events()
