Bug 295
| Summary: | RD device shares elevator queue objects | | |
|---|---|---|---|
| Product: | IO/Storage | Reporter: | Jon Smirl (jonsmirl) |
| Component: | Block Layer | Assignee: | Jens Axboe (axboe) |
| Status: | CLOSED CODE_FIX | | |
| Severity: | normal | | |
| Priority: | P2 | | |
| Hardware: | i386 | | |
| OS: | Linux | | |
| Kernel Version: | 2.5.59 | Subsystem: | |
| Regression: | --- | Bisected commit-id: | |
Description
Jon Smirl
2003-01-25 19:22:32 UTC
Is there any problem with having each ramdisk have its own request_queue? I don't know anything about how RD is using the elevator queues. I do believe some people use large numbers of disks, so this might imply large numbers of queues. Do the RD elevator queues actually do anything, or are they always empty?

I found this problem because UML shuts the kernel down gracefully. A graceful shutdown calls the exit functions on all of the drivers, and the RD exit function fails when destroying the queues. You don't see this in a normal kernel reboot. A normal reboot doesn't bother shutting down the hardware individually; it just calls a BIOS reset, which resets everything. UML can't do a BIOS reset since that would reset the host OS too. It might be worth asking on lkml if someone is familiar with the queues.

Yeah, I would suggest just posting what you need to have happen and asking about the request_queues on lkml, because I really don't have any intimate knowledge of it.

rd is not using an elevator queue; however, it has a block layer queue assigned to it like any other block driver.

What is the right fix for this? Should all ram disks share the same queue object, with elv_register_queue() modified to increment kobject ref counts if the object already exists? Or should each RAM disk have its own queue?

del_gendisk has changed since this patch was written, but it still looks like the patch has not been applied.

Fixed in 2.5.70-bk
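The first option discussed above is essentially kobject-style reference counting: one queue object shared by every RAM disk, torn down only when the last user drops its reference, so a graceful module exit destroys it exactly once. The sketch below models that idea in plain userspace C. The `toy_*` names are hypothetical illustrations, not the kernel's real block-layer or kobject API.

```c
#include <stdlib.h>

/* Toy model of a shared, ref-counted queue object.  In the kernel
 * this role would be played by a request_queue whose kobject ref
 * count is bumped for each additional registration. */
struct toy_queue {
    int refcount;
};

static struct toy_queue *shared_queue;

/* First caller allocates the shared queue; later callers just take
 * another reference to the existing object. */
struct toy_queue *toy_get_queue(void)
{
    if (!shared_queue) {
        shared_queue = calloc(1, sizeof(*shared_queue));
        if (!shared_queue)
            return NULL;
    }
    shared_queue->refcount++;
    return shared_queue;
}

/* Drop one reference; free the queue only when the last RAM disk
 * has released it, so a driver exit path never destroys a queue
 * that other disks are still using. */
void toy_put_queue(struct toy_queue *q)
{
    if (q && --q->refcount == 0) {
        free(q);
        shared_queue = NULL;
    }
}
```

With this scheme, the RD exit function would call `toy_put_queue()` once per disk, and the underlying object is destroyed a single time rather than once per disk, which is the failure mode the UML shutdown exposed.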