Bug 10746 - cfq has worse io-throughput on certain hardware than deadline, as, noop (50-75%)
Summary: cfq has worse io-throughput on certain hardware than deadline, as, noop (50-75%)
Status: CLOSED OBSOLETE
Alias: None
Product: IO/Storage
Classification: Unclassified
Component: Block Layer
Hardware: All
OS: Linux
Importance: P1 high
Assignee: Jens Axboe
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2008-05-19 03:34 UTC by Matthew
Modified: 2012-05-21 15:27 UTC
CC List: 1 user

See Also:
Kernel Version: 2.6.25 to 2.6.26-rc2
Subsystem:
Regression: No
Bisected commit-id:


Attachments
output of blktrace from a vanilla kernel (2.6.25) and with the patch mentioned in comment 1 (55 bytes, text/plain)
2008-05-19 04:27 UTC, Matthew

Description Matthew 2008-05-19 03:34:01 UTC
Latest working kernel version: unknown
Earliest failing kernel version: 2.6.25-rc2 to latest git
Distribution: GNU/Gentoo Linux
Hardware Environment: P5W DH Deluxe, ahci (Intel ICH7R + Jmicron JMB363/361), Intel Core 2 Duo 6600, 
Software Environment: 
Problem Description: 

cfq has good interactivity during heavy load (especially on desktop systems), but in general its I/O throughput is worse than that of as, deadline, and noop:
(e.g. as, noop, deadline: 120 MB/s [platter speed]; cfq: 55 MB/s)
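
For comparison against the other elevators, the scheduler can be switched per device at runtime; a minimal sketch (assuming /dev/sda is the affected disk and deadline is the scheduler compared against):

# show the available schedulers; the active one is in brackets
cat /sys/block/sda/queue/scheduler
# switch to deadline for the comparison run, then repeat the dd test
echo deadline > /sys/block/sda/queue/scheduler
echo 1 > /proc/sys/vm/drop_caches
dd if=/dev/sda of=/dev/null bs=64k count=5000
# switch back to cfq afterwards
echo cfq > /sys/block/sda/queue/scheduler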

Steps to reproduce (on an affected kernel):
cat /sys/block/sda/queue/iosched/slice_idle
8
echo 1 >/proc/sys/vm/drop_caches
dd if=/dev/sda of=/dev/null bs=64k count=5000
or
hdparm -t /dev/sda

should give poor throughput, e.g. around 55-65 MB/s

echo 0 >/sys/block/sda/queue/iosched/slice_idle
echo 1 >/proc/sys/vm/drop_caches
dd if=/dev/sda of=/dev/null bs=64k count=5000

should provide throughput around platter speed, e.g. 120 MB/s
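
To average out run-to-run noise, the two settings can be compared with a small script; a sketch only (the device name and block count are assumptions, dd's own summary line is used for the MB/s figure, cfq is assumed to be the active scheduler, and it must be run as root):

#!/bin/sh
# compare uncached dd read throughput with slice_idle=8 (default) and slice_idle=0
DEV=sda
for idle in 8 0; do
    echo $idle > /sys/block/$DEV/queue/iosched/slice_idle
    echo 1 > /proc/sys/vm/drop_caches
    echo "slice_idle=$idle:"
    dd if=/dev/$DEV of=/dev/null bs=64k count=5000 2>&1 | tail -n 1
done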

http://lkml.org/lkml/2008/5/10/71
http://lkml.org/lkml/2008/5/11/48
Comment 1 Matthew 2008-05-19 04:27:58 UTC
Created attachment 16192
this is the output of blktrace from a vanilla kernel (2.6.25) and with the patch mentioned at the end of this comment

correction for the kernel version above:
2.6.25-rc2 should be 2.6.26-rc2 (small typo)

"vanilla" refers to 2.6.25

attached you'll find the output of a vanilla run (sdd, sde) and a run
with vanilla + patch2 - still no change (at least for me ;) ) -
plus the kernel config and dmesg, with some device-specific data (printer) cut out
(a capture sketch follows the list below)

01 == vanilla run
02 == run with patch2
03 == run of the other hard disks [/dev/sda, /dev/sdc] (on the vanilla kernel +
patch2), which do not show this problem (in fact they never did, not even on
vanilla == no lowered throughput)
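
For reference, a capture like the attached ones can be reproduced with blktrace; a sketch (the device name and the 30-second trace window are assumptions):

# trace /dev/sdd for 30 seconds while the dd test runs, then format with blkparse
blktrace -d /dev/sdd -o sdd-trace -w 30 &
dd if=/dev/sdd of=/dev/null bs=64k count=5000
wait
blkparse -i sdd-trace > sdd-trace.txt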

ah, before I forget (this might make a difference):
all HDDs run in AHCI mode and none are jumpered, i.e. they run at 3 Gbit/s (SATA-II),
not limited to 1.5 Gbit/s (SATA-I) (shouldn't make a difference)

/dev/sdd is on 4th port (?) of Intel ICH7R [ST3750330NS]
/dev/sde is on Jmicron JMB363/JMB361
/dev/sdd + /dev/sde show this behavior (slowing down / bad throughput)

* /dev/sda + /dev/sdc are on the ICH7R, too, but don't show this
behavior (port 1 and 2 / 3 ?)
* /dev/sda == /dev/sde --> both are of the same hdd model
[ST3250620AS, Seagate]
* /dev/sdc is [ST3250824AS]
all 4 drives are S-ATA; whether the old ATAPI/IDE driver is compiled in or not makes no difference (and shouldn't)
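
The disk-to-controller mapping can be double-checked from sysfs; a sketch (the device list simply matches the drives named above, and the printed PCI path ends in the ICH7R or JMicron controller the disk hangs off):

for d in sda sdc sdd sde; do
    printf '%s: ' "$d"
    readlink -f /sys/block/$d/device
done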

the P5W DH Deluxe has some "strange" characteristics, such as the
EZ-Backup hardware RAID controller (currently disabled), which allows
combining the JMicron and the ICH7R in a RAID 5 -
but that shouldn't be the reason here;
for more information on this please consult the P5W DH Deluxe handbook
at asus.com

the above-mentioned patch is (should be) from:
http://lkml.org/lkml/2008/5/14/97
