Bug 13463 - Poor SSD performance
Summary: Poor SSD performance
Alias: None
Product: IO/Storage
Classification: Unclassified
Component: Serial ATA
Hardware: All
OS: Linux
Importance: P1 normal
Assignee: Jeff Garzik
Depends on:
Blocks: 12398
Reported: 2009-06-05 17:37 UTC by Jake
Modified: 2009-06-29 12:01 UTC
CC List: 4 users

See Also:
Kernel Version:
Regression: Yes
Bisected commit-id:

Kernel 2.6.29 log file (25 bytes, application/octet-stream)
2009-06-07 04:36 UTC, Jake
dmesg from booting kernel 2.6.29 (20 bytes, application/octet-stream)
2009-06-07 04:37 UTC, Jake
kernel boot log (742.92 KB, text/plain)
2009-06-07 12:20 UTC, Jake
kernel boot log (846.63 KB, text/plain)
2009-06-07 12:22 UTC, Jake

Description Jake 2009-06-05 17:37:07 UTC
I have seen read performance drop roughly threefold on my OCZ Vertex 30GB SATA II SSD between kernel versions 2.6.28 and 2.6.29. Under 2.6.28, hdparm and other tests showed read speeds of 200-220 MB/s on the Vertex drive. Under 2.6.29 the same tests show 70-80 MB/s. I did not benchmark write speeds.

The kernel architecture is x86_64. Let me know if there is anything else I should check. I discussed this on the Arch Linux forums (http://bbs.archlinux.org/viewtopic.php?pid=562748#p562748), but we made no progress diagnosing the problem.
Comment 1 Andrew Morton 2009-06-05 17:48:17 UTC
I marked this as a regression.
Comment 2 Tejun Heo 2009-06-07 03:12:26 UTC
Can you please attach boot logs from the two kernels?
Comment 3 Jake 2009-06-07 04:36:12 UTC
Created attachment 21781 [details]
Kernel 2.6.29 log file
Comment 4 Jake 2009-06-07 04:37:13 UTC
Comment on attachment 21781 [details]
Kernel 2.6.29 log file

This file corresponds to booting kernel 2.6.29 when I get slow SSD reads.
Comment 5 Jake 2009-06-07 04:37:52 UTC
Created attachment 21782 [details]
dmesg from booting kernel 2.6.29
Comment 6 Jake 2009-06-07 04:39:08 UTC
I can also submit kernel.log and dmesg from boots of the kernel in which I get full read speeds. I will do this if you think it is necessary.
Comment 7 Tejun Heo 2009-06-07 04:41:29 UTC
Yep, that is what I meant by "the two kernels".  Sorry about not being clear.
Comment 8 Tejun Heo 2009-06-07 04:42:32 UTC
Also, the files you attached are only 25 and 20 bytes and contain nothing but the file names.
Comment 9 Jake 2009-06-07 12:20:44 UTC
Created attachment 21785 [details]
kernel boot log
Comment 10 Jake 2009-06-07 12:22:57 UTC
Created attachment 21786 [details]
kernel boot log
Comment 11 Jake 2009-06-07 12:23:36 UTC
I'm sorry, I should have checked the files before uploading them. The proper files are now uploaded.
Comment 12 Tejun Heo 2009-06-08 02:45:02 UTC
Hmm... I can't see any ATA-related differences.  Can you please run the following command as root on both 2.6.28 and 2.6.29 and report the results?

# dd if=/dev/sda of=/dev/null iflag=direct bs=1M count=1024

Comment 13 Jake 2009-06-08 02:49:32 UTC
In 2.6.28 and 2.6.29 the direct flag gives me 230 MB/s reads.
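The buffered-versus-direct contrast being probed here can be reproduced without root access by reading a scratch file instead of /dev/sda. A minimal sketch, with file name and sizes chosen for illustration rather than taken from the report:

```shell
# Sketch: compare buffered and O_DIRECT sequential reads on a scratch file.
# Buffered reads go through the page cache and readahead; iflag=direct
# bypasses both. O_DIRECT is unsupported on some filesystems (e.g. tmpfs).
set -e
f=./ddtest.$$                                       # scratch file, current dir
dd if=/dev/zero of="$f" bs=1M count=64 status=none  # create a 64 MiB file
sync
dd if="$f" of=/dev/null bs=1M                       # buffered read
dd if="$f" of=/dev/null bs=1M iflag=direct ||
    echo "O_DIRECT not supported here"              # direct read, no cache
rm -f "$f"
```

On a freshly written file the buffered read may be served from the page cache and look unrealistically fast; reading a raw device, as Tejun asked, avoids that.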
Comment 14 Tejun Heo 2009-06-08 04:02:41 UTC
In the original report, what 'other tests' showed large performance regression other than hdparm?  Also, does hdparm give consistent numbers over multiple trials?
Comment 15 Jake 2009-06-08 04:04:49 UTC
dd if=/dev/sda of=/dev/null also gives me read speeds of 70-80 MB/s in 2.6.29, where I get 220 MB/s in 2.6.28.
Comment 16 Tejun Heo 2009-06-08 04:09:09 UTC
Hmm... I see.  The difference comes from somewhere well above the block/storage layer, most likely the VM.  Andrew, can you please take it?  Thanks.
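The reasoning here: O_DIRECT reads run at full speed while buffered reads are slow, so the regression sits in the page-cache path rather than in libata. One setting worth inspecting in such cases is the per-queue readahead size, which any user can read from sysfs. A hedged sketch (device names depend on the local machine; the paths are standard sysfs):

```shell
# Sketch: print the readahead window for every visible block device.
# Buffered sequential read throughput depends heavily on this value;
# O_DIRECT reads do not, which fits the symptoms in this report.
for q in /sys/block/*/queue/read_ahead_kb; do
    [ -e "$q" ] || continue                 # skip if the glob matched nothing
    printf '%s: %s kB\n' "$q" "$(cat "$q")"
done
```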
Comment 17 Jake 2009-06-16 04:01:25 UTC
I installed a 2.6.30 kernel and the problem appears to be gone: read speeds are back to 220 MB/s. Apparently the problem is isolated to the 2.6.29 kernel.
Comment 18 Tejun Heo 2009-06-17 06:24:25 UTC
Heh... so it solved itself.  It would be nice to find out what the culprit was; I suppose the right people already know.  Anyway, with 2.6.30 already out the door, I don't think this is high priority.  Jeff, can you please close this one?  Thanks.
