The mptsas module from LSI's website, v4.18.00.00, performs much better than
the mptsas module included in the kernel, v3.something.

I'm only comparing sequential writes (dd if=/dev/zero of=zeros bs=1M), but...

6-drive MD RAID5 with a fresh btrfs, writing 10GB of zeros with dd:
  kernel version:    <200MB/sec
  LSI's v4.18.00.00:  395MB/sec

Pretty big difference. I've gone back and forth a few times,
enabling/disabling write cache on the drives, enabling/disabling ioc,
enabling/disabling filesystem barriers... I can make small changes in
performance but nothing compares to simply updating to v4.18.00.00.
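For reference, the write test above can be sketched as a small script. This is a minimal sketch under assumptions: MNT is a hypothetical stand-in for the btrfs mount point on the RAID5 array, and the default size is kept tiny so the sketch is safe to run; COUNT=10240 would reproduce the 10GB run. Adding conv=fdatasync makes dd include the flush in its timing, so the page cache cannot inflate the reported rate.

```shell
# Hypothetical mount point and a small default size; the original test
# wrote 10GB (COUNT=10240) to a fresh btrfs on a 6-drive MD RAID5.
MNT=${MNT:-.}
COUNT=${COUNT:-16}

# conv=fdatasync forces a flush before dd reports its rate, so the
# number reflects the disks rather than the page cache.
dd if=/dev/zero of="$MNT/zeros" bs=1M count="$COUNT" conv=fdatasync
rm -f "$MNT/zeros"
```

The rate dd prints on its last stderr line is the figure being compared between driver versions.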
(switched to email. Please respond via emailed reply-to-all, not via the
bugzilla web interface).

scsi_drivers-other reports don't appear to be coming out on the
linux-scsi list.

On Sat, 3 Apr 2010 21:16:31 GMT bugzilla-daemon@bugzilla.kernel.org wrote:

> https://bugzilla.kernel.org/show_bug.cgi?id=15688
>
>            Summary: mptsas & poor performance
>            Product: SCSI Drivers
>            Version: 2.5
>     Kernel Version: 2.6.34-020634rc1
>           Platform: All
>         OS/Version: Linux
>               Tree: Mainline
>             Status: NEW
>           Severity: normal
>           Priority: P1
>          Component: Other
>         AssignedTo: scsi_drivers-other@kernel-bugs.osdl.org
>         ReportedBy: bexamous@gmail.com
>         Regression: No
>
> The mptsas module from LSI's website, v4.18.00.00, performs much better than
> the mptsas module included in the kernel, v3.something.
>
> I'm only comparing sequential writes (dd if=/dev/zero of=zeros bs=1M), but...
>
> 6 drive MD RAID5 with fresh btrfs, writing 10GB of zeros with dd,
> kernel version: <200MB/sec
> LSI's v4.18.00.00: 395MB/sec
>
> Pretty big difference. I've gone back and forth a few times,
> enabling/disabling write cache on the drives, enabling/disabling ioc,
> enabling/disabling filesystem barriers... I can make small changes in
> performance but nothing compares to simply updating to v4.18.00.00.
Andrew,

Today I tried the same steps as mentioned by you. In my case both drivers'
performance is similar.

3.4.14 is the driver version available at kernel.org.
4.22.00.00 is the driver LSI uses internally. [4.18.00.00 does not have
support for the 2.6.34 kernel.] I guess you must have made some changes to
get 4.18.00.00 working with 2.6.34-020634rc1.

In both cases I am getting 190~210MB/sec.

Can you help me to reproduce ~395MB/sec using 4.18.00.00?

~Kashyap

> -----Original Message-----
> From: linux-scsi-owner@vger.kernel.org [mailto:linux-scsi-
> owner@vger.kernel.org] On Behalf Of Andrew Morton
> Sent: Tuesday, April 06, 2010 2:05 AM
> To: linux-scsi@vger.kernel.org; Moore, Eric
> Cc: bugzilla-daemon@bugzilla.kernel.org; bugme-
> daemon@bugzilla.kernel.org; bexamous@gmail.com
> Subject: Re: [Bugme-new] [Bug 15688] New: mptsas & poor performance
>
> [...]
I think the only changes I made to 4.18.00.00 were to just not use mptlan
(since I have no need of it and it wouldn't compile on 2.6.34) and to make a
few changes to mptsas.c (only to get it to compile on 2.6.34).

diff ./mptlinux-4.18.00.00/dkms.conf /usr/src/mptlinux-4.18.00.00/dkms.conf
13d12
< MAKE[6]="make -C ${kernel_source_dir} SUBDIRS=${dkms_tree}/${PACKAGE_NAME}/${PACKAGE_VERSION}/build modules"
28,29c27,28
< BUILT_MODULE_NAME[3]="mptlan"
< DEST_MODULE_NAME[3]="mptlan"
---
> BUILT_MODULE_NAME[3]="mptspi"
> DEST_MODULE_NAME[3]="mptspi"
30a30,31
> #MODULES_CONF_OBSOLETES[3]="mptbase,mptscsih,mptctl,mptlan"
> MODULES_CONF_ALIAS_TYPE[3]="scsi_hostadapter"
32,33c33,34
< BUILT_MODULE_NAME[4]="mptspi"
< DEST_MODULE_NAME[4]="mptspi"
---
> BUILT_MODULE_NAME[4]="mptsas"
> DEST_MODULE_NAME[4]="mptsas"
35d35
< #MODULES_CONF_OBSOLETES[4]="mptbase,mptscsih,mptctl,mptlan"
38,39c38,39
< BUILT_MODULE_NAME[5]="mptsas"
< DEST_MODULE_NAME[5]="mptsas"
---
> BUILT_MODULE_NAME[5]="mptfc"
> DEST_MODULE_NAME[5]="mptfc"
41,45d40
< MODULES_CONF_ALIAS_TYPE[5]="scsi_hostadapter"
<
< BUILT_MODULE_NAME[6]="mptfc"
< DEST_MODULE_NAME[6]="mptfc"
< DEST_MODULE_LOCATION[6]="/kernel/drivers/message/fusion"
47a43
>

diff ./mptlinux-4.18.00.00/Makefile /usr/src/mptlinux-4.18.00.00/Makefile
2c2
< # LSI Logic mpt fusion
---
> # LSI mpt fusion
19c19
< obj-$(CONFIG_FUSION_LAN) += mptlan.o
---
> #obj-$(CONFIG_FUSION_LAN) += mptlan.o

diff ./mptlinux-4.18.00.00/mptsas.c /usr/src/mptlinux-4.18.00.00/mptsas.c
2437,2438c2437,2438
< 		ioc->name, __func__, req->bio->bi_vcnt, req->data_len,
< 		rsp->bio->bi_vcnt, rsp->data_len);
---
> 		ioc->name, __func__, req->bio->bi_vcnt, blk_rq_bytes(req),
> 		rsp->bio->bi_vcnt, blk_rq_bytes(rsp));
2455c2455
< 	smpreq->RequestDataLength = cpu_to_le16(req->data_len - 4);
---
> 	smpreq->RequestDataLength = cpu_to_le16(blk_rq_bytes(req) - 4);
2485c2485
< 	flagsLength |= (req->data_len - 4);
---
> 	flagsLength |= (blk_rq_bytes(req) - 4);
2488c2488
< 		req->data_len, PCI_DMA_BIDIRECTIONAL);
---
> 		blk_rq_bytes(req), PCI_DMA_BIDIRECTIONAL);
2501c2501
< 	flagsLength |= rsp->data_len + 4;
---
> 	flagsLength |= blk_rq_bytes(rsp) + 4;
2503c2503
< 		rsp->data_len, PCI_DMA_BIDIRECTIONAL);
---
> 		blk_rq_bytes(rsp), PCI_DMA_BIDIRECTIONAL);
2534,2535c2534,2535
< 	req->data_len = 0;
< 	rsp->data_len -= smprep->ResponseDataLength;
---
> 	req->resid_len = 0;
> 	rsp->resid_len -= smprep->ResponseDataLength;
2544c2544
< 	pci_unmap_single(ioc->pcidev, dma_addr_out, req->data_len,
---
> 	pci_unmap_single(ioc->pcidev, dma_addr_out, blk_rq_bytes(req),
2547c2547
< 	pci_unmap_single(ioc->pcidev, dma_addr_in, rsp->data_len,
---
> 	pci_unmap_single(ioc->pcidev, dma_addr_in, blk_rq_bytes(rsp),

BTW this was Bonnie++ output when btrfs was still empty, 388MB/sec write,
526MB/sec read:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
nine            24G   259  95 388802  57 127153  51   221  94 526553  66 167.9 331
Latency             30004us    1451ms     378ms   23627us     118ms     104ms
Version  1.96       ------Sequential Create------ --------Random Create--------
nine                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  8406  24 +++++ +++ 11176  49 14249  40 +++++ +++ 11370  49
Latency               498us     441us     926us     601us      43us    1628us
1.96,1.96,nine,1,1270361591,24G,,259,95,388802,57,127153,51,221,94,526553,66,167.9,331,16,,,,,8406,24,+++++,+++,11176,49,14249,40,+++++,+++,11370,49,30004us,1451ms,378ms,23627us,118ms,104ms,498us,441us,926us,601us,43us,1628us

The <200MB/sec and 395MB/sec I was referring to was just using dd to write
zeros. Later on I ran bonnie and saved this.

Also, the system: Tyan mobo with two E5410 CPUs, 12GB RAM, using the onboard
LSI 1068E. The LSI connects to a HP SAS Expander. All drives are connected to
the HP SAS Expander. The drives are all 2TB WD GP drives (WD20EADS, not the
newer EARS version).
On Thu, Apr 8, 2010 at 3:31 AM, <bugzilla-daemon@bugzilla.kernel.org> wrote:

> https://bugzilla.kernel.org/show_bug.cgi?id=15688
>
> --- Comment #2 from kdesai <kashyap.desai@lsi.com> 2010-04-08 10:31:17 ---
>
> [...]
> -----Original Message-----
> From: linux-scsi-owner@vger.kernel.org [mailto:linux-scsi-
> owner@vger.kernel.org] On Behalf Of bugzilla-daemon@bugzilla.kernel.org
> Sent: Thursday, April 08, 2010 9:07 PM
> To: linux-scsi@vger.kernel.org
> Subject: [Bug 15688] mptsas & poor performance
>
> https://bugzilla.kernel.org/show_bug.cgi?id=15688
>
> --- Comment #3 from Brian Sullivan <bexamous@gmail.com> 2010-04-08 15:36:49 ---
>
> I think the only changes I made to 4.18.00.00 was to just not use mptlan
> (since I have no need and it wouldn't compile on 2.6.34) and make a few
> changes to mptsas.c (only to get it to compile on 2.6.34).
>
> [...]

OK. This is fine. All you did was just to make sure compilation is fine.

> Also system:
> Tyan mobo with two E5410 CPUs, 12GB RAM, using onboard LSI 1068E.

The CPU is the bottleneck in my system. I have to go for two quad-core CPUs
to achieve 395MB/sec throughput. I will collect similar h/w and redo the same
test again. I will post my findings once I am done with the repro.

~Kashyap
Hi Andrew & Kashyap,

It seems to me that the 4.22.00.00 driver from LSI is much faster than the
kernel's 3.xx version driver. We have 16 WD SATA disks connected through an
LSI SAS expander to a 1068E HBA, and Linux MD RAID5 using these disks. The
system has only one quad-core E5405 CPU. With the new LSI driver, we get
sequential dd read performance up to 795MB/s and write performance up to
395MB/s, which is almost twice that of the kernel 3.xx driver.

We will investigate different hardware setups further and let you know as
soon as possible.

Johnson
BTW, I did not look into this as much, but using 3.xx there also seemed to be
a limit of ~600MB/sec between the controller & expander. If I put two arrays
on the expander, each array by itself would do almost 500MB/sec in reads, but
together they couldn't break 600MB/sec. I didn't really look into this
further; maybe I was doing something wrong. ~600MB/sec, however, seems like
exactly 1/2 the bandwidth of the minisas cable between the controller &
expander.

On Sat, Apr 10, 2010 at 5:00 AM, <bugzilla-daemon@bugzilla.kernel.org> wrote:

> https://bugzilla.kernel.org/show_bug.cgi?id=15688
>
> --- Comment #5 from dujun@perabytes.com 2010-04-10 12:00:53 ---
>
> [...]
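The "exactly half" observation checks out arithmetically, assuming the link between the 1068E and the expander is a x4 wide SAS port at 3Gb/s per lane with 8b/10b encoding (those link parameters are my assumption for this hardware, not stated in the thread):

```shell
# Each 3Gb/s SAS lane carries 3e9 * 8/10 = 2.4 Gbit/s of payload,
# i.e. 300 MB/s. A miniSAS cable bundles 4 such lanes.
LANES=4
PER_LANE_MB=300
TOTAL=$((LANES * PER_LANE_MB))   # full x4 wide-port payload bandwidth
HALF=$((TOTAL / 2))
echo "wide port: ${TOTAL} MB/s, half: ${HALF} MB/s"
```

This prints "wide port: 1200 MB/s, half: 600 MB/s" — the ~600MB/sec ceiling matches half the cable's nominal bandwidth, consistent with the observation above.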
I've been seeing a similar issue with an LSI 1068 connected to a Sun J4400
JBOD. Solaris can drive a single 4-port PHY (12Gb/s) at about 1000 MB/sec,
but Linux maxes out at 830 MB/sec running the 2.6.32 included with RHEL 6
beta. As I write to more disks at the same time, the throughput (MB/sec) per
disk starts dropping after the 7th disk:

#disks  thruput  tpdelta  tp/disk  expected tp
     1       82        0    82.00           82
     2      165       83    82.50          164
     3      248       83    82.67          246
     4      331       83    82.75          328
     5      413       82    82.60          410
     6      494       81    82.33          492
     7      578       84    82.57          574
     8      623       45    77.88          656
     9      643       20    71.44          738
    10      659       16    65.90          820
    11      671       12    61.00          902
    12      681       10    56.75          984

I'll do some tests with 4.22.00.00 now.
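A table like the one above can be generated by running dd against N targets in parallel and summing the per-target rates. This is a sketch only: in the real test the targets would be the JBOD's block devices (e.g. /dev/sdb and onward — device names are hypothetical here), and plain files are used instead so the sketch is harmless to run.

```shell
# Write to N targets in parallel; each dd's final stderr line carries
# its transfer rate. Swap "target$i" for real disks to run the actual test.
N=${N:-2}
SIZE_MB=${SIZE_MB:-4}
for i in $(seq 1 "$N"); do
    dd if=/dev/zero of="target$i" bs=1M count="$SIZE_MB" conv=fdatasync 2>"dd$i.log" &
done
wait
grep -h copied dd*.log   # one rate line per target; sum for total, /N for tp/disk
rm -f target* dd*.log
```

Repeating the run for N = 1..12 and tabulating total and per-target rates reproduces the shape of the table: linear scaling until a shared-link or driver bottleneck flattens the total.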
albert, kashyap: any news?
I found 4.22.00.00 to be a bit slower than the driver that comes with RHEL 6's 2.6.32. I would really like to understand what is causing the throughput per disk to drop when one starts writing to more than 7 disks at the same time over one PHY.
kashyap: almost a year ago, you wrote "I will post my findings once I am done with repro." could you please give your customers a status update?
kashyap?
This is probably a long shot, but does https://lkml.org/lkml/2011/5/27/367 help you?
I see that change isn't in the latest Git?

http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob_plain;f=drivers/message/fusion/mptbase.h;hb=6bc2b95ee602659c1be6fac0f6aadeb0c5c29a5d

Is there a reason it didn't go in?
I've asked a few times, but LSI has never picked it up. I can try linux-scsi again, but it'd be useful to have at least a few supporters to confirm that it doesn't break anything. :)
I'll do a test on Friday and report back.
(In reply to comment #14)
> I've asked a few times, but LSI has never picked it up. I can try linux-scsi
> again, but it'd be useful to have at least a few supporters to confirm that
> it doesn't break anything. :)

I have now moved to another role inside LSI, so I could not focus on this.
Sorry for the delay. My colleague "Lakshmi" is looking into this issue and
she needs help to reproduce it at LSI. Lately, I was also not able to
reproduce this issue due to h/w component availability. If someone can help
Lakshmi reproduce this issue locally, LSI can look at the possible root
cause.

BTW, can you compare it with the latest "upstream" vs "4.18.00.00"?
Nobody seems to be getting back on this, so closing.