Bug 56771
| Summary: | One random read streaming is fast (~1200MB/s), but two or more are slower (~750MB/s)? | | |
|---|---|---|---|
| Product: | File System | Reporter: | Matt Pursley (mpursley) |
| Component: | btrfs | Assignee: | Josef Bacik (josef) |
| Status: | RESOLVED OBSOLETE | | |
| Severity: | high | CC: | dsterba, josef, mfasheh, szg00000 |
| Priority: | P1 | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Kernel Version: | 3.9.0-rc4 | Subsystem: | |
| Regression: | No | Bisected commit-id: | |
Description
Matt Pursley
2013-04-17 23:22:36 UTC
Here are the results of making and reading back a 13 GB file on "mdraid6 + ext4", "mdraid6 + btrfs", and "btrfsraid6 + btrfs". They seem to show that:

1) "mdraid6 + ext4" can do ~1100 MB/s for these sequential reads with either one or two files at once.
2) "btrfsraid6 + btrfs" can do ~1100 MB/s for sequential reads with one file at a time, but only ~750 MB/s with two (or more).
3) "mdraid6 + btrfs" can only do ~750 MB/s for these sequential reads with either one or two files at once.

So the speed drop seems to be related more to the btrfs file system than to the experimental raid. Although it is interesting that btrfs can only do the full ~1100 MB/s with a single file on btrfsraid6, but not on mdraid6. Anyway, just some more info and reproducible results...

Thanks,
Matt

___ mdraid6 + ext4 ___

```
kura1 / # mount | grep -i /var/data
/dev/md0 on /var/data type ext4 (rw)
kura1 / # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] [linear] [multipath]
md0 : active raid6 sdm[11] sdl[10] sdk[9] sdj[8] sdi[7] sdh[6] sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
      29302650880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [12/12] [UUUUUUUUUUUU]
      [>....................]  resync =  0.0% (2731520/2930265088) finish=47268.1min speed=1031K/sec
unused devices: <none>

## Create two 13GB testfiles...
kura1 / # sysctl vm.drop_caches=1 ; dd if=/dev/zero of=/var/data/persist/testfile1 bs=640k count=20000 conv=fdatasync
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 47.27 s, 277 MB/s
kura1 / # sysctl vm.drop_caches=1 ; dd if=/dev/zero of=/var/data/persist/testfile2 bs=640k count=20000 conv=fdatasync
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 47.0237 s, 279 MB/s

## Read back one testfile... ~1300 MB/s
kura1 / # sysctl vm.drop_caches=1 ; dd of=/dev/null if=/var/data/persist/testfile1 bs=640k
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 10.3469 s, 1.3 GB/s
kura1 / # sysctl vm.drop_caches=1 ; dd of=/dev/null if=/var/data/persist/testfile1 bs=640k
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 10.0073 s, 1.3 GB/s
kura1 / # sysctl vm.drop_caches=1 ; dd of=/dev/null if=/var/data/persist/testfile1 bs=640k
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 10.69 s, 1.2 GB/s

## Read back the two testfiles at the same time.. ~1100MB/s
kura1 / # (sysctl vm.drop_caches=1 ; dd of=/dev/null if=/var/data/persist/testfile1 bs=640k) & (sysctl vm.drop_caches=1 ; dd of=//dev/null if=/var/data/persist/testfile2 bs=640k) & wait
vm.drop_caches = 1
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 24.4988 s, 535 MB/s
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 24.591 s, 533 MB/s
kura1 / # (sysctl vm.drop_caches=1 ; dd of=/dev/null if=/var/data/persist/testfile1 bs=640k) & (sysctl vm.drop_caches=1 ; dd of=//dev/null if=/var/data/persist/testfile2 bs=640k) & wait
vm.drop_caches = 1
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 24.7013 s, 531 MB/s
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 24.7016 s, 531 MB/s
kura1 / # (sysctl vm.drop_caches=1 ; dd of=/dev/null if=/var/data/persist/testfile1 bs=640k) & (sysctl vm.drop_caches=1 ; dd of=//dev/null if=/var/data/persist/testfile2 bs=640k) & wait
vm.drop_caches = 1
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 24.5512 s, 534 MB/s
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 24.8276 s, 528 MB/s
```

___ mdraid6 + btrfs ___

```
kura1 ~ # mount | grep -i /var/data
/dev/md0 on /var/data type btrfs (rw,noatime)
kura1 ~ # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] [linear] [multipath]
md0 : active raid6 sdm[11] sdl[10] sdk[9] sdj[8] sdi[7] sdh[6] sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
      29302650880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [12/12] [UUUUUUUUUUUU]
      [>....................]  resync =  0.0% (1917184/2930265088) finish=44415.7min speed=1098K/sec
unused devices: <none>
kura1 ~ # btrfs filesystem show
failed to open /dev/sr0: No medium found
Label: none  uuid: 5eb756b5-03a1-4d06-8e91-0f683a763a88
        Total devices 1 FS bytes used 448.00KB
        devid    1 size 27.29TB used 2.04GB path /dev/md0

Label: none  uuid: 4546715c-8948-42b3-b529-a1c9cd175c2e
        Total devices 12 FS bytes used 80.74GB
        devid   12 size 2.73TB used 9.35GB path /dev/sdm
        devid   11 size 2.73TB used 9.35GB path /dev/sdl
        devid   10 size 2.73TB used 9.35GB path /dev/sdk
        devid    9 size 2.73TB used 9.35GB path /dev/sdj
        devid    8 size 2.73TB used 9.35GB path /dev/sdi
        devid    7 size 2.73TB used 9.35GB path /dev/sdh
        devid    6 size 2.73TB used 9.35GB path /dev/sdg
        devid    5 size 2.73TB used 9.35GB path /dev/sdf
        devid    4 size 2.73TB used 9.35GB path /dev/sde
        devid    3 size 2.73TB used 9.35GB path /dev/sdd
        devid    2 size 2.73TB used 9.35GB path /dev/sdc
        devid    1 size 2.73TB used 9.37GB path /dev/sdb

Btrfs v0.20-rc1-253-g7854c8b

## Create two 13GB testfiles...
kura1 ~ # sysctl vm.drop_caches=1 ; dd if=/dev/zero of=/var/data/persist/testfile1 bs=640k count=20000 conv=fdatasync
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 34.2789 s, 382 MB/s
kura1 ~ # sysctl vm.drop_caches=1 ; dd if=/dev/zero of=/var/data/persist/testfile2 bs=640k count=20000 conv=fdatasync
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 43.2937 s, 303 MB/s

## Read back one testfile... ~750 MB/s
kura1 ~ # sysctl vm.drop_caches=1 ; dd of=/dev/null if=/var/data/persist/testfile1 bs=640k
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 16.7785 s, 781 MB/s
kura1 ~ # sysctl vm.drop_caches=1 ; dd of=/dev/null if=/var/data/persist/testfile1 bs=640k
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 18.1361 s, 723 MB/s
kura1 ~ # sysctl vm.drop_caches=1 ; dd of=/dev/null if=/var/data/persist/testfile1 bs=640k
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 19.1985 s, 683 MB/s

## Read back the two testfiles at the same time.. ~750MB/s
kura1 ~ # (sysctl vm.drop_caches=1 ; dd of=/dev/null if=/var/data/persist/testfile1 bs=640k) & (sysctl vm.drop_caches=1 ; dd of=//dev/null if=/var/data/persist/testfile2 bs=640k) & wait
vm.drop_caches = 1
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 30.8396 s, 425 MB/s
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 35.5478 s, 369 MB/s
kura1 ~ # (sysctl vm.drop_caches=1 ; dd of=/dev/null if=/var/data/persist/testfile1 bs=640k) & (sysctl vm.drop_caches=1 ; dd of=//dev/null if=/var/data/persist/testfile2 bs=640k) & wait
vm.drop_caches = 1
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 34.6504 s, 378 MB/s
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 35.7795 s, 366 MB/s
kura1 ~ # (sysctl vm.drop_caches=1 ; dd of=/dev/null if=/var/data/persist/testfile1 bs=640k) & (sysctl vm.drop_caches=1 ; dd of=//dev/null if=/var/data/persist/testfile2 bs=640k) & wait
vm.drop_caches = 1
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 36.9101 s, 355 MB/s
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 37.7395 s, 347 MB/s
```

___ btrfsraid6 + btrfs ___

```
kura1 ~ # mount | grep -i /var/data
/dev/sdl on /var/data type btrfs (rw,noatime)
kura1 ~ # btrfs filesystem show
failed to open /dev/sr0: No medium found
Label: none  uuid: 4546715c-8948-42b3-b529-a1c9cd175c2e
        Total devices 12 FS bytes used 80.74GB
        devid   12 size 2.73TB used 9.35GB path /dev/sdm
        devid   11 size 2.73TB used 9.35GB path /dev/sdl
        devid   10 size 2.73TB used 9.35GB path /dev/sdk
        devid    9 size 2.73TB used 9.35GB path /dev/sdj
        devid    8 size 2.73TB used 9.35GB path /dev/sdi
        devid    7 size 2.73TB used 9.35GB path /dev/sdh
        devid    6 size 2.73TB used 9.35GB path /dev/sdg
        devid    5 size 2.73TB used 9.35GB path /dev/sdf
        devid    4 size 2.73TB used 9.35GB path /dev/sde
        devid    3 size 2.73TB used 9.35GB path /dev/sdd
        devid    2 size 2.73TB used 9.35GB path /dev/sdc
        devid    1 size 2.73TB used 9.37GB path /dev/sdb

Btrfs v0.20-rc1-253-g7854c8b

## Create two 13GB testfiles...
kura1 data # sysctl vm.drop_caches=1 ; dd if=/dev/zero of=/var/data/persist/testfile2 bs=640k count=20000 conv=fdatasync
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 21.5018 s, 610 MB/s
kura1 data # sysctl vm.drop_caches=1 ; dd if=/dev/zero of=/var/data/persist/testfile1 bs=640k count=20000 conv=fdatasync
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 21.3389 s, 614 MB/s

## Read back one testfile... ~1100 MB/s
kura1 data # sysctl vm.drop_caches=1 ; dd of=/dev/null if=/var/data/persist/testfile1 bs=640k
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 11.8312 s, 1.1 GB/s
kura1 data # sysctl vm.drop_caches=1 ; dd of=/dev/null if=/var/data/persist/testfile1 bs=640k
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 11.7888 s, 1.1 GB/s
kura1 data # sysctl vm.drop_caches=1 ; dd of=/dev/null if=/var/data/persist/testfile1 bs=640k
vm.drop_caches = 1
20000+0 records in
20000+0 records out
20000+0 records out
13107200000 bytes (13 GB) copied, 41.4113 s, 317 MB/s
kura1 data # (sysctl vm.drop_caches=1 ; dd of=/dev/null if=/var/data/persist/testfile1 bs=640k) & (sysctl vm.drop_caches=1 ; dd of=/dev/null if=/var/data/persist/testfile2 bs=640k) & wait
[1] 19482
[2] 19483
vm.drop_caches = 1
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 36.0124 s, 364 MB/s
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 36.2298 s, 362 MB/s
kura1 data # (sysctl vm.drop_caches=1 ; dd of=/dev/null if=/var/data/persist/testfile1 bs=640k) & (sysctl vm.drop_caches=1 ; dd of=/dev/null if=/var/data/persist/testfile2 bs=640k) & wait
[1] 19500
[2] 19501
vm.drop_caches = 1
vm.drop_caches = 1
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 35.4703 s, 370 MB/s
20000+0 records in
20000+0 records out
13107200000 bytes (13 GB) copied, 35.7789 s, 366 MB/s
[1]-  Done   ( sysctl vm.drop_caches=1; dd of=/dev/null if=/var/data/persist/testfile1 bs=640k )
[2]+  Done   ( sysctl vm.drop_caches=1; dd of=/dev/null if=/var/data/persist/testfile2 bs=640k )
```

_____

So I tried to reproduce this and my combined values added up to the single-threaded case. Can you run perf record -ag and see if that shows anything big for the multi-threaded case?
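The requested capture can be scripted so the profiling window lines up with the two-reader workload. A minimal sketch, assuming the testfile paths from the transcripts above; the helper name `capture_two_threads.sh`, the output file names, and the 20-second sampling window are made-up placeholders, and the script must run as root:

```shell
# Generate a helper script (name is illustrative) that runs the two-reader
# dd workload while recording system-wide call-graph samples with perf and
# firing a few spread-out sysrq-w dumps, whose blocked-task stacks land in
# dmesg.
cat > capture_two_threads.sh <<'EOF'
#!/bin/sh
# Must run as root (sysctl, sysrq-trigger, system-wide perf).
sysctl vm.drop_caches=1

dd of=/dev/null if=/var/data/persist/testfile1 bs=640k &
dd of=/dev/null if=/var/data/persist/testfile2 bs=640k &

# -a: all CPUs, -g: call graphs; sample while the readers are busy.
perf record -a -g -o perf.data.two-threads -- sleep 20 &

for n in 1 2 3; do
    sleep 5
    echo w > /proc/sysrq-trigger   # dump tasks in uninterruptible sleep
done

wait
dmesg > dmesg.two-threads.txt      # keep the sysrq-w output with the profile
EOF
chmod +x capture_two_threads.sh
```

Afterwards `perf report -i perf.data.two-threads` shows the hot call chains, and the dmesg file holds the stacks that dd and the btrfs completion threads were blocked in.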
Also some sysrq+w a few times (spread out) during the multi-threaded run would be good so I can see what is going on.

(In reply to comment #2)
> So I tried to reproduce this and my combined values added up to the single
> threaded case. Can you run perf record -ag and see if that shows anything big
> for the multi-threaded case? Also some sysrq+w a few times (spread out) during
> the multi-threaded run would be good so I can see what is going on.

Ok, I will try that.. Thanks, Matt

Also, here are the results of a multi-drive test that I just emailed you.. Thanks Josef, Matt

---------- Forwarded message ----------
From: Matt Pursley <mpursley@gmail.com>
Date: Thu, May 2, 2013 at 11:51 AM
Subject: Re: One random read streaming is fast (~1200MB/s), but two or more are slower (~750MB/s)?
To: Josef Bacik <jbacik@fusionio.com>
Cc: "linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>

Hey Josef,

Were you able to try this multi-thread test on any more drives? I did a test with 12, 6, 3, and 1 drive, and it looks like the multi-thread speed reduction grows as the number of drives in the raid goes up. Like this:

- 50% speed reduction with 2 threads on 12 drives
- 25% speed reduction with 2 threads on 6 drives
- 10% speed reduction with 2 threads on 3 drives
- 5% speed reduction with 2 threads on 1 drive

I only have 12 slots on my HBA card, but I wonder if 24 drives would reduce the speed to 25% with 2 threads?

Matt

___ make btrfs fs... ___

```
## 12 drives...
mkfs.btrfs -f -d raid6 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl

## 6 drives...
mkfs.btrfs -f -d raid6 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

## 3 drives...
mkfs.btrfs -f -d raid5 /dev/sda /dev/sdb /dev/sdc

## 1 drive...
mkfs.btrfs -f /dev/sda

mount /dev/sda /tmp/btrfs_test/
```

___ make zero files... ___
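The per-layout procedure in the email (mkfs, mount, write zero files, read back with one and then two threads) can be wrapped in a single helper so every drive count is measured identically. This is a sketch only: the `run_config` name is hypothetical, and the mount point and file sizes are taken from the report as placeholders; it must run as root and will destroy the named devices.

```shell
# run_config: mkfs.btrfs with the given arguments, mount one member device,
# write two zero files, then read them back with 1 and then 2 concurrent
# dd readers, dropping the page cache before each pass.
run_config() {
    mkfs.btrfs -f "$@" || return 1
    for dev in "$@"; do :; done          # $dev ends up as the last argument, a member device
    mount "$dev" /tmp/btrfs_test/ || return 1
    for j in 1 2; do
        dd if=/dev/zero of=/tmp/btrfs_test/testfile_$j bs=1M count=10000 conv=fdatasync
    done
    for threads in 1 2; do
        sysctl vm.drop_caches=1          # make the reads hit the disks, not the cache
        j=1
        while [ "$j" -le "$threads" ]; do
            dd of=/dev/null if=/tmp/btrfs_test/testfile_$j bs=1M &
            j=$((j + 1))
        done
        wait                             # all readers finish before the next pass
    done
    umount /tmp/btrfs_test/
}

# Example invocations matching the email:
# run_config -d raid6 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
# run_config -d raid5 /dev/sda /dev/sdb /dev/sdc
# run_config /dev/sda
```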
```
kura1 ~ # for j in {1..2} ; do dd if=/dev/zero of=/tmp/btrfs_test/testfile_bs1m_size10GB_${j} bs=1M count=10000 conv=fdatasync & done
```

___ btrfs raid6 on 12 drives with 2 threads = ~650MB/s ___

```
kura1 btrfs_test # sysctl vm.drop_caches=1 ; for j in {1..2} ; do dd of=/dev/null if=/tmp/btrfs_test/testfile_bs1m_size10GB_${j} bs=1M & done
vm.drop_caches = 1
10485760000 bytes (10 GB) copied, 31.0431 s, 338 MB/s
10485760000 bytes (10 GB) copied, 31.2235 s, 336 MB/s
kura1 btrfs_test # sysctl vm.drop_caches=1 ; for j in {1..2} ; do dd of=/dev/null if=/tmp/btrfs_test/testfile_bs1m_size10GB_${j} bs=1M & done
10485760000 bytes (10 GB) copied, 29.869 s, 351 MB/s
10485760000 bytes (10 GB) copied, 30.5561 s, 343 MB/s
```

___ btrfs raid6 on 12 drives with 1 thread = ~1100MB/s ___

```
kura1 btrfs_test # sysctl vm.drop_caches=1 ; for j in {1..1} ; do dd of=/dev/null if=/tmp/btrfs_test/testfile_bs1m_size10GB_${j} bs=1M & done
10485760000 bytes (10 GB) copied, 9.69881 s, 1.1 GB/s
kura1 btrfs_test # sysctl vm.drop_caches=1 ; for j in {1..1} ; do dd of=/dev/null if=/tmp/btrfs_test/testfile_bs1m_size10GB_${j} bs=1M & done
10485760000 bytes (10 GB) copied, 9.56475 s, 1.1 GB/s
```

___ btrfs raid6 on 6 drives with 2 threads = ~500MB/s ___

```
kura1 btrfs_test # sysctl vm.drop_caches=1 ; for j in {1..2} ; do dd of=/dev/null if=/tmp/btrfs_test/testfile_bs1m_size10GB_${j} bs=1M & done
10485760000 bytes (10 GB) copied, 41.3899 s, 253 MB/s
10485760000 bytes (10 GB) copied, 41.6916 s, 252 MB/s
kura1 btrfs_test # sysctl vm.drop_caches=1 ; for j in {1..2} ; do dd of=/dev/null if=/tmp/btrfs_test/testfile_bs1m_size10GB_${j} bs=1M & done
10485760000 bytes (10 GB) copied, 40.3178 s, 260 MB/s
10485760000 bytes (10 GB) copied, 41.4087 s, 253 MB/s
```

___ btrfs raid6 on 6 drives with 1 thread = ~600MB/s ___

```
kura1 btrfs_test # sysctl vm.drop_caches=1 ; for j in {1..1} ; do dd of=/dev/null if=/tmp/btrfs_test/testfile_bs1m_size10GB_${j} bs=1M & done
10485760000 bytes (10 GB) copied, 17.5686 s, 597 MB/s
kura1 btrfs_test # sysctl vm.drop_caches=1 ; for j in {1..1} ; do dd of=/dev/null if=/tmp/btrfs_test/testfile_bs1m_size10GB_${j} bs=1M & done
10485760000 bytes (10 GB) copied, 17.5396 s, 598 MB/s
```

___ btrfs raid5 on 3 drives with 2 threads = ~300MB/s ___

```
kura1 btrfs_test # sysctl vm.drop_caches=1 ; for j in {1..2} ; do dd of=/dev/null if=/tmp/btrfs_test/testfile_bs1m_size10GB_${j} bs=1M & done
10485760000 bytes (10 GB) copied, 67.636 s, 155 MB/s
10485760000 bytes (10 GB) copied, 70.1783 s, 149 MB/s
kura1 btrfs_test # sysctl vm.drop_caches=1 ; for j in {1..2} ; do dd of=/dev/null if=/tmp/btrfs_test/testfile_bs1m_size10GB_${j} bs=1M & done
10485760000 bytes (10 GB) copied, 69.4945 s, 151 MB/s
10485760000 bytes (10 GB) copied, 70.8279 s, 148 MB/s
```

___ btrfs raid5 on 3 drives with 1 thread = ~319MB/s ___

```
kura1 btrfs_test # sysctl vm.drop_caches=1 ; for j in {1..1} ; do dd of=/dev/null if=/tmp/btrfs_test/testfile_bs1m_size10GB_${j} bs=1M & done
10485760000 bytes (10 GB) copied, 32.8559 s, 319 MB/s
kura1 btrfs_test # sysctl vm.drop_caches=1 ; for j in {1..1} ; do dd of=/dev/null if=/tmp/btrfs_test/testfile_bs1m_size10GB_${j} bs=1M & done
10485760000 bytes (10 GB) copied, 32.8483 s, 319 MB/s
```

___ btrfs (no raid) on 1 drive with 2 threads = ~155MB/s ___

```
kura1 btrfs_test # sysctl vm.drop_caches=1 ; for j in {1..2} ; do dd of=/dev/null if=/tmp/btrfs_test/testfile_bs1m_size10GB_${j} bs=1M & done
10485760000 bytes (10 GB) copied, 134.982 s, 77.7 MB/s
10485760000 bytes (10 GB) copied, 135.237 s, 77.5 MB/s
kura1 btrfs_test # sysctl vm.drop_caches=1 ; for j in {1..2} ; do dd of=/dev/null if=/tmp/btrfs_test/testfile_bs1m_size10GB_${j} bs=1M & done
10485760000 bytes (10 GB) copied, 134.549 s, 77.9 MB/s
10485760000 bytes (10 GB) copied, 135.293 s, 77.5 MB/s
```

___ btrfs (no raid) on 1 drive with 1 thread = ~162MB/s ___

```
kura1 btrfs_test # sysctl vm.drop_caches=1 ; for j in {1..1} ; do dd of=/dev/null if=/tmp/btrfs_test/testfile_bs1m_size10GB_${j} bs=1M & done
10485760000 bytes (10 GB) copied, 64.5931 s, 162 MB/s
kura1 btrfs_test # sysctl vm.drop_caches=1 ; for j in {1..1} ; do dd of=/dev/null if=/tmp/btrfs_test/testfile_bs1m_size10GB_${j} bs=1M & done
10485760000 bytes (10 GB) copied, 64.6299 s, 162 MB/s
```

On Fri, Apr 26, 2013 at 4:21 PM, Matt Pursley <mpursley@gmail.com> wrote:
> Hey Josef,
>
> Thanks for looking into this further! That is about the same results that I
> was seeing, though I didn't test it with just one drive.. only with all 12
> drives in my jbod. I will do a test with just one disk, and see if I also
> get the same results.
>
> Let me know if you also see the same results with multiple drives in your
> raid...
>
> Thanks,
> Matt
>
> On Thu, Apr 25, 2013 at 2:10 PM, Josef Bacik <jbacik@fusionio.com> wrote:
>> On Thu, Apr 25, 2013 at 03:01:18PM -0600, Matt Pursley wrote:
>>> Ok, awesome, let me know how it goes.. I don't have the raid formatted to
>>> btrfs right now, but I could probably do that in about 30 minutes or so.
>>>
>> Huh, so I'm getting the full bandwidth, 120 MB/s with one thread and 60 MB/s
>> with two threads. These are just cheap sata drives tho, I'll try and dig up
>> a box with 3 fusion cards for something a little closer to the speeds you
>> are seeing and see if that makes a difference. Thanks,
>>
>> Josef

Ok, here's a targz file with "perf_12drives_raid6_one-threads" and "perf_12drives_raid6_two-threads" tests. See anything wrong in there?

https://docs.google.com/file/d/0BxdIbDDheBeHcjVzc1pqNEstZ28/edit?usp=sharing

Thanks, Matt

Ok, looks like just ye olde lock contention between the completion threads and dd. I'll work some stuff up to try and address the low-hanging fruit and let you test it to see how it helps.

So Miao just did a whole bunch of work to help this lock contention and his work is in btrfs-next, could you build and test that and see if the performance is better?
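As a sanity check on the drive-count scaling, the aggregate two-thread throughput can be compared with the single-thread rate using the dd figures above. The awk sketch below hard-codes those measured MB/s values; the computed reductions come out somewhat lower than the rounded 50/25/10/5% estimates in the forwarded email, but show the same trend:

```shell
# Compare combined 2-thread throughput against the 1-thread rate for each
# drive count, using the MB/s values measured in the transcripts above.
report=$(awk '
function check(drives, single, t1, t2,    comb) {
    comb = t1 + t2
    printf "%2d drives: single %6.1f MB/s, combined %6.1f MB/s, reduction %4.1f%%\n", \
           drives, single, comb, 100 * (1 - comb / single)
}
BEGIN {
    check(12, 1100, 338, 336)    # raid6, 12 drives
    check(6,   597, 253, 252)    # raid6, 6 drives
    check(3,   319, 155, 149)    # raid5, 3 drives
    check(1,   162, 77.7, 77.5)  # single drive
}')
echo "$report"
```

The loss is ~39% of aggregate bandwidth with 2 threads on 12 drives versus ~4% on a single drive, consistent with the observation that the penalty grows with drive count even if the exact percentages differ.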
This is a semi-automated bugzilla cleanup; the report is against an old kernel version. If the problem still happens, please open a new bug. Thanks.