Bug 1556

Summary: Promise TX2-Ultra100 PDC20268 lockups
Product: IO/Storage
Reporter: Thomas M (etwcn)
Component: IDE
Assignee: Bartlomiej Zolnierkiewicz (bzolnier)
Status: REJECTED UNREPRODUCIBLE
Severity: blocking
CC: bunk
Priority: P2
Hardware: i386
OS: Linux
Kernel Version: 2.4.22
Subsystem:
Regression: ---
Bisected commit-id:

Description Thomas M 2003-11-18 15:32:54 UTC
Distribution: Debian Woody
Hardware Environment: AMD-Athlon, 2xPromise Ultra 100 Tx2
Software Environment: 2.4.22 from kernel.org, no additional patches, no X
Problem Description: I get hard lockups when transferring large files to or from
disks attached to the Promise controllers. The problem disappears when DMA is
disabled. Before the lockup, a rising ERR counter can be observed in
/proc/interrupts (about 100-200 per second). The problem shows up especially
when more than one attached disk drive is accessed simultaneously. On my system
this is always the case, since the two controllers together have four drives
attached (two each) which form a software RAID 5. There are no messages in any
log I have searched. When the lockup happens, the Num Lock key on the keyboard
does not respond.

Steps to reproduce: Attach at least two drives to a Promise Ultra100 Tx2
controller. Make sure DMA mode is on. (Setting up a software RAID 5 over the
disks is probably optional.) Transfer large files (>>100MB) so that _both_
drives are used simultaneously.
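The steps above can be sketched as a short shell session. The device names
(/dev/hda, /dev/hdc) are hypothetical placeholders for the drives on the
Promise controller; adjust them to your setup. This is an illustrative sketch
for reproducing the lockup, not a command sequence from the original report,
and it requires the hdparm utility.

```shell
# Confirm DMA is enabled on both Promise-attached drives
# ("using_dma = 1 (on)" means DMA is active):
hdparm -d /dev/hda /dev/hdc

# Start large simultaneous reads on both drives:
dd if=/dev/hda of=/dev/null bs=1M count=500 &
dd if=/dev/hdc of=/dev/null bs=1M count=500 &

# In another terminal, watch for a rising ERR counter before the lockup:
watch -n1 'grep "ERR:" /proc/interrupts'
```

Reading raw devices with dd avoids any filesystem layer, so the transfers
exercise the controller's DMA path directly.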
Comment 1 Adrian Bunk 2005-07-04 18:50:28 UTC
Does this problem still happen with recent 2.4 and 2.6 kernels?
Comment 2 Thomas M 2005-07-05 01:44:00 UTC
I don't know. Since the problem was insistently ignored, I was forced to buy
two new controllers from a different brand. Over time I have, however, received
two or three emails from people with the same problem who had discovered this
bug report and asked whether I had found a solution. The only thing I could
tell them was that I would never buy anything from Promise again.
Comment 3 Adrian Bunk 2005-07-05 10:24:18 UTC
Thanks for this information.

First of all, sorry that your problem wasn't handled in time.

I'm closing this bug.

If anyone contacts you again regarding this bug, please ask them to update this
report, and we may be able to investigate what was causing it.