Bug 9790
| Summary: | strange USB related problem | | |
|---|---|---|---|
| Product: | Drivers | Reporter: | Serge Gavrilov (serge) |
| Component: | USB | Assignee: | Greg Kroah-Hartman (greg) |
| Status: | CLOSED CODE_FIX | | |
| Severity: | normal | CC: | stern |
| Priority: | P1 | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Kernel Version: | 2.6.23 and above | Subsystem: | |
| Regression: | Yes | Bisected commit-id: | |
| Bug Depends on: | | | |
| Bug Blocks: | 5089 | | |
Description
Serge Gavrilov
2008-01-21 10:23:45 UTC
Sorry, I gave a wrong description of the scanner:

Bus 001 Device 008: ID 04b0:4001 Nikon Corp.

Also, I forgot to say that I tried using the scanner as a USB 1.1 device (plugging it into a USB 1.1 hub) and got the same result.

Reply-To: akpm@linux-foundation.org

On Mon, 21 Jan 2008 10:23:48 -0800 (PST) bugme-daemon@bugzilla.kernel.org wrote:
> http://bugzilla.kernel.org/show_bug.cgi?id=9790
> Summary: strange USB related problem

Regression...

When things are stuck in D state, please hit ALT-SYSRQ-W or type "echo w >
/proc/sysrq-trigger" and then send us (via reply-to-all to this email) the
resulting dmesg output, thanks.

This is with gentoo 2.6.23 kernel:
Sched Debug Version: v0.05-v20, 2.6.23-gentoo-r5 #2
now at 9040171717158 nsecs
cpu#0, 1869.895 MHz
.nr_running : 3
.load : 5169
.ls.delta_fair : 664772323
.ls.delta_exec : 4091541087
.nr_switches : 10544602
.nr_load_updates : 4325482
.nr_uninterruptible : 4294961682
.jiffies : 8740172
.next_balance : 8740166
.curr->pid : 0
.clock : 4324829045057
.idle_clock : 0
.prev_clock_raw : 9066780320360
.clock_warps : 0
.clock_overflows : 4715123
.clock_deep_idle_events : 0
.clock_max_delta : 999848
.cpu_load[0] : 0
.cpu_load[1] : 0
.cpu_load[2] : 0
.cpu_load[3] : 0
.cpu_load[4] : 0
cfs_rq
.fair_clock : 1056766578052
.exec_clock : 1060302060293
.wait_runtime : 0
.wait_runtime_overruns : 0
.wait_runtime_underruns : 0
.sleeper_bonus : 10510116
.wait_runtime_rq_sum : 40047342
runnable tasks:
task PID tree-key delta waiting switches
prio sum-exec sum-wait sum-sleep wait-overrun
wait-underrun
------------------------------------------------------------------------------------------------------------------------------------------------------------------
events/0 9 1056753454054 -13123998 40000000 17253
115 0 0 0 0
0
hald-addon-keyb 10512 1056726578052 -40000000 40000000 6512
120 0 0 0 0
0
X 12835 1056806530710 39952658 -39952658 1337718
120 0 0 0 0
0
cpu#1, 1869.895 MHz
.nr_running : 1
.load : 1024
.ls.delta_fair : 1024090210
.ls.delta_exec : 653910632
.nr_switches : 8659200
.nr_load_updates : 4150215
.nr_uninterruptible : 5620
.jiffies : 8740172
.next_balance : 8740246
.curr->pid : 10053
.clock : 4149587041612
.idle_clock : 0
.prev_clock_raw : 9066779873123
.clock_warps : 0
.clock_overflows : 4345713
.clock_deep_idle_events : 0
.clock_max_delta : 999848
.cpu_load[0] : 1024
.cpu_load[1] : 522
.cpu_load[2] : 331
.cpu_load[3] : 262
.cpu_load[4] : 226
cfs_rq
.fair_clock : 949189481552
.exec_clock : 948539490194
.wait_runtime : 0
.wait_runtime_overruns : 0
.wait_runtime_underruns : 0
.sleeper_bonus : 3994179
.wait_runtime_rq_sum : 40000000
runnable tasks:
task PID tree-key delta waiting switches
prio sum-exec sum-wait sum-sleep wait-overrun
wait-underrun
------------------------------------------------------------------------------------------------------------------------------------------------------------------
R syslog-ng 10053 949149481552 -40000000 40000000 1209
120 0 0 0 0
0
SysRq : Show Blocked State
task PC stack pid father
syslog-ng D c066ce00 5848 10053 1
dfa75c00 00000046 c066ce00 c066ce00 00000282 dfa75be0 c046c0f9 dfdb7780
c066ce00 c2071080 dfbe2c70 00000000 00857e29 00000282 dfa75c10 00857e29
dfa75c70 dfa75c38 c0469d66 00000046 c058a120 d8adfdc4 ceb5dc10 00857e29
Call Trace:
[<c0469d66>] schedule_timeout+0x46/0x90
[<c0469cee>] io_schedule_timeout+0x1e/0x30
[<c016fdcb>] congestion_wait+0x7b/0xa0
[<c016a0ce>] balance_dirty_pages+0xae/0x170
[<c016a277>] balance_dirty_pages_ratelimited_nr+0x97/0xb0
[<c0166320>] generic_file_buffered_write+0x2b0/0x660
[<c0166917>] __generic_file_aio_write_nolock+0x247/0x560
[<c0166d4a>] generic_file_aio_write+0x5a/0xd0
[<c01fd04d>] ext3_file_write+0x2d/0xc0
[<c0187577>] do_sync_write+0xc7/0x120
[<c0187730>] vfs_write+0x160/0x170
[<c01877ed>] sys_write+0x3d/0x70
[<c01043a2>] sysenter_past_esp+0x5f/0x99
=======================
mono D c066ce00 6464 13500 1
e08bbdb4 00200046 c066ce00 c066ce00 00200282 e08bbd94 c046c0f9 df8fe780
c066ce00 c2071080 d6e12000 00000000 00857e29 00200282 e08bbdc4 00857e29
e08bbe24 e08bbdec c0469d66 00200046 c058a120 c066cf74 e1ba9b94 00857e29
Call Trace:
[<c0469d66>] schedule_timeout+0x46/0x90
[<c0469cee>] io_schedule_timeout+0x1e/0x30
[<c016fdcb>] congestion_wait+0x7b/0xa0
[<c016a0ce>] balance_dirty_pages+0xae/0x170
[<c016a277>] balance_dirty_pages_ratelimited_nr+0x97/0xb0
[<c016a1d1>] set_page_dirty_balance+0x41/0x50
[<c0172bd6>] do_wp_page+0x256/0x470
[<c01740b9>] handle_mm_fault+0x239/0x2a0
[<c011c247>] do_page_fault+0x157/0x660
[<c046c5b2>] error_code+0x72/0x78
=======================
nautilus D c066ce00 5168 16054 16007
d40e1c00 00000046 c066ce00 c066ce00 00000282 d40e1be0 c046c0f9 d4077000
c066ce00 c2071080 d40de000 00000000 00857e29 00000282 d40e1c10 00857e29
d40e1c70 d40e1c38 c0469d66 00000046 c058a120 ceb5dc10 c066cf74 00857e29
Call Trace:
[<c0469d66>] schedule_timeout+0x46/0x90
[<c0469cee>] io_schedule_timeout+0x1e/0x30
[<c016fdcb>] congestion_wait+0x7b/0xa0
[<c016a0ce>] balance_dirty_pages+0xae/0x170
[<c016a277>] balance_dirty_pages_ratelimited_nr+0x97/0xb0
[<c0166320>] generic_file_buffered_write+0x2b0/0x660
[<c0166917>] __generic_file_aio_write_nolock+0x247/0x560
[<c0166d4a>] generic_file_aio_write+0x5a/0xd0
[<c01fd04d>] ext3_file_write+0x2d/0xc0
[<c0187577>] do_sync_write+0xc7/0x120
[<c0187730>] vfs_write+0x160/0x170
[<c01877ed>] sys_write+0x3d/0x70
[<c01043a2>] sysenter_past_esp+0x5f/0x99
=======================
python D c5b8d8d0 7084 3248 1
daf9befc 00200046 00000000 c5b8d8d0 00200046 00000000 00000000 d3ea0c80
c046a00c c2071080 f5b8ac70 c5b8d8d0 daf9befc 00200046 c5b8d894 00200246
f5b8ac70 daf9bf48 c046a0fe 00000000 00000002 c046a00c d90bc800 c5b8d8d0
Call Trace:
[<c046a0fe>] __mutex_lock_slowpath+0xde/0x2f0
[<c046a00c>] mutex_lock+0x1c/0x20
[<c0186cee>] generic_file_llseek+0x2e/0xe0
[<c0186fda>] vfs_llseek+0x3a/0x50
[<c01870dd>] sys_llseek+0x4d/0xa0
[<c01043a2>] sysenter_past_esp+0x5f/0x99
=======================
vuescan D c066ce00 4972 25367 1
d8adfdb4 00000046 c066ce00 c066ce00 00000282 d8adfd94 c046c0f9 d878ac80
c066ce00 c2071080 d8be6c70 00000000 00857e29 00000282 d8adfdc4 00857e29
d8adfe24 d8adfdec c0469d66 00000046 c058a120 e1ba9b94 dfa75c10 00857e29
Call Trace:
[<c0469d66>] schedule_timeout+0x46/0x90
[<c0469cee>] io_schedule_timeout+0x1e/0x30
[<c016fdcb>] congestion_wait+0x7b/0xa0
[<c016a0ce>] balance_dirty_pages+0xae/0x170
[<c016a277>] balance_dirty_pages_ratelimited_nr+0x97/0xb0
[<c016a1d1>] set_page_dirty_balance+0x41/0x50
[<c0172bd6>] do_wp_page+0x256/0x470
[<c01740b9>] handle_mm_fault+0x239/0x2a0
[<c011c247>] do_page_fault+0x157/0x660
[<c046c5b2>] error_code+0x72/0x78
=======================
uucico D c066ce00 5972 3174 1
ceb5dc00 00000046 c066ce00 c066ce00 00000282 ceb5dbe0 c046c0f9 c5af6000
c066ce00 c2071080 d2cdcc70 00000000 00857e29 00000282 ceb5dc10 00857e29
ceb5dc70 ceb5dc38 c0469d66 00000046 c058a120 dfa75c10 d40e1c10 00857e29
Call Trace:
[<c0469d66>] schedule_timeout+0x46/0x90
[<c0469cee>] io_schedule_timeout+0x1e/0x30
[<c016fdcb>] congestion_wait+0x7b/0xa0
[<c016a0ce>] balance_dirty_pages+0xae/0x170
[<c016a277>] balance_dirty_pages_ratelimited_nr+0x97/0xb0
[<c0166320>] generic_file_buffered_write+0x2b0/0x660
[<c0166917>] __generic_file_aio_write_nolock+0x247/0x560
[<c0166d4a>] generic_file_aio_write+0x5a/0xd0
[<c01fd04d>] ext3_file_write+0x2d/0xc0
[<c0187577>] do_sync_write+0xc7/0x120
[<c0187730>] vfs_write+0x160/0x170
[<c01877ed>] sys_write+0x3d/0x70
[<c01043a2>] sysenter_past_esp+0x5f/0x99
=======================
df D c066ce00 5364 3238 16223
e1ba9b84 00000046 c066ce00 c066ce00 00000282 e1ba9b64 c046c0f9 d878ac80
c066ce00 c2071080 d8968c70 00000000 00857e29 00000282 e1ba9b94 00857e29
e1ba9bf4 e1ba9bbc c0469d66 00000046 c058a120 e08bbdc4 d8adfdc4 00857e29
Call Trace:
[<c0469d66>] schedule_timeout+0x46/0x90
[<c0469cee>] io_schedule_timeout+0x1e/0x30
[<c016fdcb>] congestion_wait+0x7b/0xa0
[<c016a0ce>] balance_dirty_pages+0xae/0x170
[<c016a277>] balance_dirty_pages_ratelimited_nr+0x97/0xb0
[<c0166320>] generic_file_buffered_write+0x2b0/0x660
[<c0166917>] __generic_file_aio_write_nolock+0x247/0x560
[<c0166d4a>] generic_file_aio_write+0x5a/0xd0
[<c01fd04d>] ext3_file_write+0x2d/0xc0
[<c0187577>] do_sync_write+0xc7/0x120
[<c0158637>] do_acct_process+0x297/0x2d0
[<c015882a>] acct_process+0x3a/0x60
[<c012bb64>] do_exit+0x324/0x450
[<c012bce9>] do_group_exit+0x29/0x70
[<c012bd3f>] sys_exit_group+0xf/0x20
[<c01043a2>] sysenter_past_esp+0x5f/0x99
=======================
nvidia-settin D c5b051d0 6620 3245 3244
ea14dd90 00000046 00000000 c5b051d0 00000046 00000000 00000000 cc8a5c80
c046a00c c2071080 d72d6c70 c5b051d0 ea14dd90 00000046 c5b05194 00000246
d72d6c70 ea14dddc c046a0fe 00000000 00000002 c046a00c 00000000 c5b051d0
Call Trace:
[<c046a0fe>] __mutex_lock_slowpath+0xde/0x2f0
[<c046a00c>] mutex_lock+0x1c/0x20
[<c0166d37>] generic_file_aio_write+0x47/0xd0
[<c01fd04d>] ext3_file_write+0x2d/0xc0
[<c0187577>] do_sync_write+0xc7/0x120
[<c0158637>] do_acct_process+0x297/0x2d0
[<c015882a>] acct_process+0x3a/0x60
[<c012bb64>] do_exit+0x324/0x450
[<c012bce9>] do_group_exit+0x29/0x70
[<c012bd3f>] sys_exit_group+0xf/0x20
[<c01043a2>] sysenter_past_esp+0x5f/0x99
=======================
On Mon, Jan 21, 2008 at 11:41:12AM -0800, Andrew Morton wrote:
> On Mon, 21 Jan 2008 10:23:48 -0800 (PST) bugme-daemon@bugzilla.kernel.org
> wrote:
>
> > http://bugzilla.kernel.org/show_bug.cgi?id=9790
> >
> > Summary: strange USB related problem
> > Product: Drivers
> > Version: 2.5
> > KernelVersion: 2.6.23 and above
> > Platform: All
> > OS/Version: Linux
> > Tree: Mainline
> > Status: NEW
> > Severity: normal
> > Priority: P1
> > Component: USB
> > AssignedTo: greg@kroah.com
> > ReportedBy: serge@pdmi.ras.ru
> >
> >
> > Latest working kernel version: 2.6.22.12
> >
> > Earliest failing kernel version: 2.6.23.9
>
> Regression...
>
> > Distribution: Gentoo
> >
> > Hardware Environment:
> >
> > ...
> >
> > Problem Description:
> >
> > First of all, I am sorry for the stupid bug title. Perhaps you can
> > suggest a better one :).
> >
> > I use a Nikon Coolscan V film scanner to scan negative films.
> >
> > Bus 002 Device 005: ID 04b8:0110 Seiko Epson Corp. Perfection 1650
> >
> > The software is VueScan (http://www.hamrick.com). This is a closed-source,
> > commercial product; I need it because I need infrared cleaning :(. This
> > software uses the libusb library to access the scanner.
> >
> > So, after upgrading from 2.6.20 to 2.6.23 strange things began to occur.
> > After scanning approx. 30-35 frames (one frame is approximately 150 MB
> > of data) vuescan freezes (precisely, it goes into uninterruptible sleep).
> > This can happen while scanning, and it can also happen during image
> > processing (when the software is not talking to the scanner). Strace
> > shows that the last system call is msync(.., MS_ASYNC). After that,
> > other processes marked 'D' appear, and after some time the system
> > becomes unresponsive. As a workaround it is possible to kill the X
> > server; after that there are no processes marked 'D' and the system
> > seems absolutely normal. But that is not the case: after some time,
> > 'D' processes begin to appear again and the system becomes more or
> > less unresponsive. So the system ends up completely broken and I need
> > to reboot. I compiled the kernel with all possible debug options, but
> > this error is not accompanied by any kernel message :(. With 2.6.23
> > this is an absolutely reproducible result, but reproducing it takes
> > approx. 2 hours (the scanning time) :(
> >
> > I have also tested 2.6.22 and 2.6.24.
> >
> > 2.6.22 seems not to be broken.
> > With 2.6.24-rc8 the situation is worse: the system freezes completely
> > after several scans.
>
> When things are stuck in D state, please hit ALT-SYSRQ-W or type "echo w >
> /proc/sysrq-trigger" and then send us (via reply-to-all to this email) the
> resulting dmesg output, thanks.
>
Reply-To: oliver@neukum.org

On Tuesday, 22 January 2008 00:28:57, Serge Gavrilov wrote:
> vuescan

Reply-To: akpm@linux-foundation.org

> On Tue, 22 Jan 2008 09:29:27 +0100 Oliver Neukum <oliver@neukum.org> wrote:
> On Tuesday, 22 January 2008 00:28:57, Serge Gavrilov wrote:
> > vuescan

Reply-To: oliver@neukum.org

On Tuesday, 22 January 2008 09:52:00, Andrew Morton wrote:
> > On Tue, 22 Jan 2008 09:29:27 +0100 Oliver Neukum <oliver@neukum.org> wrote:
> > On Tuesday, 22 January 2008 00:28:57, Serge Gavrilov wrote:
> > > vuescan

I definitely can cp 1 GB of files from one partition to another one. Does that answer your question?

I have reproduced the same behavior with 2.6.22-gentoo-r10. In that case I did not scan, but processed raw scan files from the hard disk, so the problem is not related to USB. In any case, 2.6.22 seems to be considerably more stable with respect to this bug than 2.6.23. The bug seems to be very similar to bug 9182. I also have data=journal in fstab for the ext3 partitions.

Does the problem still exist in 2.6.24 or 2.6.25?

I think this can be closed; it looks like it was caused by the well-known I/O starvation bug that is now fixed.

Ok, am closing out.