Bug 9182 - Critical memory leak (dirty pages)
Summary: Critical memory leak (dirty pages)
Status: VERIFIED CODE_FIX
Alias: None
Product: Memory Management
Classification: Unclassified
Component: Other
Hardware: All
OS: Linux
Importance: P1 blocking
Assignee: Jan Kara
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2007-10-18 08:01 UTC by Krzysztof Oledzki
Modified: 2008-03-26 01:39 UTC
CC: 7 users

See Also:
Kernel Version: >=2.6.20-rc2
Subsystem:
Regression: Yes
Bisected commit-id:


Attachments
sysrq+M in 2.6.23.9 (1.36 KB, text/plain)
2007-12-02 06:27 UTC, Krzysztof Oledzki
Details
sysrq+D in 2.6.23.9 (2.10 KB, text/plain)
2007-12-02 06:27 UTC, Krzysztof Oledzki
Details
sysrq+P in 2.6.23.9 (19 bytes, text/plain)
2007-12-02 06:27 UTC, Krzysztof Oledzki
Details
sysrq+T in 2.6.23.9 (122.27 KB, text/plain)
2007-12-02 06:28 UTC, Krzysztof Oledzki
Details
oops taken from a IP-KVM (121.75 KB, image/jpeg)
2007-12-02 06:31 UTC, Krzysztof Oledzki
Details
grep ^Dirty: /proc/meminfo in 1kb units (22.20 KB, image/png)
2007-12-05 05:54 UTC, Krzysztof Oledzki
Details
grep ^Dirty: /proc/meminfo on 2.6.24-rc4-git7 in 1kb units (33.87 KB, image/png)
2007-12-11 07:08 UTC, Krzysztof Oledzki
Details
grep ^Dirty: /proc/meminfo on 2.6.20-rc4 in 1kb units (23.79 KB, image/png)
2007-12-15 13:21 UTC, Krzysztof Oledzki
Details
grep ^Dirty: /proc/meminfo on 2.6.20-rc2 in 1kb units (25.29 KB, image/png)
2007-12-15 13:22 UTC, Krzysztof Oledzki
Details
grep ^Dirty: /proc/meminfo on 2.6.24-rc5 without data=journal (34.79 KB, image/png)
2007-12-17 12:08 UTC, Krzysztof Oledzki
Details
ext3 dirty data accounting fix/debug patch (5.60 KB, patch)
2007-12-19 10:10 UTC, Ingo Molnar
Details | Diff

Description Krzysztof Oledzki 2007-10-18 08:01:39 UTC
Distribution: Gentoo

Hardware Environment:
00:00.0 Host bridge: Intel Corporation E7230/3000/3010 Memory Controller Hub
00:01.0 PCI bridge: Intel Corporation E7230/3000/3010 PCI Express Root Port
00:1c.0 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 1 (rev 01)
00:1c.4 PCI bridge: Intel Corporation 82801GR/GH/GHM (ICH7 Family) PCI Express Port 5 (rev 01)
00:1c.5 PCI bridge: Intel Corporation 82801GR/GH/GHM (ICH7 Family) PCI Express Port 6 (rev 01)
00:1d.0 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 (rev 01)
00:1d.1 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 (rev 01)
00:1d.2 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 (rev 01)
00:1d.7 USB Controller: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller (rev 01)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1)
00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01)
00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 01)
00:1f.2 IDE interface: Intel Corporation 82801GB/GR/GH (ICH7 Family) SATA IDE Controller (rev 01)
00:1f.3 SMBus: Intel Corporation 82801G (ICH7 Family) SMBus Controller (rev 01)
02:00.0 PCI bridge: Intel Corporation 6702PXH PCI Express-to-PCI Bridge A (rev 09)
03:02.0 Ethernet controller: Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) (rev 01)
03:02.1 Ethernet controller: Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) (rev 01)
04:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5721 Gigabit Ethernet PCI Express (rev 11)
06:05.0 VGA compatible controller: XGI Technology Inc. (eXtreme Graphics Innovation) Volari Z7

--- cat /proc/interrupts: begin ---
           CPU0       CPU1
  0:        541          0   IO-APIC-edge      timer
  1:         56          0   IO-APIC-edge      i8042
  6:          3          0   IO-APIC-edge      floppy
  8:         43          0   IO-APIC-edge      rtc
  9:          0          0   IO-APIC-fasteoi   acpi
 12:       1418          0   IO-APIC-edge      i8042
 14:         26          0   IO-APIC-edge      libata
 15:          0          0   IO-APIC-edge      libata
 17:          0          0   IO-APIC-fasteoi   uhci_hcd:usb3
 18:          0          0   IO-APIC-fasteoi   uhci_hcd:usb4
 20:  106711998          0   IO-APIC-fasteoi   eth0
 21:      42948          0   IO-APIC-fasteoi   eth1
 22:   10816027          0   IO-APIC-fasteoi   libata, ehci_hcd:usb1, uhci_hcd:usb2
NMI:          0          0
LOC:   86943769   86934941
ERR:          0
--- cat /proc/interrupts: end ---

--- dmesg: begin ---
Linux version 2.6.22.6 (root@cougar) (gcc version 4.1.2 (Gentoo 4.1.2)) #1 SMP PREEMPT Fri Sep 21 11:41:24 CEST 2007
BIOS-provided physical RAM map:
 BIOS-e820: 0000000000000000 - 00000000000a0000 (usable)
 BIOS-e820: 0000000000100000 - 000000007ffc0000 (usable)
 BIOS-e820: 000000007ffc0000 - 000000007ffcfc00 (ACPI data)
 BIOS-e820: 000000007ffcfc00 - 000000007ffff000 (reserved)
 BIOS-e820: 00000000f0000000 - 00000000f4000000 (reserved)
 BIOS-e820: 00000000fec00000 - 00000000fed00400 (reserved)
 BIOS-e820: 00000000fed13000 - 00000000feda0000 (reserved)
 BIOS-e820: 00000000fee00000 - 00000000fee10000 (reserved)
 BIOS-e820: 00000000ffb00000 - 0000000100000000 (reserved)
2047MB LOWMEM available.
found SMP MP-table at 000fe710
Entering add_active_range(0, 0, 524224) 0 entries of 256 used
Zone PFN ranges:
  DMA             0 ->     4096
  Normal       4096 ->   524224
early_node_map[1] active PFN ranges
    0:        0 ->   524224
On node 0 totalpages: 524224
  DMA zone: 32 pages used for memmap
  DMA zone: 0 pages reserved
  DMA zone: 4064 pages, LIFO batch:0
  Normal zone: 4063 pages used for memmap
  Normal zone: 516065 pages, LIFO batch:31
DMI 2.3 present.
ACPI: RSDP 000FD160, 0014 (r0 DELL  )
ACPI: RSDT 000FD174, 0038 (r1 DELL   PE830           1 MSFT  100000A)
ACPI: FACP 000FD1B8, 0074 (r1 DELL   PE830           1 MSFT  100000A)
ACPI: DSDT 7FFC0000, 1C19 (r1 DELL   PE830           1 MSFT  100000E)
ACPI: FACS 7FFCFC00, 0040
ACPI: APIC 000FD22C, 0074 (r1 DELL   PE830           1 MSFT  100000A)
ACPI: SPCR 000FD2A0, 0050 (r1 DELL   PE830           1 MSFT  100000A)
ACPI: HPET 000FD2F0, 0038 (r1 DELL   PE830           1 MSFT  100000A)
ACPI: MCFG 000FD328, 003C (r1 DELL   PE830           1 MSFT  100000A)
ACPI: PM-Timer IO Port: 0x808
ACPI: Local APIC address 0xfee00000
ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
Processor #0 15:4 APIC version 20
ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
Processor #1 15:4 APIC version 20
ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
ACPI: IOAPIC (id[0x03] address[0xfec10000] gsi_base[32])
IOAPIC[1]: apic_id 3, version 32, address 0xfec10000, GSI 32-55
ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
ACPI: IRQ0 used by override.
ACPI: IRQ2 used by override.
ACPI: IRQ9 used by override.
Enabling APIC mode:  Flat.  Using 2 I/O APICs
ACPI: HPET id: 0xffffffff base: 0xfed00000
Using ACPI (MADT) for SMP configuration information
Allocating PCI resources starting at 80000000 (gap: 7ffff000:70001000)
Built 1 zonelists.  Total pages: 520129
Kernel command line: auto BOOT_IMAGE=Linux-2.6.22.6 ro root=900 rootflags=data=journal nf_conntrack.hashsize=32768
mapped APIC to ffffd000 (fee00000)
mapped IOAPIC to ffffc000 (fec00000)
mapped IOAPIC to ffffb000 (fec10000)
Enabling fast FPU save and restore... done.
Enabling unmasked SIMD FPU exception support... done.
Initializing CPU#0
PID hash table entries: 4096 (order: 12, 16384 bytes)
Detected 3200.405 MHz processor.
Console: colour VGA+ 80x30
Dentry cache hash table entries: 262144 (order: 8, 1048576 bytes)
Inode-cache hash table entries: 131072 (order: 7, 524288 bytes)
Memory: 2073468k/2096896k available (3092k kernel code, 23040k reserved, 1269k data, 220k init, 0k highmem)
virtual kernel memory layout:
    fixmap  : 0xfffb7000 - 0xfffff000   ( 288 kB)
    vmalloc : 0xf8800000 - 0xfffb5000   ( 119 MB)
    lowmem  : 0x78000000 - 0xf7fc0000   (2047 MB)
      .init : 0x7854b000 - 0x78582000   ( 220 kB)
      .data : 0x784051cb - 0x7854280c   (1269 kB)
      .text : 0x78100000 - 0x784051cb   (3092 kB)
Checking if this processor honours the WP bit even in supervisor mode... Ok.
hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
hpet0: 3 64-bit timers, 14318180 Hz
Calibrating delay using timer specific routine.. 6404.65 BogoMIPS (lpj=3202326)
Mount-cache hash table entries: 512
CPU: After generic identify, caps: bfebfbff 20100000 00000000 00000000 0000649d 00000000 00000001
monitor/mwait feature present.
using mwait in idle threads.
CPU: Trace cache: 12K uops, L1 D cache: 16K
CPU: L2 cache: 1024K
CPU: Physical Processor ID: 0
CPU: Processor Core ID: 0
CPU: After all inits, caps: bfebfbff 20100000 00000000 0000b180 0000649d 00000000 00000001
Intel machine check architecture supported.
Intel machine check reporting enabled on CPU#0.
CPU0: Intel P4/Xeon Extended MCE MSRs (24) available
CPU0: Thermal monitoring enabled
Compat vDSO mapped to ffffe000.
Checking 'hlt' instruction... OK.
SMP alternatives: switching to UP code
ACPI: Core revision 20070126
CPU0: Intel(R) Pentium(R) D CPU 3.20GHz stepping 07
SMP alternatives: switching to SMP code
Booting processor 1/1 eip 3000
Initializing CPU#1
Calibrating delay using timer specific routine.. 6400.33 BogoMIPS (lpj=3200168)
CPU: After generic identify, caps: bfebfbff 20100000 00000000 00000000 0000649d 00000000 00000001
monitor/mwait feature present.
CPU: Trace cache: 12K uops, L1 D cache: 16K
CPU: L2 cache: 1024K
CPU: Physical Processor ID: 0
CPU: Processor Core ID: 1
CPU: After all inits, caps: bfebfbff 20100000 00000000 0000b180 0000649d 00000000 00000001
Intel machine check architecture supported.
Intel machine check reporting enabled on CPU#1.
CPU1: Intel P4/Xeon Extended MCE MSRs (24) available
CPU1: Thermal monitoring enabled
CPU1: Intel(R) Pentium(R) D CPU 3.20GHz stepping 07
Total of 2 processors activated (12804.98 BogoMIPS).
ENABLING IO-APIC IRQs
..TIMER: vector=0x31 apic1=0 pin1=2 apic2=-1 pin2=-1
checking TSC synchronization [CPU#0 -> CPU#1]: passed.
Brought up 2 CPUs
migration_cost=291
NET: Registered protocol family 16
ACPI: bus type pci registered
PCI: Using MMCONFIG
Setting up standard PCI resources
ACPI: Interpreter enabled
ACPI: (supports S0 S4 S5)
ACPI: Using IOAPIC for interrupt routing
ACPI: PCI Root Bridge [PCI0] (0000:00)
PCI: Probing PCI hardware (bus 00)
PCI quirk: region 0800-087f claimed by ICH6 ACPI/GPIO/TCO
PCI quirk: region 0880-08bf claimed by ICH6 GPIO
PCI: PXH quirk detected, disabling MSI for SHPC device
PCI: Transparent bridge - 0000:00:1e.0
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.PES1._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.PEP0._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.PEP0.PXHA._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.PEP1._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.PEP2._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.PCIS._PRT]
ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 9 *10 11 12)
ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 7 9 10 11 12) *0, disabled.
ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 5 6 7 9 10 *11 12)
ACPI: PCI Interrupt Link [LNKD] (IRQs *3 4 5 6 7 9 10 11 12)
ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 7 9 10 *11 12)
ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 7 9 *10 11 12)
ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 *5 6 7 9 10 11 12)
ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 7 9 10 11 12) *0, disabled.
Linux Plug and Play Support v0.97 (c) Adam Belay
pnp: PnP ACPI init
ACPI: bus type pnp registered
pnp: PnP ACPI: found 13 devices
ACPI: ACPI bus type pnp unregistered
SCSI subsystem initialized
libata version 2.21 loaded.
usbcore: registered new interface driver usbfs
usbcore: registered new interface driver hub
usbcore: registered new device driver usb
PCI: Using ACPI for IRQ routing
PCI: If a device doesn't work, try "pci=routeirq".  If it helps, post a report
pnp: 00:09: ioport range 0x800-0x87f has been reserved
pnp: 00:09: ioport range 0x880-0x8bf has been reserved
pnp: 00:09: ioport range 0x8c0-0x8df has been reserved
Time: tsc clocksource has been installed.
pnp: 00:09: ioport range 0x8e0-0x8e3 has been reserved
pnp: 00:09: ioport range 0xc00-0xc0f has been reserved
pnp: 00:09: ioport range 0xc10-0xc1f has been reserved
pnp: 00:09: ioport range 0xca0-0xca7 has been reserved
pnp: 00:09: ioport range 0xca9-0xcab has been reserved
pnp: 00:0b: iomem range 0xf0000000-0xf3ffffff could not be reserved
PCI: Bridge: 0000:00:01.0
  IO window: disabled.
  MEM window: disabled.
  PREFETCH window: disabled.
PCI: Bridge: 0000:02:00.0
  IO window: e000-efff
  MEM window: fe900000-feafffff
  PREFETCH window: disabled.
PCI: Bridge: 0000:00:1c.0
  IO window: e000-efff
  MEM window: fe800000-feafffff
  PREFETCH window: disabled.
PCI: Bridge: 0000:00:1c.4
  IO window: d000-dfff
  MEM window: fe600000-fe7fffff
  PREFETCH window: disabled.
PCI: Bridge: 0000:00:1c.5
  IO window: disabled.
  MEM window: disabled.
  PREFETCH window: disabled.
PCI: Bridge: 0000:00:1e.0
  IO window: c000-cfff
  MEM window: fe400000-fe5fffff
  PREFETCH window: fd000000-fdffffff
ACPI: PCI Interrupt 0000:00:01.0[A] -> GSI 16 (level, low) -> IRQ 16
PCI: Setting latency timer of device 0000:00:01.0 to 64
ACPI: PCI Interrupt 0000:00:1c.0[A] -> GSI 21 (level, low) -> IRQ 17
PCI: Setting latency timer of device 0000:00:1c.0 to 64
PCI: Setting latency timer of device 0000:02:00.0 to 64
ACPI: PCI Interrupt 0000:00:1c.4[B] -> GSI 22 (level, low) -> IRQ 18
PCI: Setting latency timer of device 0000:00:1c.4 to 64
ACPI: PCI Interrupt 0000:00:1c.5[C] -> GSI 23 (level, low) -> IRQ 19
PCI: Setting latency timer of device 0000:00:1c.5 to 64
PCI: Setting latency timer of device 0000:00:1e.0 to 64
NET: Registered protocol family 2
IP route cache hash table entries: 65536 (order: 6, 262144 bytes)
TCP established hash table entries: 262144 (order: 10, 4194304 bytes)
TCP bind hash table entries: 65536 (order: 7, 786432 bytes)
TCP: Hash tables configured (established 262144 bind 65536)
TCP reno registered
Machine check exception polling timer started.
IA-32 Microcode Update Driver: v1.14a <tigran@xxxxxxxxxxxxxxxxxxxx>
Total HugeTLB memory allocated, 0
io scheduler noop registered
io scheduler anticipatory registered
io scheduler deadline registered (default)
io scheduler cfq registered
Boot video device is 0000:06:05.0
PCI: Setting latency timer of device 0000:00:01.0 to 64
assign_interrupt_mode Found MSI capability
Allocate Port Service[0000:00:01.0:pcie00]
Allocate Port Service[0000:00:01.0:pcie03]
PCI: Setting latency timer of device 0000:00:1c.0 to 64
assign_interrupt_mode Found MSI capability
Allocate Port Service[0000:00:1c.0:pcie00]
Allocate Port Service[0000:00:1c.0:pcie03]
PCI: Setting latency timer of device 0000:00:1c.4 to 64
assign_interrupt_mode Found MSI capability
Allocate Port Service[0000:00:1c.4:pcie00]
Allocate Port Service[0000:00:1c.4:pcie03]
PCI: Setting latency timer of device 0000:00:1c.5 to 64
assign_interrupt_mode Found MSI capability
Allocate Port Service[0000:00:1c.5:pcie00]
Allocate Port Service[0000:00:1c.5:pcie02]
Allocate Port Service[0000:00:1c.5:pcie03]
sisfb: Video ROM found
sisfb: Video RAM at 0xfd000000, mapped to 0xf8880000, size 16384k
sisfb: MMIO at 0xfe4c0000, mapped to 0xf9900000, size 256k
sisfb: Memory heap starting at 16160K, size 32K
sisfb: CRT1 DDC supported
sisfb: CRT1 DDC level: 2
Switched to high resolution mode on CPU 1
Switched to high resolution mode on CPU 0
sisfb: Monitor range H 24-61KHz, V 56-75Hz, Max. dotclock 80MHz
sisfb: Default mode is 800x600x8 (60Hz)
sisfb: Initial vbflags 0x0
Console: switching to colour frame buffer device 100x37
sisfb: 2D acceleration is enabled, y-panning enabled (auto-max)
fb0: XGI Z7 frame buffer device version 1.8.9
sisfb: Copyright (C) 2001-2005 Thomas Winischhofer
input: Power Button (FF) as /class/input/input0
ACPI: Power Button (FF) [PWRF]
Real Time Clock Driver v1.12ac
hpet_resources: 0xfed00000 is busy
intel_rng: FWH not detected
Hangcheck: starting hangcheck timer 0.9.0 (tick is 180 seconds, margin is 60 seconds).
Hangcheck: Using get_cycles().
Serial: 8250/16550 driver $Revision: 1.90 $ 4 ports, IRQ sharing disabled
serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
00:08: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
Floppy drive(s): fd0 is 1.44M
FDC 0 is a National Semiconductor PC87306
loop: module loaded
Intel(R) PRO/1000 Network Driver - version 7.3.20-k2-NAPI
Copyright (c) 1999-2006 Intel Corporation.
ACPI: PCI Interrupt 0000:03:02.0[A] -> GSI 35 (level, low) -> IRQ 20
e1000: 0000:03:02.0: e1000_probe: (PCI-X:133MHz:64-bit) 00:04:23:c7:a4:de
e1000: eth0: e1000_probe: Intel(R) PRO/1000 Network Connection
ACPI: PCI Interrupt 0000:03:02.1[B] -> GSI 34 (level, low) -> IRQ 21
e1000: 0000:03:02.1: e1000_probe: (PCI-X:133MHz:64-bit) 00:04:23:c7:a4:df
e1000: eth1: e1000_probe: Intel(R) PRO/1000 Network Connection
Ethernet Channel Bonding Driver: v3.1.3 (June 13, 2007)
bonding: Warning: either miimon or arp_interval and arp_ip_target module parameters must be specified, otherwise bonding will not detect link failures! see bonding.txt for details.
tg3.c:v3.77 (May 31, 2007)
ACPI: PCI Interrupt 0000:04:00.0[A] -> GSI 16 (level, low) -> IRQ 16
PCI: Setting latency timer of device 0000:04:00.0 to 64
eth2: Tigon3 [partno(BCM95721) rev 4101 PHY(5750)] (PCI Express) 10/100/1000Base-T Ethernet 00:13:72:3c:fd:fe
eth2: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] WireSpeed[1] TSOcap[1]
eth2: dma_rwctrl[76180000] dma_mask[64-bit]
netconsole: not configured, aborting
st: Version 20070203, fixed bufsize 32768, s/g segs 256
ata_piix 0000:00:1f.1: version 2.11
ACPI: Unable to derive IRQ for device 0000:00:1f.1
ACPI: PCI Interrupt 0000:00:1f.1[A]: no GSI
PCI: Setting latency timer of device 0000:00:1f.1 to 64
scsi0 : ata_piix
scsi1 : ata_piix
ata1: PATA max UDMA/133 cmd 0x000101f0 ctl 0x000103f6 bmdma 0x0001fc00 irq 14
ata2: PATA max UDMA/133 cmd 0x00010170 ctl 0x00010376 bmdma 0x0001fc08 irq 15
ata1.00: ATAPI: TSSTcorpDVD-ROM TS-H352C, DE02, max UDMA/33
ata1.01: ATAPI: Seagate STT3401A, 310B, max PIO3
ata1.00: configured for UDMA/33
ata1.01: configured for PIO3
ata2: port disabled. ignoring.
scsi 0:0:0:0: CD-ROM            TSSTcorp DVD-ROM TS-H352C DE02 PQ: 0 ANSI: 5
sr0: scsi3-mmc drive: 4x/48x cd/rw xa/form2 cdda tray
Uniform CD-ROM driver Revision: 3.20
sr 0:0:0:0: Attached scsi CD-ROM sr0
sr 0:0:0:0: Attached scsi generic sg0 type 5
scsi 0:0:1:0: Sequential-Access Seagate  STT3401A         310B PQ: 0 ANSI: 2
st 0:0:1:0: Attached scsi tape st0
st 0:0:1:0: st0: try direct i/o: yes (alignment 512 B)
st 0:0:1:0: Attached scsi generic sg1 type 1
ata_piix 0000:00:1f.2: MAP [ P0 P2 P1 P3 ]
ACPI: PCI Interrupt 0000:00:1f.2[C] -> GSI 20 (level, low) -> IRQ 22
PCI: Setting latency timer of device 0000:00:1f.2 to 64
scsi2 : ata_piix
scsi3 : ata_piix
ata3: SATA max UDMA/133 cmd 0x0001bc98 ctl 0x0001bc92 bmdma 0x0001bc60 irq 22
ata4: SATA max UDMA/133 cmd 0x0001bc80 ctl 0x0001bc7a bmdma 0x0001bc68 irq 22
ata3.00: ATA-7: WDC WD2500JS-75NCB1, 10.02E01, max UDMA/133
ata3.00: 488281250 sectors, multi 8: LBA48 NCQ (depth 0/32)
ata3.01: ATA-7: Maxtor 7L250S0, BACE1G10, max UDMA/133
ata3.01: 488281250 sectors, multi 8: LBA48 NCQ (depth 0/32)
ata3.00: configured for UDMA/133
ata3.01: configured for UDMA/133
ata4.00: ATA-7: Maxtor 7L250S0, BACE1G10, max UDMA/133
ata4.00: 488281250 sectors, multi 8: LBA48 NCQ (depth 0/32)
ata4.00: configured for UDMA/133
scsi 2:0:0:0: Direct-Access     ATA      WDC WD2500JS-75N 10.0 PQ: 0 ANSI: 5
sd 2:0:0:0: [sda] 488281250 512-byte hardware sectors (250000 MB)
sd 2:0:0:0: [sda] Write Protect is off
sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 2:0:0:0: [sda] 488281250 512-byte hardware sectors (250000 MB)
sd 2:0:0:0: [sda] Write Protect is off
sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sda: sda1 sda2 sda3 sda4
sd 2:0:0:0: [sda] Attached SCSI disk
sd 2:0:0:0: Attached scsi generic sg2 type 0
scsi 2:0:1:0: Direct-Access     ATA      Maxtor 7L250S0   BACE PQ: 0 ANSI: 5
sd 2:0:1:0: [sdb] 488281250 512-byte hardware sectors (250000 MB)
sd 2:0:1:0: [sdb] Write Protect is off
sd 2:0:1:0: [sdb] Mode Sense: 00 3a 00 00
sd 2:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 2:0:1:0: [sdb] 488281250 512-byte hardware sectors (250000 MB)
sd 2:0:1:0: [sdb] Write Protect is off
sd 2:0:1:0: [sdb] Mode Sense: 00 3a 00 00
sd 2:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sdb: sdb1 sdb2 sdb3 sdb4
sd 2:0:1:0: [sdb] Attached SCSI disk
sd 2:0:1:0: Attached scsi generic sg3 type 0
scsi 3:0:0:0: Direct-Access     ATA      Maxtor 7L250S0   BACE PQ: 0 ANSI: 5
sd 3:0:0:0: [sdc] 488281250 512-byte hardware sectors (250000 MB)
sd 3:0:0:0: [sdc] Write Protect is off
sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00
sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 3:0:0:0: [sdc] 488281250 512-byte hardware sectors (250000 MB)
sd 3:0:0:0: [sdc] Write Protect is off
sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00
sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sdc: sdc1 sdc2 sdc3 sdc4
sd 3:0:0:0: [sdc] Attached SCSI disk
sd 3:0:0:0: Attached scsi generic sg4 type 0
ACPI: PCI Interrupt 0000:00:1d.7[A] -> GSI 20 (level, low) -> IRQ 22
PCI: Setting latency timer of device 0000:00:1d.7 to 64
ehci_hcd 0000:00:1d.7: EHCI Host Controller
ehci_hcd 0000:00:1d.7: new USB bus registered, assigned bus number 1
ehci_hcd 0000:00:1d.7: debug port 1
PCI: cache line size of 128 is not supported by device 0000:00:1d.7
ehci_hcd 0000:00:1d.7: irq 22, io mem 0xfeb00400
ehci_hcd 0000:00:1d.7: USB 2.0 started, EHCI 1.00, driver 10 Dec 2004
usb usb1: configuration #1 chosen from 1 choice
hub 1-0:1.0: USB hub found
hub 1-0:1.0: 6 ports detected
ohci_hcd: 2006 August 04 USB 1.1 'Open' Host Controller (OHCI) Driver
USB Universal Host Controller Interface driver v3.0
ACPI: PCI Interrupt 0000:00:1d.0[A] -> GSI 20 (level, low) -> IRQ 22
PCI: Setting latency timer of device 0000:00:1d.0 to 64
uhci_hcd 0000:00:1d.0: UHCI Host Controller
uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 2
uhci_hcd 0000:00:1d.0: irq 22, io base 0x0000bce0
usb usb2: configuration #1 chosen from 1 choice
hub 2-0:1.0: USB hub found
hub 2-0:1.0: 2 ports detected
ACPI: PCI Interrupt 0000:00:1d.1[B] -> GSI 21 (level, low) -> IRQ 17
PCI: Setting latency timer of device 0000:00:1d.1 to 64
uhci_hcd 0000:00:1d.1: UHCI Host Controller
uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 3
uhci_hcd 0000:00:1d.1: irq 17, io base 0x0000bcc0
usb usb3: configuration #1 chosen from 1 choice
hub 3-0:1.0: USB hub found
hub 3-0:1.0: 2 ports detected
ACPI: PCI Interrupt 0000:00:1d.2[C] -> GSI 22 (level, low) -> IRQ 18
PCI: Setting latency timer of device 0000:00:1d.2 to 64
uhci_hcd 0000:00:1d.2: UHCI Host Controller
uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 4
uhci_hcd 0000:00:1d.2: irq 18, io base 0x0000bca0
usb usb4: configuration #1 chosen from 1 choice
hub 4-0:1.0: USB hub found
hub 4-0:1.0: 2 ports detected
Initializing USB Mass Storage driver...
usbcore: registered new interface driver usb-storage
USB Mass Storage support registered.
PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
serio: i8042 KBD port at 0x60,0x64 irq 1
serio: i8042 AUX port at 0x60,0x64 irq 12
mice: PS/2 mouse device common for all mice
input: AT Translated Set 2 keyboard as /class/input/input1
md: raid0 personality registered for level 0
md: raid1 personality registered for level 1
md: raid10 personality registered for level 10
raid6: int32x1    730 MB/s
raid6: int32x2    742 MB/s
raid6: int32x4    667 MB/s
raid6: int32x8    507 MB/s
raid6: mmxx1     1660 MB/s
raid6: mmxx2     1781 MB/s
raid6: sse1x1    1019 MB/s
raid6: sse1x2    1113 MB/s
raid6: sse2x1    2039 MB/s
raid6: sse2x2    1851 MB/s
raid6: using algorithm sse2x1 (2039 MB/s)
md: raid6 personality registered for level 6
md: raid5 personality registered for level 5
md: raid4 personality registered for level 4
raid5: automatically using best checksumming function: pIII_sse
   pIII_sse  :  4008.000 MB/sec
raid5: using function: pIII_sse (4008.000 MB/sec)
md: faulty personality registered for level -5
device-mapper: ioctl: 4.11.0-ioctl (2006-10-12) initialised: dm-devel@xxxxxxxxxx
device-mapper: multipath: version 1.0.5 loaded
device-mapper: multipath round-robin: version 1.0.0 loaded
EDAC MC: Ver: 2.0.1 Sep 21 2007
input: ImExPS/2 Generic Explorer Mouse as /class/input/input2
dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.2)
usbcore: registered new interface driver usbhid
drivers/hid/usbhid/hid-core.c: v2.6:USB HID core driver
Netfilter messages via NETLINK v0.30.
nf_conntrack version 0.5.0 (32768 buckets, 262144 max)
ctnetlink v0.93: registering with nfnetlink.
ip_tables: (C) 2000-2006 Netfilter Core Team
IP_TPROXY: Transparent proxy support initialized, version 4.0.0
IP_TPROXY: Copyright (c) 2002-2007 BalaBit IT Ltd.
arp_tables: (C) 2002 David S. Miller
TCP cubic registered
NET: Registered protocol family 1
NET: Registered protocol family 10
ip6_tables: (C) 2000-2006 Netfilter Core Team
IPv6 over IPv4 tunneling driver
NET: Registered protocol family 17
Using IPI No-Shortcut mode
BIOS EDD facility v0.16 2004-Jun-25, 3 devices found
md: Autodetecting RAID arrays.
md: autorun ...
md: considering sdc4 ...
md:  adding sdc4 ...
md: sdc3 has different UUID to sdc4
md: sdc2 has different UUID to sdc4
md:  adding sdb4 ...
md: sdb3 has different UUID to sdc4
md: sdb2 has different UUID to sdc4
md:  adding sda4 ...
md: sda3 has different UUID to sdc4
md: sda2 has different UUID to sdc4
md: created md4
md: bind<sda4>
md: bind<sdb4>
md: bind<sdc4>
md: running: <sdc4><sdb4><sda4>
raid5: device sdc4 operational as raid disk 2
raid5: device sdb4 operational as raid disk 1
raid5: device sda4 operational as raid disk 0
raid5: allocated 3164kB for md4
raid5: raid level 5 set md4 active with 3 out of 3 devices, algorithm 2
RAID5 conf printout:
 --- rd:3 wd:3
 disk 0, o:1, dev:sda4
 disk 1, o:1, dev:sdb4
 disk 2, o:1, dev:sdc4
md: considering sdc3 ...
md:  adding sdc3 ...
md: sdc2 has different UUID to sdc3
md:  adding sdb3 ...
md: sdb2 has different UUID to sdc3
md:  adding sda3 ...
md: sda2 has different UUID to sdc3
md: created md15
md: bind<sda3>
md: bind<sdb3>
md: bind<sdc3>
md: running: <sdc3><sdb3><sda3>
raid10: raid set md15 active with 3 out of 3 devices
md15: bitmap initialized from disk: read 12/12 pages, set 0 bits, status: 0
created bitmap (179 pages) for device md15
md: considering sdc2 ...
md:  adding sdc2 ...
md:  adding sdb2 ...
md:  adding sda2 ...
md: created md0
md: bind<sda2>
md: bind<sdb2>
md: bind<sdc2>
md: running: <sdc2><sdb2><sda2>
raid1: raid set md0 active with 3 out of 3 mirrors
md0: bitmap initialized from disk: read 15/15 pages, set 0 bits, status: 0
created bitmap (239 pages) for device md0
md: ... autorun DONE.
EXT3-fs: INFO: recovery required on readonly filesystem.
EXT3-fs: write access will be enabled during recovery.
kjournald starting.  Commit interval 5 seconds
EXT3-fs: md0: orphan cleanup on readonly fs
ext3_orphan_cleanup: deleting unreferenced inode 130332
EXT3-fs: md0: 1 orphan inode deleted
EXT3-fs: recovery complete.
EXT3-fs: mounted filesystem with journal data mode.
VFS: Mounted root (ext3 filesystem) readonly.
Freeing unused kernel memory: 220k freed
EXT3 FS on md0, internal journal
kjournald starting.  Commit interval 5 seconds
EXT3 FS on dm-0, internal journal
EXT3-fs: mounted filesystem with journal data mode.
kjournald starting.  Commit interval 5 seconds
EXT3 FS on dm-1, internal journal
EXT3-fs: mounted filesystem with journal data mode.
kjournald starting.  Commit interval 5 seconds
EXT3 FS on dm-2, internal journal
EXT3-fs: mounted filesystem with writeback data mode.
kjournald starting.  Commit interval 5 seconds
EXT3 FS on dm-4, internal journal
EXT3-fs: mounted filesystem with writeback data mode.
kjournald starting.  Commit interval 5 seconds
EXT3 FS on dm-3, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
Adding 2927608k swap on /dev/md15.  Priority:8192 extents:1 across:2927608k
bonding: bond0: setting mode to active-backup (1).
bonding: bond0: Setting MII monitoring interval to 100.
ADDRCONF(NETDEV_UP): bond0: link is not ready
bonding: bond0: Adding slave eth0.
e1000: eth0: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
bonding: bond0: making interface eth0 the new active one.
bonding: bond0: first active interface up!
bonding: bond0: enslaving eth0 as an active interface with an up link.
bonding: bond0: Adding slave eth1.
ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
ADDRCONF(NETDEV_UP): eth1: link is not ready
bonding: bond0: enslaving eth1 as a backup interface with a down link.
bonding: bond0: Setting eth0 as primary slave.
eth0: no IPv6 routers present
bond0: no IPv6 routers present
device eth0 entered promiscuous mode
device bond0 entered promiscuous mode
device eth0 left promiscuous mode
device bond0 left promiscuous mode
device eth0 entered promiscuous mode
device bond0 entered promiscuous mode
device eth0 left promiscuous mode
device bond0 left promiscuous mode
--- dmesg: end ---

Software Environment:
Linux cougar 2.6.22.9 #1 SMP PREEMPT Wed Oct 3 10:24:19 CEST 2007 i686 Intel(R) Pentium(R) D CPU 3.20GHz GenuineIntel GNU/Linux

Gnu C                  4.1.2
Gnu make               3.81
binutils               2.17
util-linux             2.12r
mount                  2.12r
module-init-tools      3.2.2
e2fsprogs              1.40.2
Linux C Library        > libc.2.5
Dynamic linker (ldd)   2.5
Procps                 3.2.7
Net-tools              1.60
Kbd                    1.12
Sh-utils               6.9

Problem Description:
I am experiencing weird system hangs. Once every 2-5 weeks the system freezes and stops accepting remote connections, so it is no longer possible to connect to the most important services: smtp (postfix), www (squid) or even ssh. Such a connection is accepted but then hangs.

What is strange is that a previously established ssh session remains usable. It is possible to work on such a system until you do something stupid like "less /var/log/all.log". Using strace I found that the process blocks on:

--- strace: begin ---
execve("/usr/bin/tail", ["tail", "-f", "/var/log/all.log"], [/* 33 vars */]) = 0
brk(0)                                  = 0x8052000
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x6ff00000
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY)      = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=20944, ...}) = 0
mmap2(NULL, 20944, PROT_READ, MAP_PRIVATE, 3, 0) = 0x6fefa000
close(3)                                = 0
open("/lib/libc.so.6", O_RDONLY)        = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0RY\1\0004\0\0\0"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0755, st_size=1175920, ...}) = 0
mmap2(NULL, 1185212, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x6fdd8000
mmap2(0x6fef4000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x11b) = 0x6fef4000
mmap2(0x6fef7000, 9660, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x6fef7000
close(3)                                = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x6fdd7000
set_thread_area({entry_number:-1 -> 6, base_addr:0x6fdd76b0, limit:1048575, seg_32bit:1, contents:0, read_exec_only:0, limit_in_pages:1, seg_not_present:0, useable:1}) = 0
mprotect(0x6fef4000, 4096, PROT_READ)   = 0
mprotect(0x6ff1c000, 4096, PROT_READ)   = 0
munmap(0x6fefa000, 20944)               = 0
brk(0)                                  = 0x8052000
brk(0x8073000)                          = 0x8073000
open("/var/log/all.log", O_RDONLY|O_LARGEFILE) = 3
fstat64(3, {st_mode=S_IFREG|0640, st_size=3171841, ...})
llseek(3, 0,  <unfinished ...>
--- strace: end ---

This file is not very big:

# ls -l /var/log/all.log
-rw-r----- 1 root root 3171841 Sep 27 04:36 /var/log/all.log

Also running "dmesg > file" hangs, creating a file with only 4096 bytes.

--- Show Blocked State: begin ---
SysRq : Show Blocked State

                         free                        sibling
  task             PC    stack   pid father child younger older
syslogd       D F5C83C60     0  2162      1 (NOTLB)
       f5c83c74 00000082 00000002 f5c83c60 f5c83c5c 00000000 00000000 78538d20
       00000009 00000001 f7f6a070 f7cb8030 82c47e5f 0001cfed 00000a43 f7f6a17c
       7a016980 f705dc80 78404217 7812c708 00000000 00000213 f5c83c84 1e7a64bb
Call Trace:
 [<78404217>] _spin_unlock_irqrestore+0xf/0x23
 [<7812c708>] __mod_timer+0x92/0x9c
 [<78402b34>] schedule_timeout+0x70/0x8d
 [<7812c521>] process_timeout+0x0/0x5
 [<78402548>] io_schedule_timeout+0x1e/0x28
 [<7814d41e>] congestion_wait+0x50/0x64
 [<78134abc>] autoremove_wake_function+0x0/0x35
 [<781493e7>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<78145bd0>] generic_file_buffered_write+0x4ee/0x605
 [<783c55a1>] unix_dgram_recvmsg+0x1b4/0x1c8
 [<78128c8e>] current_fs_time+0x41/0x46
 [<78146167>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7814621b>] generic_file_aio_write+0x55/0xb3
 [<78194b28>] ext3_file_write+0x24/0x8f
 [<7815f34f>] do_sync_readv_writev+0xc1/0xfe
 [<78134abc>] autoremove_wake_function+0x0/0x35
 [<784041ae>] _spin_unlock+0xd/0x21
 [<781a8c38>] log_wait_commit+0xc3/0xe3
 [<7814448b>] find_get_pages_tag+0x76/0x80
 [<7815f204>] rw_copy_check_uvector+0x50/0xaa
 [<7815f9d4>] do_readv_writev+0x99/0x164
 [<78194b04>] ext3_file_write+0x0/0x8f
 [<7815fadc>] vfs_writev+0x3d/0x48
 [<7815feb5>] sys_writev+0x41/0x67
 [<78103d6a>] sysenter_past_esp+0x5f/0x85
 =======================
freshclam     D 00000282     0  2866      1 (NOTLB)
       f36e3cc4 00000082 00000009 00000282 7a0173c0 00000002 00000000 0000007b
       00000009 00000001 f7cb8030 f7c72030 82c4884d 0001cfed 000009ee f7cb813c
       7a016980 f66c0b80 78404217 7812c708 00000000 00000213 f36e3cd4 1e7a64bb
Call Trace:
 [<78404217>] _spin_unlock_irqrestore+0xf/0x23
 [<7812c708>] __mod_timer+0x92/0x9c
 [<78402b34>] schedule_timeout+0x70/0x8d
 [<7812c521>] process_timeout+0x0/0x5
 [<78402548>] io_schedule_timeout+0x1e/0x28
 [<7814d41e>] congestion_wait+0x50/0x64
 [<78134abc>] autoremove_wake_function+0x0/0x35
 [<781493e7>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<78145bd0>] generic_file_buffered_write+0x4ee/0x605
 [<7819cdb4>] __ext3_journal_stop+0x19/0x34
 [<7840408f>] _spin_lock+0xd/0x5a
 [<78176f3d>] __mark_inode_dirty+0xdd/0x16f
 [<78128c8e>] current_fs_time+0x41/0x46
 [<78146167>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7814621b>] generic_file_aio_write+0x55/0xb3
 [<78103159>] setup_sigcontext+0x105/0x189
 [<78194b28>] ext3_file_write+0x24/0x8f
 [<7815f453>] do_sync_write+0xc7/0x10a
 [<78134abc>] autoremove_wake_function+0x0/0x35
 [<781085d2>] convert_fxsr_from_user+0x15/0xd5
 [<7815f38c>] do_sync_write+0x0/0x10a
 [<7815fbb6>] vfs_write+0x8a/0x10c
 [<78160123>] sys_write+0x41/0x67
 [<78103d6a>] sysenter_past_esp+0x5f/0x85
 =======================
crond         D 00000140     0  3029      1 (NOTLB)
       f1577cc4 00000082 00000008 00000140 7a00f3c0 00000002 00000000 00026b9a
       00000009 00000001 7a1d8030 e89f0070 82c469af 0001cfed 00000ad9 7a1d813c
       7a016980 f73afb40 78404217 7812c708 00000000 00000213 f1577cd4 1e7a64bb
Call Trace:
 [<78404217>] _spin_unlock_irqrestore+0xf/0x23
 [<7812c708>] __mod_timer+0x92/0x9c
 [<78402b34>] schedule_timeout+0x70/0x8d
 [<7812c521>] process_timeout+0x0/0x5
 [<78402548>] io_schedule_timeout+0x1e/0x28
 [<7814d41e>] congestion_wait+0x50/0x64
 [<78134abc>] autoremove_wake_function+0x0/0x35
 [<781493e7>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<78145bd0>] generic_file_buffered_write+0x4ee/0x605
 [<7819cdb4>] __ext3_journal_stop+0x19/0x34
 [<7840408f>] _spin_lock+0xd/0x5a
 [<78176f3d>] __mark_inode_dirty+0xdd/0x16f
 [<78128c8e>] current_fs_time+0x41/0x46
 [<78146167>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7814621b>] generic_file_aio_write+0x55/0xb3
 [<78194b28>] ext3_file_write+0x24/0x8f
 [<7815f453>] do_sync_write+0xc7/0x10a
 [<78134abc>] autoremove_wake_function+0x0/0x35
 [<78162281>] sys_stat64+0x1e/0x23
 [<7815f38c>] do_sync_write+0x0/0x10a
 [<7815fbb6>] vfs_write+0x8a/0x10c
 [<78160123>] sys_write+0x41/0x67
 [<78103d6a>] sysenter_past_esp+0x5f/0x85
 [<78400000>] svcauth_gss_accept+0x8a3/0xb2b
 =======================
ntpd          D 00000140     0  3237      1 (NOTLB)
       ed027cc4 00200082 00000006 00000140 7a00f3c0 00000002 00000000 888925f9
       00000009 00000001 7a1c8070 ea1d2a50 82c49c08 0001cfed 00000a11 7a1c817c
       7a016980 f66c0580 78404217 7812c708 00000000 00200213 ed027cd4 1e7a64bb
Call Trace:
 [<78404217>] _spin_unlock_irqrestore+0xf/0x23
 [<7812c708>] __mod_timer+0x92/0x9c
 [<78402b34>] schedule_timeout+0x70/0x8d
 [<7810477b>] common_interrupt+0x23/0x28
 [<7812c521>] process_timeout+0x0/0x5
 [<78402548>] io_schedule_timeout+0x1e/0x28
 [<7814d41e>] congestion_wait+0x50/0x64
 [<78134abc>] autoremove_wake_function+0x0/0x35
 [<781493e7>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<78145bd0>] generic_file_buffered_write+0x4ee/0x605
 [<78128c8e>] current_fs_time+0x41/0x46
 [<78146167>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7814621b>] generic_file_aio_write+0x55/0xb3
 [<781484b8>] get_page_from_freelist+0x278/0x325
 [<78194b28>] ext3_file_write+0x24/0x8f
 [<7815f453>] do_sync_write+0xc7/0x10a
 [<78134abc>] autoremove_wake_function+0x0/0x35
 [<7815f38c>] do_sync_write+0x0/0x10a
 [<7815fbb6>] vfs_write+0x8a/0x10c
 [<78160123>] sys_write+0x41/0x67
 [<78103d6a>] sysenter_past_esp+0x5f/0x85
 =======================
snmpd         D EB4EDCB0     0  3506      1 (NOTLB)
       eb4edcc4 00200086 00000002 eb4edcb0 eb4edcac 00000000 ebdbe100 000f4240
       00000007 00000001 eb94e030 ed65a030 82c4e0b2 0001cfed 00000a63 eb94e13c
       7a016980 ebdbe100 78404217 7812c708 00000000 00200213 eb4edcd4 1e7a64bb
Call Trace:
 [<78404217>] _spin_unlock_irqrestore+0xf/0x23
 [<7812c708>] __mod_timer+0x92/0x9c
 [<78402b34>] schedule_timeout+0x70/0x8d
 [<7812c521>] process_timeout+0x0/0x5
 [<78402548>] io_schedule_timeout+0x1e/0x28
 [<7814d41e>] congestion_wait+0x50/0x64
 [<78134abc>] autoremove_wake_function+0x0/0x35
 [<781493e7>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<78145bd0>] generic_file_buffered_write+0x4ee/0x605
 [<78128c8e>] current_fs_time+0x41/0x46
 [<78146167>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7814621b>] generic_file_aio_write+0x55/0xb3
 [<7819cdb4>] __ext3_journal_stop+0x19/0x34
 [<78194b28>] ext3_file_write+0x24/0x8f
 [<7815f453>] do_sync_write+0xc7/0x10a
 [<78134abc>] autoremove_wake_function+0x0/0x35
 [<7815f38c>] do_sync_write+0x0/0x10a
 [<7815fbb6>] vfs_write+0x8a/0x10c
 [<78160123>] sys_write+0x41/0x67
 [<78103d6a>] sysenter_past_esp+0x5f/0x85
 [<78400000>] svcauth_gss_accept+0x8a3/0xb2b
 =======================
squid         D 00000140     0  3565   3563 (NOTLB)
       eb2f7cc4 00000086 00000004 00000140 7a00f3c0 00000002 00000000 f7fac000
       00000009 00000001 f7c72030 7a1c8070 82c491f7 0001cfed 000009aa f7c7213c
       7a016980 ebdbeb00 78404217 7812c708 00000000 00000213 eb2f7cd4 1e7a64bb
Call Trace:
 [<78404217>] _spin_unlock_irqrestore+0xf/0x23
 [<7812c708>] __mod_timer+0x92/0x9c
 [<78402b34>] schedule_timeout+0x70/0x8d
 [<7812c521>] process_timeout+0x0/0x5
 [<78402548>] io_schedule_timeout+0x1e/0x28
 [<7814d41e>] congestion_wait+0x50/0x64
 [<78134abc>] autoremove_wake_function+0x0/0x35
 [<781493e7>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<78145bd0>] generic_file_buffered_write+0x4ee/0x605
 [<7819cdb4>] __ext3_journal_stop+0x19/0x34
 [<7840408f>] _spin_lock+0xd/0x5a
 [<78176f3d>] __mark_inode_dirty+0xdd/0x16f
 [<78128c8e>] current_fs_time+0x41/0x46
 [<78146167>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7814621b>] generic_file_aio_write+0x55/0xb3
 [<782bc211>] e1000_alloc_rx_buffers+0x1bb/0x280
 [<78194b28>] ext3_file_write+0x24/0x8f
 [<7815f453>] do_sync_write+0xc7/0x10a
 [<78134abc>] autoremove_wake_function+0x0/0x35
 [<78102d86>] __switch_to+0x108/0x127
 [<784023bb>] __sched_text_start+0x78b/0x7b2
 [<78374bb5>] net_rx_action+0x13a/0x17b
 [<7815f38c>] do_sync_write+0x0/0x10a
 [<7815fbb6>] vfs_write+0x8a/0x10c
 [<78160123>] sys_write+0x41/0x67
 [<78103d6a>] sysenter_past_esp+0x5f/0x85
 [<78400000>] svcauth_gss_accept+0x8a3/0xb2b
 =======================
zabbix_agentd D E9C41E34     0  3687      1 (NOTLB)
       e9c41e48 00200082 00000002 e9c41e34 e9c41e30 00000000 00000310 ea20cb84
       00000008 00000001 eb94e540 f7f92030 ad13acaf 0001ce8d 00012a82 eb94e64c
       7a016980 eab33b40 1e634329 00000003 00000000 00000000 e9c41e60 9d739158
Call Trace:
 [<7840324c>] __mutex_lock_slowpath+0x10f/0x20e
 [<78146206>] generic_file_aio_write+0x40/0xb3
 [<781484b8>] get_page_from_freelist+0x278/0x325
 [<78194b28>] ext3_file_write+0x24/0x8f
 [<7815f453>] do_sync_write+0xc7/0x10a
 [<78134abc>] autoremove_wake_function+0x0/0x35
 [<7815f38c>] do_sync_write+0x0/0x10a
 [<7815fbb6>] vfs_write+0x8a/0x10c
 [<78160123>] sys_write+0x41/0x67
 [<78103d6a>] sysenter_past_esp+0x5f/0x85
 =======================
zabbix_agentd D EB6C5CB0     0  3754   3687 (NOTLB)
       eb6c5cc4 00200082 00000002 eb6c5cb0 eb6c5cac 00000000 f7c81574 00000001
       00000008 00000001 ed65a030 f7f92030 82c4ea91 0001cfed 000009df ed65a13c
       7a016980 f5cc5c40 1e7a6458 00000003 00000000 00000000 eb6c5cd4 1e7a64bb
Call Trace:
 [<78402b34>] schedule_timeout+0x70/0x8d
 [<7812c521>] process_timeout+0x0/0x5
 [<78402548>] io_schedule_timeout+0x1e/0x28
 [<7814d41e>] congestion_wait+0x50/0x64
 [<78134abc>] autoremove_wake_function+0x0/0x35
 [<781493e7>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<78145bd0>] generic_file_buffered_write+0x4ee/0x605
 [<7819cdb4>] __ext3_journal_stop+0x19/0x34
 [<7840408f>] _spin_lock+0xd/0x5a
 [<78176f3d>] __mark_inode_dirty+0xdd/0x16f
 [<78128c8e>] current_fs_time+0x41/0x46
 [<78146167>] __generic_file_aio_write_nolock+0x480/0x4df
 [<781710e9>] mntput_no_expire+0x11/0x63
 [<7814621b>] generic_file_aio_write+0x55/0xb3
 [<781484b8>] get_page_from_freelist+0x278/0x325
 [<78194b28>] ext3_file_write+0x24/0x8f
 [<7815f453>] do_sync_write+0xc7/0x10a
 [<78134abc>] autoremove_wake_function+0x0/0x35
 [<7815f38c>] do_sync_write+0x0/0x10a
 [<7815fbb6>] vfs_write+0x8a/0x10c
 [<78160123>] sys_write+0x41/0x67
 [<78103d6a>] sysenter_past_esp+0x5f/0x85
 =======================
amavisd       D 00000270     0 19280      1 (NOTLB)
       c2563e2c 00000082 00000005 00000270 7a0173c0 00000002 00000000 00000000
       00000008 00000001 f7e86a90 944bc070 82c4b069 0001cfed 000009e0 f7e86b9c
       7a016980 c13f6680 1e7a6458 00000003 00000000 00000270 c2563e3c 1e7a64bb
Call Trace:
 [<78402b34>] schedule_timeout+0x70/0x8d
 [<7812c521>] process_timeout+0x0/0x5
 [<78402548>] io_schedule_timeout+0x1e/0x28
 [<7814d41e>] congestion_wait+0x50/0x64
 [<78134abc>] autoremove_wake_function+0x0/0x35
 [<781493e7>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<7814e32c>] do_wp_page+0x3bb/0x3fd
 [<7814f6da>] __handle_mm_fault+0x7b4/0x82c
 [<7812f815>] __atomic_notifier_call_chain+0x3f/0x5a
 [<78119744>] do_page_fault+0x20d/0x52d
 [<78138135>] do_gettimeofday+0x31/0xd4
 [<78119537>] do_page_fault+0x0/0x52d
 [<784043ea>] error_code+0x72/0x78
 [<78400000>] svcauth_gss_accept+0x8a3/0xb2b
 =======================
amavisd       D 00000001     0 19570      1 (NOTLB)
       97f79ccc 00000006 f7c81574 00000001 0000000a 7822768f 97f79c80 97f79c80
       00000009 00000001 e89f0070 f7f6a070 82c4741c 0001cfed 00000a6d e89f017c
       7a016980 78622bc0 78404217 7812c708 00000000 00000213 97f79cdc 1e7a64bb
Call Trace:
 [<7822768f>] blk_done_softirq+0x44/0x4f
 [<78404217>] _spin_unlock_irqrestore+0xf/0x23
 [<7812c708>] __mod_timer+0x92/0x9c
 [<78402b34>] schedule_timeout+0x70/0x8d
 [<7810477b>] common_interrupt+0x23/0x28
 [<7812c521>] process_timeout+0x0/0x5
 [<78402548>] io_schedule_timeout+0x1e/0x28
 [<7814d41e>] congestion_wait+0x50/0x64
 [<78134abc>] autoremove_wake_function+0x0/0x35
 [<781493e7>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<78145bd0>] generic_file_buffered_write+0x4ee/0x605
 [<7819cdb4>] __ext3_journal_stop+0x19/0x34
 [<7840408f>] _spin_lock+0xd/0x5a
 [<78128c8e>] current_fs_time+0x41/0x46
 [<78146167>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7814621b>] generic_file_aio_write+0x55/0xb3
 [<78176f3d>] __mark_inode_dirty+0xdd/0x16f
 [<78194b28>] ext3_file_write+0x24/0x8f
 [<7815f453>] do_sync_write+0xc7/0x10a
 [<78134abc>] autoremove_wake_function+0x0/0x35
 [<7815f38c>] do_sync_write+0x0/0x10a
 [<7815fbb6>] vfs_write+0x8a/0x10c
 [<781601f0>] sys_pwrite64+0x48/0x5f
 [<78103d6a>] sysenter_past_esp+0x5f/0x85
 [<78400000>] svcauth_gss_accept+0x8a3/0xb2b
 =======================
sadc          D 00000140     0 21692      1 (NOTLB)
       a830bcc4 00000082 00000002 00000140 7a00f3c0 00000002 00000000 00000002
       00000007 00000001 ebff0a90 eb94e030 82c4d64f 0001cfed 00000a11 ebff0b9c
       7a016980 f5cc5040 78404217 7812c708 00000000 00000213 a830bcd4 1e7a64bb
Call Trace:
 [<78404217>] _spin_unlock_irqrestore+0xf/0x23
 [<7812c708>] __mod_timer+0x92/0x9c
 [<78402b34>] schedule_timeout+0x70/0x8d
 [<7812c521>] process_timeout+0x0/0x5
 [<78402548>] io_schedule_timeout+0x1e/0x28
 [<7814d41e>] congestion_wait+0x50/0x64
 [<78134abc>] autoremove_wake_function+0x0/0x35
 [<781493e7>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
--- Show Blocked State: end ---

I have tested the 2.6.20, 2.6.21 and 2.6.22 kernels so far, and this problem exists in each release.
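Every writer in the dump above is parked in balance_dirty_pages_ratelimited_nr via congestion_wait. The throttling logic works roughly as follows (a deliberately simplified toy model, not the real mm/page-writeback.c code): writers wait while the global dirty-page counter exceeds a threshold, so if that counter is ever incremented without a matching decrement when writeback completes, every writer blocks forever:

```shell
# Toy model of the balance_dirty_pages() throttling loop.
# Usage: throttle_writer DIRTY LEAK   (LEAK=1 models broken dirty accounting)
throttle_writer() {
    dirty=$1; really_dirty=$1; leak=$2; waits=0
    while [ "$dirty" -gt 100 ]; do              # dirty threshold (toy value)
        if [ "$really_dirty" -gt 0 ]; then
            really_dirty=$((really_dirty - 16)) # writeback completes...
            if [ "$leak" -eq 0 ]; then
                dirty=$((dirty - 16))           # ...and is normally accounted
            fi
        else
            waits=$((waits + 1))                # congestion_wait() in the kernel
            if [ "$waits" -gt 10 ]; then
                return 1                        # model only: the kernel never gives up
            fi
        fi
    done
    return 0
}

throttle_writer 200 0 && echo "healthy accounting: writer proceeds"
throttle_writer 200 1 || echo "leaked accounting: writer stuck for good"
```

With leaked accounting the counter never drops below the threshold even though all real writeback has completed, which matches the symptom here: a large, never-shrinking Dirty figure with zero pages under writeback.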
Comment 1 Krzysztof Oledzki 2007-10-18 08:06:20 UTC
On Sat, 29 Sep 2007, Nick Piggin wrote:


    On Friday 28 September 2007 18:42, Krzysztof Oledzki wrote:

        Hello,

        I am experiencing weird system hangs. Roughly once every 2-5 weeks the
        system freezes and stops accepting remote connections, so it is no
        longer possible to connect to the most important services: smtp
        (postfix), www (squid) or even ssh. Such a connection is accepted but
        then hangs.

        What is strange is that a previously established ssh session remains
        usable. It is possible to work on such a system until you do something
        stupid like "less /var/log/all.log". Using strace I found that the
        process blocks on:

    Is this a regression? If so, what's the most recent kernel that didn't show
    the problem?

I don't know. The first kernel I ran was 2.6.20.x. This is quite a fresh system.


    The symptoms could be consistent with some place doing a
    balance_dirty_pages while holding a lock that is required for IO, but I can't
    see a smoking gun (you've got contention on i_mutex, but that should be
    OK).

    Can you see if there is any memory under writeback that isn't being
    completed (sysrq+M), also a list of the locks held after the hang might be
    helpful (compile in lockdep and sysrq+D)

OK. I'll try to do it next time if there is a chance. It may take some time, BTW.

    Is anything currently running? (sysrq+P and even a full sysrq+T task list
    could be useful).

I'll have to check - maybe I have this captured. If not, I'll check it next time.

    Are any IO errors occurring at all?

Didn't notice - so no.
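For the record, the requested SysRq dumps can be captured from the surviving ssh session without a console keyboard (a sketch, assuming CONFIG_MAGIC_SYSRQ=y and root; the key letters match the dumps requested above):

```shell
# Trigger the SysRq dumps over ssh: Mem-info (m), locks held (d, needs
# lockdep compiled in), registers (p) and the full task list (t).
dump_sysrq() {
    if [ ! -w /proc/sysrq-trigger ]; then
        echo "need root, or CONFIG_MAGIC_SYSRQ is not enabled" >&2
        return 1
    fi
    echo 1 > /proc/sys/kernel/sysrq     # enable all SysRq functions
    for key in m d p t; do
        echo "$key" > /proc/sysrq-trigger
        sleep 1                         # let each dump reach the log buffer
    done
    dmesg | tail -n 500                 # collect the results
}
```

Note that redirecting dmesg to a file on the hung filesystem blocks (as seen above), so pipe the output over the ssh session or write it to a tmpfs instead.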
Comment 2 Krzysztof Oledzki 2007-10-18 08:07:49 UTC
On Sat, 29 Sep 2007, Nick Piggin wrote:


    On Friday 28 September 2007 18:42, Krzysztof Oledzki wrote:

        Hello,

        I am experiencing weird system hangs. Once about 2-5 weeks system freezes
        and stops accepting remote connections, so it is no longer possible to
        connect to most important services: smtp (postfix), www (squid) or even
        ssh. Such connection is accepted but then it hangs.

        What is strange is that a previously established ssh session remains
        usable. It is possible to work on such a system until you do something
        stupid like "less /var/log/all.log". Using strace I found that the
        process blocks on:

    Is this a regression? If so, what's the most recent kernel that didn't show
    the problem?

    The symptoms could be consistent with some place doing a
    balance_dirty_pages while holding a lock that is required for IO, but I can't
    see a smoking gun (you've got contention on i_mutex, but that should be
    OK).

    Can you see if there is any memory under writeback that isn't being
    completed (sysrq+M),

Mem-info:
DMA per-cpu:
CPU    0: Hot: hi:    0, btch:   1 usd:   0   Cold: hi:    0, btch:   1 usd:   0
CPU    1: Hot: hi:    0, btch:   1 usd:   0   Cold: hi:    0, btch:   1 usd:   0
Normal per-cpu:
CPU    0: Hot: hi:  186, btch:  31 usd: 167   Cold: hi:   62, btch:  15 usd:  47
CPU    1: Hot: hi:  186, btch:  31 usd: 148   Cold: hi:   62, btch:  15 usd:  54
Active:340169 inactive:117325 dirty:48579 writeback:0 unstable:0
 free:25266 slab:28357 mapped:3109 pagetables:816 bounce:0
DMA free:8784kB min:512kB low:640kB high:768kB active:0kB inactive:72kB present:16256kB pages_scanned:36 all_unreclaimable? no
lowmem_reserve[]: 0 2015
Normal free:92280kB min:65016kB low:81268kB high:97524kB active:1360676kB inactive:469228kB present:2064260kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0
DMA: 12*4kB 6*8kB 1*16kB 1*32kB 1*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 2*4096kB = 8784kB
Normal: 2724*4kB 1863*8kB 95*16kB 0*32kB 1*64kB 3*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 15*4096kB = 92280kB
Swap cache: add 6396, delete 6331, find 3048/3691, race 0+0
Free swap  = 2927260kB
Total swap = 2927608kB
Free swap:       2927260kB
524224 pages of RAM
0 pages of HIGHMEM
6345 reserved pages
302073 pages shared
65 pages swap cached
48579 pages dirty
0 pages writeback
3109 pages mapped
28357 pages slab
816 pages pagetables
--------------------------
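The interesting number in the dump above is the 48579 dirty pages with zero pages under writeback. A small helper like the following extracts the same counter from /proc/meminfo for logging over time (an illustrative sketch; the attached graphs were produced with a plain "grep ^Dirty: /proc/meminfo" loop):

```shell
# Print the Dirty: value (in kB) from /proc/meminfo-style input;
# exits non-zero if no Dirty: line is present.
dirty_kb() {
    awk '/^Dirty:/ { print $2; found = 1 } END { exit !found }'
}

# Demonstrate against a captured snippet; on the live box, use:
#   dirty_kb < /proc/meminfo
sample='MemTotal:      2076716 kB
Dirty:           48579 kB
Writeback:           0 kB'
printf '%s\n' "$sample" | dirty_kb    # prints 48579
```

Logging that value once a minute from the surviving ssh session would show whether Dirty only ever grows, which is what the graphs attached to this bug demonstrate.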



    also a list of the locks held after the hang might be
    helpful (compile in lockdep and sysrq+D)

SysRq : Show Locks Held
Showing all locks held in the system:
1 lock held by syslogd/2163:
 #0:  (&inode->i_mutex){--..}, at: [<7814aabc>] generic_file_aio_write+0x43/0xb6
1 lock held by freshclam/2864:
 #0:  (&inode->i_mutex){--..}, at: [<7814aabc>] generic_file_aio_write+0x43/0xb6
1 lock held by named/2925:
 #0:  (&inode->i_mutex){--..}, at: [<7814aabc>] generic_file_aio_write+0x43/0xb6
1 lock held by crond/3027:
 #0:  (&inode->i_mutex){--..}, at: [<7814aabc>] generic_file_aio_write+0x43/0xb6
1 lock held by squid/3559:
 #0:  (&inode->i_mutex){--..}, at: [<7814aabc>] generic_file_aio_write+0x43/0xb6
1 lock held by zabbix_agentd/3701:
 #0:  (&inode->i_mutex){--..}, at: [<7814aabc>] generic_file_aio_write+0x43/0xb6
1 lock held by agetty/3753:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by agetty/3754:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by agetty/3755:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by agetty/3756:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by agetty/3758:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by agetty/3759:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by agetty/3760:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by bash/3851:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by bash/3866:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by bash/3888:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by bash/6622:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by agetty/27417:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by bash/9224:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by amavisd/15049:
 #0:  (&mm->mmap_sem){----}, at: [<781196ad>] do_page_fault+0x152/0x535
1 lock held by amavisd/15169:
 #0:  (&mm->mmap_sem){----}, at: [<781196ad>] do_page_fault+0x152/0x535
1 lock held by sadc/16498:
 #0:  (&inode->i_mutex){--..}, at: [<7814aabc>] generic_file_aio_write+0x43/0xb6
1 lock held by mkpir/16842:
 #0:  (&inode->i_mutex){--..}, at: [<7814aabc>] generic_file_aio_write+0x43/0xb6
1 lock held by fetch_mail_stat/16843:
 #0:  (&inode->i_mutex){--..}, at: [<78164ae6>] generic_file_llseek+0x29/0xa0

=============================================



    Is anything currently running? (sysrq+P

SysRq : Show Regs

Pid: 0, comm:              swapper
EIP: 0060:[<781021a6>] CPU: 0
EIP is at mwait_idle_with_hints+0x3b/0x3f
 EFLAGS: 00000246    Not tainted  (2.6.22.9 #1)
EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: 00000000
ESI: 00000000 EDI: 7855a008 EBP: 00000002 DS: 007b ES: 007b FS: 00d8
CR0: 8005003b CR2: 6ff2e000 CR3: 7db52000 CR4: 000006d0
 [<78102257>] mwait_idle+0x0/0xf
 [<78102391>] cpu_idle+0x99/0xc6
 [<78561907>] start_kernel+0x2dc/0x2e4
 [<7856117b>] unknown_bootoption+0x0/0x202
 =======================


    and even a full sysrq+T task list
    could be useful).

fault_wake_function+0x0/0xc
 [<78127816>] sys_wait4+0x31/0x34
 [<78127840>] sys_waitpid+0x27/0x2b
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
agetty        S 9A8AFEC8     0 27417      1 (NOTLB)
       9a8afedc 00000086 00000002 9a8afec8 9a8afec4 00000000 00000000 00000000
       00000009 00000001 81ab0070 f7fbe030 d7259500 00007d9a 00000000 81ab017c
       7a022980 f6e6b2c0 0836b8d5 00000003 00000000 00000000 7fffffff f7184000
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840d6ef>] _spin_unlock_irqrestore+0x40/0x58
 [<7829cf28>] read_chan+0x2fb/0x54b
 [<78297ed7>] tty_ldisc_try+0x2e/0x32
 [<7812027d>] default_wake_function+0x0/0xc
 [<7829cc2d>] read_chan+0x0/0x54b
 [<7829a215>] tty_read+0x67/0xaf
 [<7829a1ae>] tty_read+0x0/0xaf
 [<781649a0>] vfs_read+0x88/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
bash          S 00000000     0  9224  27386 (NOTLB)
       bae9bedc 00000086 00000000 00000000 f531248c f7cb8130 00000000 00000000
       0000000a 00000000 f7cb8130 f7c740b0 fe89f040 0001ab8a 00000000 f7cb823c
       7a01a980 f7cb7540 f7cb8668 f7cb8130 00000000 00000000 7fffffff f5312000
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840d6ef>] _spin_unlock_irqrestore+0x40/0x58
 [<7829cf28>] read_chan+0x2fb/0x54b
 [<78297ed7>] tty_ldisc_try+0x2e/0x32
 [<7812027d>] default_wake_function+0x0/0xc
 [<7829cc2d>] read_chan+0x0/0x54b
 [<7829a215>] tty_read+0x67/0xaf
 [<7829a1ae>] tty_read+0x0/0xaf
 [<781649a0>] vfs_read+0x88/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
scache        S AEB09D0C     0 29135      1 (NOTLB)
       aeb09d20 00200092 00000002 aeb09d0c aeb09d08 00000000 00000000 f6f2bf54
       0000000a 00000001 a2cd4030 f7fbe030 c853d980 0001f475 00000000 a2cd413c
       7a022980 eb12ad00 20c7bfd6 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
pdflush       S A30BDF9C     0  8843      2 (L-TLB)
       a30bdfb0 00000092 00000002 a30bdf9c a30bdf98 00000000 00000292 daaee560
       0000000a 00000000 daaee030 785143e0 abd01e80 0001f615 00000000 daaee13c
       7a01a980 ea38bcc0 20e3014d 00000003 00000000 00000000 a30bdfc8 7814e33a
Call Trace:
 [<7814e33a>] pdflush+0x0/0x1a6
 [<7814e3e8>] pdflush+0xae/0x1a6
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
pdflush       S F7E2AB20     0 13129      2 (L-TLB)
       b41bbfb0 00000092 7851c174 f7e2ab20 00000001 786b80f0 b41bbf60 f7e2b050
       0000000a 00000001 f7e2ab20 eb1bd510 8245efc0 0001f3f1 00000000 f7e2ac2c
       7a022980 eb12aa80 7814e33a 00000046 7851c160 7814e33a b41bbfc8 7814e33a
Call Trace:
 [<7814e33a>] pdflush+0x0/0x1a6
 [<7814e33a>] pdflush+0x0/0x1a6
 [<7814e33a>] pdflush+0x0/0x1a6
 [<7814e3e8>] pdflush+0xae/0x1a6
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
smtpd         S D0A83B4C     0 14300      1 (NOTLB)
       d0a83b60 00200096 00000002 d0a83b4c d0a83b48 00000000 7840d6e3 785cde80
       00000007 00000000 d5a75590 785143e0 13b0b200 0001f60f 00000000 d5a7569c
       7a01a980 eba4fd40 20e292aa 00000003 00000000 00000000 d0a83b70 20e72688
Call Trace:
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<783a2023>] tcp_poll+0x1c/0x129
 [<7812cada>] process_timeout+0x0/0x5
 [<7816ec1d>] do_select+0x399/0x3e7
 [<7816f1be>] __pollwait+0x0/0xac
 [<7812027d>] default_wake_function+0x0/0xc
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<78120686>] run_rebalance_domains+0x74/0x3ae
 [<7813e986>] trace_hardirqs_on+0x10c/0x14c
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<781299b4>] local_bh_enable_ip+0xcd/0xed
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783a3bd7>] tcp_recvmsg+0x63f/0x74b
 [<7840d1f4>] _spin_lock_bh+0x38/0x43
 [<783a3bd7>] tcp_recvmsg+0x63f/0x74b
 [<7816ef17>] core_sys_select+0x2ac/0x2c9
 [<78370fdd>] sock_aio_read+0xc8/0xd4
 [<7813e986>] trace_hardirqs_on+0x10c/0x14c
 [<7816423d>] do_sync_read+0xc7/0x10a
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7816f30a>] sys_select+0xa0/0x188
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
amavisd       S 00000000     0 14884   2790 (NOTLB)
       a5367e44 00000086 00000000 00000000 7813e7a9 00000000 00000000 f3333940
       0000000a 00000001 c670eae0 ac2a7510 f3d5ec00 0001f3f9 000f4240 c670ebec
       7a022980 f457a300 c670f010 c670eae0 781299b4 a5367ea0 7fffffff 7fffffff
Call Trace:
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<781299b4>] local_bh_enable_ip+0xcd/0xed
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<783a0b90>] inet_csk_accept+0x9e/0x214
 [<78371564>] sock_attach_fd+0x53/0xb5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783bb524>] inet_accept+0x1f/0xa4
 [<78371564>] sock_attach_fd+0x53/0xb5
 [<7837262e>] sys_accept+0xb2/0x187
 [<7840d623>] _spin_unlock+0x25/0x3b
 [<78152de2>] do_wp_page+0x3b9/0x406
 [<7840d1b1>] _spin_lock+0x33/0x3e
 [<781541bf>] __handle_mm_fault+0x7c3/0x83b
 [<781197d3>] do_page_fault+0x278/0x535
 [<783727bc>] sys_socketcall+0xb9/0x242
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103e68>] restore_nocheck+0x12/0x15
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
amavisd       S 931D3B4C     0 14931   2790 (NOTLB)
       931d3b60 00000096 00000002 931d3b4c 931d3b48 00000000 00000000 00000000
       0000000a 00000000 ce58e070 785143e0 f3d5ec00 0001f3f9 00000000 ce58e17c
       7a01a980 e3809ac0 20bfa251 00000003 00000000 00000000 7fffffff 931d3f9c
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<783a2023>] tcp_poll+0x1c/0x129
 [<7816ec1d>] do_select+0x399/0x3e7
 [<7816f1be>] __pollwait+0x0/0xac
 [<7812027d>] default_wake_function+0x0/0xc
 [<7812027d>] default_wake_function+0x0/0xc
 [<783734b9>] sk_reset_timer+0xc/0x16
 [<783aa645>] tcp_rcv_established+0x5c5/0x628
 [<783b1465>] tcp_v4_rcv+0x504/0x787
 [<783b133f>] tcp_v4_rcv+0x3de/0x787
 [<7840d623>] _spin_unlock+0x25/0x3b
 [<783b1465>] tcp_v4_rcv+0x504/0x787
 [<7839949e>] ip_local_deliver+0x1aa/0x1c8
 [<78398c10>] ip_local_deliver_finish+0x0/0x14c
 [<783992bb>] ip_rcv+0x498/0x4d1
 [<7839896c>] ip_rcv_finish+0x0/0x2a4
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7837c73c>] process_backlog+0xef/0xfa
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7837c9d8>] net_rx_action+0x162/0x18f
 [<7813e986>] trace_hardirqs_on+0x10c/0x14c
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<781298c7>] local_bh_enable+0x110/0x130
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7837cbf7>] dev_queue_xmit+0x1f2/0x213
 [<7839dedc>] ip_output+0x270/0x2ac
 [<7839c7d5>] ip_finish_output+0x0/0x212
 [<7839d5e6>] ip_queue_xmit+0x316/0x35b
 [<7839b270>] dst_output+0x0/0x7
 [<7816ef17>] core_sys_select+0x2ac/0x2c9
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78152de2>] do_wp_page+0x3b9/0x406
 [<7840d623>] _spin_unlock+0x25/0x3b
 [<78152de2>] do_wp_page+0x3b9/0x406
 [<7840d1b1>] _spin_lock+0x33/0x3e
 [<781541bf>] __handle_mm_fault+0x7c3/0x83b
 [<781197d3>] do_page_fault+0x278/0x535
 [<7816f340>] sys_select+0xd6/0x188
 [<78103e68>] restore_nocheck+0x12/0x15
 [<7811955b>] do_page_fault+0x0/0x535
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
amavisd       D 00000000     0 15049   2790 (NOTLB)
       f5de5e2c 00000082 00000000 00000000 7813e7a9 e9568030 7840d6e3 785cde80
       00000009 00000000 e9568030 f7ca0070 6e5cce80 0001f616 00000000 e956813c
       7a01a980 826e4580 785cde80 7812ccc1 00000000 00000292 f5de5e3c 20e30e6e
Call Trace:
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812ccc1>] __mod_timer+0x92/0x9c
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812cada>] process_timeout+0x0/0x5
 [<7840b0ce>] io_schedule_timeout+0x1e/0x28
 [<78151eca>] congestion_wait+0x50/0x64
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7814dd9f>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<78152ded>] do_wp_page+0x3c4/0x406
 [<7840d1b1>] _spin_lock+0x33/0x3e
 [<781541bf>] __handle_mm_fault+0x7c3/0x83b
 [<78138305>] down_read_trylock+0x47/0x4e
 [<7811976d>] do_page_fault+0x212/0x535
 [<7811955b>] do_page_fault+0x0/0x535
 [<7840d96a>] error_code+0x72/0x78
 =======================
amavisd       D 00000000     0 15169   2790 (NOTLB)
       c217de2c 00000082 00000000 00000000 7813e7a9 bf1fcb20 7840d6e3 785cde80
       00000009 00000000 bf1fcb20 daaeea60 6e5cce80 0001f616 00000000 bf1fcc2c
       7a01a980 e9a420c0 785cde80 7812ccc1 00000000 00000292 c217de3c 20e30e6e
Call Trace:
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812ccc1>] __mod_timer+0x92/0x9c
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812cada>] process_timeout+0x0/0x5
 [<7840b0ce>] io_schedule_timeout+0x1e/0x28
 [<78151eca>] congestion_wait+0x50/0x64
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7814dd9f>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<78152ded>] do_wp_page+0x3c4/0x406
 [<7840d1b1>] _spin_lock+0x33/0x3e
 [<781541bf>] __handle_mm_fault+0x7c3/0x83b
 [<78138305>] down_read_trylock+0x47/0x4e
 [<7811976d>] do_page_fault+0x212/0x535
 [<7811955b>] do_page_fault+0x0/0x535
 [<7840d96a>] error_code+0x72/0x78
 =======================
squidauth.pl  S 00000122     0 15322   3559 (NOTLB)
       e7b47dd8 00000082 00000000 00000122 00000000 00000000 00000000 ec925cac
       0000000a 00000001 c05340f0 eb1bd510 31a413c0 0001f3d9 00000000 c05341fc
       7a022980 b0c607c0 c0534620 c05340f0 00000000 c05340f0 7fffffff 00000000
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783ccb1d>] unix_stream_recvmsg+0x1f2/0x475
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78370fdd>] sock_aio_read+0xc8/0xd4
 [<7814aada>] generic_file_aio_write+0x61/0xb6
 [<7816423d>] do_sync_read+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781648dd>] vfs_write+0xd1/0x10c
 [<781649b4>] vfs_read+0x9c/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
squidauth.pl  S F660FDC4     0 15323   3559 (NOTLB)
       f660fdd8 00000082 00000002 f660fdc4 f660fdc0 00000000 00000000 cbcc89ac
       00000007 00000000 f7ca14d0 785143e0 c9f296c0 0001efd2 00000000 f7ca15dc
       7a01a980 b0c602c0 2079f567 00000003 00000000 00000000 7fffffff 00000000
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783ccb1d>] unix_stream_recvmsg+0x1f2/0x475
 [<7814aa1a>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78370fdd>] sock_aio_read+0xc8/0xd4
 [<7814aada>] generic_file_aio_write+0x61/0xb6
 [<7816423d>] do_sync_read+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78165177>] __fput+0x10a/0x134
 [<781649b4>] vfs_read+0x9c/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
squidauth.pl  S 00000122     0 15324   3559 (NOTLB)
       ea80fdd8 00000082 00000000 00000122 00000000 00000000 00000000 a2fe36ac
       00000006 00000001 f7f46b60 f7ca14d0 b8d75100 0001efd2 00000000 f7f46c6c
       7a022980 ea38b540 f7f47090 f7f46b60 00000000 f7f46b60 7fffffff 00000000
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783ccb1d>] unix_stream_recvmsg+0x1f2/0x475
 [<7814aa1a>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78370fdd>] sock_aio_read+0xc8/0xd4
 [<7814aada>] generic_file_aio_write+0x61/0xb6
 [<7816423d>] do_sync_read+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78165177>] __fput+0x10a/0x134
 [<781649b4>] vfs_read+0x9c/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
unlinkd       S CBEF3E44     0 15325   3559 (NOTLB)
       cbef3e58 00000086 00000002 cbef3e44 cbef3e40 00000000 e13d8c3c e769ea60
       00000007 00000000 e769ea60 785143e0 7fd85180 0001f3f9 000f4240 e769eb6c
       7a01a980 e3809d40 20bf9ab8 00000003 00000000 00000000 db5f7600 e13d8b78
Call Trace:
 [<7816943d>] pipe_wait+0x53/0x73
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78169b7d>] pipe_read+0x2af/0x321
 [<78174b7c>] file_update_time+0x22/0x6a
 [<7816423d>] do_sync_read+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781648dd>] vfs_write+0xd1/0x10c
 [<78164176>] do_sync_read+0x0/0x10a
 [<781649a0>] vfs_read+0x88/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S A1053D0C     0 15570      1 (NOTLB)
       a1053d20 00200092 00000002 a1053d0c a1053d08 00000000 00000000 f6f2bf54
       0000000a 00000001 c670e0b0 f7fbe030 3a020200 0001f403 000f4240 c670e1bc
       7a022980 b0c60540 20c03deb 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783b2c0d>] ip4_datagram_connect+0x2ad/0x308
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S A199DD0C     0 15949      1 (NOTLB)
       a199dd20 00200092 00000002 a199dd0c a199dd08 00000000 00000000 f6f2bf54
       0000000a 00000000 81ab14d0 785143e0 f85451c0 0001f405 00000000 81ab15dc
       7a01a980 db566cc0 20c06bf2 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783b2c0d>] ip4_datagram_connect+0x2ad/0x308
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 86493D0C     0 16128      1 (NOTLB)
       86493d20 00200092 00000002 86493d0c 86493d08 00000000 00000000 f6f2bf54
       00000008 00000000 e769f490 785143e0 636dde40 0001f401 00000000 e769f59c
       7a01a980 f6b9f300 20c01f12 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783b2c0d>] ip4_datagram_connect+0x2ad/0x308
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
amavisd       S 00000000     0 16169   2790 (NOTLB)
       ae5bfe44 00000086 00000000 00000000 7813e7a9 00000000 00000000 f3333940
       0000000a 00000000 e769e030 f7ca0070 b5ba4380 0001f3f9 00000000 e769e13c
       7a01a980 f6c8f340 e769e560 e769e030 781299b4 fffffe00 7fffffff 7fffffff
Call Trace:
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<781299b4>] local_bh_enable_ip+0xcd/0xed
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<783a0b90>] inet_csk_accept+0x9e/0x214
 [<78371564>] sock_attach_fd+0x53/0xb5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783bb524>] inet_accept+0x1f/0xa4
 [<78371564>] sock_attach_fd+0x53/0xb5
 [<7837262e>] sys_accept+0xb2/0x187
 [<7840d623>] _spin_unlock+0x25/0x3b
 [<78152de2>] do_wp_page+0x3b9/0x406
 [<7840d1b1>] _spin_lock+0x33/0x3e
 [<781541bf>] __handle_mm_fault+0x7c3/0x83b
 [<781197d3>] do_page_fault+0x278/0x535
 [<783727bc>] sys_socketcall+0xb9/0x242
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103e68>] restore_nocheck+0x12/0x15
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S A984DD0C     0 16395      1 (NOTLB)
       a984dd20 00200092 00000002 a984dd0c a984dd08 00000000 00000000 f6f2bf54
       0000000a 00000000 c0535550 785143e0 a06b9a00 0001f409 00000000 c053565c
       7a01a980 eba4f5c0 20c0a94a 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783b2c0d>] ip4_datagram_connect+0x2ad/0x308
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
lmtp          S 9FEDBD0C     0 16454      1 (NOTLB)
       9fedbd20 00200092 00000002 9fedbd0c 9fedbd08 00000000 00000000 f6f2bf54
       00000003 00000001 7a1dc030 f7fbe030 ce9997c0 0001f43f 00000000 7a1dc13c
       7a022980 f457a080 20c43648 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
sadc          D 00000000     0 16498      1 (NOTLB)
       9bb11cc4 00000082 00000000 00000000 7813e7a9 e9d8d4d0 7840d6e3 785cde80
       00000007 00000000 e9d8d4d0 f7f47590 6e5cce80 0001f616 00000000 e9d8d5dc
       7a01a980 f71d5300 785cde80 7812ccc1 00000000 00000296 9bb11cd4 20e30e6e
Call Trace:
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812ccc1>] __mod_timer+0x92/0x9c
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812cada>] process_timeout+0x0/0x5
 [<7840b0ce>] io_schedule_timeout+0x1e/0x28
 [<78151eca>] congestion_wait+0x50/0x64
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7814dd9f>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<7814a483>] generic_file_buffered_write+0x4ee/0x605
 [<78128f60>] current_fs_time+0x41/0x46
 [<7814aa1a>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7814aad1>] generic_file_aio_write+0x58/0xb6
 [<7814cb18>] free_hot_cold_page+0x13b/0x167
 [<78199bb4>] ext3_file_write+0x24/0x92
 [<78164133>] do_sync_write+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781648dd>] vfs_write+0xd1/0x10c
 [<7816406c>] do_sync_write+0x0/0x10a
 [<78164896>] vfs_write+0x8a/0x10c
 [<78164e0f>] sys_write+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
lmtp          S DAAEF490     0 16638      1 (NOTLB)
       a92dfd20 00200092 00000002 daaef490 daaefa10 00000000 00000000 f6f2bf54
       00000007 00000001 daaef490 86709550 a41a1380 0001f43f 00000000 daaef59c
       7a022980 eb12a800 daaef9c0 daaef490 00000000 daaef490 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S A4427D0C     0 16714      1 (NOTLB)
       a4427d20 00200092 00000002 a4427d0c a4427d08 00000000 00000000 f6f2bf54
       0000000a 00000000 9a946b20 785143e0 49bed3c0 0001f404 00000000 9a946c2c
       7a01a980 b0c60a40 20c04fba 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783b2c0d>] ip4_datagram_connect+0x2ad/0x308
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S DABA3D0C     0 16716      1 (NOTLB)
       daba3d20 00200092 00000002 daba3d0c daba3d08 00000000 00000000 f6f2bf54
       0000000a 00000000 d5a74130 785143e0 0aeb9080 0001f403 00000000 d5a7423c
       7a01a980 f6c8fac0 20c03ad5 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783b2c0d>] ip4_datagram_connect+0x2ad/0x308
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 7813E99C     0 16720      1 (NOTLB)
       e2ebbd20 00200092 00000000 7813e99c 00200046 00000000 00000000 f6f2bf54
       00000007 00000001 ec8e34d0 f7cb0b20 05d792c0 0001f3fd 00000000 ec8e35dc
       7a022980 e9c8cd00 ec8e3a00 ec8e34d0 00000000 ec8e34d0 7fffffff 7fffffff
Call Trace:
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 841A3D0C     0 16754      1 (NOTLB)
       841a3d20 00200092 00000002 841a3d0c 841a3d08 00000000 00000000 f6f2bf54
       0000000a 00000000 f7d09490 785143e0 5c6753c0 0001f405 00000000 f7d0959c
       7a01a980 9169aa80 20c061ba 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783b2c0d>] ip4_datagram_connect+0x2ad/0x308
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 99069D0C     0 16755      1 (NOTLB)
       99069d20 00200092 00000002 99069d0c 99069d08 00000000 00000000 f6f2bf54
       0000000a 00000000 9a9460f0 785143e0 5a34bf80 0001f409 00000000 9a9461fc
       7a01a980 826e4300 20c0a4b2 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783b2c0d>] ip4_datagram_connect+0x2ad/0x308
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S C6AF7D0C     0 16756      1 (NOTLB)
       c6af7d20 00200092 00000002 c6af7d0c c6af7d08 00000000 00000000 f6f2bf54
       0000000a 00000000 a2cd5490 785143e0 3d1cf640 0001f406 00000000 a2cd559c
       7a01a980 826e4800 20c07072 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783b2c0d>] ip4_datagram_connect+0x2ad/0x308
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S BF57BD0C     0 16757      1 (NOTLB)
       bf57bd20 00200092 00000002 bf57bd0c bf57bd08 00000000 00000000 f6f2bf54
       0000000a 00000000 ecbbc0b0 785143e0 d3205700 0001f400 00000000 ecbbc1bc
       7a01a980 826e4080 20c015a0 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783b2c0d>] ip4_datagram_connect+0x2ad/0x308
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 9B7E9D0C     0 16758      1 (NOTLB)
       9b7e9d20 00200092 00000002 9b7e9d0c 9b7e9d08 00000000 00000000 f6f2bf54
       0000000a 00000000 f7fb4130 785143e0 8e156240 0001f414 00000000 f7fb423c
       7a01a980 e9c8ca80 20c160a3 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 9766BD0C     0 16759      1 (NOTLB)
       9766bd20 00200092 00000002 9766bd0c 9766bd08 00000000 00000000 f6f2bf54
       0000000a 00000000 f7fb4b60 785143e0 a07adc40 0001f409 00000000 f7fb4c6c
       7a01a980 eba4fac0 20c0a94a 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783b2c0d>] ip4_datagram_connect+0x2ad/0x308
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S B31D9D0C     0 16760      1 (NOTLB)
       b31d9d20 00200092 00000002 b31d9d0c b31d9d08 00000000 00000000 f6f2bf54
       0000000a 00000000 bf1fc0f0 785143e0 5aaed180 0001f409 00000000 bf1fc1fc
       7a01a980 eb12a300 20c0a4ba 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783b2c0d>] ip4_datagram_connect+0x2ad/0x308
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 836E9D0C     0 16761      1 (NOTLB)
       836e9d20 00200092 00000002 836e9d0c 836e9d08 00000000 00000000 f6f2bf54
       00000003 00000000 f7e220b0 785143e0 efd80140 0001f437 00000000 f7e221bc
       7a01a980 f5aa9840 20c3b241 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
cleanup       S E9507090     0 16793      1 (NOTLB)
       c546bb60 00200096 e9506b60 e9507090 7a1d2014 e9506b60 7840d6e3 7a1d2000
       0000000a 00000001 e9506b60 ec8e34d0 05d792c0 0001f3fd 00000000 e9506c6c
       7a022980 b0c60040 7a1d2000 7812ccc1 00000000 00200282 c546bb70 20f6c453
Call Trace:
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812ccc1>] __mod_timer+0x92/0x9c
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7840d6ef>] _spin_unlock_irqrestore+0x40/0x58
 [<7812cada>] process_timeout+0x0/0x5
 [<7816ec1d>] do_select+0x399/0x3e7
 [<7816f1be>] __pollwait+0x0/0xac
 [<7812027d>] default_wake_function+0x0/0xc
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7817f013>] __find_get_block+0x7c/0x158
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7817f013>] __find_get_block+0x7c/0x158
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7817ede9>] __find_get_block_slow+0x114/0x125
 [<7817ed2f>] __find_get_block_slow+0x5a/0x125
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7817f0cc>] __find_get_block+0x135/0x158
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7817f0e5>] __find_get_block+0x14e/0x158
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<783cdb0e>] unix_stream_sendmsg+0x22f/0x2f7
 [<7840d33a>] _read_lock+0x33/0x3e
 [<7816ef17>] core_sys_select+0x2ac/0x2c9
 [<78370f09>] sock_aio_write+0xcb/0xd7
 [<78164133>] do_sync_write+0xc7/0x10a
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7816f30a>] sys_select+0xa0/0x188
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
mkpir         D 00000000     0 16842   3027 (NOTLB)
       98303cc4 00000082 00000000 00000000 7813e7a9 daaeea60 7840d6e3 785cde80
       00000007 00000000 daaeea60 e9d8d4d0 6e5cce80 0001f616 00000000 daaeeb6c
       7a01a980 f457ad00 785cde80 7812ccc1 00000000 00000296 98303cd4 20e30e6e
Call Trace:
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812ccc1>] __mod_timer+0x92/0x9c
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812cada>] process_timeout+0x0/0x5
 [<7840b0ce>] io_schedule_timeout+0x1e/0x28
 [<78151eca>] congestion_wait+0x50/0x64
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7814dd9f>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<7814a483>] generic_file_buffered_write+0x4ee/0x605
 [<78128f60>] current_fs_time+0x41/0x46
 [<7814aa1a>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7814aad1>] generic_file_aio_write+0x58/0xb6
 [<78199bb4>] ext3_file_write+0x24/0x92
 [<78164133>] do_sync_write+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781649e7>] vfs_read+0xcf/0x10a
 [<7816406c>] do_sync_write+0x0/0x10a
 [<78164896>] vfs_write+0x8a/0x10c
 [<78164e0f>] sys_write+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
fetch_mail_st D C7A0FF14     0 16843   3500 (NOTLB)
       c7a0ff28 00200086 00000002 c7a0ff14 c7a0ff10 00000000 00000000 9185284c
       00000008 00000001 f1c9d4d0 f7fbe030 f9ec5080 0001f3fd 004c4b40 f1c9d5dc
       7a022980 f457aa80 20bfe5cc 00000003 00000000 00000000 c7a0ff40 91852814
Call Trace:
 [<7840bf07>] __mutex_lock_slowpath+0x14d/0x267
 [<78164ae6>] generic_file_llseek+0x29/0xa0
 [<78164ae6>] generic_file_llseek+0x29/0xa0
 [<78164abd>] generic_file_llseek+0x0/0xa0
 [<78163e17>] vfs_llseek+0x36/0x3a
 [<78164d25>] sys_llseek+0x43/0x85
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 7FC8BD0C     0 16844      1 (NOTLB)
       7fc8bd20 00200012 00000002 7fc8bd0c 7fc8bd08 00000000 00000000 f6f2bf54
       0000000a 00000000 f7cc0030 785143e0 e674adc0 0001f409 00000000 f7cc013c
       7a01a980 b0c60cc0 20c0ade1 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 00000080     0 16845      1 (NOTLB)
       d95d9d20 00200092 00000001 00000080 7a0233e0 00000002 00000000 f6f2bf54
       00000009 00000001 f7cd9550 f7cb0b20 17271c00 0001f40a 000f4240 f7cd965c
       7a022980 f6b9f800 20c0b10f 00000003 00000000 00000080 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S F601BD0C     0 16846      1 (NOTLB)
       f601bd20 00200092 00000002 f601bd0c f601bd08 00000000 00000000 f6f2bf54
       00000009 00000000 f7cc94d0 785143e0 17fcbb80 0001f40a 000f4240 f7cc95dc
       7a01a980 f6e7e300 20c0b11e 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 9E3A7D0C     0 16847      1 (NOTLB)
       9e3a7d20 00200092 00000002 9e3a7d0c 9e3a7d08 00000000 00000000 f6f2bf54
       00000009 00000001 f7cd8b20 f7fbe030 18c318c0 0001f40a 00000000 f7cd8c2c
       7a022980 e38090c0 20c0b12b 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 95655D0C     0 16848      1 (NOTLB)
       95655d20 00200092 00000002 95655d0c 95655d08 00000000 00000000 f6f2bf54
       00000009 00000001 ac2a60b0 f7fbe030 19a7fa80 0001f40a 00000000 ac2a61bc
       7a022980 f6c8f0c0 20c0b13a 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S C5229D0C     0 16849      1 (NOTLB)
       c5229d20 00200092 00000002 c5229d0c c5229d08 00000000 00000000 f6f2bf54
       00000009 00000000 e9fc3510 785143e0 1a7d9a00 0001f40a 00000000 e9fc361c
       7a01a980 f6e7e800 20c0b149 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 840DFD0C     0 16850      1 (NOTLB)
       840dfd20 00200092 00000002 840dfd0c 840dfd08 00000000 00000000 f6f2bf54
       00000008 00000001 e9d8caa0 f7fbe030 1b533980 0001f40a 00000000 e9d8cbac
       7a022980 f5aa9d40 20c0b157 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S C3D33D0C     0 16851      1 (NOTLB)
       c3d33d20 00200092 00000002 c3d33d0c c3d33d08 00000000 00000000 f6f2bf54
       00000008 00000001 ac2a6ae0 f7fbe030 1c381b40 0001f40a 00000000 ac2a6bec
       7a022980 f6c8fd40 20c0b165 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S D3603D0C     0 16852      1 (NOTLB)
       d3603d20 00200092 00000002 d3603d0c d3603d08 00000000 00000000 f6f2bf54
       00000008 00000000 a2cd4a60 785143e0 1d0dbac0 0001f40a 000f4240 a2cd4b6c
       7a01a980 dc7ae5c0 20c0b174 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S BEFAFD0C     0 16853      1 (NOTLB)
       befafd20 00200092 00000002 befafd0c befafd08 00000000 00000000 f6f2bf54
       00000008 00000001 d129f510 f7fbe030 1de35a40 0001f40a 000f4240 d129f61c
       7a022980 dc7aed40 20c0b181 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S B1745D0C     0 16854      1 (NOTLB)
       b1745d20 00200092 00000002 b1745d0c b1745d08 00000000 00000000 f6f2bf54
       00000008 00000001 f1c9c070 f7fbe030 1ef602c0 0001f40a 00000000 f1c9c17c
       7a022980 f6c8f5c0 20c0b193 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S C1985D0C     0 16855      1 (NOTLB)
       c1985d20 00200092 00000002 c1985d0c c1985d08 00000000 00000000 f6f2bf54
       00000008 00000000 d129eae0 785143e0 1f9ddb80 0001f40a 00000000 d129ebec
       7a01a980 f650aa40 20c0b19f 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 84C57D0C     0 16856      1 (NOTLB)
       84c57d20 00200092 00000002 84c57d0c 84c57d08 00000000 00000000 f6f2bf54
       00000008 00000000 f1c9caa0 785143e0 20737b00 0001f40a 00000000 f1c9cbac
       7a01a980 dc7ae0c0 20c0b1af 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S B3A63D0C     0 16857      1 (NOTLB)
       b3a63d20 00200092 00000002 b3a63d0c b3a63d08 00000000 00000000 f6f2bf54
       00000007 00000000 ca81b490 785143e0 21491a80 0001f40a 00000000 ca81b59c
       7a01a980 dc7ae340 20c0b1bb 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 9E141D0C     0 16858      1 (NOTLB)
       9e141d20 00200092 00000002 9e141d0c 9e141d08 00000000 00000000 f6f2bf54
       00000007 00000000 f7cd0ae0 785143e0 221eba00 0001f40a 00000000 f7cd0bec
       7a01a980 e9a42ac0 20c0b1cb 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S A887BD0C     0 16859      1 (NOTLB)
       a887bd20 00200092 00000002 a887bd0c a887bd08 00000000 00000000 f6f2bf54
       00000007 00000001 ce58eaa0 f7fbe030 22f45980 0001f40a 00000000 ce58ebac
       7a022980 e9a425c0 20c0b1d7 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S D1A11D0C     0 16860      1 (NOTLB)
       d1a11d20 00200092 00000002 d1a11d0c d1a11d08 00000000 00000000 f6f2bf54
       00000008 00000000 f7cb1550 785143e0 23c9f900 0001f40a 00000000 f7cb165c
       7a01a980 f457a800 20c0b1e6 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
(...)
Comment 3 Krzysztof Oledzki 2007-10-18 08:09:02 UTC
=======================
smtpd         S 00000080     0 16861      1 (NOTLB)
       9083fd20 00200092 00000001 00000080 7a0233e0 00000002 00000000 f6f2bf54
       00000008 00000001 d129e0b0 f7cb0b20 249f9880 0001f40a 00000000 d129e1bc
       7a022980 dc7ae840 20c0b1f2 00000003 00000000 00000080 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 00000080     0 16862      1 (NOTLB)
       c5043d20 00200092 00000001 00000080 7a01b3e0 00000002 00000000 f6f2bf54
       00000008 00000000 ec8e2070 f7cb0b20 25847a40 0001f40a 000f4240 ec8e217c
       7a01a980 f457a580 20c0b201 00000003 00000000 00000080 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S C8C75D0C     0 16863      1 (NOTLB)
       c8c75d20 00200092 00000002 c8c75d0c c8c75d08 00000000 00000000 f6f2bf54
       00000007 00000001 dc459550 f7fbe030 265a19c0 0001f40a 00000000 dc45965c
       7a022980 cce05d40 20c0b210 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S B0931D0C     0 16864      1 (NOTLB)
       b0931d20 00200092 00000002 b0931d0c b0931d08 00000000 00000000 f6f2bf54
       00000008 00000000 a13b3590 785143e0 273efb80 0001f40a 00000000 a13b369c
       7a01a980 f67c9040 20c0b221 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 89769D0C     0 16865      1 (NOTLB)
       89769d20 00200092 00000002 89769d0c 89769d08 00000000 00000000 f6f2bf54
       0000000a 00000000 a13b2130 785143e0 c7b523c0 0001f40d 00000000 a13b223c
       7a01a980 f6e6bcc0 20c0eefb 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 991F7D0C     0 16866      1 (NOTLB)
       991f7d20 00200092 00000002 991f7d0c 991f7d08 00000000 00000000 f6f2bf54
       0000000a 00000000 a13b2b60 785143e0 c48cec40 0001f40e 00000000 a13b2c6c
       7a01a980 cce050c0 20c0ff8b 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 8F65BD0C     0 16867      1 (NOTLB)
       8f65bd20 00200092 00000002 8f65bd0c 8f65bd08 00000000 00000000 f6f2bf54
       00000009 00000000 dc458b20 785143e0 b817d600 0001f40e 000f4240 dc458c2c
       7a01a980 cce05340 20c0febb 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78154205>] __handle_mm_fault+0x809/0x83b
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S B6DF7D0C     0 16868      1 (NOTLB)
       b6df7d20 00200092 00000002 b6df7d0c b6df7d08 00000000 00000000 f6f2bf54
       0000000a 00000000 dc4580f0 785143e0 6614bd80 0001f410 00000000 dc4581fc
       7a01a980 cce05ac0 20c11ae8 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S A15E7D0C     0 16869      1 (NOTLB)
       a15e7d20 00200092 00000002 a15e7d0c a15e7d08 00000000 00000000 f6f2bf54
       0000000a 00000000 9a4e3490 785143e0 29398440 0001f410 00000000 9a4e359c
       7a01a980 cce05840 20c116ec 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S D4F43D0C     0 16870      1 (NOTLB)
       d4f43d20 00200092 00000002 d4f43d0c d4f43d08 00000000 00000000 f6f2bf54
       00000009 00000000 9a4e2a60 785143e0 2891ab80 0001f410 00000000 9a4e2b6c
       7a01a980 f6e6ba40 20c116e0 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 846F5D0C     0 16871      1 (NOTLB)
       846f5d20 00200092 00000002 846f5d0c 846f5d08 00000000 00000000 f6f2bf54
       00000006 00000000 df6e20f0 785143e0 15eaaac0 0001f412 00000000 df6e21fc
       7a01a980 9169a080 20c13735 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 88395D0C     0 16872      1 (NOTLB)
       88395d20 00200092 00000002 88395d0c 88395d08 00000000 00000000 f6f2bf54
       00000006 00000000 c670f510 785143e0 f2466900 0001f412 000f4240 c670f61c
       7a01a980 826e4d00 20c145a7 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S CD089D0C     0 16873      1 (NOTLB)
       cd089d20 00200092 00000002 cd089d0c cd089d08 00000000 00000000 f6f2bf54
       00000007 00000000 9a947550 785143e0 d9d6ce40 0001f413 00000000 9a94765c
       7a01a980 e3809340 20c154d5 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 98913D0C     0 16874      1 (NOTLB)
       98913d20 00200092 00000002 98913d0c 98913d08 00000000 00000000 f6f2bf54
       00000006 00000000 9a4e2030 785143e0 9c44f9c0 0001f414 00000000 9a4e213c
       7a01a980 e38095c0 20c16193 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S C3685D0C     0 16875      1 (NOTLB)
       c3685d20 00200092 00000002 c3685d0c c3685d08 00000000 00000000 f6f2bf54
       0000000a 00000000 f5de34d0 785143e0 a6b44300 0001f416 00000000 f5de35dc
       7a01a980 a7484cc0 20c183ce 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S B14A5D0C     0 16876      1 (NOTLB)
       b14a5d20 00200092 00000002 b14a5d0c b14a5d08 00000000 00000000 f6f2bf54
       00000009 00000000 f5de2aa0 785143e0 08466680 0001f416 00000000 f5de2bac
       7a01a980 db566540 20c1796d 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S A20E9D0C     0 16877      1 (NOTLB)
       a20e9d20 00200092 00000002 a20e9d0c a20e9d08 00000000 00000000 f6f2bf54
       00000009 00000000 f5de2070 785143e0 5bc7f6c0 0001f416 00000000 f5de217c
       7a01a980 db566a40 20c17ee6 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 8C27FD0C     0 16878      1 (NOTLB)
       8c27fd20 00200092 00000002 8c27fd0c 8c27fd08 00000000 00000000 f6f2bf54
       00000002 00000000 c978b510 785143e0 3832f740 0001f417 00000000 c978b61c
       7a01a980 db566040 20c18d56 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S A2C15D0C     0 16879      1 (NOTLB)
       a2c15d20 00200092 00000002 a2c15d0c a2c15d08 00000000 00000000 f6f2bf54
       00000006 00000000 c978aae0 785143e0 1e275fc0 0001f418 00000000 c978abec
       7a01a980 db5662c0 20c19c6b 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S C70F5D0C     0 16880      1 (NOTLB)
       c70f5d20 00200092 00000002 c70f5d0c c70f5d08 00000000 00000000 f6f2bf54
       00000008 00000000 c978a0b0 785143e0 2e5dc3c0 0001f418 00000000 c978a1bc
       7a01a980 e3809840 20c19d7b 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 99BE1D0C     0 16881      1 (NOTLB)
       99be1d20 00200092 00000002 99be1d0c 99be1d08 00000000 00000000 f6f2bf54
       00000008 00000000 dbd9d550 785143e0 3cbb2200 0001f418 00000000 dbd9d65c
       7a01a980 f7cb7040 20c19e6a 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S DBD9D058     0 16882      1 (NOTLB)
       90b79d20 00200092 00000000 dbd9d058 7813f8ca 00000000 00000000 f6f2bf54
       00000009 00000000 dbd9cb20 c4d33590 75d50d80 0001f418 00000000 dbd9cc2c
       7a01a980 f67c9a40 dbd9d050 dbd9cb20 00000000 dbd9cb20 7fffffff 7fffffff
Call Trace:
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S E7CFBD0C     0 16883      1 (NOTLB)
       e7cfbd20 00200092 00000002 e7cfbd0c e7cfbd08 00000000 00000000 f6f2bf54
       0000000a 00000000 c4d33590 785143e0 897131c0 0001f418 00000000 c4d3369c
       7a01a980 cce055c0 20c1a370 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S B2CE9D0C     0 16884      1 (NOTLB)
       b2ce9d20 00200092 00000002 b2ce9d0c b2ce9d08 00000000 00000000 f6f2bf54
       00000002 00000000 c4d32130 785143e0 c8ec4dc0 0001f419 00000000 c4d3223c
       7a01a980 e9fb9d00 20c1b860 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 7FE3FD0C     0 16885      1 (NOTLB)
       7fe3fd20 00200012 00000002 7fe3fd0c 7fe3fd08 00000000 00000000 f6f2bf54
       0000000a 00000000 c4d32b60 785143e0 2a04df00 0001f41a 00000000 c4d32c6c
       7a01a980 f7cb77c0 20c1bebd 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S CA9E7D0C     0 16886      1 (NOTLB)
       ca9e7d20 00200092 00000002 ca9e7d0c ca9e7d08 00000000 00000000 f6f2bf54
       0000000a 00000000 dbd9c0f0 785143e0 82e43980 0001f41b 00000000 dbd9c1fc
       7a01a980 f71d5580 20c1d559 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 94943D0C     0 16887      1 (NOTLB)
       94943d20 00200092 00000002 94943d0c 94943d08 00000000 00000000 f6f2bf54
       0000000a 00000000 84393490 785143e0 97673e80 0001f41f 00000000 8439359c
       7a01a980 da41fd40 20c219cb 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 8EFF7D0C     0 16888      1 (NOTLB)
       8eff7d20 00200092 00000002 8eff7d0c 8eff7d08 00000000 00000000 f6f2bf54
       00000009 00000000 84392a60 785143e0 7c0bf580 0001f420 00000000 84392b6c
       7a01a980 9169ad00 20c228c7 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 8B12FD0C     0 16889      1 (NOTLB)
       8b12fd20 00200092 00000002 8b12fd0c 8b12fd08 00000000 00000000 f6f2bf54
       0000000a 00000000 84392030 785143e0 0b7719c0 0001f426 00000000 8439213c
       7a01a980 9169a300 20c28611 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S C61A9D0C     0 16890      1 (NOTLB)
       c61a9d20 00200092 00000002 c61a9d0c c61a9d08 00000000 00000000 f6f2bf54
       00000006 00000000 9963f4d0 785143e0 a4f47a80 0001f42a 00000000 9963f5dc
       7a01a980 da41f0c0 20c2d33d 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S C0EDFD0C     0 16891      1 (NOTLB)
       c0edfd20 00200092 00000002 c0edfd0c c0edfd08 00000000 00000000 f6f2bf54
       0000000a 00000000 9963eaa0 785143e0 d4efcdc0 0001f42a 00000000 9963ebac
       7a01a980 da41fac0 20c2d662 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 9AEC1D0C     0 16892      1 (NOTLB)
       9aec1d20 00200092 00000002 9aec1d0c 9aec1d08 00000000 00000000 f6f2bf54
       0000000a 00000000 a25c7510 785143e0 3c1cc480 0001f42b 00000000 a25c761c
       7a01a980 a1257cc0 20c2dd23 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S DC5A3D0C     0 16893      1 (NOTLB)
       dc5a3d20 00200092 00000002 dc5a3d0c dc5a3d08 00000000 00000000 f6f2bf54
       00000009 00000001 a25c6ae0 f7fbe030 1cc2a500 0001f42b 000f4240 a25c6bec
       7a022980 a1257a40 20c2db17 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S D8409D0C     0 16894      1 (NOTLB)
       d8409d20 00200092 00000002 d8409d0c d8409d08 00000000 00000000 f6f2bf54
       0000000a 00000000 9963e070 785143e0 894ce640 0001f42b 00000000 9963e17c
       7a01a980 da41f840 20c2e233 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S E2FEBD0C     0 16895      1 (NOTLB)
       e2febd20 00200092 00000002 e2febd0c e2febd08 00000000 00000000 f6f2bf54
       00000009 00000000 87dc9550 785143e0 b77e7600 0001f42b 000f4240 87dc965c
       7a01a980 da41f5c0 20c2e53b 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S D6923D0C     0 16896      1 (NOTLB)
       d6923d20 00200092 00000002 d6923d0c d6923d08 00000000 00000000 f6f2bf54
       0000000a 00000000 87dc8b20 785143e0 8792e4c0 0001f42c 00000000 87dc8c2c
       7a01a980 da41f340 20c2f2df 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S C6CC1D0C     0 16897      1 (NOTLB)
       c6cc1d20 00200092 00000002 c6cc1d0c c6cc1d08 00000000 00000000 f6f2bf54
       0000000a 00000000 87dc80f0 785143e0 3f38b840 0001f430 00000000 87dc81fc
       7a01a980 a1257540 20c3313a 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S B98C7D0C     0 16898      1 (NOTLB)
       b98c7d20 00200092 00000002 b98c7d0c b98c7d08 00000000 00000000 f6f2bf54
       00000006 00000000 a25c60b0 785143e0 3f75c140 0001f430 00000000 a25c61bc
       7a01a980 a12572c0 20c3313e 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S D2203D0C     0 16899      1 (NOTLB)
       d2203d20 00200092 00000002 d2203d0c d2203d08 00000000 00000000 f6f2bf54
       00000007 00000000 9a747590 785143e0 07413740 0001f431 00000000 9a74769c
       7a01a980 a12577c0 20c33e56 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 80687D0C     0 16900      1 (NOTLB)
       80687d20 00200092 00000002 80687d0c 80687d08 00000000 00000000 f6f2bf54
       00000006 00000000 9a746b60 785143e0 414f46c0 0001f431 000f4240 9a746c6c
       7a01a980 e9fb9a80 20c34226 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S C473DD0C     0 16901      1 (NOTLB)
       c473dd20 00200092 00000002 c473dd0c c473dd08 00000000 00000000 f6f2bf54
       0000000a 00000000 9a746130 785143e0 c935d080 0001f432 00000000 9a74623c
       7a01a980 e9fb9800 20c35bd4 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S DA57DD0C     0 16902      1 (NOTLB)
       da57dd20 00200092 00000002 da57dd0c da57dd08 00000000 00000000 f6f2bf54
       0000000a 00000000 cd433490 785143e0 62942d00 0001f436 00000000 cd43359c
       7a01a980 c01f2d00 20c39835 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 83283D0C     0 16903      1 (NOTLB)
       83283d20 00200092 00000002 83283d0c 83283d08 00000000 00000000 f6f2bf54
       00000009 00000001 cd432030 f7fbe030 48a799c0 0001f438 00000000 cd43213c
       7a022980 e9fb9300 20c3b810 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S E3FE1D0C     0 16904      1 (NOTLB)
       e3fe1d20 00200092 00000002 e3fe1d0c e3fe1d08 00000000 00000000 f6f2bf54
       00000009 00000000 ce74d4d0 785143e0 49126980 0001f438 00000000 ce74d5dc
       7a01a980 9cfd0d40 20c3b817 00000001 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 8F511D0C     0 16905      1 (NOTLB)
       8f511d20 00200092 00000002 8f511d0c 8f511d08 00000000 00000000 f6f2bf54
       00000009 00000001 ce74caa0 f7fbe030 49f74b40 0001f438 00000000 ce74cbac
       7a022980 a74847c0 20c3b827 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S F26E9D0C     0 16906      1 (NOTLB)
       f26e9d20 00200092 00000002 f26e9d0c f26e9d08 00000000 00000000 f6f2bf54
       00000009 00000000 cd432a60 785143e0 4adc2d00 0001f438 00000000 cd432b6c
       7a01a980 9cfd0ac0 20c3b838 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
Comment 4 Krzysztof Oledzki 2007-10-18 08:09:19 UTC
=======================
smtpd         S 8A7EBD0C     0 16907      1 (NOTLB)
       8a7ebd20 00200092 00000002 8a7ebd0c 8a7ebd08 00000000 00000000 f6f2bf54
       00000007 00000001 ce74c070 f7fbe030 a1d4bec0 0001f43c 00000000 ce74c17c
       7a022980 e9fb9580 20c40104 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 912A9D0C     0 16908      1 (NOTLB)
       912a9d20 00200092 00000002 912a9d0c 912a9d08 00000000 00000000 f6f2bf54
       0000000a 00000000 89695510 785143e0 82018cc0 0001f43e 00000000 8969561c
       7a01a980 9cfd0840 20c4207d 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S A1B39D0C     0 16909      1 (NOTLB)
       a1b39d20 00200092 00000002 a1b39d0c a1b39d08 00000000 00000000 f6f2bf54
       0000000a 00000000 89694ae0 785143e0 a8272cc0 0001f43e 00000000 89694bec
       7a01a980 9cfd00c0 20c422fd 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 9A793D0C     0 16910      1 (NOTLB)
       9a793d20 00200092 00000002 9a793d0c 9a793d08 00000000 00000000 f6f2bf54
       0000000a 00000000 896940b0 785143e0 15c41680 0001f43f 00000000 896941bc
       7a01a980 a063ccc0 20c42a2c 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S E2A39D0C     0 16912      1 (NOTLB)
       e2a39d20 00200092 00000002 e2a39d0c e2a39d08 00000000 00000000 f6f2bf54
       00000002 00000000 867080f0 785143e0 acfca240 0001f441 00000000 867081fc
       7a01a980 9cfd0340 20c455a3 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S E278FD0C     0 16913      1 (NOTLB)
       e278fd20 00200092 00000002 e278fd0c e278fd08 00000000 00000000 f6f2bf54
       00000004 00000000 86708b20 785143e0 badfee80 0001f441 00000000 86708c2c
       7a01a980 a063c540 20c4568c 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 8B41DD0C     0 16914      1 (NOTLB)
       8b41dd20 00200092 00000002 8b41dd0c 8b41dd08 00000000 00000000 f6f2bf54
       0000000a 00000000 bf519590 785143e0 e2edd680 0001f441 000f4240 bf51969c
       7a01a980 a063c7c0 20c4592c 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S BCB57D0C     0 16915      1 (NOTLB)
       bcb57d20 00200092 00000002 bcb57d0c bcb57d08 00000000 00000000 f6f2bf54
       00000002 00000000 bf518130 785143e0 cda772c0 0001f443 00000000 bf51823c
       7a01a980 e33f5d00 20c47955 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S CD79FD0C     0 16916      1 (NOTLB)
       cd79fd20 00200092 00000002 cd79fd0c cd79fd08 00000000 00000000 f6f2bf54
       00000006 00000000 bf518b60 785143e0 3c388080 0001f444 00000000 bf518c6c
       7a01a980 a7484a40 20c48096 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S ED8E7D0C     0 16917      1 (NOTLB)
       ed8e7d20 00200092 00000002 ed8e7d0c ed8e7d08 00000000 00000000 f6f2bf54
       00000007 00000000 b4993490 785143e0 46790380 0001f444 00000000 b499359c
       7a01a980 a7484540 20c48142 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S A9E2FD0C     0 16918      1 (NOTLB)
       a9e2fd20 00200092 00000002 a9e2fd0c a9e2fd08 00000000 00000000 f6f2bf54
       0000000a 00000000 b4992030 785143e0 d8724980 0001f445 000f4240 b499213c
       7a01a980 a74842c0 20c49b96 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S C3213D0C     0 16919      1 (NOTLB)
       c3213d20 00200092 00000002 c3213d0c c3213d08 00000000 00000000 f6f2bf54
       0000000a 00000000 b4992a60 785143e0 c9414ac0 0001f449 00000000 b4992b6c
       7a01a980 e6fbdd40 20c4ddb7 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S F29BDD0C     0 16920      1 (NOTLB)
       f29bdd20 00200092 00000002 f29bdd0c f29bdd08 00000000 00000000 f6f2bf54
       0000000a 00000000 88d9f4d0 785143e0 d1e650c0 0001f44a 00000000 88d9f5dc
       7a01a980 a063c2c0 20c4ef0c 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 857EBD0C     0 16921      1 (NOTLB)
       857ebd20 00200092 00000002 857ebd0c 857ebd08 00000000 00000000 f6f2bf54
       00000008 00000000 afcaf510 785143e0 e8405c80 0001f44a 00000000 afcaf61c
       7a01a980 e6fbdac0 20c4f085 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S E1483D0C     0 16922      1 (NOTLB)
       e1483d20 00200092 00000002 e1483d0c e1483d08 00000000 00000000 f6f2bf54
       00000009 00000000 afcaeae0 785143e0 3c978c40 0001f44b 00000000 afcaebec
       7a01a980 a7484040 20c4f60d 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S AAABBD0C     0 16923      1 (NOTLB)
       aaabbd20 00200092 00000002 aaabbd0c aaabbd08 00000000 00000000 f6f2bf54
       00000008 00000000 afcae0b0 785143e0 5127d480 0001f44b 00000000 afcae1bc
       7a01a980 a063ca40 20c4f764 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 8C039D0C     0 16924      1 (NOTLB)
       8c039d20 00200092 00000002 8c039d0c 8c039d08 00000000 00000000 f6f2bf54
       0000000a 00000000 8a4ab550 785143e0 8479a5c0 0001f44b 00000000 8a4ab65c
       7a01a980 a063c040 20c4fac1 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S CB7EBD0C     0 16925      1 (NOTLB)
       cb7ebd20 00200092 00000002 cb7ebd0c cb7ebd08 00000000 00000000 f6f2bf54
       0000000a 00000000 8a4aab20 785143e0 3df6ffc0 0001f44c 00000000 8a4aac2c
       7a01a980 88ea2cc0 20c506e8 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S D105FD0C     0 16926      1 (NOTLB)
       d105fd20 00200092 00000002 d105fd0c d105fd08 00000000 00000000 f6f2bf54
       00000009 00000001 8a4aa0f0 f7fbe030 15da5540 0001f44d 00000000 8a4aa1fc
       7a022980 e33f5a80 20c5150f 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S D59B9D0C     0 16927      1 (NOTLB)
       d59b9d20 00200092 00000002 d59b9d0c d59b9d08 00000000 00000000 f6f2bf54
       00000009 00000000 88d9eaa0 785143e0 a145a380 0001f44f 00000000 88d9ebac
       7a01a980 88ea2040 20c53fc0 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smtpd         S 93F03D0C     0 16928      1 (NOTLB)
       93f03d20 00200092 00000002 93f03d0c 93f03d08 00000000 00000000 f6f2bf54
       00000009 00000000 88d9e070 785143e0 a22a8540 0001f44f 00000000 88d9e17c
       7a01a980 e33f5300 20c53fcf 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
Comment 5 Krzysztof Oledzki 2007-10-18 08:10:24 UTC
On Sat, 29 Sep 2007, Nick Piggin wrote:


    On Friday 28 September 2007 18:42, Krzysztof Oledzki wrote:

        Hello,

        I am experiencing weird system hangs. About once every 2-5 weeks the system
        freezes and stops accepting remote connections, so it is no longer possible to
        connect to the most important services: smtp (postfix), www (squid) or even
        ssh. A connection is accepted but then hangs.

        What is strange is that a previously established ssh session remains usable.
        It is possible to work on such a system until you do something stupid like
        "less /var/log/all.log". Using strace I found that the process blocks on:

    Is this a regression? If so, what's the most recent kernel that didn't show
    the problem?

    The symptoms could be consistent with some place doing a
    balance_dirty_pages while holding a lock that is required for IO, but I can't
    see a smoking gun (you've got contention on i_mutex, but that should be
    OK).

OK. It did it again :(, so I am sending additional debug information. Today I discovered that "echo i > /proc/sysrq-trigger" makes my system usable again.
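(Editorial note: the magic-sysrq interface used above can also be driven from a script rather than a terminal. A minimal, hypothetical sketch follows; the `path` parameter is not part of any kernel API and exists only so the write can be exercised against an ordinary file. On a real system this requires root and `/proc/sysrq-trigger` to be available.)

```python
# Sketch of driving the magic-sysrq interface programmatically, the
# equivalent of `echo i > /proc/sysrq-trigger`. The `path` parameter is a
# hypothetical hook so the write can be tested against a plain file.
def trigger_sysrq(key, path="/proc/sysrq-trigger"):
    """Write a single sysrq command character (e.g. 'i', 'm', 'd', 't')."""
    if len(key) != 1:
        raise ValueError("sysrq command must be a single character")
    with open(path, "w") as f:
        f.write(key)
```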

    Can you see if there is any memory under writeback that isn't being
    completed (sysrq+M),

SysRq : Show Memory
Mem-info:
DMA per-cpu:
CPU    0: Hot: hi:    0, btch:   1 usd:   0   Cold: hi:    0, btch:   1 usd:   0
CPU    1: Hot: hi:    0, btch:   1 usd:   0   Cold: hi:    0, btch:   1 usd:   0
Normal per-cpu:
CPU    0: Hot: hi:  186, btch:  31 usd:  11   Cold: hi:   62, btch:  15 usd:   8
CPU    1: Hot: hi:  186, btch:  31 usd: 170   Cold: hi:   62, btch:  15 usd:  23
Active:281859 inactive:140551 dirty:46161 writeback:0 unstable:0
 free:39081 slab:50571 mapped:3261 pagetables:356 bounce:0
DMA free:8704kB min:512kB low:640kB high:768kB active:48kB inactive:0kB present:16256kB pages_scanned:12 all_unreclaimable? no
lowmem_reserve[]: 0 2015
Normal free:147620kB min:65016kB low:81268kB high:97524kB active:1127388kB inactive:562204kB present:2064260kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0
DMA: 4*4kB 0*8kB 1*16kB 1*32kB 1*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 2*4096kB = 8704kB
Normal: 15781*4kB 522*8kB 264*16kB 56*32kB 9*64kB 8*128kB 6*256kB 1*512kB 1*1024kB 2*2048kB 16*4096kB = 147620kB
Swap cache: add 5022, delete 4928, find 2281/2804, race 0+0
Free swap  = 2927208kB
Total swap = 2927608kB
Free swap:       2927208kB
524224 pages of RAM
0 pages of HIGHMEM
6345 reserved pages
249411 pages shared
94 pages swap cached
46161 pages dirty
0 pages writeback
3261 pages mapped
50571 pages slab
356 pages pagetables

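(Editorial note: the `46161 pages dirty` counter above is the figure the attached graphs track via `grep ^Dirty: /proc/meminfo`; at 4 kB pages that is 184644 kB. A small sketch of the same sampling in Python, with the parser separated out so it can be checked against canned text rather than a live `/proc`:)

```python
# Sketch of sampling the Dirty counter that the attached graphs plot,
# equivalent to `grep ^Dirty: /proc/meminfo`. The parser is split out so
# it can be exercised on canned /proc/meminfo text.
def parse_meminfo_field(text, field="Dirty"):
    """Return the value of a /proc/meminfo field in kB, or None if absent."""
    for line in text.splitlines():
        if line.startswith(field + ":"):
            # Line format: "Dirty:  184644 kB"
            return int(line.split()[1])
    return None

def sample_dirty(path="/proc/meminfo"):
    """Read the current Dirty value (kB) from a live system."""
    with open(path) as f:
        return parse_meminfo_field(f.read())
```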

    also a list the locks held after the hang might be
    helpful (compile in lockdep and sysrq+D)

SysRq : Show Locks Held

Showing all locks held in the system:
1 lock held by syslogd/2197:
 #0:  (&inode->i_mutex){--..}, at: [<7814aabc>] generic_file_aio_write+0x43/0xb6
1 lock held by freshclam/2898:
 #0:  (&inode->i_mutex){--..}, at: [<7814aabc>] generic_file_aio_write+0x43/0xb6
1 lock held by crond/3061:
 #0:  (&inode->i_mutex){--..}, at: [<7814aabc>] generic_file_aio_write+0x43/0xb6
1 lock held by snmpd/3546:
 #0:  (&inode->i_mutex){--..}, at: [<7814aabc>] generic_file_aio_write+0x43/0xb6
1 lock held by squid/3606:
 #0:  (&inode->i_mutex){--..}, at: [<7814aabc>] generic_file_aio_write+0x43/0xb6
1 lock held by zabbix_agentd/3728:
 #0:  (&inode->i_mutex){--..}, at: [<7814aabc>] generic_file_aio_write+0x43/0xb6
1 lock held by agetty/3791:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by agetty/3793:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by agetty/3794:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by agetty/3795:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by agetty/3796:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by agetty/3797:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by agetty/3798:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
1 lock held by agetty/3799:
 #0:  (&tty->atomic_read_lock){--..}, at: [<7829cdde>] read_chan+0x1b1/0x54b
2 locks held by bash/3887:
 #0:  (sysrq_key_table_lock){....}, at: [<782a6ea9>] __handle_sysrq+0x17/0x109
 #1:  (tasklist_lock){..-?}, at: [<7813d883>] debug_show_all_locks+0x3b/0x12d
1 lock held by amavisd/7227:
 #0:  (&mm->mmap_sem){----}, at: [<781196ad>] do_page_fault+0x152/0x535
1 lock held by sadc/8202:
 #0:  (&inode->i_mutex){--..}, at: [<7814aabc>] generic_file_aio_write+0x43/0xb6
1 lock held by emerge/8204:
 #0:  (&inode->i_mutex){--..}, at: [<7814aabc>] generic_file_aio_write+0x43/0xb6
1 lock held by mkpir/8297:
 #0:  (&inode->i_mutex){--..}, at: [<7814aabc>] generic_file_aio_write+0x43/0xb6
1 lock held by screen/8365:
 #0:  (&inode->i_mutex){--..}, at: [<7814aabc>] generic_file_aio_write+0x43/0xb6

=============================================



    Is anything currently running? (sysrq+P

Empty.
Comment 6 Krzysztof Oledzki 2007-10-18 08:11:41 UTC
    and even a full sysrq+T task list
    could be useful).

SysRq : Show State

                         free                        sibling
  task             PC    stack   pid father child younger older
init          S 7A1C1B4C     0     1      0 (NOTLB)
       7a1c1b60 00000016 00000002 7a1c1b4c 7a1c1b48 00000000 7840d6e3 7a1d2000
       0000000a 00000001 f7fbf490 f7fbe030 9c45a103 0001ff61 00005b8a f7fbf59c
       7a022980 f71bdd00 219904e6 00000003 00000000 00000000 7a1c1b70 2199186c
Call Trace:
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7812cada>] process_timeout+0x0/0x5
 [<7816ec1d>] do_select+0x399/0x3e7
 [<7816f1be>] __pollwait+0x0/0xac
 [<7812027d>] default_wake_function+0x0/0xc
 [<781298c7>] local_bh_enable+0x110/0x130
 [<7813e986>] trace_hardirqs_on+0x10c/0x14c
 [<7837cb10>] dev_queue_xmit+0x10b/0x213
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78172a90>] __d_lookup+0xbb/0x10b
 [<78172a3f>] __d_lookup+0x6a/0x10b
 [<7840d623>] _spin_unlock+0x25/0x3b
 [<78172ac3>] __d_lookup+0xee/0x10b
 [<7816a323>] do_lookup+0x4f/0x143
 [<78171ebc>] dput+0x16/0xe4
 [<7816be16>] __link_path_walk+0xa1a/0xb5e
 [<7816ef17>] core_sys_select+0x2ac/0x2c9
 [<7816c003>] link_path_walk+0xa9/0xb3
 [<78160ec2>] kmem_cache_free+0x9a/0xa1
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<7816f30a>] sys_select+0xa0/0x188
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
kthreadd      S F7FBEA60     0     2      0 (L-TLB)
       7a1c3fd0 00000016 7851aa94 f7fbea60 00000001 786b7ec8 f7fbef90 f7fbef90
       0000000a 00000000 f7fbea60 edcd0030 e200f0d9 0001df92 00000d92 f7fbeb6c
       7a01a980 f7cafa40 00001890 00000246 7851aa80 00001890 7851aa60 00001890
Call Trace:
 [<78134f4d>] kthreadd+0x61/0xfe
 [<78134eec>] kthreadd+0x0/0xfe
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
migration/0   S 7A022980     0     3      2 (L-TLB)
       7a1c9fa4 00000002 7811e4eb 7a022980 f7c7d550 7a1c9fa4 7811f199 7a1c7a00
       00000001 00000000 7a1c74d0 f7e84b60 78531ca2 0001ff5b 00001431 7a1c75dc
       7a01a980 e913a340 7a0229e8 00000046 7a01a980 7a01a980 7a01b2f4 7a01a980
Call Trace:
 [<7811e4eb>] resched_task+0x3b/0x59
 [<7811f199>] move_tasks+0x1f5/0x2bd
 [<781221cf>] migration_thread+0x14a/0x1cd
 [<78122085>] migration_thread+0x0/0x1cd
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
ksoftirqd/0   S 7A1CBFB4     0     4      2 (L-TLB)
       7a1cbfc8 00000002 00000002 7a1cbfb4 7a1cbfb0 00000000 edcd1490 edcd1490
       00000009 00000000 7a1c6aa0 785143e0 f1a2fd1e 0001e9af 0000065e 7a1c6bac
       7a01a980 c053d340 202bebb7 00000003 00000000 00000000 7a018100 00000000
Call Trace:
 [<78129a29>] ksoftirqd+0x55/0x134
 [<781299d4>] ksoftirqd+0x0/0x134
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
watchdog/0    S 7A01A980     0     5      2 (L-TLB)
       7a1cdfcc 00000006 7840d6e3 7a01a980 7a1cdfcc 7813e99c 00000002 00000046
       00000001 00000000 7a1c6070 f7fbf490 35f147b4 0000000a 00002c3f 7a1c617c
       7a01a980 78514180 7a1c6530 ffffffff 78596a0c 01a85000 00000000 78146910
Call Trace:
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78146910>] watchdog+0x0/0x58
 [<78146958>] watchdog+0x48/0x58
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
migration/1   S 7A1D1F90     0     6      2 (L-TLB)
       7a1d1fa4 00000002 00000002 7a1d1f90 7a1d1f8c 00000000 7a1cfa40 7a1cfa40
       00000001 00000001 7a1cf510 f7fbe030 04d4c284 0001ff60 00001b5e 7a1cf61c
       7a022980 e8d34580 2198ea1b 00000003 00000000 00000000 7a0232f4 7a022980
Call Trace:
 [<781221cf>] migration_thread+0x14a/0x1cd
 [<78122085>] migration_thread+0x0/0x1cd
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
ksoftirqd/1   S 7A1D5FB4     0     7      2 (L-TLB)
       7a1d5fc8 00000002 00000002 7a1d5fb4 7a1d5fb0 00000000 00000046 00000000
       0000000a 00000001 7a1ceae0 f7fbe030 07b31f2a 0001ff3a 0004c80b 7a1cebec
       7a022980 e913a340 21966ac7 00000003 00000000 00000000 7a020100 00000001
Call Trace:
 [<78129a29>] ksoftirqd+0x55/0x134
 [<781299d4>] ksoftirqd+0x0/0x134
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
watchdog/1    S 7A022980     0     8      2 (L-TLB)
       7a1d7fcc 00000006 7840d6e3 7a022980 7a1d7fcc 7813e99c 00000002 00000046
       00000001 00000001 7a1ce0b0 7a1ceae0 372914ac 0000000a 00003a96 7a1ce1bc
       7a022980 78514180 7a1ce570 ffffffff 78596a0c 01a8d000 00000001 78146910
Call Trace:
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78146910>] watchdog+0x0/0x58
 [<78146958>] watchdog+0x48/0x58
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
events/0      S F7FBDFA0     0     9      2 (L-TLB)
       f7fbdfb4 00000086 00000002 f7fbdfa0 f7fbdf9c 00000000 f7ea1a80 00000000
       0000000a 00000000 f7ea1550 785143e0 959db3cd 0001ff61 0000158b f7ea165c
       7a01a980 c053d840 21990475 00000003 00000000 00000000 f7f940c0 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
events/1      S F7FB9FA0     0    10      2 (L-TLB)
       f7fb9fb4 00000086 00000002 f7fb9fa0 f7fb9f9c 00000000 f7ea1050 00000000
       0000000a 00000001 f7ea0b20 f7fbe030 992e5f8c 0001ff61 000013bc f7ea0c2c
       7a022980 e913a340 219904b2 00000003 00000000 00000000 f7c4df40 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
khelper       S 00000000     0    11      2 (L-TLB)
       f7fbbfb4 00000086 00000046 00000000 00000000 f7c4defc f7ea0620 00000000
       00000009 00000001 f7ea00f0 f7cc8130 6fcbad27 0001df91 000011c4 f7ea01fc
       7a022980 f6b15840 00000046 f7c4dee8 00000286 f7c4dee8 f7c4dec0 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
kblockd/0     S F7E3BFA0     0    64      2 (L-TLB)
       f7e3bfb4 00000086 00000002 f7e3bfa0 f7e3bf9c 00000000 f7e85090 00000000
       0000000a 00000000 f7e84b60 785143e0 6ff1ad71 0001ff61 00001779 f7e84c6c
       7a01a980 e913a340 219901fd 00000003 00000000 00000000 f7e951c0 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
kblockd/1     S F7E3DFA0     0    65      2 (L-TLB)
       f7e3dfb4 00000086 00000002 f7e3dfa0 f7e3df9c 00000000 00000000 00000000
       0000000a 00000001 f7e85590 f7fbe030 593eee3f 0001ff61 00001932 f7e8569c
       7a022980 d3e4acc0 2199007e 00000002 00000000 00000000 f7e95140 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
kacpid        S 00000000     0    66      2 (L-TLB)
       f7e3ffb4 00000086 00000046 00000000 00000000 f7c76cfc f7c7c620 00000000
       00000001 00000000 f7c7c0f0 f7fbf490 3bbce431 0000000a 0000072f f7c7c1fc
       7a01a980 78514180 00000046 f7c76ce8 00000286 f7c76ce8 f7c76cc0 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
kacpi_notify  S 00000000     0    67      2 (L-TLB)
       f7e41fb4 00000086 00000046 00000000 00000000 f7c76bfc f7c7d050 00000000
       00000001 00000000 f7c7cb20 f7fbf490 3bbd4590 0000000a 00000720 f7c7cc2c
       7a01a980 78514180 00000046 f7c76be8 00000286 f7c76be8 f7c76bc0 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
ata/0         S 00000000     0   147      2 (L-TLB)
       f7e21fb4 00000086 00000046 00000000 00000000 f7c4bd7c f7e23a40 00000000
       0000000a 00000000 f7e23510 f7cd0030 a23c198a 0000000b 0002d0fb f7e2361c
       7a01a980 78514180 00000046 f7c4bd68 00000286 f7c4bd68 f7c4bd40 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
ata/1         S 00000000     0   148      2 (L-TLB)
       f7e19fb4 00000086 00000046 00000000 00000000 f7c4bcfc f7e2a620 00000000
       0000000a 00000001 f7e2a0f0 f7fbf490 7e41e2a8 0000000b 00005ba5 f7e2a1fc
       7a022980 78514180 00000046 f7c4bce8 00000286 f7c4bce8 f7c4bcc0 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
ata_aux       S F7E43FA0     0   149      2 (L-TLB)
       f7e43fb4 00000086 00000002 f7e43fa0 f7e43f9c 00000000 f7e1a5a0 00000000
       00000001 00000000 f7e1a070 785143e0 3dbcad9d 0000000a 00000e19 f7e1a17c
       7a01a980 78514180 fffb6d29 00000003 00000000 00000000 f7c4bc40 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
ksuspend_usbd S F7E29FA0     0   150      2 (L-TLB)
       f7e29fb4 00000086 00000002 f7e29fa0 f7e29f9c 00000000 f7e84660 00000000
       00000001 00000000 f7e84130 785143e0 3dbd4cb7 0000000a 00001042 f7e8423c
       7a01a980 78514180 fffb6d29 00000003 00000000 00000000 f7c4bb40 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
khubd         S 00000000     0   153      2 (L-TLB)
       f7e05f54 00000086 00000046 00000000 00000000 78538a34 f7e225e0 00000000
       0000000a 00000000 f7e220b0 f7fbf490 d1ac4454 0000000b 00005717 f7e221bc
       7a01a980 78514180 00000046 78538a20 00000286 78538a20 00000000 f7d36200
Call Trace:
 [<78316b2e>] hub_thread+0x98f/0x9f9
 [<7840d732>] _spin_unlock_irq+0x2b/0x41
 [<7840ae59>] __sched_text_start+0x701/0x810
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7831619f>] hub_thread+0x0/0x9f9
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
kseriod       S 00000000     0   155      2 (L-TLB)
       7a1fdf9c 00000002 00000046 00000000 00000000 7853bcd4 f7e0a660 00000000
       0000000a 00000000 f7e0a130 f7fbf490 0dfac2e3 0000000c 00003730 f7e0a23c
       7a01a980 78514180 00000046 7853bcc0 00000216 7853bcc0 f7d50580 7832efe5
Call Trace:
 [<7832efe5>] serio_thread+0x0/0x2bb
 [<7832f255>] serio_thread+0x270/0x2bb
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7832efe5>] serio_thread+0x0/0x2bb
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
kswapd0       S F7E59F44     0   187      2 (L-TLB)
       f7e59f58 00000086 00000002 f7e59f44 f7e59f40 00000000 7a1fb010 00000000
       00000009 00000001 7a1faae0 f7fbe030 1a81a860 0001e6a0 00e7fbcc 7a1fabec
       7a022980 abc7dd40 1ff863fd 00000003 00000000 00000000 00000001 ffffffff
Call Trace:
 [<78150a2e>] kswapd+0xb0/0x40f
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7815097e>] kswapd+0x0/0x40f
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
aio/0         S 00000000     0   188      2 (L-TLB)
       f7e5dfb4 00000086 00000046 00000000 00000000 f7ed547c 7a1fba40 00000000
       00000001 00000000 7a1fb510 f7fbea60 42056d94 0000000a 00000dba 7a1fb61c
       7a01a980 78514180 00000046 f7ed5468 00000286 f7ed5468 f7ed5440 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
aio/1         S 00000000     0   189      2 (L-TLB)
       f7e5ffb4 00000086 00000046 00000000 00000000 f7ed53fc 7a1f25a0 00000000
       00000001 00000001 7a1f2070 f7fbf490 4205dc87 0000000a 000009cd 7a1f217c
       7a022980 78514180 00000046 f7ed53e8 00000286 f7ed53e8 f7ed53c0 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
cryptd        S F7E7FFA8     0   190      2 (L-TLB)
       f7e7ffbc 00000082 00000002 f7e7ffa8 f7e7ffa4 00000000 787ce0d8 f7fb5590
       00000001 00000000 f7fb5590 785143e0 42116cfc 0000000a 000076c7 f7fb569c
       7a01a980 78514180 fffb6d71 00000003 00000000 00000000 00000000 00000000
Call Trace:
 [<7820a221>] cryptd_thread+0x95/0xa5
 [<7820a18c>] cryptd_thread+0x0/0xa5
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
scsi_eh_0     S 7A1EA030     0   320      2 (L-TLB)
       f7ca1f80 00000092 f7e0f058 7a1ea030 7840d6e3 f7e0f000 f7cef000 7813e99c
       0000000a 00000000 7a1ea030 f7fbf490 7ebe3537 0000000b 00005ddb 7a1ea13c
       7a01a980 78514180 00000292 f7cef000 f7e0f000 00000000 00000282 f7e0f000
Call Trace:
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<782e9a32>] scsi_error_handler+0x3b/0x28d
 [<782e99f7>] scsi_error_handler+0x0/0x28d
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
scsi_eh_1     S F7CA5F6C     0   322      2 (L-TLB)
       f7ca5f80 00000092 00000002 f7ca5f6c f7ca5f68 00000000 00000000 7813e99c
       00000003 00000000 f7cf94d0 785143e0 7e3733ad 0000000b 0000c8a9 f7cf95dc
       7a01a980 78514180 fffb8236 00000003 00000000 00000000 00000282 f7cbd800
Call Trace:
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<782e9a32>] scsi_error_handler+0x3b/0x28d
 [<782e99f7>] scsi_error_handler+0x0/0x28d
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
scsi_eh_2     S F7E0B590     0   339      2 (L-TLB)
       f7ca7f80 00000092 7a1d8058 f7e0b590 7840d6e3 7a1d8000 00000000 7813e99c
       00000006 00000000 f7e0b590 f7fbf490 9835b106 0000000b 004284a4 f7e0b69c
       7a01a980 78514180 00000292 00000282 7a1d8000 00000000 00000282 7a1d8000
Call Trace:
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<782e9a32>] scsi_error_handler+0x3b/0x28d
 [<782e99f7>] scsi_error_handler+0x0/0x28d
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
scsi_eh_3     S F7CD0030     0   341      2 (L-TLB)
       f7ca3f80 00000092 f7cbd058 f7cd0030 7840d6e3 f7cbd000 00000000 7813e99c
       00000008 00000000 f7cd0030 f7fbf490 a28957bc 0000000b 003e72b9 f7cd013c
       7a01a980 78514180 00000292 00000282 f7cbd000 00000000 00000282 f7cbd000
Call Trace:
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<782e9a32>] scsi_error_handler+0x3b/0x28d
 [<782e99f7>] scsi_error_handler+0x0/0x28d
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
kpsmoused     S 00000000     0   407      2 (L-TLB)
       f7d75fb4 00000086 00000046 00000000 00000000 f7d1767c f7d005e0 00000000
       00000009 00000000 f7d000b0 f7fbf490 d3b0a6af 0000000b 0000087a f7d001bc
       7a01a980 78514180 00000046 f7d17668 00000286 f7d17668 f7d17640 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
kcryptd/0     S 00000000     0   412      2 (L-TLB)
       f7d77fb4 00000086 00000046 00000000 00000000 f7d170fc f7e33090 00000000
       00000009 00000000 f7e32b60 f7fbf490 e6cc0d2f 0000000b 000007f4 f7e32c6c
       7a01a980 78514180 00000046 f7d170e8 00000286 f7d170e8 f7d170c0 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
kcryptd/1     S F7D71FA0     0   413      2 (L-TLB)
       f7d71fb4 00000086 00000002 f7d71fa0 f7d71f9c 00000000 f7d01010 00000000
       00000008 00000001 f7d00ae0 f7fbe030 e6cc9219 0000000b 000010ad f7d00bec
       7a022980 78514180 fffb8914 00000003 00000000 00000000 f7d5cf40 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
kmpathd/0     S 00000000     0   414      2 (L-TLB)
       f7d73fb4 00000086 00000046 00000000 00000000 f7d5cd7c f7cf85a0 00000000
       00000009 00000000 f7cf8070 f7fbf490 e6cd348c 0000000b 00000786 f7cf817c
       7a01a980 78514180 00000046 f7d5cd68 00000286 f7d5cd68 f7d5cd40 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
kmpathd/1     S F7D6DFA0     0   415      2 (L-TLB)
       f7d6dfb4 00000086 00000002 f7d6dfa0 f7d6df9c 00000000 f7cf8fd0 00000000
       00000008 00000001 f7cf8aa0 f7fbe030 e6cda902 0000000b 00000f2f f7cf8bac
       7a022980 78514180 fffb8914 00000003 00000000 00000000 f7d5ccc0 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
ksnapd        S 00000000     0   416      2 (L-TLB)
       f7d31fb4 00000086 00000046 00000000 00000000 f7d5c9fc f7d01a40 00000000
       00000008 00000000 f7d01510 f7fbf490 e72c4b66 0000000b 0000070f f7d0161c
       7a01a980 78514180 00000046 f7d5c9e8 00000286 f7d5c9e8 f7d5c9c0 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
kedac         S F7D7BF80     0   419      2 (L-TLB)
       f7d7bf94 00000086 00000002 f7d7bf80 f7d7bf7c 00000000 7840d6e3 7a1d2000
       0000000a 00000001 f7e0ab60 f7fbe030 acabac43 0001ff61 00000d5e f7e0ac6c
       7a022980 e913a340 219905f9 00000003 00000000 00000000 f7d7bfa4 219909e0
Call Trace:
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<78363dd9>] edac_kernel_thread+0x0/0xf3
 [<7812cada>] process_timeout+0x0/0x5
 [<78363dd9>] edac_kernel_thread+0x0/0xf3
 [<78363ea0>] edac_kernel_thread+0xc7/0xf3
 [<78363dd9>] edac_kernel_thread+0x0/0xf3
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
md4_raid5     S F7E2AB20     0   434      2 (L-TLB)
       7a255f84 00000006 7a21a920 f7e2ab20 7840d727 00000000 00000000 7a224d5c
       0000000a 00000000 f7e2ab20 f6928030 63aadcb9 0001ff61 00002f44 f7e2ac2c
       7a01a980 e913a340 00000000 00000000 f7e2ab20 f7e2b050 7fffffff 7fffffff
Call Trace:
 [<7840d727>] _spin_unlock_irq+0x20/0x41
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840d6ef>] _spin_unlock_irqrestore+0x40/0x58
 [<78354f55>] md_thread+0xab/0xdc
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78354eaa>] md_thread+0x0/0xdc
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
md15_raid10   S 7A1F3A00     0   437      2 (L-TLB)
       7a253f84 00000006 7a1f34d0 7a1f3a00 785cde94 7a1f34d0 7840d6e3 785cde80
       0000000a 00000000 7a1f34d0 f6928a60 39bb729c 0001ff61 000011b8 7a1f35dc
       7a01a980 f58bf040 785cde80 7812ccc1 00000000 00000216 7a253f94 219911f1
Call Trace:
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812ccc1>] __mod_timer+0x92/0x9c
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7812cada>] process_timeout+0x0/0x5
 [<78354f55>] md_thread+0xab/0xdc
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78354eaa>] md_thread+0x0/0xdc
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
md0_raid1     S F7C7DA80     0   439      2 (L-TLB)
       f719ff84 00000086 f7c7d550 f7c7da80 785cde94 f7c7d550 7840d6e3 785cde80
       0000000a 00000000 f7c7d550 f6928030 706bbe1a 0001ff61 00002cc3 f7c7d65c
       7a01a980 e913a340 785cde80 7812ccc1 00000000 00000296 f719ff94 2199158a
Call Trace:
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812ccc1>] __mod_timer+0x92/0x9c
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7812cada>] process_timeout+0x0/0x5
 [<78354f55>] md_thread+0xab/0xdc
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78354eaa>] md_thread+0x0/0xdc
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
kjournald     S 00000000     0   441      2 (L-TLB)
       f717dfa8 00000082 00000046 00000000 00000000 f7199d3c f7c74608 f7c745e0
       0000000a 00000000 f7c740b0 f6928030 575613ff 0001ff61 000092f3 f7c741bc
       7a01a980 f6e31800 00000046 00000246 f7199c14 f7199c00 f7199c14 f7199c00
Call Trace:
 [<781aeb09>] kjournald+0x16d/0x1ec
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781ae99c>] kjournald+0x0/0x1ec
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
kjournald     S F6DD5F94     0   584      2 (L-TLB)
       f6dd5fa8 00000082 00000002 f6dd5f94 f6dd5f90 00000000 f7fb4688 f7fb4660
       0000000a 00000001 f7fb4130 f7fbe030 545ba352 0001ff61 0000700b f7fb423c
       7a022980 e632d2c0 2199002b 00000003 00000000 00000000 f68c8814 f68c8800
Call Trace:
 [<781aeb09>] kjournald+0x16d/0x1ec
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781ae99c>] kjournald+0x0/0x1ec
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
kjournald     S F588DF94     0   585      2 (L-TLB)
       f588dfa8 00000082 00000002 f588df94 f588df90 00000000 f7cd19e8 f7cd19c0
       0000000a 00000000 f7cd1490 785143e0 f0b5365d 0001ff24 00001614 f7cd159c
       7a01a980 f7cafa40 219507da 00000003 00000000 00000000 f58c1814 f58c1800
Call Trace:
 [<781aeb09>] kjournald+0x16d/0x1ec
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781ae99c>] kjournald+0x0/0x1ec
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
kjournald     S 00000000     0   586      2 (L-TLB)
       f6fdbfa8 00000082 00000046 00000000 00000000 f58c113c 7a1dcfb8 7a1dcf90
       00000009 00000000 7a1dca60 f7cb80b0 53c6eaba 0000000d 00001d05 7a1dcb6c
       7a01a980 f68e9ac0 00000046 00000246 f58c1014 f58c1000 f58c1014 f58c1000
Call Trace:
 [<781aeb09>] kjournald+0x16d/0x1ec
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781ae99c>] kjournald+0x0/0x1ec
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
kjournald     S 00000000     0   587      2 (L-TLB)
       f587bfa8 00000082 00000046 00000000 00000000 f64c993c f7c6da28 f7c6da00
       0000000a 00000001 f7c6d4d0 f7ea0b20 f44b1328 0001e632 0000149c f7c6d5dc
       7a022980 e632d2c0 00000046 00000246 f64c9814 f64c9800 f64c9814 f64c9800
Call Trace:
 [<781aeb09>] kjournald+0x16d/0x1ec
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781ae99c>] kjournald+0x0/0x1ec
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
kjournald     S 00000000     0   588      2 (L-TLB)
       f58c9fa8 00000082 00000046 00000000 00000000 f64c913c 7a1dc588 7a1dc560
       00000008 00000000 7a1dc030 f7cb80b0 5d289ec0 0000000d 00001bb0 7a1dc13c
       7a01a980 f68e9ac0 00000046 00000246 f64c9014 f64c9000 f64c9014 f64c9000
Call Trace:
 [<781aeb09>] kjournald+0x16d/0x1ec
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781ae99c>] kjournald+0x0/0x1ec
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
syslogd       D 00000000     0  2197      1 (NOTLB)
       f6267c74 00000082 00000000 00000000 7813e7a9 f7cc8130 7840d6e3 785cde80
       00000007 00000000 f7cc8130 f7c74ae0 abd727e3 0001ff61 00002fdc f7cc823c
       7a01a980 f6b15840 785cde80 7812ccc1 00000000 00000292 f6267c84 2199064e
Call Trace:
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812ccc1>] __mod_timer+0x92/0x9c
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812cada>] process_timeout+0x0/0x5
 [<7840b0ce>] io_schedule_timeout+0x1e/0x28
 [<78151eca>] congestion_wait+0x50/0x64
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7814dd9f>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<7814a483>] generic_file_buffered_write+0x4ee/0x605
 [<7814a8cf>] __generic_file_aio_write_nolock+0x335/0x4df
 [<78128f60>] current_fs_time+0x41/0x46
 [<7814aa1a>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7814aad1>] generic_file_aio_write+0x58/0xb6
 [<78199bb4>] ext3_file_write+0x24/0x92
 [<7816402f>] do_sync_readv_writev+0xc1/0xfe
 [<78139494>] getnstimeofday+0x30/0xbc
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78163ee4>] rw_copy_check_uvector+0x50/0xaa
 [<781646b4>] do_readv_writev+0x99/0x164
 [<78199b90>] ext3_file_write+0x0/0x92
 [<781647bc>] vfs_writev+0x3d/0x48
 [<78164b9e>] sys_writev+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
klogd         S F5F37DB8     0  2211      1 (NOTLB)
       f5f37dcc 00000086 00000002 f5f37db8 f5f37db4 00000000 00000000 f6f77f54
       00000007 00000000 f7c6caa0 785143e0 579dd14b 0001ff41 000c175e f7c6cbac
       7a01a980 f62ada40 2196e5cc 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78370f09>] sock_aio_write+0xcb/0xd7
 [<7813e986>] trace_hardirqs_on+0x10c/0x14c
 [<78164133>] do_sync_write+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781649e7>] vfs_read+0xcf/0x10a
 [<781648aa>] vfs_write+0x9e/0x10c
 [<78164e0f>] sys_write+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
acpid         S F5C35BE8     0  2265      1 (NOTLB)
       f5c35bfc 00000082 00000002 f5c35be8 f5c35be4 00000000 78528e74 f7cc1550
       00000001 00000000 f7cc1550 785143e0 87711dce 0000000f 0000efdb f7cc165c
       7a01a980 f6b150c0 fffbc622 00000003 00000000 00000000 7fffffff 00000000
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7840d6ef>] _spin_unlock_irqrestore+0x40/0x58
 [<783cbaf6>] unix_poll+0x1a/0x9a
 [<7816e77e>] do_sys_poll+0x258/0x327
 [<7816f1be>] __pollwait+0x0/0xac
 [<7812027d>] default_wake_function+0x0/0xc
 [<7812027d>] default_wake_function+0x0/0xc
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840d732>] _spin_unlock_irq+0x2b/0x41
 [<7840af41>] __sched_text_start+0x7e9/0x810
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<783cc862>] unix_dgram_sendmsg+0x3b3/0x431
 [<7840d33a>] _read_lock+0x33/0x3e
 [<7840d699>] _read_unlock+0x25/0x3b
 [<783cc862>] unix_dgram_sendmsg+0x3b3/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78152de2>] do_wp_page+0x3b9/0x406
 [<7840d623>] _spin_unlock+0x25/0x3b
 [<78152de2>] do_wp_page+0x3b9/0x406
 [<7840d1b1>] _spin_lock+0x33/0x3e
 [<781541bf>] __handle_mm_fault+0x7c3/0x83b
 [<781197d3>] do_page_fault+0x278/0x535
 [<781383ba>] up_read+0x14/0x26
 [<781197d3>] do_page_fault+0x278/0x535
 [<78103e68>] restore_nocheck+0x12/0x15
 [<7811955b>] do_page_fault+0x0/0x535
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7816e881>] sys_poll+0x34/0x37
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
amavisd       S F31C9B4C     0  2823      1 (NOTLB)
       f31c9b60 00000096 00000002 f31c9b4c f31c9b48 00000000 7840d6e3 7a1d2000
       00000007 00000001 f7e32130 f7fbe030 65ed4f6c 0001ff60 000151f2 f7e3223c
       7a022980 f64f1800 2198f07d 00000003 00000000 00000000 f31c9b70 2199178d
Call Trace:
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7812cada>] process_timeout+0x0/0x5
 [<7816ec1d>] do_select+0x399/0x3e7
 [<781299b4>] local_bh_enable_ip+0xcd/0xed
 [<7813e986>] trace_hardirqs_on+0x10c/0x14c
 [<7838b171>] __nf_ct_refresh_acct+0xcf/0x10a
 [<7816f1be>] __pollwait+0x0/0xac
 [<7838b171>] __nf_ct_refresh_acct+0xcf/0x10a
 [<7838eaf9>] tcp_packet+0xa0f/0xa3e
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<78160ec2>] kmem_cache_free+0x9a/0xa1
 [<7813e986>] trace_hardirqs_on+0x10c/0x14c
 [<78160ec2>] kmem_cache_free+0x9a/0xa1
 [<7813e986>] trace_hardirqs_on+0x10c/0x14c
 [<783bb722>] inet_sock_destruct+0x179/0x1bb
 [<783b147b>] tcp_v4_rcv+0x51a/0x787
 [<783994b1>] ip_local_deliver+0x1bd/0x1c8
 [<78398c10>] ip_local_deliver_finish+0x0/0x14c
 [<783992bb>] ip_rcv+0x498/0x4d1
 [<7839896c>] ip_rcv_finish+0x0/0x2a4
 [<78139494>] getnstimeofday+0x30/0xbc
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7816ef17>] core_sys_select+0x2ac/0x2c9
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<78160ec2>] kmem_cache_free+0x9a/0xa1
 [<7813e986>] trace_hardirqs_on+0x10c/0x14c
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<782be370>] e1000_clean_tx_irq+0xbc/0x2c5
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7812e2a3>] group_send_sig_info+0x12/0x56
 [<7812e378>] kill_pid_info+0x61/0x7b
 [<7812f183>] sys_kill+0x108/0x125
 [<7816f30a>] sys_select+0xa0/0x188
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
clamd         S 00000000     0  2889      1 (NOTLB)
       f32d1e58 00000086 00000002 00000000 00010822 00000000 00000000 ee9a69ac
       00000001 00000001 f7cb80b0 db4c8030 6f522ff7 0001e655 00010e05 f7cb81bc
       7a022980 f68e9ac0 00000000 00000000 f7cb80b0 f7cb85e0 7fffffff f5e2d380
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840d6ef>] _spin_unlock_irqrestore+0x40/0x58
 [<783787ab>] skb_recv_datagram+0x13f/0x1c4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783ccdf1>] unix_accept+0x51/0xde
 [<7837262e>] sys_accept+0xb2/0x187
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<781671ca>] sys_fstat64+0x1e/0x23
 [<783727bc>] sys_socketcall+0xb9/0x242
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
freshclam     D 00000000     0  2898      1 (NOTLB)
       f2f43cc4 00000082 00000000 00000000 7813e7a9 f6928030 7840d6e3 785cde80
       00000008 00000000 f6928030 f7cc8130 abd6f807 0001ff61 00003332 f692813c
       7a01a980 f58bfa40 785cde80 7812ccc1 00000000 00000296 f2f43cd4 2199064e
Call Trace:
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812ccc1>] __mod_timer+0x92/0x9c
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812cada>] process_timeout+0x0/0x5
 [<7840b0ce>] io_schedule_timeout+0x1e/0x28
 [<78151eca>] congestion_wait+0x50/0x64
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7814dd9f>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<7814a483>] generic_file_buffered_write+0x4ee/0x605
 [<78128f60>] current_fs_time+0x41/0x46
 [<7814aa1a>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7814aad1>] generic_file_aio_write+0x58/0xb6
 [<78199bb4>] ext3_file_write+0x24/0x92
 [<78164133>] do_sync_write+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7840d727>] _spin_unlock_irq+0x20/0x41
 [<7840d732>] _spin_unlock_irq+0x2b/0x41
 [<7816406c>] do_sync_write+0x0/0x10a
 [<78164896>] vfs_write+0x8a/0x10c
 [<78164e0f>] sys_write+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
named         S F0989D0C     0  2959      1 (NOTLB)
       f0989d20 00200092 00000002 f0989d0c f0989d08 00000000 00000000 f6f77f54
       0000000a 00000000 f7e13490 785143e0 0e816649 0001e659 00013c17 f7e1359c
       7a01a980 f58bf2c0 1ff3b86f 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<78171ebc>] dput+0x16/0xe4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<78169ed9>] path_release+0xa/0x20
 [<78166c60>] cp_new_stat64+0xfc/0x10e
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
crond         D 00000000     0  3061      1 (NOTLB)
       f1367cc4 00000082 00000000 00000000 7813e7a9 f7e02b20 7840d6e3 7a1d2000
       00000008 00000001 f7e02b20 edcd0a60 abd708dc 0001ff61 0000367c f7e02c2c
       7a022980 f58bfcc0 7a1d2000 7812ccc1 00000000 00000296 f1367cd4 2199064e
Call Trace:
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812ccc1>] __mod_timer+0x92/0x9c
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812cada>] process_timeout+0x0/0x5
 [<7840b0ce>] io_schedule_timeout+0x1e/0x28
 [<78151eca>] congestion_wait+0x50/0x64
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7814dd9f>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<7814a483>] generic_file_buffered_write+0x4ee/0x605
 [<78128f60>] current_fs_time+0x41/0x46
 [<7814aa1a>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7814aad1>] generic_file_aio_write+0x58/0xb6
 [<78199bb4>] ext3_file_write+0x24/0x92
 [<78164133>] do_sync_write+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78160ec2>] kmem_cache_free+0x9a/0xa1
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7816406c>] do_sync_write+0x0/0x10a
 [<78164896>] vfs_write+0x8a/0x10c
 [<78164e0f>] sys_write+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
mdadm         S F1ADDB4C     0  3110      1 (NOTLB)
       f1addb60 00000096 00000002 f1addb4c f1addb48 00000000 7840d6e3 785cde80
       00000007 00000000 f6928a60 785143e0 39bb937f 0001ff61 000020e3 f6928b6c
       7a01a980 f58bf040 2198fe6a 00000003 00000000 00000000 f1addb70 2199e8c9
Call Trace:
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7812cada>] process_timeout+0x0/0x5
 [<7816ec1d>] do_select+0x399/0x3e7
 [<7816f1be>] __pollwait+0x0/0xac
 [<7812027d>] default_wake_function+0x0/0xc
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7840d727>] _spin_unlock_irq+0x20/0x41
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840d732>] _spin_unlock_irq+0x2b/0x41
 [<7840af41>] __sched_text_start+0x7e9/0x810
 [<7840ae59>] __sched_text_start+0x701/0x810
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7811e3ba>] __wake_up+0x32/0x43
 [<7840b09e>] preempt_schedule+0x46/0x58
 [<7834dc68>] md_wakeup_thread+0x26/0x28
 [<78354e25>] md_ioctl+0x1261/0x12e6
 [<7840d623>] _spin_unlock+0x25/0x3b
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7840d727>] _spin_unlock_irq+0x20/0x41
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7816ef17>] core_sys_select+0x2ac/0x2c9
 [<7840bd93>] __mutex_unlock_slowpath+0x105/0x127
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78352e61>] md_open+0x51/0x58
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<781335c2>] call_rcu+0x64/0x69
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7816f30a>] sys_select+0xa0/0x188
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
portmap       S F1571BE8     0  3161      1 (NOTLB)
       f1571bfc 00000082 00000002 f1571be8 f1571be4 00000000 00000000 00000000
       00000007 00000001 7a1dd490 f7fbe030 e696d8aa 00000013 000281e5 7a1dd59c
       7a022980 f6e31080 fffc0fae 00000003 00000000 00000000 7fffffff 00000000
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<783a2023>] tcp_poll+0x1c/0x129
 [<783b435a>] udp_poll+0x10/0xdb
 [<7816e77e>] do_sys_poll+0x258/0x327
 [<7816f1be>] __pollwait+0x0/0xac
 [<7812027d>] default_wake_function+0x0/0xc
 [<7812027d>] default_wake_function+0x0/0xc
 [<7840d1f4>] _spin_lock_bh+0x38/0x43
 [<783b57a5>] udp_sendmsg+0x4a7/0x5b9
 [<783ba7cb>] inet_sendmsg+0x3b/0x45
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78128f60>] current_fs_time+0x41/0x46
 [<7814aa1a>] __generic_file_aio_write_nolock+0x480/0x4df
 [<78377ed6>] verify_iovec+0x3e/0x70
 [<783719e4>] sys_sendmsg+0x194/0x1f9
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7814cb18>] free_hot_cold_page+0x13b/0x167
 [<7814cb24>] free_hot_cold_page+0x147/0x167
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7814cb58>] __pagevec_free+0x14/0x1a
 [<7814ecc5>] release_pages+0x126/0x12e
 [<78159060>] anon_vma_unlink+0x13/0x53
 [<7840d623>] _spin_unlock+0x25/0x3b
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7816e881>] sys_poll+0x34/0x37
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
rpciod/0      S 00000000     0  3216      2 (L-TLB)
       ec88dfb4 00000086 00000046 00000000 00000000 f6343c7c 7a1eb9c0 00000000
       0000000a 00000000 7a1eb490 89333550 78dbc731 0001e655 0000401b 7a1eb59c
       7a01a980 c053d840 00000046 f6343c68 00000286 f6343c68 f6343c40 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
rpciod/1      S 00000000     0  3217      2 (L-TLB)
       ec8e7fb4 00000086 00000046 00000000 00000000 f5a6867c f69299c0 00000000
       0000000a 00000001 f6929490 a79ec0b0 2e2b55c5 00000834 0000ce3a f692959c
       7a022980 a71d7340 00000046 f5a68668 00000286 f5a68668 f5a68640 78132a2b
Call Trace:
 [<78132a2b>] worker_thread+0x0/0xc5
 [<78132ab3>] worker_thread+0x88/0xc5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
lockd         S 00000001     0  3218      2 (L-TLB)
       ec865f18 00000082 7813534c 00000001 784d6071 f7e02628 00000000 00000000
       00000001 00000000 f7e020f0 f7ca8a60 e69b3708 00000013 0000a1e7 f7e021fc
       7a01a980 f68e90c0 f7e02620 f7e020f0 781299b4 f0fecd7c 7fffffff f0fec000
Call Trace:
 [<7813534c>] add_wait_queue+0x12/0x30
 [<781299b4>] local_bh_enable_ip+0xcd/0xed
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e986>] trace_hardirqs_on+0x10c/0x14c
 [<783ffbab>] svc_recv+0x249/0x3ca
 [<7812027d>] default_wake_function+0x0/0xc
 [<781e6643>] lockd+0xe8/0x1fd
 [<78121e77>] schedule_tail+0x0/0xb6
 [<78103e68>] restore_nocheck+0x12/0x15
 [<781e655b>] lockd+0x0/0x1fd
 [<781e655b>] lockd+0x0/0x1fd
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
ntpd          S EC959B4C     0  3269      1 (NOTLB)
       ec959b60 00200096 00000002 ec959b4c ec959b48 00000000 f7cb05a0 00200046
       0000000a 00000000 f7cb0070 785143e0 a27c1eef 0001ff61 0000aea1 f7cb017c
       7a01a980 f7cafa40 2199054d 00000003 00000000 00000000 7fffffff ec959f9c
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<783b435a>] udp_poll+0x10/0xdb
 [<7816ec1d>] do_select+0x399/0x3e7
 [<7816f1be>] __pollwait+0x0/0xac
 [<7812027d>] default_wake_function+0x0/0xc
 [<7812027d>] default_wake_function+0x0/0xc
 [<7812027d>] default_wake_function+0x0/0xc
 [<7812027d>] default_wake_function+0x0/0xc
 [<7812027d>] default_wake_function+0x0/0xc
 [<7812027d>] default_wake_function+0x0/0xc
 [<7812027d>] default_wake_function+0x0/0xc
 [<7812027d>] default_wake_function+0x0/0xc
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840d6ef>] _spin_unlock_irqrestore+0x40/0x58
 [<783760ad>] skb_dequeue+0x39/0x3f
 [<783786ed>] skb_recv_datagram+0x81/0x1c4
 [<783b5b8f>] udp_recvmsg+0x57/0x1cd
 [<78372cc9>] sock_common_recvmsg+0x3e/0x54
 [<78371763>] sock_recvmsg+0xcf/0xe8
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7816ef17>] core_sys_select+0x2ac/0x2c9
 [<7837221d>] sys_recvmsg+0x11d/0x1cf
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<7840d732>] _spin_unlock_irq+0x2b/0x41
 [<78103986>] do_notify_resume+0x511/0x5f3
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7816f340>] sys_select+0xd6/0x188
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
Comment 7 Krzysztof Oledzki 2007-10-18 08:11:59 UTC
sqlgrey       S 00000000     0  3323      1 (NOTLB)
       ec79fb60 00200096 786b7ed0 00000000 7837cbf7 00200046 00000000 00000000
       00000004 00000001 f7cb0aa0 f7fbf490 b29583a6 0001f67f 00068968 f7cb0bac
       7a022980 f68e90c0 7813e99c 00000000 00200046 ec975398 7fffffff ec79ff9c
Call Trace:
 [<7837cbf7>] dev_queue_xmit+0x1f2/0x213
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<783a2023>] tcp_poll+0x1c/0x129
 [<7816ec1d>] do_select+0x399/0x3e7
 [<7816f1be>] __pollwait+0x0/0xac
 [<7812027d>] default_wake_function+0x0/0xc
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<781299b4>] local_bh_enable_ip+0xcd/0xed
 [<7813e986>] trace_hardirqs_on+0x10c/0x14c
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<783b1465>] tcp_v4_rcv+0x504/0x787
 [<783b133f>] tcp_v4_rcv+0x3de/0x787
 [<7840d623>] _spin_unlock+0x25/0x3b
 [<783b1465>] tcp_v4_rcv+0x504/0x787
 [<7839949e>] ip_local_deliver+0x1aa/0x1c8
 [<78398c10>] ip_local_deliver_finish+0x0/0x14c
 [<783992bb>] ip_rcv+0x498/0x4d1
 [<7839896c>] ip_rcv_finish+0x0/0x2a4
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7837c73c>] process_backlog+0xef/0xfa
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7837c9d8>] net_rx_action+0x162/0x18f
 [<7813e986>] trace_hardirqs_on+0x10c/0x14c
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<781298c7>] local_bh_enable+0x110/0x130
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7837cbf7>] dev_queue_xmit+0x1f2/0x213
 [<7816ef17>] core_sys_select+0x2ac/0x2c9
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78160ec2>] kmem_cache_free+0x9a/0xa1
 [<78160ec2>] kmem_cache_free+0x9a/0xa1
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7816f340>] sys_select+0xd6/0x188
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
smartd        S EA68DD0C     0  3487      1 (NOTLB)
       ea68dd20 00000092 00000002 ea68dd0c ea68dd08 00000000 00000000 f6f77f54
       00000009 00000000 ec907490 785143e0 ad75944f 0001e66c 00034dc2 ec90759c
       7a01a980 f6e11040 1ff502a1 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783cc30c>] unix_wait_for_peer+0x75/0x9a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783cc80c>] unix_dgram_sendmsg+0x35d/0x431
 [<78371838>] sock_sendmsg+0xbc/0xd4
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813facb>] lock_release_non_nested+0xec/0x14d
 [<783cdc63>] unix_dgram_connect+0x8d/0x157
 [<783cdd00>] unix_dgram_connect+0x12a/0x157
 [<783cdc63>] unix_dgram_connect+0x8d/0x157
 [<78371b64>] sys_sendto+0x11b/0x13b
 [<7840d623>] _spin_unlock+0x25/0x3b
 [<783cdd00>] unix_dgram_connect+0x12a/0x157
 [<7837135e>] sys_connect+0x72/0x9c
 [<78371bbb>] sys_send+0x37/0x3b
 [<78372830>] sys_socketcall+0x12d/0x242
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
snmpd         D 00000000     0  3546      1 (NOTLB)
       ea333cc4 00200082 00000000 00000000 7813e7a9 f7c74ae0 7840d6e3 785cde80
       00000007 00000000 f7c74ae0 89333550 abd756a0 0001ff61 00002ebd f7c74bec
       7a01a980 f6e31800 785cde80 7812ccc1 00000000 00200296 ea333cd4 2199064e
Call Trace:
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812ccc1>] __mod_timer+0x92/0x9c
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812cada>] process_timeout+0x0/0x5
 [<7840b0ce>] io_schedule_timeout+0x1e/0x28
 [<78151eca>] congestion_wait+0x50/0x64
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7814dd9f>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<7814a483>] generic_file_buffered_write+0x4ee/0x605
 [<78128f60>] current_fs_time+0x41/0x46
 [<7814aa1a>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7814aad1>] generic_file_aio_write+0x58/0xb6
 [<78199bb4>] ext3_file_write+0x24/0x92
 [<78164133>] do_sync_write+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78160ec2>] kmem_cache_free+0x9a/0xa1
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7816406c>] do_sync_write+0x0/0x10a
 [<78164896>] vfs_write+0x8a/0x10c
 [<78164e0f>] sys_write+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
squid         S E9A41F20     0  3604      1 (NOTLB)
       e9a41f34 00200082 00000002 e9a41f20 e9a41f1c 00000000 e9e6da40 e9e6da40
       00000005 00000000 e9e6d510 785143e0 74bc2237 00000016 00014e53 e9e6d61c
       7a01a980 f6e31d00 fffc3aae 00000003 00000000 00000000 ffffffff 00000001
Call Trace:
 [<781276ea>] do_wait+0x8eb/0x9e6
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7812027d>] default_wake_function+0x0/0xc
 [<78127816>] sys_wait4+0x31/0x34
 [<78127840>] sys_waitpid+0x27/0x2b
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
squid         D 00000000     0  3606   3604 (NOTLB)
       e9a65cc4 00000082 00000000 00000000 7813e7a9 f7cb9510 7840d6e3 7a1d2000
       00000007 00000001 f7cb9510 edcd1490 abd7ba4d 0001ff61 000021db f7cb961c
       7a022980 f6ce25c0 7a1d2000 7812ccc1 00000000 00000296 e9a65cd4 2199064e
Call Trace:
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812ccc1>] __mod_timer+0x92/0x9c
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812cada>] process_timeout+0x0/0x5
 [<7840b0ce>] io_schedule_timeout+0x1e/0x28
 [<78151eca>] congestion_wait+0x50/0x64
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7814dd9f>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<7814a483>] generic_file_buffered_write+0x4ee/0x605
 [<78128f60>] current_fs_time+0x41/0x46
 [<7814aa1a>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7814aad1>] generic_file_aio_write+0x58/0xb6
 [<78199bb4>] ext3_file_write+0x24/0x92
 [<78164133>] do_sync_write+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781648dd>] vfs_write+0xd1/0x10c
 [<7816406c>] do_sync_write+0x0/0x10a
 [<78164896>] vfs_write+0x8a/0x10c
 [<78164e0f>] sys_write+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
sshd          S E9239B4C     0  3663      1 (NOTLB)
       e9239b60 00200096 00000002 e9239b4c e9239b48 00000000 00000000 00000000
       00000006 00000000 e9b7e0f0 785143e0 61b30795 0001ff2d 000245bd e9b7e1fc
       7a01a980 f62ad540 219595ea 00000003 00000000 00000000 7fffffff e9239f9c
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<783a2023>] tcp_poll+0x1c/0x129
 [<7816e4d2>] free_poll_entry+0xe/0x16
 [<7816ec1d>] do_select+0x399/0x3e7
 [<7816f1be>] __pollwait+0x0/0xac
 [<7812027d>] default_wake_function+0x0/0xc
 [<7812027d>] default_wake_function+0x0/0xc
 [<783b1465>] tcp_v4_rcv+0x504/0x787
 [<7840d623>] _spin_unlock+0x25/0x3b
 [<783b1465>] tcp_v4_rcv+0x504/0x787
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<78160ec2>] kmem_cache_free+0x9a/0xa1
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<78160ec2>] kmem_cache_free+0x9a/0xa1
 [<7813e986>] trace_hardirqs_on+0x10c/0x14c
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<782be370>] e1000_clean_tx_irq+0xbc/0x2c5
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7816ef17>] core_sys_select+0x2ac/0x2c9
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78160ec2>] kmem_cache_free+0x9a/0xa1
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7840d727>] _spin_unlock_irq+0x20/0x41
 [<7816f340>] sys_select+0xd6/0x188
 [<78103e68>] restore_nocheck+0x12/0x15
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
zabbix_agentd S 7A1F2AA0     0  3722      1 (NOTLB)
       e9369f34 00200082 7854ec14 7a1f2aa0 00000001 786b7e90 7a1f2fd0 7a1f2fd0
       00000001 00000001 7a1f2aa0 f7fb4b60 0ec74ab6 00000017 0000b6ef 7a1f2bac
       7a022980 e9314540 00000000 00200246 7854ec00 00000001 00000e8c 00000001
Call Trace:
 [<781276ea>] do_wait+0x8eb/0x9e6
 [<781197d3>] do_page_fault+0x278/0x535
 [<7812027d>] default_wake_function+0x0/0xc
 [<78127816>] sys_wait4+0x31/0x34
 [<78127840>] sys_waitpid+0x27/0x2b
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
zabbix_agentd S E93D1F2C     0  3724   3722 (NOTLB)
       e93d1f40 00200092 00000002 e93d1f2c e93d1f28 00000000 7813e99c 7a02391c
       00000008 00000001 ec8de0f0 f7fbe030 aa5c0061 0001ff61 0004acea ec8de1fc
       7a022980 e9314cc0 219905d2 00000003 00000000 00000000 e93d1f58 00000001
Call Trace:
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840c5af>] do_nanosleep+0x48/0x73
 [<78137d84>] hrtimer_nanosleep+0x39/0xdc
 [<78137879>] hrtimer_wakeup+0x0/0x18
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78137e70>] sys_nanosleep+0x49/0x59
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
zabbix_agentd S 00000070     0  3725   3722 (NOTLB)
       e8ccbe44 00200086 00000001 00000070 7a01b3e0 00000002 00000000 ead5e140
       00000001 00000000 e90b8b60 e9e6cae0 0f7701b8 00000017 000051d5 e90b8c6c
       7a01a980 e913a0c0 fffc44d7 00000003 00000000 00000070 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<783a0b90>] inet_csk_accept+0x9e/0x214
 [<78371564>] sock_attach_fd+0x53/0xb5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783bb524>] inet_accept+0x1f/0xa4
 [<78371564>] sock_attach_fd+0x53/0xb5
 [<7837262e>] sys_accept+0xb2/0x187
 [<78153c96>] __handle_mm_fault+0x29a/0x83b
 [<7840d623>] _spin_unlock+0x25/0x3b
 [<78153d89>] __handle_mm_fault+0x38d/0x83b
 [<781197d3>] do_page_fault+0x278/0x535
 [<783727bc>] sys_socketcall+0xb9/0x242
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103e68>] restore_nocheck+0x12/0x15
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
zabbix_agentd S E8CCDE30     0  3726   3722 (NOTLB)
       e8ccde44 00200086 00000002 e8ccde30 e8ccde2c 00000000 00000000 ead5e140
       00000001 00000000 e9e6cae0 785143e0 0fa415f1 00000017 0001181a e9e6cbec
       7a01a980 e93147c0 fffc44d7 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<783a0b90>] inet_csk_accept+0x9e/0x214
 [<78371564>] sock_attach_fd+0x53/0xb5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783bb524>] inet_accept+0x1f/0xa4
 [<78371564>] sock_attach_fd+0x53/0xb5
 [<7837262e>] sys_accept+0xb2/0x187
 [<78153c96>] __handle_mm_fault+0x29a/0x83b
 [<7840d623>] _spin_unlock+0x25/0x3b
 [<78153d89>] __handle_mm_fault+0x38d/0x83b
 [<781197d3>] do_page_fault+0x278/0x535
 [<783727bc>] sys_socketcall+0xb9/0x242
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103e68>] restore_nocheck+0x12/0x15
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
zabbix_agentd S 00000000     0  3727   3722 (NOTLB)
       e8cede44 00200086 00000000 00000000 7813e7a9 00000000 00000000 ead5e140
       00000001 00000001 f7cb8ae0 e9e6c0b0 0fbbed36 00000017 0003651f f7cb8bec
       7a022980 f62ad040 f7cb9010 f7cb8ae0 781299b4 e8cedea0 7fffffff 7fffffff
Call Trace:
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<781299b4>] local_bh_enable_ip+0xcd/0xed
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<783a0b90>] inet_csk_accept+0x9e/0x214
 [<78371564>] sock_attach_fd+0x53/0xb5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783bb524>] inet_accept+0x1f/0xa4
 [<78371564>] sock_attach_fd+0x53/0xb5
 [<7837262e>] sys_accept+0xb2/0x187
 [<78153c96>] __handle_mm_fault+0x29a/0x83b
 [<7840d623>] _spin_unlock+0x25/0x3b
 [<78153d89>] __handle_mm_fault+0x38d/0x83b
 [<781197d3>] do_page_fault+0x278/0x535
 [<783727bc>] sys_socketcall+0xb9/0x242
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103e68>] restore_nocheck+0x12/0x15
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
zabbix_agentd D E8CEFCB0     0  3728   3722 (NOTLB)
       e8cefcc4 00200082 00000002 e8cefcb0 e8cefcac 00000000 7840d6e3 7a1d2000
       00000008 00000001 e9e6c0b0 f7fbe030 abd7fd2c 0001ff61 00002173 e9e6c1bc
       7a022980 e913a340 219905eb 00000003 00000000 00000000 e8cefcd4 2199064e
Call Trace:
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812cada>] process_timeout+0x0/0x5
 [<7840b0ce>] io_schedule_timeout+0x1e/0x28
 [<78151eca>] congestion_wait+0x50/0x64
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7814dd9f>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<7814a483>] generic_file_buffered_write+0x4ee/0x605
 [<7814a8cf>] __generic_file_aio_write_nolock+0x335/0x4df
 [<78128f60>] current_fs_time+0x41/0x46
 [<7814aa1a>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7814aad1>] generic_file_aio_write+0x58/0xb6
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7814ce3d>] get_page_from_freelist+0x293/0x35e
 [<78199bb4>] ext3_file_write+0x24/0x92
 [<78164133>] do_sync_write+0xc7/0x10a
 [<78153e4d>] __handle_mm_fault+0x451/0x83b
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781197d3>] do_page_fault+0x278/0x535
 [<7816406c>] do_sync_write+0x0/0x10a
 [<78164896>] vfs_write+0x8a/0x10c
 [<7811955b>] do_page_fault+0x0/0x535
 [<78164e0f>] sys_write+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
agetty        S E8849EC8     0  3791      1 (NOTLB)
       e8849edc 00000086 00000002 e8849ec8 e8849ec4 00000000 00000000 00000000
       00000007 00000001 f7fb4b60 f7fbe030 6d0455f6 00000017 0018bcb8 f7fb4c6c
       7a022980 f6b15d40 fffc4afe 00000003 00000000 00000000 7fffffff f539f000
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840d6ef>] _spin_unlock_irqrestore+0x40/0x58
 [<7829cf28>] read_chan+0x2fb/0x54b
 [<78297ed7>] tty_ldisc_try+0x2e/0x32
 [<7812027d>] default_wake_function+0x0/0xc
 [<7829cc2d>] read_chan+0x0/0x54b
 [<7829a215>] tty_read+0x67/0xaf
 [<7829a1ae>] tty_read+0x0/0xaf
 [<781649a0>] vfs_read+0x88/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
agetty        S 00000000     0  3793      1 (NOTLB)
       e8d19edc 00000086 00000000 00000000 f53a5c8c ea120a60 00000000 00000000
       00000007 00000001 ea120a60 f7e12030 70e57c7e 00000017 0010cf04 ea120b6c
       7a022980 e9314040 ea120f98 ea120a60 00000000 00000000 7fffffff f53a5800
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840d6ef>] _spin_unlock_irqrestore+0x40/0x58
 [<7829cf28>] read_chan+0x2fb/0x54b
 [<78297ed7>] tty_ldisc_try+0x2e/0x32
 [<7812027d>] default_wake_function+0x0/0xc
 [<7829cc2d>] read_chan+0x0/0x54b
 [<7829a215>] tty_read+0x67/0xaf
 [<7829a1ae>] tty_read+0x0/0xaf
 [<781649a0>] vfs_read+0x88/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
agetty        S 00000000     0  3794      1 (NOTLB)
       e8d61edc 00000086 00000000 00000000 f56dcc8c f7e12030 00000000 00000000
       00000007 00000001 f7e12030 ec8deb20 70f48c60 00000017 000f0fe2 f7e1213c
       7a022980 f62adcc0 f7e12568 f7e12030 00000000 00000000 7fffffff f56dc800
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840d6ef>] _spin_unlock_irqrestore+0x40/0x58
 [<7829cf28>] read_chan+0x2fb/0x54b
 [<78297ed7>] tty_ldisc_try+0x2e/0x32
 [<7812027d>] default_wake_function+0x0/0xc
 [<7829cc2d>] read_chan+0x0/0x54b
 [<7829a215>] tty_read+0x67/0xaf
 [<7829a1ae>] tty_read+0x0/0xaf
 [<781649a0>] vfs_read+0x88/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
agetty        S 00000000     0  3795      1 (NOTLB)
       e8d23edc 00000086 00000000 00000000 f52fcc8c ec8deb20 00000000 00000000
       00000007 00000001 ec8deb20 ec8ff590 71036a29 00000017 000eddc9 ec8dec2c
       7a022980 e913a840 ec8df058 ec8deb20 00000000 00000000 7fffffff f52fc800
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840d6ef>] _spin_unlock_irqrestore+0x40/0x58
 [<7829cf28>] read_chan+0x2fb/0x54b
 [<78297ed7>] tty_ldisc_try+0x2e/0x32
 [<7812027d>] default_wake_function+0x0/0xc
 [<7829cc2d>] read_chan+0x0/0x54b
 [<7829a215>] tty_read+0x67/0xaf
 [<7829a1ae>] tty_read+0x0/0xaf
 [<781649a0>] vfs_read+0x88/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
agetty        S 00000000     0  3796      1 (NOTLB)
       e8903edc 00000086 00000000 00000000 f583748c ec8ff590 00000000 00000000
       00000007 00000001 ec8ff590 e8eea130 71118e5d 00000017 000e2434 ec8ff69c
       7a022980 f6f5b080 ec8ffac8 ec8ff590 00000000 00000000 7fffffff f5837000
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840d6ef>] _spin_unlock_irqrestore+0x40/0x58
 [<7829cf28>] read_chan+0x2fb/0x54b
 [<78297ed7>] tty_ldisc_try+0x2e/0x32
 [<7812027d>] default_wake_function+0x0/0xc
 [<7829cc2d>] read_chan+0x0/0x54b
 [<7829a215>] tty_read+0x67/0xaf
 [<7829a1ae>] tty_read+0x0/0xaf
 [<781649a0>] vfs_read+0x88/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
agetty        S E8921EC8     0  3797      1 (NOTLB)
       e8921edc 00000086 00000002 e8921ec8 e8921ec4 00000000 00000000 00000000
       00000007 00000001 e8eea130 f7fbe030 713eaf24 00000017 00023594 e8eea23c
       7a022980 e913a5c0 fffc4b41 00000003 00000000 00000000 7fffffff f7242000
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840d6ef>] _spin_unlock_irqrestore+0x40/0x58
 [<7829cf28>] read_chan+0x2fb/0x54b
 [<78297ed7>] tty_ldisc_try+0x2e/0x32
 [<7812027d>] default_wake_function+0x0/0xc
 [<7829cc2d>] read_chan+0x0/0x54b
 [<7829a215>] tty_read+0x67/0xaf
 [<7829a1ae>] tty_read+0x0/0xaf
 [<781649a0>] vfs_read+0x88/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
agetty        S 00000000     0  3798      1 (NOTLB)
       e8923edc 00000086 00000000 00000000 f5d5048c ec906030 00000000 00000000
       00000007 00000001 ec906030 ea024070 712da585 00000017 000ebb02 ec90613c
       7a022980 e913aac0 ec906568 ec906030 00000000 00000000 7fffffff f5d50000
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840d6ef>] _spin_unlock_irqrestore+0x40/0x58
 [<7829cf28>] read_chan+0x2fb/0x54b
 [<78297ed7>] tty_ldisc_try+0x2e/0x32
 [<7812027d>] default_wake_function+0x0/0xc
 [<7829cc2d>] read_chan+0x0/0x54b
 [<7829a215>] tty_read+0x67/0xaf
 [<7829a1ae>] tty_read+0x0/0xaf
 [<781649a0>] vfs_read+0x88/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
agetty        S 00000000     0  3799      1 (NOTLB)
       e8925edc 00000086 00000000 00000000 f6f2348c ea024070 00000000 00000000
       00000007 00000001 ea024070 e8eea130 713c7990 00000017 000ed40b ea02417c
       7a022980 f6e31580 ea0245a8 ea024070 00000000 00000000 7fffffff f6f23000
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840d6ef>] _spin_unlock_irqrestore+0x40/0x58
 [<7829cf28>] read_chan+0x2fb/0x54b
 [<78297ed7>] tty_ldisc_try+0x2e/0x32
 [<7812027d>] default_wake_function+0x0/0xc
 [<7829cc2d>] read_chan+0x0/0x54b
 [<7829a215>] tty_read+0x67/0xaf
 [<7829a1ae>] tty_read+0x0/0xaf
 [<781649a0>] vfs_read+0x88/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
sshd          S E3429DC4     0  3835   3663 (NOTLB)
       e3429dd8 00000082 00000002 e3429dc4 e3429dc0 00000000 00000000 e12799ac
       00000007 00000000 7a1eaa60 785143e0 1e0f5b36 00000025 00001291 7a1eab6c
       7a01a980 e8d34d00 fffd3160 00000003 00000000 00000000 7fffffff 00000000
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783ccb1d>] unix_stream_recvmsg+0x1f2/0x475
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78370fdd>] sock_aio_read+0xc8/0xd4
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7816423d>] do_sync_read+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78160ec2>] kmem_cache_free+0x9a/0xa1
 [<781649b4>] vfs_read+0x9c/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
sshd          S E6C59B4C     0  3838   3835 (NOTLB)
       e6c59b60 00000096 00000002 e6c59b4c e6c59b48 00000000 e90b8688 e90b8668
       0000000a 00000000 e90b8130 785143e0 28faff08 0001ff23 0001a008 e90b823c
       7a01a980 f71bd580 2194e9e6 00000003 00000000 00000000 7fffffff e6c59f9c
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7840d6ef>] _spin_unlock_irqrestore+0x40/0x58
 [<7829cb0e>] normal_poll+0x0/0x11f
 [<78298d67>] tty_poll+0x56/0x63
 [<7816ec1d>] do_select+0x399/0x3e7
 [<7816f1be>] __pollwait+0x0/0xac
 [<7812027d>] default_wake_function+0x0/0xc
 [<7812027d>] default_wake_function+0x0/0xc
 [<7812027d>] default_wake_function+0x0/0xc
 [<7812027d>] default_wake_function+0x0/0xc
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7837cbf7>] dev_queue_xmit+0x1f2/0x213
 [<7839dedc>] ip_output+0x270/0x2ac
 [<7839c7d5>] ip_finish_output+0x0/0x212
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7839b270>] dst_output+0x0/0x7
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7840d6ef>] _spin_unlock_irqrestore+0x40/0x58
 [<781299b4>] local_bh_enable_ip+0xcd/0xed
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783a3340>] tcp_sendmsg+0x91c/0xa0d
 [<7840d1f4>] _spin_lock_bh+0x38/0x43
 [<783a3340>] tcp_sendmsg+0x91c/0xa0d
 [<7816ef17>] core_sys_select+0x2ac/0x2c9
 [<78370f09>] sock_aio_write+0xcb/0xd7
 [<78164133>] do_sync_write+0xc7/0x10a
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7816f340>] sys_select+0xd6/0x188
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78164e0f>] sys_write+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
bash          S EA120030     0  3839   3838 (NOTLB)
       e3c81f34 00000082 7854ec14 ea120030 00000001 786b7e90 ea120560 ea120560
       00000009 00000000 ea120030 e7428030 3ac85a37 00000028 00019fc2 ea12013c
       7a01a980 f71bd800 00000000 00000246 7854ec00 00000001 ffffffff 00000001
Call Trace:
 [<781276ea>] do_wait+0x8eb/0x9e6
 [<7812027d>] default_wake_function+0x0/0xc
 [<78127816>] sys_wait4+0x31/0x34
 [<78127840>] sys_waitpid+0x27/0x2b
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
bash          S E7428030     0  3866   3839 (NOTLB)
       e5f65f34 00000082 7854ec14 e7428030 00000001 786b7e90 e7428560 e7428560
       00000008 00000000 e7428030 f7c6c070 8cd13ab9 0001ff22 0001ae9d e742813c
       7a01a980 f71bda80 00000000 00000246 7854ec00 00000001 ffffffff 00000001
Call Trace:
 [<781276ea>] do_wait+0x8eb/0x9e6
 [<7812027d>] default_wake_function+0x0/0xc
 [<78127816>] sys_wait4+0x31/0x34
 [<78127840>] sys_waitpid+0x27/0x2b
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
sshd          S 00000127     0  3878   3663 (NOTLB)
       e3d2bdd8 00000082 00000000 00000127 00000000 00000000 00000000 e4c4c0ac
       00000007 00000001 e21fa0b0 e21fb510 5abe5861 0000002b 00001417 e21fa1bc
       7a022980 f71bd080 e21fa5e0 e21fa0b0 00000000 e21fa0b0 7fffffff 00000000
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783ccb1d>] unix_stream_recvmsg+0x1f2/0x475
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78370fdd>] sock_aio_read+0xc8/0xd4
 [<7816423d>] do_sync_read+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78160ec2>] kmem_cache_free+0x9a/0xa1
 [<781649b4>] vfs_read+0x9c/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
sshd          S 00000000     0  3880   3878 (NOTLB)
       e406fb60 00000096 00000046 00000000 00000002 00000001 e21fba68 e21fba48
       0000000a 00000000 e21fb510 e4632130 adc501bc 0001ff61 00010239 e21fb61c
       7a01a980 e93142c0 e91624c0 e1acb80c 7813e99c e91624c0 7fffffff e406ff9c
Call Trace:
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7840d6ef>] _spin_unlock_irqrestore+0x40/0x58
 [<7829cb0e>] normal_poll+0x0/0x11f
 [<78298d67>] tty_poll+0x56/0x63
 [<7816ec1d>] do_select+0x399/0x3e7
 [<7816f1be>] __pollwait+0x0/0xac
 [<7812027d>] default_wake_function+0x0/0xc
 [<7812027d>] default_wake_function+0x0/0xc
 [<7812027d>] default_wake_function+0x0/0xc
 [<7812027d>] default_wake_function+0x0/0xc
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7837cbf7>] dev_queue_xmit+0x1f2/0x213
 [<7839dedc>] ip_output+0x270/0x2ac
 [<7839c7d5>] ip_finish_output+0x0/0x212
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7839b270>] dst_output+0x0/0x7
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7840d6ef>] _spin_unlock_irqrestore+0x40/0x58
 [<781299b4>] local_bh_enable_ip+0xcd/0xed
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783a3340>] tcp_sendmsg+0x91c/0xa0d
 [<7840d1f4>] _spin_lock_bh+0x38/0x43
 [<783a3340>] tcp_sendmsg+0x91c/0xa0d
 [<78120273>] try_to_wake_up+0x393/0x39d
 [<7816ef17>] core_sys_select+0x2ac/0x2c9
 [<78370f09>] sock_aio_write+0xcb/0xd7
 [<78164133>] do_sync_write+0xc7/0x10a
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7816f340>] sys_select+0xd6/0x188
 [<78103dc6>] sysenter_past_esp+0x8f/0x99
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78164e0f>] sys_write+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
bash          S E401D550     0  3881   3880 (NOTLB)
       e401ff34 00000082 7854ec14 e401d550 00000001 786b7e90 e401da80 e401da80
       00000009 00000000 e401d550 e8eeab60 53749886 0000002c 0001b062 e401d65c
       7a01a980 e4743d40 00000000 00000246 7854ec00 00000001 ffffffff 00000001
Call Trace:
 [<781276ea>] do_wait+0x8eb/0x9e6
 [<7812027d>] default_wake_function+0x0/0xc
 [<78127816>] sys_wait4+0x31/0x34
 [<78127840>] sys_waitpid+0x27/0x2b
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
bash          R running     0  3887   3881 (NOTLB)
unlinkd       S 00000218     0  1139   3606 (NOTLB)
       c1d17e58 00000086 00000002 00000218 7a0233e0 00000002 00000000 89332b20
       00000007 00000001 89332b20 f7cd1490 861dbffc 0001e62a 00045f0d 89332c2c
       7a022980 f6e11a40 1ff0a94d 00000003 00000000 00000218 f5a60c00 f7cb5b78
Call Trace:
 [<7816943d>] pipe_wait+0x53/0x73
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78169b7d>] pipe_read+0x2af/0x321
 [<78174b7c>] file_update_time+0x22/0x6a
 [<7816423d>] do_sync_read+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781648dd>] vfs_write+0xd1/0x10c
 [<78164176>] do_sync_read+0x0/0x10a
 [<781649a0>] vfs_read+0x88/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
pdflush       S ACC9FF9C     0   944      2 (L-TLB)
       acc9ffb0 00000092 00000002 acc9ff9c acc9ff98 00000000 9c505590 9c505590
       00000007 00000001 7a1fa0b0 f7fbe030 6ce9b6b3 0001e655 0000065b 7a1fa1bc
       7a022980 f68e9340 1ff37b50 00000003 00000000 00000000 acc9ffc8 7814e33a
Call Trace:
 [<7814e33a>] pdflush+0x0/0x1a6
 [<7814e3e8>] pdflush+0xae/0x1a6
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
amavisd       S 00000000     0  6000   2823 (NOTLB)
       929dfe44 00000086 00000000 00000000 7813e7a9 00000000 00000000 f532f940
       0000000a 00000001 e8eeb590 f7e03550 a54ef8a7 0001e62a 000028bb e8eeb69c
       7a022980 d3e4a040 e8eebac0 e8eeb590 781299b4 fffffe00 7fffffff 7fffffff
Call Trace:
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<781299b4>] local_bh_enable_ip+0xcd/0xed
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<783a0b90>] inet_csk_accept+0x9e/0x214
 [<78371564>] sock_attach_fd+0x53/0xb5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783bb524>] inet_accept+0x1f/0xa4
 [<78371564>] sock_attach_fd+0x53/0xb5
 [<7837262e>] sys_accept+0xb2/0x187
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<781332be>] __rcu_process_callbacks+0xe3/0x172
 [<781332a7>] __rcu_process_callbacks+0xcc/0x172
 [<7840d623>] _spin_unlock+0x25/0x3b
 [<783727bc>] sys_socketcall+0xb9/0x242
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103e68>] restore_nocheck+0x12/0x15
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
squidauth.pl  S 95039DC4     0  6285   3606 (NOTLB)
       95039dd8 00000082 00000002 95039dc4 95039dc0 00000000 00000000 c31076ac
       00000005 00000001 7f498aa0 f7fbe030 84d480c7 0001e4d0 00001b01 7f498bac
       7a022980 f6e117c0 1fd9ea3c 00000003 00000000 00000000 7fffffff 00000000
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783ccb1d>] unix_stream_recvmsg+0x1f2/0x475
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78370fdd>] sock_aio_read+0xc8/0xd4
 [<7814aada>] generic_file_aio_write+0x61/0xb6
 [<7816423d>] do_sync_read+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781648dd>] vfs_write+0xd1/0x10c
 [<781649b4>] vfs_read+0x9c/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
squidauth.pl  S 00000127     0  6286   3606 (NOTLB)
       bc6c7dd8 00000082 00000000 00000127 00000000 00000000 00000000 d18f63ac
       00000007 00000001 e7429490 f7e2b550 15274906 0001df92 0004a752 e742959c
       7a022980 f68e9d40 e74299c0 e7429490 00000000 e7429490 7fffffff 00000000
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783ccb1d>] unix_stream_recvmsg+0x1f2/0x475
 [<7814aa1a>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78370fdd>] sock_aio_read+0xc8/0xd4
 [<7814aada>] generic_file_aio_write+0x61/0xb6
 [<7816423d>] do_sync_read+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78165177>] __fput+0x10a/0x134
 [<781649b4>] vfs_read+0x9c/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
squidauth.pl  S 00000127     0  6287   3606 (NOTLB)
       e31d5dd8 00000082 00000000 00000127 00000000 00000000 00000000 9cb76cac
       00000007 00000001 f7e2b550 7f498aa0 152ad54d 0001df92 00038c47 f7e2b65c
       7a022980 c053dac0 f7e2ba80 f7e2b550 00000000 f7e2b550 7fffffff 00000000
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<783ccb1d>] unix_stream_recvmsg+0x1f2/0x475
 [<7814aa1a>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78370fdd>] sock_aio_read+0xc8/0xd4
 [<7814aada>] generic_file_aio_write+0x61/0xb6
 [<7816423d>] do_sync_read+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78165177>] __fput+0x10a/0x134
 [<781649b4>] vfs_read+0x9c/0x10a
 [<78164da8>] sys_read+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
pdflush       S F6A75F9C     0  6288      2 (L-TLB)
       f6a75fb0 00000092 00000002 f6a75f9c f6a75f98 00000000 00000292 edcd0560
       0000000a 00000001 edcd0030 f7fbe030 bc332535 0001ff60 00002604 edcd013c
       7a022980 f6e31800 2198f62b 00000003 00000000 00000000 f6a75fc8 7814e33a
Call Trace:
 [<7814e33a>] pdflush+0x0/0x1a6
 [<7814e3e8>] pdflush+0xae/0x1a6
 [<781350d5>] kthread+0x38/0x5f
 [<7813509d>] kthread+0x0/0x5f
 [<78104a8f>] kernel_thread_helper+0x7/0x10
 =======================
amavisd       S 85DD5E30     0  6350   2823 (NOTLB)
       85dd5e44 00000086 00000002 85dd5e30 85dd5e2c 00000000 00000000 f532f940
       00000007 00000001 a163eaa0 f7fbe030 bcc842a9 0001e62a 000024c3 a163ebac
       7a022980 f62ad2c0 1ff0acdf 00000003 00000000 00000000 7fffffff 7fffffff
Call Trace:
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<783a0b90>] inet_csk_accept+0x9e/0x214
 [<78371564>] sock_attach_fd+0x53/0xb5
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<783bb524>] inet_accept+0x1f/0xa4
 [<78371564>] sock_attach_fd+0x53/0xb5
 [<7837262e>] sys_accept+0xb2/0x187
 [<7840d623>] _spin_unlock+0x25/0x3b
 [<78152de2>] do_wp_page+0x3b9/0x406
 [<7840d1b1>] _spin_lock+0x33/0x3e
 [<781541bf>] __handle_mm_fault+0x7c3/0x83b
 [<781197d3>] do_page_fault+0x278/0x535
 [<783727bc>] sys_socketcall+0xb9/0x242
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103e68>] restore_nocheck+0x12/0x15
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
amavisd       S F7E03550     0  6922   2823 (NOTLB)
       d7bdfb60 00000096 786b7ed0 f7e03550 00000001 00000046 00000000 00000000
       0000000a 00000001 f7e03550 edcd0a60 6a543cc5 0001e655 0008e72a f7e0365c
       7a022980 f6e11cc0 7813e99c 00000000 00000046 f5829698 7fffffff d7bdff9c
Call Trace:
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<783a2023>] tcp_poll+0x1c/0x129
 [<7816ec1d>] do_select+0x399/0x3e7
 [<7816f1be>] __pollwait+0x0/0xac
 [<7812027d>] default_wake_function+0x0/0xc
 [<7812027d>] default_wake_function+0x0/0xc
 [<783b1465>] tcp_v4_rcv+0x504/0x787
 [<783b133f>] tcp_v4_rcv+0x3de/0x787
 [<7840d623>] _spin_unlock+0x25/0x3b
 [<783b1465>] tcp_v4_rcv+0x504/0x787
 [<7839949e>] ip_local_deliver+0x1aa/0x1c8
 [<78398c10>] ip_local_deliver_finish+0x0/0x14c
 [<783992bb>] ip_rcv+0x498/0x4d1
 [<7839896c>] ip_rcv_finish+0x0/0x2a4
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7837c73c>] process_backlog+0xef/0xfa
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7837c9d8>] net_rx_action+0x162/0x18f
 [<7813e986>] trace_hardirqs_on+0x10c/0x14c
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<781298c7>] local_bh_enable+0x110/0x130
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7837cbf7>] dev_queue_xmit+0x1f2/0x213
 [<7838108c>] neigh_resolve_output+0x1f2/0x224
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7816ef17>] core_sys_select+0x2ac/0x2c9
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78152de2>] do_wp_page+0x3b9/0x406
 [<7840d623>] _spin_unlock+0x25/0x3b
 [<78152de2>] do_wp_page+0x3b9/0x406
 [<7840d1b1>] _spin_lock+0x33/0x3e
 [<781541bf>] __handle_mm_fault+0x7c3/0x83b
 [<7812938d>] _local_bh_enable+0xb1/0xbe
 [<781197d3>] do_page_fault+0x278/0x535
 [<7816f340>] sys_select+0xd6/0x188
 [<78103e68>] restore_nocheck+0x12/0x15
 [<7811955b>] do_page_fault+0x0/0x535
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
amavisd       D 00000000     0  7227   2823 (NOTLB)
       d1953e2c 00000082 00000000 00000000 7813e7a9 edcd0a60 7840d6e3 7a1d2000
       00000007 00000001 edcd0a60 f7cb14d0 abd73b81 0001ff61 000032a5 edcd0b6c
       7a022980 e632d2c0 7a1d2000 7812ccc1 00000000 00000292 d1953e3c 2199064e
Call Trace:
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812ccc1>] __mod_timer+0x92/0x9c
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812cada>] process_timeout+0x0/0x5
 [<7840b0ce>] io_schedule_timeout+0x1e/0x28
 [<78151eca>] congestion_wait+0x50/0x64
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7814dd9f>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<78152ded>] do_wp_page+0x3c4/0x406
 [<7840d1b1>] _spin_lock+0x33/0x3e
 [<781541bf>] __handle_mm_fault+0x7c3/0x83b
 [<78138305>] down_read_trylock+0x47/0x4e
 [<7811976d>] do_page_fault+0x212/0x535
 [<7811955b>] do_page_fault+0x0/0x535
 [<7840d96a>] error_code+0x72/0x78
 =======================
amavisd       S 00000002     0  7836   2823 (NOTLB)
       98947b60 00000096 00000000 00000002 7840ae59 00000000 00000000 00000000
       0000000a 00000000 ec8feb60 89333550 bca917ba 0001e62a 00001d50 ec8fec6c
       7a01a980 e8d34a80 1ff0ab63 00000003 00000000 00000000 7fffffff 98947f9c
Call Trace:
 [<7840ae59>] __sched_text_start+0x701/0x810
 [<7840b667>] schedule_timeout+0x13/0x8d
 [<783a2023>] tcp_poll+0x1c/0x129
 [<7816ec1d>] do_select+0x399/0x3e7
 [<7816f1be>] __pollwait+0x0/0xac
 [<7812027d>] default_wake_function+0x0/0xc
 [<7812027d>] default_wake_function+0x0/0xc
 [<7813e986>] trace_hardirqs_on+0x10c/0x14c
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<781298c7>] local_bh_enable+0x110/0x130
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7837cbf7>] dev_queue_xmit+0x1f2/0x213
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<781299b4>] local_bh_enable_ip+0xcd/0xed
 [<7816ef17>] core_sys_select+0x2ac/0x2c9
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<78152de2>] do_wp_page+0x3b9/0x406
 [<7840d623>] _spin_unlock+0x25/0x3b
 [<78152de2>] do_wp_page+0x3b9/0x406
 [<7840d1b1>] _spin_lock+0x33/0x3e
 [<781541bf>] __handle_mm_fault+0x7c3/0x83b
 [<781197d3>] do_page_fault+0x278/0x535
 [<7816f340>] sys_select+0xd6/0x188
 [<78103e68>] restore_nocheck+0x12/0x15
 [<7811955b>] do_page_fault+0x0/0x535
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
sadc          D 00000000     0  8202      1 (NOTLB)
       ccff9cc4 00000082 00000000 00000000 7813e7a9 edcd1490 7840d6e3 7a1d2000
       00000007 00000001 edcd1490 e9e6c0b0 abd7dbb9 0001ff61 0000216c edcd159c
       7a022980 c053d340 7a1d2000 7812ccc1 00000000 00000296 ccff9cd4 2199064e
Call Trace:
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812ccc1>] __mod_timer+0x92/0x9c
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812cada>] process_timeout+0x0/0x5
 [<7840b0ce>] io_schedule_timeout+0x1e/0x28
 [<78151eca>] congestion_wait+0x50/0x64
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7814dd9f>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<7814a483>] generic_file_buffered_write+0x4ee/0x605
 [<78128f60>] current_fs_time+0x41/0x46
 [<7814aa1a>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7814aad1>] generic_file_aio_write+0x58/0xb6
 [<7814cb18>] free_hot_cold_page+0x13b/0x167
 [<78199bb4>] ext3_file_write+0x24/0x92
 [<78164133>] do_sync_write+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781648dd>] vfs_write+0xd1/0x10c
 [<7816406c>] do_sync_write+0x0/0x10a
 [<78164896>] vfs_write+0x8a/0x10c
 [<78164e0f>] sys_write+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
emerge        D B7BF9CB0     0  8204   3061 (NOTLB)
       b7bf9cc4 00200082 00000002 b7bf9cb0 b7bf9cac 00000000 7840d6e3 785cde80
       00000007 00000000 89333550 785143e0 abd783fe 0001ff61 00002d5e 8933365c
       7a01a980 c053d840 219905ed 00000003 00000000 00000000 b7bf9cd4 2199064e
Call Trace:
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812cada>] process_timeout+0x0/0x5
 [<7840b0ce>] io_schedule_timeout+0x1e/0x28
 [<78151eca>] congestion_wait+0x50/0x64
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7814dd9f>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<7814a483>] generic_file_buffered_write+0x4ee/0x605
 [<78128f60>] current_fs_time+0x41/0x46
 [<7814aa1a>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7814aad1>] generic_file_aio_write+0x58/0xb6
 [<7813f8ca>] __lock_acquire+0xaf0/0xb84
 [<7814ce3d>] get_page_from_freelist+0x293/0x35e
 [<78199bb4>] ext3_file_write+0x24/0x92
 [<78164133>] do_sync_write+0xc7/0x10a
 [<78153e4d>] __handle_mm_fault+0x451/0x83b
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<781197d3>] do_page_fault+0x278/0x535
 [<7816406c>] do_sync_write+0x0/0x10a
 [<78164896>] vfs_write+0x8a/0x10c
 [<7811955b>] do_page_fault+0x0/0x535
 [<78164e0f>] sys_write+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
mkpir         D 00000000     0  8297   3061 (NOTLB)
       9c507cc4 00000082 00000000 00000000 7813e7a9 9c505590 7840d6e3 7a1d2000
       00000007 00000001 9c505590 f7cb9510 abd79872 0001ff61 00002bf0 9c50569c
       7a022980 f68e9340 7a1d2000 7812ccc1 00000000 00000296 9c507cd4 2199064e
Call Trace:
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812ccc1>] __mod_timer+0x92/0x9c
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812cada>] process_timeout+0x0/0x5
 [<7840b0ce>] io_schedule_timeout+0x1e/0x28
 [<78151eca>] congestion_wait+0x50/0x64
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7814dd9f>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<7814a483>] generic_file_buffered_write+0x4ee/0x605
 [<781a8658>] journal_stop+0x1ce/0x1da
 [<78128f60>] current_fs_time+0x41/0x46
 [<7814aa1a>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7814aad1>] generic_file_aio_write+0x58/0xb6
 [<78148be7>] file_read_actor+0x0/0xde
 [<78199bb4>] ext3_file_write+0x24/0x92
 [<78164133>] do_sync_write+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<78165177>] __fput+0x10a/0x134
 [<7816406c>] do_sync_write+0x0/0x10a
 [<78164896>] vfs_write+0x8a/0x10c
 [<78164e0f>] sys_write+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
screen        S 8DBAFF98     0  8364   3866 (NOTLB)
       8dbaffac 00000086 00000002 8dbaff98 8dbaff94 00000000 00000000 7812850a
       00000007 00000000 f7c6c070 785143e0 455dd02b 0001ff61 00005ab9 f7c6c17c
       7a01a980 e913ad40 2198ff2e 00000003 00000000 00000000 ffffffff 00000000
Call Trace:
 [<7812850a>] do_setitimer+0x16e/0x30b
 [<7812f315>] sys_pause+0x11/0x17
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================
screen        D 00000000     0  8365   8364 (NOTLB)
       eca0fcc4 00000082 00000000 00000000 7813e7a9 f7cb14d0 7840d6e3 7a1d2000
       00000007 00000001 f7cb14d0 9c505590 abd76c82 0001ff61 00003101 f7cb15dc
       7a022980 d3e4acc0 7a1d2000 7812ccc1 00000000 00000296 eca0fcd4 2199064e
Call Trace:
 [<7813e7a9>] mark_held_locks+0x39/0x53
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812ccc1>] __mod_timer+0x92/0x9c
 [<7840b6c4>] schedule_timeout+0x70/0x8d
 [<7840d6e3>] _spin_unlock_irqrestore+0x34/0x58
 [<7812cada>] process_timeout+0x0/0x5
 [<7840b0ce>] io_schedule_timeout+0x1e/0x28
 [<78151eca>] congestion_wait+0x50/0x64
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7814dd9f>] balance_dirty_pages_ratelimited_nr+0x16e/0x1dc
 [<7814a483>] generic_file_buffered_write+0x4ee/0x605
 [<7814a8cf>] __generic_file_aio_write_nolock+0x335/0x4df
 [<78128f60>] current_fs_time+0x41/0x46
 [<7814aa1a>] __generic_file_aio_write_nolock+0x480/0x4df
 [<7813e99c>] trace_hardirqs_on+0x122/0x14c
 [<7814aad1>] generic_file_aio_write+0x58/0xb6
 [<78199bb4>] ext3_file_write+0x24/0x92
 [<78164133>] do_sync_write+0xc7/0x10a
 [<7813518d>] autoremove_wake_function+0x0/0x35
 [<7840c019>] __mutex_lock_slowpath+0x25f/0x267
 [<7816406c>] do_sync_write+0x0/0x10a
 [<78164896>] vfs_write+0x8a/0x10c
 [<78164e0f>] sys_write+0x41/0x67
 [<78103d96>] sysenter_past_esp+0x5f/0x99
 =======================


    Are any IO errors occurring at all?

No.
Comment 8 Natalie Protasevich 2007-11-20 00:00:37 UTC
(Krzysztof, can you please make attachments instead? It is quite impossible to do anything useful with these huge inline traces. Thanks!)
Comment 9 Krzysztof Oledzki 2007-11-20 03:28:32 UTC

> ------- Comment #8 from protasnb@gmail.com  2007-11-20 00:00 -------
> (Krzysztof, can you please make attachments instead? It is quite impossible
> to
> do anything useful with this huge inline traces.

Of course! I am also going to put 2.6.23.8 there and check if it helps.

Best regards,

 				Krzysztof Ol
Comment 10 Krzysztof Oledzki 2007-12-02 06:27:11 UTC
Created attachment 13824 [details]
sysrq+M in 2.6.23.9
Comment 11 Krzysztof Oledzki 2007-12-02 06:27:39 UTC
Created attachment 13825 [details]
sysrq+D in 2.6.23.9
Comment 12 Krzysztof Oledzki 2007-12-02 06:27:55 UTC
Created attachment 13826 [details]
sysrq+P in 2.6.23.9
Comment 13 Krzysztof Oledzki 2007-12-02 06:28:21 UTC
Created attachment 13827 [details]
sysrq+T in 2.6.23.9
Comment 14 Krzysztof Oledzki 2007-12-02 06:31:09 UTC
Created attachment 13828 [details]
oops taken from a IP-KVM
Comment 15 Krzysztof Oledzki 2007-12-02 06:54:57 UTC
OK. Something similar just happened on 2.6.23.9, but it was somehow different. This time the system recovered while I was collecting all of the above sysrqs.

I also found in my logs an "nfs: server owl not responding, still trying" message which I'm nearly sure never appeared before. I'm not certain, but it seems this recovery happened shortly after the second message about nfs. However, finally I was able to kill all processes using the nfs share and umount & mount it again.

I was so surprised that the system had started working that I retried "echo d > sysrq-trigger", and then everything died as the system oopsed (oops also attached).
Comment 16 Krzysztof Oledzki 2007-12-03 05:06:37 UTC

---------- Forwarded message ----------
Date: Mon, 3 Dec 2007 09:36:24 +0100
From: Thomas Osterried <osterried@jesse.de>
To: Krzysztof Oledzki <olel@ans.pl>, Nick Piggin <nickpiggin@yahoo.com.au>,
     Krzysztof Oledzki <olel@ans.pl>
Cc: Andrew Morton <akpm@linux-foundation.org>,
     Peter Zijlstra <peterz@infradead.org>
Subject: Re: [#JJ150720] Re: Strange system hangs

Hello Krzysztof,

thank you for your Cc.

On the machine which has troubles, the bug occurred within about 10 days.
During these days, the amount of dirty pages increased, up to 400MB.
I have tested kernels 2.6.19, 2.6.20, 2.6.22.1 and 2.6.22.10 (with our config),
and even linux-2.6.20 from ubuntu-server. They have all shown that behaviour.

Kernel 2.6.22.10 had even more troubles than the others:
after 10 days, when the bug occurred with this kernel, running processes went
to 'D' (while editing grub/menu.lst, and while typing in bash in a screen
window).

With my patch for checking whether process accounting is on (and to which file it
logs), I found out that the program atop(8) had enabled process accounting.
Process accounting was responsible for processes not terminating:
during sys_exit(), the accounting data is written to that file,
but the write blocks due to the deadlock in balance_dirty_pages_ratelimited_nr().
This in turn meant I was not able to open a new ssh connection
to the machine that was in trouble.
Generally, the stall in balance_dirty_pages_ratelimited_nr() caused services to
stop working properly, due to the blocked filesystem operations.

10 days ago, I installed kernel 2.6.18.5 on this machine (with backported
3ware controller code). I'm quite sure that this kernel now fixes our
severe stability problems on this production machine (currently:
Dirty: 472 kB, nr_dirty 118).
If so, it's the latest kernel I found usable, after half a year of pain.

Regards,
   - Thomas Osterried

Am Sonntag, 2. Dezember 2007 schrieb Krzysztof Oledzki:
>
> On Sat, 29 Sep 2007, Nick Piggin wrote:
>
>> On Friday 28 September 2007 18:42, Krzysztof Oledzki wrote:
>>> Hello,
>>>
>>> I am experiencing weird system hangs. Once about 2-5 weeks system freezes
>>> and stops accepting remote connections, so it is no longer possible to
>>> connect to most important services: smtp (postfix), www (squid) or even
>>> ssh. Such connection is accepted but then it hangs.
>>>
>>> What is strange, that previously established ssh session is usable. It is
>>> possible to work on such system until you do something stupid like "less
>>> /var/log/all.log". Using strace I found that process blocks on:
>>
>> Is this a regression? If so, what's the most recent kernel that didn't show
>> the problem?
>>
>> The symptoms could be consistent with some place doing a
>> balance_dirty_pages while holding a lock that is required for IO, but I
>> can't
>> see a smoking gun (you've got contention on i_mutex, but that should be
>> OK).
>>
>> Can you see if there is any memory under writeback that isn't being
>> completed (sysrq+M), also a list the locks held after the hang might be
>> helpful (compile in lockdep and sysrq+D)
>>
>> Is anything currently running? (sysrq+P and even a full sysrq+T task list
>> could be useful).
>>
>> Are any IO errors occurring at all?
>
> It seems that 2.6.23.x still fails but somehow different. I updated my
> bugreport at: http://bugzilla.kernel.org/show_bug.cgi?id=9182. There are
> new attachments with traces and an oops that happened while I was taking
> the debugging data.
>
> Thank you.
>
> Best regards,
>
>
>                       Krzysztof Ol
Comment 17 Krzysztof Oledzki 2007-12-05 05:54:33 UTC
Created attachment 13864 [details]
grep ^Dirty: /proc/meminfo in 1kb units
Comment 18 Krzysztof Oledzki 2007-12-05 06:09:49 UTC

On Wed, 5 Dec 2007, bugme-daemon@bugzilla.kernel.org wrote:

> http://bugzilla.kernel.org/show_bug.cgi?id=9182
>
>
> olel@ans.pl changed:
>
>           What    |Removed                     |Added
> ----------------------------------------------------------------------------
>          Component|Other                       |Other
>      KernelVersion|2.6.22-stable/2.6.23-stable |2.6.20-stable/2.6.22-
>                   |                            |stable/2.6.23-stable
>            Product|IO/Storage                  |Memory Management
>         Regression|0                           |1
>            Summary|Strange system hangs        |Critical memory leak (dirty
>                   |                            |pages)
>

After an additional hint from Thomas Osterried I can confirm that the problem
I have been dealing with for half a year comes from a continuous increase
in dirty pages:

http://bugzilla.kernel.org/attachment.cgi?id=13864&action=view (in 1 KB
units)

So, after two days of uptime I have ~140MB of dirty pages, and that
explains why my system crashes every 2-3 weeks.

Best regards,


 				Krzysztof Ol
Comment 19 Natalie Protasevich 2007-12-05 06:24:34 UTC
Thanks, Krzysztof. This is very compelling indeed.
Urgh, several processes are in congestion_wait, like one below:

sadc          D 00000000     0 26860      1
       a9e99cc4 00000082 00000000 00000000 7813e959 94927350 78415531 78625e80
       188758ce 7813eb4c 94927350 94927494 7a065e00 00000000 78625e80 ea913840
       7841553d a9e99cd4 78625e80 7812c35d 00000000 00000282 a9e99cd4 188758ce
Call Trace:
 [<7813e959>] mark_held_locks+0x39/0x53
 [<78415531>] _spin_unlock_irqrestore+0x34/0x58
 [<7813eb4c>] trace_hardirqs_on+0x122/0x14c
 [<7841553d>] _spin_unlock_irqrestore+0x40/0x58
 [<7812c35d>] __mod_timer+0x92/0x9c
 [<78413259>] schedule_timeout+0x70/0x8d
 [<78415531>] _spin_unlock_irqrestore+0x34/0x58
 [<7812c176>] process_timeout+0x0/0x5
 [<78412c63>] io_schedule_timeout+0x1e/0x28
 [<78151cda>] congestion_wait+0x50/0x64
 [<781348ee>] autoremove_wake_function+0x0/0x35
 [<7814da13>] balance_dirty_pages_ratelimited_nr+0x171/0x1df
 [<78149f22>] generic_file_buffered_write+0x505/0x61c
 [<7814a382>] __generic_file_aio_write_nolock+0x349/0x4f3
 [<7817bd64>] __mark_inode_dirty+0xc9/0x155
 [<7814a4cd>] __generic_file_aio_write_nolock+0x494/0x4f3
 [<78413c23>] __mutex_lock_slowpath+0x296/0x29e
 [<7814a584>] generic_file_aio_write+0x58/0xb6
 [<7814c704>] free_hot_cold_page+0x121/0x14d
 [<7819a4b4>] ext3_file_write+0x24/0x92
 [<78163f1b>] do_sync_write+0xc7/0x10a
 [<7814e7fe>] release_pages+0x123/0x12b
 [<781348ee>] autoremove_wake_function+0x0/0x35
 [<7813e959>] mark_held_locks+0x39/0x53
 [<78160cae>] kmem_cache_free+0x9a/0xa1
 [<78163e54>] do_sync_write+0x0/0x10a
 [<78164694>] vfs_write+0x8a/0x10c
 [<78164c0d>] sys_write+0x41/0x67
 [<78103e62>] sysenter_past_esp+0x5f/0x99
Comment 20 Andrew Morton 2007-12-05 13:37:49 UTC
Please monitor the "Dirty:" record in /proc/meminfo.  Is it slowly rising
and never falling?

Does it then fall if you run /bin/sync?

Compile up usemem.c from http://www.zip.com.au/~akpm/linux/patches/stuff/ext3-tools.tar.gz and run

usemem -m <N>

where N is the number of megabytes which that machine has.  Did this cause
/proc/meminfo:Dirty to fall?

Which types of filesystem are in use?

You might find that 2.6.24-rc4 has fixed all this.

Thanks.
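A minimal sketch of the monitoring asked for above (illustrative only, not part of the original report): take timestamped samples of the Dirty: line so a leak shows up as a monotonic rise across syncs. The `monitor_dirty` helper name and its arguments are assumptions introduced here for illustration.

```shell
#!/bin/sh
# monitor_dirty N INTERVAL [FILE]: print N timestamped samples of the
# Dirty: line from FILE (default /proc/meminfo), INTERVAL seconds apart.
monitor_dirty() {
    n="$1"; interval="$2"; file="${3:-/proc/meminfo}"
    i=0
    while [ "$i" -lt "$n" ]; do
        printf '%s %s\n' "$(date '+%F %T')" "$(grep '^Dirty:' "$file")"
        i=$((i + 1))
        # Sleep only between samples, not after the last one.
        [ "$i" -lt "$n" ] && sleep "$interval"
    done
}

# One quick sample; for leak hunting run e.g.: monitor_dirty 120 60 >> dirty.log
monitor_dirty 1 0
```

On a leaking system the sampled value keeps climbing even after repeated /bin/sync; on a healthy one it falls back toward zero.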
Comment 21 Krzysztof Oledzki 2007-12-05 13:53:58 UTC

On Wed, 5 Dec 2007, bugme-daemon@bugzilla.kernel.org wrote:

> http://bugzilla.kernel.org/show_bug.cgi?id=9182
>
>
> ------- Comment #20 from akpm@osdl.org  2007-12-05 13:37 -------
> Please monitor the "Dirty:" record in /proc/meminfo.  Is it slowly rising
> and never falling?

It is slowly rising, aside from small fluctuations caused by the
current load.

> Does it then fall if you run /bin/sync?
Only a little, by ~1-2MB, like in a normal system. But it is not able to
fall below a local minimum: after the first sync it does not fall further
with additional syncs.

> Compile up usemem.c from
> http://www.zip.com.au/~akpm/linux/patches/stuff/ext3-tools.tar.gz and run
>
> usemem -m <N>
>
> where N is the number of megabytes which that machine has.

It has 2GB but:

# ./usemem -m 1662 ; echo $?
0

# ./usemem -m 1663 ; echo $?
./usemem: mmap failed: Cannot allocate memory
1

>  Did this cause /proc/meminfo:Dirty to fall?
No.

> Which types of filesystem are in use?
ext3 on raid1 and ext3 on LVM on MD (raid5)

> You might find that 2.6.24-rc4 has fixed all this.
I'm not brave enough to use 2.6.24-rc4 here yet. :(

Best regards,

 				Krzysztof Ol
Comment 22 Krzysztof Oledzki 2007-12-11 07:08:40 UTC
Created attachment 13973 [details]
grep ^Dirty: /proc/meminfo on 2.6.24-rc4-git7 in 1kb units
Comment 23 Krzysztof Oledzki 2007-12-11 07:10:16 UTC

On Wed, 5 Dec 2007, Krzysztof Oledzki wrote:

>
> <CUT>
>
>> You might find that 2.6.24-rc4 has fixed all this.
> I'm not brave enough to use 2.6.24-rc4 here yet. :(

2.6.24-rc4-git7 did *not* solve this problem.

http://bugzilla.kernel.org/attachment.cgi?id=13973&action=view

Best regards,


 				Krzysztof Ol
Comment 24 Krzysztof Oledzki 2007-12-11 09:50:18 UTC

On Wed, 5 Dec 2007, Krzysztof Oledzki wrote:

>
> <CUT>
>
>>  Did this cause /proc/meminfo:Dirty to fall?
> No.

OK, I booted a kernel without the 2:2 memsplit, but instead with the standard
3.1:0.9 one and even without highmem. So now I have ~900MB and I am able to
set -m to the number of megabytes which the machine has. However, usemem
still does not cause dirty memory usage to fall. :(

Best regards,


 				Krzysztof Ol
Comment 25 Krzysztof Oledzki 2007-12-12 05:29:04 UTC

On Tue, 11 Dec 2007, Krzysztof Oledzki wrote:

>
> <CUT>
>
>
> OK, I booted a kernel without the 2:2 memsplit, but instead with the standard
> 3.1:0.9 one and even without highmem. So now I have ~900MB and I am able to set
> -m to the number of megabytes which the machine has. However, usemem still
> does not cause dirty memory usage to fall. :(

OK, I can confirm that this is a regression from 2.6.18 where it works OK:

ole@cougar:~$ uname -r
2.6.18.8

ole@cougar:~$ uptime;grep Dirt /proc/meminfo;sync;sleep 2;sync;sleep 1;sync;grep Dirt /proc/meminfo
  14:21:53 up  1:00,  1 user,  load average: 0.23, 0.36, 0.35
Dirty:             376 kB
Dirty:               0 kB

It seems that this leak also exists on my other systems, as even after many
syncs the number of dirty pages is still >> 0, but this is the only one where
it is so critical and, at the same time, so easy to reproduce.

Best regards,


 				Krzysztof Ol
Comment 26 Krzysztof Oledzki 2007-12-13 07:19:05 UTC
On Mon, 3 Dec 2007, Thomas Osterried wrote:

> On the machine which has troubles, the bug occurred within about 10 days.
> During these days, the amount of dirty pages increased, up to 400MB.
> I have tested kernels 2.6.19, 2.6.20, 2.6.22.1 and 2.6.22.10 (with our
> config),
> and even linux-2.6.20 from ubuntu-server. They have all shown that behaviour.

<CUT>

> 10 days ago, I installed kernel 2.6.18.5 on this machine (with backported
> 3ware controller code). I'm quite sure that this kernel now fixes our
> severe stability problems on this production machine (currently:
> Dirty: 472 kB, nr_dirty 118).
> If so, it's the latest kernel I found usable, after half a year of
> pain.

Strange, my tests show that both 2.6.18(.8) and 2.6.19(.7) are OK, and the first
bad kernel is 2.6.20.

Best regards,

 				Krzysztof Ol
Comment 27 Krzysztof Oledzki 2007-12-13 08:17:25 UTC

On Thu, 13 Dec 2007, Peter Zijlstra wrote:

>
> On Thu, 2007-12-13 at 16:17 +0100, Krzysztof Oledzki wrote:
>>
>
>> BTW: Could someone please look at this problem? I feel little ignored and
>> in my situation this is a critical regression.
>
> I was hoping to get around to it today, but I guess tomorrow will have
> to do :-/

Thanks.

> So, it's ext3, dirty some pages, sync, and dirty doesn't fall to 0,
> right?

Not only does it not fall, it continuously grows.

> Does it happen with other filesystems as well?

Don't know. I generally only use ext3, and I'm afraid I'm not able to
switch this system to another filesystem.

> What are your ext3 mount options?
/dev/root / ext3 rw,data=journal 0 0
/dev/VolGrp0/usr /usr ext3 rw,nodev,data=journal 0 0
/dev/VolGrp0/var /var ext3 rw,nodev,data=journal 0 0
/dev/VolGrp0/squid_spool /var/cache/squid/cd0 ext3 rw,nosuid,nodev,noatime,data=writeback 0 0
/dev/VolGrp0/squid_spool2 /var/cache/squid/cd1 ext3 rw,nosuid,nodev,noatime,data=writeback 0 0
/dev/VolGrp0/news_spool /var/spool/news ext3 rw,nosuid,nodev,noatime,data=ordered 0 0
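The effective data= journaling mode per ext3 mount can be listed with a small awk sketch (illustrative, not from the thread; ext3 defaults to data=ordered when no explicit data= option appears):

```shell
#!/bin/sh
# ext3_modes FILE: print each ext3 mount point from a /proc/mounts-style
# table together with its data= journaling mode (journal/ordered/writeback).
ext3_modes() {
    awk '$3 == "ext3" {
        mode = "ordered"                       # ext3 default when data= absent
        n = split($4, opts, ",")
        for (i = 1; i <= n; i++)
            if (index(opts[i], "data=") == 1)
                mode = substr(opts[i], 6)
        printf "%-40s data=%s\n", $2, mode
    }' "$1"
}

# Show the live system, when /proc/mounts is available.
[ -r /proc/mounts ] && ext3_modes /proc/mounts
```

This makes it easy to spot which filesystems run in the data=journal mode implicated later in the thread.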

Best regards,

 				Krzysztof Ol
Comment 28 Anonymous Emailer 2007-12-13 23:56:00 UTC
Reply-To: osterried@jesse.de

Hello,

> > BTW: Could someone please look at this problem? I feel little ignored and 
> > in my situation this is a critical regression.
> 
> I was hoping to get around to it today, but I guess tomorrow will have
> to do :-/
> 
> So, it's ext3, dirty some pages, sync, and dirty doesn't fall to 0,
> right?
> 
> Does it happen with other filesystems as well?
> 
> What are you ext3 mount options?

I had described my problem in detail in the thread "Strange system hangs",
  http://marc.info/?l=linux-kernel&m=119400497503209&w=2

In the meantime I found out that kernel 2.6.18.5 is the latest kernel working
for me. Some machines had never or only seldom shown the unusability error;
other machines showed the symptom within about 10 days.

rsync seems to be a catalyst for the problem, but not the (only) cause.
If you have a machine where the error occurs often and one where it never
happened (with the same kernel), and completely exchange the hardware,
the error still occurs on the system where it occurred often.

The problem of non-terminating processes was caused by the program "atop"
(which I installed in order to debug this problem). atop enabled process
accounting, and the accounting data would have been written to a file during
sys_exit() - but the write of the accounting data stalled in
balance_dirty_pages_ratelimited_nr().
My extension in Appendix II (sys_acct() diagnostic) helped me find this out. I'd
be glad if this patch could go into the kernel, because it helps with diagnosing
these kinds of side effects.

On the question Krzysztof raised, whether 2.6.18.5 and not 2.6.19 really is the
latest kernel not showing this bug: I thought I was sure. But yesterday I dug
into the logs ($MAIL from when the first hang was complained about, and a script
that logged the kernel version on that day), and obviously it was 2.6.20 on the
day of the first report. I may have guessed wrong, because the machine was in
test for quite a long time before it went into production, and maybe in that
time we upgraded the kernel from 2.6.19 (the kernel we shipped it with).

Regards,
	- Thomas Osterried
Comment 29 Krzysztof Oledzki 2007-12-15 04:34:09 UTC

On Thu, 13 Dec 2007, Krzysztof Oledzki wrote:

>
>
> On Thu, 13 Dec 2007, Peter Zijlstra wrote:
>
>> 
>> On Thu, 2007-12-13 at 16:17 +0100, Krzysztof Oledzki wrote:
>>> 
>> 
>>> BTW: Could someone please look at this problem? I feel little ignored and
>>> in my situation this is a critical regression.
>> 
>> I was hoping to get around to it today, but I guess tomorrow will have
>> to do :-/
>
> Thanks.
>
>> So, its ext3, dirty some pages, sync, and dirty doesn't fall to 0,
>> right?
>
> Not only doesn't fall but continuously grows.
>
>> Does it happen with other filesystems as well?
>
> Don't know. I generally only use ext3 and I'm afraid I'm not able to switch 
> this system to other filesystem.
>
>> What are you ext3 mount options?
> /dev/root / ext3 rw,data=journal 0 0
> /dev/VolGrp0/usr /usr ext3 rw,nodev,data=journal 0 0
> /dev/VolGrp0/var /var ext3 rw,nodev,data=journal 0 0
> /dev/VolGrp0/squid_spool /var/cache/squid/cd0 ext3 
> rw,nosuid,nodev,noatime,data=writeback 0 0
> /dev/VolGrp0/squid_spool2 /var/cache/squid/cd1 ext3 
> rw,nosuid,nodev,noatime,data=writeback 0 0
> /dev/VolGrp0/news_spool /var/spool/news ext3 
> rw,nosuid,nodev,noatime,data=ordered 0 0

BTW: this regression also exists in 2.6.24-rc5. I'll try to find when it 
was introduced, but that is hard to do on a highly critical production 
system, especially since after a reboot it takes ~2h to be sure.

However, 2h is quite good time, on other systems I have to wait ~2 months 
to get 20MB of leaked memory:

# uptime
  13:29:34 up 58 days, 13:04,  9 users,  load average: 0.38, 0.27, 0.31

# sync;sync;sleep 1;sync;grep Dirt /proc/meminfo
Dirty:           23820 kB

Best regards,

 				Krzysztof Ol
Comment 30 Krzysztof Oledzki 2007-12-15 13:21:42 UTC
Created attachment 14057 [details]
grep ^Dirty: /proc/meminfo on 2.6.20-rc4 in 1kb units
Comment 31 Krzysztof Oledzki 2007-12-15 13:22:05 UTC
Created attachment 14058 [details]
grep ^Dirty: /proc/meminfo on 2.6.20-rc2 in 1kb units
Comment 32 Krzysztof Oledzki 2007-12-15 13:53:46 UTC
http://bugzilla.kernel.org/show_bug.cgi?id=9182


On Sat, 15 Dec 2007, Krzysztof Oledzki wrote:

>
>
> On Thu, 13 Dec 2007, Krzysztof Oledzki wrote:
>
>> 
>> 
>> On Thu, 13 Dec 2007, Peter Zijlstra wrote:
>> 
>>> 
>>> On Thu, 2007-12-13 at 16:17 +0100, Krzysztof Oledzki wrote:
>>>> 
>>> 
>>>> BTW: Could someone please look at this problem? I feel little ignored and
>>>> in my situation this is a critical regression.
>>> 
>>> I was hoping to get around to it today, but I guess tomorrow will have
>>> to do :-/
>> 
>> Thanks.
>> 
>>> So, its ext3, dirty some pages, sync, and dirty doesn't fall to 0,
>>> right?
>> 
>> Not only doesn't fall but continuously grows.
>> 
>>> Does it happen with other filesystems as well?
>> 
>> Don't know. I generally only use ext3 and I'm afraid I'm not able to switch 
>> this system to other filesystem.
>> 
>>> What are you ext3 mount options?
>> /dev/root / ext3 rw,data=journal 0 0
>> /dev/VolGrp0/usr /usr ext3 rw,nodev,data=journal 0 0
>> /dev/VolGrp0/var /var ext3 rw,nodev,data=journal 0 0
>> /dev/VolGrp0/squid_spool /var/cache/squid/cd0 ext3 
>> rw,nosuid,nodev,noatime,data=writeback 0 0
>> /dev/VolGrp0/squid_spool2 /var/cache/squid/cd1 ext3 
>> rw,nosuid,nodev,noatime,data=writeback 0 0
>> /dev/VolGrp0/news_spool /var/spool/news ext3 
>> rw,nosuid,nodev,noatime,data=ordered 0 0
>
> BTW: this regression also exists in 2.6.24-rc5. I'll try to find when it was 
> introduced but it is hard to do it on a highly critical production system, 
> especially since it takes ~2h after a reboot, to be sure.
>
> However, 2h is quite good time, on other systems I have to wait ~2 months to 
> get 20MB of leaked memory:
>
> # uptime
> 13:29:34 up 58 days, 13:04,  9 users,  load average: 0.38, 0.27, 0.31
>
> # sync;sync;sleep 1;sync;grep Dirt /proc/meminfo
> Dirty:           23820 kB

More news. I hope this time my problem gets more attention from developers 
since now I have much more information.

So far I found that:
  - 2.6.20-rc4 - bad: http://bugzilla.kernel.org/attachment.cgi?id=14057
  - 2.6.20-rc2 - bad: http://bugzilla.kernel.org/attachment.cgi?id=14058
  - 2.6.20-rc1 - OK (probably, I need to wait little more to be 100% sure).

2.6.20-rc1 with 33m uptime:
~$ grep Dirt /proc/meminfo ;sync ; sleep 1 ; sync ; grep Dirt /proc/meminfo
Dirty:           10504 kB
Dirty:               0 kB

2.6.20-rc2 was released Dec 23/24 2006 (BAD)
2.6.20-rc1 was released Dec 13/14 2006 (GOOD?)

It seems that this bug was introduced exactly one year ago. Surprisingly, 
dirty memory in 2.6.20-rc2/2.6.20-rc4 leaks _much_ faster than in 
2.6.20-final and later kernels, as it took only about 6h to reach 172MB. 
So this bug may have been partially cured afterward, but only a little.

There are three commits that may be somehow related:

http://git.kernel.org/?p=linux/kernel/git/stable/linux-2.6.20.y.git;a=commitdiff;h=fba2591bf4e418b6c3f9f8794c9dd8fe40ae7bd9
http://git.kernel.org/?p=linux/kernel/git/stable/linux-2.6.20.y.git;a=commitdiff;h=3e67c0987d7567ad666641164a153dca9a43b11d
http://git.kernel.org/?p=linux/kernel/git/stable/linux-2.6.20.y.git;a=commitdiff;h=5f2a105d5e33a038a717995d2738434f9c25aed2

I'm going to check the 2.6.20-rc1-git... releases, but it would be *very* nice 
if someone could finally give me a hand and offer some hints to help with 
debugging this problem.

Please note that none of my systems with kernels >= 2.6.20-rc1 is able to 
reach 0 kB of dirty memory, even after many syncs, even when idle.
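The manual check above (sync a few times, then see whether Dirty falls back to 0) can be wrapped in a small pass/fail probe, e.g. when testing candidate kernels. This is only a sketch assuming a standard Linux /proc layout; the function name and the overridable meminfo path are made up for illustration:

```shell
# Return success (0) if Dirty drops to zero after repeated syncs,
# failure (1) otherwise -- usable as a pass/fail probe between reboots.
check_dirty_leak() {
    meminfo=${1:-/proc/meminfo}      # path overridable for testing
    sync; sleep 1; sync; sleep 1; sync
    dirty=$(awk '/^Dirty:/ {print $2}' "$meminfo")
    echo "Dirty after sync: ${dirty} kB"
    [ "$dirty" -eq 0 ]
}
```

On an affected kernel the probe keeps failing no matter how often it is run.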

Best regards,

 				Krzysztof Ol
Comment 33 Natalie Protasevich 2007-12-15 14:19:34 UTC
Krzysztof, I'd hate to point you down a hard (or at least time-consuming) path, but you've done a lot of digging by now anyway. How about git bisecting between 2.6.20-rc2 and rc1? Here is great info on bisecting:
http://www.kernel.org/doc/local/git-quick.html
Comment 34 Krzysztof Oledzki 2007-12-15 15:09:03 UTC

On Sat, 15 Dec 2007, bugme-daemon@bugzilla.kernel.org wrote:

> http://bugzilla.kernel.org/show_bug.cgi?id=9182
>
>
> ------- Comment #33 from protasnb@gmail.com  2007-12-15 14:19 -------
> Krzysztof, I'd hate point you to a hard path (at least time consuming), but
> you've done a lot of digging by now anyway. How about git bisecting between
> 2.6.20-rc2 and rc1? Here is great info on bisecting:
> http://www.kernel.org/doc/local/git-quick.html

As I'm smarter than git-bisect I can tell that 2.6.20-rc1-git8 is as bad 
as 2.6.20-rc2 but 2.6.20-rc1-git8 with one patch reverted seems to be OK. 
So it took me only 2 reboots. ;)

The guilty patch is the one I proposed just an hour ago:
  http://git.kernel.org/?p=linux%2Fkernel%2Fgit%2Fstable%2Flinux-2.6.20.y.git;a=commitdiff_plain;h=fba2591bf4e418b6c3f9f8794c9dd8fe40ae7bd9

So:
  - 2.6.20-rc1: OK
  - 2.6.20-rc1-git8 with fba2591bf4e418b6c3f9f8794c9dd8fe40ae7bd9 reverted: OK
  - 2.6.20-rc1-git8: very BAD
  - 2.6.20-rc2: very BAD
  - 2.6.20-rc4: very BAD
  - >= 2.6.20: BAD (but not *very* BAD!)

Best regards,

 				Krzysztof Ol
Comment 35 Krzysztof Oledzki 2007-12-15 15:42:12 UTC
Just found this:
 http://lkml.org/lkml/2007/8/1/469

Can it be related to my problem?
Comment 36 Anonymous Emailer 2007-12-15 20:36:50 UTC
Reply-To: akpm@linux-foundation.org

On Sun, 16 Dec 2007 00:08:52 +0100 (CET) Krzysztof Oledzki <olel@ans.pl> wrote:

> 
> 
> On Sat, 15 Dec 2007, bugme-daemon@bugzilla.kernel.org wrote:
> 
> > http://bugzilla.kernel.org/show_bug.cgi?id=9182
> >
> >
> > ------- Comment #33 from protasnb@gmail.com  2007-12-15 14:19 -------
> > Krzysztof, I'd hate point you to a hard path (at least time consuming), but
> > you've done a lot of digging by now anyway. How about git bisecting between
> > 2.6.20-rc2 and rc1? Here is great info on bisecting:
> > http://www.kernel.org/doc/local/git-quick.html
> 
> As I'm smarter than git-bistect I can tell that 2.6.20-rc1-git8 is as bad 
> as 2.6.20-rc2 but 2.6.20-rc1-git8 with one patch reverted seems to be OK. 
> So it took me only 2 reboots. ;)
> 
> The guilty patch is the one I proposed just an hour ago:
>  
>   http://git.kernel.org/?p=linux%2Fkernel%2Fgit%2Fstable%2Flinux-2.6.20.y.git;a=commitdiff_plain;h=fba2591bf4e418b6c3f9f8794c9dd8fe40ae7bd9
> 
> So:
>   - 2.6.20-rc1: OK
>   - 2.6.20-rc1-git8 with fba2591bf4e418b6c3f9f8794c9dd8fe40ae7bd9 reverted:
>   OK
>   - 2.6.20-rc1-git8: very BAD
>   - 2.6.20-rc2: very BAD
>   - 2.6.20-rc4: very BAD
>   - >= 2.6.20: BAD (but not *very* BAD!)
> 

well..  We have code which has been used by *everyone* for a year and it's
misbehaving for you alone.  I wonder what you're doing that is
different/special.

Which filesystem, which mount options, what sort of workload?

Thanks.
Comment 37 Krzysztof Oledzki 2007-12-16 01:33:34 UTC

On Sat, 15 Dec 2007, Andrew Morton wrote:

> On Sun, 16 Dec 2007 00:08:52 +0100 (CET) Krzysztof Oledzki <olel@ans.pl>
> wrote:
>
>>
>>
>> On Sat, 15 Dec 2007, bugme-daemon@bugzilla.kernel.org wrote:
>>
>>> http://bugzilla.kernel.org/show_bug.cgi?id=9182
>>>
>>>
>>> ------- Comment #33 from protasnb@gmail.com  2007-12-15 14:19 -------
>>> Krzysztof, I'd hate point you to a hard path (at least time consuming), but
>>> you've done a lot of digging by now anyway. How about git bisecting between
>>> 2.6.20-rc2 and rc1? Here is great info on bisecting:
>>> http://www.kernel.org/doc/local/git-quick.html
>>
>> As I'm smarter than git-bistect I can tell that 2.6.20-rc1-git8 is as bad
>> as 2.6.20-rc2 but 2.6.20-rc1-git8 with one patch reverted seems to be OK.
>> So it took me only 2 reboots. ;)
>>
>> The guilty patch is the one I proposed just an hour ago:
>>  
>>   http://git.kernel.org/?p=linux%2Fkernel%2Fgit%2Fstable%2Flinux-2.6.20.y.git;a=commitdiff_plain;h=fba2591bf4e418b6c3f9f8794c9dd8fe40ae7bd9
>>
>> So:
>>   - 2.6.20-rc1: OK
>>   - 2.6.20-rc1-git8 with fba2591bf4e418b6c3f9f8794c9dd8fe40ae7bd9 reverted:
>>   OK
>>   - 2.6.20-rc1-git8: very BAD
>>   - 2.6.20-rc2: very BAD
>>   - 2.6.20-rc4: very BAD
>>   - >= 2.6.20: BAD (but not *very* BAD!)
>>
>
> well..  We have code which has been used by *everyone* for a year and it's
> misbehaving for you alone.

No, not for me alone. Probably only Thomas Osterried and I have systems 
where it is so easy to reproduce. Please note that the problem exists on 
all my systems, but only on one is it critical. It is enough to run
"sync; sleep 1; sync; sleep 1; sync; grep Dirty /proc/meminfo" to be sure. 
With >=2.6.20-rc1-git8 it *never* falls to 0 on *all* my hosts, but only 
on one does it grow to ~200MB in about 2 weeks and then everything dies:
http://bugzilla.kernel.org/attachment.cgi?id=13824
http://bugzilla.kernel.org/attachment.cgi?id=13825
http://bugzilla.kernel.org/attachment.cgi?id=13826
http://bugzilla.kernel.org/attachment.cgi?id=13827

>  I wonder what you're doing that is different/special.
Me too. :|

> Which filesystem, which mount options

  - ext3 on RAID1 (MD): / - rootflags=data=journal
  - ext3 on LVM on RAID5 (MD)
  - nfs

/dev/md0 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec)
devpts on /dev/pts type devpts (rw,nosuid,noexec)
/dev/mapper/VolGrp0-usr on /usr type ext3 (rw,nodev,data=journal)
/dev/mapper/VolGrp0-var on /var type ext3 (rw,nodev,data=journal)
/dev/mapper/VolGrp0-squid_spool on /var/cache/squid/cd0 type ext3 (rw,nosuid,nodev,noatime,data=writeback)
/dev/mapper/VolGrp0-squid_spool2 on /var/cache/squid/cd1 type ext3 (rw,nosuid,nodev,noatime,data=writeback)
/dev/mapper/VolGrp0-news_spool on /var/spool/news type ext3 (rw,nosuid,nodev,noatime)
shm on /dev/shm type tmpfs (rw,noexec,nosuid,nodev)
usbfs on /proc/bus/usb type usbfs (rw,noexec,nosuid,devmode=0664,devgid=85)
owl:/usr/gentoo-nfs on /usr/gentoo-nfs type nfs (ro,nosuid,nodev,noatime,bg,intr,tcp,addr=192.168.129.26)


> what sort of workload?
Different, depending on the host: mail (postfix + amavisd + spamassassin + 
clamav + sqlgrey), squid, mysql, apache, nfs, rsync, etc. But it seems 
that the biggest problem is on the host running the mentioned mail service.

Thanks.

Best regards,

 				Krzysztof Ol
Comment 38 Anonymous Emailer 2007-12-16 01:52:03 UTC
Reply-To: akpm@linux-foundation.org

On Sun, 16 Dec 2007 10:33:20 +0100 (CET) Krzysztof Oledzki <olel@ans.pl> wrote:

> 
> 
> On Sat, 15 Dec 2007, Andrew Morton wrote:
> 
> > On Sun, 16 Dec 2007 00:08:52 +0100 (CET) Krzysztof Oledzki <olel@ans.pl>
> wrote:
> >
> >>
> >>
> >> On Sat, 15 Dec 2007, bugme-daemon@bugzilla.kernel.org wrote:
> >>
> >>> http://bugzilla.kernel.org/show_bug.cgi?id=9182
> >>>
> >>>
> >>> ------- Comment #33 from protasnb@gmail.com  2007-12-15 14:19 -------
> >>> Krzysztof, I'd hate point you to a hard path (at least time consuming),
> but
> >>> you've done a lot of digging by now anyway. How about git bisecting
> between
> >>> 2.6.20-rc2 and rc1? Here is great info on bisecting:
> >>> http://www.kernel.org/doc/local/git-quick.html
> >>
> >> As I'm smarter than git-bistect I can tell that 2.6.20-rc1-git8 is as bad
> >> as 2.6.20-rc2 but 2.6.20-rc1-git8 with one patch reverted seems to be OK.
> >> So it took me only 2 reboots. ;)
> >>
> >> The guilty patch is the one I proposed just an hour ago:
> >>  
> http://git.kernel.org/?p=linux%2Fkernel%2Fgit%2Fstable%2Flinux-2.6.20.y.git;a=commitdiff_plain;h=fba2591bf4e418b6c3f9f8794c9dd8fe40ae7bd9
> >>
> >> So:
> >>   - 2.6.20-rc1: OK
> >>   - 2.6.20-rc1-git8 with fba2591bf4e418b6c3f9f8794c9dd8fe40ae7bd9
> reverted: OK
> >>   - 2.6.20-rc1-git8: very BAD
> >>   - 2.6.20-rc2: very BAD
> >>   - 2.6.20-rc4: very BAD
> >>   - >= 2.6.20: BAD (but not *very* BAD!)
> >>
> >
> > well..  We have code which has been used by *everyone* for a year and it's
> > misbehaving for you alone.
> 
> No, not for me alone. Probably only I and Thomas Osterried have systems 
> where it is so easy to reproduce. Please note that the problem exists on 
> my all systems, but only on one it is critical. It is enough to run
> "sync; sleep 1; sunc; sleep 1; sync; grep Drirty /proc/meminfo" to be sure. 
> With =>2.6.20-rc1-git8 it *never* falls to 0 an *all* my hosts but only 
> on one it goes to ~200MB in about 2 weeks and then everything dies:
> http://bugzilla.kernel.org/attachment.cgi?id=13824
> http://bugzilla.kernel.org/attachment.cgi?id=13825
> http://bugzilla.kernel.org/attachment.cgi?id=13826
> http://bugzilla.kernel.org/attachment.cgi?id=13827
> 
> >  I wonder what you're doing that is different/special.
> Me to. :|
> 
> > Which filesystem, which mount options
> 
>   - ext3 on RAID1 (MD): / - rootflags=data=journal

It wouldn't surprise me if this is specific to data=journal: that
journalling mode is pretty complex wrt dirty-data handling and isn't well
tested.

Does switching that to data=writeback change things?

Thomas, do you have ext3 data=journal on any filesystems?

>   - ext3 on LVM on RAID5 (MD)
>   - nfs
> 
> /dev/md0 on / type ext3 (rw)
> proc on /proc type proc (rw)
> sysfs on /sys type sysfs (rw,nosuid,nodev,noexec)
> devpts on /dev/pts type devpts (rw,nosuid,noexec)
> /dev/mapper/VolGrp0-usr on /usr type ext3 (rw,nodev,data=journal)
> /dev/mapper/VolGrp0-var on /var type ext3 (rw,nodev,data=journal)
> /dev/mapper/VolGrp0-squid_spool on /var/cache/squid/cd0 type ext3
> (rw,nosuid,nodev,noatime,data=writeback)
> /dev/mapper/VolGrp0-squid_spool2 on /var/cache/squid/cd1 type ext3
> (rw,nosuid,nodev,noatime,data=writeback)
> /dev/mapper/VolGrp0-news_spool on /var/spool/news type ext3
> (rw,nosuid,nodev,noatime)
> shm on /dev/shm type tmpfs (rw,noexec,nosuid,nodev)
> usbfs on /proc/bus/usb type usbfs (rw,noexec,nosuid,devmode=0664,devgid=85)
> owl:/usr/gentoo-nfs on /usr/gentoo-nfs type nfs
> (ro,nosuid,nodev,noatime,bg,intr,tcp,addr=192.168.129.26)
> 
> 
> > what sort of workload?
> Different, depending on a host: mail (postfix + amavisd + spamassasin + 
> clamav + sqlgray), squid, mysql, apache, nfs, rsync, .... But it seems 
> that the biggest problem is on the host running mentioned mail service.
> 
Comment 39 Ingo Molnar 2007-12-16 01:58:33 UTC
> So:
>   - 2.6.20-rc1: OK
>   - 2.6.20-rc1-git8 with fba2591bf4e418b6c3f9f8794c9dd8fe40ae7bd9 reverted:
>   OK
>   - 2.6.20-rc1-git8: very BAD
>   - 2.6.20-rc2: very BAD
>   - 2.6.20-rc4: very BAD
>   - >= 2.6.20: BAD (but not *very* BAD!)

based on the great info you already acquired, you should be able to 
bisect this rather effectively, via:

2.6.20-rc1-git8 == 921320210bd2ec4f17053d283355b73048ac0e56

 $ git-bisect start
 $ git-bisect bad 921320210bd2ec4f17053d283355b73048ac0e56
 $ git-bisect good v2.6.20-rc1
 Bisecting: 133 revisions left to test after this

so about 7-8 bootups would pinpoint the breakage.

It would likely pinpoint fba2591b, so it would perhaps be best to first 
attempt a revert of fba2591b on a recent kernel.
Comment 40 Krzysztof Oledzki 2007-12-16 02:12:29 UTC

On Sun, 16 Dec 2007, bugme-daemon@bugzilla.kernel.org wrote:

> http://bugzilla.kernel.org/show_bug.cgi?id=9182
>
>
>
>
>
> ------- Comment #39 from mingo@elte.hu  2007-12-16 01:58 -------
>
>> So:
>>   - 2.6.20-rc1: OK
>>   - 2.6.20-rc1-git8 with fba2591bf4e418b6c3f9f8794c9dd8fe40ae7bd9 reverted:
>>   OK
>>   - 2.6.20-rc1-git8: very BAD
>>   - 2.6.20-rc2: very BAD
>>   - 2.6.20-rc4: very BAD
>>   - >= 2.6.20: BAD (but not *very* BAD!)
>
> based on the great info you already acquired, you should be able to
> bisect this rather effectively, via:
>
> 2.6.20-rc1-git8 == 921320210bd2ec4f17053d283355b73048ac0e56
>
> $ git-bisect start
> $ git-bisect bad 921320210bd2ec4f17053d283355b73048ac0e56
> $ git-bisect good v2.6.20-rc1
> Bisecting: 133 revisions left to test after this
>
> so about 7-8 bootups would pinpoint the breakage.

Except that I have very limited time where I can do my tests on this host. 
Please also note that it takes about ~2h after a reboot, to be 100% sure. 
So, 7-8 bootups => 14-16h. :|

> It would likely pinpoint fba2591b, so it would perhaps be best to first
> attempt a revert of fba2591b on a recent kernel.

I wish I could: :(

ole@cougar:/usr/src/linux-2.6.23.9$ cat ..p1 |patch -p1 --dry-run -R
patching file fs/hugetlbfs/inode.c
Hunk #1 succeeded at 203 (offset 27 lines).
patching file include/linux/page-flags.h
Hunk #1 succeeded at 262 (offset 9 lines).
patching file mm/page-writeback.c
Hunk #1 succeeded at 903 (offset 58 lines).
patching file mm/truncate.c
Unreversed patch detected!  Ignore -R? [n] y
Hunk #1 succeeded at 52 with fuzz 2 (offset 1 line).
Hunk #2 FAILED at 85.
Hunk #3 FAILED at 365.
Hunk #4 FAILED at 400.
3 out of 4 hunks FAILED -- saving rejects to file mm/truncate.c.rej

Best regards,

 				Krzysztof Ol
Comment 41 Krzysztof Oledzki 2007-12-16 05:47:25 UTC

On Sun, 16 Dec 2007, Andrew Morton wrote:

> On Sun, 16 Dec 2007 10:33:20 +0100 (CET) Krzysztof Oledzki <olel@ans.pl>
> wrote:
>
>>
>>
>> On Sat, 15 Dec 2007, Andrew Morton wrote:
>>
>>> On Sun, 16 Dec 2007 00:08:52 +0100 (CET) Krzysztof Oledzki <olel@ans.pl>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Sat, 15 Dec 2007, bugme-daemon@bugzilla.kernel.org wrote:
>>>>
>>>>> http://bugzilla.kernel.org/show_bug.cgi?id=9182
>>>>>
>>>>>
>>>>> ------- Comment #33 from protasnb@gmail.com  2007-12-15 14:19 -------
>>>>> Krzysztof, I'd hate point you to a hard path (at least time consuming),
>>>>> but
>>>>> you've done a lot of digging by now anyway. How about git bisecting
>>>>> between
>>>>> 2.6.20-rc2 and rc1? Here is great info on bisecting:
>>>>> http://www.kernel.org/doc/local/git-quick.html
>>>>
>>>> As I'm smarter than git-bistect I can tell that 2.6.20-rc1-git8 is as bad
>>>> as 2.6.20-rc2 but 2.6.20-rc1-git8 with one patch reverted seems to be OK.
>>>> So it took me only 2 reboots. ;)
>>>>
>>>> The guilty patch is the one I proposed just an hour ago:
>>>>  
>>>>   http://git.kernel.org/?p=linux%2Fkernel%2Fgit%2Fstable%2Flinux-2.6.20.y.git;a=commitdiff_plain;h=fba2591bf4e418b6c3f9f8794c9dd8fe40ae7bd9
>>>>
>>>> So:
>>>>   - 2.6.20-rc1: OK
>>>>   - 2.6.20-rc1-git8 with fba2591bf4e418b6c3f9f8794c9dd8fe40ae7bd9
>>>>   reverted: OK
>>>>   - 2.6.20-rc1-git8: very BAD
>>>>   - 2.6.20-rc2: very BAD
>>>>   - 2.6.20-rc4: very BAD
>>>>   - >= 2.6.20: BAD (but not *very* BAD!)
>>>>
>>>
>>> well..  We have code which has been used by *everyone* for a year and it's
>>> misbehaving for you alone.
>>
>> No, not for me alone. Probably only I and Thomas Osterried have systems
>> where it is so easy to reproduce. Please note that the problem exists on
>> my all systems, but only on one it is critical. It is enough to run
>> "sync; sleep 1; sunc; sleep 1; sync; grep Drirty /proc/meminfo" to be sure.
>> With =>2.6.20-rc1-git8 it *never* falls to 0 an *all* my hosts but only
>> on one it goes to ~200MB in about 2 weeks and then everything dies:
>> http://bugzilla.kernel.org/attachment.cgi?id=13824
>> http://bugzilla.kernel.org/attachment.cgi?id=13825
>> http://bugzilla.kernel.org/attachment.cgi?id=13826
>> http://bugzilla.kernel.org/attachment.cgi?id=13827
>>
>>>  I wonder what you're doing that is different/special.
>> Me to. :|
>>
>>> Which filesystem, which mount options
>>
>>   - ext3 on RAID1 (MD): / - rootflags=data=journal
>
> It wouldn't surprise me if this is specific to data=journal: that
> journalling mode is pretty complex wrt dairty-data handling and isn't well
> tested.
>
> Does switching that to data=writeback change things?

I'll confirm this tomorrow, but it seems that even switching to 
data=ordered (AFAIK the default on ext3) is indeed enough to cure this problem.

Two questions remain then: why does the system die when dirty reaches ~200MB, 
and what is wrong with ext3+data=journal with >=2.6.20-rc2?

Best regards,

 				Krzysztof Ol
Comment 42 Anonymous Emailer 2007-12-16 13:52:00 UTC
Reply-To: akpm@linux-foundation.org

On Sun, 16 Dec 2007 14:46:36 +0100 (CET) Krzysztof Oledzki <olel@ans.pl> wrote:

> >>> Which filesystem, which mount options
> >>
> >>   - ext3 on RAID1 (MD): / - rootflags=data=journal
> >
> > It wouldn't surprise me if this is specific to data=journal: that
> > journalling mode is pretty complex wrt dairty-data handling and isn't well
> > tested.
> >
> > Does switching that to data=writeback change things?
> 
> I'll confirm this tomorrow but it seems that even switching to 
> data=ordered (AFAIK default o ext3) is indeed enough to cure this problem.

yes, sorry, I meant ordered.

> Two questions remain then: why system dies when dirty reaches ~200MB 

I think you have ~2G of RAM and you're running with 
/proc/sys/vm/dirty_ratio=10, yes?

If so, when that machine hits 10% * 2G of dirty memory then everyone who
wants to dirty pages gets blocked.

> and what is wrong with ext3+data=journal with >=2.6.20-rc2?

Ah.  It has a bug in it ;)

As I said, data=journal has exceptional handling of pagecache data and is
not well tested.  Someone (and I'm not sure who) will need to get in there
and fix it.
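As a rough illustration of the threshold Andrew describes, the level at which writers start blocking can be estimated from the two procfs values involved. This is a simplification (the kernel derives the limit from dirtyable memory, not raw MemTotal), but it lands near the ~200MB at which the reported 2GB machine stalls:

```shell
# Approximate the dirty-memory level at which page dirtiers get blocked:
# dirty_ratio percent of total RAM. With 2 GB RAM and dirty_ratio=10
# this works out to roughly 200 MB.
ratio=$(cat /proc/sys/vm/dirty_ratio)
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "writers throttle near $(( total_kb * ratio / 100 )) kB dirty"
```

Once leaked "dirty" accounting crosses that line, every writer waits for writeback that never comes.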
Comment 43 Anonymous Emailer 2007-12-17 05:28:47 UTC
Reply-To: osterried@jesse.de

Hello,
 
> >   - ext3 on RAID1 (MD): / - rootflags=data=journal
> 
> It wouldn't surprise me if this is specific to data=journal: that
> journalling mode is pretty complex wrt dairty-data handling and isn't well
> tested.
> 
> Does switching that to data=writeback change things?
> 
> THomas, do you have ext3 data=journal on any filesytems?

On all systems i have ext3 journal_data enabled as a default mount option.

The test with "sync; sleep 1; sync; cat /proc/meminfo | grep -i dirty" is quite interesting:
on those machines with kernel 2.6.20 the amount of dirty pages remains high:

# cat /proc/vmstat /proc/meminfo |grep -i dirt
nr_dirty 101873
Dirty:          407492 kB
# sync
# sync
# cat /proc/vmstat /proc/meminfo |grep -i dirt
nr_dirty 99266
Dirty:          397064 kB

whereas with kernel 2.6.18-5, the amount of dirty pages really goes to zero:
# cat /proc/meminfo /proc/vmstat |grep -i dirt
Dirty:             232 kB
nr_dirty 58
# sync
# sync
# cat /proc/meminfo /proc/vmstat |grep -i dirt
Dirty:               0 kB
nr_dirty 0

The level of dirty pages at which the system gets into trouble varies.
The 2GB RAM machine has problems starting at about 400MB of dirty pages.
Another machine with less RAM has problems at a smaller amount of dirty pages.
/proc/sys/vm/dirty_ratio is at the default value: 40.

The amount of dirty-pages on kernel 2.6.20 and up only decreases a bit when
doing sync, but still remains at the high value, and increases day by day.

We have only ext3fs. No NFS. 

The machines which do a daily rsync backup to a separate disk get into trouble
after some time (one machine after 10 days, others after more time). Some
that have no rsync backup get into trouble more seldom, perhaps because without
the daily rsync the amount of dirty pages increases more slowly. This is why I
see rsync as a catalyst for the problem (but not the cause). Without rsync,
we have also had machines hitting this kind of bug.


Btw, while monitoring /proc/meminfo with the command
  "while true; do date;  cat /proc/meminfo |grep -i Dirt ; sleep 10; done | tee /var/log/kernel_vm_protokoll.txt",
i found this kind of out-of-order writing:
Sa Aug  4 00:01:38 CEST 2007
Dirty:            1664 kB
Sa Aug  4 00:01:48 CEST 2007
Dirty:            1812 kB
Sa Aug  4 00:01:58 CEST 2007
Dirty:            1948 kB
Sa Aug  4 00:02:08 CEST 2007
Sa Aug  4 00:02:08 CEST 2007
Dirty:            2976 kB
Dirty:            2976 kB
Sa Aug  4 00:02:08 CEST 2007
Sa Aug  4 00:02:08 CEST 2007
Dirty:            2976 kB
Dirty:            2976 kB
Sa Aug  4 00:02:18 CEST 2007
Dirty:            2092 kB
Sa Aug  4 00:02:28 CEST 2007
Dirty:            2136 kB
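A small variant of the monitoring loop above prints the timestamp and the value on a single line, which keeps each sample paired with its time even when output through a pipe is buffered or delayed. A sketch only; the sample count is arbitrary here (the original loop ran forever):

```shell
# Sample Dirty once per 10 seconds, one timestamped line per sample.
for _ in 1 2 3; do
    printf '%s %s kB\n' "$(date '+%Y-%m-%d %H:%M:%S')" \
        "$(awk '/^Dirty:/ {print $2}' /proc/meminfo)"
    sleep 10
done
```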
Comment 44 Jan Kara 2007-12-17 06:27:04 UTC
> On Sun, 16 Dec 2007 14:46:36 +0100 (CET) Krzysztof Oledzki <olel@ans.pl>
> wrote:
> 
> > >>> Which filesystem, which mount options
> > >>
> > >>   - ext3 on RAID1 (MD): / - rootflags=data=journal
> > >
> > > It wouldn't surprise me if this is specific to data=journal: that
> > > journalling mode is pretty complex wrt dairty-data handling and isn't
> well
> > > tested.
> > >
> > > Does switching that to data=writeback change things?
> > 
> > I'll confirm this tomorrow but it seems that even switching to 
> > data=ordered (AFAIK default o ext3) is indeed enough to cure this problem.
> 
> yes, sorry, I meant ordered.
> 
> > Two questions remain then: why system dies when dirty reaches ~200MB 
> 
> I think you have ~2G of RAM and you're running with 
> /proc/sys/vm/dirty_ratio=10, yes?
> 
> If so, when that machine hits 10% * 2G of dirty memory then everyone who
> wants to dirty pages gets blocked.
> 
> > and what is wrong with ext3+data=journal with >=2.6.20-rc2?
> 
> Ah.  It has a bug in it ;)
> 
> As I said, data=journal has exceptional handling of pagecache data and is
> not well tested.  Someone (and I'm not sure who) will need to get in there
> and fix it.
  It seems fsx-linux is able to trigger the leak on my test machine so
I'll have a look into it (not sure if I'll get to it today but I should
find some time for it this week)...

									Honza
Comment 45 Krzysztof Oledzki 2007-12-17 09:17:58 UTC

On Sun, 16 Dec 2007, Andrew Morton wrote:

> On Sun, 16 Dec 2007 14:46:36 +0100 (CET) Krzysztof Oledzki <olel@ans.pl>
> wrote:
>
>>>>> Which filesystem, which mount options
>>>>
>>>>   - ext3 on RAID1 (MD): / - rootflags=data=journal
>>>
>>> It wouldn't surprise me if this is specific to data=journal: that
>>> journalling mode is pretty complex wrt dairty-data handling and isn't well
>>> tested.
>>>
>>> Does switching that to data=writeback change things?
>>
>> I'll confirm this tomorrow but it seems that even switching to
>> data=ordered (AFAIK default o ext3) is indeed enough to cure this problem.
>
> yes, sorry, I meant ordered.

OK, I can confirm that the problem is with data=journal. With data=ordered 
I get:

# uname -rns;uptime;sync;sleep 1;sync ;sleep 1; sync;grep Dirty /proc/meminfo
Linux cougar 2.6.24-rc5
  17:50:34 up 1 day, 20 min,  1 user,  load average: 0.99, 0.48, 0.35
Dirty:               0 kB

>> Two questions remain then: why system dies when dirty reaches ~200MB
>
> I think you have ~2G of RAM and you're running with
> /proc/sys/vm/dirty_ratio=10, yes?
>
> If so, when that machine hits 10% * 2G of dirty memory then everyone who
> wants to dirty pages gets blocked.

Oh, right. Thank you for the explanation.

>> and what is wrong with ext3+data=journal with >=2.6.20-rc2?
>
> Ah.  It has a bug in it ;)
>
> As I said, data=journal has exceptional handling of pagecache data and is
> not well tested.  Someone (and I'm not sure who) will need to get in there
> and fix it.

OK, I'm willing to test it. ;)

Best regards,

 				Krzysztof Ol
Comment 46 Krzysztof Oledzki 2007-12-17 12:08:51 UTC
Created attachment 14089 [details]
grep ^Dirty: /proc/meminfo on 2.6.24-rc5 without data=journal
Comment 47 Anonymous Emailer 2007-12-19 09:45:41 UTC
Reply-To: torvalds@linux-foundation.org



On Sun, 16 Dec 2007, Krzysztof Oledzki wrote:
> 
> I'll confirm this tomorrow but it seems that even switching to data=ordered
> (AFAIK default o ext3) is indeed enough to cure this problem.

Ok, do we actually have any ext3 expert following this? I have no idea 
about what the journalling code does, but I have painful memories of ext3 
doing really odd buffer-head-based IO and totally bypassing all the normal 
page dirty logic.

Judging by the symptoms (sorry for not following this well, it came up 
while I was mostly away travelling), something probably *does* clear the 
dirty bit on the pages, but the dirty *accounting* is not done properly, 
so the kernel keeps thinking it has dirty pages.

Now, a simple "grep" shows that ext3 does not actually do any 
ClearPageDirty() or similar on its own, although maybe I missed some other 
subtle way this can happen. And the *normal* VFS routines that do 
ClearPageDirty should all be doing the proper accounting.

So I see a couple of possible cases:

 - actually clearing the PG_dirty bit somehow, without doing the 
   accounting.

   This looks very unlikely. PG_dirty is always cleared by some variant of 
   "*ClearPageDirty()", and that bit definition isn't used for anything 
   else in the whole kernel judging by "grep" (the page allocator tests 
   the bit, that's it).

   And there aren't that many hits for ClearPageDirty, and they all seem 
   to do the proper "dec_zone_page_state(page, NR_FILE_DIRTY);" etc if the 
   mapping has dirty state accounting.

   The exceptions seem to be:
    - the page freeing path, but that path checks that "mapping" is NULL 
      (so no accounting), and would complain loudly if it wasn't
    - the swap state stuff ("move_from_swap_cache()"), but that should 
      only ever trigger for swap cache pages (we have a BUG_ON() in that 
      path), and those don't do dirty accounting anyway.
    - pageout(), but again only for pages that have a NULL mapping.

 - ext3 might be clearing (probably indirectly) the "page->mapping" thing 
   or similar, which in turn will make the VFS think that even a dirty 
   page isn't actually to be accounted for - so when the page *turned* 
   dirty, it was accounted as a dirty page, but then, when it was cleaned, 
   the accounting wasn't reversed because ->mapping had become NULL.

   This would be some interaction with the truncation logic, and quite 
   frankly, that should be all shared with the non-journal case, so I find 
   this all very unlikely. 

However, that second case is interesting, because the pageout case 
actually has a comment like this:

	/*
	 * Some data journaling orphaned pages can have
	 * page->mapping == NULL while being dirty with clean buffers.
	 */

which really sounds like the case in question. 

I may know the VM, but that special case was added due to insane 
journaling filesystems, and I don't know what insane things they do. Which 
is why I'm wondering if there is any ext3 person who knows the journaling 
code?

How/when does it ever "orphan" pages? Because yes, if it ever does that, 
and clears the ->mapping field on a mapped page, then that page will have 
incremented the dirty counts when it became dirty, but will *not* 
decrement the dirty count when it is an orphan.

> Two questions remain then: why system dies when dirty reaches ~200MB and what
> is wrong with ext3+data=journal with >=2.6.20-rc2?

Well, that one is probably pretty straightforward: since the kernel thinks 
that there are too many dirty pages, it will ask everybody who creates 
more dirty pages to clean out some *old* dirty pages, but since they don't 
exist, the whole thing will basically wait forever for a writeout to clean 
things out that will never happen.

200MB is 10% of your 2GB of low-mem RAM, and 10% is the default 
dirty_ratio that causes synchronous waits for writeback. If you use the 
normal 3:1 VM split, the hang should happen even earlier (at the ~100MB 
"dirty" mark).

So that part isn't the bug. The bug is in the accounting, but I'm pretty 
damn sure that the core VM itself is pretty ok, since that code has now 
been stable for people for the last year or so. It seems that ext3 (with 
data journaling) does something dodgy wrt some page.

But how about trying this appended patch. It should warn a few times if 
some page is ever removed from a mapping while it's dirty (and the mapping 
is one that should have been accounted). It also tries to "fix up" the 
case, so *if* this is the cause, it should also fix the bug.

I'd love to hear if you get any stack dumps with this, and what the 
backtrace is (and whether the dirty counts then stay ok).

The patch is totally untested. It compiles for me. That's all I can say.

(There's a few other places that set ->mapping to NULL, but they're pretty 
esoteric. Page migration? Stuff like that).

			Linus

---
 mm/filemap.c |   12 ++++++++++++
 1 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 188cf5f..7560843 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -124,6 +124,18 @@ void __remove_from_page_cache(struct page *page)
 	mapping->nrpages--;
 	__dec_zone_page_state(page, NR_FILE_PAGES);
 	BUG_ON(page_mapped(page));
+
+	if (PageDirty(page) && mapping_cap_account_dirty(mapping)) {
+		static int count = 10;
+		if (count) {
+			count--;
+			WARN_ON(1);
+		}
+
+		/* Try to fix up the bug.. */
+		dec_zone_page_state(page, NR_FILE_DIRTY);
+		dec_bdi_stat(mapping->backing_dev_info, BDI_RECLAIMABLE);
+	}
 }
 
 void remove_from_page_cache(struct page *page)
Comment 48 Ingo Molnar 2007-12-19 10:10:43 UTC
Created attachment 14126 [details]
ext3 dirty data accounting fix/debug patch

the patch from Linus was whitespace damaged - i've attached the patch in an applicable format.
Comment 49 Ingo Molnar 2007-12-19 10:17:46 UTC
> The patch is totally untested. It compiles for me. That's all I can 
> say.

i have tried your debug patch with ext3 and the data=journal mount 
option:

  /dev/sda5 on /home type ext3 (rw,noatime,nodiratime,data=journal)

but could not trigger the warning. Maybe those who are seeing the 
leaking dirty memory can reproduce it better. Maybe it depends on 
certain ext3 features, certain file sizes or other rare scenarios?
Comment 50 Ingo Molnar 2007-12-19 10:20:21 UTC
> i have tried your debug patch with ext3 and the data=journal mount 
> option:
> 
>   /dev/sda5 on /home type ext3 (rw,noatime,nodiratime,data=journal)
> 
> but could not trigger the warning. Maybe those who are seeing the 
> leaking dirty memory can reproduce it better. Maybe it depends on 
> certain ext3 features, certain file sizes or other rare scenarios?

ha! It triggered when i gave up after 15 minutes of trying to trigger it 
via various stress tools and logged out of the testbox, over its 
console:

 WARNING: at mm/filemap.c:132 __remove_from_page_cache()
Pid: 3238, comm: bash Not tainted 2.6.24-rc5 #111
 [<c0105c46>] show_trace_log_lvl+0x12/0x25
 [<c01063ea>] show_trace+0xd/0x10
 [<c010670a>] dump_stack+0x57/0x5f
 [<c01615cf>] __remove_from_page_cache+0x78/0xd4
 [<c016164f>] remove_from_page_cache+0x24/0x2f
 [<c0167183>] truncate_complete_page+0x2d/0x41
 [<c0167252>] truncate_inode_pages_range+0xbb/0x29d
 [<c0167440>] truncate_inode_pages+0xc/0x10
 [<c016d329>] vmtruncate+0x7d/0x11d
 [<c018e9e7>] inode_setattr+0x5e/0x139
 [<c01be487>] ext3_setattr+0x189/0x1e5
 [<c018ec0f>] notify_change+0x14d/0x2de
 [<c017c3b7>] do_truncate+0x62/0x7b
 [<c0184af6>] may_open+0x1a9/0x1f4
 [<c01869b2>] open_namei+0x254/0x555
 [<c017bd39>] do_filp_open+0x1f/0x35
 [<c017bd8f>] do_sys_open+0x40/0xb5
 [<c017be30>] sys_open+0x16/0x18
 [<c0104bae>] sysenter_past_esp+0x5f/0xa5
 =======================

so it's ext3 inode attributes and vmtruncate ... hmm .... fun :-)
Comment 51 Anonymous Emailer 2007-12-19 11:21:43 UTC
Reply-To: torvalds@linux-foundation.org



On Wed, 19 Dec 2007, Ingo Molnar wrote:
> 
> ha! It triggered when i gave up after 15 minutes of trying to trigger it 
> via various stress tools and logged out of the testbox, over its 
> console:

Goodie. So this path does indeed seem to be the reason. 

>  WARNING: at mm/filemap.c:132 __remove_from_page_cache()
> Pid: 3238, comm: bash Not tainted 2.6.24-rc5 #111
>  [<c0105c46>] show_trace_log_lvl+0x12/0x25
>  [<c01063ea>] show_trace+0xd/0x10
>  [<c010670a>] dump_stack+0x57/0x5f
>  [<c01615cf>] __remove_from_page_cache+0x78/0xd4
>  [<c016164f>] remove_from_page_cache+0x24/0x2f
>  [<c0167183>] truncate_complete_page+0x2d/0x41
>  [<c0167252>] truncate_inode_pages_range+0xbb/0x29d
>  [<c0167440>] truncate_inode_pages+0xc/0x10
>  [<c016d329>] vmtruncate+0x7d/0x11d
>  [<c018e9e7>] inode_setattr+0x5e/0x139
>  [<c01be487>] ext3_setattr+0x189/0x1e5
>  [<c018ec0f>] notify_change+0x14d/0x2de
>  [<c017c3b7>] do_truncate+0x62/0x7b
>  [<c0184af6>] may_open+0x1a9/0x1f4
>  [<c01869b2>] open_namei+0x254/0x555
>  [<c017bd39>] do_filp_open+0x1f/0x35
>  [<c017bd8f>] do_sys_open+0x40/0xb5
>  [<c017be30>] sys_open+0x16/0x18
>  [<c0104bae>] sysenter_past_esp+0x5f/0xa5
>  =======================
> 
> so it's ext3 inode attributes and vmtruncate ... hmm .... fun :-)

No, it's not inode attributes in the "extended attribute" meaning - the 
"setattr()" thing is just the VFS's internal name for setting various 
perfectly standard state in an inode. 

In this case, it simply seems to be a regular O_TRUNC that causes us to 
truncate the file, which in turn causes a "notify_change()" with the inode 
size, and that causes the VFS layer to call down to the low-level 
filesystem that an "inode attribute" has changed (namely the size and the 
inode modification times). That in turn just causes a regular 
vmtruncate().

However, the interesting thing is that "truncate_complete_page()" already 
did a "cancel_dirty_page()" on that page, which should have cleared the 
dirty bit. And we do all of this with the page lock held, and after having 
unmapped the page from any user mappings, so how the *heck* did that page 
get to be dirty again by the time we do the "remove_from_page_cache()" 
right afterwards?

Regardless, very interesting, and this does seem to be the cause. The 
trivial patch for 2.6.24 - and any backports - may well be to just remove 
the warnings (and just keep the "fixup" in remove_from_page_cache()), but 
I'd really like to understand how that page got marked dirty again, and 
why it seems to be related to "data=journal".

(Of course, the "data=journal" may just be a timing/IO-pattern thing, and 
maybe this is a totally generic race that is just hard to hit under normal 
circumstances, but we really shouldn't be marking locked pages dirty!)

Anyway, I'd really love to have a confirmation from Krzysztof that this 
really does fix it for him too (with hopefully the same backtrace).

			Linus
Comment 52 Anonymous Emailer 2007-12-19 11:42:43 UTC
Reply-To: torvalds@linux-foundation.org



On Wed, 19 Dec 2007, Linus Torvalds wrote:
> 
> Regardless, very interesting, and this does seem to be the cause. The 
> trivial patch for 2.6.24 - and any backports - may well be to just remove 
> the warnings (and just keep the "fixup" in remove_from_page_cache())

In fact, if we do this, it would actually clean up the code. Just remove 
all the dirty counting stuff entirely from the callers, and just have them 
rely on remove_from_page_cache() doing it for them. Much simpler.

However:

> but I'd really like to understand how that page got marked dirty again, 
> and why it seems to be related to "data=journal".

That still holds. I'd really like to understand why/how this triggers.

			Linus
Comment 53 Ingo Molnar 2007-12-19 11:54:14 UTC
* Linus Torvalds <torvalds@linux-foundation.org> wrote:

> > but I'd really like to understand how that page got marked dirty 
> > again, and why it seems to be related to "data=journal".
> 
> That still holds. I'd really like to understand why/how this triggers.

another observation: my QA setup has been running with your patch also 
applied, for the last hour, doing a few dozen random bootups to ext3 - 
but without data=journal. The warning did not trigger even once (and i 
tried a few login/logouts as well which triggered it before). So it does 
seem to be related to data journalling.

	Ingo
Comment 54 Anonymous Emailer 2007-12-19 11:58:07 UTC
Reply-To: torvalds@linux-foundation.org



On Wed, 19 Dec 2007, Linus Torvalds wrote:
> 
> > but I'd really like to understand how that page got marked dirty again, 
> > and why it seems to be related to "data=journal".
> 
> That still holds. I'd really like to understand why/how this triggers.

Hmm. "truncate_complete_page()" does:

        cancel_dirty_page(page, PAGE_CACHE_SIZE);

        if (PagePrivate(page))
                do_invalidatepage(page, 0);

        remove_from_page_cache(page);

and yes, that "do_invalidatepage()" calls down to the filesystem layer 
(mapping->a_ops->invalidatepage), and yes, this all goes into the 
journalling code.

So at a guess, the bug would go away if we just moved the 
"cancel_dirty_page()" to *after* the do_invalidatepage() case, although I 
wonder if we had some reason to do it in that order (ie maybe 
do_invalidatepage() likes to see the page being clean).

Anyway, I think the fixups I added to __remove_from_page_cache() seem to 
continually become a better idea, considering that we let the filesystem 
mess around with the page in between, and if the filesystem messes with 
the dirty bits, it really means that the VM shouldn't just rely on it 
remaining clean.

But I still want/hope-for a confirmation from Krzysztof that the patch 
actually fixes it for him too. At which point I'll just commit it without 
the stack dumping.

		Linus
Comment 55 Ingo Molnar 2007-12-19 12:25:37 UTC
* Linus Torvalds <torvalds@linux-foundation.org> wrote:

> Anyway, I think the fixups I added to __remove_from_page_cache() seem 
> to continually become a better idea, considering that we let the 
> filesystem mess around with the page in between, and if the filesystem 
> messes with the dirty bits, it really means that the VM shouldn't just 
> rely on it remaining clean.

might make sense to make the printout dependent on CONFIG_DEBUG_VM and 
turn it into a simple WARN_ON_ONCE(). That way we could map the range of 
filesystems that are affected by this, without inconveniencing users. 
(and could then decide whether the VM wants to tolerate this or not.)

[ or perhaps make it a single, unconditional WARN_ON_ONCE() in 2.6.24.0,
  kerneloops.org would then pick it up and we'd have the info. ]

	Ingo
Comment 56 Krzysztof Oledzki 2007-12-19 13:54:00 UTC

On Wed, 19 Dec 2007, Linus Torvalds wrote:

>
>
> On Wed, 19 Dec 2007, Linus Torvalds wrote:
>>
>>> but I'd really like to understand how that page got marked dirty again,
>>> and why it seems to be related to "data=journal".
>>
>> That still holds. I'd really like to understand why/how this triggers.
>
> Hmm. "truncate_complete_page()" does:
>
>        cancel_dirty_page(page, PAGE_CACHE_SIZE);
>
>        if (PagePrivate(page))
>                do_invalidatepage(page, 0);
>
>        remove_from_page_cache(page);
>
> and yes, that "do_invalidatepage()" calls down to the filesystem layer
> (mapping->a_ops->invalidatepage), and yes, this all goes into the
> journalling code.
>
> So at a guess, the bug would go away if we just moved the
> "cancel_dirty_page()" to *after* the do_invalidatepage() case, although I
> wonder if we had some reason to do it in that order (ie maybe
> do_invalidatepage() likes to see the page being clean).
>
> Anyway, I think the fixups I added to __remove_from_page_cache() seem to
> continually become a better idea, considering that we let the filesystem
> mess around with the page in between, and if the filesystem messes with
> the dirty bits, it really means that the VM shouldn't just rely on it
> remaining clean.
>
> But I still want/hope-for a confirmation from Krzysztof that the patch
> actually fixes it for him too. At which point I'll just commit it without
> the stack dumping.

Just booted the system with 2.6.24-rc5+the debug/fixup patch. It took 2 
minutes to get this:

WARNING: at mm/filemap.c:132 __remove_from_page_cache()
Pid: 3734, comm: lmtp Not tainted 2.6.24-rc5 #1
  [<c014d772>] __remove_from_page_cache+0x87/0xe6
  [<c014d7f3>] remove_from_page_cache+0x22/0x2b
  [<c015327f>] truncate_complete_page+0x2b/0x3f
  [<c0153367>] truncate_inode_pages_range+0xd4/0x2d8
  [<c018b96e>] inotify_inode_is_dead+0x1a/0x70
  [<c018b96e>] inotify_inode_is_dead+0x1a/0x70
  [<c0245f52>] _atomic_dec_and_lock+0x2a/0x48
  [<c0153582>] truncate_inode_pages+0x17/0x1d
  [<c01a5b39>] ext3_delete_inode+0x13/0xbb
  [<c01a5b26>] ext3_delete_inode+0x0/0xbb
  [<c0178eda>] generic_delete_inode+0x5e/0xc6
  [<c0178604>] iput+0x60/0x62
  [<c0176779>] d_kill+0x2d/0x46
  [<c0176a94>] dput+0xdc/0xe4
  [<c01697c4>] __fput+0x113/0x13d
  [<c016727d>] filp_close+0x51/0x58
  [<c0168315>] sys_close+0x70/0xab
  [<c0103e92>] sysenter_past_esp+0x5f/0xa5
  =======================

WARNING: at mm/filemap.c:132 __remove_from_page_cache()
Pid: 3738, comm: smtp Not tainted 2.6.24-rc5 #1
  [<c014d772>] __remove_from_page_cache+0x87/0xe6
  [<c014d7f3>] remove_from_page_cache+0x22/0x2b
  [<c015327f>] truncate_complete_page+0x2b/0x3f
  [<c0153367>] truncate_inode_pages_range+0xd4/0x2d8
  [<c018b96e>] inotify_inode_is_dead+0x1a/0x70
  [<c018b96e>] inotify_inode_is_dead+0x1a/0x70
  [<c0245f52>] _atomic_dec_and_lock+0x2a/0x48
  [<c0153582>] truncate_inode_pages+0x17/0x1d
  [<c01a5b39>] ext3_delete_inode+0x13/0xbb
  [<c01a5b26>] ext3_delete_inode+0x0/0xbb
  [<c0178eda>] generic_delete_inode+0x5e/0xc6
  [<c0178604>] iput+0x60/0x62
  [<c0176779>] d_kill+0x2d/0x46
  [<c0176a94>] dput+0xdc/0xe4
  [<c01697c4>] __fput+0x113/0x13d
  [<c016727d>] filp_close+0x51/0x58
  [<c0168315>] sys_close+0x70/0xab
  [<c0103e92>] sysenter_past_esp+0x5f/0xa5
  =======================

ole@cougar:~$ dmesg |grep -c __remove_from_page_cache
10

ole@cougar:~$ uptime
  22:53:09 up 2 min,  1 user,  load average: 0.57, 0.37, 0.14


Best regards,

 				Krzysztof Oledzki
Comment 57 Krzysztof Oledzki 2007-12-19 14:51:20 UTC

On Wed, 19 Dec 2007, Krzysztof Oledzki wrote:

>
>
> On Wed, 19 Dec 2007, Linus Torvalds wrote:
>
>> 
>> 
>> On Wed, 19 Dec 2007, Linus Torvalds wrote:
>>> 
>>>> but I'd really like to understand how that page got marked dirty again,
>>>> and why it seems to be related to "data=journal".
>>> 
>>> That still holds. I'd really like to understand why/how this triggers.
>> 
>> Hmm. "truncate_complete_page()" does:
>>
>>        cancel_dirty_page(page, PAGE_CACHE_SIZE);
>>
>>        if (PagePrivate(page))
>>                do_invalidatepage(page, 0);
>>
>>        remove_from_page_cache(page);
>> 
>> and yes, that "do_invalidatepage()" calls down to the filesystem layer
>> (mapping->a_ops->invalidatepage), and yes, this all goes into the
>> journalling code.
>> 
>> So at a guess, the bug would go away if we just moved the
>> "cancel_dirty_page()" to *after* the do_invalidatepage() case, although I
>> wonder if we had some reason to do it in that order (ie maybe
>> do_invalidatepage() likes to see the page being clean).
>> 
>> Anyway, I think the fixups I added to __remove_from_page_cache() seem to
>> continually become a better idea, considering that we let the filesystem
>> mess around with the page in between, and if the filesystem messes with
>> the dirty bits, it really means that the VM shouldn't just rely on it
>> remaining clean.
>> 
>> But I still want/hope-for a confirmation from Krzysztof that the patch
>> actually fixes it for him too. At which point I'll just commit it without
>> the stack dumping.
>
> Just booted the system with 2.6.24-rc5+the debug/fixup patch. It took 2 
> minutes to get this:
>
> WARNING: at mm/filemap.c:132 __remove_from_page_cache()
> Pid: 3734, comm: lmtp Not tainted 2.6.24-rc5 #1
> [<c014d772>] __remove_from_page_cache+0x87/0xe6
> [<c014d7f3>] remove_from_page_cache+0x22/0x2b
> [<c015327f>] truncate_complete_page+0x2b/0x3f
> [<c0153367>] truncate_inode_pages_range+0xd4/0x2d8
> [<c018b96e>] inotify_inode_is_dead+0x1a/0x70
> [<c018b96e>] inotify_inode_is_dead+0x1a/0x70
> [<c0245f52>] _atomic_dec_and_lock+0x2a/0x48
> [<c0153582>] truncate_inode_pages+0x17/0x1d
> [<c01a5b39>] ext3_delete_inode+0x13/0xbb
> [<c01a5b26>] ext3_delete_inode+0x0/0xbb
> [<c0178eda>] generic_delete_inode+0x5e/0xc6
> [<c0178604>] iput+0x60/0x62
> [<c0176779>] d_kill+0x2d/0x46
> [<c0176a94>] dput+0xdc/0xe4
> [<c01697c4>] __fput+0x113/0x13d
> [<c016727d>] filp_close+0x51/0x58
> [<c0168315>] sys_close+0x70/0xab
> [<c0103e92>] sysenter_past_esp+0x5f/0xa5
> =======================
>
> WARNING: at mm/filemap.c:132 __remove_from_page_cache()
> Pid: 3738, comm: smtp Not tainted 2.6.24-rc5 #1
> [<c014d772>] __remove_from_page_cache+0x87/0xe6
> [<c014d7f3>] remove_from_page_cache+0x22/0x2b
> [<c015327f>] truncate_complete_page+0x2b/0x3f
> [<c0153367>] truncate_inode_pages_range+0xd4/0x2d8
> [<c018b96e>] inotify_inode_is_dead+0x1a/0x70
> [<c018b96e>] inotify_inode_is_dead+0x1a/0x70
> [<c0245f52>] _atomic_dec_and_lock+0x2a/0x48
> [<c0153582>] truncate_inode_pages+0x17/0x1d
> [<c01a5b39>] ext3_delete_inode+0x13/0xbb
> [<c01a5b26>] ext3_delete_inode+0x0/0xbb
> [<c0178eda>] generic_delete_inode+0x5e/0xc6
> [<c0178604>] iput+0x60/0x62
> [<c0176779>] d_kill+0x2d/0x46
> [<c0176a94>] dput+0xdc/0xe4
> [<c01697c4>] __fput+0x113/0x13d
> [<c016727d>] filp_close+0x51/0x58
> [<c0168315>] sys_close+0x70/0xab
> [<c0103e92>] sysenter_past_esp+0x5f/0xa5
> =======================
>
> ole@cougar:~$ dmesg |grep -c __remove_from_page_cache
> 10
>
> ole@cougar:~$ uptime
> 22:53:09 up 2 min,  1 user,  load average: 0.57, 0.37, 0.14

A slightly different call trace:

WARNING: at mm/filemap.c:132 __remove_from_page_cache()
Pid: 3468, comm: qmgr Not tainted 2.6.24-rc5 #1
  [<c014d772>] __remove_from_page_cache+0x87/0xe6
  [<c014d7f3>] remove_from_page_cache+0x22/0x2b
  [<c015327f>] truncate_complete_page+0x2b/0x3f
  [<c0153367>] truncate_inode_pages_range+0xd4/0x2d8
  [<c0245f52>] _atomic_dec_and_lock+0x2a/0x48
  [<c0153582>] truncate_inode_pages+0x17/0x1d
  [<c01a5b39>] ext3_delete_inode+0x13/0xbb
  [<c01a5b26>] ext3_delete_inode+0x0/0xbb
  [<c0178eda>] generic_delete_inode+0x5e/0xc6
  [<c0178604>] iput+0x60/0x62
  [<c0170ebd>] do_unlinkat+0xbf/0x133
  [<c017a9c9>] mntput_no_expire+0x11/0x5c
  [<c016727d>] filp_close+0x51/0x58
  [<c0103e92>] sysenter_past_esp+0x5f/0xa5

Best regards,

 				Krzysztof Ol
Comment 58 Anonymous Emailer 2007-12-19 15:43:08 UTC
Reply-To: torvalds@linux-foundation.org



On Wed, 19 Dec 2007, Krzysztof Oledzki wrote:
> 
> Little different call trace:

They're still all very similar, ie it's always about that normal truncate 
path - the details just differ about exactly what causes the truncate.

I committed the patch with a slightly extended comment and without the 
WARN_ON(). But it would be great to verify that this really fixes it, and 
that there isn't some *other* leak. So please keep running the kernel with 
the patch, and try to make your dirty page leak happen, and see if it 
really is gone now...

Just to verify that there isn't something else going on too...

		Linus
Comment 59 Krzysztof Oledzki 2007-12-19 15:59:33 UTC

On Wed, 19 Dec 2007, Linus Torvalds wrote:

>
>
> On Wed, 19 Dec 2007, Krzysztof Oledzki wrote:
>>
>> Little different call trace:
>
> They're still all very similar, ie it's always about that normal truncate
> path - the details just differ about exactly what causes the truncate.

OK. What still beats me is why this commit:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=fba2591bf4e418b6c3f9f8794c9dd8fe40ae7bd9
exposed the problem?

> I committed the patch witha slightly extended comment and without the
> WARN_ON().

Thank you very much! Is it possible and safe to backport this patch to 
2.6.23 and 2.6.22?

> But it would be great to verify that this really fixes it, and that 
> there isn't some *other* leak. So please keep running the kernel with 
> the patch, and try to make your dirty page leak happen, and see if it 
> really is gone now...
>
> Just to verify that there isn't something else going on too...

So far it is OK:

ole@cougar:~$ uptime;sync;sleep 1;sync;sleep 1;sync;grep Dirt /proc/meminfo
  00:46:42 up  1:56,  1 user,  load average: 0.24, 0.53, 0.61
Dirty:               0 kB

I'll let you know if something happens.

Best regards,

 			Krzysztof Ol
Comment 60 Anonymous Emailer 2007-12-19 16:24:34 UTC
Reply-To: torvalds@linux-foundation.org



On Thu, 20 Dec 2007, Krzysztof Oledzki wrote:
> 
> OK. What still beats me is why this commit:
>
> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=fba2591bf4e418b6c3f9f8794c9dd8fe40ae7bd9
> exposed the problem?

Well, that commit was actually buggy, and fixed by some subsequent commits 
(namely the two commits 5f2a105d5e33a038a717995d2738434f9c25aed2 and 
3e67c0987d7567ad666641164a153dca9a43b11d). So with just that commit, you 
had a serious dirty page accounting memory leak.

With it fixed, you still had the leak, but now it was much smaller, so you 
pinpointed that fba2591bf4e418b6c3f9f8794c9dd8fe40ae7bd9 commit as the 
"problem".  But the problem, as far as I can tell, had actually been there 
before that too, although it's entirely possible that all the 
serialization in the old test_clear_page_dirty() could have made it harder 
to trigger.

It also used to be that we did the "radix_tree_tag_clear()" there, which 
could also have hidden the page from any concurrent processes that tried 
to look up dirty pages.

But I actually suspect that the bug was there before, and was 
really exposed by the new dirty-page accounting code that went in late 
last year (and that exposed a lot of _other_ bugs too that had been hidden 
by our old model of just scanning the page tables). I think that went into 
2.6.19.

			Linus
Comment 61 Jan Kara 2007-12-19 17:05:15 UTC
> On Sun, 16 Dec 2007, Krzysztof Oledzki wrote:
> > 
> > I'll confirm this tomorrow but it seems that even switching to data=ordered
> > (AFAIK default on ext3) is indeed enough to cure this problem.
> 
> Ok, do we actually have any ext3 expert following this? I have no idea 
> about what the journalling code does, but I have painful memories of ext3 
> doing really odd buffer-head-based IO and totally bypassing all the normal 
> page dirty logic.
> 
> Judging by the symptoms (sorry for not following this well, it came up 
> while I was mostly away travelling), something probably *does* clear the 
> dirty bit on the pages, but the dirty *accounting* is not done properly, 
> so the kernel keeps thinking it has dirty pages.
> 
> Now, a simple "grep" shows that ext3 does not actually do any 
> ClearPageDirty() or similar on its own, although maybe I missed some other 
> subtle way this can happen. And the *normal* VFS routines that do 
> ClearPageDirty should all be doing the proper accounting.
> 
> So I see a couple of possible cases:
> 
>  - actually clearing the PG_dirty bit somehow, without doing the 
>    accounting.
> 
>    This looks very unlikely. PG_dirty is always cleared by some variant of 
>    "*ClearPageDirty()", and that bit definition isn't used for anything 
>    else in the whole kernel judging by "grep" (the page allocator tests 
>    the bit, that's it).
> 
>    And there aren't that many hits for ClearPageDirty, and they all seem 
>    to do the proper "dec_zone_page_state(page, NR_FILE_DIRTY);" etc if the 
>    mapping has dirty state accounting.
> 
>    The exceptions seem to be:
>     - the page freeing path, but that path checks that "mapping" is NULL 
>       (so no accounting), and would complain loudly if it wasn't
>     - the swap state stuff ("move_from_swap_cache()"), but that should 
>       only ever trigger for swap cache pages (we have a BUG_ON() in that 
>       path), and those don't do dirty accounting anyway.
>     - pageout(), but again only for pages that have a NULL mapping.
> 
>  - ext3 might be clearing (probably indirectly) the "page->mapping" thing 
>    or similar, which in turn will make the VFS think that even a dirty 
>    page isn't actually to be accounted for - so when the page *turned* 
>    dirty, it was accounted as a dirty page, but then, when it was cleaned, 
>    the accounting wasn't reversed because ->mapping had become NULL.
> 
>    This would be some interaction with the truncation logic, and quite 
>    frankly, that should be all shared with the non-journal case, so I find 
>    this all very unlikely. 
> 
> However, that second case is interesting, because the pageout case 
> actually has a comment like this:
> 
>       /*
>        * Some data journaling orphaned pages can have
>        * page->mapping == NULL while being dirty with clean buffers.
>        */
> 
> which really sounds like the case in question. 
> 
> I may know the VM, but that special case was added due to insane 
> journaling filesystems, and I don't know what insane things they do. Which 
> is why I'm wondering if there is any ext3 person who knows the journaling 
> code?
  Yes, I'm looking into the problem... I think those orphan pages
without mapping are created because we cannot drop truncated
buffers/pages immediately.  There can be a committing transaction that
still needs the data in those buffers and until it commits we have to
keep the pages (and even maybe write them to disk etc.). But eventually,
we should write the buffers, call try_to_free_buffers() which calls
cancel_dirty_page() and everything should be happy... in theory ;)
  In practice, I have not yet narrowed down where the problem is.
fsx-linux is able to trigger the problem on my test machine so as
suspected it is some bad interaction of writes (plain writes, no mmap),
truncates and probably writeback. Small tests don't seem to trigger the
problem (fsx needs at least a few hundred operations to trigger the
problem) - on the other hand when some sequence of operations causes
lost dirty pages, they are lost deterministically in every run. Also the
file fsx operates on can be fairly small - 2MB was enough - so page
reclaim and such stuff probably isn't the thing we interact with.
  Tomorrow I'll try more...

> How/when does it ever "orphan" pages? Because yes, if it ever does that, 
> and clears the ->mapping field on a mapped page, then that page will have 
> incremented the dirty counts when it became dirty, but will *not* 
> decrement the dirty count when it is an orphan.

								Honza
Comment 62 Anonymous Emailer 2007-12-19 17:20:02 UTC
Reply-To: nickpiggin@yahoo.com.au

On Thursday 20 December 2007 12:05, Jan Kara wrote:
> > On Sun, 16 Dec 2007, Krzysztof Oledzki wrote:
> > > I'll confirm this tomorrow but it seems that even switching to
> > > data=ordered (AFAIK default on ext3) is indeed enough to cure this
> > > problem.
> >
> > Ok, do we actually have any ext3 expert following this? I have no idea
> > about what the journalling code does, but I have painful memories of ext3
> > doing really odd buffer-head-based IO and totally bypassing all the
> > normal page dirty logic.
> >
> > Judging by the symptoms (sorry for not following this well, it came up
> > while I was mostly away travelling), something probably *does* clear the
> > dirty bit on the pages, but the dirty *accounting* is not done properly,
> > so the kernel keeps thinking it has dirty pages.
> >
> > Now, a simple "grep" shows that ext3 does not actually do any
> > ClearPageDirty() or similar on its own, although maybe I missed some
> > other subtle way this can happen. And the *normal* VFS routines that do
> > ClearPageDirty should all be doing the proper accounting.
> >
> > So I see a couple of possible cases:
> >
> >  - actually clearing the PG_dirty bit somehow, without doing the
> >    accounting.
> >
> >    This looks very unlikely. PG_dirty is always cleared by some variant
> > of "*ClearPageDirty()", and that bit definition isn't used for anything
> > else in the whole kernel judging by "grep" (the page allocator tests the
> > bit, that's it).
> >
> >    And there aren't that many hits for ClearPageDirty, and they all seem
> >    to do the proper "dec_zone_page_state(page, NR_FILE_DIRTY);" etc if
> > the mapping has dirty state accounting.
> >
> >    The exceptions seem to be:
> >     - the page freeing path, but that path checks that "mapping" is NULL
> >       (so no accounting), and would complain loudly if it wasn't
> >     - the swap state stuff ("move_from_swap_cache()"), but that should
> >       only ever trigger for swap cache pages (we have a BUG_ON() in that
> >       path), and those don't do dirty accounting anyway.
> >     - pageout(), but again only for pages that have a NULL mapping.
> >
> >  - ext3 might be clearing (probably indirectly) the "page->mapping" thing
> >    or similar, which in turn will make the VFS think that even a dirty
> >    page isn't actually to be accounted for - so when the page *turned*
> >    dirty, it was accounted as a dirty page, but then, when it was
> > cleaned, the accounting wasn't reversed because ->mapping had become
> > NULL.
> >
> >    This would be some interaction with the truncation logic, and quite
> >    frankly, that should be all shared with the non-journal case, so I
> > find this all very unlikely.
> >
> > However, that second case is interesting, because the pageout case
> > actually has a comment like this:
> >
> >     /*
> >      * Some data journaling orphaned pages can have
> >      * page->mapping == NULL while being dirty with clean buffers.
> >      */
> >
> > which really sounds like the case in question.
> >
> > I may know the VM, but that special case was added due to insane
> > journaling filesystems, and I don't know what insane things they do.
> > Which is why I'm wondering if there is any ext3 person who knows the
> > journaling code?
>
>   Yes, I'm looking into the problem... I think those orphan pages
> without mapping are created because we cannot drop truncated
> buffers/pages immediately.  There can be a committing transaction that
> still needs the data in those buffers and until it commits we have to
> keep the pages (and even maybe write them to disk etc.). But eventually,
> we should write the buffers, call try_to_free_buffers() which calls
> cancel_dirty_page() and everything should be happy... in theory ;)

If mapping is NULL, then try_to_free_buffers won't call cancel_dirty_page,
I think?

I don't know whether ext3 can be changed to not require/allow these dirty
pages, but I would rather Linus's dirty page accounting fix to go into that
path (the /* can this still happen? */ in try_to_free_buffers()), if possible.

Then you could also have a WARN_ON in __remove_from_page_cache().
Comment 63 Anonymous Emailer 2007-12-20 06:12:32 UTC
Reply-To: B.Steinbrink@gmx.de

On 2007.12.19 09:44:50 -0800, Linus Torvalds wrote:
> 
> 
> On Sun, 16 Dec 2007, Krzysztof Oledzki wrote:
> > 
> > I'll confirm this tomorrow but it seems that even switching to data=ordered
> > > (AFAIK default on ext3) is indeed enough to cure this problem.
> 
> Ok, do we actually have any ext3 expert following this? I have no idea 
> about what the journalling code does, but I have painful memories of ext3 
> doing really odd buffer-head-based IO and totally bypassing all the normal 
> page dirty logic.
> 
> Judging by the symptoms (sorry for not following this well, it came up 
> while I was mostly away travelling), something probably *does* clear the 
> dirty bit on the pages, but the dirty *accounting* is not done properly, 
> so the kernel keeps thinking it has dirty pages.
> 
> Now, a simple "grep" shows that ext3 does not actually do any 
> ClearPageDirty() or similar on its own, although maybe I missed some other 
> subtle way this can happen. And the *normal* VFS routines that do 
> ClearPageDirty should all be doing the proper accounting.
> 
> So I see a couple of possible cases:
> 
>  - actually clearing the PG_dirty bit somehow, without doing the 
>    accounting.
> 
>    This looks very unlikely. PG_dirty is always cleared by some variant of 
>    "*ClearPageDirty()", and that bit definition isn't used for anything 
>    else in the whole kernel judging by "grep" (the page allocator tests 
>    the bit, that's it).

OK, so I looked for PG_dirty anyway.

In 46d2277c796f9f4937bfa668c40b2e3f43e93dd0 you made try_to_free_buffers
bail out if the page is dirty.

Then in 3e67c0987d7567ad666641164a153dca9a43b11d, Andrew fixed
truncate_complete_page, because it called cancel_dirty_page (and thus
cleared PG_dirty) after try_to_free_buffers was called via
do_invalidatepage.

Now, if I'm not mistaken, we can end up as follows.

truncate_complete_page()
  cancel_dirty_page() // PG_dirty cleared, decr. dirty pages
  do_invalidatepage()
    ext3_invalidatepage()
      journal_invalidatepage()
        journal_unmap_buffer()
          __dispose_buffer()
            __journal_unfile_buffer()
              __journal_temp_unlink_buffer()
                mark_buffer_dirty(); // PG_dirty set, incr. dirty pages

If journal_unmap_buffer then returns 0, try_to_free_buffers is not
called and neither is cancel_dirty_page, so the dirty pages accounting
is not decreased again.

As try_to_free_buffers got its ext3 hack back in
ecdfc9787fe527491baefc22dce8b2dbd5b2908d, maybe
3e67c0987d7567ad666641164a153dca9a43b11d should be reverted? (Except for
the accounting fix in cancel_dirty_page, of course).


On a side note, before 8368e328dfe1c534957051333a87b3210a12743b the task
io accounting for cancelled writes always happened if the page
was dirty, regardless of page->mapping. This was also already true for
the old test_clear_page_dirty code, and the commit log for
8368e328dfe1c534957051333a87b3210a12743b doesn't mention that semantic
change either, so maybe the "if (account_size)" block should be moved
out of the "if (mapping && ...)" block?

Björn
Comment 64 Jan Kara 2007-12-20 07:05:12 UTC
> On 2007.12.19 09:44:50 -0800, Linus Torvalds wrote:
> > 
> > 
> > On Sun, 16 Dec 2007, Krzysztof Oledzki wrote:
> > > 
> > > I'll confirm this tomorrow but it seems that even switching to data=ordered
> > > (AFAIK default on ext3) is indeed enough to cure this problem.
> > 
> > Ok, do we actually have any ext3 expert following this? I have no idea 
> > about what the journalling code does, but I have painful memories of ext3 
> > doing really odd buffer-head-based IO and totally bypassing all the normal 
> > page dirty logic.
> > 
> > Judging by the symptoms (sorry for not following this well, it came up 
> > while I was mostly away travelling), something probably *does* clear the 
> > dirty bit on the pages, but the dirty *accounting* is not done properly, 
> > so the kernel keeps thinking it has dirty pages.
> > 
> > Now, a simple "grep" shows that ext3 does not actually do any 
> > ClearPageDirty() or similar on its own, although maybe I missed some other 
> > subtle way this can happen. And the *normal* VFS routines that do 
> > ClearPageDirty should all be doing the proper accounting.
> > 
> > So I see a couple of possible cases:
> > 
> >  - actually clearing the PG_dirty bit somehow, without doing the 
> >    accounting.
> > 
> >    This looks very unlikely. PG_dirty is always cleared by some variant of 
> >    "*ClearPageDirty()", and that bit definition isn't used for anything 
> >    else in the whole kernel judging by "grep" (the page allocator tests 
> >    the bit, that's it).
> 
> OK, so I looked for PG_dirty anyway.
> 
> In 46d2277c796f9f4937bfa668c40b2e3f43e93dd0 you made try_to_free_buffers
> bail out if the page is dirty.
> 
> Then in 3e67c0987d7567ad666641164a153dca9a43b11d, Andrew fixed
> truncate_complete_page, because it called cancel_dirty_page (and thus
> cleared PG_dirty) after try_to_free_buffers was called via
> do_invalidatepage.
> 
> Now, if I'm not mistaken, we can end up as follows.
> 
> truncate_complete_page()
>   cancel_dirty_page() // PG_dirty cleared, decr. dirty pages
>   do_invalidatepage()
>     ext3_invalidatepage()
>       journal_invalidatepage()
>         journal_unmap_buffer()
>           __dispose_buffer()
>             __journal_unfile_buffer()
>               __journal_temp_unlink_buffer()
>                 mark_buffer_dirty(); // PG_dirty set, incr. dirty pages
> 
> If journal_unmap_buffer then returns 0, try_to_free_buffers is not
> called and neither is cancel_dirty_page, so the dirty pages accounting
> is not decreased again.
  Yes, this can happen. The call to mark_buffer_dirty() is a fallout
from journal_unfile_buffer() trying to synchronise the JBD private dirty bit
(jbddirty) with the standard dirty bit. We could actually clear the
jbddirty bit before calling journal_unfile_buffer() so that this doesn't
happen but since Linus changed remove_from_pagecache() to not care about
redirtying the page I guess it's not needed any more...

								Honza
Comment 65 Jan Kara 2007-12-20 08:05:30 UTC
> > On 2007.12.19 09:44:50 -0800, Linus Torvalds wrote:
> > > 
> > > 
> > > On Sun, 16 Dec 2007, Krzysztof Oledzki wrote:
> > > > 
> > > > I'll confirm this tomorrow but it seems that even switching to data=ordered
> > > > (AFAIK default on ext3) is indeed enough to cure this problem.
> > > 
> > > Ok, do we actually have any ext3 expert following this? I have no idea 
> > > about what the journalling code does, but I have painful memories of ext3 
> > > doing really odd buffer-head-based IO and totally bypassing all the normal 
> > > page dirty logic.
> > > 
> > > Judging by the symptoms (sorry for not following this well, it came up 
> > > while I was mostly away travelling), something probably *does* clear the 
> > > dirty bit on the pages, but the dirty *accounting* is not done properly, 
> > > so the kernel keeps thinking it has dirty pages.
> > > 
> > > Now, a simple "grep" shows that ext3 does not actually do any 
> > > ClearPageDirty() or similar on its own, although maybe I missed some other 
> > > subtle way this can happen. And the *normal* VFS routines that do 
> > > ClearPageDirty should all be doing the proper accounting.
> > > 
> > > So I see a couple of possible cases:
> > > 
> > >  - actually clearing the PG_dirty bit somehow, without doing the 
> > >    accounting.
> > > 
> > >    This looks very unlikely. PG_dirty is always cleared by some variant of 
> > >    "*ClearPageDirty()", and that bit definition isn't used for anything 
> > >    else in the whole kernel judging by "grep" (the page allocator tests 
> > >    the bit, that's it).
> > 
> > OK, so I looked for PG_dirty anyway.
> > 
> > In 46d2277c796f9f4937bfa668c40b2e3f43e93dd0 you made try_to_free_buffers
> > bail out if the page is dirty.
> > 
> > Then in 3e67c0987d7567ad666641164a153dca9a43b11d, Andrew fixed
> > truncate_complete_page, because it called cancel_dirty_page (and thus
> > cleared PG_dirty) after try_to_free_buffers was called via
> > do_invalidatepage.
> > 
> > Now, if I'm not mistaken, we can end up as follows.
> > 
> > truncate_complete_page()
> >   cancel_dirty_page() // PG_dirty cleared, decr. dirty pages
> >   do_invalidatepage()
> >     ext3_invalidatepage()
> >       journal_invalidatepage()
> >         journal_unmap_buffer()
> >           __dispose_buffer()
> >             __journal_unfile_buffer()
> >               __journal_temp_unlink_buffer()
> >                 mark_buffer_dirty(); // PG_dirty set, incr. dirty pages
> > 
> > If journal_unmap_buffer then returns 0, try_to_free_buffers is not
> > called and neither is cancel_dirty_page, so the dirty pages accounting
> > is not decreased again.
>   Yes, this can happen. The call to mark_buffer_dirty() is a fallout
> from journal_unfile_buffer() trying to synchronise the JBD private dirty bit
> (jbddirty) with the standard dirty bit. We could actually clear the
> jbddirty bit before calling journal_unfile_buffer() so that this doesn't
> happen but since Linus changed remove_from_pagecache() to not care about
> redirtying the page I guess it's not needed any more...
  Oops, sorry, I spoke too soon. After thinking more about it, I think we
cannot clear the dirty bit (at least not jbddirty) in all cases and in
fact moving cancel_dirty_page() after do_invalidatepage() call only
hides the real problem.
  Let's recap what JBD/ext3 code requires in case of truncation.  A
life-cycle of a journaled buffer looks as follows: When we want to write
some data to it, it gets attached to the running transaction. When the
transaction is committing, the buffer is written to the journal.
Sometime later, the buffer is written to its final place in the
filesystem - this is called checkpoint - and can be released.
  Now suppose a write to the buffer happens in one transaction and you
truncate the buffer in the next one. You cannot just free the buffer
immediately - it can for example happen, that the transaction in which
the write happened hasn't committed yet. So we just leave the dirty
buffer there and it should be cleaned up later when the committing
transaction writes the data where it needs...
  The problem is that when the commit code writes the buffer, it
eventually calls try_to_free_buffers() but as Nick pointed out,
->mapping is set to NULL by that time so we don't even call
cancel_dirty_page() and so the number of dirty pages is not properly
decreased. Of course, we could decrease the number of dirty pages after
we return from do_invalidatepage() when clearing ->mapping, but that would
make dirty accounting imprecise - we really still have dirty data
that needs writeout. But it's probably the best workaround I can
currently think of.

								Honza
Comment 66 Anonymous Emailer 2007-12-20 08:26:46 UTC
Reply-To: torvalds@linux-foundation.org



On Thu, 20 Dec 2007, Björn Steinbrink wrote:
> 
> OK, so I looked for PG_dirty anyway.
> 
> In 46d2277c796f9f4937bfa668c40b2e3f43e93dd0 you made try_to_free_buffers
> bail out if the page is dirty.
> 
> Then in 3e67c0987d7567ad666641164a153dca9a43b11d, Andrew fixed
> truncate_complete_page, because it called cancel_dirty_page (and thus
> cleared PG_dirty) after try_to_free_buffers was called via
> do_invalidatepage.
> 
> Now, if I'm not mistaken, we can end up as follows.
> 
> truncate_complete_page()
>   cancel_dirty_page() // PG_dirty cleared, decr. dirty pages
>   do_invalidatepage()
>     ext3_invalidatepage()
>       journal_invalidatepage()
>         journal_unmap_buffer()
>           __dispose_buffer()
>             __journal_unfile_buffer()
>               __journal_temp_unlink_buffer()
>                 mark_buffer_dirty(); // PG_dirty set, incr. dirty pages

Good, this seems to be the exact path that actually triggers it. I got to 
journal_unmap_buffer(), but was too lazy to actually then bother to follow 
it all the way down - I decided that I didn't actually really even care 
what the low-level FS layer did, I had already convinced myself that it 
obviously must be dirtying the page some way, since that matched the 
symptoms exactly (ie only the journaling case was impacted, and this was 
all about the journal).

But perhaps more importantly: regardless of what the low-level filesystem 
did at that point, the VM accounting shouldn't care, and should be robust 
in the face of a low-level filesystem doing strange and wonderful things. 
But thanks for bothering to go through the whole history and figure out 
what exactly is up.

> As try_to_free_buffers got its ext3 hack back in
> ecdfc9787fe527491baefc22dce8b2dbd5b2908d, maybe
> 3e67c0987d7567ad666641164a153dca9a43b11d should be reverted? (Except for
> the accounting fix in cancel_dirty_page, of course).

Yes, I think we have room for cleanups now, and I agree: we ended up 
reinstating some questionable code in the VM just because we didn't really 
know or understand what was going on in the ext3 journal code. 

Of course, it may well be that there is something *else* going on too, but 
I do believe that this whole case is what it was all about, and the hacks 
end up just (a) making the VM harder to understand (because they cause 
non-obvious VM code to work around some very specific filesystem 
behaviour) and (b) the hacks obviously hid the _real_ issue, but I think 
we've established the real cause, and the hacks clearly weren't enough to 
really hide it 100% anyway.

However, there's no way I'll play with that right now (I'm planning on an 
-rc6 today), but it might be worth it to make a test-cleanup patch for -mm 
which does some VM cleanups:

 - don't touch dirty pages in fs/buffer.c (ie undo the meat of commit 
   ecdfc9787fe527491baefc22dce8b2dbd5b2908d, but not resurrecting the 
   debugging code)

 - remove the calling of "cancel_dirty_page()" entirely from 
   "truncate_complete_page()", and let "remove_from_page_cache()" just 
   always handle it (and probably just add a "ClearPageDirty()" to match 
   the "ClearPageUptodate()").

 - remove "cancel_dirty_page()" from "truncate_huge_page()", which seems 
   to be the exact same issue (ie we should just use the logic in 
   remove_from_page_cache()).

at that point "cancel_dirty_page()" literally is only used for what its 
name implies, and the only in-tree use of it seems to be NFS for when 
the filesystem gets called for ->invalidatepage - which makes tons of 
conceptual sense, but I suspect we could drop it from there too, since the 
VM layer _will_ cancel the dirtiness at a VM level when it then later 
removes it from the page cache.

So we essentially seem to be able to simplify things a bit by getting rid 
of a hack in try_to_free_buffers(), and potentially getting rid of 
cancel_dirty_page() entirely.

It would imply that we need to do the task_io_account_cancelled_write() 
inside "remove_from_page_cache()", but that should be ok (I don't see any 
locking issues there).

> On a side note, before 8368e328dfe1c534957051333a87b3210a12743b the task
> io accounting for cancelled writes always happened if the page
> was dirty, regardless of page->mapping. This was also already true for
> the old test_clear_page_dirty code, and the commit log for
> 8368e328dfe1c534957051333a87b3210a12743b doesn't mention that semantic
> change either, so maybe the "if (account_size)" block should be moved
> out of the if "(mapping && ...)" block?

I think the "if (account_size)" thing was *purely* for me being worried 
about hugetlb entries, and I think that's the only thing that passes in a 
zero account size.

But hugetlbfs already has BDI_CAP_NO_ACCT_DIRTY set (exactly because we 
cannot account for those pages *anyway*), so I think we could go further 
than move the account_size outside of the test, I think we could probably 
remove that test entirely and drop the whole thing.

The thing is, task_io_account_cancelled_write() doesn't make sense on 
mappings that don't do dirty accounting, since those mappings are all 
special anyway: they don't actually do any real IO, they are all in-ram 
things. So I think it should stay inside the

	if (mapping && mapping_cap_account_dirty(mapping))
		..

test, simply because I don't think it makes any conceptual sense outside 
of it.

Hmm?

But none of this seems really critical - just simple cleanups.

			Linus
Comment 67 Jan Kara 2007-12-20 09:26:20 UTC
> On Thu, 20 Dec 2007, Björn Steinbrink wrote:
> > 
> > OK, so I looked for PG_dirty anyway.
> > 
> > In 46d2277c796f9f4937bfa668c40b2e3f43e93dd0 you made try_to_free_buffers
> > bail out if the page is dirty.
> > 
> > Then in 3e67c0987d7567ad666641164a153dca9a43b11d, Andrew fixed
> > truncate_complete_page, because it called cancel_dirty_page (and thus
> > cleared PG_dirty) after try_to_free_buffers was called via
> > do_invalidatepage.
> > 
> > Now, if I'm not mistaken, we can end up as follows.
> > 
> > truncate_complete_page()
> >   cancel_dirty_page() // PG_dirty cleared, decr. dirty pages
> >   do_invalidatepage()
> >     ext3_invalidatepage()
> >       journal_invalidatepage()
> >         journal_unmap_buffer()
> >           __dispose_buffer()
> >             __journal_unfile_buffer()
> >               __journal_temp_unlink_buffer()
> >                 mark_buffer_dirty(); // PG_dirty set, incr. dirty pages
> 
> Good, this seems to be the exact path that actually triggers it. I got to 
> journal_unmap_buffer(), but was too lazy to actually then bother to follow 
> it all the way down - I decided that I didn't actually really even care 
> what the low-level FS layer did, I had already convinced myself that it 
> obviously must be dirtying the page some way, since that matched the 
> symptoms exactly (ie only the journaling case was impacted, and this was 
> all about the journal).
> 
> But perhaps more importantly: regardless of what the low-level filesystem 
> did at that point, the VM accounting shouldn't care, and should be robust 
> in the face of a low-level filesystem doing strange and wonderful things. 
> But thanks for bothering to go through the whole history and figure out 
> what exactly is up.
  As I wrote in my previous email, this solution works but hides the
fact that the page really *has* dirty data in it and *is* pinned in memory
until the commit code gets to writing it. So in theory it could disturb
the writeout logic by having more dirty data in memory than the VM thinks it
has. Not that I'd have a better fix now but I wanted to point out this
problem.

								Honza
Comment 68 Anonymous Emailer 2007-12-20 11:24:54 UTC
Reply-To: torvalds@linux-foundation.org



On Thu, 20 Dec 2007, Jan Kara wrote:
>
>   As I wrote in my previous email, this solution works but hides the
> fact that the page really *has* dirty data in it and *is* pinned in memory
> until the commit code gets to writing it. So in theory it could disturb
> the writeout logic by having more dirty data in memory than vm thinks it
> has. Not that I'd have a better fix now but I wanted to point out this
> problem.

Well, I worry more about the VM being sane - and by the time we actually 
hit this case, as far as VM sanity is concerned, the page no longer really 
exists. It's been removed from the page cache, and it only really exists 
as any other random kernel allocation.

The fact that low-level filesystems (in this case ext3 journaling) do 
their own insane things is not something the VM even _should_ care about. 
It's just an internal FS allocation, and the FS can do whatever the hell 
it wants with it, including doing IO etc.

The kernel doesn't consider any other random IO pages to be "dirty" either 
(eg if you do direct-IO writes using low-level SCSI commands, the VM 
doesn't consider that to be any special dirty stuff, it's just random page 
allocations again). This is really no different.

In other words: the Linux "VM" subsystem is really two different parts: the 
low-level page allocator (which obviously knows that the page is still in 
*use*, since it hasn't been free'd), and the higher-level file mapping and 
caching stuff that knows about things like page "dirtiness". And once 
you've done a "remove_from_page_cache()", the higher levels are no longer 
involved, and dirty accounting simply doesn't get into the picture.

			Linus
Comment 69 Anonymous Emailer 2007-12-20 14:28:32 UTC
Reply-To: B.Steinbrink@gmx.de

On 2007.12.20 08:25:56 -0800, Linus Torvalds wrote:
> 
> 
> On Thu, 20 Dec 2007, Björn Steinbrink wrote:
> > 
> > OK, so I looked for PG_dirty anyway.
> > 
> > In 46d2277c796f9f4937bfa668c40b2e3f43e93dd0 you made try_to_free_buffers
> > bail out if the page is dirty.
> > 
> > Then in 3e67c0987d7567ad666641164a153dca9a43b11d, Andrew fixed
> > truncate_complete_page, because it called cancel_dirty_page (and thus
> > cleared PG_dirty) after try_to_free_buffers was called via
> > do_invalidatepage.
> > 
> > Now, if I'm not mistaken, we can end up as follows.
> > 
> > truncate_complete_page()
> >   cancel_dirty_page() // PG_dirty cleared, decr. dirty pages
> >   do_invalidatepage()
> >     ext3_invalidatepage()
> >       journal_invalidatepage()
> >         journal_unmap_buffer()
> >           __dispose_buffer()
> >             __journal_unfile_buffer()
> >               __journal_temp_unlink_buffer()
> >                 mark_buffer_dirty(); // PG_dirty set, incr. dirty pages
> 
> Good, this seems to be the exact path that actually triggers it. I got to 
> journal_unmap_buffer(), but was too lazy to actually then bother to follow 
> it all the way down - I decided that I didn't actually really even care 
> what the low-level FS layer did, I had already convinced myself that it 
> obviously must be dirtying the page some way, since that matched the 
> symptoms exactly (ie only the journaling case was impacted, and this was 
> all about the journal).
> 
> But perhaps more importantly: regardless of what the low-level filesystem 
> did at that point, the VM accounting shouldn't care, and should be robust 
> in the face of a low-level filesystem doing strange and wonderful things. 
> But thanks for bothering to go through the whole history and figure out 
> what exactly is up.

Oh well, after seeing the move of cancel_dirty_page, I just went
backwards from __set_page_dirty using cscope + some smart guessing and
quickly ended up at ext3_invalidatepage, so it wasn't that hard :-)

> > As try_to_free_buffers got its ext3 hack back in
> > ecdfc9787fe527491baefc22dce8b2dbd5b2908d, maybe
> > 3e67c0987d7567ad666641164a153dca9a43b11d should be reverted? (Except for
> > the accounting fix in cancel_dirty_page, of course).
> 
> Yes, I think we have room for cleanups now, and I agree: we ended up 
> reinstating some questionable code in the VM just because we didn't really 
> know or understand what was going on in the ext3 journal code. 

Hm, you attributed more to my mail than there was actually in it. I
didn't even start to think of cleanups (because I don't know jack about
the whole ext3/jbd stuff, so I simply cannot come up with any cleanups
(yet?)). What I meant is that we only did a half-revert of that hackery.

When try_to_free_buffers started to check for PG_dirty, the
cancel_dirty_page call had to be called before do_invalidatepage, to
"fix" a _huge_ leak. But that caused the accounting breakage we're now
seeing, because we never account for the pages that got redirtied during
do_invalidatepage.

Then the change to try_to_free_buffers got reverted, so we no longer
need to call cancel_dirty_page before do_invalidatepage, but still we
do. Thus the accounting bug remains. So what I meant to suggest was
simply to actually "finish" the revert we started.

Or expressed as a patch:

diff --git a/mm/truncate.c b/mm/truncate.c
index cadc156..2974903 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -98,11 +98,11 @@ truncate_complete_page(struct address_space *mapping, struct page *page)
 	if (page->mapping != mapping)
 		return;
 
-	cancel_dirty_page(page, PAGE_CACHE_SIZE);
-
 	if (PagePrivate(page))
 		do_invalidatepage(page, 0);
 
+	cancel_dirty_page(page, PAGE_CACHE_SIZE);
+
 	remove_from_page_cache(page);
 	ClearPageUptodate(page);
 	ClearPageMappedToDisk(page);

I'll be the last one to comment on whether or not that causes inaccurate
accounting, so I'll just watch you and Jan battle that out until someone
comes up with a post-.24 patch to provide a clean fix for the issue.

Krzysztof, could you give this patch a test run?

If that "fixes" the problem for now, I'll try to come up with some
usable commit message, or if someone wants to beat me to it, you can
already have my

Signed-off-by: Björn Steinbrink <B.Steinbrink@gmx.de>
Comment 70 Anonymous Emailer 2007-12-20 18:04:16 UTC
Reply-To: nickpiggin@yahoo.com.au

On Friday 21 December 2007 06:24, Linus Torvalds wrote:
> On Thu, 20 Dec 2007, Jan Kara wrote:
> >   As I wrote in my previous email, this solution works but hides the
> > fact that the page really *has* dirty data in it and *is* pinned in
> > memory until the commit code gets to writing it. So in theory it could
> > disturb the writeout logic by having more dirty data in memory than vm
> > thinks it has. Not that I'd have a better fix now but I wanted to point
> > out this problem.
>
> Well, I worry more about the VM being sane - and by the time we actually
> hit this case, as far as VM sanity is concerned, the page no longer really
> exists. It's been removed from the page cache, and it only really exists
> as any other random kernel allocation.

It does allow the VM to just not worry about this. However I don't
really like these kinds of catch-all conditions that are hard to get
rid of and can encourage bad behaviour.

It would be nice if the "insane" things were made to clean up after
themselves.


> The fact that low-level filesystems (in this case ext3 journaling) do
> their own insane things is not something the VM even _should_ care about.
> It's just an internal FS allocation, and the FS can do whatever the hell
> it wants with it, including doing IO etc.
>
> The kernel doesn't consider any other random IO pages to be "dirty" either
> (eg if you do direct-IO writes using low-level SCSI commands, the VM
> doesn't consider that to be any special dirty stuff, it's just random page
> allocations again). This is really no different.
>
> In other words: the Linux "VM" subsystem is really two different parts: the
> low-level page allocator (which obviously knows that the page is still in
> *use*, since it hasn't been free'd), and the higher-level file mapping and
> caching stuff that knows about things like page "dirtiness". And once
> you've done a "remove_from_page_cache()", the higher levels are no longer
> involved, and dirty accounting simply doesn't get into the picture.

That's all true... it would simply be nice to ask the filesystems to do
this. But anyway I think your patch is pretty reasonable for the moment.
Comment 71 Krzysztof Oledzki 2007-12-21 11:59:42 UTC

On Thu, 20 Dec 2007, Björn Steinbrink wrote:
Comment 72 Jan Kara 2008-01-25 06:15:13 UTC
This bug is fixed - closing...
