Most recent kernel where this bug did *NOT* occur: 2.6.10
Distribution: Kubuntu
Hardware Environment: AMD64 X2, GeForce 6100 north bridge, nForce 430 south bridge, forcedeth driver
Software Environment:
Problem Description: dmesg shows the above message and a dead network. netstat shows piles of FIN_WAIT1 connections, using the same /proc/sys/net/ipv4 settings.

Steps to reproduce: Update the Edgy 2.6.17-10 kernel with the vanilla 2.6.19.1 patch. Turned on the DMA engine and nv_sata with k7 (32-bit kernel) optimizations (the only changes). Used gtk-gnutella or any P2P client that really wrings out the IP stack. A few hours later, bandwidth slows to a crawl and only currently open connections stay alive.

Bringing the network down and back up to clear the tables doesn't remove the FIN_WAIT1s. Any workarounds, or a new setting the kernel needs that I overlooked, would be greatly appreciated. Cheers.
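For anyone hitting the same symptom, a quick sketch of how to check whether the conntrack table is actually full, assuming a 2.6.x kernel with the old ip_conntrack module (newer kernels use nf_conntrack paths instead):

```shell
# Configured maximum number of tracked connections
cat /proc/sys/net/ipv4/ip_conntrack_max
# Number of connections currently being tracked
wc -l < /proc/net/ip_conntrack
# Sockets lingering in FIN_WAIT1 as seen by netstat
netstat -tn | grep -c FIN_WAIT1
```

If the tracked-connection count is at or near ip_conntrack_max, the "table full, dropping packet" messages follow directly from that.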
On Sun, 31 Dec 2006 19:03:34 -0800 bugme-daemon@bugzilla.kernel.org wrote:
> http://bugzilla.kernel.org/show_bug.cgi?id=7757
>
> Summary: ip_conntrack: table full, dropping connection after kernel update
> Kernel Version: 2.6.19.1
> Status: NEW
> Severity: normal
> Owner: shemminger@osdl.org
> Submitter: joelol75@verizon.net
I recompiled with the "Support for DMA engines / Network: TCP receive copy offload" option turned off and updated gtk-gnutella to 0.96.4u SVN, but this didn't solve the problem. Bringing eth0 down and back up doesn't immediately clear the FIN_WAIT1s; they do clear out after a short time, but only if you bring the card down and up. Could this be a forcedeth driver problem? Thanks again... Joel
Dual-booted into the old kernel and had the same problem, but I found a workaround. /proc/sys/net/ipv4/tcp_max_orphans was set to 32768; I changed it to 16. /proc/sys/net/ipv4/tcp_orphan_retries was set to 0; I set it to 1 (maybe 0 is better and I went the wrong way?). Now only 16 stay in the list, and they eventually time out and die instead of accumulating after I shut the P2P client down. Isn't 32768 too high for a default? I know connections should be properly closed, but why is this burden so high on the server side? I still can't figure out why the old kernel seemed fine until the new one was installed. I really didn't pound on it as much; maybe I just didn't notice. I mean, there were hundreds of FIN_WAIT1 sockets 'hung' after an hour or two of heavy P2P downloads that would start out at a reasonable speed and then slow to a crawl. Any tips/settings appreciated. Thanks in advance.
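The workaround above amounts to two /proc writes. A minimal sketch, run as root, using the values as reported (these are the submitter's choices, not recommended tuning):

```shell
# tcp_max_orphans: cap on orphaned (no-process) sockets; default was 32768
echo 16 > /proc/sys/net/ipv4/tcp_max_orphans
# tcp_orphan_retries: FIN retransmit attempts for orphans; default was 0
echo 1 > /proc/sys/net/ipv4/tcp_orphan_retries

# Equivalent via sysctl, if preferred:
# sysctl -w net.ipv4.tcp_max_orphans=16
# sysctl -w net.ipv4.tcp_orphan_retries=1
```

Note these settings do not persist across reboots unless also added to /etc/sysctl.conf.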
> shemminger@osdl.org changed:
>
> What      | Removed             | Added
> ----------|---------------------|---------------------------------------------------
> Owner     | shemminger@osdl.org | networking_netfilter-iptables@kernel-bugs.osdl.org
> Component | IPV4                | Netfilter/Iptables

This is not a netfilter bug, and it doesn't look like a bug at all.
This bug was caused by the gtk-gnutella package and has been resolved. Sorry for any inconvenience; you may close this bug.