Hello, I am using the latest skge driver from the kernel git. My ethernet controller is onboard (ASUS A8N):

    lspci: 0000:05:0c.0 Ethernet controller: Marvell Technology Group Ltd.
           Yukon Gigabit Ethernet 10/100/1000Base-T Adapter (rev 13)

With 3GB RAM the driver works perfectly:

    PING test.test (192.168.0.3) 56(84) bytes of data.
    64 bytes from test.test (192.168.0.3): icmp_seq=1 ttl=128 time=0.253 ms
    64 bytes from test.test (192.168.0.3): icmp_seq=2 ttl=128 time=0.233 ms

With >4GB RAM the driver no longer works. It is very slow, and after 11 packets nothing can be transmitted any more. After restarting ping, the same game begins again:

    PING test.test (192.168.0.3) 56(84) bytes of data.
    64 bytes from test.test (192.168.0.3): icmp_seq=1 ttl=128 time=1012 ms
    64 bytes from test.test (192.168.0.3): icmp_seq=2 ttl=128 time=1000 ms
    64 bytes from test.test (192.168.0.3): icmp_seq=3 ttl=128 time=1000 ms
    ...
    64 bytes from test.test (192.168.0.3): icmp_seq=11 ttl=128 time=1000 ms
    ping: sendmsg: No buffer space available
    ping: sendmsg: No buffer space available
    ping: sendmsg: No buffer space available
    ...
It may be a BIOS/chipset issue that blocks DMA above 4G.
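For what it's worth, if the chipset really does block DMA above 4G, a driver-side way to dodge it is to refuse high addresses entirely, so every buffer is mapped (or bounced) below the boundary. A minimal sketch, assuming the stock PCI DMA-mask API (older trees spell the mask DMA_32BIT_MASK instead of DMA_BIT_MASK(32)); the function name is made up for illustration:

    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    /* Hypothetical workaround: restrict the device to 32-bit DMA so the
     * core allocates/bounces all streaming buffers below 4G. */
    static int limit_dma_to_32bit(struct pci_dev *pdev)
    {
            int err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
            if (err) {
                    dev_err(&pdev->dev, "no usable 32-bit DMA configuration\n");
                    return err;
            }
            /* Keep coherent allocations (descriptor rings) below 4G too. */
            return pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
    }

This trades away HIGHDMA performance for correctness on the broken boards, which is why it is only a workaround, not a fix.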
It appears that the nForce and VIA chipsets have problems and don't always enable the IOMMU with >4G of RAM. No code fix is needed at this time.
All other network cards I tested (forcedeth, via_rhine) work without problems. What do you mean by "have problems and don't always enable the IOMMU with >4G of RAM"?
Okay: any driver that allows HIGHDMA (i.e. addresses > 4G) needs the chipset to allow DMA access above 4G. via_rhine doesn't support HIGHDMA. Some versions of forcedeth support 39-bit DMA and some don't; it seems to be hardware dependent. Some motherboards apparently don't fully support PCI access to the full memory space, so in that case the sky2 hardware probably DMAs garbage. It would be up to the BIOS and ACPI to communicate this restriction down; it is not the driver's responsibility. On AMD64, if you force the IOMMU on, or the BIOS enables it, the hardware remaps PCI accesses into memory and the problem is solved.
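To make the HIGHDMA point concrete, here is the usual probe-time negotiation pattern (a sketch of the generic idiom, not skge's exact code): the driver advertises NETIF_F_HIGHDMA only if the device accepts a 64-bit DMA mask, and otherwise falls back to 32-bit so the stack never hands it a buffer above 4G.

    #include <linux/pci.h>
    #include <linux/netdevice.h>
    #include <linux/dma-mapping.h>

    static int setup_dma(struct pci_dev *pdev, struct net_device *dev)
    {
            if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
                    /* Device claims full 64-bit addressing: let the stack
                     * pass us pages above 4G. */
                    dev->features |= NETIF_F_HIGHDMA;
            } else if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) {
                    dev_err(&pdev->dev, "no usable DMA configuration\n");
                    return -EIO;
            }
            return 0;
    }

The catch described above is that the mask only negotiates what the *device* can address; if the chipset silently drops or corrupts PCI cycles above 4G, the driver has no way to know, which is why the restriction has to come from the BIOS/ACPI or be papered over by the IOMMU.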
This appears to be a motherboard issue, not a driver or network-board issue; I can't reproduce it on my hardware. And a private email from Andi Kleen reported that some nForce motherboards just don't seem to allow PCI access above 4G. Forcing the IOMMU on is a reasonable workaround.
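For reference, the force-on knob is the stock x86-64 boot parameter iommu=force, passed on the kernel command line from the boot loader, e.g. in a GRUB entry (paths here are illustrative):

    kernel /vmlinuz root=/dev/sda1 ro iommu=force

The boot log should then show the GART IOMMU being initialized; whether that actually cures the stalls on the affected boards is something the reporter would need to confirm.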