Tested kernels with the issue: 3.7.5, 3.7.0
Tested kernels without the issue: 3.6.11, 3.4.28 and older (CentOS 6 x86_64)

All kernels are compiled from the same config (http://pastebin.com/SPbgAAVt).

A screenshot from a router with 4 NICs on bond0 (eth0, eth1) and bond1 (eth2, eth3):
http://s17.postimage.org/f5a4owrsf/bonding.png

STEPS TO REPRODUCE ON TEST HOST:

Sorry, I have no managed switch with 802.3ad support at home, so for this bug report I use VirtualBox with two bridged adapters on the guest, attached to TAP devices on the host.

STEP 1. VIRTUALBOX HOST – INIT:

# cat /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bond0 mode=4 xmit_hash_policy=layer3+4 miimon=100

# ip tuntap add dev tap0 mode tap
# ip tuntap add dev tap1 mode tap
# modprobe -v bond0
# ip l set dev tap0 up
# ip l set dev tap1 up
# ip l set dev bond0 up
# ifenslave bond0 tap0 tap1
# ip a add 192.168.10.10/24 dev bond0

STEP 2. VIRTUALBOX GUEST – INIT:

# cat /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bond0 mode=4 xmit_hash_policy=layer3+4 miimon=100

# modprobe -v bond0
# ip l set dev p2p1 up
# ip l set dev p7p1 up
# ip l set dev bond0 up
# ifenslave bond0 p2p1 p7p1
# ip a add 192.168.10.20/24 dev bond0
# ip r add default via 192.168.10.10

STEP 3. VIRTUALBOX HOST – INIT SNIFFERS:

# tcpdump -n -nn -l -i tap0 > tap0.txt &
# tcpdump -n -nn -l -i tap1 > tap1.txt &

STEP 4. VIRTUALBOX HOST – GENERATE SOME TRAFFIC:

# nmap -sU -v -T 5 192.168.10.20

On 3.7.5 and 3.7.0 the bonding places all traffic on a single interface only (tap0 or tap1). On 3.6.11, 3.4.28 and older kernels the bonding distributes the traffic across both interfaces (tap0 and tap1).
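To put a number on the distribution, the captures from STEP 3 can be summarized with a small sketch (assuming tap0.txt and tap1.txt were produced by the tcpdump commands above; with -l, stdout is roughly one line per packet):

```shell
# Count captured packets per slave. On 3.6.11 both counts are non-zero;
# on 3.7.x one of them stays at (almost) zero.
for f in tap0.txt tap1.txt; do
    echo "$f: $(wc -l < "$f") packets"
done
```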
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
	Aggregator ID: 1
	Number of ports: 2
	Actor Key: 5
	Partner Key: 17
	Partner Mac Address: 08:00:27:af:dd:45

Slave Interface: tap0
MII Status: up
Speed: 10 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: ba:2f:b5:25:d0:67
Aggregator ID: 1
Slave queue ID: 0

Slave Interface: tap1
MII Status: up
Speed: 10 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 6a:ce:e4:d0:0c:05
Aggregator ID: 1
Slave queue ID: 0

Any ideas? Thanks
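For what it's worth, the aggregator membership looks fine: both slaves report the same Aggregator ID as the active aggregator, so the LAG itself formed correctly and the problem is only in TX slave selection. A quick filter over the /proc output (the active aggregator's own ID line is indented, so it is not matched):

```shell
# Show which aggregator each slave joined; both should match the
# "Active Aggregator Info" ID (1 in the output above).
grep -E '^(Slave Interface|Aggregator ID)' /proc/net/bonding/bond0
```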
Bonding distributes traffic by hashing on flow. To test with multiple flows you need to generate multiple concurrent connections with something like netperf or iperf.
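To make the flow-hashing point concrete: with xmit_hash_policy=layer3+4 the slave is picked per flow from the ports and IP addresses, so many distinct ports should spread across both slaves. A simplified model (my own sketch of the idea, not the exact kernel code in drivers/net/bonding/bond_main.c):

```shell
#!/bin/bash
# Rough model (assumption): slave index for layer3+4 is approximately
#   ((sport XOR dport) XOR (low 16 bits of saddr XOR daddr)) mod n_slaves
sip=$((0xC0A80A0A))   # 192.168.10.10
dip=$((0xC0A80A14))   # 192.168.10.20
ipxor=$(( (sip ^ dip) & 0xFFFF ))
n=2                   # number of slaves
sport=40000
for dport in 1000 1001 1002 1003; do
    slave=$(( ((sport ^ dport) ^ ipxor) % n ))
    echo "dport=$dport -> slave $slave"
done
```

With these (assumed) addresses the result alternates between slave 0 and slave 1 as the port changes, which is why a port sweep like nmap or many iperf flows should land on both interfaces.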
Hi, Stephen

Thank you for your answer. I tried iperf and found interesting things for you:

1) bonding (802.3ad) works perfectly with iptables OUTPUT traffic;
2) bonding (802.3ad) DOESN'T work with iptables FORWARD traffic!

The new small iperf test script (test.sh):

#!/bin/bash
for port in {1000..2000}; do
    iperf -c 192.168.10.20 --port $port --udp --bandwidth 1K --time 600 > /dev/null 2>&1 &
done

--------------------------------------------------------------------------------
I. TEST OUTPUT TRAFFIC

Run test.sh on the VirtualBox host.

Result: bonding works. Traffic is balanced between tap0 and tap1.

II. TEST FORWARD TRAFFIC

1. Turn the VirtualBox host into a router:
 * eth0: 192.168.1.10/24 with a connected notebook
 * bond0: 192.168.10.10/24 (tap0 + tap1) with the connected VirtualBox guest
 * net.ipv4.ip_forward = 1
 * iptables has no rules, FORWARD policy = ACCEPT

2. Run test.sh on the notebook (192.168.1.200).

Result: bonding DOESN'T work. It places all traffic on a single interface only (tap0 or tap1).
--------------------------------------------------------------------------------

The bug is reproduced on all my *routers* with 3.7.5 kernels too. One of them:
 * Intel Gigabit ET2 Quad Port Server Adapter (igb.ko)
 * 500/100 Mbps traffic
 * NAT, ipt_netflow
 * bond0 (eth0, eth1) + bond1 (eth2, eth3)

Screenshot: http://s17.postimage.org/f5a4owrsf/bonding.png
--------------------------------------------------------------------------------

The bug is not reproduced on 3.6.11 and older kernels on the same routers with the same configs.

--
With best regards,
Nikolay
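On the real routers (where tcpdump on every slave is impractical) the same imbalance can be watched live via the standard sysfs statistics, nothing bonding-specific (slave names assumed to be the tap devices from the VirtualBox setup; substitute eth0/eth1 on the routers):

```shell
# Per-slave TX packet counters; with forwarded traffic on 3.7.x only
# one slave's counter keeps growing.
for s in tap0 tap1; do
    echo "$s: $(cat /sys/class/net/$s/statistics/tx_packets) tx_packets"
done
```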