Bug 118191 - performance regression since dynamic halt-polling
Summary: performance regression since dynamic halt-polling
Status: NEW
Alias: None
Product: Virtualization
Classification: Unclassified
Component: kvm
Hardware: All
OS: Linux
Importance: P1 normal
Assignee: virtualization_kvm
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2016-05-13 07:03 UTC by Wolfgang Bumiller
Modified: 2016-05-13 08:08 UTC
CC: 1 user

See Also:
Kernel Version: >=4.2
Subsystem:
Regression: No
Bisected commit-id:


Attachments

Description Wolfgang Bumiller 2016-05-13 07:03:35 UTC
Since commit aca6ff29c40 (KVM: dynamic halt-polling), a VM with a virtio network device produces extremely high CPU usage on the host when under network load.

Bisected on git://github.com/torvalds/linux master

Testcase:

Host: kernel built from the above-mentioned commit onward (on a Debian-based Linux, PVE 4.2)
Using iperf to test:
 $ iperf -us

Guest VM: qemu/kvm (running linux) with this network device:
 -netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on
 -device virtio-net-pci,mac=32:32:37:39:33:62,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300
Connecting to the iperf server via:
 $ iperf -uc 10.0.0.1 -b 100m

Behavior before the commit: ~60% CPU usage.
After the commit: 100% CPU usage and much lower network throughput.

Related external threads:
http://pve.proxmox.com/pipermail/pve-user/2016-May/010302.html
https://forum.proxmox.com/threads/wrong-cpu-usage.27080/

The perf output linked in the mailing list thread matches what I see in my testcase after the commit.
(<https://gist.github.com/gilou/15b620a7a067fd1d58a7616942e025b4#file-perf_virtionet_4-4-txt>)
Whereas before the commit, the perf output for the KVM process starts with:
5.93% [kernel] [k] kvm_arch_vcpu_ioctl_run
4.92% [kernel] [k] vmx_vcpu_run
3.62% [kernel] [k] native_write_msr_safe
3.14% [kernel] [k] _raw_spin_lock_irqsave

And vhost-net with:
9.66% [kernel] [k] __br_fdb_get
4.55% [kernel] [k] copy_user_enhanced_fast_string
3.01% [kernel] [k] update_cfs_shares
2.43% [kernel] [k] __netif_receive_skb_core
2.25% [kernel] [k] vhost_worker

Loading the kvm module with the two new parameters introduced by the above commit set to 0 restores the original CPU usage from before (which makes sense, since as far as I can tell this reverts to the old behavior):
  halt_poll_ns_grow=0 halt_poll_ns_shrink=0
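For reference, module parameters like these can also be changed at runtime through sysfs, without reloading the module. A minimal sketch (assuming the kvm module is loaded, the parameters are writable, and you have root):

```shell
# Runtime: zero the grow/shrink parameters via sysfs, disabling the
# dynamic adjustment introduced by aca6ff29c40 without a module reload.
echo 0 > /sys/module/kvm/parameters/halt_poll_ns_grow
echo 0 > /sys/module/kvm/parameters/halt_poll_ns_shrink

# Persistent alternative, applied at module load time, e.g. in
# /etc/modprobe.d/kvm.conf:
#   options kvm halt_poll_ns_grow=0 halt_poll_ns_shrink=0
```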
Comment 1 Wanpeng Li 2016-05-13 07:54:18 UTC
> Behavior before the commit: ~60% cpu usage
> After the commit: 100% cpu usage

I think this is the expected behavior. It does not influence scheduling on the host, because polling stops immediately once another candidate task appears.

> ...and much lower network throughput.

There is a trade-off between throughput and latency: for throughput-oriented workloads, you can disable dynamic halt-polling via the interfaces you have already found.
Comment 2 Wanpeng Li 2016-05-13 08:08:45 UTC
Alternatively, you can set halt_poll_ns to zero to disable halt polling entirely, instead of using the other interfaces.
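As a sketch (same assumptions as above: kvm module loaded, writable parameter, root):

```shell
# Disable halt polling altogether by zeroing the base polling interval.
echo 0 > /sys/module/kvm/parameters/halt_poll_ns

# Equivalent at module load time:
#   modprobe kvm halt_poll_ns=0
```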
