Currently, nested VMX does not allow L1 to use the MSR load/store feature on entry/exit. The code fails the entry (from L1 to L2) if these VMCS features are used:

	if (vmcs12->vm_entry_msr_load_count > 0 ||
	    vmcs12->vm_exit_msr_load_count > 0 ||
	    vmcs12->vm_exit_msr_store_count > 0) {
		pr_warn_ratelimited("%s: VMCS MSR_{LOAD,STORE} unsupported\n",
				    __func__);
		nested_vmx_failValid(vcpu, VMXERR_ENTRY_INVALID_CONTROL_FIELD);
		return 1;
	}

This was not a big problem as long as L1 is also KVM, because KVM itself did not use this feature: it only needs it in the EPT case, for switching EFER (see the explanation in http://kerneltrap.org/mailarchive/linux-kvm/2010/5/2/6261577), and there a simpler alternative exists: supporting VM_ENTRY/EXIT_LOAD_IA32_EFER is enough. So this is what we did in the nested EPT patches proposed in bug 53611. However, it is likely that other L1 hypervisors (or even KVM in the future) will need the generic MSR load/store feature.
To support this feature correctly, I think we cannot pass the MSR-array address supplied by L1 (in vmcs12) directly to the processor (in vmcs02). Instead, we should loop over the entries in the L1-supplied array ourselves, applying each one through KVM's normal MSR read/write emulation.
Fixed by commit ff651cb613b4 (KVM: nVMX: Add nested msr load/restore algorithm, 2014-12-11).