Bug 54161

Summary: nVMX: nested vpid
Product: Virtualization
Reporter: Nadav Har'El (nyh)
Component: kvm
Assignee: virtualization_kvm
Status: NEW
Severity: enhancement
Priority: P1
Hardware: All
OS: Linux
Kernel Version:
Subsystem:
Regression: No
Bisected commit-id:
Bug Depends on:
Bug Blocks: 94971, 53601

Description Nadav Har'El 2013-02-20 15:19:19 UTC
Currently, support for VPID in nested VMX is minimal: L1 doesn't see the VPID feature and doesn't use it, but L0 may use it (if available) when running the guests. We use the same vpid to run L1 and all of its L2 guests, but call vmx_flush_tlb(vcpu) when going back and forth between L1 and L2. This flushes only that particular vpid, so at least the cached entries of other guests are left intact.
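
To make the current scheme concrete, here is a minimal, hypothetical C sketch of the behaviour just described (the names below are illustrative, not the actual KVM identifiers): one vpid is shared by L1 and all of its L2s, and it is flushed on every emulated transition between them.

#include <stdint.h>

/* Hypothetical per-vCPU state, for this sketch only. */
struct nested_ctx {
	uint16_t vpid;          /* the single vpid shared by L1 and its L2s */
	int      running_l2;    /* nonzero while an L2 guest is active */
};

/* Stand-in for a single-context INVVPID on the given vpid. */
static void flush_vpid(uint16_t vpid)
{
	(void)vpid;             /* the hardware INVVPID would go here */
}

/* Emulated VM entry from L1 into L2: flush so L2 cannot reuse L1's
 * cached translations, since both run under the same vpid. */
static void sketch_enter_l2(struct nested_ctx *ctx)
{
	flush_vpid(ctx->vpid);
	ctx->running_l2 = 1;
}

/* Emulated VM exit from L2 back to L1: flush again, for the same reason. */
static void sketch_exit_to_l1(struct nested_ctx *ctx)
{
	flush_vpid(ctx->vpid);
	ctx->running_l2 = 0;
}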

A better-performing solution would be to support "nested vpid": L0 exposes the VPID feature to L1 and maintains a mapping from the vpid numbers that L1 uses to the vpid numbers that L0 itself uses, so that L0 runs each of its guests - L1 and each of its L2s - under a separate vpid.
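
A minimal sketch of such a mapping, assuming a small fixed-size table and a trivial L0-side allocator, might look like the following. None of the names here (vpid_map, nested_get_l0_vpid, l0_alloc_hw_vpid, ...) are real KVM identifiers; this only illustrates translating L1's vpid numbers into L0's.

#include <stdint.h>

#define NESTED_VPID_SLOTS 64          /* arbitrary cap, for the sketch only */

struct vpid_map_entry {
	uint16_t l1_vpid;             /* vpid number as allocated by L1 */
	uint16_t l0_vpid;             /* vpid number L0 actually programs */
	int      in_use;
};

struct vpid_map {
	struct vpid_map_entry slot[NESTED_VPID_SLOTS];
};

/* Placeholder allocator for L0's own vpid numbers. */
static uint16_t next_hw_vpid = 1;
static uint16_t l0_alloc_hw_vpid(void)
{
	return next_hw_vpid++;
}

/* Look up (or create) the L0 vpid backing a given L1 vpid, so that each
 * L2 runs under its own hardware vpid and switching between L1 and its
 * L2s needs no TLB flush. */
static uint16_t nested_get_l0_vpid(struct vpid_map *map, uint16_t l1_vpid)
{
	int i, free_slot = -1;

	for (i = 0; i < NESTED_VPID_SLOTS; i++) {
		if (map->slot[i].in_use && map->slot[i].l1_vpid == l1_vpid)
			return map->slot[i].l0_vpid;
		if (!map->slot[i].in_use && free_slot < 0)
			free_slot = i;
	}
	if (free_slot < 0)
		return 0;     /* out of slots: vpid 0 means "flush on every switch" */

	map->slot[free_slot].l1_vpid = l1_vpid;
	map->slot[free_slot].l0_vpid = l0_alloc_hw_vpid();
	map->slot[free_slot].in_use  = 1;
	return map->slot[free_slot].l0_vpid;
}

In real code the table would presumably be per-L1 (per vCPU or per VM) and bounded by how many hardware vpids L0 can spare, with some eviction or fall-back-to-flush policy when it runs out.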

Some points to remember while doing this (this is only a partial list; a rough sketch tying these points together follows the list):

1. Remember to set vmx->vpid in nested_vmx_run() and nested_vmx_vmexit() to
L2's and L1's vpid, respectively. This is important because vmx_flush_tlb()
(called on a context switch between a guest's processes) flushes only the
current vpid (if possible), and it needs to flush the correct one.
2. In free_nested(), also free the vpids that have been allocated to all of the
L2s. This can probably be done in nested_free_saved_vmcs() (see mention above).
However, note that free_nested() is preceded (in vmx_free_vcpu()) by a call to
free_vpid(), which frees the current vmx->vpid (which might be L1's or an
L2's), so we need to make sure every vpid gets freed, and freed exactly once.
3. And, of course, also implement emulation of the INVVPID instruction for L1.
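
Continuing the vpid_map sketch above, the three points could fit together roughly as follows. nested_vmx_run(), nested_vmx_vmexit(), free_nested() and vmx->vpid are the names used in this report, but the sketch_* bodies below are purely illustrative and are not the real KVM code.

/* Hypothetical per-vCPU state, reusing struct vpid_map from the earlier sketch. */
struct vcpu_vmx_sketch {
	uint16_t vpid;               /* vpid currently programmed for this vCPU */
	uint16_t l1_hw_vpid;         /* L0's vpid for running L1 itself */
	struct vpid_map l2_vpids;    /* L0 vpids for L2s, keyed by L1's numbers */
};

/* Point 1: switch vmx->vpid when emulating entry to / exit from L2, so a
 * later vmx_flush_tlb() on a guest context switch flushes the right vpid. */
static void sketch_nested_vmx_run(struct vcpu_vmx_sketch *vmx, uint16_t l1_vpid)
{
	vmx->vpid = nested_get_l0_vpid(&vmx->l2_vpids, l1_vpid);
}

static void sketch_nested_vmx_vmexit(struct vcpu_vmx_sketch *vmx)
{
	vmx->vpid = vmx->l1_hw_vpid;
}

/* Placeholder for returning a vpid to L0's pool. */
static void free_hw_vpid(uint16_t vpid)
{
	(void)vpid;
}

/* Point 2: free every vpid handed out to L2s exactly once, regardless of
 * which vpid (L1's or an L2's) vmx->vpid happens to hold at teardown;
 * L1's own vpid is assumed to be freed once, separately, by the caller. */
static void sketch_free_nested(struct vcpu_vmx_sketch *vmx)
{
	int i;

	for (i = 0; i < NESTED_VPID_SLOTS; i++) {
		if (!vmx->l2_vpids.slot[i].in_use)
			continue;
		free_hw_vpid(vmx->l2_vpids.slot[i].l0_vpid);
		vmx->l2_vpids.slot[i].in_use = 0;
	}
}

/* Point 3: emulating INVVPID from L1 would translate L1's vpid operand
 * through the same map and issue INVVPID on the corresponding L0 vpid. */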