This is a thought-experiment bug - I haven't seen it happen in practice, so maybe it isn't a real bug, but it is worth investigating. There are, I think, cases where KVM decides that a guest is buggy or malicious and aborts that guest. But because KVM is not aware that L1 and several L2s (nested guests) are all pretending to be one guest (one vcpu), this might mean that a buggy or malicious L2 can kill the entire L1, not just that L2.

A potential example of this problem (I'm not sure it's actually correct - I thought of it a long time ago and haven't rechecked the details): in the original nested VMX submission, after L2 changed cr3, kvm_mmu_load() was called immediately, so KVM could see right away that L2 did something bad and that only L2 should be aborted. After review, this test was removed, so the code now waits until later, and when it later sees that kvm_mmu_load() fails, I think it kills the entire vcpu - L1 and all its L2s.
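
To make the distinction concrete, here is a rough sketch of the idea (not the actual original patch, and not current KVM code: the handle_l2_cr3_write() wrapper and its call site are hypothetical, and the signatures of nested_vmx_vmexit() and friends are simplified). The point is only that checking kvm_mmu_load() at the moment of L2's cr3 write lets KVM punish just the offending L2, e.g. by reflecting the failure to L1 as a vmexit, instead of killing the whole vcpu:

/*
 * Sketch only: illustrates aborting just the offending L2 vs. the
 * whole vcpu.  Function names follow KVM, but the wrapper and the
 * exact signatures are assumptions.
 */
static int handle_l2_cr3_write(struct kvm_vcpu *vcpu, unsigned long cr3)
{
        if (kvm_set_cr3(vcpu, cr3))
                return 1;       /* architecturally invalid cr3, handled elsewhere */

        /*
         * Checking right away (as the original nested VMX submission
         * did) tells us, while we still know L2 is the culprit, that
         * the new cr3 cannot actually be used.
         */
        if (kvm_mmu_load(vcpu)) {
                if (is_guest_mode(vcpu)) {
                        /*
                         * Reflect the failure to L1 as a vmexit (here,
                         * a triple fault of L2).  L1 and its other L2s
                         * keep running.
                         */
                        nested_vmx_vmexit(vcpu, EXIT_REASON_TRIPLE_FAULT, 0, 0);
                        return 1;
                }
                /*
                 * No nesting involved, so aborting the whole vcpu is
                 * the only reasonable option left.
                 */
                kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
        }
        return 1;
}

With the check deferred, by the time kvm_mmu_load() fails KVM no longer knows (or no longer cares) whether the bad state came from L2, and the triple-fault path above is all that is left - taking L1 down with it.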