Bug 217472
| Summary: | ACPI _OSC features have different values in Host OS and Guest OS | | |
|---|---|---|---|
| Product: | Drivers | Reporter: | Nirmal Patel (nirmal.patel) |
| Component: | PCI | Assignee: | drivers_pci (drivers_pci) |
| Status: | NEW --- | | |
| Severity: | high | CC: | bagasdotme, bjorn |
| Priority: | P3 | | |
| Hardware: | Intel | | |
| OS: | Linux | | |
| Kernel Version: | | Subsystem: | |
| Regression: | No | Bisected commit-id: | |
Attachments:
- Rhel9.1_Guest_dmesg
- Hypervisor dmesg
- VM dmesg
- VM lspci
- Hypervisor lspci
Description
Nirmal Patel
2023-05-22 16:32:03 UTC
Thanks Bjorn and Alex for the quick response. I agree with the analysis that the guest BIOS is not giving the guest OS control of PCIe native hotplug.

Some background on the patch in question: it was added to suppress AER messages when Samsung drives were connected with VMD enabled. I believe AER is enabled in the BIOS, i.e. by the pre-OS VMD driver, not by the Linux VMD driver, so the AER flooding would be seen even in non-Linux environments.

With the guest BIOS providing different values than the Host BIOS, this patch leaves the direct-assign functionality broken across all hypervisor and guest OS combinations. As a result hotplug does not work, which is a major issue. Before this patch, VMD used pciehp.

What should the ideal behavior be here, i.e. should vmd rely on the native_pcie_hotplug setting or on the BIOS settings? I am open to a better suggestion, but I can think of two options.

Option 1: Revert patch f611b83c7a0e and suggest the AER fix be added to the BIOS or pre-OS VMD driver.

Option 2: Have VMD enumerate all the devices and set native_pcie_hotplug for all hotplug-capable devices.

Thanks,
Nirmal

(In reply to Nirmal Patel from comment #1)
> [full text of comment #1 quoted; trimmed here]

I don't have any of the context you're referring to. Where is the lore.kernel.org discussion?

Bagas, it starts here: https://lore.kernel.org/r/ZGz2FQpHPKYgcc0+@bhelgaas

Created attachment 304412 [details]
Hypervisor dmesg
Created attachment 304413 [details]
VM dmesg
Created attachment 304414 [details]
VM lspci
Created attachment 304415 [details]
Hypervisor lspci
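The attached hypervisor and VM dmesg logs can be compared for the _OSC features granted to each OS with a small sketch. The `_OSC: OS now controls [...]` line format matches what the kernel's ACPI PCI root driver prints; the file names below are examples, not the attachment names.

```shell
# Print the PCIe features that _OSC granted to the OS, from a saved
# dmesg log. If "PCIeHotplug" appears in the host log but not in the
# guest log, the guest firmware did not grant native hotplug control,
# matching the analysis in this report.
osc_grants() {
    grep -o '_OSC: OS now controls \[[^]]*\]' "$1" | sort -u
}

# Usage (hypothetical file names for the attached logs):
#   osc_grants hypervisor_dmesg.txt
#   osc_grants vm_dmesg.txt
```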
VMD uses a pass-through mechanism to assign drives directly with little Host OS intervention. I physically remove a drive from its slot and check lspci and lsblk to make sure the hot-removed drive has disappeared from the list.

Reproduction steps:
- Enable VMD and hotplug settings in BIOS.
- Install/boot RHEL, e.g. RHEL 9.1.
- Mount an ISO with a repository and install packages, e.g.: libvirt, libvirt-python, bridge-utils, virt-manager, libvirt-daemon-config-network, libguestfs-tools, virt-install, qemu-kvm, virt-viewer.
- Run the virtualization service, i.e. libvirtd.
- Create a virtual machine and add the desired settings (CPU, memory, etc.).
- Assign the PCI device to the VM:
  - Open virt-manager.
  - Select Edit → Virtual Machine Details.
  - Click Add Hardware and select PCI Host Device.
  - Find in the list the VMD device selected at the beginning, e.g. 0000:97:00.5 Intel Corporation Volume Management Device NVMe RAID Controller.
- Boot into the Guest OS.
- Verify in lsblk output that the assigned drives are visible: # lsblk -o +SERIAL
- Hot remove the drives.

Expected result: hot-removed drives disappear from the Guest OS.

Actual result: hot-removed drives are still present in the Guest OS (lsblk, lspci).

Note: hot-removed drives are gone after a Guest VM reboot. Drives do not appear in the Guest OS after hot plug. Hot remove works correctly on the Host OS: drives disappear from lsblk and lspci.
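The hot-remove verification step above can be sketched as a small check against lsblk output. The `drive_present` helper and the serial number below are hypothetical names for illustration; the check simply greps the guest's lsblk output for the drive's serial.

```shell
# Return success if a drive with the given serial is visible in the
# lsblk output piped on stdin. Used to verify whether a hot-removed
# drive actually disappeared from the guest.
drive_present() {
    grep -qw "$1"
}

# Usage inside the guest (serial is an example value):
#   if lsblk -o NAME,SERIAL | drive_present S4EWNX0N100001; then
#       echo "drive still visible: hot remove did not propagate"
#   else
#       echo "drive gone: hot remove propagated"
#   fi
```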