Bug 215063
| Summary: | S0ix: can't reach s0ix with VMD and raid mode - Dell XPS 13 9305 | | |
|---|---|---|---|
| Product: | Power Management | Reporter: | Adhitya Mohan (me) |
| Component: | Hibernation/Suspend | Assignee: | David Box (david.e.box) |
| Status: | ASSIGNED | | |
| Severity: | normal | CC: | bugzilla, felash, fkrueger, jonah, leho, rui.zhang |
| Priority: | P1 | | |
| Hardware: | Intel | | |
| OS: | Linux | | |
| URL: | https://patchwork.kernel.org/project/linux-pci/cover/20220301041943.2935892-1-david.e.box@linux.intel.com/ | | |
| See Also: | https://bugzilla.kernel.org/show_bug.cgi?id=213717 | | |
| Kernel Version: | 5.17.1 | Subsystem: | |
| Regression: | No | Bisected commit-id: | |

Attachments:

- [PATCH 1/3] PCI/ASPM: Create ASPM override
- [PATCH 2/3] PCI: vmd: Override ASPM on TGL/ADL VMD devices
- [PATCH 3/3] Add pci warnings everywhere
- patch for 5.16.18
Description
Adhitya Mohan
2021-11-18 13:44:43 UTC
Created attachment 299631 [details]
Subject: [PATCH 1/3] PCI/ASPM: Create ASPM override
Created attachment 299633 [details]
Subject: [PATCH 2/3] PCI: vmd: Override ASPM on TGL/ADL VMD devices
Created attachment 299635 [details]
Subject: [PATCH 3/3] Add pci warnings everywhere
I also had issues with this on my Dell XPS 15 9510 (2021), so probably very similar to the one described in this bug. After applying these same patches, and according to the S0ixSelftestTool script, it looks like the computer can now reach an S0ix substate. I'll report again if the computer still seems to lose too much battery.

This is still happening with 5.15.32 and 5.16.18 for me (XPS 15 9510 from 2021). I even booted into Windows to make sure I'd updated all the firmware. I could unbitrot this patch and apply it on top of 5.16.18, and it still fixes the problem for me. How can we move forward?

Created attachment 300726 [details]
patch for 5.16.18
Here is the updated patch for 5.16.18, as a concatenation from the 3 others.
Note that I don't know what I'm doing :-)
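(For anyone else wanting to try this attachment: a minimal way to apply a single concatenated patch to a kernel tree might look like the sketch below. The patch file name is only a placeholder for the attachment downloaded from this bug.)

```
# from the top of a 5.16.18 kernel source tree; the file name below is a
# placeholder for the attachment downloaded from this bug report
patch -p1 --dry-run < ../vmd-aspm-5.16.18.patch   # check that it applies cleanly first
patch -p1 < ../vmd-aspm-5.16.18.patch
```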
Hi, this patch has changed significantly since the version I posted. I recommend checking out the mailing list for the patch series under the title "[PATCH V5 0/3] PCI: vmd: Enable PCIE ASPM and LTR". Unfortunately it doesn't look like the module maintainer is very interested in working with Intel on merging the patches.

(In reply to Julien Wajsberg from comment #5)
> This is still happening with 5.15.32 and 5.16.18 for me (XPS 15 9510 from 2021). I even booted into Windows to make sure I'd updated all the firmware.
>
> I could unbitrot this patch and apply it on top of 5.16.18, and it still fixes the problem for me.
>
> How can we move forward?

Ah, that's good to know, thanks.

I'm a complete newbie in kernel development; is there a better way to import the patches than copy-pasting them from the mailing list archives? In particular, do you know of a public git tree that would contain them?

Ah, I found even the v6 version of the patch; the 3 parts are in https://patchwork.kernel.org/project/linux-pci/cover/20220301041943.2935892-1-david.e.box@linux.intel.com/ v5 doesn't apply cleanly; let's see with v6.

The first patch seems to be in-tree now. The second and third patches need some edits to apply, but nothing crazy. Let's see how it goes...

I confirm this patchset also works for me!

Great! Since patch 1 is in-tree now, let's hope 2 and 3 will go in too.

Patches 2 and 3 apply cleanly on 5.17.1 (not sure why I missed this stable version).

Great! Maybe send the updated patch files to this thread? It would be useful for someone else.

Forget about my previous messages: patch 1 isn't in-tree. I believe I had it applied locally and got confused. The full patchset applies cleanly on 5.17.1. Here is how I applied it:

1. Go to https://patchwork.kernel.org/project/linux-pci/cover/20220301041943.2935892-1-david.e.box@linux.intel.com/
2. Click on "series" at the top right.
3. Run `git am download_file.patch`.

Then build the kernel as usual. For me this is:

```
# update config by running make menuconfig and immediately exiting
make menuconfig
# build updated debian packages
make bindeb-pkg -j$(nproc)
```

I took a shot at backporting vmd.c to 5.15.y, but the PCI device defs hunk appeared too complex, so I'm bumping myself to 5.17.y to see if it helps bug 215367.

(In reply to Leho Kraav from comment #16)
> I took a shot at backporting vmd.c to 5.15.y, but the PCI device defs hunk appeared too complex, so I'm bumping myself to 5.17.y to see if it helps bug 215367.

This bug is present on any device that uses the VMD driver, i.e. whenever the NVMe drive is placed behind Intel VMD, on all Alder Lake and Tiger Lake platforms, so yes, it should help. To make this easier: if you are on 5.15, use the patches I posted; if you are on 5.17 (which I recommend bumping to), use the newer patches from David.

I have now figured out that my XPS 13 7390 2-in-1 is running NVMe in plain AHCI mode, not VMD. Adding this patchset unfortunately does nothing for it.

```
The pcieport 0000:00:1d.0 0000:00:1d.7 ASPM enable status:
Pcieport is not in D3cold: 0000:00:1d.0 0000:00:1d.7
The PCIe bridge link power management state is:
0000:00:1d.0 Link is in L0
The link power management state of PCIe bridge: 0000:00:1d.0 is not expected,
which is expected to be L1.1 or L1.2, or user would run this script again.
```

I'll continue my investigation in bug 215367. Glad y'all got something moving forward for VMD though.
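(Side note: a rough manual version of the checks S0ixSelftestTool performs, assuming the intel_pmc_core debugfs interface is mounted and reusing the bridge address from the output above, which will differ on other machines:)

```
# ASPM policy currently in effect
cat /sys/module/pcie_aspm/parameters/policy

# ASPM state reported for the PCIe bridge seen above (address is machine-specific)
sudo lspci -s 0000:00:1d.0 -vvv | grep -i aspm

# SLP_S0 (S0ix) residency counter; it should grow across a suspend/resume cycle
sudo cat /sys/kernel/debug/pmc_core/slp_s0_residency_usec
```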
I see there's a new version of the patches, see https://patchwork.kernel.org/project/linux-pci/cover/20221025004411.2910026-1-david.e.box@linux.intel.com/ I'm applying it on the latest stable and currently building.

I forgot to comment, but I've been running this v7 series since then with no issue. My understanding is that David's work has been merged in v6.3.
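(For readers landing here later: a quick, generic sanity check of whether your system still needs these patches, not specific to this series, is to confirm the kernel version and that the NVMe drive actually sits behind VMD:)

```
# kernels v6.3 and later should already contain the merged series
uname -r

# if the drive is behind Intel VMD, lspci typically lists a "Volume Management Device" entry
lspci | grep -i "volume management"
```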