Bug 215892 - 6500XT [drm:amdgpu_dm_init.isra.0.cold [amdgpu]] *ERROR* Failed to register vline0 irq 30!
Summary: 6500XT [drm:amdgpu_dm_init.isra.0.cold [amdgpu]] *ERROR* Failed to register v...
Status: RESOLVED ANSWERED
Alias: None
Product: Drivers
Classification: Unclassified
Component: Video(DRI - non Intel)
Hardware: All Linux
Importance: P1 normal
Assignee: drivers_video-dri
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2022-04-27 02:23 UTC by Mark Johnston
Modified: 2022-04-27 12:22 UTC
CC: 1 user

See Also:
Kernel Version: 5.18-rc4
Subsystem:
Regression: No
Bisected commit-id:


Attachments
New PowerColor board with chip that produces kernel errors (2.38 MB, image/png)
2022-04-27 02:23 UTC, Mark Johnston
Details
full dmesg (82.57 KB, text/plain)
2022-04-27 02:24 UTC, Mark Johnston
Details
Prior PowerColor (6500XT) board with chip that does not produce error (3.02 MB, image/png)
2022-04-27 02:28 UTC, Mark Johnston
Details
lspci (1.67 KB, text/plain)
2022-04-27 02:33 UTC, Mark Johnston
Details
acpidump summary (2.39 KB, text/plain)
2022-04-27 02:52 UTC, Mark Johnston
Details

Description Mark Johnston 2022-04-27 02:23:42 UTC
Created attachment 300811 [details]
New PowerColor board with chip that produces kernel errors

Hello!

This is my first time submitting a bug here. I apologize for any mistakes, but I will do my best to describe the steps I have already taken to resolve this issue on my own. I hope not to overload the report with information; I only wish to help skip over the basic questions.

I have numerous PowerColor RX 6500XT graphics cards, and all of them with a specific chip package (picture attached) have the same issue. Any PowerColor RX 6500XT with 2152 printed at the top of the package, and "TFTB43.00" at the bottom of the package suffers the same kernel errors. Previously (up until a few weeks ago) PowerColor was shipping 6500XT boards with chips that were stamped with 2146 and "TFAW62.T5" at the top and bottom of the package respectively. Boards with those chips have zero kernel errors and work flawlessly. As well, I have tested various 6500XT and 6400 boards from different AIB partners of AMD and have not had any issues other than this specific variant from PowerColor.


To be honest, I am not sure whether the root of the problem is in pcieport or in amdgpu, but the amdgpu error appears first.

I have attached the full dmesg output, but to save some time, here are the relevant lines:

[    5.506718] [drm:amdgpu_dm_init.isra.0.cold [amdgpu]] *ERROR* Failed to register vline0 irq 30!
[   14.368915] pcieport 0000:01:00.0: can't change power state from D0 to D3hot (config space inaccessible)
[   15.270778] pcieport 0000:01:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[   15.270799] pcieport 0000:02:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[   25.478689] pcieport 0000:01:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[   25.478696] pcieport 0000:02:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[   25.722619] amdgpu 0000:03:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[   35.833714] [drm:gmc_v10_0_flush_vm_hub.constprop.0 [amdgpu]] *ERROR* Timeout waiting for VM flush hub: 0!
[   35.941450] [drm:gmc_v10_0_flush_vm_hub.constprop.0 [amdgpu]] *ERROR* Timeout waiting for sem acquire in VM flush!
[   36.048999] [drm:gmc_v10_0_flush_vm_hub.constprop.0 [amdgpu]] *ERROR* Timeout waiting for VM flush hub: 1!
[   36.156835] [drm:gmc_v10_0_flush_vm_hub.constprop.0 [amdgpu]] *ERROR* Timeout waiting for sem acquire in VM flush!
[   36.264770] [drm:gmc_v10_0_flush_vm_hub.constprop.0 [amdgpu]] *ERROR* Timeout waiting for VM flush hub: 1!
[   36.372616] [drm:gmc_v10_0_flush_vm_hub.constprop.0 [amdgpu]] *ERROR* Timeout waiting for VM flush hub: 0!
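
For anyone reproducing this, the "can't change power state from D3cold to D0" condition can be inspected from userspace through the standard PCI sysfs files. The sketch below is not from the report; it is a hypothetical helper (device address taken from the log above, `SYSFS` overridable so it can be dry-run without the hardware) that reads a device's power state and pins it to full power, which is a common way to test whether runtime PM entering D3cold is the trigger:

```shell
# Hypothetical diagnostic helper, assuming a stock Linux PCI sysfs layout.
# Reads the device's current power state and asks the PM core to keep it
# at full power ("on" disables runtime suspend for that device).
pci_power_check() {
    dev="$1"
    sysfs="${SYSFS:-/sys/bus/pci/devices}"
    # Current ACPI/PCI power state (D0, D3hot, D3cold, ...), if exposed.
    if [ -r "$sysfs/$dev/power_state" ]; then
        echo "power_state: $(cat "$sysfs/$dev/power_state")"
    fi
    # Writing "on" keeps the device awake; "auto" re-enables runtime PM.
    if [ -w "$sysfs/$dev/power/control" ]; then
        echo on > "$sysfs/$dev/power/control"
        echo "control: $(cat "$sysfs/$dev/power/control")"
    fi
}

# Address of the amdgpu device from the log above:
pci_power_check 0000:03:00.0
```

If the card is wedged in D3cold, even config-space reads fail ("config space inaccessible" in the log), so `lspci -s 03:00.0 -vv` returning all `ff` values is another quick indicator.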


What I have attempted so far:

Results were the same for the following kernels: 5.4.190, 5.10.111, 5.15.34, 5.17.4 and now 5.18-rc4.

Many different motherboards with varying chipsets (B250, H510, X370, B550). Same result.

Enabling/Disabling clock gating, ASPM, extended synch control for PCIE. Same result.
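
For reference, the power-management toggles mentioned above can also be forced from the kernel command line. This is a sketch, not the reporter's exact configuration; the parameter names are the standard upstream ones, set here to their "fully disabled" values as used for this kind of isolation test:

```
# Disable PCIe Active State Power Management entirely:
pcie_aspm=off
# Disable power management of PCIe ports (prevents D3cold entry via ports):
pcie_port_pm=off
# amdgpu: disable runtime power management of the GPU:
amdgpu.runpm=0
```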

The problematic cards from PowerColor indeed do work in Windows without issue. This leads me to believe that something may have changed with TUL's implementation of the 6500XT from one production run to another. Hopefully someone from the amdgpu team can help here.


To summarize: PowerColor's prior 6500XT production worked flawlessly with the drivers in the mainline kernel, while the new production is, for some reason, no longer usable. The new cards work in Windows but throw the errors above on Linux. This is not an isolated issue with one card: I have tested 12 identical ones with the same chip, and all show the same result regardless of motherboard, CPU, power supply, kernel, or OS. Cards (6500XTs and 6400s) from other partners have not had any issues.
Comment 1 Mark Johnston 2022-04-27 02:24:49 UTC
Created attachment 300812 [details]
full dmesg
Comment 2 Mark Johnston 2022-04-27 02:28:13 UTC
Created attachment 300813 [details]
Prior PowerColor (6500XT) board with chip that does not produce error
Comment 3 Mark Johnston 2022-04-27 02:33:23 UTC
Created attachment 300814 [details]
lspci

The hardware configuration that was most recently tested.
Comment 4 Mark Johnston 2022-04-27 02:52:05 UTC
Created attachment 300815 [details]
acpidump summary
Comment 5 Artem S. Tashkinov 2022-04-27 08:59:14 UTC
Please search for duplicates and, if none exists, refile here:

https://gitlab.freedesktop.org/drm/amd/-/issues
Comment 6 Mark Johnston 2022-04-27 12:22:29 UTC
There is one potentially similar report here: https://gitlab.freedesktop.org/drm/amd/-/issues/1933

However, both of those users report having working desktop environments, and neither mentions amdgpu being unable to come out of D3cold. In my case above, the GPUs are non-responsive because they are stuck in the D3cold power state. So the amdgpu_dm_init.isra error may be the same, but the results and impact differ.

Not really sure what to do here. Should I add my findings (hardware tests) to that report?
