Bug 172421
Summary:        radeon: allow to set the TMDS frequency by a special kernel parameter
Product:        Drivers
Component:      Video (DRI - non Intel)
Status:         NEW
Severity:       enhancement
Priority:       P1
Hardware:       All
OS:             Linux
Kernel Version: 4.8.0-rc7+
Regression:     No
Reporter:       Elmar Stellnberger (estellnb)
Assignee:       drivers_video-dri
CC:             alexdeucher, blade, deathsimple, jameshendry05, joachim.hoss, matt.weiland, pawo2500, rscheidegger, sirfixabyte

Attachments:
  - shipment notification: R5 230 marketed as 4K-ready
  - patch introducing radeon.hdmimhz for kernel 4.8.0-rc2
Description
Elmar Stellnberger
2016-09-21 18:15:16 UTC
Created attachment 239371 [details]
shipment notification: R5 230 marketed as 4K-ready
Created attachment 239381 [details]
patch introducing radeon.hdmimhz for kernel 4.8.0-rc2
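The attached patch is not reproduced here. As a rough illustration only, a radeon module parameter that overrides the maximum TMDS clock takes the general shape sketched below; radeon_hdmimhz and radeon_user_max_tmds_khz() are illustrative names, while module_param_named()/MODULE_PARM_DESC() are the standard kernel macros:

#include <linux/moduleparam.h>

/* Sketch only, not the attached patch: a parameter that lets the user
 * raise the driver's TMDS clock limit. 0 keeps the validated default. */
int radeon_hdmimhz = 0;
MODULE_PARM_DESC(hdmimhz, "Maximum TMDS clock in MHz (0 = driver default)");
module_param_named(hdmimhz, radeon_hdmimhz, int, 0444);

/* called from the connector/encoder mode validation paths */
static u32 radeon_user_max_tmds_khz(u32 default_khz)
{
	if (radeon_hdmimhz > 0)
		return (u32)radeon_hdmimhz * 1000;	/* mode clocks are kept in kHz */
	return default_khz;
}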
NACK. I rejected this the last time you brought this up. You are running the hardware outside of its validated specs. See this discussion: https://bugs.freedesktop.org/show_bug.cgi?id=93885

It is not true that this card would only feature a TMDS clock of 297 MHz over DP. The card I have bought only has HDMI and DVI, and it was sold as 4K-ready as you can see from the first attachment.

... and VGA

The ASIC supports 4K over DP or dual-link DVI.

It obviously works stably over HDMI as well. Besides this I have just tested an ATI Mobility Radeon HD 2600 XT/2700 and it can provide 3840x2160_22.00 over its HDMI port with a radeon.hdmimhz of 250. With a hdmimhz of 297 I get some screen distortions, but that does not harm the card. Why not just unlock a new feature of so many radeon cards? Nouveau developers have done so for a long time in their device driver.

Running the hardware out of spec is not something we want to support.

Any other opinion by someone else on this issue?

Driving the PLL and transmitter way over its limit is clearly not a good idea and can potentially cause hardware failure in the long term.

For the R5 230, long-term experience is already available. I have been successfully using this card since at least February 2016 at a TMDS clock of 297 MHz, and these cards are doing very well under everyday use as well as occasional full throttle.

Today I attempted to try out the dual-link feature of the DVI port. First I had verified with AOC that my u2868pqu monitor indeed supports dual-link over DVI. Then I connected the DVI output of my R5 230 (comment #6) to the u2868pqu monitor over DVI. Trying to boot with a vanilla kernel (hdmimhz=0) I just ended up with a black screen. I finally succeeded in getting a picture by setting hdmimhz=165. At first I was amazed, but then I noticed that hdmimhz also disables the dual-link feature, which was obviously the reason why it worked with hdmimhz set. My explanation for this is that the radeon driver detected both the display and the card as supporting dual-link DVI; however, the cable does not.

(In reply to Elmar Stellnberger from comment #9)
> any other opinion by someone else on this issue?

FWIW enthusiasts love such things, but corporations do not. Hence you generally don't see overclocking and similar out-of-spec features in drivers that are not community-maintained. Personally I've always thought the risk of damaging hardware with any kind of overclocking is just about exactly zero as long as you don't increase voltage levels (and you can handle the additional heat, but that should be a non-issue here). That's my limited understanding of the physics behind it :-). I suspect part of the reason why it overclocks so well is also that this chip should be DP 1.2 capable - meaning HBR2 mode, which has a clock of 270 MHz (not that you can really get cards with that chip which actually do have the DP port...). Now the signalling is different with DP vs. HDMI/DVI, but if I had to take a guess I'd suspect the hw is mostly all the same. But none of that is going to change anyone's opinion on overclocking...

(In reply to Roland Scheidegger from comment #13)
> Personally I've always thought the risk of damaging hardware with any kind
> of overclocking is just about exactly zero as long as you don't increase
> voltage levels

Unfortunately this is exactly what happens here. The clock is generated by a voltage controlled oscillator, and for the desired resolution you need to overclock it by about 30-40%. That in turn means you raise the voltage way over the nominal limit. Those oscillators are designed to handle voltages about 250% over the nominal level without frying immediately, but that says absolutely nothing about the aging of the circuit under those conditions. The PLL we are talking about here clearly isn't designed for that level of operation, and even the closed source drivers (which are otherwise rather friendly to overclocking) don't let the user override this absolute limit. So this is a clear NAK from my side.
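For reference, the arithmetic behind the figures quoted above (the 225 MHz validated limit is an assumption on my part and is not stated anywhere in this bug):

  pixel clock = htotal x vtotal x refresh rate
  3840x2160@30Hz (CEA-861 timing): 4400 x 2250 x 30 = 297 MHz
  single-link DVI / pre-HDMI-1.3 TMDS limit:           165 MHz
  297 MHz is roughly 32% above a 225 MHz validated limit, which would
  match the "30-40%" overclock mentioned above.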
(In reply to Christian König from comment #14)
> (In reply to Roland Scheidegger from comment #13)
> > Personally I've always thought the risk of damaging hardware with any kind
> > of overclocking is just about exactly zero as long as you don't increase
> > voltage levels
>
> Unfortunately this is exactly what happens here. The clock is generated by a
> voltage controlled oscillator and for the desired resolution you need to
> over clock it by about 30-40%.
>
> That in turn means you raise the voltage way over the nominal limit.

Oh interesting - I didn't know voltage was directly tied to clock frequency here. Makes sense then not to allow it (at least if that circuitry isn't shared with DP, as the DP link runs at a much higher clock - 540 MHz actually - but I suppose it's really different there).

(In reply to Roland Scheidegger from comment #15)
> Oh interesting - didn't know voltage was directly tied to clock frequency
> here. Makes sense then to not allow it (at least if that circuitry isn't
> shared with DP, as the DP link runs at much higher clock (540Mhz actually),
> but I suppose it's really different there).

The voltage is only indirectly related, but yes, overclocking such parts is a rather bad idea in the long term. DP uses a fixed frequency, which is way easier to generate than the variable HDMI/DVI/VGA clock.

The patch does not only allow overclocking; it can also be used to disable the dual-link feature. In certain cases this may be necessary to get a picture at all (see comment 12).
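To make that dual-link side effect concrete: for DVI the driver picks single versus dual link essentially from the requested pixel clock, so a forced cap at 165 MHz also forces single-link. A simplified sketch of that decision follows; the real helper in the radeon code is radeon_dig_monitor_is_duallink(), and the exact thresholds depend on the ASIC and the sink's HDMI capability:

#include <linux/types.h>

/* Simplified sketch, not the actual driver code: how the link mode
 * follows from the requested pixel clock. */
static bool monitor_needs_duallink(u32 pixel_clock_khz, bool hdmi_340mhz_sink)
{
	/* HDMI 1.3+ sinks can take up to 340 MHz on a single link */
	if (hdmi_340mhz_sink)
		return pixel_clock_khz > 340000;
	/* anything above the 165 MHz single-link limit needs dual link */
	return pixel_clock_khz > 165000;
}

With hdmimhz=165 every mode that survives validation stays below the dual-link threshold, which would explain why comment 12 got a picture over a cable that apparently cannot carry dual-link.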
I've just tried this patch on a Radeon HD 6850 and it works great (2560x1080 at 185.58 MHz). While it is indeed more than the AMD site says is supported, the official AMD drivers for Windows support this resolution out of the box, so I think it's safe to at least increase the limit to 186 MHz.

I read all the previous comments - for and against the adoption of the patch. Question: why is it easy to find a pixel clock patch for Windows (www.monitortests.com, AMD/ATI Pixel Clock Patcher by someone named 'ToastyX') - available and supported since 2012 - that has been successfully running 4K screens (since January 2016) on the same Radeon cards being discussed here? Many people on that webpage are discussing it. It seems to work. I personally tested it on a few older Radeon cards and it works at 3840x2160 for me. I have NOT yet run it for hours (I rarely run Windows, and then only to test or fix PCs for others). I have not connected temperature sensors to the heat sinks yet. If the pixel clock generator circuitry is on the same die as everything else, then it shares a heat sink. The whole thing is designed such that, with the recommended airflow across that heat sink, the GPU remains functional. BUT that can mean running 3 separate displays - utilizing the full capacity of the GPU. IF this patch from Elmar Stellnberger were used to run a SINGLE 4K LCD at 3840x2160 with a 30% overclock on the pixel clock generator, the overall GPU might be generating much less than maximum heat, and the heatsink/fan could easily keep it cool.

Does anyone have a maximum pixel clock specification for the various pixel clock generator designs on the various ATI/AMD dies? Did ATI set limits due to HDMI cables and the overall ability of the heatsink to dissipate the heat when running the GPU at max speed/load on 3 screens? Is there a listed maximum voltage that the PLL can run at - long term - without damage? For most integrated circuit data sheets I have ever read, there is a relationship between max speed and temperature of the die. Also, the patch allowing a pixel clock increase for Nvidia cards has been in the kernel since 4.5.n - also something I successfully tested recently at 3840x2160 resolution on an OLD Nvidia card. I thought Linux was the OS that is supposed to allow experiments and variety & versatility. Seems like NOT if we are talking about Radeon cards.

With the help of Elmar Stellnberger's patch applied to kernel 4.20 I can now run my old Radeon HD 5870 at 3840x1600 resolution, which it does "out of the box" under Windows. I would also like to recommend this patch for inclusion in the kernel.

The patch was not accepted because someone claimed it could damage the hardware. All lies. As the years have gone by these graphics cards are still in daily use with this UHD patch and no damage has ever occurred. As this hardware is still in use I would really like to see this in mainline, not just the Mageia kernel. Someone who knows about it told me it would be highly improbable that this could be detrimental to the hardware. The only failure mode would be that the screen stays black at a too-high TMDS clock, and nobody would ever continue to run the graphics card with a TMDS setting that yields a black screen. This hardware is close to being phased out on many computers by now, and I'd regard it as ridiculous if the patch is withheld because of the argument that it could damage hardware in the long run. This hardware will get phased out long before any damage could ever occur. As distributors and kernel developers still support Pentium IV hardware I am reopening this bug. This is about Core 2 aged hardware which is totally sufficient for UHD/desktop computers. I would never give up my Xi3650 machines because they are ultimately silent, something you don't get with a newer computer. And unfortunately the Xi3650 does not work with newer 3D gaming UHD graphics cards (these cards inhibit s2ram even on Windows, which isn't what you want for a desktop system). I think it is a rather political issue that the patch has not been accepted up to now. So many people have reported it being useful, not only on the kernel bugzilla here, and I would really welcome it if you kernel developers started to rethink your decision about it. There is still time and the patch is still useful.

Hey guys, I am trying to port this mod to AMDGPU. The intention is, however, not to unlock the advertised features but to force an APU to accept HDMI clocks over a DVI port (with a DVI-HDMI cable, the monitor expects an HDMI signal). The port itself is easy but it does not work. The amdgpu.hdmimhz=... parameter has apparently no effect; Xorg keeps reporting:

[ 16.107] (--) AMDGPU(0): HDMI max TMDS frequency 280000KHz

Does anyone have an idea how to tackle this? Here is the patch: https://github.com/Code7R/linux/commits/amdgpu-custom-maxtdmsclock
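Just a guess at where to look for the AMDGPU port: the 280000 KHz figure that Xorg prints looks like the sink limit parsed from the monitor's EDID (HDMI vendor-specific data block), which the DRM core stores per connector and which is checked independently of any encoder-side clamp the patch adjusts. So raising only the encoder limit may not be enough. Illustrative only, untested, and hack_raise_tmds_limit() is a made-up name:

#include <drm/drm_connector.h>

/* The DRM core fills connector->display_info.max_tmds_clock (in kHz) from
 * the EDID HDMI vendor block. A mode whose clock exceeds it can be pruned
 * before any encoder-side limit is consulted. As a pure experiment one
 * could bump it in the connector's mode-validation path: */
static void hack_raise_tmds_limit(struct drm_connector *connector, int mhz)
{
	if (mhz > 0 && connector->display_info.max_tmds_clock < mhz * 1000)
		connector->display_info.max_tmds_clock = mhz * 1000;
}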
With the help of Elmar Stellnberger's patch to the kernel, I managed to make a patch against the 6.2.0 kernel that allowed me to use a DisplayPort -> HDMI adaptor for my ultrawide monitor (2560x1080@60Hz). The 165 MHz limit was just below what I required (I needed about 166 MHz for my monitor), so only very minimal overclocking was needed to go from not working at all to working perfectly. This works natively in Windows, so clearly they have determined that it is acceptable; I don't see any real risk.
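For completeness, usage with the out-of-tree patch applied is just a kernel boot parameter; whether the value also shows up under /sys/module depends on the permissions the patch declares it with:

  radeon.hdmimhz=166                            (append to the kernel command line, e.g. in GRUB_CMDLINE_LINUX_DEFAULT)
  cat /sys/module/radeon/parameters/hdmimhz     (verify after reboot, if the parameter is exposed read-only)
  xrandr                                        (check whether the previously rejected ~166 MHz mode is now offered)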