Bug 195255
| Summary: | pm_qos fails to guarantee requested latency with Baytrail (J1900) | | |
|---|---|---|---|
| Product: | Power Management | Reporter: | Mika Kuoppala (miku) |
| Component: | intel_idle | Assignee: | Len Brown (lenb) |
| Status: | CLOSED INSUFFICIENT_DATA | | |
| Severity: | normal | CC: | fred.moses, hugh, rui.zhang, yu.c.chen |
| Priority: | P1 | | |
| Hardware: | Intel | | |
| OS: | Linux | | |
| Kernel Version: | 4.11.0-rc5 | Subsystem: | |
| Regression: | No | Bisected commit-id: | |
Description
Mika Kuoppala
2017-04-05 10:27:33 UTC
With J1900 Baytrail, a single MMIO access inside a local_irq_disable()/local_irq_enable() pair can reach 400+ usecs, even when it is bracketed by pm_qos_update_request(pm_qos, 0) and pm_qos_update_request(pm_qos, PM_QOS_DEFAULT_VALUE).

Known workarounds:
- intel_idle.max_cstate=1
- maxcpus=2
- adding pm_qos_update_request(pm_qos, 0) and never setting back the default value

The last one indicates that the C-state limitation imposed by pm_qos_update_request() fails to affect the code immediately following it, and the desired effect of low latency is delayed.

Comment 1 (Fred)

Mika, so all three must be done? And are we working on a better patch? Thanks!

Comment 2

(In reply to Fred from comment #1)
> So all three must be done?

Any one of those will do. I think the most interesting one is to limit the number of cpus.

Comment 3 (Fred)

Are we working on a possible patch to avoid doing these things at the kernel load line?

Comment 4

(In reply to Mika Kuoppala from comment #0)
> With J1900 Baytrail, a single MMIO access inside a local_irq_disable()/local_irq_enable()
> pair can reach 400+ usecs, even when it is bracketed by pm_qos_update_request(pm_qos, 0)
> and pm_qos_update_request(pm_qos, PM_QOS_DEFAULT_VALUE).
>
> Known workarounds:
> - intel_idle.max_cstate=1
> - maxcpus=2

These are boot options.

> - adding pm_qos_update_request(pm_qos, 0) and never setting back the default value

Where do you add this?

> The last one indicates that the C-state limitation imposed by
> pm_qos_update_request() fails to affect the code immediately following it, and
> the desired effect of low latency is delayed.

(In reply to Fred from comment #3)
> Are we working on a possible patch to avoid doing these things at the kernel load line?

I'm not aware of this. Is the problem still true in the latest upstream kernel?

Comment 5

Bug closed as there is no response from the bug reporter. Please feel free to reopen it if you can provide the information required in comment #4.
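For context, the sketch below illustrates the usage pattern described in comment #0, assuming the 4.11-era in-kernel PM QoS API (pm_qos_add_request(), pm_qos_update_request(), PM_QOS_CPU_DMA_LATENCY, all removed in later kernels in favor of cpu_latency_qos_*). The module name, the MMIO region, the register offset, and the timing instrumentation are hypothetical; they only show where the 400+ usec latency was reportedly observed and where the report's third workaround would apply.

```c
#include <linux/module.h>
#include <linux/pm_qos.h>
#include <linux/irqflags.h>
#include <linux/io.h>
#include <linux/ktime.h>

/* Hypothetical driver state: regs must point at an ioremap()ed MMIO region. */
static struct pm_qos_request latency_req;
static void __iomem *regs;

static int __init pmqos_demo_init(void)
{
	/* Register a CPU wakeup-latency request with no constraint yet. */
	pm_qos_add_request(&latency_req, PM_QOS_CPU_DMA_LATENCY,
			   PM_QOS_DEFAULT_VALUE);
	return 0;
}

/*
 * The pattern from comment #0: request 0 usec wakeup latency around a
 * single MMIO read performed with interrupts disabled, then restore the
 * default.  On the J1900 the read was still observed to take 400+ usec,
 * suggesting the C-state limit does not take effect immediately.
 * (Called from the driver's I/O path once regs has been mapped.)
 */
static u32 __maybe_unused timed_mmio_read(void)
{
	ktime_t t0, t1;
	u32 val;

	pm_qos_update_request(&latency_req, 0);

	local_irq_disable();
	t0 = ktime_get();
	val = readl(regs);	/* hypothetical register */
	t1 = ktime_get();
	local_irq_enable();

	pr_info("mmio read took %lld ns\n",
		ktime_to_ns(ktime_sub(t1, t0)));

	/*
	 * Workaround 3 from comment #0 is simply to skip this restore and
	 * leave the request at 0 for the lifetime of the driver.
	 */
	pm_qos_update_request(&latency_req, PM_QOS_DEFAULT_VALUE);

	return val;
}

static void __exit pmqos_demo_exit(void)
{
	pm_qos_remove_request(&latency_req);
}

module_init(pmqos_demo_init);
module_exit(pmqos_demo_exit);
MODULE_LICENSE("GPL");
```

The first two workarounds listed in comment #0 (intel_idle.max_cstate=1 and maxcpus=2) are kernel command-line options, as noted in comment #4; the third is the code-level change sketched above, i.e. leaving the request at 0 instead of restoring PM_QOS_DEFAULT_VALUE.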