Bug 195255 - pm_qos fails to guarantee requested latency with Baytrail (J1900)
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Power Management
Classification: Unclassified
Component: intel_idle
Hardware: Intel Linux
Importance: P1 normal
Assignee: Len Brown
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2017-04-05 10:27 UTC by Mika Kuoppala
Modified: 2017-08-07 03:33 UTC
4 users

See Also:
Kernel Version: 4.11.0-rc5
Subsystem:
Regression: No
Bisected commit-id:


Attachments

Description Mika Kuoppala 2017-04-05 10:27:33 UTC
With a J1900 Baytrail, a single MMIO access inside a local_irq_disable()/local_irq_enable() pair can take 400+ usecs, even when bracketed by pm_qos_update_request(pm_qos, 0) and pm_qos_update_request(pm_qos, PM_QOS_DEFAULT_VALUE).

Known workarounds:
intel_idle.max_cstate=1
maxcpus=2
calling pm_qos_update_request(pm_qos, 0) once and never restoring the default value.
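The first two workarounds are kernel boot parameters. Assuming a GRUB-based distribution (file paths and the regeneration command vary by distro), they could be applied like this:

```shell
# /etc/default/grub -- limit intel_idle to C1 and bring up only two CPUs
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_idle.max_cstate=1 maxcpus=2"

# Regenerate the GRUB config afterwards, e.g.:
#   sudo update-grub                                # Debian/Ubuntu
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg     # Fedora/RHEL
```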

The last workaround suggests that the C-state limit imposed by
pm_qos_update_request() does not take effect for the code immediately following the call, so the desired low-latency window is delayed.
Comment 1 Fred 2017-04-12 12:29:05 UTC
Mika

So all three must be done?  And we are working on a better patch?

Thanks!
Comment 2 Mika Kuoppala 2017-04-12 12:33:30 UTC
(In reply to Fred from comment #1)
> So all three must be done?

Any one of those will do. I think the most interesting one is limiting
the number of CPUs.
Comment 3 Fred 2017-04-13 20:21:19 UTC
Are we working on a possible patch to avoid having to do these things on the kernel command line?
Comment 4 Zhang Rui 2017-06-20 02:22:43 UTC
(In reply to Mika Kuoppala from comment #0)
> With J1900 baytrail, single mmio access inside local_irq_disable/enable pair
> can reach 400+ usecs, inside pm_qos_update_request(pm_qos, 0),
> pm_qos_update_request(pm_qos, PM_QOS_DEFAULT_VALUE)
> 
> known workarounds:
> intel_idle.max_cstate=1
> maxcpus=2

These are kernel boot options.

> adding pm_qos_update_request(pm_qos, 0) and never setting back the default
> value.
> 
where do you add this?

> The last one indicates that right after the c state limitations imposed by
> pm_qos_update_request() fails to affect code immediately following it, and
> the desired effect of low latency is delayed.

(In reply to Fred from comment #3)
> Are we working on a possible patch to avoid doing these things at the kernel
> load line?

I'm not aware of this.

Is the problem still present in the latest upstream kernel?
Comment 5 Zhang Rui 2017-08-07 03:33:23 UTC
Bug closed, as there has been no response from the bug reporter.
Please feel free to reopen it if you can provide the information requested in comment #4.
