synchronize_sched() only guarantees that all CPUs have exited non-preemptible sections. From my understanding, we want the affected CPU to have exited the idle handler (in acpi_processor_cst_has_changed), so synchronize_sched() doesn't seem sufficient here.
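For illustration, a minimal sketch of the intended sequence (not the actual patch; set_idle_handler() below is a hypothetical helper): after pm_idle is switched, cpu_idle_wait() kicks every CPU out of its current idle routine and waits until all of them have left the old handler, which synchronize_sched() alone would not guarantee.

extern void (*pm_idle)(void);     /* arch-provided idle handler pointer */
extern void cpu_idle_wait(void);  /* returns once no CPU still runs the old handler */

/* Hypothetical helper, not from the patch: publish a new idle handler
 * and make sure no CPU is still executing the previous one. */
static void set_idle_handler(void (*new_idle)(void))
{
	pm_idle = new_idle;     /* publish the new handler */
	cpu_idle_wait();        /* synchronize_sched() would only wait for a
	                         * quiescent state, not for idle() to return */
}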
Created attachment 5542 [details] patch to use cpu_idle_wait The patch does: 1. the pm_idle change fix. 2. a cleanup of acpi_processor_get_power_info_cst so it uses less stack space (acpi_processor_cx is a big structure).
Created attachment 5543 [details] patch to use cpu_idle_wait The patch does: 1. the pm_idle change fix. 2. a cleanup of acpi_processor_get_power_info_cst so it uses less stack space (acpi_processor_cx is a big structure).
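For context on cleanup (2), an illustrative sketch of the general technique, not the actual patch: instead of building a large temporary on the stack and copying it at the end, fill the destination entry in place. struct cx_state, power_states and fill_cx_state are stand-in names, not kernel identifiers.

#include <string.h>

struct cx_state {
	unsigned int type;
	unsigned int latency;
	char desc[32];
	/* ... the real acpi_processor_cx carries many more fields ... */
};

static struct cx_state power_states[8];

/* Before: a big on-stack temporary was filled and then copied.
 * After: work directly on the target entry, saving stack space. */
static void fill_cx_state(unsigned int idx, unsigned int type,
                          unsigned int latency)
{
	struct cx_state *cx = &power_states[idx];   /* no on-stack copy */

	memset(cx, 0, sizeof(*cx));
	cx->type = type;
	cx->latency = latency;
}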
The patch needs to be re-based on the Lindent'ed processor_idle.c.
Created attachment 5921 [details] updated patch An updated patch; it tries to fix all the races.
Created attachment 6005 [details] updated patch My last patch still has a small race window when one CPU is reading the per-CPU idle handler while another CPU is changing it. This updated patch fixes that.
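A sketch of the remaining race described above (illustrative, not the patch itself): if the idle loop reads pm_idle twice, once to test it and once to call it, another CPU switching the pointer in between can make the test and the call disagree. Reading the pointer once into a local variable closes that window on the reader side; idle_loop_iteration() is a hypothetical name, default_idle() is the usual fallback handler.

extern void (*pm_idle)(void);
extern void default_idle(void);

static void idle_loop_iteration(void)
{
	void (*idle)(void) = pm_idle;   /* single read of the shared pointer */

	if (!idle)
		idle = default_idle;
	idle();                         /* always the handler that was read above */
}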
The patch does not apply to the current tree -- re-opening.
With cpuidle, this patch isn't required.