Merge branches 'pm-devfreq' and 'pm-tools'
* pm-devfreq:
PM / devfreq: tegra30: Separate configurations per-SoC generation
PM / devfreq: tegra30: Support interconnect and OPPs from device-tree
PM / devfreq: tegra20: Deprecate in favor of emc-stat based driver
PM / devfreq: exynos-bus: Add registration of interconnect child device
dt-bindings: devfreq: Add documentation for the interconnect properties
soc/tegra: fuse: Add stub for tegra_sku_info
soc/tegra: fuse: Export tegra_read_ram_code()
clk: tegra: Export Tegra20 EMC kernel symbols
PM / devfreq: tegra30: Silence deferred probe error
PM / devfreq: tegra20: Relax Kconfig dependency
PM / devfreq: tegra20: Silence deferred probe error
PM / devfreq: Remove redundant governor_name from struct devfreq
PM / devfreq: Add governor attribute flag for specific sysfs nodes
PM / devfreq: Add governor feature flag
PM / devfreq: Add tracepoint for frequency changes
PM / devfreq: Unify frequency change to devfreq_update_target func
trace: events: devfreq: Use fixed indentation size to improve readability
* pm-tools:
pm-graph v5.8
cpupower: Provide online and offline CPU information
diff --git a/Documentation/devicetree/bindings/opp/opp.txt b/Documentation/devicetree/bindings/opp/opp.txt
index 9847dfe..08b3da4 100644
--- a/Documentation/devicetree/bindings/opp/opp.txt
+++ b/Documentation/devicetree/bindings/opp/opp.txt
@@ -65,7 +65,9 @@
- OPP nodes: One or more OPP nodes describing voltage-current-frequency
combinations. Their name isn't significant but their phandle can be used to
- reference an OPP.
+ reference an OPP. These are mandatory except for the case where the OPP table
+ is present only to indicate dependency between devices using the opp-shared
+ property.
Optional properties:
- opp-shared: Indicates that device nodes using this OPP Table Node's phandle
@@ -568,3 +570,53 @@
};
};
};
+
+Example 7: Single cluster Quad-core ARM Cortex-A53, OPP points from firmware,
+distinct clock controls but two sets of clock/voltage/current lines.
+
+/ {
+ cpus {
+ #address-cells = <2>;
+ #size-cells = <0>;
+
+ cpu@0 {
+ compatible = "arm,cortex-a53";
+ reg = <0x0 0x100>;
+ next-level-cache = <&A53_L2>;
+ clocks = <&dvfs_controller 0>;
+ operating-points-v2 = <&cpu_opp0_table>;
+ };
+ cpu@1 {
+ compatible = "arm,cortex-a53";
+ reg = <0x0 0x101>;
+ next-level-cache = <&A53_L2>;
+ clocks = <&dvfs_controller 1>;
+ operating-points-v2 = <&cpu_opp0_table>;
+ };
+ cpu@2 {
+ compatible = "arm,cortex-a53";
+ reg = <0x0 0x102>;
+ next-level-cache = <&A53_L2>;
+ clocks = <&dvfs_controller 2>;
+ operating-points-v2 = <&cpu_opp1_table>;
+ };
+ cpu@3 {
+ compatible = "arm,cortex-a53";
+ reg = <0x0 0x103>;
+ next-level-cache = <&A53_L2>;
+ clocks = <&dvfs_controller 3>;
+ operating-points-v2 = <&cpu_opp1_table>;
+ };
+
+ };
+
+ cpu_opp0_table: opp0_table {
+ compatible = "operating-points-v2";
+ opp-shared;
+ };
+
+ cpu_opp1_table: opp1_table {
+ compatible = "operating-points-v2";
+ opp-shared;
+ };
+};
diff --git a/Documentation/driver-api/thermal/power_allocator.rst b/Documentation/driver-api/thermal/power_allocator.rst
index 67b6a32..aa5f665 100644
--- a/Documentation/driver-api/thermal/power_allocator.rst
+++ b/Documentation/driver-api/thermal/power_allocator.rst
@@ -71,7 +71,9 @@
simply an estimate, and may be tuned to affect the aggressiveness of
the thermal ramp. For reference, the sustainable power of a 4" phone
is typically 2000mW, while on a 10" tablet is around 4500mW (may vary
-depending on screen size).
+depending on screen size). It is possible to have the power value
+expressed in an abstract scale. The sustainable power should be aligned
+to the scale used by the related cooling devices.
If you are using device tree, do add it as a property of the
thermal-zone. For example::
@@ -269,3 +271,11 @@
governor, step-wise will also misbehave if you call its throttle()
faster than the normal thermal framework tick (due to interrupts for
example) as it will overreact.
+
+Energy Model requirements
+=========================
+
+Another important thing is the consistent scale of the power values
+provided by the cooling devices. All of the cooling devices in a single
+thermal zone should have power values reported either in milli-Watts
+or scaled to the same 'abstract scale'.
diff --git a/Documentation/power/energy-model.rst b/Documentation/power/energy-model.rst
index a6fb986a..60ac091 100644
--- a/Documentation/power/energy-model.rst
+++ b/Documentation/power/energy-model.rst
@@ -20,6 +20,21 @@
abstraction layer which standardizes the format of power cost tables in the
kernel, hence enabling to avoid redundant work.
+The power values might be expressed in milli-Watts or in an 'abstract scale'.
+Multiple subsystems might use the EM and it is up to the system integrator to
+check that the requirements for the power value scale types are met. An example
+can be found in the Energy-Aware Scheduler documentation
+Documentation/scheduler/sched-energy.rst. For some subsystems, like thermal or
+powercap, power values expressed in an 'abstract scale' might cause issues.
+These subsystems are more interested in estimating the power used in the past,
+so the real milli-Watts might be needed. An example of these requirements can
+be found in the Intelligent Power Allocation documentation in
+Documentation/driver-api/thermal/power_allocator.rst.
+Kernel subsystems might implement automatic detection to check whether the
+registered EM devices have an inconsistent scale (based on an internal EM flag).
+An important thing to keep in mind is that, when the power values are expressed
+in an 'abstract scale', deriving real energy in milli-Joules is not possible.
+
The figure below depicts an example of drivers (Arm-specific here, but the
approach is applicable to any architecture) providing power costs to the EM
framework, and interested clients reading the data from it::
@@ -73,7 +88,7 @@
calling the following API::
int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
- struct em_data_callback *cb, cpumask_t *cpus);
+ struct em_data_callback *cb, cpumask_t *cpus, bool milliwatts);
Drivers must provide a callback function returning <frequency, power> tuples
for each performance state. The callback function provided by the driver is free
@@ -81,6 +96,10 @@
deemed necessary. Only for CPU devices, drivers must specify the CPUs of the
performance domains using cpumask. For other devices than CPUs the last
argument must be set to NULL.
+It is important to set the last argument, 'milliwatts', to the correct value.
+Kernel subsystems which use the EM might rely on this flag to check if all EM
+devices use the same scale. If there are different scales, these subsystems
+might decide to return a warning/error, stop working or panic.
See Section 3. for an example of driver implementing this
callback, and kernel/power/energy_model.c for further documentation on this
API.
@@ -156,7 +175,8 @@
37 nr_opp = foo_get_nr_opp(policy);
38
39 /* And register the new performance domain */
- 40 em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, policy->cpus);
- 41
- 42 return 0;
- 43 }
+ 40 em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, policy->cpus,
+ 41 true);
+ 42
+ 43 return 0;
+ 44 }
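
A minimal sketch of a driver using the extended API above, assuming the
EM_DATA_CB() helper and the active_power() callback signature from
include/linux/energy_model.h of this kernel generation; foo_next_freq_khz()
and foo_get_power_mw() are hypothetical driver helpers:

    #include <linux/energy_model.h>

    static int foo_active_power(unsigned long *power, unsigned long *freq,
                                struct device *dev)
    {
            /* Fill in the <frequency, power> tuple for one perf state. */
            *freq = foo_next_freq_khz(dev, *freq);
            *power = foo_get_power_mw(dev, *freq);  /* real milli-Watts */
            return 0;
    }

    static int foo_register_em(struct device *dev, int nr_opp, cpumask_t *cpus)
    {
            struct em_data_callback em_cb = EM_DATA_CB(foo_active_power);

            /*
             * The power values above are true milli-Watts, so pass 'true'
             * as the new last argument; a driver reporting values on an
             * 'abstract scale' would pass 'false' instead.
             */
            return em_dev_register_perf_domain(dev, nr_opp, &em_cb, cpus,
                                               true);
    }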
diff --git a/Documentation/scheduler/sched-energy.rst b/Documentation/scheduler/sched-energy.rst
index 001e09c..afe02d3 100644
--- a/Documentation/scheduler/sched-energy.rst
+++ b/Documentation/scheduler/sched-energy.rst
@@ -350,6 +350,11 @@
Please also note that the scheduling domains need to be re-built after the
EM has been registered in order to start EAS.
+EAS uses the EM to forecast energy usage, and thus it is more focused on the
+difference in energy between the possible options for task placement. For EAS
+it does not matter whether the EM power values are expressed in milli-Watts or
+in an 'abstract scale'.
+
6.3 - Energy Model complexity
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 972a34d..c36a083 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -326,8 +326,9 @@
#define MSR_PP1_ENERGY_STATUS 0x00000641
#define MSR_PP1_POLICY 0x00000642
-#define MSR_AMD_PKG_ENERGY_STATUS 0xc001029b
#define MSR_AMD_RAPL_POWER_UNIT 0xc0010299
+#define MSR_AMD_CORE_ENERGY_STATUS 0xc001029a
+#define MSR_AMD_PKG_ENERGY_STATUS 0xc001029b
/* Config TDP MSRs */
#define MSR_CONFIG_TDP_NOMINAL 0x00000648
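
The new MSR_AMD_CORE_ENERGY_STATUS definition (and the reordered package one)
are consumed by the RAPL code. A hedged sketch of how such a counter is
typically read and scaled, assuming the common RAPL layout in which bits 12:8
of MSR_AMD_RAPL_POWER_UNIT hold the Energy Status Unit (counter LSB equals
1/2^ESU Joules):

    #include <asm/msr.h>
    #include <asm/msr-index.h>

    static u64 amd_pkg_energy_uj(void)
    {
            u64 unit, energy;

            rdmsrl(MSR_AMD_RAPL_POWER_UNIT, unit);
            rdmsrl(MSR_AMD_PKG_ENERGY_STATUS, energy);

            /* Scale the raw counter to micro-Joules: raw * 1e6 / 2^ESU. */
            return (energy * 1000000) >> ((unit >> 8) & 0x1f);
    }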
diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
index 94d91c6..3586434 100644
--- a/drivers/acpi/device_pm.c
+++ b/drivers/acpi/device_pm.c
@@ -749,7 +749,7 @@ static void acpi_pm_notify_work_func(struct acpi_device_wakeup_context *context)
static DEFINE_MUTEX(acpi_wakeup_lock);
static int __acpi_device_wakeup_enable(struct acpi_device *adev,
- u32 target_state, int max_count)
+ u32 target_state)
{
struct acpi_device_wakeup *wakeup = &adev->wakeup;
acpi_status status;
@@ -757,16 +757,27 @@ static int __acpi_device_wakeup_enable(struct acpi_device *adev,
mutex_lock(&acpi_wakeup_lock);
- if (wakeup->enable_count >= max_count)
+ /*
+ * If the device wakeup power is already enabled, disable it and enable
+ * it again in case it depends on the configuration of subordinate
+ * devices and the conditions have changed since it was enabled last
+ * time.
+ */
+ if (wakeup->enable_count > 0)
+ acpi_disable_wakeup_device_power(adev);
+
+ error = acpi_enable_wakeup_device_power(adev, target_state);
+ if (error) {
+ if (wakeup->enable_count > 0) {
+ acpi_disable_gpe(wakeup->gpe_device, wakeup->gpe_number);
+ wakeup->enable_count = 0;
+ }
goto out;
+ }
if (wakeup->enable_count > 0)
goto inc;
- error = acpi_enable_wakeup_device_power(adev, target_state);
- if (error)
- goto out;
-
status = acpi_enable_gpe(wakeup->gpe_device, wakeup->gpe_number);
if (ACPI_FAILURE(status)) {
acpi_disable_wakeup_device_power(adev);
@@ -778,7 +789,10 @@ static int __acpi_device_wakeup_enable(struct acpi_device *adev,
(unsigned int)wakeup->gpe_number);
inc:
- wakeup->enable_count++;
+ if (wakeup->enable_count < INT_MAX)
+ wakeup->enable_count++;
+ else
+ acpi_handle_info(adev->handle, "Wakeup enable count out of bounds!\n");
out:
mutex_unlock(&acpi_wakeup_lock);
@@ -799,7 +813,7 @@ static int __acpi_device_wakeup_enable(struct acpi_device *adev,
*/
static int acpi_device_wakeup_enable(struct acpi_device *adev, u32 target_state)
{
- return __acpi_device_wakeup_enable(adev, target_state, 1);
+ return __acpi_device_wakeup_enable(adev, target_state);
}
/**
@@ -829,8 +843,12 @@ static void acpi_device_wakeup_disable(struct acpi_device *adev)
mutex_unlock(&acpi_wakeup_lock);
}
-static int __acpi_pm_set_device_wakeup(struct device *dev, bool enable,
- int max_count)
+/**
+ * acpi_pm_set_device_wakeup - Enable/disable remote wakeup for given device.
+ * @dev: Device to enable/disable to generate wakeup events.
+ * @enable: Whether to enable or disable the wakeup functionality.
+ */
+int acpi_pm_set_device_wakeup(struct device *dev, bool enable)
{
struct acpi_device *adev;
int error;
@@ -850,37 +868,15 @@ static int __acpi_pm_set_device_wakeup(struct device *dev, bool enable,
return 0;
}
- error = __acpi_device_wakeup_enable(adev, acpi_target_system_state(),
- max_count);
+ error = __acpi_device_wakeup_enable(adev, acpi_target_system_state());
if (!error)
dev_dbg(dev, "Wakeup enabled by ACPI\n");
return error;
}
-
-/**
- * acpi_pm_set_device_wakeup - Enable/disable remote wakeup for given device.
- * @dev: Device to enable/disable to generate wakeup events.
- * @enable: Whether to enable or disable the wakeup functionality.
- */
-int acpi_pm_set_device_wakeup(struct device *dev, bool enable)
-{
- return __acpi_pm_set_device_wakeup(dev, enable, 1);
-}
EXPORT_SYMBOL_GPL(acpi_pm_set_device_wakeup);
/**
- * acpi_pm_set_bridge_wakeup - Enable/disable remote wakeup for given bridge.
- * @dev: Bridge device to enable/disable to generate wakeup events.
- * @enable: Whether to enable or disable the wakeup functionality.
- */
-int acpi_pm_set_bridge_wakeup(struct device *dev, bool enable)
-{
- return __acpi_pm_set_device_wakeup(dev, enable, INT_MAX);
-}
-EXPORT_SYMBOL_GPL(acpi_pm_set_bridge_wakeup);
-
-/**
* acpi_dev_pm_low_power - Put ACPI device into a low-power state.
* @dev: Device to put into a low-power state.
* @adev: ACPI device node corresponding to @dev.
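
With the max_count parameter gone, wakeup enables are plainly
reference-counted, so former acpi_pm_set_bridge_wakeup() callers can use
acpi_pm_set_device_wakeup() directly. A sketch of the resulting semantics
(foo_dev is a hypothetical device with an ACPI companion):

    static void foo_wakeup_demo(struct device *foo_dev)
    {
            acpi_pm_set_device_wakeup(foo_dev, true);   /* count 1, enabled */
            acpi_pm_set_device_wakeup(foo_dev, true);   /* re-armed, count 2 */
            acpi_pm_set_device_wakeup(foo_dev, false);  /* count 1, still on */
            acpi_pm_set_device_wakeup(foo_dev, false);  /* count 0, disabled */
    }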
diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 7432689..9a14eed 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -21,6 +21,7 @@
#include <linux/suspend.h>
#include <linux/export.h>
#include <linux/cpu.h>
+#include <linux/debugfs.h>
#include "power.h"
@@ -210,6 +211,18 @@ static void genpd_sd_counter_inc(struct generic_pm_domain *genpd)
}
#ifdef CONFIG_DEBUG_FS
+static struct dentry *genpd_debugfs_dir;
+
+static void genpd_debug_add(struct generic_pm_domain *genpd);
+
+static void genpd_debug_remove(struct generic_pm_domain *genpd)
+{
+ struct dentry *d;
+
+ d = debugfs_lookup(genpd->name, genpd_debugfs_dir);
+ debugfs_remove(d);
+}
+
static void genpd_update_accounting(struct generic_pm_domain *genpd)
{
ktime_t delta, now;
@@ -234,6 +247,8 @@ static void genpd_update_accounting(struct generic_pm_domain *genpd)
genpd->accounting_time = now;
}
#else
+static inline void genpd_debug_add(struct generic_pm_domain *genpd) {}
+static inline void genpd_debug_remove(struct generic_pm_domain *genpd) {}
static inline void genpd_update_accounting(struct generic_pm_domain *genpd) {}
#endif
@@ -1142,7 +1157,7 @@ static int genpd_finish_suspend(struct device *dev, bool poweroff)
if (ret)
return ret;
- if (dev->power.wakeup_path && genpd_is_active_wakeup(genpd))
+ if (device_wakeup_path(dev) && genpd_is_active_wakeup(genpd))
return 0;
if (genpd->dev_ops.stop && genpd->dev_ops.start &&
@@ -1196,7 +1211,7 @@ static int genpd_resume_noirq(struct device *dev)
if (IS_ERR(genpd))
return -EINVAL;
- if (dev->power.wakeup_path && genpd_is_active_wakeup(genpd))
+ if (device_wakeup_path(dev) && genpd_is_active_wakeup(genpd))
return pm_generic_resume_noirq(dev);
genpd_lock(genpd);
@@ -1363,41 +1378,60 @@ static void genpd_complete(struct device *dev)
genpd_unlock(genpd);
}
-/**
- * genpd_syscore_switch - Switch power during system core suspend or resume.
- * @dev: Device that normally is marked as "always on" to switch power for.
- *
- * This routine may only be called during the system core (syscore) suspend or
- * resume phase for devices whose "always on" flags are set.
- */
-static void genpd_syscore_switch(struct device *dev, bool suspend)
+static void genpd_switch_state(struct device *dev, bool suspend)
{
struct generic_pm_domain *genpd;
+ bool use_lock;
genpd = dev_to_genpd_safe(dev);
if (!genpd)
return;
+ use_lock = genpd_is_irq_safe(genpd);
+
+ if (use_lock)
+ genpd_lock(genpd);
+
if (suspend) {
genpd->suspended_count++;
- genpd_sync_power_off(genpd, false, 0);
+ genpd_sync_power_off(genpd, use_lock, 0);
} else {
- genpd_sync_power_on(genpd, false, 0);
+ genpd_sync_power_on(genpd, use_lock, 0);
genpd->suspended_count--;
}
+
+ if (use_lock)
+ genpd_unlock(genpd);
}
-void pm_genpd_syscore_poweroff(struct device *dev)
+/**
+ * dev_pm_genpd_suspend - Synchronously try to suspend the genpd for @dev
+ * @dev: The device that is attached to the genpd, that can be suspended.
+ *
+ * This routine should typically be called for a device that needs to be
+ * suspended during the syscore suspend phase. It may also be called during
+ * suspend-to-idle to suspend a corresponding CPU device that is attached to a
+ * genpd.
+ */
+void dev_pm_genpd_suspend(struct device *dev)
{
- genpd_syscore_switch(dev, true);
+ genpd_switch_state(dev, true);
}
-EXPORT_SYMBOL_GPL(pm_genpd_syscore_poweroff);
+EXPORT_SYMBOL_GPL(dev_pm_genpd_suspend);
-void pm_genpd_syscore_poweron(struct device *dev)
+/**
+ * dev_pm_genpd_resume - Synchronously try to resume the genpd for @dev
+ * @dev: The device that is attached to the genpd, which needs to be resumed.
+ *
+ * This routine should typically be called for a device that needs to be resumed
+ * during the syscore resume phase. It may also be called during suspend-to-idle
+ * to resume a corresponding CPU device that is attached to a genpd.
+ */
+void dev_pm_genpd_resume(struct device *dev)
{
- genpd_syscore_switch(dev, false);
+ genpd_switch_state(dev, false);
}
-EXPORT_SYMBOL_GPL(pm_genpd_syscore_poweron);
+EXPORT_SYMBOL_GPL(dev_pm_genpd_resume);
#else /* !CONFIG_PM_SLEEP */
@@ -1954,6 +1988,7 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
mutex_lock(&gpd_list_lock);
list_add(&genpd->gpd_list_node, &gpd_list);
+ genpd_debug_add(genpd);
mutex_unlock(&gpd_list_lock);
return 0;
@@ -1987,6 +2022,7 @@ static int genpd_remove(struct generic_pm_domain *genpd)
kfree(link);
}
+ genpd_debug_remove(genpd);
list_del(&genpd->gpd_list_node);
genpd_unlock(genpd);
cancel_work_sync(&genpd->power_off_work);
@@ -2249,7 +2285,7 @@ int of_genpd_add_provider_onecell(struct device_node *np,
* Save table for faster processing while setting
* performance state.
*/
- genpd->opp_table = dev_pm_opp_get_opp_table_indexed(&genpd->dev, i);
+ genpd->opp_table = dev_pm_opp_get_opp_table(&genpd->dev);
WARN_ON(IS_ERR(genpd->opp_table));
}
@@ -2893,14 +2929,6 @@ core_initcall(genpd_bus_init);
/*** debugfs support ***/
#ifdef CONFIG_DEBUG_FS
-#include <linux/pm.h>
-#include <linux/device.h>
-#include <linux/debugfs.h>
-#include <linux/seq_file.h>
-#include <linux/init.h>
-#include <linux/kobject.h>
-static struct dentry *genpd_debugfs_dir;
-
/*
* TODO: This function is a slightly modified version of rtpm_status_show
* from sysfs.c, so generalize it.
@@ -3177,9 +3205,34 @@ DEFINE_SHOW_ATTRIBUTE(total_idle_time);
DEFINE_SHOW_ATTRIBUTE(devices);
DEFINE_SHOW_ATTRIBUTE(perf_state);
-static int __init genpd_debug_init(void)
+static void genpd_debug_add(struct generic_pm_domain *genpd)
{
struct dentry *d;
+
+ if (!genpd_debugfs_dir)
+ return;
+
+ d = debugfs_create_dir(genpd->name, genpd_debugfs_dir);
+
+ debugfs_create_file("current_state", 0444,
+ d, genpd, &status_fops);
+ debugfs_create_file("sub_domains", 0444,
+ d, genpd, &sub_domains_fops);
+ debugfs_create_file("idle_states", 0444,
+ d, genpd, &idle_states_fops);
+ debugfs_create_file("active_time", 0444,
+ d, genpd, &active_time_fops);
+ debugfs_create_file("total_idle_time", 0444,
+ d, genpd, &total_idle_time_fops);
+ debugfs_create_file("devices", 0444,
+ d, genpd, &devices_fops);
+ if (genpd->set_performance_state)
+ debugfs_create_file("perf_state", 0444,
+ d, genpd, &perf_state_fops);
+}
+
+static int __init genpd_debug_init(void)
+{
struct generic_pm_domain *genpd;
genpd_debugfs_dir = debugfs_create_dir("pm_genpd", NULL);
@@ -3187,25 +3240,8 @@ static int __init genpd_debug_init(void)
debugfs_create_file("pm_genpd_summary", S_IRUGO, genpd_debugfs_dir,
NULL, &summary_fops);
- list_for_each_entry(genpd, &gpd_list, gpd_list_node) {
- d = debugfs_create_dir(genpd->name, genpd_debugfs_dir);
-
- debugfs_create_file("current_state", 0444,
- d, genpd, &status_fops);
- debugfs_create_file("sub_domains", 0444,
- d, genpd, &sub_domains_fops);
- debugfs_create_file("idle_states", 0444,
- d, genpd, &idle_states_fops);
- debugfs_create_file("active_time", 0444,
- d, genpd, &active_time_fops);
- debugfs_create_file("total_idle_time", 0444,
- d, genpd, &total_idle_time_fops);
- debugfs_create_file("devices", 0444,
- d, genpd, &devices_fops);
- if (genpd->set_performance_state)
- debugfs_create_file("perf_state", 0444,
- d, genpd, &perf_state_fops);
- }
+ list_for_each_entry(genpd, &gpd_list, gpd_list_node)
+ genpd_debug_add(genpd);
return 0;
}
diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index c7ac490..4679327 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -441,9 +441,9 @@ static pm_callback_t pm_noirq_op(const struct dev_pm_ops *ops, pm_message_t stat
static void pm_dev_dbg(struct device *dev, pm_message_t state, const char *info)
{
- dev_dbg(dev, "%s%s%s\n", info, pm_verb(state.event),
+ dev_dbg(dev, "%s%s%s driver flags: %x\n", info, pm_verb(state.event),
((state.event & PM_EVENT_SLEEP) && device_may_wakeup(dev)) ?
- ", may wakeup" : "");
+ ", may wakeup" : "", dev->power.driver_flags);
}
static void pm_dev_err(struct device *dev, pm_message_t state, const char *info,
@@ -1359,7 +1359,7 @@ static void dpm_propagate_wakeup_to_parent(struct device *dev)
spin_lock_irq(&parent->power.lock);
- if (dev->power.wakeup_path && !parent->power.ignore_children)
+ if (device_wakeup_path(dev) && !parent->power.ignore_children)
parent->power.wakeup_path = true;
spin_unlock_irq(&parent->power.lock);
@@ -1627,7 +1627,7 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
goto Complete;
/* Avoid direct_complete to let wakeup_path propagate. */
- if (device_may_wakeup(dev) || dev->power.wakeup_path)
+ if (device_may_wakeup(dev) || device_wakeup_path(dev))
dev->power.direct_complete = false;
if (dev->power.direct_complete) {
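
These hunks replace open-coded reads of dev->power.wakeup_path with the
device_wakeup_path() helper. The helper itself is not shown in this section;
presumably it is the obvious wrapper, along the lines of:

    /* Sketch only; assumed to live in include/linux/pm_wakeup.h. */
    #include <linux/device.h>

    static inline bool device_wakeup_path(struct device *dev)
    {
            return dev->power.wakeup_path;
    }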
diff --git a/drivers/clocksource/sh_cmt.c b/drivers/clocksource/sh_cmt.c
index 7607774..7275d95 100644
--- a/drivers/clocksource/sh_cmt.c
+++ b/drivers/clocksource/sh_cmt.c
@@ -658,7 +658,7 @@ static void sh_cmt_clocksource_suspend(struct clocksource *cs)
return;
sh_cmt_stop(ch, FLAG_CLOCKSOURCE);
- pm_genpd_syscore_poweroff(&ch->cmt->pdev->dev);
+ dev_pm_genpd_suspend(&ch->cmt->pdev->dev);
}
static void sh_cmt_clocksource_resume(struct clocksource *cs)
@@ -668,7 +668,7 @@ static void sh_cmt_clocksource_resume(struct clocksource *cs)
if (!ch->cs_enabled)
return;
- pm_genpd_syscore_poweron(&ch->cmt->pdev->dev);
+ dev_pm_genpd_resume(&ch->cmt->pdev->dev);
sh_cmt_start(ch, FLAG_CLOCKSOURCE);
}
@@ -760,7 +760,7 @@ static void sh_cmt_clock_event_suspend(struct clock_event_device *ced)
{
struct sh_cmt_channel *ch = ced_to_sh_cmt(ced);
- pm_genpd_syscore_poweroff(&ch->cmt->pdev->dev);
+ dev_pm_genpd_suspend(&ch->cmt->pdev->dev);
clk_unprepare(ch->cmt->clk);
}
@@ -769,7 +769,7 @@ static void sh_cmt_clock_event_resume(struct clock_event_device *ced)
struct sh_cmt_channel *ch = ced_to_sh_cmt(ced);
clk_prepare(ch->cmt->clk);
- pm_genpd_syscore_poweron(&ch->cmt->pdev->dev);
+ dev_pm_genpd_resume(&ch->cmt->pdev->dev);
}
static int sh_cmt_register_clockevent(struct sh_cmt_channel *ch,
diff --git a/drivers/clocksource/sh_mtu2.c b/drivers/clocksource/sh_mtu2.c
index bfccb31..169a1fc 100644
--- a/drivers/clocksource/sh_mtu2.c
+++ b/drivers/clocksource/sh_mtu2.c
@@ -297,12 +297,12 @@ static int sh_mtu2_clock_event_set_periodic(struct clock_event_device *ced)
static void sh_mtu2_clock_event_suspend(struct clock_event_device *ced)
{
- pm_genpd_syscore_poweroff(&ced_to_sh_mtu2(ced)->mtu->pdev->dev);
+ dev_pm_genpd_suspend(&ced_to_sh_mtu2(ced)->mtu->pdev->dev);
}
static void sh_mtu2_clock_event_resume(struct clock_event_device *ced)
{
- pm_genpd_syscore_poweron(&ced_to_sh_mtu2(ced)->mtu->pdev->dev);
+ dev_pm_genpd_resume(&ced_to_sh_mtu2(ced)->mtu->pdev->dev);
}
static void sh_mtu2_register_clockevent(struct sh_mtu2_channel *ch,
diff --git a/drivers/clocksource/sh_tmu.c b/drivers/clocksource/sh_tmu.c
index d41df9b..b00dec0 100644
--- a/drivers/clocksource/sh_tmu.c
+++ b/drivers/clocksource/sh_tmu.c
@@ -292,7 +292,7 @@ static void sh_tmu_clocksource_suspend(struct clocksource *cs)
if (--ch->enable_count == 0) {
__sh_tmu_disable(ch);
- pm_genpd_syscore_poweroff(&ch->tmu->pdev->dev);
+ dev_pm_genpd_suspend(&ch->tmu->pdev->dev);
}
}
@@ -304,7 +304,7 @@ static void sh_tmu_clocksource_resume(struct clocksource *cs)
return;
if (ch->enable_count++ == 0) {
- pm_genpd_syscore_poweron(&ch->tmu->pdev->dev);
+ dev_pm_genpd_resume(&ch->tmu->pdev->dev);
__sh_tmu_enable(ch);
}
}
@@ -394,12 +394,12 @@ static int sh_tmu_clock_event_next(unsigned long delta,
static void sh_tmu_clock_event_suspend(struct clock_event_device *ced)
{
- pm_genpd_syscore_poweroff(&ced_to_sh_tmu(ced)->tmu->pdev->dev);
+ dev_pm_genpd_suspend(&ced_to_sh_tmu(ced)->tmu->pdev->dev);
}
static void sh_tmu_clock_event_resume(struct clock_event_device *ced)
{
- pm_genpd_syscore_poweron(&ced_to_sh_tmu(ced)->tmu->pdev->dev);
+ dev_pm_genpd_resume(&ced_to_sh_tmu(ced)->tmu->pdev->dev);
}
static void sh_tmu_register_clockevent(struct sh_tmu_channel *ch,
diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm
index 015ec0c..1f73fa7 100644
--- a/drivers/cpufreq/Kconfig.arm
+++ b/drivers/cpufreq/Kconfig.arm
@@ -94,7 +94,7 @@
tristate "Freescale i.MX6 cpufreq support"
depends on ARCH_MXC
depends on REGULATOR_ANATOP
- select NVMEM_IMX_OCOTP
+ depends on NVMEM_IMX_OCOTP || COMPILE_TEST
select PM_OPP
help
This adds cpufreq driver support for Freescale i.MX6 series SoCs.
diff --git a/drivers/cpufreq/armada-8k-cpufreq.c b/drivers/cpufreq/armada-8k-cpufreq.c
index 39e34f50..b0fc5e8 100644
--- a/drivers/cpufreq/armada-8k-cpufreq.c
+++ b/drivers/cpufreq/armada-8k-cpufreq.c
@@ -204,6 +204,12 @@ static void __exit armada_8k_cpufreq_exit(void)
}
module_exit(armada_8k_cpufreq_exit);
+static const struct of_device_id __maybe_unused armada_8k_cpufreq_of_match[] = {
+ { .compatible = "marvell,ap806-cpu-clock" },
+ { },
+};
+MODULE_DEVICE_TABLE(of, armada_8k_cpufreq_of_match);
+
MODULE_AUTHOR("Gregory Clement <[email protected]>");
MODULE_DESCRIPTION("Armada 8K cpufreq driver");
MODULE_LICENSE("GPL");
diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
index f29e8d0..7cc9bd8 100644
--- a/drivers/cpufreq/cppc_cpufreq.c
+++ b/drivers/cpufreq/cppc_cpufreq.c
@@ -26,8 +26,8 @@
/* Minimum struct length needed for the DMI processor entry we want */
#define DMI_ENTRY_PROCESSOR_MIN_LENGTH 48
-/* Offest in the DMI processor structure for the max frequency */
-#define DMI_PROCESSOR_MAX_SPEED 0x14
+/* Offset in the DMI processor structure for the max frequency */
+#define DMI_PROCESSOR_MAX_SPEED 0x14
/*
* These structs contain information parsed from per CPU
@@ -96,11 +96,11 @@ static u64 cppc_get_dmi_max_khz(void)
* and extrapolate the rest
* For perf/freq > Nominal, we use the ratio perf:freq at Nominal for conversion
*/
-static unsigned int cppc_cpufreq_perf_to_khz(struct cppc_cpudata *cpu,
- unsigned int perf)
+static unsigned int cppc_cpufreq_perf_to_khz(struct cppc_cpudata *cpu_data,
+ unsigned int perf)
{
+ struct cppc_perf_caps *caps = &cpu_data->perf_caps;
static u64 max_khz;
- struct cppc_perf_caps *caps = &cpu->perf_caps;
u64 mul, div;
if (caps->lowest_freq && caps->nominal_freq) {
@@ -120,11 +120,11 @@ static unsigned int cppc_cpufreq_perf_to_khz(struct cppc_cpudata *cpu,
return (u64)perf * mul / div;
}
-static unsigned int cppc_cpufreq_khz_to_perf(struct cppc_cpudata *cpu,
- unsigned int freq)
+static unsigned int cppc_cpufreq_khz_to_perf(struct cppc_cpudata *cpu_data,
+ unsigned int freq)
{
+ struct cppc_perf_caps *caps = &cpu_data->perf_caps;
static u64 max_khz;
- struct cppc_perf_caps *caps = &cpu->perf_caps;
u64 mul, div;
if (caps->lowest_freq && caps->nominal_freq) {
@@ -146,32 +146,30 @@ static unsigned int cppc_cpufreq_khz_to_perf(struct cppc_cpudata *cpu,
}
static int cppc_cpufreq_set_target(struct cpufreq_policy *policy,
- unsigned int target_freq,
- unsigned int relation)
+ unsigned int target_freq,
+ unsigned int relation)
{
- struct cppc_cpudata *cpu;
+ struct cppc_cpudata *cpu_data = all_cpu_data[policy->cpu];
struct cpufreq_freqs freqs;
u32 desired_perf;
int ret = 0;
- cpu = all_cpu_data[policy->cpu];
-
- desired_perf = cppc_cpufreq_khz_to_perf(cpu, target_freq);
+ desired_perf = cppc_cpufreq_khz_to_perf(cpu_data, target_freq);
/* Return if it is exactly the same perf */
- if (desired_perf == cpu->perf_ctrls.desired_perf)
+ if (desired_perf == cpu_data->perf_ctrls.desired_perf)
return ret;
- cpu->perf_ctrls.desired_perf = desired_perf;
+ cpu_data->perf_ctrls.desired_perf = desired_perf;
freqs.old = policy->cur;
freqs.new = target_freq;
cpufreq_freq_transition_begin(policy, &freqs);
- ret = cppc_set_perf(cpu->cpu, &cpu->perf_ctrls);
+ ret = cppc_set_perf(cpu_data->cpu, &cpu_data->perf_ctrls);
cpufreq_freq_transition_end(policy, &freqs, ret != 0);
if (ret)
pr_debug("Failed to set target on CPU:%d. ret:%d\n",
- cpu->cpu, ret);
+ cpu_data->cpu, ret);
return ret;
}
@@ -184,28 +182,29 @@ static int cppc_verify_policy(struct cpufreq_policy_data *policy)
static void cppc_cpufreq_stop_cpu(struct cpufreq_policy *policy)
{
- int cpu_num = policy->cpu;
- struct cppc_cpudata *cpu = all_cpu_data[cpu_num];
+ struct cppc_cpudata *cpu_data = all_cpu_data[policy->cpu];
+ struct cppc_perf_caps *caps = &cpu_data->perf_caps;
+ unsigned int cpu = policy->cpu;
int ret;
- cpu->perf_ctrls.desired_perf = cpu->perf_caps.lowest_perf;
+ cpu_data->perf_ctrls.desired_perf = caps->lowest_perf;
- ret = cppc_set_perf(cpu_num, &cpu->perf_ctrls);
+ ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls);
if (ret)
pr_debug("Err setting perf value:%d on CPU:%d. ret:%d\n",
- cpu->perf_caps.lowest_perf, cpu_num, ret);
+ caps->lowest_perf, cpu, ret);
}
/*
* The PCC subspace describes the rate at which platform can accept commands
* on the shared PCC channel (including READs which do not count towards freq
- * trasition requests), so ideally we need to use the PCC values as a fallback
+ * transition requests), so ideally we need to use the PCC values as a fallback
* if we don't have a platform specific transition_delay_us
*/
#ifdef CONFIG_ARM64
#include <asm/cputype.h>
-static unsigned int cppc_cpufreq_get_transition_delay_us(int cpu)
+static unsigned int cppc_cpufreq_get_transition_delay_us(unsigned int cpu)
{
unsigned long implementor = read_cpuid_implementor();
unsigned long part_num = read_cpuid_part_number();
@@ -233,7 +232,7 @@ static unsigned int cppc_cpufreq_get_transition_delay_us(int cpu)
#else
-static unsigned int cppc_cpufreq_get_transition_delay_us(int cpu)
+static unsigned int cppc_cpufreq_get_transition_delay_us(unsigned int cpu)
{
return cppc_get_transition_latency(cpu) / NSEC_PER_USEC;
}
@@ -241,54 +240,57 @@ static unsigned int cppc_cpufreq_get_transition_delay_us(int cpu)
static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
{
- struct cppc_cpudata *cpu;
- unsigned int cpu_num = policy->cpu;
+ struct cppc_cpudata *cpu_data = all_cpu_data[policy->cpu];
+ struct cppc_perf_caps *caps = &cpu_data->perf_caps;
+ unsigned int cpu = policy->cpu;
int ret = 0;
- cpu = all_cpu_data[policy->cpu];
-
- cpu->cpu = cpu_num;
- ret = cppc_get_perf_caps(policy->cpu, &cpu->perf_caps);
+ cpu_data->cpu = cpu;
+ ret = cppc_get_perf_caps(cpu, caps);
if (ret) {
pr_debug("Err reading CPU%d perf capabilities. ret:%d\n",
- cpu_num, ret);
+ cpu, ret);
return ret;
}
/* Convert the lowest and nominal freq from MHz to KHz */
- cpu->perf_caps.lowest_freq *= 1000;
- cpu->perf_caps.nominal_freq *= 1000;
+ caps->lowest_freq *= 1000;
+ caps->nominal_freq *= 1000;
/*
* Set min to lowest nonlinear perf to avoid any efficiency penalty (see
* Section 8.4.7.1.1.5 of ACPI 6.1 spec)
*/
- policy->min = cppc_cpufreq_perf_to_khz(cpu, cpu->perf_caps.lowest_nonlinear_perf);
- policy->max = cppc_cpufreq_perf_to_khz(cpu, cpu->perf_caps.nominal_perf);
+ policy->min = cppc_cpufreq_perf_to_khz(cpu_data,
+ caps->lowest_nonlinear_perf);
+ policy->max = cppc_cpufreq_perf_to_khz(cpu_data,
+ caps->nominal_perf);
/*
* Set cpuinfo.min_freq to Lowest to make the full range of performance
* available if userspace wants to use any perf between lowest & lowest
* nonlinear perf
*/
- policy->cpuinfo.min_freq = cppc_cpufreq_perf_to_khz(cpu, cpu->perf_caps.lowest_perf);
- policy->cpuinfo.max_freq = cppc_cpufreq_perf_to_khz(cpu, cpu->perf_caps.nominal_perf);
+ policy->cpuinfo.min_freq = cppc_cpufreq_perf_to_khz(cpu_data,
+ caps->lowest_perf);
+ policy->cpuinfo.max_freq = cppc_cpufreq_perf_to_khz(cpu_data,
+ caps->nominal_perf);
- policy->transition_delay_us = cppc_cpufreq_get_transition_delay_us(cpu_num);
- policy->shared_type = cpu->shared_type;
+ policy->transition_delay_us = cppc_cpufreq_get_transition_delay_us(cpu);
+ policy->shared_type = cpu_data->shared_type;
if (policy->shared_type == CPUFREQ_SHARED_TYPE_ANY) {
int i;
- cpumask_copy(policy->cpus, cpu->shared_cpu_map);
+ cpumask_copy(policy->cpus, cpu_data->shared_cpu_map);
for_each_cpu(i, policy->cpus) {
- if (unlikely(i == policy->cpu))
+ if (unlikely(i == cpu))
continue;
- memcpy(&all_cpu_data[i]->perf_caps, &cpu->perf_caps,
- sizeof(cpu->perf_caps));
+ memcpy(&all_cpu_data[i]->perf_caps, caps,
+ sizeof(cpu_data->perf_caps));
}
} else if (policy->shared_type == CPUFREQ_SHARED_TYPE_ALL) {
/* Support only SW_ANY for now. */
@@ -296,24 +298,23 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
return -EFAULT;
}
- cpu->cur_policy = policy;
+ cpu_data->cur_policy = policy;
/*
* If 'highest_perf' is greater than 'nominal_perf', we assume CPU Boost
* is supported.
*/
- if (cpu->perf_caps.highest_perf > cpu->perf_caps.nominal_perf)
+ if (caps->highest_perf > caps->nominal_perf)
boost_supported = true;
/* Set policy->cur to max now. The governors will adjust later. */
- policy->cur = cppc_cpufreq_perf_to_khz(cpu,
- cpu->perf_caps.highest_perf);
- cpu->perf_ctrls.desired_perf = cpu->perf_caps.highest_perf;
+ policy->cur = cppc_cpufreq_perf_to_khz(cpu_data, caps->highest_perf);
+ cpu_data->perf_ctrls.desired_perf = caps->highest_perf;
- ret = cppc_set_perf(cpu_num, &cpu->perf_ctrls);
+ ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls);
if (ret)
pr_debug("Err setting perf value:%d on CPU:%d. ret:%d\n",
- cpu->perf_caps.highest_perf, cpu_num, ret);
+ caps->highest_perf, cpu, ret);
return ret;
}
@@ -326,7 +327,7 @@ static inline u64 get_delta(u64 t1, u64 t0)
return (u32)t1 - (u32)t0;
}
-static int cppc_get_rate_from_fbctrs(struct cppc_cpudata *cpu,
+static int cppc_get_rate_from_fbctrs(struct cppc_cpudata *cpu_data,
struct cppc_perf_fb_ctrs fb_ctrs_t0,
struct cppc_perf_fb_ctrs fb_ctrs_t1)
{
@@ -345,33 +346,34 @@ static int cppc_get_rate_from_fbctrs(struct cppc_cpudata *cpu,
delivered_perf = (reference_perf * delta_delivered) /
delta_reference;
else
- delivered_perf = cpu->perf_ctrls.desired_perf;
+ delivered_perf = cpu_data->perf_ctrls.desired_perf;
- return cppc_cpufreq_perf_to_khz(cpu, delivered_perf);
+ return cppc_cpufreq_perf_to_khz(cpu_data, delivered_perf);
}
-static unsigned int cppc_cpufreq_get_rate(unsigned int cpunum)
+static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
{
struct cppc_perf_fb_ctrs fb_ctrs_t0 = {0}, fb_ctrs_t1 = {0};
- struct cppc_cpudata *cpu = all_cpu_data[cpunum];
+ struct cppc_cpudata *cpu_data = all_cpu_data[cpu];
int ret;
- ret = cppc_get_perf_ctrs(cpunum, &fb_ctrs_t0);
+ ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t0);
if (ret)
return ret;
udelay(2); /* 2usec delay between sampling */
- ret = cppc_get_perf_ctrs(cpunum, &fb_ctrs_t1);
+ ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t1);
if (ret)
return ret;
- return cppc_get_rate_from_fbctrs(cpu, fb_ctrs_t0, fb_ctrs_t1);
+ return cppc_get_rate_from_fbctrs(cpu_data, fb_ctrs_t0, fb_ctrs_t1);
}
static int cppc_cpufreq_set_boost(struct cpufreq_policy *policy, int state)
{
- struct cppc_cpudata *cpudata;
+ struct cppc_cpudata *cpu_data = all_cpu_data[policy->cpu];
+ struct cppc_perf_caps *caps = &cpu_data->perf_caps;
int ret;
if (!boost_supported) {
@@ -379,13 +381,12 @@ static int cppc_cpufreq_set_boost(struct cpufreq_policy *policy, int state)
return -EINVAL;
}
- cpudata = all_cpu_data[policy->cpu];
if (state)
- policy->max = cppc_cpufreq_perf_to_khz(cpudata,
- cpudata->perf_caps.highest_perf);
+ policy->max = cppc_cpufreq_perf_to_khz(cpu_data,
+ caps->highest_perf);
else
- policy->max = cppc_cpufreq_perf_to_khz(cpudata,
- cpudata->perf_caps.nominal_perf);
+ policy->max = cppc_cpufreq_perf_to_khz(cpu_data,
+ caps->nominal_perf);
policy->cpuinfo.max_freq = policy->max;
ret = freq_qos_update_request(policy->max_freq_req, policy->max);
@@ -412,17 +413,17 @@ static struct cpufreq_driver cppc_cpufreq_driver = {
* platform specific mechanism. We reuse the desired performance register to
* store the real performance calculated by the platform.
*/
-static unsigned int hisi_cppc_cpufreq_get_rate(unsigned int cpunum)
+static unsigned int hisi_cppc_cpufreq_get_rate(unsigned int cpu)
{
- struct cppc_cpudata *cpudata = all_cpu_data[cpunum];
+ struct cppc_cpudata *cpu_data = all_cpu_data[cpu];
u64 desired_perf;
int ret;
- ret = cppc_get_desired_perf(cpunum, &desired_perf);
+ ret = cppc_get_desired_perf(cpu, &desired_perf);
if (ret < 0)
return -EIO;
- return cppc_cpufreq_perf_to_khz(cpudata, desired_perf);
+ return cppc_cpufreq_perf_to_khz(cpu_data, desired_perf);
}
static void cppc_check_hisi_workaround(void)
@@ -450,8 +451,8 @@ static void cppc_check_hisi_workaround(void)
static int __init cppc_cpufreq_init(void)
{
+ struct cppc_cpudata *cpu_data;
int i, ret = 0;
- struct cppc_cpudata *cpu;
if (acpi_disabled)
return -ENODEV;
@@ -466,8 +467,8 @@ static int __init cppc_cpufreq_init(void)
if (!all_cpu_data[i])
goto out;
- cpu = all_cpu_data[i];
- if (!zalloc_cpumask_var(&cpu->shared_cpu_map, GFP_KERNEL))
+ cpu_data = all_cpu_data[i];
+ if (!zalloc_cpumask_var(&cpu_data->shared_cpu_map, GFP_KERNEL))
goto out;
}
@@ -487,11 +488,11 @@ static int __init cppc_cpufreq_init(void)
out:
for_each_possible_cpu(i) {
- cpu = all_cpu_data[i];
- if (!cpu)
+ cpu_data = all_cpu_data[i];
+ if (!cpu_data)
break;
- free_cpumask_var(cpu->shared_cpu_map);
- kfree(cpu);
+ free_cpumask_var(cpu_data->shared_cpu_map);
+ kfree(cpu_data);
}
kfree(all_cpu_data);
@@ -500,15 +501,15 @@ static int __init cppc_cpufreq_init(void)
static void __exit cppc_cpufreq_exit(void)
{
- struct cppc_cpudata *cpu;
+ struct cppc_cpudata *cpu_data;
int i;
cpufreq_unregister_driver(&cppc_cpufreq_driver);
for_each_possible_cpu(i) {
- cpu = all_cpu_data[i];
- free_cpumask_var(cpu->shared_cpu_map);
- kfree(cpu);
+ cpu_data = all_cpu_data[i];
+ free_cpumask_var(cpu_data->shared_cpu_map);
+ kfree(cpu_data);
}
kfree(all_cpu_data);
diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
index 3776d96..bd2db01 100644
--- a/drivers/cpufreq/cpufreq-dt-platdev.c
+++ b/drivers/cpufreq/cpufreq-dt-platdev.c
@@ -119,10 +119,12 @@ static const struct of_device_id blacklist[] __initconst = {
{ .compatible = "mediatek,mt2712", },
{ .compatible = "mediatek,mt7622", },
{ .compatible = "mediatek,mt7623", },
+ { .compatible = "mediatek,mt8167", },
{ .compatible = "mediatek,mt817x", },
{ .compatible = "mediatek,mt8173", },
{ .compatible = "mediatek,mt8176", },
{ .compatible = "mediatek,mt8183", },
+ { .compatible = "mediatek,mt8516", },
{ .compatible = "nvidia,tegra20", },
{ .compatible = "nvidia,tegra30", },
diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
index e363ae0..ad42345 100644
--- a/drivers/cpufreq/cpufreq-dt.c
+++ b/drivers/cpufreq/cpufreq-dt.c
@@ -30,7 +30,7 @@ struct private_data {
cpumask_var_t cpus;
struct device *cpu_dev;
struct opp_table *opp_table;
- struct opp_table *reg_opp_table;
+ struct cpufreq_frequency_table *freq_table;
bool have_static_opps;
};
@@ -102,7 +102,6 @@ static const char *find_supply_name(struct device *dev)
static int cpufreq_init(struct cpufreq_policy *policy)
{
- struct cpufreq_frequency_table *freq_table;
struct private_data *priv;
struct device *cpu_dev;
struct clk *cpu_clk;
@@ -114,9 +113,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
pr_err("failed to find data for cpu%d\n", policy->cpu);
return -ENODEV;
}
-
cpu_dev = priv->cpu_dev;
- cpumask_copy(policy->cpus, priv->cpus);
cpu_clk = clk_get(cpu_dev, NULL);
if (IS_ERR(cpu_clk)) {
@@ -125,67 +122,32 @@ static int cpufreq_init(struct cpufreq_policy *policy)
return ret;
}
- /*
- * Initialize OPP tables for all policy->cpus. They will be shared by
- * all CPUs which have marked their CPUs shared with OPP bindings.
- *
- * For platforms not using operating-points-v2 bindings, we do this
- * before updating policy->cpus. Otherwise, we will end up creating
- * duplicate OPPs for policy->cpus.
- *
- * OPPs might be populated at runtime, don't check for error here
- */
- if (!dev_pm_opp_of_cpumask_add_table(policy->cpus))
- priv->have_static_opps = true;
+ transition_latency = dev_pm_opp_get_max_transition_latency(cpu_dev);
+ if (!transition_latency)
+ transition_latency = CPUFREQ_ETERNAL;
- /*
- * But we need OPP table to function so if it is not there let's
- * give platform code chance to provide it for us.
- */
- ret = dev_pm_opp_get_opp_count(cpu_dev);
- if (ret <= 0) {
- dev_err(cpu_dev, "OPP table can't be empty\n");
- ret = -ENODEV;
- goto out_free_opp;
- }
-
- ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
- if (ret) {
- dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret);
- goto out_free_opp;
- }
-
+ cpumask_copy(policy->cpus, priv->cpus);
policy->driver_data = priv;
policy->clk = cpu_clk;
- policy->freq_table = freq_table;
-
+ policy->freq_table = priv->freq_table;
policy->suspend_freq = dev_pm_opp_get_suspend_opp_freq(cpu_dev) / 1000;
+ policy->cpuinfo.transition_latency = transition_latency;
+ policy->dvfs_possible_from_any_cpu = true;
/* Support turbo/boost mode */
if (policy_has_boost_freq(policy)) {
/* This gets disabled by core on driver unregister */
ret = cpufreq_enable_boost_support();
if (ret)
- goto out_free_cpufreq_table;
+ goto out_clk_put;
cpufreq_dt_attr[1] = &cpufreq_freq_attr_scaling_boost_freqs;
}
- transition_latency = dev_pm_opp_get_max_transition_latency(cpu_dev);
- if (!transition_latency)
- transition_latency = CPUFREQ_ETERNAL;
-
- policy->cpuinfo.transition_latency = transition_latency;
- policy->dvfs_possible_from_any_cpu = true;
-
dev_pm_opp_of_register_em(cpu_dev, policy->cpus);
return 0;
-out_free_cpufreq_table:
- dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
-out_free_opp:
- if (priv->have_static_opps)
- dev_pm_opp_of_cpumask_remove_table(policy->cpus);
+out_clk_put:
clk_put(cpu_clk);
return ret;
@@ -208,11 +170,6 @@ static int cpufreq_offline(struct cpufreq_policy *policy)
static int cpufreq_exit(struct cpufreq_policy *policy)
{
- struct private_data *priv = policy->driver_data;
-
- dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
- if (priv->have_static_opps)
- dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
clk_put(policy->clk);
return 0;
}
@@ -236,6 +193,7 @@ static int dt_cpufreq_early_init(struct device *dev, int cpu)
{
struct private_data *priv;
struct device *cpu_dev;
+ bool fallback = false;
const char *reg_name;
int ret;
@@ -254,68 +212,86 @@ static int dt_cpufreq_early_init(struct device *dev, int cpu)
if (!alloc_cpumask_var(&priv->cpus, GFP_KERNEL))
return -ENOMEM;
+ cpumask_set_cpu(cpu, priv->cpus);
priv->cpu_dev = cpu_dev;
- /* Try to get OPP table early to ensure resources are available */
- priv->opp_table = dev_pm_opp_get_opp_table(cpu_dev);
- if (IS_ERR(priv->opp_table)) {
- ret = PTR_ERR(priv->opp_table);
- if (ret != -EPROBE_DEFER)
- dev_err(cpu_dev, "failed to get OPP table: %d\n", ret);
- goto free_cpumask;
- }
-
/*
* OPP layer will be taking care of regulators now, but it needs to know
* the name of the regulator first.
*/
reg_name = find_supply_name(cpu_dev);
if (reg_name) {
- priv->reg_opp_table = dev_pm_opp_set_regulators(cpu_dev,
- &reg_name, 1);
- if (IS_ERR(priv->reg_opp_table)) {
- ret = PTR_ERR(priv->reg_opp_table);
+ priv->opp_table = dev_pm_opp_set_regulators(cpu_dev, &reg_name,
+ 1);
+ if (IS_ERR(priv->opp_table)) {
+ ret = PTR_ERR(priv->opp_table);
if (ret != -EPROBE_DEFER)
dev_err(cpu_dev, "failed to set regulators: %d\n",
ret);
- goto put_table;
+ goto free_cpumask;
}
}
- /* Find OPP sharing information so we can fill pri->cpus here */
/* Get OPP-sharing information from "operating-points-v2" bindings */
ret = dev_pm_opp_of_get_sharing_cpus(cpu_dev, priv->cpus);
if (ret) {
if (ret != -ENOENT)
- goto put_reg;
+ goto out;
/*
* operating-points-v2 not supported, fallback to all CPUs share
* OPP for backward compatibility if the platform hasn't set
* sharing CPUs.
*/
- if (dev_pm_opp_get_sharing_cpus(cpu_dev, priv->cpus)) {
- cpumask_setall(priv->cpus);
+ if (dev_pm_opp_get_sharing_cpus(cpu_dev, priv->cpus))
+ fallback = true;
+ }
- /*
- * OPP tables are initialized only for cpu, do it for
- * others as well.
- */
- ret = dev_pm_opp_set_sharing_cpus(cpu_dev, priv->cpus);
- if (ret)
- dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n",
- __func__, ret);
- }
+ /*
+ * Initialize OPP tables for all priv->cpus. They will be shared by
+ * all CPUs which have marked their CPUs shared with OPP bindings.
+ *
+ * For platforms not using operating-points-v2 bindings, we do this
+ * before updating priv->cpus. Otherwise, we will end up creating
+ * duplicate OPPs for the CPUs.
+ *
+ * OPPs might be populated at runtime, don't check for error here.
+ */
+ if (!dev_pm_opp_of_cpumask_add_table(priv->cpus))
+ priv->have_static_opps = true;
+
+ /*
+ * The OPP table must be initialized, statically or dynamically, by this
+ * point.
+ */
+ ret = dev_pm_opp_get_opp_count(cpu_dev);
+ if (ret <= 0) {
+ dev_err(cpu_dev, "OPP table can't be empty\n");
+ ret = -ENODEV;
+ goto out;
+ }
+
+ if (fallback) {
+ cpumask_setall(priv->cpus);
+ ret = dev_pm_opp_set_sharing_cpus(cpu_dev, priv->cpus);
+ if (ret)
+ dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n",
+ __func__, ret);
+ }
+
+ ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &priv->freq_table);
+ if (ret) {
+ dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret);
+ goto out;
}
list_add(&priv->node, &priv_list);
return 0;
-put_reg:
- if (priv->reg_opp_table)
- dev_pm_opp_put_regulators(priv->reg_opp_table);
-put_table:
- dev_pm_opp_put_opp_table(priv->opp_table);
+out:
+ if (priv->have_static_opps)
+ dev_pm_opp_of_cpumask_remove_table(priv->cpus);
+ dev_pm_opp_put_regulators(priv->opp_table);
free_cpumask:
free_cpumask_var(priv->cpus);
return ret;
@@ -326,9 +302,10 @@ static void dt_cpufreq_release(void)
struct private_data *priv, *tmp;
list_for_each_entry_safe(priv, tmp, &priv_list, node) {
- if (priv->reg_opp_table)
- dev_pm_opp_put_regulators(priv->reg_opp_table);
- dev_pm_opp_put_opp_table(priv->opp_table);
+ dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &priv->freq_table);
+ if (priv->have_static_opps)
+ dev_pm_opp_of_cpumask_remove_table(priv->cpus);
+ dev_pm_opp_put_regulators(priv->opp_table);
free_cpumask_var(priv->cpus);
list_del(&priv->node);
}
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 1e7e3f2..c17aa29 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -298,8 +298,10 @@ struct cpufreq_policy *cpufreq_cpu_acquire(unsigned int cpu)
* EXTERNALLY AFFECTING FREQUENCY CHANGES *
*********************************************************************/
-/*
- * adjust_jiffies - adjust the system "loops_per_jiffy"
+/**
+ * adjust_jiffies - Adjust the system "loops_per_jiffy".
+ * @val: CPUFREQ_PRECHANGE or CPUFREQ_POSTCHANGE.
+ * @ci: Frequency change information.
*
* This function alters the system "loops_per_jiffy" for the clock
* speed change. Note that loops_per_jiffy cannot be updated on SMP
@@ -331,14 +333,14 @@ static void adjust_jiffies(unsigned long val, struct cpufreq_freqs *ci)
}
/**
- * cpufreq_notify_transition - Notify frequency transition and adjust_jiffies.
+ * cpufreq_notify_transition - Notify frequency transition and adjust jiffies.
* @policy: cpufreq policy to enable fast frequency switching for.
* @freqs: contain details of the frequency update.
* @state: set to CPUFREQ_PRECHANGE or CPUFREQ_POSTCHANGE.
*
- * This function calls the transition notifiers and the "adjust_jiffies"
- * function. It is called twice on all CPU frequency changes that have
- * external effects.
+ * This function calls the transition notifiers and adjust_jiffies().
+ *
+ * It is called twice on all CPU frequency changes that have external effects.
*/
static void cpufreq_notify_transition(struct cpufreq_policy *policy,
struct cpufreq_freqs *freqs,
@@ -1391,8 +1393,10 @@ static int cpufreq_online(unsigned int cpu)
policy->min_freq_req = kzalloc(2 * sizeof(*policy->min_freq_req),
GFP_KERNEL);
- if (!policy->min_freq_req)
+ if (!policy->min_freq_req) {
+ ret = -ENOMEM;
goto out_destroy_policy;
+ }
ret = freq_qos_add_request(&policy->constraints,
policy->min_freq_req, FREQ_QOS_MIN,
@@ -1429,6 +1433,7 @@ static int cpufreq_online(unsigned int cpu)
if (cpufreq_driver->get && has_target()) {
policy->cur = cpufreq_driver->get(policy->cpu);
if (!policy->cur) {
+ ret = -EIO;
pr_err("%s: ->get() failed\n", __func__);
goto out_destroy_policy;
}
@@ -1646,13 +1651,12 @@ static void cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif)
}
/**
- * cpufreq_out_of_sync - If actual and saved CPU frequency differs, we're
- * in deep trouble.
- * @policy: policy managing CPUs
- * @new_freq: CPU frequency the CPU actually runs at
+ * cpufreq_out_of_sync - Fix up actual and saved CPU frequency difference.
+ * @policy: Policy managing CPUs.
+ * @new_freq: New CPU frequency.
*
- * We adjust to current frequency first, and need to clean up later.
- * So either call to cpufreq_update_policy() or schedule handle_update()).
+ * Adjust to the current frequency first and clean up later by either calling
+ * cpufreq_update_policy(), or scheduling handle_update().
*/
static void cpufreq_out_of_sync(struct cpufreq_policy *policy,
unsigned int new_freq)
@@ -1832,7 +1836,7 @@ int cpufreq_generic_suspend(struct cpufreq_policy *policy)
EXPORT_SYMBOL(cpufreq_generic_suspend);
/**
- * cpufreq_suspend() - Suspend CPUFreq governors
+ * cpufreq_suspend() - Suspend CPUFreq governors.
*
* Called during system wide Suspend/Hibernate cycles for suspending governors
* as some platforms can't change frequency after this point in suspend cycle.
@@ -1868,7 +1872,7 @@ void cpufreq_suspend(void)
}
/**
- * cpufreq_resume() - Resume CPUFreq governors
+ * cpufreq_resume() - Resume CPUFreq governors.
*
* Called during system wide Suspend/Hibernate cycle for resuming governors that
* are suspended with cpufreq_suspend().
@@ -1920,10 +1924,10 @@ bool cpufreq_driver_test_flags(u16 flags)
}
/**
- * cpufreq_get_current_driver - return current driver's name
+ * cpufreq_get_current_driver - Return the current driver's name.
*
- * Return the name string of the currently loaded cpufreq driver
- * or NULL, if none.
+ * Return the name string of the currently registered cpufreq driver or NULL if
+ * none.
*/
const char *cpufreq_get_current_driver(void)
{
@@ -1935,10 +1939,10 @@ const char *cpufreq_get_current_driver(void)
EXPORT_SYMBOL_GPL(cpufreq_get_current_driver);
/**
- * cpufreq_get_driver_data - return current driver data
+ * cpufreq_get_driver_data - Return current driver data.
*
- * Return the private data of the currently loaded cpufreq
- * driver, or NULL if no cpufreq driver is loaded.
+ * Return the private data of the currently registered cpufreq driver, or NULL
+ * if no cpufreq driver has been registered.
*/
void *cpufreq_get_driver_data(void)
{
@@ -1954,17 +1958,16 @@ EXPORT_SYMBOL_GPL(cpufreq_get_driver_data);
*********************************************************************/
/**
- * cpufreq_register_notifier - register a driver with cpufreq
- * @nb: notifier function to register
- * @list: CPUFREQ_TRANSITION_NOTIFIER or CPUFREQ_POLICY_NOTIFIER
+ * cpufreq_register_notifier - Register a notifier with cpufreq.
+ * @nb: notifier function to register.
+ * @list: CPUFREQ_TRANSITION_NOTIFIER or CPUFREQ_POLICY_NOTIFIER.
*
- * Add a driver to one of two lists: either a list of drivers that
- * are notified about clock rate changes (once before and once after
- * the transition), or a list of drivers that are notified about
- * changes in cpufreq policy.
+ * Add a notifier to one of two lists: either a list of notifiers that run on
+ * clock rate changes (once before and once after every transition), or a list
+ * of notifiers that run on cpufreq policy changes.
*
- * This function may sleep, and has the same return conditions as
- * blocking_notifier_chain_register.
+ * This function may sleep and it has the same return values as
+ * blocking_notifier_chain_register().
*/
int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list)
{
@@ -2001,14 +2004,14 @@ int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list)
EXPORT_SYMBOL(cpufreq_register_notifier);
/**
- * cpufreq_unregister_notifier - unregister a driver with cpufreq
- * @nb: notifier block to be unregistered
- * @list: CPUFREQ_TRANSITION_NOTIFIER or CPUFREQ_POLICY_NOTIFIER
+ * cpufreq_unregister_notifier - Unregister a notifier from cpufreq.
+ * @nb: notifier block to be unregistered.
+ * @list: CPUFREQ_TRANSITION_NOTIFIER or CPUFREQ_POLICY_NOTIFIER.
*
- * Remove a driver from the CPU frequency notifier list.
+ * Remove a notifier from one of the cpufreq notifier lists.
*
- * This function may sleep, and has the same return conditions as
- * blocking_notifier_chain_unregister.
+ * This function may sleep and it has the same return values as
+ * blocking_notifier_chain_unregister().
*/
int cpufreq_unregister_notifier(struct notifier_block *nb, unsigned int list)
{
@@ -2123,7 +2126,7 @@ static int __target_intermediate(struct cpufreq_policy *policy,
static int __target_index(struct cpufreq_policy *policy, int index)
{
struct cpufreq_freqs freqs = {.old = policy->cur, .flags = 0};
- unsigned int intermediate_freq = 0;
+ unsigned int restore_freq, intermediate_freq = 0;
unsigned int newfreq = policy->freq_table[index].frequency;
int retval = -EINVAL;
bool notify;
@@ -2131,6 +2134,9 @@ static int __target_index(struct cpufreq_policy *policy, int index)
if (newfreq == policy->cur)
return 0;
+ /* Save last value to restore later on errors */
+ restore_freq = policy->cur;
+
notify = !(cpufreq_driver->flags & CPUFREQ_ASYNC_NOTIFICATION);
if (notify) {
/* Handle switching to intermediate frequency */
@@ -2168,7 +2174,7 @@ static int __target_index(struct cpufreq_policy *policy, int index)
*/
if (unlikely(retval && intermediate_freq)) {
freqs.old = intermediate_freq;
- freqs.new = policy->restore_freq;
+ freqs.new = restore_freq;
cpufreq_freq_transition_begin(policy, &freqs);
cpufreq_freq_transition_end(policy, &freqs, 0);
}
@@ -2203,9 +2209,6 @@ int __cpufreq_driver_target(struct cpufreq_policy *policy,
!(cpufreq_driver->flags & CPUFREQ_NEED_UPDATE_LIMITS))
return 0;
- /* Save last value to restore later on errors */
- policy->restore_freq = policy->cur;
-
if (cpufreq_driver->target)
return cpufreq_driver->target(policy, target_freq, relation);
diff --git a/drivers/cpufreq/cpufreq_stats.c b/drivers/cpufreq/cpufreq_stats.c
index 6cd5c8a..da717f7 100644
--- a/drivers/cpufreq/cpufreq_stats.c
+++ b/drivers/cpufreq/cpufreq_stats.c
@@ -9,9 +9,9 @@
#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/module.h>
+#include <linux/sched/clock.h>
#include <linux/slab.h>
-
struct cpufreq_stats {
unsigned int total_trans;
unsigned long long last_time;
@@ -30,7 +30,7 @@ struct cpufreq_stats {
static void cpufreq_stats_update(struct cpufreq_stats *stats,
unsigned long long time)
{
- unsigned long long cur_time = get_jiffies_64();
+ unsigned long long cur_time = local_clock();
stats->time_in_state[stats->last_index] += cur_time - time;
stats->last_time = cur_time;
@@ -42,7 +42,7 @@ static void cpufreq_stats_reset_table(struct cpufreq_stats *stats)
memset(stats->time_in_state, 0, count * sizeof(u64));
memset(stats->trans_table, 0, count * count * sizeof(int));
- stats->last_time = get_jiffies_64();
+ stats->last_time = local_clock();
stats->total_trans = 0;
/* Adjust for the time elapsed since reset was requested */
@@ -82,18 +82,18 @@ static ssize_t show_time_in_state(struct cpufreq_policy *policy, char *buf)
* before the reset_pending read above.
*/
smp_rmb();
- time = get_jiffies_64() - READ_ONCE(stats->reset_time);
+ time = local_clock() - READ_ONCE(stats->reset_time);
} else {
time = 0;
}
} else {
time = stats->time_in_state[i];
if (i == stats->last_index)
- time += get_jiffies_64() - stats->last_time;
+ time += local_clock() - stats->last_time;
}
len += sprintf(buf + len, "%u %llu\n", stats->freq_table[i],
- jiffies_64_to_clock_t(time));
+ nsec_to_clock_t(time));
}
return len;
}
@@ -109,7 +109,7 @@ static ssize_t store_reset(struct cpufreq_policy *policy, const char *buf,
* Defer resetting of stats to cpufreq_stats_record_transition() to
* avoid races.
*/
- WRITE_ONCE(stats->reset_time, get_jiffies_64());
+ WRITE_ONCE(stats->reset_time, local_clock());
/*
* The memory barrier below is to prevent the readers of reset_time from
* seeing a stale or partially updated value.
@@ -249,7 +249,7 @@ void cpufreq_stats_create_table(struct cpufreq_policy *policy)
stats->freq_table[i++] = pos->frequency;
stats->state_num = i;
- stats->last_time = get_jiffies_64();
+ stats->last_time = local_clock();
stats->last_index = freq_table_get_index(stats, policy->cur);
policy->stats = stats;
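
The stats code switches from get_jiffies_64() to local_clock(), so all
internal times are now kept in nanoseconds and converted to USER_HZ clock
ticks only when printed. A minimal sketch of that conversion, assuming
nsec_to_clock_t() as declared in include/linux/jiffies.h:

    #include <linux/jiffies.h>
    #include <linux/sched/clock.h>

    static u64 foo_time_in_state_ticks(u64 enter_ns)
    {
            u64 delta_ns = local_clock() - enter_ns;  /* ns since entry */

            return nsec_to_clock_t(delta_ns);         /* ticks for sysfs */
    }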
diff --git a/drivers/cpufreq/highbank-cpufreq.c b/drivers/cpufreq/highbank-cpufreq.c
index 5a7f6da..ac57cdd 100644
--- a/drivers/cpufreq/highbank-cpufreq.c
+++ b/drivers/cpufreq/highbank-cpufreq.c
@@ -101,6 +101,13 @@ static int hb_cpufreq_driver_init(void)
}
module_init(hb_cpufreq_driver_init);
+static const struct of_device_id __maybe_unused hb_cpufreq_of_match[] = {
+ { .compatible = "calxeda,highbank" },
+ { .compatible = "calxeda,ecx-2000" },
+ { },
+};
+MODULE_DEVICE_TABLE(of, hb_cpufreq_of_match);
+
MODULE_AUTHOR("Mark Langsdorf <[email protected]>");
MODULE_DESCRIPTION("Calxeda Highbank cpufreq driver");
MODULE_LICENSE("GPL");
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index 36a3ccf..2a4db85 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -2569,14 +2569,13 @@ static int intel_cpufreq_update_pstate(struct cpufreq_policy *policy,
int old_pstate = cpu->pstate.current_pstate;
target_pstate = intel_pstate_prepare_request(cpu, target_pstate);
- if (hwp_active) {
+ if (hwp_active)
intel_cpufreq_adjust_hwp(cpu, target_pstate,
policy->strict_target, fast_switch);
- cpu->pstate.current_pstate = target_pstate;
- } else if (target_pstate != old_pstate) {
+ else if (target_pstate != old_pstate)
intel_cpufreq_adjust_perf_ctl(cpu, target_pstate, fast_switch);
- cpu->pstate.current_pstate = target_pstate;
- }
+
+ cpu->pstate.current_pstate = target_pstate;
intel_cpufreq_trace(cpu, fast_switch ? INTEL_PSTATE_TRACE_FAST_SWITCH :
INTEL_PSTATE_TRACE_TARGET, old_pstate);
diff --git a/drivers/cpufreq/loongson1-cpufreq.c b/drivers/cpufreq/loongson1-cpufreq.c
index 0ea88778..86f6125 100644
--- a/drivers/cpufreq/loongson1-cpufreq.c
+++ b/drivers/cpufreq/loongson1-cpufreq.c
@@ -216,6 +216,7 @@ static struct platform_driver ls1x_cpufreq_platdrv = {
module_platform_driver(ls1x_cpufreq_platdrv);
+MODULE_ALIAS("platform:ls1x-cpufreq");
MODULE_AUTHOR("Kelvin Cheung <[email protected]>");
MODULE_DESCRIPTION("Loongson1 CPUFreq driver");
MODULE_LICENSE("GPL");
diff --git a/drivers/cpufreq/mediatek-cpufreq.c b/drivers/cpufreq/mediatek-cpufreq.c
index 7d1212c..022e3e9 100644
--- a/drivers/cpufreq/mediatek-cpufreq.c
+++ b/drivers/cpufreq/mediatek-cpufreq.c
@@ -532,6 +532,7 @@ static const struct of_device_id mtk_cpufreq_machines[] __initconst = {
{ .compatible = "mediatek,mt2712", },
{ .compatible = "mediatek,mt7622", },
{ .compatible = "mediatek,mt7623", },
+ { .compatible = "mediatek,mt8167", },
{ .compatible = "mediatek,mt817x", },
{ .compatible = "mediatek,mt8173", },
{ .compatible = "mediatek,mt8176", },
@@ -540,6 +541,7 @@ static const struct of_device_id mtk_cpufreq_machines[] __initconst = {
{ }
};
+MODULE_DEVICE_TABLE(of, mtk_cpufreq_machines);
static int __init mtk_cpufreq_driver_init(void)
{
@@ -572,6 +574,7 @@ static int __init mtk_cpufreq_driver_init(void)
pdev = platform_device_register_simple("mtk-cpufreq", -1, NULL, 0);
if (IS_ERR(pdev)) {
pr_err("failed to register mtk-cpufreq platform device\n");
+ platform_driver_unregister(&mtk_cpufreq_platdrv);
return PTR_ERR(pdev);
}
diff --git a/drivers/cpufreq/qcom-cpufreq-nvmem.c b/drivers/cpufreq/qcom-cpufreq-nvmem.c
index d06b378..d1744b5 100644
--- a/drivers/cpufreq/qcom-cpufreq-nvmem.c
+++ b/drivers/cpufreq/qcom-cpufreq-nvmem.c
@@ -397,19 +397,19 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
free_genpd_opp:
for_each_possible_cpu(cpu) {
- if (IS_ERR_OR_NULL(drv->genpd_opp_tables[cpu]))
+ if (IS_ERR(drv->genpd_opp_tables[cpu]))
break;
dev_pm_opp_detach_genpd(drv->genpd_opp_tables[cpu]);
}
kfree(drv->genpd_opp_tables);
free_opp:
for_each_possible_cpu(cpu) {
- if (IS_ERR_OR_NULL(drv->names_opp_tables[cpu]))
+ if (IS_ERR(drv->names_opp_tables[cpu]))
break;
dev_pm_opp_put_prop_name(drv->names_opp_tables[cpu]);
}
for_each_possible_cpu(cpu) {
- if (IS_ERR_OR_NULL(drv->hw_opp_tables[cpu]))
+ if (IS_ERR(drv->hw_opp_tables[cpu]))
break;
dev_pm_opp_put_supported_hw(drv->hw_opp_tables[cpu]);
}
@@ -430,12 +430,9 @@ static int qcom_cpufreq_remove(struct platform_device *pdev)
platform_device_unregister(cpufreq_dt_pdev);
for_each_possible_cpu(cpu) {
- if (drv->names_opp_tables[cpu])
- dev_pm_opp_put_supported_hw(drv->names_opp_tables[cpu]);
- if (drv->hw_opp_tables[cpu])
- dev_pm_opp_put_supported_hw(drv->hw_opp_tables[cpu]);
- if (drv->genpd_opp_tables[cpu])
- dev_pm_opp_detach_genpd(drv->genpd_opp_tables[cpu]);
+ dev_pm_opp_put_supported_hw(drv->names_opp_tables[cpu]);
+ dev_pm_opp_put_supported_hw(drv->hw_opp_tables[cpu]);
+ dev_pm_opp_detach_genpd(drv->genpd_opp_tables[cpu]);
}
kfree(drv->names_opp_tables);
@@ -464,6 +461,7 @@ static const struct of_device_id qcom_cpufreq_match_list[] __initconst = {
{ .compatible = "qcom,msm8960", .data = &match_data_krait },
{},
};
+MODULE_DEVICE_TABLE(of, qcom_cpufreq_match_list);
/*
* Since the driver depends on smem and nvmem drivers, which may
diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c
index 8286205..491a0a2 100644
--- a/drivers/cpufreq/scmi-cpufreq.c
+++ b/drivers/cpufreq/scmi-cpufreq.c
@@ -126,6 +126,7 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
struct scmi_data *priv;
struct cpufreq_frequency_table *freq_table;
struct em_data_callback em_cb = EM_DATA_CB(scmi_get_cpu_power);
+ bool power_scale_mw;
cpu_dev = get_cpu_device(policy->cpu);
if (!cpu_dev) {
@@ -189,7 +190,9 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
policy->fast_switch_possible =
handle->perf_ops->fast_switch_possible(handle, cpu_dev);
- em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, policy->cpus);
+ power_scale_mw = handle->perf_ops->power_scale_mw_get(handle);
+ em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, policy->cpus,
+ power_scale_mw);
return 0;
diff --git a/drivers/cpufreq/scpi-cpufreq.c b/drivers/cpufreq/scpi-cpufreq.c
index 43db05b..e5140ad 100644
--- a/drivers/cpufreq/scpi-cpufreq.c
+++ b/drivers/cpufreq/scpi-cpufreq.c
@@ -233,6 +233,7 @@ static struct platform_driver scpi_cpufreq_platdrv = {
};
module_platform_driver(scpi_cpufreq_platdrv);
+MODULE_ALIAS("platform:scpi-cpufreq");
MODULE_AUTHOR("Sudeep Holla <[email protected]>");
MODULE_DESCRIPTION("ARM SCPI CPUFreq interface driver");
MODULE_LICENSE("GPL v2");
diff --git a/drivers/cpufreq/sti-cpufreq.c b/drivers/cpufreq/sti-cpufreq.c
index 4ac6fb2..fdb0a72 100644
--- a/drivers/cpufreq/sti-cpufreq.c
+++ b/drivers/cpufreq/sti-cpufreq.c
@@ -223,7 +223,8 @@ static int sti_cpufreq_set_opp_info(void)
opp_table = dev_pm_opp_set_supported_hw(dev, version, VERSION_ELEMENTS);
if (IS_ERR(opp_table)) {
dev_err(dev, "Failed to set supported hardware\n");
- return PTR_ERR(opp_table);
+ ret = PTR_ERR(opp_table);
+ goto err_put_prop_name;
}
dev_dbg(dev, "pcode: %d major: %d minor: %d substrate: %d\n",
@@ -232,6 +233,10 @@ static int sti_cpufreq_set_opp_info(void)
version[0], version[1], version[2]);
return 0;
+
+err_put_prop_name:
+ dev_pm_opp_put_prop_name(opp_table);
+ return ret;
}
static int sti_cpufreq_fetch_syscon_registers(void)
@@ -292,6 +297,13 @@ static int sti_cpufreq_init(void)
}
module_init(sti_cpufreq_init);
+static const struct of_device_id __maybe_unused sti_cpufreq_of_match[] = {
+ { .compatible = "st,stih407" },
+ { .compatible = "st,stih410" },
+ { },
+};
+MODULE_DEVICE_TABLE(of, sti_cpufreq_of_match);
+
MODULE_DESCRIPTION("STMicroelectronics CPUFreq/OPP driver");
MODULE_AUTHOR("Ajitpal Singh <[email protected]>");
MODULE_AUTHOR("Lee Jones <[email protected]>");
diff --git a/drivers/cpufreq/sun50i-cpufreq-nvmem.c b/drivers/cpufreq/sun50i-cpufreq-nvmem.c
index 9907a16..2deed8d 100644
--- a/drivers/cpufreq/sun50i-cpufreq-nvmem.c
+++ b/drivers/cpufreq/sun50i-cpufreq-nvmem.c
@@ -167,6 +167,7 @@ static const struct of_device_id sun50i_cpufreq_match_list[] = {
{ .compatible = "allwinner,sun50i-h6" },
{}
};
+MODULE_DEVICE_TABLE(of, sun50i_cpufreq_match_list);
static const struct of_device_id *sun50i_cpufreq_match_node(void)
{
diff --git a/drivers/cpufreq/tegra186-cpufreq.c b/drivers/cpufreq/tegra186-cpufreq.c
index 7eb2c56..e566ea2 100644
--- a/drivers/cpufreq/tegra186-cpufreq.c
+++ b/drivers/cpufreq/tegra186-cpufreq.c
@@ -12,35 +12,52 @@
#include <soc/tegra/bpmp.h>
#include <soc/tegra/bpmp-abi.h>
-#define EDVD_CORE_VOLT_FREQ(core) (0x20 + (core) * 0x4)
-#define EDVD_CORE_VOLT_FREQ_F_SHIFT 0
-#define EDVD_CORE_VOLT_FREQ_F_MASK 0xffff
-#define EDVD_CORE_VOLT_FREQ_V_SHIFT 16
+#define TEGRA186_NUM_CLUSTERS 2
+#define EDVD_OFFSET_A57(core) ((SZ_64K * 6) + (0x20 + (core) * 0x4))
+#define EDVD_OFFSET_DENVER(core) ((SZ_64K * 7) + (0x20 + (core) * 0x4))
+#define EDVD_CORE_VOLT_FREQ_F_SHIFT 0
+#define EDVD_CORE_VOLT_FREQ_F_MASK 0xffff
+#define EDVD_CORE_VOLT_FREQ_V_SHIFT 16
-struct tegra186_cpufreq_cluster_info {
- unsigned long offset;
- int cpus[4];
+struct tegra186_cpufreq_cpu {
unsigned int bpmp_cluster_id;
+ unsigned int edvd_offset;
};
-#define NO_CPU -1
-static const struct tegra186_cpufreq_cluster_info tegra186_clusters[] = {
- /* Denver cluster */
+static const struct tegra186_cpufreq_cpu tegra186_cpus[] = {
+ /* CPU0 - A57 Cluster */
{
- .offset = SZ_64K * 7,
- .cpus = { 1, 2, NO_CPU, NO_CPU },
- .bpmp_cluster_id = 0,
- },
- /* A57 cluster */
- {
- .offset = SZ_64K * 6,
- .cpus = { 0, 3, 4, 5 },
.bpmp_cluster_id = 1,
+ .edvd_offset = EDVD_OFFSET_A57(0)
+ },
+ /* CPU1 - Denver Cluster */
+ {
+ .bpmp_cluster_id = 0,
+ .edvd_offset = EDVD_OFFSET_DENVER(0)
+ },
+ /* CPU2 - Denver Cluster */
+ {
+ .bpmp_cluster_id = 0,
+ .edvd_offset = EDVD_OFFSET_DENVER(1)
+ },
+ /* CPU3 - A57 Cluster */
+ {
+ .bpmp_cluster_id = 1,
+ .edvd_offset = EDVD_OFFSET_A57(1)
+ },
+ /* CPU4 - A57 Cluster */
+ {
+ .bpmp_cluster_id = 1,
+ .edvd_offset = EDVD_OFFSET_A57(2)
+ },
+ /* CPU5 - A57 Cluster */
+ {
+ .bpmp_cluster_id = 1,
+ .edvd_offset = EDVD_OFFSET_A57(3)
},
};
struct tegra186_cpufreq_cluster {
- const struct tegra186_cpufreq_cluster_info *info;
struct cpufreq_frequency_table *table;
u32 ref_clk_khz;
u32 div;
@@ -48,36 +65,18 @@ struct tegra186_cpufreq_cluster {
struct tegra186_cpufreq_data {
void __iomem *regs;
-
- size_t num_clusters;
struct tegra186_cpufreq_cluster *clusters;
+ const struct tegra186_cpufreq_cpu *cpus;
};
static int tegra186_cpufreq_init(struct cpufreq_policy *policy)
{
struct tegra186_cpufreq_data *data = cpufreq_get_driver_data();
- unsigned int i;
+ unsigned int cluster = data->cpus[policy->cpu].bpmp_cluster_id;
- for (i = 0; i < data->num_clusters; i++) {
- struct tegra186_cpufreq_cluster *cluster = &data->clusters[i];
- const struct tegra186_cpufreq_cluster_info *info =
- cluster->info;
- int core;
-
- for (core = 0; core < ARRAY_SIZE(info->cpus); core++) {
- if (info->cpus[core] == policy->cpu)
- break;
- }
- if (core == ARRAY_SIZE(info->cpus))
- continue;
-
- policy->driver_data =
- data->regs + info->offset + EDVD_CORE_VOLT_FREQ(core);
- policy->freq_table = cluster->table;
- break;
- }
-
+ policy->freq_table = data->clusters[cluster].table;
policy->cpuinfo.transition_latency = 300 * 1000;
+ policy->driver_data = NULL;
return 0;
}
@@ -85,11 +84,12 @@ static int tegra186_cpufreq_init(struct cpufreq_policy *policy)
static int tegra186_cpufreq_set_target(struct cpufreq_policy *policy,
unsigned int index)
{
+ struct tegra186_cpufreq_data *data = cpufreq_get_driver_data();
struct cpufreq_frequency_table *tbl = policy->freq_table + index;
- void __iomem *edvd_reg = policy->driver_data;
+ unsigned int edvd_offset = data->cpus[policy->cpu].edvd_offset;
u32 edvd_val = tbl->driver_data;
- writel(edvd_val, edvd_reg);
+ writel(edvd_val, data->regs + edvd_offset);
return 0;
}
@@ -97,35 +97,22 @@ static int tegra186_cpufreq_set_target(struct cpufreq_policy *policy,
static unsigned int tegra186_cpufreq_get(unsigned int cpu)
{
struct tegra186_cpufreq_data *data = cpufreq_get_driver_data();
+ struct tegra186_cpufreq_cluster *cluster;
struct cpufreq_policy *policy;
- void __iomem *edvd_reg;
- unsigned int i, freq = 0;
+ unsigned int edvd_offset, cluster_id;
u32 ndiv;
policy = cpufreq_cpu_get(cpu);
if (!policy)
return 0;
- edvd_reg = policy->driver_data;
- ndiv = readl(edvd_reg) & EDVD_CORE_VOLT_FREQ_F_MASK;
-
- for (i = 0; i < data->num_clusters; i++) {
- struct tegra186_cpufreq_cluster *cluster = &data->clusters[i];
- int core;
-
- for (core = 0; core < ARRAY_SIZE(cluster->info->cpus); core++) {
- if (cluster->info->cpus[core] != policy->cpu)
- continue;
-
- freq = (cluster->ref_clk_khz * ndiv) / cluster->div;
- goto out;
- }
- }
-
-out:
+ edvd_offset = data->cpus[policy->cpu].edvd_offset;
+ ndiv = readl(data->regs + edvd_offset) & EDVD_CORE_VOLT_FREQ_F_MASK;
+ cluster_id = data->cpus[policy->cpu].bpmp_cluster_id;
+ cluster = &data->clusters[cluster_id];
cpufreq_cpu_put(policy);
- return freq;
+ return (cluster->ref_clk_khz * ndiv) / cluster->div;
}
static struct cpufreq_driver tegra186_cpufreq_driver = {
@@ -141,7 +128,7 @@ static struct cpufreq_driver tegra186_cpufreq_driver = {
static struct cpufreq_frequency_table *init_vhint_table(
struct platform_device *pdev, struct tegra_bpmp *bpmp,
- struct tegra186_cpufreq_cluster *cluster)
+ struct tegra186_cpufreq_cluster *cluster, unsigned int cluster_id)
{
struct cpufreq_frequency_table *table;
struct mrq_cpu_vhint_request req;
@@ -160,7 +147,7 @@ static struct cpufreq_frequency_table *init_vhint_table(
memset(&req, 0, sizeof(req));
req.addr = phys;
- req.cluster_id = cluster->info->bpmp_cluster_id;
+ req.cluster_id = cluster_id;
memset(&msg, 0, sizeof(msg));
msg.mrq = MRQ_CPU_VHINT;
@@ -234,12 +221,12 @@ static int tegra186_cpufreq_probe(struct platform_device *pdev)
if (!data)
return -ENOMEM;
- data->clusters = devm_kcalloc(&pdev->dev, ARRAY_SIZE(tegra186_clusters),
+ data->clusters = devm_kcalloc(&pdev->dev, TEGRA186_NUM_CLUSTERS,
sizeof(*data->clusters), GFP_KERNEL);
if (!data->clusters)
return -ENOMEM;
- data->num_clusters = ARRAY_SIZE(tegra186_clusters);
+ data->cpus = tegra186_cpus;
bpmp = tegra_bpmp_get(&pdev->dev);
if (IS_ERR(bpmp))
@@ -251,11 +238,10 @@ static int tegra186_cpufreq_probe(struct platform_device *pdev)
goto put_bpmp;
}
- for (i = 0; i < data->num_clusters; i++) {
+ for (i = 0; i < TEGRA186_NUM_CLUSTERS; i++) {
struct tegra186_cpufreq_cluster *cluster = &data->clusters[i];
- cluster->info = &tegra186_clusters[i];
- cluster->table = init_vhint_table(pdev, bpmp, cluster);
+ cluster->table = init_vhint_table(pdev, bpmp, cluster, i);
if (IS_ERR(cluster->table)) {
err = PTR_ERR(cluster->table);
goto put_bpmp;
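The tegra186 rework above trades the per-cluster CPU lists and nested search loops for a flat table indexed by logical CPU number, so both the init and get paths become constant-time lookups. A toy illustration of that shape; the cluster IDs and offsets below are invented, not the real Tegra register map:

#include <stdio.h>

struct cpu_info {
	unsigned int cluster_id;
	unsigned int edvd_offset;
};

static const struct cpu_info cpus[] = {
	{ 1, 0x60020 },	/* CPU0, cluster 1 */
	{ 0, 0x70020 },	/* CPU1, cluster 0 */
	{ 0, 0x70024 },	/* CPU2, cluster 0 */
	{ 1, 0x60024 },	/* CPU3, cluster 1 */
};

int main(void)
{
	unsigned int cpu = 2;

	/* O(1) lookup instead of iterating clusters and cores */
	printf("cpu%u: cluster %u, offset %#x\n", cpu,
	       cpus[cpu].cluster_id, cpus[cpu].edvd_offset);
	return 0;
}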
diff --git a/drivers/cpufreq/tegra194-cpufreq.c b/drivers/cpufreq/tegra194-cpufreq.c
index e1d931c..6a67f36 100644
--- a/drivers/cpufreq/tegra194-cpufreq.c
+++ b/drivers/cpufreq/tegra194-cpufreq.c
@@ -21,7 +21,6 @@
#define KHZ 1000
#define REF_CLK_MHZ 408 /* 408 MHz */
#define US_DELAY 500
-#define US_DELAY_MIN 2
#define CPUFREQ_TBL_STEP_HZ (50 * KHZ * KHZ)
#define MAX_CNT ~0U
@@ -44,7 +43,6 @@ struct tegra194_cpufreq_data {
struct tegra_cpu_ctr {
u32 cpu;
- u32 delay;
u32 coreclk_cnt, last_coreclk_cnt;
u32 refclk_cnt, last_refclk_cnt;
};
@@ -112,7 +110,7 @@ static void tegra_read_counters(struct work_struct *work)
val = read_freq_feedback();
c->last_refclk_cnt = lower_32_bits(val);
c->last_coreclk_cnt = upper_32_bits(val);
- udelay(c->delay);
+ udelay(US_DELAY);
val = read_freq_feedback();
c->refclk_cnt = lower_32_bits(val);
c->coreclk_cnt = upper_32_bits(val);
@@ -139,7 +137,7 @@ static void tegra_read_counters(struct work_struct *work)
 * @cpu - logical cpu whose frequency is to be updated
* Returns freq in KHz on success, 0 if cpu is offline
*/
-static unsigned int tegra194_get_speed_common(u32 cpu, u32 delay)
+static unsigned int tegra194_calculate_speed(u32 cpu)
{
struct read_counters_work read_counters_work;
struct tegra_cpu_ctr c;
@@ -153,7 +151,6 @@ static unsigned int tegra194_get_speed_common(u32 cpu, u32 delay)
* interrupts enabled.
*/
read_counters_work.c.cpu = cpu;
- read_counters_work.c.delay = delay;
INIT_WORK_ONSTACK(&read_counters_work.work, tegra_read_counters);
queue_work_on(cpu, read_counters_wq, &read_counters_work.work);
flush_work(&read_counters_work.work);
@@ -180,9 +177,61 @@ static unsigned int tegra194_get_speed_common(u32 cpu, u32 delay)
return (rate_mhz * KHZ); /* in KHz */
}
+static void get_cpu_ndiv(void *ndiv)
+{
+ u64 ndiv_val;
+
+ asm volatile("mrs %0, s3_0_c15_c0_4" : "=r" (ndiv_val) : );
+
+ *(u64 *)ndiv = ndiv_val;
+}
+
+static void set_cpu_ndiv(void *data)
+{
+ struct cpufreq_frequency_table *tbl = data;
+ u64 ndiv_val = (u64)tbl->driver_data;
+
+ asm volatile("msr s3_0_c15_c0_4, %0" : : "r" (ndiv_val));
+}
+
static unsigned int tegra194_get_speed(u32 cpu)
{
- return tegra194_get_speed_common(cpu, US_DELAY);
+ struct tegra194_cpufreq_data *data = cpufreq_get_driver_data();
+ struct cpufreq_frequency_table *pos;
+ unsigned int rate;
+ u64 ndiv;
+ int ret;
+ u32 cl;
+
+ smp_call_function_single(cpu, get_cpu_cluster, &cl, true);
+
+ /* reconstruct actual cpu freq using counters */
+ rate = tegra194_calculate_speed(cpu);
+
+ /* get last written ndiv value */
+ ret = smp_call_function_single(cpu, get_cpu_ndiv, &ndiv, true);
+ if (WARN_ON_ONCE(ret))
+ return rate;
+
+ /*
+	 * If the reconstructed frequency has an acceptable delta from
+	 * the last written value, then return the frequency that
+	 * corresponds to the last written ndiv value in freq_table.
+	 * This is done to return a consistent value.
+ */
+ cpufreq_for_each_valid_entry(pos, data->tables[cl]) {
+ if (pos->driver_data != ndiv)
+ continue;
+
+ if (abs(pos->frequency - rate) > 115200) {
+ pr_warn("cpufreq: cpu%d,cur:%u,set:%u,set ndiv:%llu\n",
+ cpu, rate, pos->frequency, ndiv);
+ } else {
+ rate = pos->frequency;
+ }
+ break;
+ }
+ return rate;
}
static int tegra194_cpufreq_init(struct cpufreq_policy *policy)
@@ -196,9 +245,6 @@ static int tegra194_cpufreq_init(struct cpufreq_policy *policy)
if (cl >= data->num_clusters)
return -EINVAL;
- /* boot freq */
- policy->cur = tegra194_get_speed_common(policy->cpu, US_DELAY_MIN);
-
/* set same policy for all cpus in a cluster */
for (cpu = (cl * 2); cpu < ((cl + 1) * 2); cpu++)
cpumask_set_cpu(cpu, policy->cpus);
@@ -209,14 +255,6 @@ static int tegra194_cpufreq_init(struct cpufreq_policy *policy)
return 0;
}
-static void set_cpu_ndiv(void *data)
-{
- struct cpufreq_frequency_table *tbl = data;
- u64 ndiv_val = (u64)tbl->driver_data;
-
- asm volatile("msr s3_0_c15_c0_4, %0" : : "r" (ndiv_val));
-}
-
static int tegra194_cpufreq_set_target(struct cpufreq_policy *policy,
unsigned int index)
{
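tegra194_get_speed() above now cross-checks the counter-derived rate against the frequency-table entry for the last written ndiv and, when the delta is within tolerance, reports the table value so readers see a consistent number. A standalone sketch of that decision, reusing the 115200 kHz threshold from the patch; the table type and values are illustrative:

#include <stdio.h>
#include <stdlib.h>

#define TOLERANCE_KHZ 115200

struct entry {
	unsigned long long ndiv;
	unsigned int freq_khz;
};

static unsigned int snap_rate(unsigned int rate, unsigned long long ndiv,
			      const struct entry *tbl, int n)
{
	for (int i = 0; i < n; i++) {
		if (tbl[i].ndiv != ndiv)
			continue;
		if (abs((int)(tbl[i].freq_khz - rate)) <= TOLERANCE_KHZ)
			return tbl[i].freq_khz;	/* consistent value */
		break;	/* large delta: trust the measured rate */
	}
	return rate;
}

int main(void)
{
	const struct entry tbl[] = { { 10, 1000000 }, { 20, 2000000 } };

	/* measured 1900000 kHz with last ndiv 20 snaps to 2000000 */
	printf("%u kHz\n", snap_rate(1900000, 20, tbl, 2));
	return 0;
}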
diff --git a/drivers/cpufreq/vexpress-spc-cpufreq.c b/drivers/cpufreq/vexpress-spc-cpufreq.c
index e89b905..f711d8e 100644
--- a/drivers/cpufreq/vexpress-spc-cpufreq.c
+++ b/drivers/cpufreq/vexpress-spc-cpufreq.c
@@ -591,6 +591,7 @@ static struct platform_driver ve_spc_cpufreq_platdrv = {
};
module_platform_driver(ve_spc_cpufreq_platdrv);
+MODULE_ALIAS("platform:vexpress-spc-cpufreq");
MODULE_AUTHOR("Viresh Kumar <[email protected]>");
MODULE_AUTHOR("Sudeep Holla <[email protected]>");
MODULE_DESCRIPTION("Vexpress SPC ARM big LITTLE cpufreq driver");
diff --git a/drivers/cpuidle/cpuidle-psci-domain.c b/drivers/cpuidle/cpuidle-psci-domain.c
index 4a031c6..ff2c3f8e 100644
--- a/drivers/cpuidle/cpuidle-psci-domain.c
+++ b/drivers/cpuidle/cpuidle-psci-domain.c
@@ -327,6 +327,8 @@ struct device *psci_dt_attach_cpu(int cpu)
if (cpu_online(cpu))
pm_runtime_get_sync(dev);
+ dev_pm_syscore_device(dev, true);
+
return dev;
}
diff --git a/drivers/cpuidle/cpuidle-psci.c b/drivers/cpuidle/cpuidle-psci.c
index d928b37..b51b5df 100644
--- a/drivers/cpuidle/cpuidle-psci.c
+++ b/drivers/cpuidle/cpuidle-psci.c
@@ -19,6 +19,7 @@
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/psci.h>
+#include <linux/pm_domain.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
#include <linux/string.h>
@@ -52,8 +53,9 @@ static inline int psci_enter_state(int idx, u32 state)
return CPU_PM_CPU_IDLE_ENTER_PARAM(psci_cpu_suspend_enter, idx, state);
}
-static int psci_enter_domain_idle_state(struct cpuidle_device *dev,
- struct cpuidle_driver *drv, int idx)
+static int __psci_enter_domain_idle_state(struct cpuidle_device *dev,
+ struct cpuidle_driver *drv, int idx,
+ bool s2idle)
{
struct psci_cpuidle_data *data = this_cpu_ptr(&psci_cpuidle_data);
u32 *states = data->psci_states;
@@ -66,7 +68,12 @@ static int psci_enter_domain_idle_state(struct cpuidle_device *dev,
return -1;
	/* Do runtime PM to manage a hierarchical CPU topology. */
- RCU_NONIDLE(pm_runtime_put_sync_suspend(pd_dev));
+ rcu_irq_enter_irqson();
+ if (s2idle)
+ dev_pm_genpd_suspend(pd_dev);
+ else
+ pm_runtime_put_sync_suspend(pd_dev);
+ rcu_irq_exit_irqson();
state = psci_get_domain_state();
if (!state)
@@ -74,7 +81,12 @@ static int psci_enter_domain_idle_state(struct cpuidle_device *dev,
ret = psci_cpu_suspend_enter(state) ? -1 : idx;
- RCU_NONIDLE(pm_runtime_get_sync(pd_dev));
+ rcu_irq_enter_irqson();
+ if (s2idle)
+ dev_pm_genpd_resume(pd_dev);
+ else
+ pm_runtime_get_sync(pd_dev);
+ rcu_irq_exit_irqson();
cpu_pm_exit();
@@ -83,6 +95,19 @@ static int psci_enter_domain_idle_state(struct cpuidle_device *dev,
return ret;
}
+static int psci_enter_domain_idle_state(struct cpuidle_device *dev,
+ struct cpuidle_driver *drv, int idx)
+{
+ return __psci_enter_domain_idle_state(dev, drv, idx, false);
+}
+
+static int psci_enter_s2idle_domain_idle_state(struct cpuidle_device *dev,
+ struct cpuidle_driver *drv,
+ int idx)
+{
+ return __psci_enter_domain_idle_state(dev, drv, idx, true);
+}
+
static int psci_idle_cpuhp_up(unsigned int cpu)
{
struct device *pd_dev = __this_cpu_read(psci_cpuidle_data.dev);
@@ -170,6 +195,7 @@ static int psci_dt_cpu_init_topology(struct cpuidle_driver *drv,
* deeper states.
*/
drv->states[state_count - 1].enter = psci_enter_domain_idle_state;
+ drv->states[state_count - 1].enter_s2idle = psci_enter_s2idle_domain_idle_state;
psci_cpuidle_use_cpuhp = true;
return 0;
diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index 83af15f..ef2ea1b 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -368,6 +368,19 @@ void cpuidle_reflect(struct cpuidle_device *dev, int index)
cpuidle_curr_governor->reflect(dev, index);
}
+/*
+ * The min polling interval of 10usec is a guess. It assumes that
+ * for most users a single ping-pong workload like
+ * perf bench pipe would generally complete within 10usec, but
+ * this is hardware dependent. The actual time can be estimated with
+ *
+ * perf bench sched pipe -l 10000
+ *
+ * Run multiple times to avoid cpufreq effects.
+ */
+#define CPUIDLE_POLL_MIN 10000
+#define CPUIDLE_POLL_MAX (TICK_NSEC / 16)
+
/**
* cpuidle_poll_time - return amount of time to poll for,
* governors can override dev->poll_limit_ns if necessary
@@ -382,15 +395,23 @@ u64 cpuidle_poll_time(struct cpuidle_driver *drv,
int i;
u64 limit_ns;
+ BUILD_BUG_ON(CPUIDLE_POLL_MIN > CPUIDLE_POLL_MAX);
+
if (dev->poll_limit_ns)
return dev->poll_limit_ns;
- limit_ns = TICK_NSEC;
+ limit_ns = CPUIDLE_POLL_MAX;
for (i = 1; i < drv->state_count; i++) {
+ u64 state_limit;
+
if (dev->states_usage[i].disable)
continue;
- limit_ns = drv->states[i].target_residency_ns;
+ state_limit = drv->states[i].target_residency_ns;
+ if (state_limit < CPUIDLE_POLL_MIN)
+ continue;
+
+ limit_ns = min_t(u64, state_limit, CPUIDLE_POLL_MAX);
break;
}
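The cpuidle_poll_time() change above bounds the polling window to [CPUIDLE_POLL_MIN, CPUIDLE_POLL_MAX] and ignores states whose target residency falls below the minimum. A standalone sketch of the selection logic, assuming an invented state table and a 4 ms tick in place of TICK_NSEC:

#include <stdint.h>
#include <stdio.h>

#define POLL_MIN 10000ull		/* 10 usec, in ns */
#define POLL_MAX (4000000ull / 16)	/* assumed 4 ms tick / 16 */

static uint64_t poll_time(const uint64_t *residency_ns,
			  const int *disabled, int count)
{
	uint64_t limit = POLL_MAX;

	for (int i = 1; i < count; i++) {
		if (disabled[i] || residency_ns[i] < POLL_MIN)
			continue;
		limit = residency_ns[i] < POLL_MAX ?
			residency_ns[i] : POLL_MAX;
		break;
	}
	return limit;
}

int main(void)
{
	const uint64_t res[] = { 0, 5000, 20000, 2000000 };
	const int off[] = { 0, 0, 0, 0 };

	/* state 1 sits below POLL_MIN, so state 2 sets the limit */
	printf("%llu ns\n", (unsigned long long)poll_time(res, off, 4));
	return 0;
}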
diff --git a/drivers/devfreq/exynos-bus.c b/drivers/devfreq/exynos-bus.c
index 20447a4..e689101 100644
--- a/drivers/devfreq/exynos-bus.c
+++ b/drivers/devfreq/exynos-bus.c
@@ -161,10 +161,8 @@ static void exynos_bus_exit(struct device *dev)
dev_pm_opp_of_remove_table(dev);
clk_disable_unprepare(bus->clk);
- if (bus->opp_table) {
- dev_pm_opp_put_regulators(bus->opp_table);
- bus->opp_table = NULL;
- }
+ dev_pm_opp_put_regulators(bus->opp_table);
+ bus->opp_table = NULL;
}
static void exynos_bus_passive_exit(struct device *dev)
@@ -461,10 +459,8 @@ static int exynos_bus_probe(struct platform_device *pdev)
dev_pm_opp_of_remove_table(dev);
clk_disable_unprepare(bus->clk);
err_reg:
- if (!passive) {
- dev_pm_opp_put_regulators(bus->opp_table);
- bus->opp_table = NULL;
- }
+ dev_pm_opp_put_regulators(bus->opp_table);
+ bus->opp_table = NULL;
return ret;
}
diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c
index 82fb3ba..e374b11 100644
--- a/drivers/firmware/arm_scmi/perf.c
+++ b/drivers/firmware/arm_scmi/perf.c
@@ -750,6 +750,13 @@ static bool scmi_fast_switch_possible(const struct scmi_handle *handle,
return dom->fc_info && dom->fc_info->level_set_addr;
}
+static bool scmi_power_scale_mw_get(const struct scmi_handle *handle)
+{
+ struct scmi_perf_info *pi = handle->perf_priv;
+
+ return pi->power_scale_mw;
+}
+
static const struct scmi_perf_ops perf_ops = {
.limits_set = scmi_perf_limits_set,
.limits_get = scmi_perf_limits_get,
@@ -762,6 +769,7 @@ static const struct scmi_perf_ops perf_ops = {
.freq_get = scmi_dvfs_freq_get,
.est_power_get = scmi_dvfs_est_power_get,
.fast_switch_possible = scmi_fast_switch_possible,
+ .power_scale_mw_get = scmi_power_scale_mw_get,
};
static int scmi_perf_set_notify_enabled(const struct scmi_handle *handle,
diff --git a/drivers/gpu/drm/lima/lima_devfreq.c b/drivers/gpu/drm/lima/lima_devfreq.c
index bbe02817..e7b7b8d 100644
--- a/drivers/gpu/drm/lima/lima_devfreq.c
+++ b/drivers/gpu/drm/lima/lima_devfreq.c
@@ -110,15 +110,10 @@ void lima_devfreq_fini(struct lima_device *ldev)
devfreq->opp_of_table_added = false;
}
- if (devfreq->regulators_opp_table) {
- dev_pm_opp_put_regulators(devfreq->regulators_opp_table);
- devfreq->regulators_opp_table = NULL;
- }
-
- if (devfreq->clkname_opp_table) {
- dev_pm_opp_put_clkname(devfreq->clkname_opp_table);
- devfreq->clkname_opp_table = NULL;
- }
+ dev_pm_opp_put_regulators(devfreq->regulators_opp_table);
+ dev_pm_opp_put_clkname(devfreq->clkname_opp_table);
+ devfreq->regulators_opp_table = NULL;
+ devfreq->clkname_opp_table = NULL;
}
int lima_devfreq_init(struct lima_device *ldev)
diff --git a/drivers/gpu/drm/panfrost/panfrost_devfreq.c b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
index 8ab025d..97b5abc 100644
--- a/drivers/gpu/drm/panfrost/panfrost_devfreq.c
+++ b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
@@ -170,10 +170,8 @@ void panfrost_devfreq_fini(struct panfrost_device *pfdev)
pfdevfreq->opp_of_table_added = false;
}
- if (pfdevfreq->regulators_opp_table) {
- dev_pm_opp_put_regulators(pfdevfreq->regulators_opp_table);
- pfdevfreq->regulators_opp_table = NULL;
- }
+ dev_pm_opp_put_regulators(pfdevfreq->regulators_opp_table);
+ pfdevfreq->regulators_opp_table = NULL;
}
void panfrost_devfreq_resume(struct panfrost_device *pfdev)
diff --git a/drivers/i2c/busses/i2c-stm32f7.c b/drivers/i2c/busses/i2c-stm32f7.c
index f41f51a..9aa8e65 100644
--- a/drivers/i2c/busses/i2c-stm32f7.c
+++ b/drivers/i2c/busses/i2c-stm32f7.c
@@ -2322,7 +2322,7 @@ static int stm32f7_i2c_suspend(struct device *dev)
i2c_mark_adapter_suspended(&i2c_dev->adap);
- if (!device_may_wakeup(dev) && !dev->power.wakeup_path) {
+ if (!device_may_wakeup(dev) && !device_wakeup_path(dev)) {
ret = stm32f7_i2c_regs_backup(i2c_dev);
if (ret < 0) {
i2c_mark_adapter_resumed(&i2c_dev->adap);
@@ -2341,7 +2341,7 @@ static int stm32f7_i2c_resume(struct device *dev)
struct stm32f7_i2c_dev *i2c_dev = dev_get_drvdata(dev);
int ret;
- if (!device_may_wakeup(dev) && !dev->power.wakeup_path) {
+ if (!device_may_wakeup(dev) && !device_wakeup_path(dev)) {
ret = pm_runtime_force_resume(dev);
if (ret < 0)
return ret;
diff --git a/drivers/media/platform/qcom/venus/pm_helpers.c b/drivers/media/platform/qcom/venus/pm_helpers.c
index a9538c2c..3a24841 100644
--- a/drivers/media/platform/qcom/venus/pm_helpers.c
+++ b/drivers/media/platform/qcom/venus/pm_helpers.c
@@ -898,8 +898,7 @@ static void core_put_v4(struct device *dev)
if (core->has_opp_table)
dev_pm_opp_of_remove_table(dev);
- if (core->opp_table)
- dev_pm_opp_put_clkname(core->opp_table);
+ dev_pm_opp_put_clkname(core->opp_table);
}
diff --git a/drivers/opp/core.c b/drivers/opp/core.c
index 0e0a526..4268eb3 100644
--- a/drivers/opp/core.c
+++ b/drivers/opp/core.c
@@ -29,32 +29,32 @@
LIST_HEAD(opp_tables);
/* Lock to allow exclusive modification to the device and opp lists */
DEFINE_MUTEX(opp_table_lock);
+/* Flag indicating that the opp_tables list is being updated at the moment */
+static bool opp_tables_busy;
-static struct opp_device *_find_opp_dev(const struct device *dev,
- struct opp_table *opp_table)
+static bool _find_opp_dev(const struct device *dev, struct opp_table *opp_table)
{
struct opp_device *opp_dev;
+ bool found = false;
+ mutex_lock(&opp_table->lock);
list_for_each_entry(opp_dev, &opp_table->dev_list, node)
- if (opp_dev->dev == dev)
- return opp_dev;
+ if (opp_dev->dev == dev) {
+ found = true;
+ break;
+ }
- return NULL;
+ mutex_unlock(&opp_table->lock);
+ return found;
}
static struct opp_table *_find_opp_table_unlocked(struct device *dev)
{
struct opp_table *opp_table;
- bool found;
list_for_each_entry(opp_table, &opp_tables, node) {
- mutex_lock(&opp_table->lock);
- found = !!_find_opp_dev(dev, opp_table);
- mutex_unlock(&opp_table->lock);
-
- if (found) {
+ if (_find_opp_dev(dev, opp_table)) {
_get_opp_table_kref(opp_table);
-
return opp_table;
}
}
@@ -1036,8 +1036,8 @@ static void _remove_opp_dev(struct opp_device *opp_dev,
kfree(opp_dev);
}
-static struct opp_device *_add_opp_dev_unlocked(const struct device *dev,
- struct opp_table *opp_table)
+struct opp_device *_add_opp_dev(const struct device *dev,
+ struct opp_table *opp_table)
{
struct opp_device *opp_dev;
@@ -1048,7 +1048,9 @@ static struct opp_device *_add_opp_dev_unlocked(const struct device *dev,
/* Initialize opp-dev */
opp_dev->dev = dev;
+ mutex_lock(&opp_table->lock);
list_add(&opp_dev->node, &opp_table->dev_list);
+ mutex_unlock(&opp_table->lock);
/* Create debugfs entries for the opp_table */
opp_debug_register(opp_dev, opp_table);
@@ -1056,18 +1058,6 @@ static struct opp_device *_add_opp_dev_unlocked(const struct device *dev,
return opp_dev;
}
-struct opp_device *_add_opp_dev(const struct device *dev,
- struct opp_table *opp_table)
-{
- struct opp_device *opp_dev;
-
- mutex_lock(&opp_table->lock);
- opp_dev = _add_opp_dev_unlocked(dev, opp_table);
- mutex_unlock(&opp_table->lock);
-
- return opp_dev;
-}
-
static struct opp_table *_allocate_opp_table(struct device *dev, int index)
{
struct opp_table *opp_table;
@@ -1121,8 +1111,6 @@ static struct opp_table *_allocate_opp_table(struct device *dev, int index)
INIT_LIST_HEAD(&opp_table->opp_list);
kref_init(&opp_table->kref);
- /* Secure the device table modification */
- list_add(&opp_table->node, &opp_tables);
return opp_table;
err:
@@ -1135,27 +1123,64 @@ void _get_opp_table_kref(struct opp_table *opp_table)
kref_get(&opp_table->kref);
}
-static struct opp_table *_opp_get_opp_table(struct device *dev, int index)
+/*
+ * We need to make sure that the OPP table for a device doesn't get added twice,
+ * if this routine gets called in parallel with the same device pointer.
+ *
+ * The simplest way to enforce that is to perform everything (find existing
+ * table and if not found, create a new one) under the opp_table_lock, so only
+ * one creator gets access to the same. But that expands the critical section
+ * under the lock and may end up causing circular dependencies with frameworks
+ * like debugfs, interconnect or clock framework as they may be direct or
+ * indirect users of OPP core.
+ *
+ * For that reason we go with a slightly tricky implementation here, which
+ * uses the opp_tables_busy flag to indicate that another creator is in the
+ * middle of adding an OPP table and that others should wait for it to finish.
+ */
+struct opp_table *_add_opp_table_indexed(struct device *dev, int index)
{
struct opp_table *opp_table;
- /* Hold our table modification lock here */
+again:
mutex_lock(&opp_table_lock);
opp_table = _find_opp_table_unlocked(dev);
if (!IS_ERR(opp_table))
goto unlock;
+ /*
+ * The opp_tables list or an OPP table's dev_list is getting updated by
+	 * another user; wait for it to finish.
+ */
+ if (unlikely(opp_tables_busy)) {
+ mutex_unlock(&opp_table_lock);
+ cpu_relax();
+ goto again;
+ }
+
+ opp_tables_busy = true;
opp_table = _managed_opp(dev, index);
+
+ /* Drop the lock to reduce the size of critical section */
+ mutex_unlock(&opp_table_lock);
+
if (opp_table) {
- if (!_add_opp_dev_unlocked(dev, opp_table)) {
+ if (!_add_opp_dev(dev, opp_table)) {
dev_pm_opp_put_opp_table(opp_table);
opp_table = ERR_PTR(-ENOMEM);
}
- goto unlock;
+
+ mutex_lock(&opp_table_lock);
+ } else {
+ opp_table = _allocate_opp_table(dev, index);
+
+ mutex_lock(&opp_table_lock);
+ if (!IS_ERR(opp_table))
+ list_add(&opp_table->node, &opp_tables);
}
- opp_table = _allocate_opp_table(dev, index);
+ opp_tables_busy = false;
unlock:
mutex_unlock(&opp_table_lock);
@@ -1163,18 +1188,17 @@ static struct opp_table *_opp_get_opp_table(struct device *dev, int index)
return opp_table;
}
+struct opp_table *_add_opp_table(struct device *dev)
+{
+ return _add_opp_table_indexed(dev, 0);
+}
+
struct opp_table *dev_pm_opp_get_opp_table(struct device *dev)
{
- return _opp_get_opp_table(dev, 0);
+ return _find_opp_table(dev);
}
EXPORT_SYMBOL_GPL(dev_pm_opp_get_opp_table);
-struct opp_table *dev_pm_opp_get_opp_table_indexed(struct device *dev,
- int index)
-{
- return _opp_get_opp_table(dev, index);
-}
-
static void _opp_table_kref_release(struct kref *kref)
{
struct opp_table *opp_table = container_of(kref, struct opp_table, kref);
@@ -1227,9 +1251,14 @@ void _opp_free(struct dev_pm_opp *opp)
kfree(opp);
}
-static void _opp_kref_release(struct dev_pm_opp *opp,
- struct opp_table *opp_table)
+static void _opp_kref_release(struct kref *kref)
{
+ struct dev_pm_opp *opp = container_of(kref, struct dev_pm_opp, kref);
+ struct opp_table *opp_table = opp->opp_table;
+
+ list_del(&opp->node);
+ mutex_unlock(&opp_table->lock);
+
/*
* Notify the changes in the availability of the operable
* frequency/voltage list.
@@ -1237,27 +1266,9 @@ static void _opp_kref_release(struct dev_pm_opp *opp,
blocking_notifier_call_chain(&opp_table->head, OPP_EVENT_REMOVE, opp);
_of_opp_free_required_opps(opp_table, opp);
opp_debug_remove_one(opp);
- list_del(&opp->node);
kfree(opp);
}
-static void _opp_kref_release_unlocked(struct kref *kref)
-{
- struct dev_pm_opp *opp = container_of(kref, struct dev_pm_opp, kref);
- struct opp_table *opp_table = opp->opp_table;
-
- _opp_kref_release(opp, opp_table);
-}
-
-static void _opp_kref_release_locked(struct kref *kref)
-{
- struct dev_pm_opp *opp = container_of(kref, struct dev_pm_opp, kref);
- struct opp_table *opp_table = opp->opp_table;
-
- _opp_kref_release(opp, opp_table);
- mutex_unlock(&opp_table->lock);
-}
-
void dev_pm_opp_get(struct dev_pm_opp *opp)
{
kref_get(&opp->kref);
@@ -1265,16 +1276,10 @@ void dev_pm_opp_get(struct dev_pm_opp *opp)
void dev_pm_opp_put(struct dev_pm_opp *opp)
{
- kref_put_mutex(&opp->kref, _opp_kref_release_locked,
- &opp->opp_table->lock);
+ kref_put_mutex(&opp->kref, _opp_kref_release, &opp->opp_table->lock);
}
EXPORT_SYMBOL_GPL(dev_pm_opp_put);
-static void dev_pm_opp_put_unlocked(struct dev_pm_opp *opp)
-{
- kref_put(&opp->kref, _opp_kref_release_unlocked);
-}
-
/**
* dev_pm_opp_remove() - Remove an OPP from OPP table
* @dev: device for which we do this operation
@@ -1318,30 +1323,49 @@ void dev_pm_opp_remove(struct device *dev, unsigned long freq)
}
EXPORT_SYMBOL_GPL(dev_pm_opp_remove);
+static struct dev_pm_opp *_opp_get_next(struct opp_table *opp_table,
+ bool dynamic)
+{
+ struct dev_pm_opp *opp = NULL, *temp;
+
+ mutex_lock(&opp_table->lock);
+ list_for_each_entry(temp, &opp_table->opp_list, node) {
+ if (dynamic == temp->dynamic) {
+ opp = temp;
+ break;
+ }
+ }
+
+ mutex_unlock(&opp_table->lock);
+ return opp;
+}
+
bool _opp_remove_all_static(struct opp_table *opp_table)
{
- struct dev_pm_opp *opp, *tmp;
- bool ret = true;
+ struct dev_pm_opp *opp;
mutex_lock(&opp_table->lock);
if (!opp_table->parsed_static_opps) {
- ret = false;
- goto unlock;
+ mutex_unlock(&opp_table->lock);
+ return false;
}
- if (--opp_table->parsed_static_opps)
- goto unlock;
-
- list_for_each_entry_safe(opp, tmp, &opp_table->opp_list, node) {
- if (!opp->dynamic)
- dev_pm_opp_put_unlocked(opp);
+ if (--opp_table->parsed_static_opps) {
+ mutex_unlock(&opp_table->lock);
+ return true;
}
-unlock:
mutex_unlock(&opp_table->lock);
- return ret;
+ /*
+	 * Can't remove the OPP from under the lock; debugfs removal needs to
+	 * happen locklessly to avoid circular dependency issues.
+ */
+ while ((opp = _opp_get_next(opp_table, false)))
+ dev_pm_opp_put(opp);
+
+ return true;
}
/**
@@ -1353,21 +1377,21 @@ bool _opp_remove_all_static(struct opp_table *opp_table)
void dev_pm_opp_remove_all_dynamic(struct device *dev)
{
struct opp_table *opp_table;
- struct dev_pm_opp *opp, *temp;
+ struct dev_pm_opp *opp;
int count = 0;
opp_table = _find_opp_table(dev);
if (IS_ERR(opp_table))
return;
- mutex_lock(&opp_table->lock);
- list_for_each_entry_safe(opp, temp, &opp_table->opp_list, node) {
- if (opp->dynamic) {
- dev_pm_opp_put_unlocked(opp);
- count++;
- }
+ /*
+	 * Can't remove the OPP from under the lock; debugfs removal needs to
+	 * happen locklessly to avoid circular dependency issues.
+ */
+ while ((opp = _opp_get_next(opp_table, true))) {
+ dev_pm_opp_put(opp);
+ count++;
}
- mutex_unlock(&opp_table->lock);
/* Drop the references taken by dev_pm_opp_add() */
while (count--)
@@ -1602,7 +1626,7 @@ struct opp_table *dev_pm_opp_set_supported_hw(struct device *dev,
{
struct opp_table *opp_table;
- opp_table = dev_pm_opp_get_opp_table(dev);
+ opp_table = _add_opp_table(dev);
if (IS_ERR(opp_table))
return opp_table;
@@ -1636,6 +1660,9 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_set_supported_hw);
*/
void dev_pm_opp_put_supported_hw(struct opp_table *opp_table)
{
+ if (unlikely(!opp_table))
+ return;
+
/* Make sure there are no concurrent readers while updating opp_table */
WARN_ON(!list_empty(&opp_table->opp_list));
@@ -1661,7 +1688,7 @@ struct opp_table *dev_pm_opp_set_prop_name(struct device *dev, const char *name)
{
struct opp_table *opp_table;
- opp_table = dev_pm_opp_get_opp_table(dev);
+ opp_table = _add_opp_table(dev);
if (IS_ERR(opp_table))
return opp_table;
@@ -1692,6 +1719,9 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_set_prop_name);
*/
void dev_pm_opp_put_prop_name(struct opp_table *opp_table)
{
+ if (unlikely(!opp_table))
+ return;
+
/* Make sure there are no concurrent readers while updating opp_table */
WARN_ON(!list_empty(&opp_table->opp_list));
@@ -1754,7 +1784,7 @@ struct opp_table *dev_pm_opp_set_regulators(struct device *dev,
struct regulator *reg;
int ret, i;
- opp_table = dev_pm_opp_get_opp_table(dev);
+ opp_table = _add_opp_table(dev);
if (IS_ERR(opp_table))
return opp_table;
@@ -1820,6 +1850,9 @@ void dev_pm_opp_put_regulators(struct opp_table *opp_table)
{
int i;
+ if (unlikely(!opp_table))
+ return;
+
if (!opp_table->regulators)
goto put_opp_table;
@@ -1862,7 +1895,7 @@ struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const char *name)
struct opp_table *opp_table;
int ret;
- opp_table = dev_pm_opp_get_opp_table(dev);
+ opp_table = _add_opp_table(dev);
if (IS_ERR(opp_table))
return opp_table;
@@ -1902,6 +1935,9 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_set_clkname);
*/
void dev_pm_opp_put_clkname(struct opp_table *opp_table)
{
+ if (unlikely(!opp_table))
+ return;
+
/* Make sure there are no concurrent readers while updating opp_table */
WARN_ON(!list_empty(&opp_table->opp_list));
@@ -1930,7 +1966,7 @@ struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev,
if (!set_opp)
return ERR_PTR(-EINVAL);
- opp_table = dev_pm_opp_get_opp_table(dev);
+ opp_table = _add_opp_table(dev);
if (IS_ERR(opp_table))
return opp_table;
@@ -1957,6 +1993,9 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_register_set_opp_helper);
*/
void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table)
{
+ if (unlikely(!opp_table))
+ return;
+
/* Make sure there are no concurrent readers while updating opp_table */
WARN_ON(!list_empty(&opp_table->opp_list));
@@ -2014,7 +2053,7 @@ struct opp_table *dev_pm_opp_attach_genpd(struct device *dev,
int index = 0, ret = -EINVAL;
const char **name = names;
- opp_table = dev_pm_opp_get_opp_table(dev);
+ opp_table = _add_opp_table(dev);
if (IS_ERR(opp_table))
return opp_table;
@@ -2085,6 +2124,9 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_attach_genpd);
*/
void dev_pm_opp_detach_genpd(struct opp_table *opp_table)
{
+ if (unlikely(!opp_table))
+ return;
+
/*
* Acquire genpd_virt_dev_lock to make sure virt_dev isn't getting
* used in parallel.
@@ -2179,7 +2221,7 @@ int dev_pm_opp_add(struct device *dev, unsigned long freq, unsigned long u_volt)
struct opp_table *opp_table;
int ret;
- opp_table = dev_pm_opp_get_opp_table(dev);
+ opp_table = _add_opp_table(dev);
if (IS_ERR(opp_table))
return PTR_ERR(opp_table);
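The opp/core.c rework above is essentially a "busy flag plus retry" scheme: the global lock only guards list lookups and flag flips, while table allocation runs outside it so that debugfs, clk and interconnect callbacks cannot deadlock against opp_table_lock. A simplified userspace model of the scheme (single table, no krefs); all names here are invented:

#include <pthread.h>
#include <sched.h>
#include <stdbool.h>
#include <stdlib.h>

struct table { int id; };

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static bool tables_busy;
static struct table *the_table;	/* single-table stand-in for the list */

static struct table *add_table(void)
{
	struct table *t;

again:
	pthread_mutex_lock(&table_lock);
	if (the_table) {
		t = the_table;
		goto unlock;
	}

	/* Another creator is mid-flight: drop the lock and retry. */
	if (tables_busy) {
		pthread_mutex_unlock(&table_lock);
		sched_yield();
		goto again;
	}
	tables_busy = true;
	pthread_mutex_unlock(&table_lock);

	/* Allocation runs outside the critical section, as in the patch. */
	t = malloc(sizeof(*t));

	pthread_mutex_lock(&table_lock);
	if (t)
		the_table = t;
	tables_busy = false;
unlock:
	pthread_mutex_unlock(&table_lock);
	return t;
}

int main(void)
{
	return add_table() ? 0 : 1;
}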
diff --git a/drivers/opp/of.c b/drivers/opp/of.c
index 9faeb83..03cb387 100644
--- a/drivers/opp/of.c
+++ b/drivers/opp/of.c
@@ -112,8 +112,6 @@ static struct opp_table *_find_table_of_opp_np(struct device_node *opp_np)
struct opp_table *opp_table;
struct device_node *opp_table_np;
- lockdep_assert_held(&opp_table_lock);
-
opp_table_np = of_get_parent(opp_np);
if (!opp_table_np)
goto err;
@@ -121,12 +119,15 @@ static struct opp_table *_find_table_of_opp_np(struct device_node *opp_np)
/* It is safe to put the node now as all we need now is its address */
of_node_put(opp_table_np);
+ mutex_lock(&opp_table_lock);
list_for_each_entry(opp_table, &opp_tables, node) {
if (opp_table_np == opp_table->np) {
_get_opp_table_kref(opp_table);
+ mutex_unlock(&opp_table_lock);
return opp_table;
}
}
+ mutex_unlock(&opp_table_lock);
err:
return ERR_PTR(-ENODEV);
@@ -169,7 +170,8 @@ static void _opp_table_alloc_required_tables(struct opp_table *opp_table,
/* Traversing the first OPP node is all we need */
np = of_get_next_available_child(opp_np, NULL);
if (!np) {
- dev_err(dev, "Empty OPP table\n");
+ dev_warn(dev, "Empty OPP table\n");
+
return;
}
@@ -377,7 +379,9 @@ int dev_pm_opp_of_find_icc_paths(struct device *dev,
struct icc_path **paths;
ret = _bandwidth_supported(dev, opp_table);
- if (ret <= 0)
+ if (ret == -EINVAL)
+ return 0; /* Empty OPP table is a valid corner-case, let's not fail */
+ else if (ret <= 0)
return ret;
ret = 0;
@@ -974,7 +978,7 @@ int dev_pm_opp_of_add_table(struct device *dev)
struct opp_table *opp_table;
int ret;
- opp_table = dev_pm_opp_get_opp_table_indexed(dev, 0);
+ opp_table = _add_opp_table_indexed(dev, 0);
if (IS_ERR(opp_table))
return PTR_ERR(opp_table);
@@ -1029,7 +1033,7 @@ int dev_pm_opp_of_add_table_indexed(struct device *dev, int index)
index = 0;
}
- opp_table = dev_pm_opp_get_opp_table_indexed(dev, index);
+ opp_table = _add_opp_table_indexed(dev, index);
if (IS_ERR(opp_table))
return PTR_ERR(opp_table);
@@ -1335,7 +1339,7 @@ int dev_pm_opp_of_register_em(struct device *dev, struct cpumask *cpus)
goto failed;
}
- ret = em_dev_register_perf_domain(dev, nr_opp, &em_cb, cpus);
+ ret = em_dev_register_perf_domain(dev, nr_opp, &em_cb, cpus, true);
if (ret)
goto failed;
diff --git a/drivers/opp/opp.h b/drivers/opp/opp.h
index ebd930e..4ced7ff 100644
--- a/drivers/opp/opp.h
+++ b/drivers/opp/opp.h
@@ -224,6 +224,7 @@ int _opp_add(struct device *dev, struct dev_pm_opp *new_opp, struct opp_table *o
int _opp_add_v1(struct opp_table *opp_table, struct device *dev, unsigned long freq, long u_volt, bool dynamic);
void _dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask, int last_cpu);
struct opp_table *_add_opp_table(struct device *dev);
+struct opp_table *_add_opp_table_indexed(struct device *dev, int index);
void _put_opp_list_kref(struct opp_table *opp_table);
#ifdef CONFIG_OF
diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
index bf03648..745a4e0c 100644
--- a/drivers/pci/pci-acpi.c
+++ b/drivers/pci/pci-acpi.c
@@ -1060,7 +1060,7 @@ static int acpi_pci_propagate_wakeup(struct pci_bus *bus, bool enable)
{
while (bus->parent) {
if (acpi_pm_device_can_wakeup(&bus->self->dev))
- return acpi_pm_set_bridge_wakeup(&bus->self->dev, enable);
+ return acpi_pm_set_device_wakeup(&bus->self->dev, enable);
bus = bus->parent;
}
@@ -1068,7 +1068,7 @@ static int acpi_pci_propagate_wakeup(struct pci_bus *bus, bool enable)
/* We have reached the root bus. */
if (bus->bridge) {
if (acpi_pm_device_can_wakeup(bus->bridge))
- return acpi_pm_set_bridge_wakeup(bus->bridge, enable);
+ return acpi_pm_set_device_wakeup(bus->bridge, enable);
}
return 0;
}
diff --git a/drivers/powercap/intel_rapl_common.c b/drivers/powercap/intel_rapl_common.c
index 70d6d52..c9e5723 100644
--- a/drivers/powercap/intel_rapl_common.c
+++ b/drivers/powercap/intel_rapl_common.c
@@ -1011,6 +1011,10 @@ static const struct rapl_defaults rapl_defaults_cht = {
.compute_time_window = rapl_compute_time_window_atom,
};
+static const struct rapl_defaults rapl_defaults_amd = {
+ .check_unit = rapl_check_unit_core,
+};
+
static const struct x86_cpu_id rapl_ids[] __initconst = {
X86_MATCH_INTEL_FAM6_MODEL(SANDYBRIDGE, &rapl_defaults_core),
X86_MATCH_INTEL_FAM6_MODEL(SANDYBRIDGE_X, &rapl_defaults_core),
@@ -1061,6 +1065,9 @@ static const struct x86_cpu_id rapl_ids[] __initconst = {
X86_MATCH_INTEL_FAM6_MODEL(XEON_PHI_KNL, &rapl_defaults_hsw_server),
X86_MATCH_INTEL_FAM6_MODEL(XEON_PHI_KNM, &rapl_defaults_hsw_server),
+
+ X86_MATCH_VENDOR_FAM(AMD, 0x17, &rapl_defaults_amd),
+ X86_MATCH_VENDOR_FAM(AMD, 0x19, &rapl_defaults_amd),
{}
};
MODULE_DEVICE_TABLE(x86cpu, rapl_ids);
diff --git a/drivers/powercap/intel_rapl_msr.c b/drivers/powercap/intel_rapl_msr.c
index 1646808..78213d4 100644
--- a/drivers/powercap/intel_rapl_msr.c
+++ b/drivers/powercap/intel_rapl_msr.c
@@ -31,7 +31,9 @@
#define MSR_VR_CURRENT_CONFIG 0x00000601
/* private data for RAPL MSR Interface */
-static struct rapl_if_priv rapl_msr_priv = {
+static struct rapl_if_priv *rapl_msr_priv;
+
+static struct rapl_if_priv rapl_msr_priv_intel = {
.reg_unit = MSR_RAPL_POWER_UNIT,
.regs[RAPL_DOMAIN_PACKAGE] = {
MSR_PKG_POWER_LIMIT, MSR_PKG_ENERGY_STATUS, MSR_PKG_PERF_STATUS, 0, MSR_PKG_POWER_INFO },
@@ -47,6 +49,14 @@ static struct rapl_if_priv rapl_msr_priv = {
.limits[RAPL_DOMAIN_PLATFORM] = 2,
};
+static struct rapl_if_priv rapl_msr_priv_amd = {
+ .reg_unit = MSR_AMD_RAPL_POWER_UNIT,
+ .regs[RAPL_DOMAIN_PACKAGE] = {
+ 0, MSR_AMD_PKG_ENERGY_STATUS, 0, 0, 0 },
+ .regs[RAPL_DOMAIN_PP0] = {
+ 0, MSR_AMD_CORE_ENERGY_STATUS, 0, 0, 0 },
+};
+
/* Handles CPU hotplug on multi-socket systems.
* If a CPU goes online as the first CPU of the physical package
* we add the RAPL package to the system. Similarly, when the last
@@ -58,9 +68,9 @@ static int rapl_cpu_online(unsigned int cpu)
{
struct rapl_package *rp;
- rp = rapl_find_package_domain(cpu, &rapl_msr_priv);
+ rp = rapl_find_package_domain(cpu, rapl_msr_priv);
if (!rp) {
- rp = rapl_add_package(cpu, &rapl_msr_priv);
+ rp = rapl_add_package(cpu, rapl_msr_priv);
if (IS_ERR(rp))
return PTR_ERR(rp);
}
@@ -73,7 +83,7 @@ static int rapl_cpu_down_prep(unsigned int cpu)
struct rapl_package *rp;
int lead_cpu;
- rp = rapl_find_package_domain(cpu, &rapl_msr_priv);
+ rp = rapl_find_package_domain(cpu, rapl_msr_priv);
if (!rp)
return 0;
@@ -136,40 +146,51 @@ static int rapl_msr_probe(struct platform_device *pdev)
const struct x86_cpu_id *id = x86_match_cpu(pl4_support_ids);
int ret;
- rapl_msr_priv.read_raw = rapl_msr_read_raw;
- rapl_msr_priv.write_raw = rapl_msr_write_raw;
+ switch (boot_cpu_data.x86_vendor) {
+ case X86_VENDOR_INTEL:
+ rapl_msr_priv = &rapl_msr_priv_intel;
+ break;
+ case X86_VENDOR_AMD:
+ rapl_msr_priv = &rapl_msr_priv_amd;
+ break;
+ default:
+ pr_err("intel-rapl does not support CPU vendor %d\n", boot_cpu_data.x86_vendor);
+ return -ENODEV;
+ }
+ rapl_msr_priv->read_raw = rapl_msr_read_raw;
+ rapl_msr_priv->write_raw = rapl_msr_write_raw;
if (id) {
- rapl_msr_priv.limits[RAPL_DOMAIN_PACKAGE] = 3;
- rapl_msr_priv.regs[RAPL_DOMAIN_PACKAGE][RAPL_DOMAIN_REG_PL4] =
+ rapl_msr_priv->limits[RAPL_DOMAIN_PACKAGE] = 3;
+ rapl_msr_priv->regs[RAPL_DOMAIN_PACKAGE][RAPL_DOMAIN_REG_PL4] =
MSR_VR_CURRENT_CONFIG;
pr_info("PL4 support detected.\n");
}
- rapl_msr_priv.control_type = powercap_register_control_type(NULL, "intel-rapl", NULL);
- if (IS_ERR(rapl_msr_priv.control_type)) {
+ rapl_msr_priv->control_type = powercap_register_control_type(NULL, "intel-rapl", NULL);
+ if (IS_ERR(rapl_msr_priv->control_type)) {
pr_debug("failed to register powercap control_type.\n");
- return PTR_ERR(rapl_msr_priv.control_type);
+ return PTR_ERR(rapl_msr_priv->control_type);
}
ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "powercap/rapl:online",
rapl_cpu_online, rapl_cpu_down_prep);
if (ret < 0)
goto out;
- rapl_msr_priv.pcap_rapl_online = ret;
+ rapl_msr_priv->pcap_rapl_online = ret;
return 0;
out:
if (ret)
- powercap_unregister_control_type(rapl_msr_priv.control_type);
+ powercap_unregister_control_type(rapl_msr_priv->control_type);
return ret;
}
static int rapl_msr_remove(struct platform_device *pdev)
{
- cpuhp_remove_state(rapl_msr_priv.pcap_rapl_online);
- powercap_unregister_control_type(rapl_msr_priv.control_type);
+ cpuhp_remove_state(rapl_msr_priv->pcap_rapl_online);
+ powercap_unregister_control_type(rapl_msr_priv->control_type);
return 0;
}
diff --git a/drivers/powercap/powercap_sys.c b/drivers/powercap/powercap_sys.c
index 3f0b8e2..f0654a9 100644
--- a/drivers/powercap/powercap_sys.c
+++ b/drivers/powercap/powercap_sys.c
@@ -170,9 +170,8 @@ static ssize_t show_constraint_name(struct device *dev,
if (pconst && pconst->ops && pconst->ops->get_name) {
name = pconst->ops->get_name(power_zone, id);
if (name) {
- snprintf(buf, POWERCAP_CONSTRAINT_NAME_LEN,
- "%s\n", name);
- buf[POWERCAP_CONSTRAINT_NAME_LEN] = '\0';
+ sprintf(buf, "%.*s\n", POWERCAP_CONSTRAINT_NAME_LEN - 1,
+ name);
len = strlen(buf);
}
}
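The powercap_sys.c fix above leans on the "%.*s" precision form, which caps how many bytes of the source string are printed while sprintf() still NUL-terminates the result, so the trailing newline always survives (the earlier snprintf() plus manual termination could truncate it away on long names). A small demo with an assumed 8-byte name limit:

#include <stdio.h>

#define NAME_LEN 8

int main(void)
{
	char buf[64];
	const char *name = "a-very-long-constraint-name";

	/* at most NAME_LEN - 1 bytes of name, newline always kept */
	sprintf(buf, "%.*s\n", NAME_LEN - 1, name);
	printf("%s", buf);	/* prints "a-very-" plus a newline */
	return 0;
}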
diff --git a/include/acpi/acpi_bus.h b/include/acpi/acpi_bus.h
index a3abcc4b..6d1879b 100644
--- a/include/acpi/acpi_bus.h
+++ b/include/acpi/acpi_bus.h
@@ -620,7 +620,6 @@ acpi_status acpi_remove_pm_notifier(struct acpi_device *adev);
bool acpi_pm_device_can_wakeup(struct device *dev);
int acpi_pm_device_sleep_state(struct device *, int *, int);
int acpi_pm_set_device_wakeup(struct device *dev, bool enable);
-int acpi_pm_set_bridge_wakeup(struct device *dev, bool enable);
#else
static inline void acpi_pm_wakeup_event(struct device *dev)
{
@@ -651,10 +650,6 @@ static inline int acpi_pm_set_device_wakeup(struct device *dev, bool enable)
{
return -ENODEV;
}
-static inline int acpi_pm_set_bridge_wakeup(struct device *dev, bool enable)
-{
- return -ENODEV;
-}
#endif
#ifdef CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT
diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
index acbad3b..584fccd 100644
--- a/include/linux/cpufreq.h
+++ b/include/linux/cpufreq.h
@@ -65,7 +65,6 @@ struct cpufreq_policy {
unsigned int max; /* in kHz */
unsigned int cur; /* in kHz, only needed if cpufreq
* governors are used */
- unsigned int restore_freq; /* = policy->cur before transition */
unsigned int suspend_freq; /* freq to set during suspend */
unsigned int policy; /* see above */
@@ -314,10 +313,6 @@ struct cpufreq_driver {
/* define one out of two */
int (*setpolicy)(struct cpufreq_policy *policy);
- /*
- * On failure, should always restore frequency to policy->restore_freq
- * (i.e. old freq).
- */
int (*target)(struct cpufreq_policy *policy,
unsigned int target_freq,
unsigned int relation); /* Deprecated */
diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
index b67a51c..757fc60 100644
--- a/include/linux/energy_model.h
+++ b/include/linux/energy_model.h
@@ -13,9 +13,8 @@
/**
* em_perf_state - Performance state of a performance domain
* @frequency: The frequency in KHz, for consistency with CPUFreq
- * @power: The power consumed at this level, in milli-watts (by 1 CPU or
- by a registered device). It can be a total power: static and
- dynamic.
+ * @power: The power consumed at this level (by 1 CPU or by a registered
+ * device). It can be a total power: static and dynamic.
* @cost: The cost coefficient associated with this level, used during
* energy calculation. Equal to: power * max_frequency / frequency
*/
@@ -29,6 +28,8 @@ struct em_perf_state {
* em_perf_domain - Performance domain
* @table: List of performance states, in ascending order
* @nr_perf_states: Number of performance states
+ * @milliwatts: Flag indicating the power values are in milli-Watts
+ * or some other scale.
* @cpus: Cpumask covering the CPUs of the domain. It's here
* for performance reasons to avoid potential cache
* misses during energy calculations in the scheduler
@@ -43,6 +44,7 @@ struct em_perf_state {
struct em_perf_domain {
struct em_perf_state *table;
int nr_perf_states;
+ int milliwatts;
unsigned long cpus[];
};
@@ -55,7 +57,7 @@ struct em_data_callback {
/**
* active_power() - Provide power at the next performance state of
* a device
- * @power : Active power at the performance state in mW
+ * @power : Active power at the performance state
* (modified)
* @freq : Frequency at the performance state in kHz
* (modified)
@@ -66,8 +68,8 @@ struct em_data_callback {
* and frequency.
*
* In case of CPUs, the power is the one of a single CPU in the domain,
- * expressed in milli-watts. It is expected to fit in the
- * [0, EM_MAX_POWER] range.
+ * expressed in milli-Watts or an abstract scale. It is expected to
+ * fit in the [0, EM_MAX_POWER] range.
*
* Return 0 on success.
*/
@@ -79,7 +81,8 @@ struct em_data_callback {
struct em_perf_domain *em_cpu_get(int cpu);
struct em_perf_domain *em_pd_get(struct device *dev);
int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
- struct em_data_callback *cb, cpumask_t *span);
+ struct em_data_callback *cb, cpumask_t *span,
+ bool milliwatts);
void em_dev_unregister_perf_domain(struct device *dev);
/**
@@ -103,6 +106,9 @@ static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
struct em_perf_state *ps;
int i, cpu;
+ if (!sum_util)
+ return 0;
+
/*
* In order to predict the performance state, map the utilization of
* the most utilized CPU of the performance domain to a requested
@@ -186,7 +192,8 @@ struct em_data_callback {};
static inline
int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
- struct em_data_callback *cb, cpumask_t *span)
+ struct em_data_callback *cb, cpumask_t *span,
+ bool milliwatts)
{
return -EINVAL;
}
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index 1ad0ec4..2ca919a 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -255,24 +255,24 @@ static inline int pm_genpd_init(struct generic_pm_domain *genpd,
}
static inline int pm_genpd_remove(struct generic_pm_domain *genpd)
{
- return -ENOTSUPP;
+ return -EOPNOTSUPP;
}
static inline int dev_pm_genpd_set_performance_state(struct device *dev,
unsigned int state)
{
- return -ENOTSUPP;
+ return -EOPNOTSUPP;
}
static inline int dev_pm_genpd_add_notifier(struct device *dev,
struct notifier_block *nb)
{
- return -ENOTSUPP;
+ return -EOPNOTSUPP;
}
static inline int dev_pm_genpd_remove_notifier(struct device *dev)
{
- return -ENOTSUPP;
+ return -EOPNOTSUPP;
}
#define simple_qos_governor (*(struct dev_power_governor *)(NULL))
@@ -280,11 +280,11 @@ static inline int dev_pm_genpd_remove_notifier(struct device *dev)
#endif
#ifdef CONFIG_PM_GENERIC_DOMAINS_SLEEP
-void pm_genpd_syscore_poweroff(struct device *dev);
-void pm_genpd_syscore_poweron(struct device *dev);
+void dev_pm_genpd_suspend(struct device *dev);
+void dev_pm_genpd_resume(struct device *dev);
#else
-static inline void pm_genpd_syscore_poweroff(struct device *dev) {}
-static inline void pm_genpd_syscore_poweron(struct device *dev) {}
+static inline void dev_pm_genpd_suspend(struct device *dev) {}
+static inline void dev_pm_genpd_resume(struct device *dev) {}
#endif
/* OF PM domain providers */
@@ -325,13 +325,13 @@ struct device *genpd_dev_pm_attach_by_name(struct device *dev,
static inline int of_genpd_add_provider_simple(struct device_node *np,
struct generic_pm_domain *genpd)
{
- return -ENOTSUPP;
+ return -EOPNOTSUPP;
}
static inline int of_genpd_add_provider_onecell(struct device_node *np,
struct genpd_onecell_data *data)
{
- return -ENOTSUPP;
+ return -EOPNOTSUPP;
}
static inline void of_genpd_del_provider(struct device_node *np) {}
@@ -387,7 +387,7 @@ static inline struct device *genpd_dev_pm_attach_by_name(struct device *dev,
static inline
struct generic_pm_domain *of_genpd_remove_last(struct device_node *np)
{
- return ERR_PTR(-ENOTSUPP);
+ return ERR_PTR(-EOPNOTSUPP);
}
#endif /* CONFIG_PM_GENERIC_DOMAINS_OF */
diff --git a/include/linux/pm_opp.h b/include/linux/pm_opp.h
index dbb4845..1435c05 100644
--- a/include/linux/pm_opp.h
+++ b/include/linux/pm_opp.h
@@ -90,7 +90,6 @@ struct dev_pm_set_opp_data {
#if defined(CONFIG_PM_OPP)
struct opp_table *dev_pm_opp_get_opp_table(struct device *dev);
-struct opp_table *dev_pm_opp_get_opp_table_indexed(struct device *dev, int index);
void dev_pm_opp_put_opp_table(struct opp_table *opp_table);
unsigned long dev_pm_opp_get_voltage(struct dev_pm_opp *opp);
diff --git a/include/linux/pm_wakeup.h b/include/linux/pm_wakeup.h
index aa3da66..196a157 100644
--- a/include/linux/pm_wakeup.h
+++ b/include/linux/pm_wakeup.h
@@ -84,6 +84,11 @@ static inline bool device_may_wakeup(struct device *dev)
return dev->power.can_wakeup && !!dev->power.wakeup;
}
+static inline bool device_wakeup_path(struct device *dev)
+{
+ return dev->power.wakeup_path;
+}
+
static inline void device_set_wakeup_path(struct device *dev)
{
dev->power.wakeup_path = true;
@@ -174,6 +179,11 @@ static inline bool device_may_wakeup(struct device *dev)
return dev->power.can_wakeup && dev->power.should_wakeup;
}
+static inline bool device_wakeup_path(struct device *dev)
+{
+ return false;
+}
+
static inline void device_set_wakeup_path(struct device *dev) {}
static inline void __pm_stay_awake(struct wakeup_source *ws) {}
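The new device_wakeup_path() accessor gives drivers a read-side counterpart
to device_set_wakeup_path(), avoiding direct pokes at dev->power.wakeup_path.
A minimal sketch of the intended use in a noirq suspend callback, with a
hypothetical foo_power_down() helper:

	#include <linux/pm_wakeup.h>

	static void foo_power_down(struct device *dev);	/* hypothetical */

	static int foo_suspend_noirq(struct device *dev)
	{
		/* keep the device powered if it lies in a wakeup path */
		if (device_wakeup_path(dev))
			return 0;

		foo_power_down(dev);
		return 0;
	}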
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
index 9cd312a..c77e4e1 100644
--- a/include/linux/scmi_protocol.h
+++ b/include/linux/scmi_protocol.h
@@ -121,6 +121,7 @@ struct scmi_perf_ops {
unsigned long *rate, unsigned long *power);
bool (*fast_switch_possible)(const struct scmi_handle *handle,
struct device *dev);
+ bool (*power_scale_mw_get)(const struct scmi_handle *handle);
};
/**
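The new power_scale_mw_get() op lets a consumer ask whether the SCMI firmware
reports performance-state power in real milliWatts. A sketch of forwarding
that answer into the energy-model registration below; the function and its
policy/nr_states/em_cb parameters are assumptions for illustration:

	#include <linux/cpufreq.h>
	#include <linux/energy_model.h>
	#include <linux/scmi_protocol.h>

	static void foo_register_em(const struct scmi_handle *handle,
				    struct cpufreq_policy *policy,
				    struct device *cpu_dev, int nr_states,
				    struct em_data_callback *em_cb)
	{
		bool mw = handle->perf_ops->power_scale_mw_get(handle);

		/* pass the firmware-reported unit straight into the EM */
		em_dev_register_perf_domain(cpu_dev, nr_states, em_cb,
					    policy->cpus, mw);
	}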
diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
index c1ff7fa0..1358fa4 100644
--- a/kernel/power/energy_model.c
+++ b/kernel/power/energy_model.c
@@ -52,6 +52,17 @@ static int em_debug_cpus_show(struct seq_file *s, void *unused)
}
DEFINE_SHOW_ATTRIBUTE(em_debug_cpus);
+static int em_debug_units_show(struct seq_file *s, void *unused)
+{
+ struct em_perf_domain *pd = s->private;
+ char *units = pd->milliwatts ? "milliWatts" : "bogoWatts";
+
+ seq_printf(s, "%s\n", units);
+
+ return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(em_debug_units);
+
static void em_debug_create_pd(struct device *dev)
{
struct dentry *d;
@@ -64,6 +75,8 @@ static void em_debug_create_pd(struct device *dev)
debugfs_create_file("cpus", 0444, d, dev->em_pd->cpus,
&em_debug_cpus_fops);
+ debugfs_create_file("units", 0444, d, dev->em_pd, &em_debug_units_fops);
+
/* Create a sub-directory for each performance state */
for (i = 0; i < dev->em_pd->nr_perf_states; i++)
em_debug_create_ps(&dev->em_pd->table[i], d);
@@ -130,7 +143,7 @@ static int em_create_perf_table(struct device *dev, struct em_perf_domain *pd,
/*
* The power returned by active_state() is expected to be
- * positive, in milli-watts and to fit into 16 bits.
+ * positive and to fit into 16 bits.
*/
if (!power || power > EM_MAX_POWER) {
dev_err(dev, "EM: invalid power: %lu\n",
@@ -250,17 +263,24 @@ EXPORT_SYMBOL_GPL(em_cpu_get);
* @cpus : Pointer to cpumask_t, which in case of a CPU device is
* obligatory. It can be taken from i.e. 'policy->cpus'. For other
* type of devices this should be set to NULL.
+ * @milliwatts : Flag indicating whether the power values are in milliWatts
+ * or in some abstract scale. It must be set correctly.
*
* Create Energy Model tables for a performance domain using the callbacks
* defined in cb.
*
+ * It is important to set @milliwatts correctly: some kernel sub-systems
+ * rely on this flag to check whether all devices in the EM use the same
+ * scale.
+ *
* If multiple clients register the same performance domain, all but the first
* registration will be ignored.
*
* Return 0 on success
*/
int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
- struct em_data_callback *cb, cpumask_t *cpus)
+ struct em_data_callback *cb, cpumask_t *cpus,
+ bool milliwatts)
{
unsigned long cap, prev_cap = 0;
int cpu, ret;
@@ -313,6 +333,8 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
if (ret)
goto unlock;
+ dev->em_pd->milliwatts = milliwatts;
+
em_debug_create_pd(dev);
dev_info(dev, "EM: created perf domain\n");
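For reference, a provider registering against the new signature might look
like the sketch below; EM_DATA_CB() and the callback prototype follow the
existing energy-model API, while the foo_* names and values are illustrative
only:

	#include <linux/energy_model.h>

	static int foo_active_power(unsigned long *power, unsigned long *freq,
				    struct device *dev)
	{
		*freq = 1000000;	/* kHz, illustrative */
		*power = 150;		/* real milliWatts, must fit EM_MAX_POWER */
		return 0;
	}

	static struct em_data_callback foo_em_cb = EM_DATA_CB(foo_active_power);

	static int foo_probe(struct device *dev)
	{
		/* true: foo_active_power() reports milliWatts, not bogoWatts */
		return em_dev_register_perf_domain(dev, 4, &foo_em_cb,
						   NULL, true);
	}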
diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
index 32391ac..d8cae43 100644
--- a/kernel/power/suspend.c
+++ b/kernel/power/suspend.c
@@ -224,6 +224,7 @@ EXPORT_SYMBOL_GPL(suspend_set_ops);
/**
* suspend_valid_only_mem - Generic memory-only valid callback.
+ * @state: Target system sleep state.
*
* Platform drivers that implement mem suspend only and only need to check for
* that in their .valid() callback can use this instead of rolling their own
@@ -335,6 +336,7 @@ static int suspend_test(int level)
/**
* suspend_prepare - Prepare for entering system sleep state.
+ * @state: Target system sleep state.
*
* Common code run for every system sleep state that can be entered (except for
* hibernation). Run suspend notifiers, allocate the "suspend" console and
diff --git a/kernel/reboot.c b/kernel/reboot.c
index af6f23d..2a18b76 100644
--- a/kernel/reboot.c
+++ b/kernel/reboot.c
@@ -244,6 +244,8 @@ void migrate_to_reboot_cpu(void)
void kernel_restart(char *cmd)
{
kernel_restart_prepare(cmd);
+ if (pm_power_off_prepare)
+ pm_power_off_prepare();
migrate_to_reboot_cpu();
syscore_shutdown();
if (!cmd)
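kernel_restart() now honors pm_power_off_prepare() as well, so platform code
that registers the hook sees it run on restart too. A minimal sketch of such
a registration, with hypothetical foo_* names:

	#include <linux/init.h>
	#include <linux/pm.h>

	static void foo_power_off_prepare(void)
	{
		/* e.g. notify the PMIC that shutdown/restart is imminent */
	}

	static int __init foo_pmic_init(void)
	{
		pm_power_off_prepare = foo_power_off_prepare;
		return 0;
	}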
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 97d318b..7773605 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -102,12 +102,10 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
static bool sugov_update_next_freq(struct sugov_policy *sg_policy, u64 time,
unsigned int next_freq)
{
- if (!sg_policy->need_freq_update) {
- if (sg_policy->next_freq == next_freq)
- return false;
- } else {
+ if (sg_policy->need_freq_update)
sg_policy->need_freq_update = cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS);
- }
+ else if (sg_policy->next_freq == next_freq)
+ return false;
sg_policy->next_freq = next_freq;
sg_policy->last_freq_update_time = time;
diff --git a/tools/power/cpupower/utils/cpufreq-set.c b/tools/power/cpupower/utils/cpufreq-set.c
index 7b2164e..c5e60a3 100644
--- a/tools/power/cpupower/utils/cpufreq-set.c
+++ b/tools/power/cpupower/utils/cpufreq-set.c
@@ -315,6 +315,7 @@ int cmd_freq_set(int argc, char **argv)
}
}
+ get_cpustate();
/* loop over CPUs */
for (cpu = bitmask_first(cpus_chosen);
@@ -332,5 +333,7 @@ int cmd_freq_set(int argc, char **argv)
}
}
+ print_offline_cpus();
+
return 0;
}
diff --git a/tools/power/cpupower/utils/cpuidle-set.c b/tools/power/cpupower/utils/cpuidle-set.c
index 569f268..4615892 100644
--- a/tools/power/cpupower/utils/cpuidle-set.c
+++ b/tools/power/cpupower/utils/cpuidle-set.c
@@ -95,6 +95,8 @@ int cmd_idle_set(int argc, char **argv)
exit(EXIT_FAILURE);
}
+ get_cpustate();
+
/* Default is: set all CPUs */
if (bitmask_isallclear(cpus_chosen))
bitmask_setall(cpus_chosen);
@@ -181,5 +183,7 @@ int cmd_idle_set(int argc, char **argv)
break;
}
}
+
+ print_offline_cpus();
return EXIT_SUCCESS;
}
diff --git a/tools/power/cpupower/utils/cpupower.c b/tools/power/cpupower/utils/cpupower.c
index 8e3d080..8ac3304 100644
--- a/tools/power/cpupower/utils/cpupower.c
+++ b/tools/power/cpupower/utils/cpupower.c
@@ -34,6 +34,8 @@ int run_as_root;
int base_cpu;
/* Affected cpus chosen by -c/--cpu param */
struct bitmask *cpus_chosen;
+struct bitmask *online_cpus;
+struct bitmask *offline_cpus;
#ifdef DEBUG
int be_verbose;
@@ -178,6 +180,8 @@ int main(int argc, const char *argv[])
char pathname[32];
cpus_chosen = bitmask_alloc(sysconf(_SC_NPROCESSORS_CONF));
+ online_cpus = bitmask_alloc(sysconf(_SC_NPROCESSORS_CONF));
+ offline_cpus = bitmask_alloc(sysconf(_SC_NPROCESSORS_CONF));
argc--;
argv += 1;
@@ -230,6 +234,10 @@ int main(int argc, const char *argv[])
ret = p->main(argc, argv);
if (cpus_chosen)
bitmask_free(cpus_chosen);
+ if (online_cpus)
+ bitmask_free(online_cpus);
+ if (offline_cpus)
+ bitmask_free(offline_cpus);
return ret;
}
print_help();
diff --git a/tools/power/cpupower/utils/helpers/helpers.h b/tools/power/cpupower/utils/helpers/helpers.h
index c258eec..d5799aa 100644
--- a/tools/power/cpupower/utils/helpers/helpers.h
+++ b/tools/power/cpupower/utils/helpers/helpers.h
@@ -94,6 +94,8 @@ struct cpupower_cpu_info {
*/
extern int get_cpu_info(struct cpupower_cpu_info *cpu_info);
extern struct cpupower_cpu_info cpupower_cpu_info;
+
/* cpuid and cpuinfo helpers **************************/
/* X86 ONLY ****************************************/
@@ -171,4 +173,14 @@ static inline unsigned int cpuid_ecx(unsigned int op) { return 0; };
static inline unsigned int cpuid_edx(unsigned int op) { return 0; };
#endif /* defined(__i386__) || defined(__x86_64__) */
+/*
+ * CPU State related functions
+ */
+extern struct bitmask *online_cpus;
+extern struct bitmask *offline_cpus;
+
+void get_cpustate(void);
+void print_online_cpus(void);
+void print_offline_cpus(void);
+
#endif /* __CPUPOWERUTILS_HELPERS__ */
diff --git a/tools/power/cpupower/utils/helpers/misc.c b/tools/power/cpupower/utils/helpers/misc.c
index f406adc..2ead981 100644
--- a/tools/power/cpupower/utils/helpers/misc.c
+++ b/tools/power/cpupower/utils/helpers/misc.c
@@ -1,8 +1,12 @@
// SPDX-License-Identifier: GPL-2.0
-#if defined(__i386__) || defined(__x86_64__)
+
+#include <stdio.h>
+#include <stdlib.h>
#include "helpers/helpers.h"
+#if defined(__i386__) || defined(__x86_64__)
+
#define MSR_AMD_HWCR 0xc0010015
int cpufreq_has_boost_support(unsigned int cpu, int *support, int *active,
@@ -41,3 +45,63 @@ int cpufreq_has_boost_support(unsigned int cpu, int *support, int *active,
return 0;
}
#endif /* #if defined(__i386__) || defined(__x86_64__) */
+
+/* get_cpustate
+ *
+ * Record the online/offline state of the chosen CPUs in the bitmask structs
+ */
+void get_cpustate(void)
+{
+ unsigned int cpu = 0;
+
+ bitmask_clearall(online_cpus);
+ bitmask_clearall(offline_cpus);
+
+ for (cpu = bitmask_first(cpus_chosen);
+ cpu <= bitmask_last(cpus_chosen); cpu++) {
+
+ if (cpupower_is_cpu_online(cpu) == 1)
+ bitmask_setbit(online_cpus, cpu);
+ else
+ bitmask_setbit(offline_cpus, cpu);
+ }
+}
+
+/* print_online_cpus
+ *
+ * Print the numbers of all CPUs that are currently online
+ */
+void print_online_cpus(void)
+{
+ int str_len = 0;
+ char *online_cpus_str = NULL;
+
+ str_len = online_cpus->size * 5;
+ online_cpus_str = malloc(str_len);
+ if (!online_cpus_str)
+ return;
+
+ if (!bitmask_isallclear(online_cpus)) {
+ bitmask_displaylist(online_cpus_str, str_len, online_cpus);
+ printf(_("Following CPUs are online:\n%s\n"), online_cpus_str);
+ }
+ free(online_cpus_str);
+}
+
+/* print_offline_cpus
+ *
+ * Print the numbers of all CPUs that are currently offline
+ */
+void print_offline_cpus(void)
+{
+ int str_len = 0;
+ char *offline_cpus_str = NULL;
+
+ str_len = offline_cpus->size * 5;
+ offline_cpus_str = malloc(str_len);
+ if (!offline_cpus_str)
+ return;
+
+ if (!bitmask_isallclear(offline_cpus)) {
+ bitmask_displaylist(offline_cpus_str, str_len, offline_cpus);
+ printf(_("Following CPUs are offline:\n%s\n"), offline_cpus_str);
+ printf(_("cpupower set operation was not performed on them\n"));
+ }
+ free(offline_cpus_str);
+}
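Taken together, the helpers above are meant to bracket the per-CPU loops in
the cpupower set commands, as in the cpufreq-set.c and cpuidle-set.c hunks
earlier. A condensed sketch of the pattern (do_set_one() stands in for the
real per-CPU write):

	get_cpustate();		/* snapshot online/offline state */

	for (cpu = bitmask_first(cpus_chosen);
	     cpu <= bitmask_last(cpus_chosen); cpu++) {
		if (!bitmask_isbitset(cpus_chosen, cpu) ||
		    cpupower_is_cpu_online(cpu) != 1)
			continue;
		do_set_one(cpu);	/* hypothetical per-CPU operation */
	}

	print_offline_cpus();	/* report the CPUs that were skipped */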
diff --git a/tools/power/pm-graph/README b/tools/power/pm-graph/README
index 89d0a7d..da468bd 100644
--- a/tools/power/pm-graph/README
+++ b/tools/power/pm-graph/README
@@ -6,7 +6,7 @@
|_| |___/ |_|
pm-graph: suspend/resume/boot timing analysis tools
- Version: 5.7
+ Version: 5.8
Author: Todd Brandt <[email protected]>
Home Page: https://01.org/pm-graph
@@ -61,7 +61,7 @@
- runs with python2 or python3, choice is made by /usr/bin/python link
- python
- python-configparser (for python2 sleepgraph)
- - python-requests (for googlesheet.py)
+ - python-requests (for stresstester.py)
- linux-tools-common (for turbostat usage in sleepgraph)
Ubuntu:
diff --git a/tools/power/pm-graph/sleepgraph.py b/tools/power/pm-graph/sleepgraph.py
index 1bc36a1..81f4b8a 100755
--- a/tools/power/pm-graph/sleepgraph.py
+++ b/tools/power/pm-graph/sleepgraph.py
@@ -81,7 +81,7 @@
# store system values and test parameters
class SystemValues:
title = 'SleepGraph'
- version = '5.7'
+ version = '5.8'
ansi = False
rs = 0
display = ''
@@ -92,8 +92,9 @@
testlog = True
dmesglog = True
ftracelog = False
+ acpidebug = True
tstat = True
- mindevlen = 0.0
+ mindevlen = 0.0001
mincglen = 0.0
cgphase = ''
cgtest = -1
@@ -115,6 +116,7 @@
fpdtpath = '/sys/firmware/acpi/tables/FPDT'
epath = '/sys/kernel/debug/tracing/events/power/'
pmdpath = '/sys/power/pm_debug_messages'
+ acpipath='/sys/module/acpi/parameters/debug_level'
traceevents = [
'suspend_resume',
'wakeup_source_activate',
@@ -162,16 +164,16 @@
devdump = False
mixedphaseheight = True
devprops = dict()
+ cfgdef = dict()
platinfo = []
predelay = 0
postdelay = 0
- pmdebug = ''
tmstart = 'SUSPEND START %Y%m%d-%H:%M:%S.%f'
tmend = 'RESUME COMPLETE %Y%m%d-%H:%M:%S.%f'
tracefuncs = {
'sys_sync': {},
'ksys_sync': {},
- 'pm_notifier_call_chain_robust': {},
+ '__pm_notifier_call_chain': {},
'pm_prepare_console': {},
'pm_notifier_call_chain': {},
'freeze_processes': {},
@@ -490,9 +492,9 @@
call('echo 0 > %s/wakealarm' % self.rtcpath, shell=True)
def initdmesg(self):
# get the latest time stamp from the dmesg log
- fp = Popen('dmesg', stdout=PIPE).stdout
+ lines = Popen('dmesg', stdout=PIPE).stdout.readlines()
ktime = '0'
- for line in fp:
+ for line in reversed(lines):
line = ascii(line).replace('\r\n', '')
idx = line.find('[')
if idx > 1:
@@ -500,7 +502,7 @@
m = re.match('[ \t]*(\[ *)(?P<ktime>[0-9\.]*)(\]) (?P<msg>.*)', line)
if(m):
ktime = m.group('ktime')
- fp.close()
+ break
self.dmesgstart = float(ktime)
def getdmesg(self, testdata):
op = self.writeDatafileHeader(self.dmesgfile, testdata)
@@ -715,8 +717,6 @@
self.fsetVal('0', 'events/kprobes/enable')
self.fsetVal('', 'kprobe_events')
self.fsetVal('1024', 'buffer_size_kb')
- if self.pmdebug:
- self.setVal(self.pmdebug, self.pmdpath)
def setupAllKprobes(self):
for name in self.tracefuncs:
self.defaultKprobe(name, self.tracefuncs[name])
@@ -740,11 +740,7 @@
# turn trace off
self.fsetVal('0', 'tracing_on')
self.cleanupFtrace()
- # pm debug messages
- pv = self.getVal(self.pmdpath)
- if pv != '1':
- self.setVal('1', self.pmdpath)
- self.pmdebug = pv
+ self.testVal(self.pmdpath, 'basic', '1')
# set the trace clock to global
self.fsetVal('global', 'trace_clock')
self.fsetVal('nop', 'current_tracer')
@@ -900,6 +896,14 @@
if isgz:
return gzip.open(filename, mode+'t')
return open(filename, mode)
+ def putlog(self, filename, text):
+ with self.openlog(filename, 'a') as fp:
+ fp.write(text)
+ def dlog(self, text):
+ self.putlog(self.dmesgfile, '# %s\n' % text)
+ def flog(self, text):
+ self.putlog(self.ftracefile, text)
def b64unzip(self, data):
try:
out = codecs.decode(base64.b64decode(data), 'zlib').decode()
@@ -992,9 +996,7 @@
# add a line for each of these commands with their outputs
for name, cmdline, info in cmdafter:
footer += '# platform-%s: %s | %s\n' % (name, cmdline, self.b64zip(info))
-
- with self.openlog(self.ftracefile, 'a') as fp:
- fp.write(footer)
+ self.flog(footer)
return True
def commonPrefix(self, list):
if len(list) < 2:
@@ -1034,6 +1036,7 @@
cmdline, cmdpath = ' '.join(cargs[2:]), self.getExec(cargs[2])
if not cmdpath or (begin and not delta):
continue
+ self.dlog('[%s]' % cmdline)
try:
fp = Popen([cmdpath]+cargs[3:], stdout=PIPE, stderr=PIPE).stdout
info = ascii(fp.read()).strip()
@@ -1060,6 +1063,29 @@
else:
out.append((name, cmdline, '\tnothing' if not info else info))
return out
+ def testVal(self, file, fmt='basic', value=''):
+ if file == 'restoreall':
+ for f in self.cfgdef:
+ if os.path.exists(f):
+ fp = open(f, 'w')
+ fp.write(self.cfgdef[f])
+ fp.close()
+ self.cfgdef = dict()
+ elif value and os.path.exists(file):
+ fp = open(file, 'r+')
+ if fmt == 'radio':
+ m = re.match('.*\[(?P<v>.*)\].*', fp.read())
+ if m:
+ self.cfgdef[file] = m.group('v')
+ elif fmt == 'acpi':
+ line = fp.read().strip().split('\n')[-1]
+ m = re.match('.* (?P<v>[0-9A-Fx]*) .*', line)
+ if m:
+ self.cfgdef[file] = m.group('v')
+ else:
+ self.cfgdef[file] = fp.read().strip()
+ fp.write(value)
+ fp.close()
def haveTurbostat(self):
if not self.tstat:
return False
@@ -1201,6 +1227,57 @@
self.multitest[sz] *= 1440
elif unit == 'h':
self.multitest[sz] *= 60
+ def displayControl(self, cmd):
+ xset, ret = 'timeout 10 xset -d :0.0 {0}', 0
+ if self.sudouser:
+ xset = 'sudo -u %s %s' % (self.sudouser, xset)
+ if cmd == 'init':
+ ret = call(xset.format('dpms 0 0 0'), shell=True)
+ if not ret:
+ ret = call(xset.format('s off'), shell=True)
+ elif cmd == 'reset':
+ ret = call(xset.format('s reset'), shell=True)
+ elif cmd in ['on', 'off', 'standby', 'suspend']:
+ b4 = self.displayControl('stat')
+ ret = call(xset.format('dpms force %s' % cmd), shell=True)
+ if not ret:
+ curr = self.displayControl('stat')
+ self.vprint('Display Switched: %s -> %s' % (b4, curr))
+ if curr != cmd:
+ self.vprint('WARNING: Display failed to change to %s' % cmd)
+ if ret:
+ self.vprint('WARNING: Display failed to change to %s with xset' % cmd)
+ return ret
+ elif cmd == 'stat':
+ fp = Popen(xset.format('q').split(' '), stdout=PIPE).stdout
+ ret = 'unknown'
+ for line in fp:
+ m = re.match('[\s]*Monitor is (?P<m>.*)', ascii(line))
+ if(m and len(m.group('m')) >= 2):
+ out = m.group('m').lower()
+ ret = out[3:] if out[0:2] == 'in' else out
+ break
+ fp.close()
+ return ret
+ def setRuntimeSuspend(self, before=True):
+ if before:
+ # runtime suspend disable or enable
+ if self.rs > 0:
+ self.rstgt, self.rsval, self.rsdir = 'on', 'auto', 'enabled'
+ else:
+ self.rstgt, self.rsval, self.rsdir = 'auto', 'on', 'disabled'
+ pprint('CONFIGURING RUNTIME SUSPEND...')
+ self.rslist = deviceInfo(self.rstgt)
+ for i in self.rslist:
+ self.setVal(self.rsval, i)
+ pprint('runtime suspend %s on all devices (%d changed)' % (self.rsdir, len(self.rslist)))
+ pprint('waiting 5 seconds...')
+ time.sleep(5)
+ else:
+ # runtime suspend re-enable or re-disable
+ for i in self.rslist:
+ self.setVal(self.rstgt, i)
+ pprint('runtime suspend settings restored on %d devices' % len(self.rslist))
sysvals = SystemValues()
switchvalues = ['enable', 'disable', 'on', 'off', 'true', 'false', '1', '0']
@@ -1640,15 +1717,20 @@
if 'resume_machine' in phase and 'suspend_machine' in lp:
tS, tR = self.dmesg[lp]['end'], self.dmesg[phase]['start']
tL = tR - tS
- if tL > 0:
- left = True if tR > tZero else False
- self.trimTime(tS, tL, left)
- if 'trying' in self.dmesg[lp] and self.dmesg[lp]['trying'] >= 0.001:
- tTry = round(self.dmesg[lp]['trying'] * 1000)
- text = '%.0f (-%.0f waking)' % (tL * 1000, tTry)
+ if tL <= 0:
+ continue
+ left = True if tR > tZero else False
+ self.trimTime(tS, tL, left)
+ if 'waking' in self.dmesg[lp]:
+ tCnt = self.dmesg[lp]['waking'][0]
+ if self.dmesg[lp]['waking'][1] >= 0.001:
+ tTry = '-%.0f' % (round(self.dmesg[lp]['waking'][1] * 1000))
else:
- text = '%.0f' % (tL * 1000)
- self.tLow.append(text)
+ tTry = '-%.3f' % (self.dmesg[lp]['waking'][1] * 1000)
+ text = '%.0f (%s ms waking %d times)' % (tL * 1000, tTry, tCnt)
+ else:
+ text = '%.0f' % (tL * 1000)
+ self.tLow.append(text)
lp = phase
def getMemTime(self):
if not self.hwstart or not self.hwend:
@@ -1921,7 +2003,7 @@
for dev in list:
length = (list[dev]['end'] - list[dev]['start']) * 1000
width = widfmt % (((list[dev]['end']-list[dev]['start'])*100)/tTotal)
- if width != '0.000000' and length >= mindevlen:
+ if length >= mindevlen:
devlist.append(dev)
self.tdevlist[phase] = devlist
def addHorizontalDivider(self, devname, devend):
@@ -3316,9 +3398,10 @@
# trim out s2idle loops, track time trying to freeze
llp = data.lastPhase(2)
if llp.startswith('suspend_machine'):
- if 'trying' not in data.dmesg[llp]:
- data.dmesg[llp]['trying'] = 0
- data.dmesg[llp]['trying'] += \
+ if 'waking' not in data.dmesg[llp]:
+ data.dmesg[llp]['waking'] = [0, 0.0]
+ data.dmesg[llp]['waking'][0] += 1
+ data.dmesg[llp]['waking'][1] += \
t.time - data.dmesg[lp]['start']
data.currphase = ''
del data.dmesg[lp]
@@ -4555,7 +4638,7 @@
# draw the devices for this phase
phaselist = data.dmesg[b]['list']
for d in sorted(data.tdevlist[b]):
- dname = d if '[' not in d else d.split('[')[0]
+ dname = d if ('[' not in d or 'CPU' in d) else d.split('[')[0]
name, dev = dname, phaselist[d]
drv = xtraclass = xtrainfo = xtrastyle = ''
if 'htmlclass' in dev:
@@ -5194,156 +5277,146 @@
'</script>\n'
hf.write(script_code);
-def setRuntimeSuspend(before=True):
- global sysvals
- sv = sysvals
- if sv.rs == 0:
- return
- if before:
- # runtime suspend disable or enable
- if sv.rs > 0:
- sv.rstgt, sv.rsval, sv.rsdir = 'on', 'auto', 'enabled'
- else:
- sv.rstgt, sv.rsval, sv.rsdir = 'auto', 'on', 'disabled'
- pprint('CONFIGURING RUNTIME SUSPEND...')
- sv.rslist = deviceInfo(sv.rstgt)
- for i in sv.rslist:
- sv.setVal(sv.rsval, i)
- pprint('runtime suspend %s on all devices (%d changed)' % (sv.rsdir, len(sv.rslist)))
- pprint('waiting 5 seconds...')
- time.sleep(5)
- else:
- # runtime suspend re-enable or re-disable
- for i in sv.rslist:
- sv.setVal(sv.rstgt, i)
- pprint('runtime suspend settings restored on %d devices' % len(sv.rslist))
-
# Function: executeSuspend
# Description:
# Execute system suspend through the sysfs interface, then copy the output
# dmesg and ftrace files to the test output directory.
def executeSuspend(quiet=False):
- pm = ProcessMonitor()
- tp = sysvals.tpath
- if sysvals.wifi:
- wifi = sysvals.checkWifi()
+ sv, tp, pm = sysvals, sysvals.tpath, ProcessMonitor()
+ if sv.wifi:
+ wifi = sv.checkWifi()
+ sv.dlog('wifi check, connected device is "%s"' % wifi)
testdata = []
# run these commands to prepare the system for suspend
- if sysvals.display:
+ if sv.display:
if not quiet:
- pprint('SET DISPLAY TO %s' % sysvals.display.upper())
- displayControl(sysvals.display)
+ pprint('SET DISPLAY TO %s' % sv.display.upper())
+ ret = sv.displayControl(sv.display)
+ sv.dlog('xset display %s, ret = %d' % (sv.display, ret))
time.sleep(1)
- if sysvals.sync:
+ if sv.sync:
if not quiet:
pprint('SYNCING FILESYSTEMS')
+ sv.dlog('syncing filesystems')
call('sync', shell=True)
- # mark the start point in the kernel ring buffer just as we start
- sysvals.initdmesg()
+ sv.dlog('read dmesg')
+ sv.initdmesg()
# start ftrace
- if(sysvals.usecallgraph or sysvals.usetraceevents):
+ if(sv.usecallgraph or sv.usetraceevents):
if not quiet:
pprint('START TRACING')
- sysvals.fsetVal('1', 'tracing_on')
- if sysvals.useprocmon:
+ sv.dlog('start ftrace tracing')
+ sv.fsetVal('1', 'tracing_on')
+ if sv.useprocmon:
+ sv.dlog('start the process monitor')
pm.start()
- sysvals.cmdinfo(True)
+ sv.dlog('run the cmdinfo list before')
+ sv.cmdinfo(True)
# execute however many s/r runs requested
- for count in range(1,sysvals.execcount+1):
+ for count in range(1,sv.execcount+1):
# x2delay in between test runs
- if(count > 1 and sysvals.x2delay > 0):
- sysvals.fsetVal('WAIT %d' % sysvals.x2delay, 'trace_marker')
- time.sleep(sysvals.x2delay/1000.0)
- sysvals.fsetVal('WAIT END', 'trace_marker')
+ if(count > 1 and sv.x2delay > 0):
+ sv.fsetVal('WAIT %d' % sv.x2delay, 'trace_marker')
+ time.sleep(sv.x2delay/1000.0)
+ sv.fsetVal('WAIT END', 'trace_marker')
# start message
- if sysvals.testcommand != '':
+ if sv.testcommand != '':
pprint('COMMAND START')
else:
- if(sysvals.rtcwake):
+ if(sv.rtcwake):
pprint('SUSPEND START')
else:
pprint('SUSPEND START (press a key to resume)')
# set rtcwake
- if(sysvals.rtcwake):
+ if(sv.rtcwake):
if not quiet:
- pprint('will issue an rtcwake in %d seconds' % sysvals.rtcwaketime)
- sysvals.rtcWakeAlarmOn()
+ pprint('will issue an rtcwake in %d seconds' % sv.rtcwaketime)
+ sv.dlog('enable RTC wake alarm')
+ sv.rtcWakeAlarmOn()
# start of suspend trace marker
- if(sysvals.usecallgraph or sysvals.usetraceevents):
- sysvals.fsetVal(datetime.now().strftime(sysvals.tmstart), 'trace_marker')
+ if(sv.usecallgraph or sv.usetraceevents):
+ sv.fsetVal(datetime.now().strftime(sv.tmstart), 'trace_marker')
# predelay delay
- if(count == 1 and sysvals.predelay > 0):
- sysvals.fsetVal('WAIT %d' % sysvals.predelay, 'trace_marker')
- time.sleep(sysvals.predelay/1000.0)
- sysvals.fsetVal('WAIT END', 'trace_marker')
+ if(count == 1 and sv.predelay > 0):
+ sv.fsetVal('WAIT %d' % sv.predelay, 'trace_marker')
+ time.sleep(sv.predelay/1000.0)
+ sv.fsetVal('WAIT END', 'trace_marker')
# initiate suspend or command
+ sv.dlog('system executing a suspend')
tdata = {'error': ''}
- if sysvals.testcommand != '':
- res = call(sysvals.testcommand+' 2>&1', shell=True);
+ if sv.testcommand != '':
+ res = call(sv.testcommand+' 2>&1', shell=True);
if res != 0:
tdata['error'] = 'cmd returned %d' % res
else:
- mode = sysvals.suspendmode
- if sysvals.memmode and os.path.exists(sysvals.mempowerfile):
+ mode = sv.suspendmode
+ if sv.memmode and os.path.exists(sv.mempowerfile):
mode = 'mem'
- pf = open(sysvals.mempowerfile, 'w')
- pf.write(sysvals.memmode)
- pf.close()
- if sysvals.diskmode and os.path.exists(sysvals.diskpowerfile):
+ sv.testVal(sv.mempowerfile, 'radio', sv.memmode)
+ if sv.diskmode and os.path.exists(sv.diskpowerfile):
mode = 'disk'
- pf = open(sysvals.diskpowerfile, 'w')
- pf.write(sysvals.diskmode)
- pf.close()
- if mode == 'freeze' and sysvals.haveTurbostat():
+ sv.testVal(sv.diskpowerfile, 'radio', sv.diskmode)
+ if sv.acpidebug:
+ sv.testVal(sv.acpipath, 'acpi', '0xe')
+ if mode == 'freeze' and sv.haveTurbostat():
# execution will pause here
- turbo = sysvals.turbostat()
+ turbo = sv.turbostat()
if turbo:
tdata['turbo'] = turbo
else:
- pf = open(sysvals.powerfile, 'w')
+ pf = open(sv.powerfile, 'w')
pf.write(mode)
# execution will pause here
try:
pf.close()
except Exception as e:
tdata['error'] = str(e)
- if(sysvals.rtcwake):
- sysvals.rtcWakeAlarmOff()
+ sv.dlog('system returned from resume')
+ # reset everything
+ sv.testVal('restoreall')
+ if(sv.rtcwake):
+ sv.dlog('disable RTC wake alarm')
+ sv.rtcWakeAlarmOff()
# postdelay delay
- if(count == sysvals.execcount and sysvals.postdelay > 0):
- sysvals.fsetVal('WAIT %d' % sysvals.postdelay, 'trace_marker')
- time.sleep(sysvals.postdelay/1000.0)
- sysvals.fsetVal('WAIT END', 'trace_marker')
+ if(count == sv.execcount and sv.postdelay > 0):
+ sv.fsetVal('WAIT %d' % sv.postdelay, 'trace_marker')
+ time.sleep(sv.postdelay/1000.0)
+ sv.fsetVal('WAIT END', 'trace_marker')
# return from suspend
pprint('RESUME COMPLETE')
- if(sysvals.usecallgraph or sysvals.usetraceevents):
- sysvals.fsetVal(datetime.now().strftime(sysvals.tmend), 'trace_marker')
- if sysvals.wifi and wifi:
- tdata['wifi'] = sysvals.pollWifi(wifi)
- if(sysvals.suspendmode == 'mem' or sysvals.suspendmode == 'command'):
+ if(sv.usecallgraph or sv.usetraceevents):
+ sv.fsetVal(datetime.now().strftime(sv.tmend), 'trace_marker')
+ if sv.wifi and wifi:
+ tdata['wifi'] = sv.pollWifi(wifi)
+ sv.dlog('wifi check, %s' % tdata['wifi'])
+ if(sv.suspendmode == 'mem' or sv.suspendmode == 'command'):
+ sv.dlog('read the ACPI FPDT')
tdata['fw'] = getFPDT(False)
testdata.append(tdata)
- cmdafter = sysvals.cmdinfo(False)
+ sv.dlog('run the cmdinfo list after')
+ cmdafter = sv.cmdinfo(False)
# stop ftrace
- if(sysvals.usecallgraph or sysvals.usetraceevents):
- if sysvals.useprocmon:
+ if(sv.usecallgraph or sv.usetraceevents):
+ if sv.useprocmon:
+ sv.dlog('stop the process monitor')
pm.stop()
- sysvals.fsetVal('0', 'tracing_on')
+ sv.fsetVal('0', 'tracing_on')
# grab a copy of the dmesg output
if not quiet:
pprint('CAPTURING DMESG')
- sysvals.getdmesg(testdata)
+ sv.dlog('EXECUTION TRACE END')
+ sv.getdmesg(testdata)
# grab a copy of the ftrace output
- if(sysvals.usecallgraph or sysvals.usetraceevents):
+ if(sv.usecallgraph or sv.usetraceevents):
if not quiet:
pprint('CAPTURING TRACE')
- op = sysvals.writeDatafileHeader(sysvals.ftracefile, testdata)
+ op = sv.writeDatafileHeader(sv.ftracefile, testdata)
fp = open(tp+'trace', 'r')
for line in fp:
op.write(line)
op.close()
- sysvals.fsetVal('', 'trace')
- sysvals.platforminfo(cmdafter)
+ sv.fsetVal('', 'trace')
+ sv.platforminfo(cmdafter)
def readFile(file):
if os.path.islink(file):
@@ -5586,39 +5659,6 @@
count += 1
return out
-def displayControl(cmd):
- xset, ret = 'timeout 10 xset -d :0.0 {0}', 0
- if sysvals.sudouser:
- xset = 'sudo -u %s %s' % (sysvals.sudouser, xset)
- if cmd == 'init':
- ret = call(xset.format('dpms 0 0 0'), shell=True)
- if not ret:
- ret = call(xset.format('s off'), shell=True)
- elif cmd == 'reset':
- ret = call(xset.format('s reset'), shell=True)
- elif cmd in ['on', 'off', 'standby', 'suspend']:
- b4 = displayControl('stat')
- ret = call(xset.format('dpms force %s' % cmd), shell=True)
- if not ret:
- curr = displayControl('stat')
- sysvals.vprint('Display Switched: %s -> %s' % (b4, curr))
- if curr != cmd:
- sysvals.vprint('WARNING: Display failed to change to %s' % cmd)
- if ret:
- sysvals.vprint('WARNING: Display failed to change to %s with xset' % cmd)
- return ret
- elif cmd == 'stat':
- fp = Popen(xset.format('q').split(' '), stdout=PIPE).stdout
- ret = 'unknown'
- for line in fp:
- m = re.match('[\s]*Monitor is (?P<m>.*)', ascii(line))
- if(m and len(m.group('m')) >= 2):
- out = m.group('m').lower()
- ret = out[3:] if out[0:2] == 'in' else out
- break
- fp.close()
- return ret
-
# Function: getFPDT
# Description:
# Read the acpi bios tables and pull out FPDT, the firmware data
@@ -6001,8 +6041,19 @@
# execute a suspend/resume, gather the logs, and generate the output
def runTest(n=0, quiet=False):
# prepare for the test
- sysvals.initFtrace(quiet)
sysvals.initTestOutput('suspend')
+ op = sysvals.writeDatafileHeader(sysvals.dmesgfile, [])
+ op.write('# EXECUTION TRACE START\n')
+ op.close()
+ if n <= 1:
+ if sysvals.rs != 0:
+ sysvals.dlog('%sabling runtime suspend' % ('en' if sysvals.rs > 0 else 'dis'))
+ sysvals.setRuntimeSuspend(True)
+ if sysvals.display:
+ ret = sysvals.displayControl('init')
+ sysvals.dlog('xset display init, ret = %d' % ret)
+ sysvals.dlog('initialize ftrace')
+ sysvals.initFtrace(quiet)
# execute the test
executeSuspend(quiet)
@@ -6098,8 +6149,16 @@
if wifi:
extra['wifi'] = wifi
low = find_in_html(html, 'freeze time: <b>', ' ms</b>')
- if low and 'waking' in low:
- issue = 'FREEZEWAKE'
+ for lowstr in ['waking', '+']:
+ if not low:
+ break
+ if lowstr not in low:
+ continue
+ if lowstr == '+':
+ issue = 'S2LOOPx%d' % len(low.split('+'))
+ else:
+ m = re.match('.*waking *(?P<n>[0-9]*) *times.*', low)
+ issue = 'S2WAKEx%s' % m.group('n') if m else 'S2WAKExNaN'
match = [i for i in issues if i['match'] == issue]
if len(match) > 0:
match[0]['count'] += 1
@@ -6605,6 +6664,11 @@
val = next(args)
except:
doError('-info requires one string argument', True)
+ elif(arg == '-desc'):
+ try:
+ val = next(args)
+ except:
+ doError('-desc requires one string argument', True)
elif(arg == '-rs'):
try:
val = next(args)
@@ -6814,9 +6878,9 @@
runSummary(sysvals.outdir, True, genhtml)
elif(cmd in ['xon', 'xoff', 'xstandby', 'xsuspend', 'xinit', 'xreset']):
sysvals.verbose = True
- ret = displayControl(cmd[1:])
+ ret = sysvals.displayControl(cmd[1:])
elif(cmd == 'xstat'):
- pprint('Display Status: %s' % displayControl('stat').upper())
+ pprint('Display Status: %s' % sysvals.displayControl('stat').upper())
elif(cmd == 'wificheck'):
dev = sysvals.checkWifi()
if dev:
@@ -6854,12 +6918,8 @@
if mode.startswith('disk-'):
sysvals.diskmode = mode.split('-', 1)[-1]
sysvals.suspendmode = 'disk'
-
sysvals.systemInfo(dmidecode(sysvals.mempath))
- setRuntimeSuspend(True)
- if sysvals.display:
- displayControl('init')
failcnt, ret = 0, 0
if sysvals.multitest['run']:
# run multiple tests in a separate subdirectory
@@ -6900,7 +6960,10 @@
sysvals.testdir = sysvals.outdir
# run the test in the current directory
ret = runTest()
+
+ # reset to default values after testing
if sysvals.display:
- displayControl('reset')
- setRuntimeSuspend(False)
+ sysvals.displayControl('reset')
+ if sysvals.rs != 0:
+ sysvals.setRuntimeSuspend(False)
sys.exit(ret)