arm64 updates for 6.11:

 * Virtual CPU hotplug support for arm64 ACPI systems
 
 * cpufeature infrastructure cleanups and making the FEAT_ECBHB ID bits
   visible to guests
 
 * CPU errata: expand the speculative SSBS workaround to more CPUs
 
 * arm64 ACPI:
 
   - acpi=nospcr option to disable SPCR as default console for arm64
 
   - Move some ACPI code (cpuidle, FFH) to drivers/acpi/arm64/
 
 * GICv3, use compile-time PMR values: optimise the way regular IRQs are
   masked/unmasked when GICv3 pseudo-NMIs are used, removing the need for
   a static key in fast paths by using a priority value chosen
   dynamically at boot time
 
 * arm64 perf updates:
 
   - Rework of the IMX PMU driver to enable support for i.MX95
 
   - Enable support for tertiary match groups in the CMN PMU driver
 
   - Initial refactoring of the CPU PMU code to prepare for the fixed
      instruction counter introduced by Armv9.4
 
   - Add missing PMU driver MODULE_DESCRIPTION() strings
 
   - Hook up DT compatibles for recent CPU PMUs
 
 * arm64 kselftest updates:
 
   - Kernel mode NEON fp-stress
 
   - Cleanups, spelling mistakes
 
 * arm64 Documentation update with a minor clarification on TBI
 
 * Miscellaneous:
 
   - Fix missing IPI statistics
 
   - Implement raw_smp_processor_id() using thread_info rather than a
     per-CPU variable (better code generation)
 
   - Make MTE checking of in-kernel asynchronous tag faults conditional
     on KASAN being enabled
 
   - Minor cleanups, typos
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEE5RElWfyWxS+3PLO2a9axLQDIXvEFAmaQKN4ACgkQa9axLQDI
 XvE0Nw/+JZ6OEQ+DMUHXZfbWanvn1p0nVOoEV3MYVpOeQK1ILYCoDapatLNIlet0
 wcja7tohKbL1ifc7GOqlkitu824LMlotncrdOBycRqb/4C5KuJ+XhygFv5hGfX0T
 Uh2zbo4w52FPPEUMICfEAHrKT3QB9tv7f66xeUNbWWFqUn3rY02/ZVQVVdw6Zc0e
 fVYWGUUoQDR7+9hRkk6tnYw3+9YFVAUAbLWk+DGrW7WsANi6HuJ/rBMibwFI6RkG
 SZDZHum6vnwx0Dj9H7WrYaQCvUMm7AlckhQGfPbIFhUk6pWysfJtP5Qk49yiMl7p
 oRk/GrSXpiKumuetgTeOHbokiE1Nb8beXx0OcsjCu4RrIaNipAEpH1AkYy5oiKoT
 9vKZErMDtQgd96JHFVaXc+A3D2kxVfkc1u7K3TEfVRnZFV7CN+YL+61iyZ+uLxVi
 d9xrAmwRsWYFVQzlZG3NWvSeQBKisUA1L8JROlzWc/NFDwTqDGIt/zS4pZNL3+OM
 EXW0LyKt7Ijl6vPXKCXqrODRrPlcLc66VMZxofZOl0/dEqyJ+qLL4GUkWZu8lTqO
 BqydYnbTSjiDg/ntWjTrD0uJ8c40Qy7KTPEdaPqEIQvkDEsUGlOnhAQjHrnGNb9M
 psZtpDW2xm7GykEOcd6rgSz4Xeky2iLsaR4Wc7FTyDS0YRmeG44=
 =ob2k
 -----END PGP SIGNATURE-----

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Catalin Marinas:
 "The biggest part is the virtual CPU hotplug that touches ACPI,
  irqchip. We also have some GICv3 optimisation for pseudo-NMIs that has
  been queued via the arm64 tree. Otherwise the usual perf updates,
  kselftest, various small cleanups.

  Core:

   - Virtual CPU hotplug support for arm64 ACPI systems

   - cpufeature infrastructure cleanups and making the FEAT_ECBHB ID
     bits visible to guests

   - CPU errata: expand the speculative SSBS workaround to more CPUs

   - GICv3, use compile-time PMR values: optimise the way regular IRQs
     are masked/unmasked when GICv3 pseudo-NMIs are used, removing the
     need for a static key in fast paths by using a priority value
     chosen dynamically at boot time

  ACPI:

   - 'acpi=nospcr' option to disable SPCR as default console for arm64

   - Move some ACPI code (cpuidle, FFH) to drivers/acpi/arm64/

  Perf updates:

   - Rework of the IMX PMU driver to enable support for i.MX95

   - Enable support for tertiary match groups in the CMN PMU driver

   - Initial refactoring of the CPU PMU code to prepare for the fixed
     instruction counter introduced by Armv9.4

   - Add missing PMU driver MODULE_DESCRIPTION() strings

   - Hook up DT compatibles for recent CPU PMUs

  Kselftest updates:

   - Kernel mode NEON fp-stress

   - Cleanups, spelling mistakes

  Miscellaneous:

   - arm64 Documentation update with a minor clarification on TBI

   - Fix missing IPI statistics

   - Implement raw_smp_processor_id() using thread_info rather than a
     per-CPU variable (better code generation)

   - Make MTE checking of in-kernel asynchronous tag faults conditional
     on KASAN being enabled

   - Minor cleanups, typos"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (69 commits)
  selftests: arm64: tags: remove the result script
  selftests: arm64: tags_test: conform test to TAP output
  perf: add missing MODULE_DESCRIPTION() macros
  arm64: smp: Fix missing IPI statistics
  irqchip/gic-v3: Fix 'broken_rdists' unused warning when !SMP and !ACPI
  ACPI: Add acpi=nospcr to disable ACPI SPCR as default console on ARM64
  Documentation: arm64: Update memory.rst for TBI
  arm64/cpufeature: Replace custom macros with fields from ID_AA64PFR0_EL1
  KVM: arm64: Replace custom macros with fields from ID_AA64PFR0_EL1
  perf: arm_pmuv3: Include asm/arm_pmuv3.h from linux/perf/arm_pmuv3.h
  perf: arm_v6/7_pmu: Drop non-DT probe support
  perf/arm: Move 32-bit PMU drivers to drivers/perf/
  perf: arm_pmuv3: Drop unnecessary IS_ENABLED(CONFIG_ARM64) check
  perf: arm_pmuv3: Avoid assigning fixed cycle counter with threshold
  arm64: Kconfig: Fix dependencies to enable ACPI_HOTPLUG_CPU
  perf: imx_perf: add support for i.MX95 platform
  perf: imx_perf: fix counter start and config sequence
  perf: imx_perf: refactor driver for imx93
  perf: imx_perf: let the driver manage the counter usage rather the user
  perf: imx_perf: add macro definitions for parsing config attr
  ...
Committed by Linus Torvalds on 2024-07-15 17:06:19 -07:00 (commit c89d780cc1).
86 changed files with 1606 additions and 752 deletions.


@ -694,3 +694,9 @@ Description:
(RO) indicates whether or not the kernel directly supports
modifying the crash elfcorehdr for CPU hot un/plug and/or
on/offline changes.
What: /sys/devices/system/cpu/enabled
Date: Nov 2022
Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:
(RO) the list of CPUs that can be brought online.
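For illustration (not part of the patch): the new ``enabled`` attribute can
be read like any other CPU mask file in sysfs. A minimal userspace sketch
in C, assuming the attribute exists on the running kernel:

    #include <stdio.h>

    int main(void)
    {
        char buf[256];
        FILE *f = fopen("/sys/devices/system/cpu/enabled", "r");

        if (!f) {
            perror("open");
            return 1;
        }
        /* The file holds a CPU list, e.g. "0-3". */
        if (fgets(buf, sizeof(buf), f))
            printf("enabled CPUs: %s", buf);
        fclose(f);
        return 0;
    }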


@ -12,7 +12,7 @@
acpi= [HW,ACPI,X86,ARM64,RISCV64,EARLY]
Advanced Configuration and Power Interface
Format: { force | on | off | strict | noirq | rsdt |
copy_dsdt }
copy_dsdt | nospcr }
force -- enable ACPI if default was off
on -- enable ACPI but allow fallback to DT [arm64,riscv64]
off -- disable ACPI if default was on
@ -21,8 +21,12 @@
strictly ACPI specification compliant.
rsdt -- prefer RSDT over (default) XSDT
copy_dsdt -- copy DSDT to memory
For ARM64 and RISCV64, ONLY "acpi=off", "acpi=on" or
"acpi=force" are available
nospcr -- disable console in ACPI SPCR table as
default _serial_ console on ARM64
For ARM64, ONLY "acpi=off", "acpi=on", "acpi=force" or
"acpi=nospcr" are available
For RISCV64, ONLY "acpi=off", "acpi=on" or "acpi=force"
are available
See also Documentation/power/runtime_pm.rst, pci=noacpi
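For illustration (not part of the patch): a boot command line that keeps
ACPI enabled but prevents the SPCR table from selecting the default serial
console could look like

    console=ttyAMA0 acpi=on acpi=nospcr

Both acpi= options can be given because each occurrence is parsed
independently (see the parse_acpi() hunk later in this commit).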


@ -0,0 +1,79 @@
.. SPDX-License-Identifier: GPL-2.0
.. _cpuhp_index:
====================
CPU Hotplug and ACPI
====================
CPU hotplug in the arm64 world is commonly used to describe the kernel taking
CPUs online/offline using PSCI. This document is about ACPI firmware allowing
CPUs that were not available during boot to be added to the system later.
``possible`` and ``present`` refer to the state of the CPU as seen by Linux.
CPU Hotplug on physical systems - CPUs not present at boot
----------------------------------------------------------
Physical systems need to mark a CPU that is ``possible`` but not ``present`` as
being ``present``. An example would be a dual socket machine, where the package
in one of the sockets can be replaced while the system is running.
This is not supported.
In the arm64 world CPUs are not a single device but a slice of the system.
There are no systems that support the physical addition (or removal) of CPUs
while the system is running, and ACPI is not able to sufficiently describe
them.
e.g. New CPUs come with new caches, but the platform's cache topology is
described in a static table, the PPTT. How caches are shared between CPUs is
not discoverable, and must be described by firmware.
e.g. The GIC redistributor for each CPU must be accessed by the driver during
boot to discover the system wide supported features. ACPI's MADT GICC
structures can describe a redistributor associated with a disabled CPU, but
can't describe whether the redistributor is accessible, only that it is not
'always on'.
arm64's ACPI tables assume that everything described is ``present``.
CPU Hotplug on virtual systems - CPUs not enabled at boot
---------------------------------------------------------
Virtual systems have the advantage that all the properties the system will
ever have can be described at boot. There are no power-domain considerations
as such devices are emulated.
CPU Hotplug on virtual systems is supported. It is distinct from physical
CPU Hotplug as all resources are described as ``present``, but CPUs may be
marked as disabled by firmware. Only the CPU's online/offline behaviour is
influenced by firmware. An example is where a virtual machine boots with a
single CPU, and additional CPUs are added once a cloud orchestrator deploys
the workload.
For a virtual machine, the VMM (e.g. QEMU) plays the part of firmware.
Virtual hotplug is implemented as a firmware policy affecting which CPUs can be
brought online. Firmware can enforce its policy via PSCI's return codes,
e.g. ``DENIED``.
The ACPI tables must describe all the resources of the virtual machine. CPUs
that firmware wishes to disable, either at boot or later, should not be
``enabled`` in the MADT GICC structures, but should have the ``online capable``
bit set, to indicate they can be enabled later. The boot CPU must be marked as
``enabled``. The 'always on' GICR structure must be used to describe the
redistributors.
CPUs described as ``online capable`` but not ``enabled`` can be set to enabled
by the DSDT's Processor object's _STA method. On virtual systems the _STA method
must always report the CPU as ``present``. Changes to the firmware policy can
be notified to the OS via device-check or eject-request.
CPUs described as ``enabled`` in the static table should not have their _STA
modified dynamically by firmware. Soft-restart features such as kexec will
re-read the static properties of the system from these static tables, and
may malfunction if these no longer describe the running system. Linux will
re-discover the dynamic properties of the system from the _STA method later
during boot.
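To make the ``enabled``/``online capable`` distinction concrete, here is a
sketch of the flags test (a hypothetical helper mirroring the open-coded
check this series applies to MADT GICC entries elsewhere in this commit):

    /* A GICC entry is usable if it is enabled now, or disabled but
     * marked online-capable, i.e. it may become enabled later. */
    static bool gicc_is_usable(struct acpi_madt_generic_interrupt *gicc)
    {
        return gicc->flags &
               (ACPI_MADT_ENABLED | ACPI_MADT_GICC_ONLINE_CAPABLE);
    }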


@ -13,6 +13,7 @@ ARM64 Architecture
asymmetric-32bit
booting
cpu-feature-registers
cpu-hotplug
elf_hwcaps
hugetlbpage
kdump


@ -18,12 +18,10 @@ ARMv8.2 adds optional support for Large Virtual Address space. This is
only available when running with a 64KB page size and expands the
number of descriptors in the first level of translation.
User addresses have bits 63:48 set to 0 while the kernel addresses have
the same bits set to 1. TTBRx selection is given by bit 63 of the
virtual address. The swapper_pg_dir contains only kernel (global)
mappings while the user pgd contains only user (non-global) mappings.
The swapper_pg_dir address is written to TTBR1 and never written to
TTBR0.
TTBRx selection is given by bit 55 of the virtual address. The
swapper_pg_dir contains only kernel (global) mappings while the user pgd
contains only user (non-global) mappings. The swapper_pg_dir address is
written to TTBR1 and never written to TTBR0.
AArch64 Linux memory layout with 4KB pages + 4 levels (48-bit)::
@ -65,14 +63,14 @@ Translation table lookup with 4KB pages::
+--------+--------+--------+--------+--------+--------+--------+--------+
|63 56|55 48|47 40|39 32|31 24|23 16|15 8|7 0|
+--------+--------+--------+--------+--------+--------+--------+--------+
| | | | | |
| | | | | v
| | | | | [11:0] in-page offset
| | | | +-> [20:12] L3 index
| | | +-----------> [29:21] L2 index
| | +---------------------> [38:30] L1 index
| +-------------------------------> [47:39] L0 index
+-------------------------------------------------> [63] TTBR0/1
| | | | | |
| | | | | v
| | | | | [11:0] in-page offset
| | | | +-> [20:12] L3 index
| | | +-----------> [29:21] L2 index
| | +---------------------> [38:30] L1 index
| +-------------------------------> [47:39] L0 index
+----------------------------------------> [55] TTBR0/1
Translation table lookup with 64KB pages::
@ -80,14 +78,14 @@ Translation table lookup with 64KB pages::
+--------+--------+--------+--------+--------+--------+--------+--------+
|63 56|55 48|47 40|39 32|31 24|23 16|15 8|7 0|
+--------+--------+--------+--------+--------+--------+--------+--------+
| | | | |
| | | | v
| | | | [15:0] in-page offset
| | | +----------> [28:16] L3 index
| | +--------------------------> [41:29] L2 index
| +-------------------------------> [47:42] L1 index (48-bit)
| [51:42] L1 index (52-bit)
+-------------------------------------------------> [63] TTBR0/1
| | | | |
| | | | v
| | | | [15:0] in-page offset
| | | +----------> [28:16] L3 index
| | +--------------------------> [41:29] L2 index
| +-------------------------------> [47:42] L1 index (48-bit)
| [51:42] L1 index (52-bit)
+----------------------------------------> [55] TTBR0/1
When using KVM without the Virtualization Host Extensions, the
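One practical consequence of documenting bit 55 (an illustrative sketch,
not from the patch): with TBI enabled the top byte of a pointer is ignored
for translation, so a tag stored in bits 63:56 must not affect TTBR
selection, while bit 55 still does:

    /* 'buf' is an assumed user-space allocation. Setting a tag in the
     * ignored top byte leaves bit 55 clear, so the tagged pointer
     * still translates via TTBR0 to the same mapping. */
    void *tagged = (void *)((unsigned long)buf | (0x5aUL << 56));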


@ -132,16 +132,26 @@ stable kernels.
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A710 | #2224489 | ARM64_ERRATUM_2224489 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A710 | #3324338 | ARM64_ERRATUM_3194386 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A715 | #2645198 | ARM64_ERRATUM_2645198 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A720 | #3456091 | ARM64_ERRATUM_3194386 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-X1 | #1502854 | N/A |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-X2 | #2119858 | ARM64_ERRATUM_2119858 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-X2 | #2224489 | ARM64_ERRATUM_2224489 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-X2 | #3324338 | ARM64_ERRATUM_3194386 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-X3 | #3324335 | ARM64_ERRATUM_3194386 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-X4 | #3194386 | ARM64_ERRATUM_3194386 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-X925 | #3324334 | ARM64_ERRATUM_3194386 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Neoverse-N1 | #1188873,1418040| ARM64_ERRATUM_1418040 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Neoverse-N1 | #1349291 | N/A |
@ -156,9 +166,13 @@ stable kernels.
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Neoverse-N2 | #2253138 | ARM64_ERRATUM_2253138 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Neoverse-N2 | #3324339 | ARM64_ERRATUM_3194386 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Neoverse-V1 | #1619801 | N/A |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Neoverse-V3 | #3312417 | ARM64_ERRATUM_3312417 |
| ARM | Neoverse-V2 | #3324336 | ARM64_ERRATUM_3194386 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Neoverse-V3 | #3312417 | ARM64_ERRATUM_3194386 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | MMU-500 | #841119,826419 | N/A |
+----------------+-----------------+-----------------+-----------------------------+


@ -53,14 +53,20 @@ properties:
- arm,cortex-a710-pmu
- arm,cortex-a715-pmu
- arm,cortex-a720-pmu
- arm,cortex-a725-pmu
- arm,cortex-x1-pmu
- arm,cortex-x2-pmu
- arm,cortex-x3-pmu
- arm,cortex-x4-pmu
- arm,cortex-x925-pmu
- arm,neoverse-e1-pmu
- arm,neoverse-n1-pmu
- arm,neoverse-n2-pmu
- arm,neoverse-n3-pmu
- arm,neoverse-v1-pmu
- arm,neoverse-v2-pmu
- arm,neoverse-v3-pmu
- arm,neoverse-v3ae-pmu
- brcm,vulcan-pmu
- cavium,thunder-pmu
- nvidia,denver-pmu


@ -30,6 +30,9 @@ properties:
- items:
- const: fsl,imx8dxl-ddr-pmu
- const: fsl,imx8-ddr-pmu
- items:
- const: fsl,imx95-ddr-pmu
- const: fsl,imx93-ddr-pmu
reg:
maxItems: 1


@ -78,8 +78,6 @@ obj-$(CONFIG_CPU_XSC3) += xscale-cp0.o
obj-$(CONFIG_CPU_MOHAWK) += xscale-cp0.o
obj-$(CONFIG_IWMMXT) += iwmmxt.o
obj-$(CONFIG_PERF_EVENTS) += perf_regs.o perf_callchain.o
obj-$(CONFIG_HW_PERF_EVENTS) += perf_event_xscale.o perf_event_v6.o \
perf_event_v7.o
AFLAGS_iwmmxt.o := -Wa,-mcpu=iwmmxt
obj-$(CONFIG_ARM_CPU_TOPOLOGY) += topology.o
obj-$(CONFIG_VDSO) += vdso.o


@ -5,6 +5,7 @@ config ARM64
select ACPI_CCA_REQUIRED if ACPI
select ACPI_GENERIC_GSI if ACPI
select ACPI_GTDT if ACPI
select ACPI_HOTPLUG_CPU if ACPI_PROCESSOR && HOTPLUG_CPU
select ACPI_IORT if ACPI
select ACPI_REDUCED_HARDWARE_ONLY if ACPI
select ACPI_MCFG if (ACPI && PCI)
@ -381,7 +382,7 @@ config BROKEN_GAS_INST
config BUILTIN_RETURN_ADDRESS_STRIPS_PAC
bool
# Clang's __builtin_return_adddress() strips the PAC since 12.0.0
# Clang's __builtin_return_address() strips the PAC since 12.0.0
# https://github.com/llvm/llvm-project/commit/2a96f47c5ffca84cd774ad402cacd137f4bf45e2
default y if CC_IS_CLANG
# GCC's __builtin_return_address() strips the PAC since 11.1.0,
@ -1067,15 +1068,21 @@ config ARM64_ERRATUM_3117295
If unsure, say Y.
config ARM64_WORKAROUND_SPECULATIVE_SSBS
bool
config ARM64_ERRATUM_3194386
bool "Cortex-X4: 3194386: workaround for MSR SSBS not self-synchronizing"
select ARM64_WORKAROUND_SPECULATIVE_SSBS
bool "Cortex-{A720,X4,X925}/Neoverse-V3: workaround for MSR SSBS not self-synchronizing"
default y
help
This option adds the workaround for ARM Cortex-X4 erratum 3194386.
This option adds the workaround for the following errata:
* ARM Cortex-A710 erratum 3324338
* ARM Cortex-A720 erratum 3456091
* ARM Cortex-X2 erratum 3324338
* ARM Cortex-X3 erratum 3324335
* ARM Cortex-X4 erratum 3194386
* ARM Cortex-X925 erratum 3324334
* ARM Neoverse N2 erratum 3324339
* ARM Neoverse V2 erratum 3324336
* ARM Neoverse-V3 erratum 3312417
On affected cores "MSR SSBS, #0" instructions may not affect
subsequent speculative instructions, which may permit unexpected
@ -1089,26 +1096,6 @@ config ARM64_ERRATUM_3194386
If unsure, say Y.
config ARM64_ERRATUM_3312417
bool "Neoverse-V3: 3312417: workaround for MSR SSBS not self-synchronizing"
select ARM64_WORKAROUND_SPECULATIVE_SSBS
default y
help
This option adds the workaround for ARM Neoverse-V3 erratum 3312417.
On affected cores "MSR SSBS, #0" instructions may not affect
subsequent speculative instructions, which may permit unexpected
speculative store bypassing.
Work around this problem by placing a speculation barrier after
kernel changes to SSBS. The presence of the SSBS special-purpose
register is hidden from hwcaps and EL0 reads of ID_AA64PFR1_EL1, such
that userspace will use the PR_SPEC_STORE_BYPASS prctl to change
SSBS.
If unsure, say Y.
config CAVIUM_ERRATUM_22375
bool "Cavium erratum 22375, 24313"
default y


@ -119,6 +119,18 @@ static inline u32 get_acpi_id_for_cpu(unsigned int cpu)
return acpi_cpu_get_madt_gicc(cpu)->uid;
}
static inline int get_cpu_for_acpi_id(u32 uid)
{
int cpu;
for (cpu = 0; cpu < nr_cpu_ids; cpu++)
if (acpi_cpu_get_madt_gicc(cpu) &&
uid == get_acpi_id_for_cpu(cpu))
return cpu;
return -EINVAL;
}
static inline void arch_fix_phys_package_id(int num, u32 slot) { }
void __init acpi_init_cpus(void);
int apei_claim_sea(struct pt_regs *regs);
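A hypothetical caller of the helper moved into this header (sketch only;
'gicc' is assumed to be a parsed MADT GICC entry):

    int cpu = get_cpu_for_acpi_id(gicc->uid);
    if (cpu < 0)
        pr_warn("no logical CPU for ACPI UID %u\n", gicc->uid);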


@ -175,21 +175,6 @@ static inline bool gic_prio_masking_enabled(void)
static inline void gic_pmr_mask_irqs(void)
{
BUILD_BUG_ON(GICD_INT_DEF_PRI < (__GIC_PRIO_IRQOFF |
GIC_PRIO_PSR_I_SET));
BUILD_BUG_ON(GICD_INT_DEF_PRI >= GIC_PRIO_IRQON);
/*
* Need to make sure IRQON allows IRQs when SCR_EL3.FIQ is cleared
* and non-secure PMR accesses are not subject to the shifts that
* are applied to IRQ priorities
*/
BUILD_BUG_ON((0x80 | (GICD_INT_DEF_PRI >> 1)) >= GIC_PRIO_IRQON);
/*
* Same situation as above, but now we make sure that we can mask
* regular interrupts.
*/
BUILD_BUG_ON((0x80 | (GICD_INT_DEF_PRI >> 1)) < (__GIC_PRIO_IRQOFF_NS |
GIC_PRIO_PSR_I_SET));
gic_write_pmr(GIC_PRIO_IRQOFF);
}


@ -15,7 +15,7 @@
#include <linux/bug.h>
#include <linux/init.h>
#include <linux/jump_label.h>
#include <linux/smp.h>
#include <linux/percpu.h>
#include <linux/types.h>
#include <clocksource/arm_arch_timer.h>


@ -6,7 +6,7 @@
#ifndef __ASM_PMUV3_H
#define __ASM_PMUV3_H
#include <linux/kvm_host.h>
#include <asm/kvm_host.h>
#include <asm/cpufeature.h>
#include <asm/sysreg.h>


@ -59,7 +59,7 @@ cpucap_is_possible(const unsigned int cap)
case ARM64_WORKAROUND_REPEAT_TLBI:
return IS_ENABLED(CONFIG_ARM64_WORKAROUND_REPEAT_TLBI);
case ARM64_WORKAROUND_SPECULATIVE_SSBS:
return IS_ENABLED(CONFIG_ARM64_WORKAROUND_SPECULATIVE_SSBS);
return IS_ENABLED(CONFIG_ARM64_ERRATUM_3194386);
}
return true;


@ -588,14 +588,14 @@ static inline bool id_aa64pfr0_32bit_el1(u64 pfr0)
{
u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_EL1_SHIFT);
return val == ID_AA64PFR0_EL1_ELx_32BIT_64BIT;
return val == ID_AA64PFR0_EL1_EL1_AARCH32;
}
static inline bool id_aa64pfr0_32bit_el0(u64 pfr0)
{
u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_EL0_SHIFT);
return val == ID_AA64PFR0_EL1_ELx_32BIT_64BIT;
return val == ID_AA64PFR0_EL1_EL0_AARCH32;
}
static inline bool id_aa64pfr0_sve(u64 pfr0)


@ -86,9 +86,12 @@
#define ARM_CPU_PART_CORTEX_X2 0xD48
#define ARM_CPU_PART_NEOVERSE_N2 0xD49
#define ARM_CPU_PART_CORTEX_A78C 0xD4B
#define ARM_CPU_PART_CORTEX_X3 0xD4E
#define ARM_CPU_PART_NEOVERSE_V2 0xD4F
#define ARM_CPU_PART_CORTEX_A720 0xD81
#define ARM_CPU_PART_CORTEX_X4 0xD82
#define ARM_CPU_PART_NEOVERSE_V3 0xD84
#define ARM_CPU_PART_CORTEX_X925 0xD85
#define APM_CPU_PART_XGENE 0x000
#define APM_CPU_VAR_POTENZA 0x00
@ -162,9 +165,12 @@
#define MIDR_CORTEX_X2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2)
#define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2)
#define MIDR_CORTEX_A78C MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78C)
#define MIDR_CORTEX_X3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X3)
#define MIDR_NEOVERSE_V2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V2)
#define MIDR_CORTEX_A720 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A720)
#define MIDR_CORTEX_X4 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X4)
#define MIDR_NEOVERSE_V3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V3)
#define MIDR_CORTEX_X925 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X925)
#define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
#define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
#define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)


@ -121,6 +121,14 @@
#define ESR_ELx_FSC_SECC (0x18)
#define ESR_ELx_FSC_SECC_TTW(n) (0x1c + (n))
/* Status codes for individual page table levels */
#define ESR_ELx_FSC_ACCESS_L(n) (ESR_ELx_FSC_ACCESS + n)
#define ESR_ELx_FSC_PERM_L(n) (ESR_ELx_FSC_PERM + n)
#define ESR_ELx_FSC_FAULT_nL (0x2C)
#define ESR_ELx_FSC_FAULT_L(n) (((n) < 0 ? ESR_ELx_FSC_FAULT_nL : \
ESR_ELx_FSC_FAULT) + (n))
/* ISS field definitions for Data Aborts */
#define ESR_ELx_ISV_SHIFT (24)
#define ESR_ELx_ISV (UL(1) << ESR_ELx_ISV_SHIFT)
@ -388,20 +396,33 @@ static inline bool esr_is_data_abort(unsigned long esr)
static inline bool esr_fsc_is_translation_fault(unsigned long esr)
{
/* Translation fault, level -1 */
if ((esr & ESR_ELx_FSC) == 0b101011)
return true;
return (esr & ESR_ELx_FSC_TYPE) == ESR_ELx_FSC_FAULT;
esr = esr & ESR_ELx_FSC;
return (esr == ESR_ELx_FSC_FAULT_L(3)) ||
(esr == ESR_ELx_FSC_FAULT_L(2)) ||
(esr == ESR_ELx_FSC_FAULT_L(1)) ||
(esr == ESR_ELx_FSC_FAULT_L(0)) ||
(esr == ESR_ELx_FSC_FAULT_L(-1));
}
static inline bool esr_fsc_is_permission_fault(unsigned long esr)
{
return (esr & ESR_ELx_FSC_TYPE) == ESR_ELx_FSC_PERM;
esr = esr & ESR_ELx_FSC;
return (esr == ESR_ELx_FSC_PERM_L(3)) ||
(esr == ESR_ELx_FSC_PERM_L(2)) ||
(esr == ESR_ELx_FSC_PERM_L(1)) ||
(esr == ESR_ELx_FSC_PERM_L(0));
}
static inline bool esr_fsc_is_access_flag_fault(unsigned long esr)
{
return (esr & ESR_ELx_FSC_TYPE) == ESR_ELx_FSC_ACCESS;
esr = esr & ESR_ELx_FSC;
return (esr == ESR_ELx_FSC_ACCESS_L(3)) ||
(esr == ESR_ELx_FSC_ACCESS_L(2)) ||
(esr == ESR_ELx_FSC_ACCESS_L(1)) ||
(esr == ESR_ELx_FSC_ACCESS_L(0));
}
/* Indicate whether ESR.EC==0x1A is for an ERETAx instruction */
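A quick sanity check of the new macros (reasoning only, not part of the
patch): the level -1 translation fault must keep the architected encoding
of 0b101011 that the old code compared against.

    /* ESR_ELx_FSC_FAULT      == 0x04  (level-0 translation fault)
     * ESR_ELx_FSC_FAULT_nL   == 0x2C  (base for negative levels)
     * ESR_ELx_FSC_FAULT_L(-1) = 0x2C + (-1) = 0x2B = 0b101011,
     * the literal the old code tested, while
     * ESR_ELx_FSC_FAULT_L(0..3) = 0x04..0x07 as before. */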


@ -72,11 +72,11 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
{
unsigned long tcr = read_sysreg(tcr_el1);
if ((tcr & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET == t0sz)
if ((tcr & TCR_T0SZ_MASK) == t0sz)
return;
tcr &= ~TCR_T0SZ_MASK;
tcr |= t0sz << TCR_T0SZ_OFFSET;
tcr |= t0sz;
write_sysreg(tcr, tcr_el1);
isb();
}


@ -182,7 +182,7 @@ void mte_check_tfsr_el1(void);
static inline void mte_check_tfsr_entry(void)
{
if (!system_supports_mte())
if (!kasan_hw_tags_enabled())
return;
mte_check_tfsr_el1();
@ -190,7 +190,7 @@ static inline void mte_check_tfsr_entry(void)
static inline void mte_check_tfsr_exit(void)
{
if (!system_supports_mte())
if (!kasan_hw_tags_enabled())
return;
/*


@ -21,35 +21,12 @@
#define INIT_PSTATE_EL2 \
(PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT | PSR_MODE_EL2h)
/*
* PMR values used to mask/unmask interrupts.
*
* GIC priority masking works as follows: if an IRQ's priority is a higher value
* than the value held in PMR, that IRQ is masked. Lowering the value of PMR
* means masking more IRQs (or at least that the same IRQs remain masked).
*
* To mask interrupts, we clear the most significant bit of PMR.
*
* Some code sections either automatically switch back to PSR.I or explicitly
* require to not use priority masking. If bit GIC_PRIO_PSR_I_SET is included
* in the priority mask, it indicates that PSR.I should be set and
* interrupt disabling temporarily does not rely on IRQ priorities.
*/
#define GIC_PRIO_IRQON 0xe0
#define __GIC_PRIO_IRQOFF (GIC_PRIO_IRQON & ~0x80)
#define __GIC_PRIO_IRQOFF_NS 0xa0
#define GIC_PRIO_PSR_I_SET (1 << 4)
#include <linux/irqchip/arm-gic-v3-prio.h>
#define GIC_PRIO_IRQOFF \
({ \
extern struct static_key_false gic_nonsecure_priorities;\
u8 __prio = __GIC_PRIO_IRQOFF; \
\
if (static_branch_unlikely(&gic_nonsecure_priorities)) \
__prio = __GIC_PRIO_IRQOFF_NS; \
\
__prio; \
})
#define GIC_PRIO_IRQON GICV3_PRIO_UNMASKED
#define GIC_PRIO_IRQOFF GICV3_PRIO_IRQ
#define GIC_PRIO_PSR_I_SET GICV3_PRIO_PSR_I_SET
/* Additional SPSR bits not exposed in the UABI */
#define PSR_MODE_THREAD_BIT (1 << 0)
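For context, a sketch of how the compile-time scheme works (the actual
values live in <linux/irqchip/arm-gic-v3-prio.h>, which is not shown in
this diff; the behaviour below is an illustrative assumption):

    /* The GIC only signals an interrupt whose priority is numerically
     * lower than ICC_PMR_EL1. With fixed priorities chosen at boot,
     * e.g. pseudo-NMIs below regular IRQs, masking no longer needs a
     * runtime branch on the gic_nonsecure_priorities static key: */
    gic_write_pmr(GIC_PRIO_IRQOFF); /* mask IRQs; pseudo-NMIs stay live */
    gic_write_pmr(GIC_PRIO_IRQON);  /* unmask everything */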


@ -25,22 +25,11 @@
#ifndef __ASSEMBLY__
#include <asm/percpu.h>
#include <linux/threads.h>
#include <linux/cpumask.h>
#include <linux/thread_info.h>
DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
/*
* We don't use this_cpu_read(cpu_number) as that has implicit writes to
* preempt_count, and associated (compiler) barriers, that we'd like to avoid
* the expense of. If we're preemptible, the value can be stale at use anyway.
* And we can't use this_cpu_ptr() either, as that winds up recursing back
* here under CONFIG_DEBUG_PREEMPT=y.
*/
#define raw_smp_processor_id() (*raw_cpu_ptr(&cpu_number))
#define raw_smp_processor_id() (current_thread_info()->cpu)
/*
* Logical CPU mapping.
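A usage reminder (not from the patch): the new definition is a plain
structure load, but like the old per-CPU read its value is only stable
while the caller cannot migrate:

    preempt_disable();
    int cpu = smp_processor_id();   /* stable while preemption is off */
    /* ... per-CPU work ... */
    preempt_enable();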


@ -872,10 +872,6 @@
/* Position the attr at the correct index */
#define MAIR_ATTRIDX(attr, idx) ((attr) << ((idx) * 8))
/* id_aa64pfr0 */
#define ID_AA64PFR0_EL1_ELx_64BIT_ONLY 0x1
#define ID_AA64PFR0_EL1_ELx_32BIT_64BIT 0x2
/* id_aa64mmfr0 */
#define ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MIN 0x0
#define ID_AA64MMFR0_EL1_TGRAN4_LPA2 ID_AA64MMFR0_EL1_TGRAN4_52_BIT


@ -46,7 +46,6 @@ obj-$(CONFIG_PERF_EVENTS) += perf_regs.o perf_callchain.o
obj-$(CONFIG_HARDLOCKUP_DETECTOR_PERF) += watchdog_hld.o
obj-$(CONFIG_HAVE_HW_BREAKPOINT) += hw_breakpoint.o
obj-$(CONFIG_CPU_PM) += sleep.o suspend.o
obj-$(CONFIG_CPU_IDLE) += cpuidle.o
obj-$(CONFIG_JUMP_LABEL) += jump_label.o
obj-$(CONFIG_KGDB) += kgdb.o
obj-$(CONFIG_EFI) += efi.o efi-rt-wrapper.o


@ -30,6 +30,7 @@
#include <linux/pgtable.h>
#include <acpi/ghes.h>
#include <acpi/processor.h>
#include <asm/cputype.h>
#include <asm/cpu_ops.h>
#include <asm/daifflags.h>
@ -45,6 +46,7 @@ EXPORT_SYMBOL(acpi_pci_disabled);
static bool param_acpi_off __initdata;
static bool param_acpi_on __initdata;
static bool param_acpi_force __initdata;
static bool param_acpi_nospcr __initdata;
static int __init parse_acpi(char *arg)
{
@ -58,6 +60,8 @@ static int __init parse_acpi(char *arg)
param_acpi_on = true;
else if (strcmp(arg, "force") == 0) /* force ACPI to be enabled */
param_acpi_force = true;
else if (strcmp(arg, "nospcr") == 0) /* disable SPCR as default console */
param_acpi_nospcr = true;
else
return -EINVAL; /* Core will print when we return error */
@ -237,7 +241,20 @@ done:
acpi_put_table(facs);
}
#endif
acpi_parse_spcr(earlycon_acpi_spcr_enable, true);
/*
* For privacy and security reasons we sometimes need to silence the
* serial console output completely and only enable it when needed.
* However, many existing systems depend on the current behaviour, so
* acpi=nospcr is used to stop the ACPI SPCR table from selecting the
* default serial console.
*/
acpi_parse_spcr(earlycon_acpi_spcr_enable,
!param_acpi_nospcr);
pr_info("Use ACPI SPCR as default console: %s\n",
param_acpi_nospcr ? "No" : "Yes");
if (IS_ENABLED(CONFIG_ACPI_BGRT))
acpi_table_parse(ACPI_SIG_BGRT, acpi_parse_bgrt);
}
@ -423,107 +440,23 @@ void arch_reserve_mem_area(acpi_physical_address addr, size_t size)
memblock_mark_nomap(addr, size);
}
#ifdef CONFIG_ACPI_FFH
/*
* Implements ARM64 specific callbacks to support ACPI FFH Operation Region as
* specified in https://developer.arm.com/docs/den0048/latest
*/
struct acpi_ffh_data {
struct acpi_ffh_info info;
void (*invoke_ffh_fn)(unsigned long a0, unsigned long a1,
unsigned long a2, unsigned long a3,
unsigned long a4, unsigned long a5,
unsigned long a6, unsigned long a7,
struct arm_smccc_res *args,
struct arm_smccc_quirk *res);
void (*invoke_ffh64_fn)(const struct arm_smccc_1_2_regs *args,
struct arm_smccc_1_2_regs *res);
};
int acpi_ffh_address_space_arch_setup(void *handler_ctxt, void **region_ctxt)
#ifdef CONFIG_ACPI_HOTPLUG_CPU
int acpi_map_cpu(acpi_handle handle, phys_cpuid_t physid, u32 acpi_id,
int *pcpu)
{
enum arm_smccc_conduit conduit;
struct acpi_ffh_data *ffh_ctxt;
if (arm_smccc_get_version() < ARM_SMCCC_VERSION_1_2)
return -EOPNOTSUPP;
conduit = arm_smccc_1_1_get_conduit();
if (conduit == SMCCC_CONDUIT_NONE) {
pr_err("%s: invalid SMCCC conduit\n", __func__);
return -EOPNOTSUPP;
/* If an error code is passed in this stub can't fix it */
if (*pcpu < 0) {
pr_warn_once("Unable to map CPU to valid ID\n");
return *pcpu;
}
ffh_ctxt = kzalloc(sizeof(*ffh_ctxt), GFP_KERNEL);
if (!ffh_ctxt)
return -ENOMEM;
if (conduit == SMCCC_CONDUIT_SMC) {
ffh_ctxt->invoke_ffh_fn = __arm_smccc_smc;
ffh_ctxt->invoke_ffh64_fn = arm_smccc_1_2_smc;
} else {
ffh_ctxt->invoke_ffh_fn = __arm_smccc_hvc;
ffh_ctxt->invoke_ffh64_fn = arm_smccc_1_2_hvc;
}
memcpy(ffh_ctxt, handler_ctxt, sizeof(ffh_ctxt->info));
*region_ctxt = ffh_ctxt;
return AE_OK;
return 0;
}
EXPORT_SYMBOL(acpi_map_cpu);
static bool acpi_ffh_smccc_owner_allowed(u32 fid)
int acpi_unmap_cpu(int cpu)
{
int owner = ARM_SMCCC_OWNER_NUM(fid);
if (owner == ARM_SMCCC_OWNER_STANDARD ||
owner == ARM_SMCCC_OWNER_SIP || owner == ARM_SMCCC_OWNER_OEM)
return true;
return false;
return 0;
}
int acpi_ffh_address_space_arch_handler(acpi_integer *value, void *region_context)
{
int ret = 0;
struct acpi_ffh_data *ffh_ctxt = region_context;
if (ffh_ctxt->info.offset == 0) {
/* SMC/HVC 32bit call */
struct arm_smccc_res res;
u32 a[8] = { 0 }, *ptr = (u32 *)value;
if (!ARM_SMCCC_IS_FAST_CALL(*ptr) || ARM_SMCCC_IS_64(*ptr) ||
!acpi_ffh_smccc_owner_allowed(*ptr) ||
ffh_ctxt->info.length > 32) {
ret = AE_ERROR;
} else {
int idx, len = ffh_ctxt->info.length >> 2;
for (idx = 0; idx < len; idx++)
a[idx] = *(ptr + idx);
ffh_ctxt->invoke_ffh_fn(a[0], a[1], a[2], a[3], a[4],
a[5], a[6], a[7], &res, NULL);
memcpy(value, &res, sizeof(res));
}
} else if (ffh_ctxt->info.offset == 1) {
/* SMC/HVC 64bit call */
struct arm_smccc_1_2_regs *r = (struct arm_smccc_1_2_regs *)value;
if (!ARM_SMCCC_IS_FAST_CALL(r->a0) || !ARM_SMCCC_IS_64(r->a0) ||
!acpi_ffh_smccc_owner_allowed(r->a0) ||
ffh_ctxt->info.length > sizeof(*r)) {
ret = AE_ERROR;
} else {
ffh_ctxt->invoke_ffh64_fn(r, r);
memcpy(value, r, ffh_ctxt->info.length);
}
} else {
ret = AE_ERROR;
}
return ret;
}
#endif /* CONFIG_ACPI_FFH */
EXPORT_SYMBOL(acpi_unmap_cpu);
#endif /* CONFIG_ACPI_HOTPLUG_CPU */


@ -34,17 +34,6 @@ int __init acpi_numa_get_nid(unsigned int cpu)
return acpi_early_node_map[cpu];
}
static inline int get_cpu_for_acpi_id(u32 uid)
{
int cpu;
for (cpu = 0; cpu < nr_cpu_ids; cpu++)
if (uid == get_acpi_id_for_cpu(cpu))
return cpu;
return -EINVAL;
}
static int __init acpi_parse_gicc_pxm(union acpi_subtable_headers *header,
const unsigned long end)
{


@ -432,14 +432,17 @@ static const struct midr_range erratum_spec_unpriv_load_list[] = {
};
#endif
#ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_SSBS
static const struct midr_range erratum_spec_ssbs_list[] = {
#ifdef CONFIG_ARM64_ERRATUM_3194386
static const struct midr_range erratum_spec_ssbs_list[] = {
MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
MIDR_ALL_VERSIONS(MIDR_CORTEX_A720),
MIDR_ALL_VERSIONS(MIDR_CORTEX_X2),
MIDR_ALL_VERSIONS(MIDR_CORTEX_X3),
MIDR_ALL_VERSIONS(MIDR_CORTEX_X4),
#endif
#ifdef CONFIG_ARM64_ERRATUM_3312417
MIDR_ALL_VERSIONS(MIDR_CORTEX_X925),
MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V3),
#endif
MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V2),
{}
};
#endif
@ -741,9 +744,9 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
MIDR_FIXED(MIDR_CPU_VAR_REV(1,1), BIT(25)),
},
#endif
#ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_SSBS
#ifdef CONFIG_ARM64_ERRATUM_3194386
{
.desc = "ARM errata 3194386, 3312417",
.desc = "SSBS not fully self-synchronizing",
.capability = ARM64_WORKAROUND_SPECULATIVE_SSBS,
ERRATA_MIDR_RANGE_LIST(erratum_spec_ssbs_list),
},


@ -285,8 +285,8 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_FP_SHIFT, 4, ID_AA64PFR0_EL1_FP_NI),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_EL3_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_EL2_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_EL1_SHIFT, 4, ID_AA64PFR0_EL1_ELx_64BIT_ONLY),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_EL0_SHIFT, 4, ID_AA64PFR0_EL1_ELx_64BIT_ONLY),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_EL1_SHIFT, 4, ID_AA64PFR0_EL1_EL1_IMP),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_EL0_SHIFT, 4, ID_AA64PFR0_EL1_EL0_IMP),
ARM64_FTR_END,
};
@ -429,6 +429,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
};
static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_EL1_ECBHB_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_EL1_TIDCP1_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_EL1_AFP_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_EL1_HCX_SHIFT, 4, 0),


@ -105,11 +105,6 @@ KVM_NVHE_ALIAS(__hyp_stub_vectors);
KVM_NVHE_ALIAS(vgic_v2_cpuif_trap);
KVM_NVHE_ALIAS(vgic_v3_cpuif_trap);
#ifdef CONFIG_ARM64_PSEUDO_NMI
/* Static key checked in GIC_PRIO_IRQOFF. */
KVM_NVHE_ALIAS(gic_nonsecure_priorities);
#endif
/* EL2 exception handling */
KVM_NVHE_ALIAS(__start___kvm_ex_table);
KVM_NVHE_ALIAS(__stop___kvm_ex_table);


@ -567,7 +567,7 @@ static enum mitigation_state spectre_v4_enable_hw_mitigation(void)
* Mitigate this with an unconditional speculation barrier, as CPUs
* could mis-speculate branches and bypass a conditional barrier.
*/
if (IS_ENABLED(CONFIG_ARM64_WORKAROUND_SPECULATIVE_SSBS))
if (IS_ENABLED(CONFIG_ARM64_ERRATUM_3194386))
spec_bar();
return SPECTRE_MITIGATED;


@ -40,7 +40,7 @@ static int cpu_psci_cpu_boot(unsigned int cpu)
{
phys_addr_t pa_secondary_entry = __pa_symbol(secondary_entry);
int err = psci_ops.cpu_on(cpu_logical_map(cpu), pa_secondary_entry);
if (err)
if (err && err != -EPERM)
pr_err("failed to boot CPU%d (%d)\n", cpu, err);
return err;


@ -74,4 +74,5 @@ static void __exit reloc_test_exit(void)
module_init(reloc_test_init);
module_exit(reloc_test_exit);
MODULE_DESCRIPTION("Relocation testing module");
MODULE_LICENSE("GPL v2");


@ -55,9 +55,6 @@
#include <trace/events/ipi.h>
DEFINE_PER_CPU_READ_MOSTLY(int, cpu_number);
EXPORT_PER_CPU_SYMBOL(cpu_number);
/*
* as from 2.5, kernels no longer have an init_tasks structure
* so we need some other way of telling a new secondary core
@ -132,7 +129,8 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
/* Now bring the CPU into our world */
ret = boot_secondary(cpu, idle);
if (ret) {
pr_err("CPU%u: failed to boot: %d\n", cpu, ret);
if (ret != -EPERM)
pr_err("CPU%u: failed to boot: %d\n", cpu, ret);
return ret;
}
@ -510,6 +508,59 @@ static int __init smp_cpu_setup(int cpu)
static bool bootcpu_valid __initdata;
static unsigned int cpu_count = 1;
int arch_register_cpu(int cpu)
{
acpi_handle acpi_handle = acpi_get_processor_handle(cpu);
struct cpu *c = &per_cpu(cpu_devices, cpu);
if (!acpi_disabled && !acpi_handle &&
IS_ENABLED(CONFIG_ACPI_HOTPLUG_CPU))
return -EPROBE_DEFER;
#ifdef CONFIG_ACPI_HOTPLUG_CPU
/* For now block anything that looks like physical CPU Hotplug */
if (invalid_logical_cpuid(cpu) || !cpu_present(cpu)) {
pr_err_once("Changing CPU present bit is not supported\n");
return -ENODEV;
}
#endif
/*
* Availability of the acpi handle is sufficient to establish
* that _STA has already been checked. No need to recheck here.
*/
c->hotpluggable = arch_cpu_is_hotpluggable(cpu);
return register_cpu(c, cpu);
}
#ifdef CONFIG_ACPI_HOTPLUG_CPU
void arch_unregister_cpu(int cpu)
{
acpi_handle acpi_handle = acpi_get_processor_handle(cpu);
struct cpu *c = &per_cpu(cpu_devices, cpu);
acpi_status status;
unsigned long long sta;
if (!acpi_handle) {
pr_err_once("Removing a CPU without associated ACPI handle\n");
return;
}
status = acpi_evaluate_integer(acpi_handle, "_STA", NULL, &sta);
if (ACPI_FAILURE(status))
return;
/* For now do not allow anything that looks like physical CPU HP */
if (cpu_present(cpu) && !(sta & ACPI_STA_DEVICE_PRESENT)) {
pr_err_once("Changing CPU present bit is not supported\n");
return;
}
unregister_cpu(c);
}
#endif /* CONFIG_ACPI_HOTPLUG_CPU */
#ifdef CONFIG_ACPI
static struct acpi_madt_generic_interrupt cpu_madt_gicc[NR_CPUS];
@ -530,7 +581,8 @@ acpi_map_gic_cpu_interface(struct acpi_madt_generic_interrupt *processor)
{
u64 hwid = processor->arm_mpidr;
if (!acpi_gicc_is_usable(processor)) {
if (!(processor->flags &
(ACPI_MADT_ENABLED | ACPI_MADT_GICC_ONLINE_CAPABLE))) {
pr_debug("skipping disabled CPU entry with 0x%llx MPIDR\n", hwid);
return;
}
@ -749,8 +801,6 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
*/
for_each_possible_cpu(cpu) {
per_cpu(cpu_number, cpu) = cpu;
if (cpu == smp_processor_id())
continue;
@ -767,13 +817,15 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
}
}
static const char *ipi_types[NR_IPI] __tracepoint_string = {
static const char *ipi_types[MAX_IPI] __tracepoint_string = {
[IPI_RESCHEDULE] = "Rescheduling interrupts",
[IPI_CALL_FUNC] = "Function call interrupts",
[IPI_CPU_STOP] = "CPU stop interrupts",
[IPI_CPU_CRASH_STOP] = "CPU stop (for crash dump) interrupts",
[IPI_TIMER] = "Timer broadcast interrupts",
[IPI_IRQ_WORK] = "IRQ work interrupts",
[IPI_CPU_BACKTRACE] = "CPU backtrace interrupts",
[IPI_KGDB_ROUNDUP] = "KGDB roundup interrupts",
};
static void smp_cross_call(const struct cpumask *target, unsigned int ipinr);
@ -784,7 +836,7 @@ int arch_show_interrupts(struct seq_file *p, int prec)
{
unsigned int cpu, i;
for (i = 0; i < NR_IPI; i++) {
for (i = 0; i < MAX_IPI; i++) {
seq_printf(p, "%*s%u:%s", prec - 1, "IPI", i,
prec >= 4 ? " " : "");
for_each_online_cpu(cpu)
@ -1028,12 +1080,12 @@ void __init set_smp_ipi_range(int ipi_base, int n)
if (ipi_should_be_nmi(i)) {
err = request_percpu_nmi(ipi_base + i, ipi_handler,
"IPI", &cpu_number);
"IPI", &irq_stat);
WARN(err, "Could not request IPI %d as NMI, err=%d\n",
i, err);
} else {
err = request_percpu_irq(ipi_base + i, ipi_handler,
"IPI", &cpu_number);
"IPI", &irq_stat);
WARN(err, "Could not request IPI %d as IRQ, err=%d\n",
i, err);
}


@ -52,11 +52,11 @@
* Supported by KVM
*/
#define PVM_ID_AA64PFR0_RESTRICT_UNSIGNED (\
FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), ID_AA64PFR0_EL1_ELx_64BIT_ONLY) | \
FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL1), ID_AA64PFR0_EL1_ELx_64BIT_ONLY) | \
FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL2), ID_AA64PFR0_EL1_ELx_64BIT_ONLY) | \
FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL3), ID_AA64PFR0_EL1_ELx_64BIT_ONLY) | \
FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_RAS), ID_AA64PFR0_EL1_RAS_IMP) \
SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL0, IMP) | \
SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL1, IMP) | \
SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL2, IMP) | \
SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL3, IMP) | \
SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, RAS, IMP) \
)
/*


@ -33,9 +33,9 @@ static void pvm_init_traps_aa64pfr0(struct kvm_vcpu *vcpu)
/* Protected KVM does not support AArch32 guests. */
BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0),
PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) != ID_AA64PFR0_EL1_ELx_64BIT_ONLY);
PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) != ID_AA64PFR0_EL1_EL0_IMP);
BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL1),
PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) != ID_AA64PFR0_EL1_ELx_64BIT_ONLY);
PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) != ID_AA64PFR0_EL1_EL1_IMP);
/*
* Linux guests assume support for floating-point and Advanced SIMD. Do


@ -276,7 +276,7 @@ static bool pvm_access_id_aarch32(struct kvm_vcpu *vcpu,
* of AArch32 feature id registers.
*/
BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL1),
PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) > ID_AA64PFR0_EL1_ELx_64BIT_ONLY);
PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) > ID_AA64PFR0_EL1_EL1_IMP);
return pvm_access_raz_wi(vcpu, p, r);
}


@ -14,7 +14,6 @@
#include <asm/kvm_emulate.h>
#include <kvm/arm_pmu.h>
#include <kvm/arm_vgic.h>
#include <asm/arm_pmuv3.h>
#define PERF_ATTR_CFG1_COUNTER_64BIT BIT(0)


@ -35,6 +35,17 @@ EXPORT_PER_CPU_SYMBOL(processors);
struct acpi_processor_errata errata __read_mostly;
EXPORT_SYMBOL_GPL(errata);
acpi_handle acpi_get_processor_handle(int cpu)
{
struct acpi_processor *pr;
pr = per_cpu(processors, cpu);
if (pr)
return pr->handle;
return NULL;
}
static int acpi_processor_errata_piix4(struct pci_dev *dev)
{
u8 value1 = 0;
@ -183,20 +194,44 @@ static void __init acpi_pcc_cpufreq_init(void) {}
#endif /* CONFIG_X86 */
/* Initialization */
#ifdef CONFIG_ACPI_HOTPLUG_CPU
static int acpi_processor_hotadd_init(struct acpi_processor *pr)
static DEFINE_PER_CPU(void *, processor_device_array);
static int acpi_processor_set_per_cpu(struct acpi_processor *pr,
struct acpi_device *device)
{
BUG_ON(pr->id >= nr_cpu_ids);
/*
* Buggy BIOS check.
* ACPI id of processors can be reported wrongly by the BIOS.
* Don't trust it blindly
*/
if (per_cpu(processor_device_array, pr->id) != NULL &&
per_cpu(processor_device_array, pr->id) != device) {
dev_warn(&device->dev,
"BIOS reported wrong ACPI id %d for the processor\n",
pr->id);
return -EINVAL;
}
/*
* processor_device_array is not cleared on errors to allow buggy BIOS
* checks.
*/
per_cpu(processor_device_array, pr->id) = device;
per_cpu(processors, pr->id) = pr;
return 0;
}
#ifdef CONFIG_ACPI_HOTPLUG_CPU
static int acpi_processor_hotadd_init(struct acpi_processor *pr,
struct acpi_device *device)
{
unsigned long long sta;
acpi_status status;
int ret;
if (invalid_phys_cpuid(pr->phys_id))
return -ENODEV;
status = acpi_evaluate_integer(pr->handle, "_STA", NULL, &sta);
if (ACPI_FAILURE(status) || !(sta & ACPI_STA_DEVICE_PRESENT))
return -ENODEV;
cpu_maps_update_begin();
cpus_write_lock();
@ -204,19 +239,26 @@ static int acpi_processor_hotadd_init(struct acpi_processor *pr)
if (ret)
goto out;
ret = arch_register_cpu(pr->id);
ret = acpi_processor_set_per_cpu(pr, device);
if (ret) {
acpi_unmap_cpu(pr->id);
goto out;
}
ret = arch_register_cpu(pr->id);
if (ret) {
/* Leave the processor device array in place to detect buggy BIOS */
per_cpu(processors, pr->id) = NULL;
acpi_unmap_cpu(pr->id);
goto out;
}
/*
* CPU got hot-added, but cpu_data is not initialized yet. Set a flag
* to delay cpu_idle/throttling initialization and do it when the CPU
* gets online for the first time.
* CPU got hot-added, but cpu_data is not initialized yet. Do
* cpu_idle/throttling initialization when the CPU gets online for
* the first time.
*/
pr_info("CPU%d has been hot-added\n", pr->id);
pr->flags.need_hotplug_init = 1;
out:
cpus_write_unlock();
@ -224,7 +266,8 @@ out:
return ret;
}
#else
static inline int acpi_processor_hotadd_init(struct acpi_processor *pr)
static inline int acpi_processor_hotadd_init(struct acpi_processor *pr,
struct acpi_device *device)
{
return -ENODEV;
}
@ -239,6 +282,7 @@ static int acpi_processor_get_info(struct acpi_device *device)
acpi_status status = AE_OK;
static int cpu0_initialized;
unsigned long long value;
int ret;
acpi_processor_errata();
@ -315,19 +359,19 @@ static int acpi_processor_get_info(struct acpi_device *device)
}
/*
* Extra Processor objects may be enumerated on MP systems with
* less than the max # of CPUs. They should be ignored _iff
* they are physically not present.
*
* NOTE: Even if the processor has a cpuid, it may not be present
* because cpuid <-> apicid mapping is persistent now.
* This code is not called unless we know the CPU is present and
* enabled. The two paths are:
* a) Initially present CPUs on architectures that do not defer
* their arch_register_cpu() calls until this point.
* b) Hotplugged CPUs (enabled bit in _STA has transitioned from not
* enabled to enabled)
*/
if (invalid_logical_cpuid(pr->id) || !cpu_present(pr->id)) {
int ret = acpi_processor_hotadd_init(pr);
if (ret)
return ret;
}
if (!get_cpu_device(pr->id))
ret = acpi_processor_hotadd_init(pr, device);
else
ret = acpi_processor_set_per_cpu(pr, device);
if (ret)
return ret;
/*
* On some boxes several processors use the same processor bus id.
@ -372,8 +416,6 @@ static int acpi_processor_get_info(struct acpi_device *device)
* (cpu_data(cpu)) values, like CPU feature flags, family, model, etc.
* Such things have to be put in and set up by the processor driver's .probe().
*/
static DEFINE_PER_CPU(void *, processor_device_array);
static int acpi_processor_add(struct acpi_device *device,
const struct acpi_device_id *id)
{
@ -400,39 +442,17 @@ static int acpi_processor_add(struct acpi_device *device,
result = acpi_processor_get_info(device);
if (result) /* Processor is not physically present or unavailable */
return 0;
BUG_ON(pr->id >= nr_cpu_ids);
/*
* Buggy BIOS check.
* ACPI id of processors can be reported wrongly by the BIOS.
* Don't trust it blindly
*/
if (per_cpu(processor_device_array, pr->id) != NULL &&
per_cpu(processor_device_array, pr->id) != device) {
dev_warn(&device->dev,
"BIOS reported wrong ACPI id %d for the processor\n",
pr->id);
/* Give up, but do not abort the namespace scan. */
goto err;
}
/*
* processor_device_array is not cleared on errors to allow buggy BIOS
* checks.
*/
per_cpu(processor_device_array, pr->id) = device;
per_cpu(processors, pr->id) = pr;
goto err_clear_driver_data;
dev = get_cpu_device(pr->id);
if (!dev) {
result = -ENODEV;
goto err;
goto err_clear_per_cpu;
}
result = acpi_bind_one(dev, device);
if (result)
goto err;
goto err_clear_per_cpu;
pr->dev = dev;
@ -443,10 +463,11 @@ static int acpi_processor_add(struct acpi_device *device,
dev_err(dev, "Processor driver could not be attached\n");
acpi_unbind_one(dev);
err:
free_cpumask_var(pr->throttling.shared_cpu_map);
device->driver_data = NULL;
err_clear_per_cpu:
per_cpu(processors, pr->id) = NULL;
err_clear_driver_data:
device->driver_data = NULL;
free_cpumask_var(pr->throttling.shared_cpu_map);
err_free_pr:
kfree(pr);
return result;
@ -454,7 +475,7 @@ static int acpi_processor_add(struct acpi_device *device,
#ifdef CONFIG_ACPI_HOTPLUG_CPU
/* Removal */
static void acpi_processor_remove(struct acpi_device *device)
static void acpi_processor_post_eject(struct acpi_device *device)
{
struct acpi_processor *pr;
@ -476,10 +497,6 @@ static void acpi_processor_remove(struct acpi_device *device)
device_release_driver(pr->dev);
acpi_unbind_one(pr->dev);
/* Clean up. */
per_cpu(processor_device_array, pr->id) = NULL;
per_cpu(processors, pr->id) = NULL;
cpu_maps_update_begin();
cpus_write_lock();
@ -487,6 +504,10 @@ static void acpi_processor_remove(struct acpi_device *device)
arch_unregister_cpu(pr->id);
acpi_unmap_cpu(pr->id);
/* Clean up. */
per_cpu(processor_device_array, pr->id) = NULL;
per_cpu(processors, pr->id) = NULL;
cpus_write_unlock();
cpu_maps_update_done();
@ -622,7 +643,7 @@ static struct acpi_scan_handler processor_handler = {
.ids = processor_device_ids,
.attach = acpi_processor_add,
#ifdef CONFIG_ACPI_HOTPLUG_CPU
.detach = acpi_processor_remove,
.post_eject = acpi_processor_post_eject,
#endif
.hotplug = {
.enabled = true,


@ -1,8 +1,10 @@
# SPDX-License-Identifier: GPL-2.0-only
obj-$(CONFIG_ACPI_AGDI) += agdi.o
obj-$(CONFIG_ACPI_IORT) += iort.o
obj-$(CONFIG_ACPI_GTDT) += gtdt.o
obj-$(CONFIG_ACPI_APMT) += apmt.o
obj-$(CONFIG_ACPI_FFH) += ffh.o
obj-$(CONFIG_ACPI_GTDT) += gtdt.o
obj-$(CONFIG_ACPI_IORT) += iort.o
obj-$(CONFIG_ACPI_PROCESSOR_IDLE) += cpuidle.o
obj-$(CONFIG_ARM_AMBA) += amba.o
obj-y += dma.o init.o
obj-y += thermal_cpufreq.o


@ -27,11 +27,7 @@ static const struct acpi_device_id amba_id_list[] = {
static void amba_register_dummy_clk(void)
{
static struct clk *amba_dummy_clk;
/* If clock already registered */
if (amba_dummy_clk)
return;
struct clk *amba_dummy_clk;
amba_dummy_clk = clk_register_fixed_rate(NULL, "apb_pclk", NULL, 0, 0);
clk_register_clkdev(amba_dummy_clk, "apb_pclk", NULL);


@ -10,9 +10,6 @@
#include <linux/cpuidle.h>
#include <linux/cpu_pm.h>
#include <linux/psci.h>
#ifdef CONFIG_ACPI_PROCESSOR_IDLE
#include <acpi/processor.h>
#define ARM64_LPI_IS_RETENTION_STATE(arch_flags) (!(arch_flags))
@ -71,4 +68,3 @@ __cpuidle int acpi_processor_ffh_lpi_enter(struct acpi_lpi_state *lpi)
return CPU_PM_CPU_IDLE_ENTER_PARAM_RCU(psci_cpu_suspend_enter,
lpi->index, state);
}
#endif

drivers/acpi/arm64/ffh.c (new file, 107 lines)

@ -0,0 +1,107 @@
// SPDX-License-Identifier: GPL-2.0-only
#include <linux/acpi.h>
#include <linux/arm-smccc.h>
#include <linux/slab.h>
/*
* Implements ARM64 specific callbacks to support ACPI FFH Operation Region as
* specified in https://developer.arm.com/docs/den0048/latest
*/
struct acpi_ffh_data {
struct acpi_ffh_info info;
void (*invoke_ffh_fn)(unsigned long a0, unsigned long a1,
unsigned long a2, unsigned long a3,
unsigned long a4, unsigned long a5,
unsigned long a6, unsigned long a7,
struct arm_smccc_res *args,
struct arm_smccc_quirk *res);
void (*invoke_ffh64_fn)(const struct arm_smccc_1_2_regs *args,
struct arm_smccc_1_2_regs *res);
};
int acpi_ffh_address_space_arch_setup(void *handler_ctxt, void **region_ctxt)
{
enum arm_smccc_conduit conduit;
struct acpi_ffh_data *ffh_ctxt;
if (arm_smccc_get_version() < ARM_SMCCC_VERSION_1_2)
return -EOPNOTSUPP;
conduit = arm_smccc_1_1_get_conduit();
if (conduit == SMCCC_CONDUIT_NONE) {
pr_err("%s: invalid SMCCC conduit\n", __func__);
return -EOPNOTSUPP;
}
ffh_ctxt = kzalloc(sizeof(*ffh_ctxt), GFP_KERNEL);
if (!ffh_ctxt)
return -ENOMEM;
if (conduit == SMCCC_CONDUIT_SMC) {
ffh_ctxt->invoke_ffh_fn = __arm_smccc_smc;
ffh_ctxt->invoke_ffh64_fn = arm_smccc_1_2_smc;
} else {
ffh_ctxt->invoke_ffh_fn = __arm_smccc_hvc;
ffh_ctxt->invoke_ffh64_fn = arm_smccc_1_2_hvc;
}
memcpy(ffh_ctxt, handler_ctxt, sizeof(ffh_ctxt->info));
*region_ctxt = ffh_ctxt;
return AE_OK;
}
static bool acpi_ffh_smccc_owner_allowed(u32 fid)
{
int owner = ARM_SMCCC_OWNER_NUM(fid);
if (owner == ARM_SMCCC_OWNER_STANDARD ||
owner == ARM_SMCCC_OWNER_SIP || owner == ARM_SMCCC_OWNER_OEM)
return true;
return false;
}
int acpi_ffh_address_space_arch_handler(acpi_integer *value, void *region_context)
{
int ret = 0;
struct acpi_ffh_data *ffh_ctxt = region_context;
if (ffh_ctxt->info.offset == 0) {
/* SMC/HVC 32bit call */
struct arm_smccc_res res;
u32 a[8] = { 0 }, *ptr = (u32 *)value;
if (!ARM_SMCCC_IS_FAST_CALL(*ptr) || ARM_SMCCC_IS_64(*ptr) ||
!acpi_ffh_smccc_owner_allowed(*ptr) ||
ffh_ctxt->info.length > 32) {
ret = AE_ERROR;
} else {
int idx, len = ffh_ctxt->info.length >> 2;
for (idx = 0; idx < len; idx++)
a[idx] = *(ptr + idx);
ffh_ctxt->invoke_ffh_fn(a[0], a[1], a[2], a[3], a[4],
a[5], a[6], a[7], &res, NULL);
memcpy(value, &res, sizeof(res));
}
} else if (ffh_ctxt->info.offset == 1) {
/* SMC/HVC 64bit call */
struct arm_smccc_1_2_regs *r = (struct arm_smccc_1_2_regs *)value;
if (!ARM_SMCCC_IS_FAST_CALL(r->a0) || !ARM_SMCCC_IS_64(r->a0) ||
!acpi_ffh_smccc_owner_allowed(r->a0) ||
ffh_ctxt->info.length > sizeof(*r)) {
ret = AE_ERROR;
} else {
ffh_ctxt->invoke_ffh64_fn(r, r);
memcpy(value, r, ffh_ctxt->info.length);
}
} else {
ret = AE_ERROR;
}
return ret;
}
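A worked example of the function-ID filter above (illustrative): a fast
32-bit SiP call such as 0x82000001 passes all three checks.

    /* fid = 0x82000001:
     *   bit 31 = 1      -> fast call  (ARM_SMCCC_IS_FAST_CALL)
     *   bit 30 = 0      -> SMC32      (!ARM_SMCCC_IS_64)
     *   bits 29:24 = 2  -> ARM_SMCCC_OWNER_SIP, which is allowed */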


@ -90,7 +90,8 @@ static int map_gicc_mpidr(struct acpi_subtable_header *entry,
struct acpi_madt_generic_interrupt *gicc =
container_of(entry, struct acpi_madt_generic_interrupt, header);
if (!acpi_gicc_is_usable(gicc))
if (!(gicc->flags &
(ACPI_MADT_ENABLED | ACPI_MADT_GICC_ONLINE_CAPABLE)))
return -ENODEV;
/* device_declaration means Device object in DSDT, in the


@ -33,7 +33,6 @@ MODULE_AUTHOR("Paul Diefenbaugh");
MODULE_DESCRIPTION("ACPI Processor Driver");
MODULE_LICENSE("GPL");
static int acpi_processor_start(struct device *dev);
static int acpi_processor_stop(struct device *dev);
static const struct acpi_device_id processor_device_ids[] = {
@ -47,7 +46,6 @@ static struct device_driver acpi_processor_driver = {
.name = "processor",
.bus = &cpu_subsys,
.acpi_match_table = processor_device_ids,
.probe = acpi_processor_start,
.remove = acpi_processor_stop,
};
@ -115,12 +113,9 @@ static int acpi_soft_cpu_online(unsigned int cpu)
* CPU got physically hotplugged and onlined for the first time:
* Initialize missing things.
*/
if (pr->flags.need_hotplug_init) {
if (!pr->flags.previously_online) {
int ret;
pr_info("Will online and init hotplugged CPU: %d\n",
pr->id);
pr->flags.need_hotplug_init = 0;
ret = __acpi_processor_start(device);
WARN(ret, "Failed to start CPU: %d\n", pr->id);
} else {
@ -167,9 +162,6 @@ static int __acpi_processor_start(struct acpi_device *device)
if (!pr)
return -ENODEV;
if (pr->flags.need_hotplug_init)
return 0;
result = acpi_cppc_processor_probe(pr);
if (result && !IS_ENABLED(CONFIG_ACPI_CPU_FREQ_PSS))
dev_dbg(&device->dev, "CPPC data invalid or not present\n");
@ -185,32 +177,21 @@ static int __acpi_processor_start(struct acpi_device *device)
status = acpi_install_notify_handler(device->handle, ACPI_DEVICE_NOTIFY,
acpi_processor_notify, device);
if (ACPI_SUCCESS(status))
return 0;
if (!ACPI_SUCCESS(status)) {
result = -ENODEV;
goto err_thermal_exit;
}
pr->flags.previously_online = 1;
result = -ENODEV;
return 0;
err_thermal_exit:
acpi_processor_thermal_exit(pr, device);
err_power_exit:
acpi_processor_power_exit(pr);
return result;
}
static int acpi_processor_start(struct device *dev)
{
struct acpi_device *device = ACPI_COMPANION(dev);
int ret;
if (!device)
return -ENODEV;
/* Protect against concurrent CPU hotplug operations */
cpu_hotplug_disable();
ret = __acpi_processor_start(device);
cpu_hotplug_enable();
return ret;
}
static int acpi_processor_stop(struct device *dev)
{
struct acpi_device *device = ACPI_COMPANION(dev);
@ -279,9 +260,9 @@ static int __init acpi_processor_driver_init(void)
if (result < 0)
return result;
result = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
"acpi/cpu-drv:online",
acpi_soft_cpu_online, NULL);
result = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
"acpi/cpu-drv:online",
acpi_soft_cpu_online, NULL);
if (result < 0)
goto err;
hp_online = result;
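Note the semantic difference hiding in this hunk: cpuhp_setup_state() (unlike the _nocalls variant it replaces) also invokes the startup callback for every CPU that is already online, so CPUs brought up before the driver loads get the same initialisation path. A hedged sketch of the registration/teardown pairing:

#include <linux/cpuhotplug.h>
#include <linux/init.h>

static int example_cpu_online(unsigned int cpu)
{
	/* per-CPU setup; also called for already-online CPUs at setup time */
	return 0;
}

static int hp_online;

static int __init example_init(void)
{
	int result = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
				       "example/cpu-drv:online",
				       example_cpu_online, NULL);
	if (result < 0)
		return result;

	hp_online = result;	/* dynamic state number, kept for teardown */
	return 0;
}

static void __exit example_exit(void)
{
	cpuhp_remove_state(hp_online);	/* teardown callback is NULL above */
}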


@ -243,13 +243,17 @@ static int acpi_scan_try_to_offline(struct acpi_device *device)
return 0;
}
static int acpi_scan_check_and_detach(struct acpi_device *adev, void *check)
#define ACPI_SCAN_CHECK_FLAG_STATUS BIT(0)
#define ACPI_SCAN_CHECK_FLAG_EJECT BIT(1)
static int acpi_scan_check_and_detach(struct acpi_device *adev, void *p)
{
struct acpi_scan_handler *handler = adev->handler;
uintptr_t flags = (uintptr_t)p;
acpi_dev_for_each_child_reverse(adev, acpi_scan_check_and_detach, check);
acpi_dev_for_each_child_reverse(adev, acpi_scan_check_and_detach, p);
if (check) {
if (flags & ACPI_SCAN_CHECK_FLAG_STATUS) {
acpi_bus_get_status(adev);
/*
* Skip devices that are still there and take the enabled
@ -269,8 +273,6 @@ static int acpi_scan_check_and_detach(struct acpi_device *adev, void *check)
if (handler) {
if (handler->detach)
handler->detach(adev);
adev->handler = NULL;
} else {
device_release_driver(&adev->dev);
}
@ -280,6 +282,28 @@ static int acpi_scan_check_and_detach(struct acpi_device *adev, void *check)
*/
acpi_device_set_power(adev, ACPI_STATE_D3_COLD);
adev->flags.initialized = false;
/* For eject this is deferred to acpi_bus_post_eject() */
if (!(flags & ACPI_SCAN_CHECK_FLAG_EJECT)) {
adev->handler = NULL;
acpi_device_clear_enumerated(adev);
}
return 0;
}
static int acpi_bus_post_eject(struct acpi_device *adev, void *not_used)
{
struct acpi_scan_handler *handler = adev->handler;
acpi_dev_for_each_child_reverse(adev, acpi_bus_post_eject, NULL);
if (handler) {
if (handler->post_eject)
handler->post_eject(adev);
adev->handler = NULL;
}
acpi_device_clear_enumerated(adev);
return 0;
@ -287,7 +311,9 @@ static int acpi_scan_check_and_detach(struct acpi_device *adev, void *check)
static void acpi_scan_check_subtree(struct acpi_device *adev)
{
acpi_scan_check_and_detach(adev, (void *)true);
uintptr_t flags = ACPI_SCAN_CHECK_FLAG_STATUS;
acpi_scan_check_and_detach(adev, (void *)flags);
}
static int acpi_scan_hot_remove(struct acpi_device *device)
@ -295,6 +321,7 @@ static int acpi_scan_hot_remove(struct acpi_device *device)
acpi_handle handle = device->handle;
unsigned long long sta;
acpi_status status;
uintptr_t flags = ACPI_SCAN_CHECK_FLAG_EJECT;
if (device->handler && device->handler->hotplug.demand_offline) {
if (!acpi_scan_is_offline(device, true))
@ -307,7 +334,7 @@ static int acpi_scan_hot_remove(struct acpi_device *device)
acpi_handle_debug(handle, "Ejecting\n");
acpi_bus_trim(device);
acpi_scan_check_and_detach(device, (void *)flags);
acpi_evaluate_lck(handle, 0);
/*
@ -330,6 +357,8 @@ static int acpi_scan_hot_remove(struct acpi_device *device)
} else if (sta & ACPI_STA_DEVICE_ENABLED) {
acpi_handle_warn(handle,
"Eject incomplete - status 0x%llx\n", sta);
} else {
acpi_bus_post_eject(device, NULL);
}
return 0;
@ -2596,7 +2625,9 @@ EXPORT_SYMBOL(acpi_bus_scan);
*/
void acpi_bus_trim(struct acpi_device *adev)
{
acpi_scan_check_and_detach(adev, NULL);
uintptr_t flags = 0;
acpi_scan_check_and_detach(adev, (void *)flags);
}
EXPORT_SYMBOL_GPL(acpi_bus_trim);
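With the flags split above, a plain acpi_bus_trim() detaches with neither flag, a rescan checks device status first, and only the eject path defers the final cleanup to acpi_bus_post_eject() once _STA confirms the device is gone. A hedged sketch of a scan handler using the new hook (field names per the struct acpi_scan_handler change further down; everything else here is illustrative):

#include <linux/acpi.h>

static int example_attach(struct acpi_device *adev,
			  const struct acpi_device_id *id)
{
	return 1;	/* claimed the device */
}

static void example_detach(struct acpi_device *adev)
{
	/* Stage 1: stop using the device; the eject may still fail. */
}

static void example_post_eject(struct acpi_device *adev)
{
	/*
	 * Stage 2: only reached after _STA shows the eject completed,
	 * so state that must survive a failed eject is released here.
	 */
}

static struct acpi_scan_handler example_scan_handler = {
	/* .ids and the hotplug profile are elided in this sketch */
	.attach     = example_attach,
	.detach     = example_detach,
	.post_eject = example_post_eject,
};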


@ -95,6 +95,7 @@ void unregister_cpu(struct cpu *cpu)
{
int logical_cpu = cpu->dev.id;
set_cpu_enabled(logical_cpu, false);
unregister_cpu_under_node(logical_cpu, cpu_to_node(logical_cpu));
device_unregister(&cpu->dev);
@ -273,6 +274,13 @@ static ssize_t print_cpus_offline(struct device *dev,
}
static DEVICE_ATTR(offline, 0444, print_cpus_offline, NULL);
static ssize_t print_cpus_enabled(struct device *dev,
struct device_attribute *attr, char *buf)
{
return sysfs_emit(buf, "%*pbl\n", cpumask_pr_args(cpu_enabled_mask));
}
static DEVICE_ATTR(enabled, 0444, print_cpus_enabled, NULL);
static ssize_t print_cpus_isolated(struct device *dev,
struct device_attribute *attr, char *buf)
{
@ -413,6 +421,7 @@ int register_cpu(struct cpu *cpu, int num)
register_cpu_under_node(num, cpu_to_node(num));
dev_pm_qos_expose_latency_limit(&cpu->dev,
PM_QOS_RESUME_LATENCY_NO_CONSTRAINT);
set_cpu_enabled(num, true);
return 0;
}
@ -494,6 +503,7 @@ static struct attribute *cpu_root_attrs[] = {
&cpu_attrs[2].attr.attr,
&dev_attr_kernel_max.attr,
&dev_attr_offline.attr,
&dev_attr_enabled.attr,
&dev_attr_isolated.attr,
#ifdef CONFIG_NO_HZ_FULL
&dev_attr_nohz_full.attr,
@ -558,7 +568,7 @@ static void __init cpu_dev_register_generic(void)
for_each_present_cpu(i) {
ret = arch_register_cpu(i);
if (ret)
if (ret && ret != -EPROBE_DEFER)
pr_warn("register_cpu %d failed (%d)\n", i, ret);
}
}
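The new mask is visible next to the existing online/offline/present files; e.g. a quick userspace read (path derived from the attribute name above):

#include <stdio.h>

int main(void)
{
	char buf[256];
	FILE *f = fopen("/sys/devices/system/cpu/enabled", "r");

	if (!f)
		return 1;
	if (fgets(buf, sizeof(buf), f))
		printf("enabled CPUs: %s", buf);	/* e.g. "0-3\n" */
	fclose(f);
	return 0;
}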


@ -7,6 +7,7 @@
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/irqchip/arm-gic.h>
#include <linux/kernel.h>
#include "irq-gic-common.h"
@ -45,7 +46,7 @@ void gic_enable_quirks(u32 iidr, const struct gic_quirk *quirks,
}
int gic_configure_irq(unsigned int irq, unsigned int type,
void __iomem *base, void (*sync_access)(void))
void __iomem *base)
{
u32 confmask = 0x2 << ((irq % 16) * 2);
u32 confoff = (irq / 16) * 4;
@ -84,14 +85,10 @@ int gic_configure_irq(unsigned int irq, unsigned int type,
raw_spin_unlock_irqrestore(&irq_controller_lock, flags);
if (sync_access)
sync_access();
return ret;
}
void gic_dist_config(void __iomem *base, int gic_irqs,
void (*sync_access)(void))
void gic_dist_config(void __iomem *base, int gic_irqs, u8 priority)
{
unsigned int i;
@ -106,7 +103,8 @@ void gic_dist_config(void __iomem *base, int gic_irqs,
* Set priority on all global interrupts.
*/
for (i = 32; i < gic_irqs; i += 4)
writel_relaxed(GICD_INT_DEF_PRI_X4, base + GIC_DIST_PRI + i);
writel_relaxed(REPEAT_BYTE_U32(priority),
base + GIC_DIST_PRI + i);
/*
* Deactivate and disable all SPIs. Leave the PPI and SGIs
@ -118,12 +116,9 @@ void gic_dist_config(void __iomem *base, int gic_irqs,
writel_relaxed(GICD_INT_EN_CLR_X32,
base + GIC_DIST_ENABLE_CLEAR + i / 8);
}
if (sync_access)
sync_access();
}
void gic_cpu_config(void __iomem *base, int nr, void (*sync_access)(void))
void gic_cpu_config(void __iomem *base, int nr, u8 priority)
{
int i;
@ -142,9 +137,6 @@ void gic_cpu_config(void __iomem *base, int nr, void (*sync_access)(void))
* Set priority on PPI and SGI interrupts
*/
for (i = 0; i < nr; i += 4)
writel_relaxed(GICD_INT_DEF_PRI_X4,
writel_relaxed(REPEAT_BYTE_U32(priority),
base + GIC_DIST_PRI + i * 4 / 4);
if (sync_access)
sync_access();
}
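GICD_INT_DEF_PRI_X4 was the default priority byte pre-replicated four times; now that the priority is chosen at boot, the replication happens at the call site via REPEAT_BYTE_U32(). A sketch of what that expands to (REPEAT_BYTE() as in include/linux/kernel.h; the _U32 variant is assumed to be its 32-bit truncation):

#include <stdint.h>
#include <stdio.h>

/* Replicate one byte into every byte lane, cf. REPEAT_BYTE() in the kernel. */
#define REPEAT_BYTE(x)      ((~0ull / 0xff) * (x))
#define REPEAT_BYTE_U32(x)  ((uint32_t)REPEAT_BYTE(x))

int main(void)
{
	uint8_t prio = 0xa0;	/* one GIC priority byte */

	/* One 32-bit GICD_IPRIORITYRn write programs four interrupts. */
	printf("0x%08x\n", REPEAT_BYTE_U32(prio));	/* 0xa0a0a0a0 */
	return 0;
}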


@ -20,10 +20,9 @@ struct gic_quirk {
};
int gic_configure_irq(unsigned int irq, unsigned int type,
void __iomem *base, void (*sync_access)(void));
void gic_dist_config(void __iomem *base, int gic_irqs,
void (*sync_access)(void));
void gic_cpu_config(void __iomem *base, int nr, void (*sync_access)(void));
void __iomem *base);
void gic_dist_config(void __iomem *base, int gic_irqs, u8 priority);
void gic_cpu_config(void __iomem *base, int nr, u8 priority);
void gic_enable_quirks(u32 iidr, const struct gic_quirk *quirks,
void *data);
void gic_enable_of_quirks(const struct device_node *np,


@ -59,7 +59,7 @@ static u32 lpi_id_bits;
#define LPI_PROPBASE_SZ ALIGN(BIT(LPI_NRBITS), SZ_64K)
#define LPI_PENDBASE_SZ ALIGN(BIT(LPI_NRBITS) / 8, SZ_64K)
#define LPI_PROP_DEFAULT_PRIO GICD_INT_DEF_PRI
static u8 __ro_after_init lpi_prop_prio;
/*
* Collection structure - just an ID, and a redistributor address to
@ -1926,7 +1926,7 @@ static int its_vlpi_unmap(struct irq_data *d)
/* and restore the physical one */
irqd_clr_forwarded_to_vcpu(d);
its_send_mapti(its_dev, d->hwirq, event);
lpi_update_config(d, 0xff, (LPI_PROP_DEFAULT_PRIO |
lpi_update_config(d, 0xff, (lpi_prop_prio |
LPI_PROP_ENABLED |
LPI_PROP_GROUP1));
@ -2181,8 +2181,8 @@ static void its_lpi_free(unsigned long *bitmap, u32 base, u32 nr_ids)
static void gic_reset_prop_table(void *va)
{
/* Priority 0xa0, Group-1, disabled */
memset(va, LPI_PROP_DEFAULT_PRIO | LPI_PROP_GROUP1, LPI_PROPBASE_SZ);
/* Regular IRQ priority, Group-1, disabled */
memset(va, lpi_prop_prio | LPI_PROP_GROUP1, LPI_PROPBASE_SZ);
/* Make sure the GIC will observe the written configuration */
gic_flush_dcache_to_poc(va, LPI_PROPBASE_SZ);
@ -5650,7 +5650,7 @@ int __init its_lpi_memreserve_init(void)
}
int __init its_init(struct fwnode_handle *handle, struct rdists *rdists,
struct irq_domain *parent_domain)
struct irq_domain *parent_domain, u8 irq_prio)
{
struct device_node *of_node;
struct its_node *its;
@ -5660,6 +5660,7 @@ int __init its_init(struct fwnode_handle *handle, struct rdists *rdists,
gic_rdists = rdists;
lpi_prop_prio = irq_prio;
its_parent = parent_domain;
of_node = to_of_node(handle);
if (of_node)


@ -12,6 +12,7 @@
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/kstrtox.h>
#include <linux/of.h>
#include <linux/of_address.h>
@ -24,6 +25,7 @@
#include <linux/irqchip.h>
#include <linux/irqchip/arm-gic-common.h>
#include <linux/irqchip/arm-gic-v3.h>
#include <linux/irqchip/arm-gic-v3-prio.h>
#include <linux/irqchip/irq-partition-percpu.h>
#include <linux/bitfield.h>
#include <linux/bits.h>
@ -36,7 +38,8 @@
#include "irq-gic-common.h"
#define GICD_INT_NMI_PRI (GICD_INT_DEF_PRI & ~0x80)
static u8 dist_prio_irq __ro_after_init = GICV3_PRIO_IRQ;
static u8 dist_prio_nmi __ro_after_init = GICV3_PRIO_NMI;
#define FLAGS_WORKAROUND_GICR_WAKER_MSM8996 (1ULL << 0)
#define FLAGS_WORKAROUND_CAVIUM_ERRATUM_38539 (1ULL << 1)
@ -44,6 +47,8 @@
#define GIC_IRQ_TYPE_PARTITION (GIC_IRQ_TYPE_LPI + 1)
static struct cpumask broken_rdists __read_mostly __maybe_unused;
struct redist_region {
void __iomem *redist_base;
phys_addr_t phys_base;
@ -108,29 +113,96 @@ static DEFINE_STATIC_KEY_TRUE(supports_deactivate_key);
*/
static DEFINE_STATIC_KEY_FALSE(supports_pseudo_nmis);
DEFINE_STATIC_KEY_FALSE(gic_nonsecure_priorities);
EXPORT_SYMBOL(gic_nonsecure_priorities);
static u32 gic_get_pribits(void)
{
u32 pribits;
/*
* When the Non-secure world has access to group 0 interrupts (as a
* consequence of SCR_EL3.FIQ == 0), reading the ICC_RPR_EL1 register will
* return the Distributor's view of the interrupt priority.
*
* When GIC security is enabled (GICD_CTLR.DS == 0), the interrupt priority
* written by software is moved to the Non-secure range by the Distributor.
*
* If both are true (which is when gic_nonsecure_priorities gets enabled),
* we need to shift down the priority programmed by software to match it
* against the value returned by ICC_RPR_EL1.
*/
#define GICD_INT_RPR_PRI(priority) \
({ \
u32 __priority = (priority); \
if (static_branch_unlikely(&gic_nonsecure_priorities)) \
__priority = 0x80 | (__priority >> 1); \
\
__priority; \
})
pribits = gic_read_ctlr();
pribits &= ICC_CTLR_EL1_PRI_BITS_MASK;
pribits >>= ICC_CTLR_EL1_PRI_BITS_SHIFT;
pribits++;
return pribits;
}
static bool gic_has_group0(void)
{
u32 val;
u32 old_pmr;
old_pmr = gic_read_pmr();
/*
* Let's find out if Group0 is under control of EL3 or not by
* setting the highest possible, non-zero priority in PMR.
*
* If SCR_EL3.FIQ is set, the priority gets shifted down in
* order for the CPU interface to set bit 7, and keep the
* actual priority in the non-secure range. In the process, it
* loses the least significant bit and the actual priority
* becomes 0x80. Reading it back returns 0, indicating that
* we don't have access to Group0.
*/
gic_write_pmr(BIT(8 - gic_get_pribits()));
val = gic_read_pmr();
gic_write_pmr(old_pmr);
return val != 0;
}
static inline bool gic_dist_security_disabled(void)
{
return readl_relaxed(gic_data.dist_base + GICD_CTLR) & GICD_CTLR_DS;
}
static bool cpus_have_security_disabled __ro_after_init;
static bool cpus_have_group0 __ro_after_init;
static void __init gic_prio_init(void)
{
cpus_have_security_disabled = gic_dist_security_disabled();
cpus_have_group0 = gic_has_group0();
/*
* How priority values are used by the GIC depends on two things:
* the security state of the GIC (controlled by the GICD_CTRL.DS bit)
* and if Group 0 interrupts can be delivered to Linux in the non-secure
* world as FIQs (controlled by the SCR_EL3.FIQ bit). These affect the
* way priorities are presented in ICC_PMR_EL1 and in the distributor:
*
* GICD_CTRL.DS | SCR_EL3.FIQ | ICC_PMR_EL1 | Distributor
* -------------------------------------------------------
* 1 | - | unchanged | unchanged
* -------------------------------------------------------
* 0 | 1 | non-secure | non-secure
* -------------------------------------------------------
* 0 | 0 | unchanged | non-secure
*
* In the non-secure view reads and writes are modified:
*
* - A value written is right-shifted by one and the MSB is set,
* forcing the priority into the non-secure range.
*
* - A value read is left-shifted by one.
*
* In the first two cases, where ICC_PMR_EL1 and the interrupt priority
* are both either modified or unchanged, we can use the same set of
* priorities.
*
* In the last case, where only the interrupt priorities are modified to
* be in the non-secure range, we program the non-secure values into
* the distributor to match the PMR values we want.
*/
if (cpus_have_group0 && !cpus_have_security_disabled) {
dist_prio_irq = __gicv3_prio_to_ns(dist_prio_irq);
dist_prio_nmi = __gicv3_prio_to_ns(dist_prio_nmi);
}
pr_info("GICD_CTRL.DS=%d, SCR_EL3.FIQ=%d\n",
cpus_have_security_disabled,
!cpus_have_group0);
}
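The last row of the table is the interesting one: the distributor mangles non-secure writes, so to end up with a given PMR-space priority Linux pre-shifts the value before programming it. A round-trip sketch (the macro body is assumed to match __gicv3_prio_to_ns() from the new arm-gic-v3-prio.h header; 0xc0 is just an example priority with the MSB set):

#include <stdint.h>
#include <stdio.h>

/* Assumed equivalent of __gicv3_prio_to_ns() in arm-gic-v3-prio.h. */
#define gicv3_prio_to_ns(p)	(0xff & ((p) << 1))

/* What the GIC does to a priority written from the non-secure world. */
static uint8_t dist_ns_write(uint8_t v)
{
	return 0x80 | (v >> 1);
}

int main(void)
{
	uint8_t pmr_prio = 0xc0;	/* desired PMR-space value */
	uint8_t programmed = gicv3_prio_to_ns(pmr_prio);

	/* The hardware transform undoes the pre-shift: prints 0xc0 0x80 0xc0 */
	printf("0x%02x 0x%02x 0x%02x\n",
	       pmr_prio, programmed, dist_ns_write(programmed));
	return 0;
}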
/* rdist_nmi_refs[n] == number of cpus having the rdist interrupt n set as NMI */
static refcount_t *rdist_nmi_refs;
@ -556,7 +628,7 @@ static int gic_irq_nmi_setup(struct irq_data *d)
desc->handle_irq = handle_fasteoi_nmi;
}
gic_irq_set_prio(d, GICD_INT_NMI_PRI);
gic_irq_set_prio(d, dist_prio_nmi);
return 0;
}
@ -591,7 +663,7 @@ static void gic_irq_nmi_teardown(struct irq_data *d)
desc->handle_irq = handle_fasteoi_irq;
}
gic_irq_set_prio(d, GICD_INT_DEF_PRI);
gic_irq_set_prio(d, dist_prio_irq);
}
static bool gic_arm64_erratum_2941627_needed(struct irq_data *d)
@ -670,7 +742,7 @@ static int gic_set_type(struct irq_data *d, unsigned int type)
offset = convert_offset_index(d, GICD_ICFGR, &index);
ret = gic_configure_irq(index, type, base + offset, NULL);
ret = gic_configure_irq(index, type, base + offset);
if (ret && (range == PPI_RANGE || range == EPPI_RANGE)) {
/* Misconfigured PPIs are usually not fatal */
pr_warn("GIC: PPI INTID%ld is secure or misconfigured\n", irq);
@ -753,7 +825,7 @@ static bool gic_rpr_is_nmi_prio(void)
if (!gic_supports_nmi())
return false;
return unlikely(gic_read_rpr() == GICD_INT_RPR_PRI(GICD_INT_NMI_PRI));
return unlikely(gic_read_rpr() == GICV3_PRIO_NMI);
}
static bool gic_irqnr_is_special(u32 irqnr)
@ -866,44 +938,6 @@ static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs
__gic_handle_irq_from_irqson(regs);
}
static u32 gic_get_pribits(void)
{
u32 pribits;
pribits = gic_read_ctlr();
pribits &= ICC_CTLR_EL1_PRI_BITS_MASK;
pribits >>= ICC_CTLR_EL1_PRI_BITS_SHIFT;
pribits++;
return pribits;
}
static bool gic_has_group0(void)
{
u32 val;
u32 old_pmr;
old_pmr = gic_read_pmr();
/*
* Let's find out if Group0 is under control of EL3 or not by
* setting the highest possible, non-zero priority in PMR.
*
* If SCR_EL3.FIQ is set, the priority gets shifted down in
* order for the CPU interface to set bit 7, and keep the
* actual priority in the non-secure range. In the process, it
* loses the least significant bit and the actual priority
* becomes 0x80. Reading it back returns 0, indicating that
* we don't have access to Group0.
*/
gic_write_pmr(BIT(8 - gic_get_pribits()));
val = gic_read_pmr();
gic_write_pmr(old_pmr);
return val != 0;
}
static void __init gic_dist_init(void)
{
unsigned int i;
@ -937,10 +971,11 @@ static void __init gic_dist_init(void)
writel_relaxed(0, base + GICD_ICFGRnE + i / 4);
for (i = 0; i < GIC_ESPI_NR; i += 4)
writel_relaxed(GICD_INT_DEF_PRI_X4, base + GICD_IPRIORITYRnE + i);
writel_relaxed(REPEAT_BYTE_U32(dist_prio_irq),
base + GICD_IPRIORITYRnE + i);
/* Now do the common stuff */
gic_dist_config(base, GIC_LINE_NR, NULL);
gic_dist_config(base, GIC_LINE_NR, dist_prio_irq);
val = GICD_CTLR_ARE_NS | GICD_CTLR_ENABLE_G1A | GICD_CTLR_ENABLE_G1;
if (gic_data.rdists.gicd_typer2 & GICD_TYPER2_nASSGIcap) {
@ -1119,12 +1154,6 @@ static void gic_update_rdist_properties(void)
gic_data.rdists.has_vpend_valid_dirty ? "Valid+Dirty " : "");
}
/* Check whether it's single security state view */
static inline bool gic_dist_security_disabled(void)
{
return readl_relaxed(gic_data.dist_base + GICD_CTLR) & GICD_CTLR_DS;
}
static void gic_cpu_sys_reg_init(void)
{
int i, cpu = smp_processor_id();
@ -1152,18 +1181,14 @@ static void gic_cpu_sys_reg_init(void)
write_gicreg(DEFAULT_PMR_VALUE, ICC_PMR_EL1);
} else if (gic_supports_nmi()) {
/*
* Mismatch configuration with boot CPU, the system is likely
* to die as interrupt masking will not work properly on all
* CPUs
* Check that all CPUs use the same priority space.
*
* The boot CPU calls this function before enabling NMI support,
* and as a result we'll never see this warning in the boot path
* for that CPU.
* If there's a mismatch with the boot CPU, the system is
* likely to die as interrupt masking will not work properly on
* all CPUs.
*/
if (static_branch_unlikely(&gic_nonsecure_priorities))
WARN_ON(!group0 || gic_dist_security_disabled());
else
WARN_ON(group0 && !gic_dist_security_disabled());
WARN_ON(group0 != cpus_have_group0);
WARN_ON(gic_dist_security_disabled() != cpus_have_security_disabled);
}
/*
@ -1282,7 +1307,8 @@ static void gic_cpu_init(void)
for (i = 0; i < gic_data.ppi_nr + SGI_NR; i += 32)
writel_relaxed(~0, rbase + GICR_IGROUPR0 + i / 8);
gic_cpu_config(rbase, gic_data.ppi_nr + SGI_NR, gic_redist_wait_for_rwp);
gic_cpu_config(rbase, gic_data.ppi_nr + SGI_NR, dist_prio_irq);
gic_redist_wait_for_rwp();
/* initialise system registers */
gic_cpu_sys_reg_init();
@ -1293,6 +1319,18 @@ static void gic_cpu_init(void)
#define MPIDR_TO_SGI_RS(mpidr) (MPIDR_RS(mpidr) << ICC_SGI1R_RS_SHIFT)
#define MPIDR_TO_SGI_CLUSTER_ID(mpidr) ((mpidr) & ~0xFUL)
/*
* gic_starting_cpu() is called after the last point where cpuhp is allowed
* to fail, so pre-check for problems earlier.
*/
static int gic_check_rdist(unsigned int cpu)
{
if (cpumask_test_cpu(cpu, &broken_rdists))
return -EINVAL;
return 0;
}
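The reason for the extra state is that callbacks at CPUHP_BP_PREPARE_DYN run on a control CPU while the online operation can still be rolled back, whereas CPUHP_AP_IRQ_GIC_STARTING runs on the incoming CPU past the point of no return. A sketch of the veto pattern (cpu_ok_to_online() is a hypothetical predicate standing in for the broken_rdists test above):

#include <linux/cpuhotplug.h>
#include <linux/errno.h>
#include <linux/init.h>

/* Returning < 0 from a prepare-stage callback makes cpu_up() fail cleanly. */
static int example_check(unsigned int cpu)
{
	return cpu_ok_to_online(cpu) ? 0 : -EINVAL;
}

static void __init example_setup(void)
{
	cpuhp_setup_state_nocalls(CPUHP_BP_PREPARE_DYN,
				  "example:checkcpu",
				  example_check, NULL);
}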
static int gic_starting_cpu(unsigned int cpu)
{
gic_cpu_init();
@ -1384,6 +1422,10 @@ static void __init gic_smp_init(void)
};
int base_sgi;
cpuhp_setup_state_nocalls(CPUHP_BP_PREPARE_DYN,
"irqchip/arm/gicv3:checkrdist",
gic_check_rdist, NULL);
cpuhp_setup_state_nocalls(CPUHP_AP_IRQ_GIC_STARTING,
"irqchip/arm/gicv3:starting",
gic_starting_cpu, NULL);
@ -1948,36 +1990,6 @@ static void gic_enable_nmi_support(void)
pr_info("Pseudo-NMIs enabled using %s ICC_PMR_EL1 synchronisation\n",
gic_has_relaxed_pmr_sync() ? "relaxed" : "forced");
/*
* How priority values are used by the GIC depends on two things:
* the security state of the GIC (controlled by the GICD_CTRL.DS bit)
* and if Group 0 interrupts can be delivered to Linux in the non-secure
* world as FIQs (controlled by the SCR_EL3.FIQ bit). These affect the
* ICC_PMR_EL1 register and the priority that software assigns to
* interrupts:
*
* GICD_CTRL.DS | SCR_EL3.FIQ | ICC_PMR_EL1 | Group 1 priority
* -----------------------------------------------------------
* 1 | - | unchanged | unchanged
* -----------------------------------------------------------
* 0 | 1 | non-secure | non-secure
* -----------------------------------------------------------
* 0 | 0 | unchanged | non-secure
*
* where non-secure means that the value is right-shifted by one and the
* MSB bit set, to make it fit in the non-secure priority range.
*
* In the first two cases, where ICC_PMR_EL1 and the interrupt priority
* are both either modified or unchanged, we can use the same set of
* priorities.
*
* In the last case, where only the interrupt priorities are modified to
* be in the non-secure range, we use a different PMR value to mask IRQs
* and the rest of the values that we use remain unchanged.
*/
if (gic_has_group0() && !gic_dist_security_disabled())
static_branch_enable(&gic_nonsecure_priorities);
static_branch_enable(&supports_pseudo_nmis);
if (static_branch_likely(&supports_deactivate_key))
@ -2058,6 +2070,7 @@ static int __init gic_init_bases(phys_addr_t dist_phys_base,
gic_update_rdist_properties();
gic_prio_init();
gic_dist_init();
gic_cpu_init();
gic_enable_nmi_support();
@ -2065,7 +2078,7 @@ static int __init gic_init_bases(phys_addr_t dist_phys_base,
gic_cpu_pm_init();
if (gic_dist_supports_lpis()) {
its_init(handle, &gic_data.rdists, gic_data.domain);
its_init(handle, &gic_data.rdists, gic_data.domain, dist_prio_irq);
its_cpu_init();
its_lpi_memreserve_init();
} else {
@ -2365,9 +2378,25 @@ gic_acpi_parse_madt_gicc(union acpi_subtable_headers *header,
u32 size = reg == GIC_PIDR2_ARCH_GICv4 ? SZ_64K * 4 : SZ_64K * 2;
void __iomem *redist_base;
if (!acpi_gicc_is_usable(gicc))
/* Neither enabled nor online-capable means it doesn't exist; skip it */
if (!(gicc->flags & (ACPI_MADT_ENABLED | ACPI_MADT_GICC_ONLINE_CAPABLE)))
return 0;
/*
* Capable but disabled CPUs can be brought online later. What about
* the redistributor? ACPI doesn't want to say!
* Virtual hotplug systems can use the MADT's "always-on" GICR entries.
* Otherwise, prevent such CPUs from being brought online.
*/
if (!(gicc->flags & ACPI_MADT_ENABLED)) {
int cpu = get_cpu_for_acpi_id(gicc->uid);
pr_warn("CPU %u's redistributor is inaccessible: this CPU can't be brought online\n", cpu);
if (cpu >= 0)
cpumask_set_cpu(cpu, &broken_rdists);
return 0;
}
redist_base = ioremap(gicc->gicr_base_address, size);
if (!redist_base)
return -ENOMEM;
@ -2413,21 +2442,15 @@ static int __init gic_acpi_match_gicc(union acpi_subtable_headers *header,
/*
* If GICC is enabled and has valid gicr base address, then it means
* GICR base is presented via GICC
* GICR base is presented via GICC. The redistributor is only known to
* be accessible if the GICC is marked as enabled. If this bit is not
* set, we'd need to add the redistributor at runtime, which isn't
* supported.
*/
if (acpi_gicc_is_usable(gicc) && gicc->gicr_base_address) {
if (gicc->flags & ACPI_MADT_ENABLED && gicc->gicr_base_address)
acpi_data.enabled_rdists++;
return 0;
}
/*
* It's perfectly valid for firmware to pass a disabled GICC entry; the
* driver should not treat that as an error, so skip the entry instead
* of failing the probe.
*/
if (!acpi_gicc_is_usable(gicc))
return 0;
return -ENODEV;
return 0;
}
static int __init gic_acpi_count_gicr_regions(void)
@ -2483,7 +2506,8 @@ static int __init gic_acpi_parse_virt_madt_gicc(union acpi_subtable_headers *hea
int maint_irq_mode;
static int first_madt = true;
if (!acpi_gicc_is_usable(gicc))
if (!(gicc->flags &
(ACPI_MADT_ENABLED | ACPI_MADT_GICC_ONLINE_CAPABLE)))
return 0;
maint_irq_mode = (gicc->flags & ACPI_MADT_VGIC_IRQ_MODE) ?


@ -303,7 +303,7 @@ static int gic_set_type(struct irq_data *d, unsigned int type)
type != IRQ_TYPE_EDGE_RISING)
return -EINVAL;
ret = gic_configure_irq(gicirq, type, base + GIC_DIST_CONFIG, NULL);
ret = gic_configure_irq(gicirq, type, base + GIC_DIST_CONFIG);
if (ret && gicirq < 32) {
/* Misconfigured PPIs are usually not fatal */
pr_warn("GIC: PPI%ld is secure or misconfigured\n", gicirq - 16);
@ -479,7 +479,7 @@ static void gic_dist_init(struct gic_chip_data *gic)
for (i = 32; i < gic_irqs; i += 4)
writel_relaxed(cpumask, base + GIC_DIST_TARGET + i * 4 / 4);
gic_dist_config(base, gic_irqs, NULL);
gic_dist_config(base, gic_irqs, GICD_INT_DEF_PRI);
writel_relaxed(GICD_ENABLE, base + GIC_DIST_CTRL);
}
@ -516,7 +516,7 @@ static int gic_cpu_init(struct gic_chip_data *gic)
gic_cpu_map[i] &= ~cpu_mask;
}
gic_cpu_config(dist_base, 32, NULL);
gic_cpu_config(dist_base, 32, GICD_INT_DEF_PRI);
writel_relaxed(GICC_INT_PRI_THRESHOLD, base + GIC_CPU_PRIMASK);
gic_cpu_if_up(gic);
@ -608,7 +608,7 @@ void gic_dist_restore(struct gic_chip_data *gic)
dist_base + GIC_DIST_CONFIG + i * 4);
for (i = 0; i < DIV_ROUND_UP(gic_irqs, 4); i++)
writel_relaxed(GICD_INT_DEF_PRI_X4,
writel_relaxed(REPEAT_BYTE_U32(GICD_INT_DEF_PRI),
dist_base + GIC_DIST_PRI + i * 4);
for (i = 0; i < DIV_ROUND_UP(gic_irqs, 4); i++)
@ -697,7 +697,7 @@ void gic_cpu_restore(struct gic_chip_data *gic)
writel_relaxed(ptr[i], dist_base + GIC_DIST_CONFIG + i * 4);
for (i = 0; i < DIV_ROUND_UP(32, 4); i++)
writel_relaxed(GICD_INT_DEF_PRI_X4,
writel_relaxed(REPEAT_BYTE_U32(GICD_INT_DEF_PRI),
dist_base + GIC_DIST_PRI + i * 4);
writel_relaxed(GICC_INT_PRI_THRESHOLD, cpu_base + GIC_CPU_PRIMASK);


@ -130,7 +130,7 @@ static int hip04_irq_set_type(struct irq_data *d, unsigned int type)
raw_spin_lock(&irq_controller_lock);
ret = gic_configure_irq(irq, type, base + GIC_DIST_CONFIG, NULL);
ret = gic_configure_irq(irq, type, base + GIC_DIST_CONFIG);
if (ret && irq < 32) {
/* Misconfigured PPIs are usually not fatal */
pr_warn("GIC: PPI%d is secure or misconfigured\n", irq - 16);
@ -260,7 +260,7 @@ static void __init hip04_irq_dist_init(struct hip04_irq_data *intc)
for (i = 32; i < nr_irqs; i += 2)
writel_relaxed(cpumask, base + GIC_DIST_TARGET + ((i * 2) & ~3));
gic_dist_config(base, nr_irqs, NULL);
gic_dist_config(base, nr_irqs, GICD_INT_DEF_PRI);
writel_relaxed(1, base + GIC_DIST_CTRL);
}
@ -287,7 +287,7 @@ static void hip04_irq_cpu_init(struct hip04_irq_data *intc)
if (i != cpu)
hip04_cpu_map[i] &= ~cpu_mask;
gic_cpu_config(dist_base, 32, NULL);
gic_cpu_config(dist_base, 32, GICD_INT_DEF_PRI);
writel_relaxed(0xf0, base + GIC_CPU_PRIMASK);
writel_relaxed(1, base + GIC_CPU_CTRL);


@ -56,6 +56,18 @@ config ARM_PMU
Say y if you want to use CPU performance monitors on ARM-based
systems.
config ARM_V6_PMU
depends on ARM_PMU && (CPU_V6 || CPU_V6K)
def_bool y
config ARM_V7_PMU
depends on ARM_PMU && CPU_V7
def_bool y
config ARM_XSCALE_PMU
depends on ARM_PMU && CPU_XSCALE
def_bool y
config RISCV_PMU
depends on RISCV
bool "RISC-V PMU framework"


@ -6,6 +6,9 @@ obj-$(CONFIG_ARM_DSU_PMU) += arm_dsu_pmu.o
obj-$(CONFIG_ARM_PMU) += arm_pmu.o arm_pmu_platform.o
obj-$(CONFIG_ARM_PMU_ACPI) += arm_pmu_acpi.o
obj-$(CONFIG_ARM_PMUV3) += arm_pmuv3.o
obj-$(CONFIG_ARM_V6_PMU) += arm_v6_pmu.o
obj-$(CONFIG_ARM_V7_PMU) += arm_v7_pmu.o
obj-$(CONFIG_ARM_XSCALE_PMU) += arm_xscale_pmu.o
obj-$(CONFIG_ARM_SMMU_V3_PMU) += arm_smmuv3_pmu.o
obj-$(CONFIG_FSL_IMX8_DDR_PMU) += fsl_imx8_ddr_perf.o
obj-$(CONFIG_FSL_IMX9_DDR_PMU) += fsl_imx9_ddr_perf.o


@ -1561,4 +1561,5 @@ module_init(arm_ccn_init);
module_exit(arm_ccn_exit);
MODULE_AUTHOR("Pawel Moll <pawel.moll@arm.com>");
MODULE_DESCRIPTION("ARM CCN (Cache Coherent Network) Performance Monitor Driver");
MODULE_LICENSE("GPL v2");


@ -174,9 +174,8 @@
#define CMN_CONFIG_WP_COMBINE GENMASK_ULL(30, 27)
#define CMN_CONFIG_WP_DEV_SEL GENMASK_ULL(50, 48)
#define CMN_CONFIG_WP_CHN_SEL GENMASK_ULL(55, 51)
/* Note that we don't yet support the tertiary match group on newer IPs */
#define CMN_CONFIG_WP_GRP BIT_ULL(56)
#define CMN_CONFIG_WP_EXCLUSIVE BIT_ULL(57)
#define CMN_CONFIG_WP_GRP GENMASK_ULL(57, 56)
#define CMN_CONFIG_WP_EXCLUSIVE BIT_ULL(58)
#define CMN_CONFIG1_WP_VAL GENMASK_ULL(63, 0)
#define CMN_CONFIG2_WP_MASK GENMASK_ULL(63, 0)
@ -590,6 +589,13 @@ struct arm_cmn_hw_event {
s8 dtc_idx[CMN_MAX_DTCS];
u8 num_dns;
u8 dtm_offset;
/*
* WP config registers are divided into UP and DOWN events. We only
* need to keep track of one of them.
*/
DECLARE_BITMAP(wp_idx, CMN_MAX_XPS);
bool wide_sel;
enum cmn_filter_select filter_sel;
};
@ -617,6 +623,17 @@ static unsigned int arm_cmn_get_index(u64 x[], unsigned int pos)
return (x[pos / 32] >> ((pos % 32) * 2)) & 3;
}
static void arm_cmn_set_wp_idx(unsigned long *wp_idx, unsigned int pos, bool val)
{
if (val)
set_bit(pos, wp_idx);
}
static unsigned int arm_cmn_get_wp_idx(unsigned long *wp_idx, unsigned int pos)
{
return test_bit(pos, wp_idx);
}
struct arm_cmn_event_attr {
struct device_attribute attr;
enum cmn_model model;
@ -1336,12 +1353,37 @@ static const struct attribute_group *arm_cmn_attr_groups[] = {
NULL
};
static int arm_cmn_wp_idx(struct perf_event *event)
static int arm_cmn_find_free_wp_idx(struct arm_cmn_dtm *dtm,
struct perf_event *event)
{
return CMN_EVENT_EVENTID(event) + CMN_EVENT_WP_GRP(event);
int wp_idx = CMN_EVENT_EVENTID(event);
if (dtm->wp_event[wp_idx] >= 0)
if (dtm->wp_event[++wp_idx] >= 0)
return -ENOSPC;
return wp_idx;
}
static u32 arm_cmn_wp_config(struct perf_event *event)
static int arm_cmn_get_assigned_wp_idx(struct perf_event *event,
struct arm_cmn_hw_event *hw,
unsigned int pos)
{
return CMN_EVENT_EVENTID(event) + arm_cmn_get_wp_idx(hw->wp_idx, pos);
}
static void arm_cmn_claim_wp_idx(struct arm_cmn_dtm *dtm,
struct perf_event *event,
unsigned int dtc, int wp_idx,
unsigned int pos)
{
struct arm_cmn_hw_event *hw = to_cmn_hw(event);
dtm->wp_event[wp_idx] = hw->dtc_idx[dtc];
arm_cmn_set_wp_idx(hw->wp_idx, pos, wp_idx - CMN_EVENT_EVENTID(event));
}
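Each DTM has four watchpoints and the event's direction picks the primary slot (by this series' numbering, taken from CMN_EVENT_EVENTID(): 0 for up, 2 for down); the adjacent odd slot is the fallback, and one bit per node position in hw->wp_idx records which of the pair was claimed. A compact sketch of the allocation rule:

/* dtm_wp_event[i] < 0 means watchpoint slot i is free. */
static int find_free_wp(const int dtm_wp_event[4], int primary)
{
	if (dtm_wp_event[primary] < 0)
		return primary;		/* preferred slot: offset bit stays 0 */
	if (dtm_wp_event[primary + 1] < 0)
		return primary + 1;	/* fallback slot: offset bit set to 1 */
	return -1;			/* both busy; the driver returns -ENOSPC */
}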
static u32 arm_cmn_wp_config(struct perf_event *event, int wp_idx)
{
u32 config;
u32 dev = CMN_EVENT_WP_DEV_SEL(event);
@ -1351,6 +1393,10 @@ static u32 arm_cmn_wp_config(struct perf_event *event)
u32 combine = CMN_EVENT_WP_COMBINE(event);
bool is_cmn600 = to_cmn(event->pmu)->part == PART_CMN600;
/* CMN-600 supports only primary and secondary matching groups */
if (is_cmn600)
grp &= 1;
config = FIELD_PREP(CMN_DTM_WPn_CONFIG_WP_DEV_SEL, dev) |
FIELD_PREP(CMN_DTM_WPn_CONFIG_WP_CHN_SEL, chn) |
FIELD_PREP(CMN_DTM_WPn_CONFIG_WP_GRP, grp) |
@ -1358,7 +1404,9 @@ static u32 arm_cmn_wp_config(struct perf_event *event)
if (exc)
config |= is_cmn600 ? CMN600_WPn_CONFIG_WP_EXCLUSIVE :
CMN_DTM_WPn_CONFIG_WP_EXCLUSIVE;
if (combine && !grp)
/* wp_combine is available only on WP0 and WP2 */
if (combine && !(wp_idx & 0x1))
config |= is_cmn600 ? CMN600_WPn_CONFIG_WP_COMBINE :
CMN_DTM_WPn_CONFIG_WP_COMBINE;
return config;
@ -1520,12 +1568,12 @@ static void arm_cmn_event_start(struct perf_event *event, int flags)
writeq_relaxed(CMN_CC_INIT, cmn->dtc[i].base + CMN_DT_PMCCNTR);
cmn->dtc[i].cc_active = true;
} else if (type == CMN_TYPE_WP) {
int wp_idx = arm_cmn_wp_idx(event);
u64 val = CMN_EVENT_WP_VAL(event);
u64 mask = CMN_EVENT_WP_MASK(event);
for_each_hw_dn(hw, dn, i) {
void __iomem *base = dn->pmu_base + CMN_DTM_OFFSET(hw->dtm_offset);
int wp_idx = arm_cmn_get_assigned_wp_idx(event, hw, i);
writeq_relaxed(val, base + CMN_DTM_WPn_VAL(wp_idx));
writeq_relaxed(mask, base + CMN_DTM_WPn_MASK(wp_idx));
@ -1550,10 +1598,9 @@ static void arm_cmn_event_stop(struct perf_event *event, int flags)
i = hw->dtc_idx[0];
cmn->dtc[i].cc_active = false;
} else if (type == CMN_TYPE_WP) {
int wp_idx = arm_cmn_wp_idx(event);
for_each_hw_dn(hw, dn, i) {
void __iomem *base = dn->pmu_base + CMN_DTM_OFFSET(hw->dtm_offset);
int wp_idx = arm_cmn_get_assigned_wp_idx(event, hw, i);
writeq_relaxed(0, base + CMN_DTM_WPn_MASK(wp_idx));
writeq_relaxed(~0ULL, base + CMN_DTM_WPn_VAL(wp_idx));
@ -1571,10 +1618,23 @@ struct arm_cmn_val {
u8 dtm_count[CMN_MAX_DTMS];
u8 occupid[CMN_MAX_DTMS][SEL_MAX];
u8 wp[CMN_MAX_DTMS][4];
u8 wp_combine[CMN_MAX_DTMS][2];
int dtc_count[CMN_MAX_DTCS];
bool cycles;
};
static int arm_cmn_val_find_free_wp_config(struct perf_event *event,
struct arm_cmn_val *val, int dtm)
{
int wp_idx = CMN_EVENT_EVENTID(event);
if (val->wp[dtm][wp_idx])
if (val->wp[dtm][++wp_idx])
return -ENOSPC;
return wp_idx;
}
static void arm_cmn_val_add_event(struct arm_cmn *cmn, struct arm_cmn_val *val,
struct perf_event *event)
{
@ -1606,8 +1666,9 @@ static void arm_cmn_val_add_event(struct arm_cmn *cmn, struct arm_cmn_val *val,
if (type != CMN_TYPE_WP)
continue;
wp_idx = arm_cmn_wp_idx(event);
val->wp[dtm][wp_idx] = CMN_EVENT_WP_COMBINE(event) + 1;
wp_idx = arm_cmn_val_find_free_wp_config(event, val, dtm);
val->wp[dtm][wp_idx] = 1;
val->wp_combine[dtm][wp_idx >> 1] += !!CMN_EVENT_WP_COMBINE(event);
}
}
@ -1631,6 +1692,7 @@ static int arm_cmn_validate_group(struct arm_cmn *cmn, struct perf_event *event)
return -ENOMEM;
arm_cmn_val_add_event(cmn, val, leader);
for_each_sibling_event(sibling, leader)
arm_cmn_val_add_event(cmn, val, sibling);
@ -1645,7 +1707,7 @@ static int arm_cmn_validate_group(struct arm_cmn *cmn, struct perf_event *event)
goto done;
for_each_hw_dn(hw, dn, i) {
int wp_idx, wp_cmb, dtm = dn->dtm, sel = hw->filter_sel;
int wp_idx, dtm = dn->dtm, sel = hw->filter_sel;
if (val->dtm_count[dtm] == CMN_DTM_NUM_COUNTERS)
goto done;
@ -1657,12 +1719,12 @@ static int arm_cmn_validate_group(struct arm_cmn *cmn, struct perf_event *event)
if (type != CMN_TYPE_WP)
continue;
wp_idx = arm_cmn_wp_idx(event);
if (val->wp[dtm][wp_idx])
wp_idx = arm_cmn_val_find_free_wp_config(event, val, dtm);
if (wp_idx < 0)
goto done;
wp_cmb = val->wp[dtm][wp_idx ^ 1];
if (wp_cmb && wp_cmb != CMN_EVENT_WP_COMBINE(event) + 1)
if (wp_idx & 1 &&
val->wp_combine[dtm][wp_idx >> 1] != !!CMN_EVENT_WP_COMBINE(event))
goto done;
}
@ -1773,8 +1835,11 @@ static void arm_cmn_event_clear(struct arm_cmn *cmn, struct perf_event *event,
struct arm_cmn_dtm *dtm = &cmn->dtms[hw->dn[i].dtm] + hw->dtm_offset;
unsigned int dtm_idx = arm_cmn_get_index(hw->dtm_idx, i);
if (type == CMN_TYPE_WP)
dtm->wp_event[arm_cmn_wp_idx(event)] = -1;
if (type == CMN_TYPE_WP) {
int wp_idx = arm_cmn_get_assigned_wp_idx(event, hw, i);
dtm->wp_event[wp_idx] = -1;
}
if (hw->filter_sel > SEL_NONE)
hw->dn[i].occupid[hw->filter_sel].count--;
@ -1783,6 +1848,7 @@ static void arm_cmn_event_clear(struct arm_cmn *cmn, struct perf_event *event,
writel_relaxed(dtm->pmu_config_low, dtm->base + CMN_DTM_PMU_CONFIG);
}
memset(hw->dtm_idx, 0, sizeof(hw->dtm_idx));
memset(hw->wp_idx, 0, sizeof(hw->wp_idx));
for_each_hw_dtc_idx(hw, j, idx)
cmn->dtc[j].counters[idx] = NULL;
@ -1836,19 +1902,23 @@ static int arm_cmn_event_add(struct perf_event *event, int flags)
if (type == CMN_TYPE_XP) {
input_sel = CMN__PMEVCNT0_INPUT_SEL_XP + dtm_idx;
} else if (type == CMN_TYPE_WP) {
int tmp, wp_idx = arm_cmn_wp_idx(event);
u32 cfg = arm_cmn_wp_config(event);
int tmp, wp_idx;
u32 cfg;
if (dtm->wp_event[wp_idx] >= 0)
wp_idx = arm_cmn_find_free_wp_idx(dtm, event);
if (wp_idx < 0)
goto free_dtms;
cfg = arm_cmn_wp_config(event, wp_idx);
tmp = dtm->wp_event[wp_idx ^ 1];
if (tmp >= 0 && CMN_EVENT_WP_COMBINE(event) !=
CMN_EVENT_WP_COMBINE(cmn->dtc[d].counters[tmp]))
goto free_dtms;
input_sel = CMN__PMEVCNT0_INPUT_SEL_WP + wp_idx;
dtm->wp_event[wp_idx] = hw->dtc_idx[d];
arm_cmn_claim_wp_idx(dtm, event, d, wp_idx, i);
writel_relaxed(cfg, dtm->base + CMN_DTM_WPn_CONFIG(wp_idx));
} else {
struct arm_cmn_nodeid nid = arm_cmn_nid(cmn, dn->id);


@ -269,4 +269,5 @@ static void __exit ampere_cspmu_exit(void)
module_init(ampere_cspmu_init);
module_exit(ampere_cspmu_exit);
MODULE_DESCRIPTION("Ampere SoC Performance Monitor Driver");
MODULE_LICENSE("GPL");


@ -1427,4 +1427,5 @@ EXPORT_SYMBOL_GPL(arm_cspmu_impl_unregister);
module_init(arm_cspmu_init);
module_exit(arm_cspmu_exit);
MODULE_DESCRIPTION("ARM CoreSight Architecture Performance Monitor Driver");
MODULE_LICENSE("GPL v2");


@ -417,4 +417,5 @@ static void __exit nvidia_cspmu_exit(void)
module_init(nvidia_cspmu_init);
module_exit(nvidia_cspmu_exit);
MODULE_DESCRIPTION("NVIDIA Coresight Architecture Performance Monitor Driver");
MODULE_LICENSE("GPL v2");


@ -25,8 +25,6 @@
#include <linux/smp.h>
#include <linux/nmi.h>
#include <asm/arm_pmuv3.h>
/* ARMv8 Cortex-A53 specific event types. */
#define ARMV8_A53_PERFCTR_PREF_LINEFILL 0xC2
@ -338,6 +336,11 @@ static bool armv8pmu_event_want_user_access(struct perf_event *event)
return ATTR_CFG_GET_FLD(&event->attr, rdpmc);
}
static u32 armv8pmu_event_get_threshold(struct perf_event_attr *attr)
{
return ATTR_CFG_GET_FLD(attr, threshold);
}
static u8 armv8pmu_event_threshold_control(struct perf_event_attr *attr)
{
u8 th_compare = ATTR_CFG_GET_FLD(attr, threshold_compare);
@ -941,7 +944,8 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
unsigned long evtype = hwc->config_base & ARMV8_PMU_EVTYPE_EVENT;
/* Always prefer to place a cycle counter into the cycle counter. */
if (evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) {
if ((evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) &&
!armv8pmu_event_get_threshold(&event->attr)) {
if (!test_and_set_bit(ARMV8_IDX_CYCLE_COUNTER, cpuc->used_mask))
return ARMV8_IDX_CYCLE_COUNTER;
else if (armv8pmu_event_is_64bit(event) &&
@ -1033,13 +1037,13 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
* If FEAT_PMUv3_TH isn't implemented, then THWIDTH (threshold_max) will
* be 0 and will also trigger this check, preventing it from being used.
*/
th = ATTR_CFG_GET_FLD(attr, threshold);
th = armv8pmu_event_get_threshold(attr);
if (th > threshold_max(cpu_pmu)) {
pr_debug("PMU event threshold exceeds max value\n");
return -EINVAL;
}
if (IS_ENABLED(CONFIG_ARM64) && th) {
if (th) {
config_base |= FIELD_PREP(ARMV8_PMU_EVTYPE_TH, th);
config_base |= FIELD_PREP(ARMV8_PMU_EVTYPE_TC,
armv8pmu_event_threshold_control(attr));
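For completeness, a non-zero threshold ends up bit-packed into the event-type value next to its comparison control; a sketch of the shape of that packing (field positions below are purely illustrative, the kernel uses FIELD_PREP() with the ARMV8_PMU_EVTYPE_TH/TC masks):

#include <stdint.h>

/* Illustrative positions only; not the architectural PMEVTYPER layout. */
#define EX_EVTYPE_TH(v)	((uint64_t)(v) << 32)	/* threshold value */
#define EX_EVTYPE_TC(v)	((uint64_t)(v) << 44)	/* threshold control */

static uint64_t pack_threshold(uint64_t config_base, uint16_t th, uint8_t tc)
{
	if (th) {	/* th == 0 keeps thresholding disabled */
		config_base |= EX_EVTYPE_TH(th);
		config_base |= EX_EVTYPE_TC(tc);
	}
	return config_base;
}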
@ -1340,14 +1344,20 @@ PMUV3_INIT_SIMPLE(armv9_cortex_a520)
PMUV3_INIT_SIMPLE(armv9_cortex_a710)
PMUV3_INIT_SIMPLE(armv9_cortex_a715)
PMUV3_INIT_SIMPLE(armv9_cortex_a720)
PMUV3_INIT_SIMPLE(armv9_cortex_a725)
PMUV3_INIT_SIMPLE(armv8_cortex_x1)
PMUV3_INIT_SIMPLE(armv9_cortex_x2)
PMUV3_INIT_SIMPLE(armv9_cortex_x3)
PMUV3_INIT_SIMPLE(armv9_cortex_x4)
PMUV3_INIT_SIMPLE(armv9_cortex_x925)
PMUV3_INIT_SIMPLE(armv8_neoverse_e1)
PMUV3_INIT_SIMPLE(armv8_neoverse_n1)
PMUV3_INIT_SIMPLE(armv9_neoverse_n2)
PMUV3_INIT_SIMPLE(armv9_neoverse_n3)
PMUV3_INIT_SIMPLE(armv8_neoverse_v1)
PMUV3_INIT_SIMPLE(armv8_neoverse_v2)
PMUV3_INIT_SIMPLE(armv8_neoverse_v3)
PMUV3_INIT_SIMPLE(armv8_neoverse_v3ae)
PMUV3_INIT_SIMPLE(armv8_nvidia_carmel)
PMUV3_INIT_SIMPLE(armv8_nvidia_denver)
@ -1379,14 +1389,20 @@ static const struct of_device_id armv8_pmu_of_device_ids[] = {
{.compatible = "arm,cortex-a710-pmu", .data = armv9_cortex_a710_pmu_init},
{.compatible = "arm,cortex-a715-pmu", .data = armv9_cortex_a715_pmu_init},
{.compatible = "arm,cortex-a720-pmu", .data = armv9_cortex_a720_pmu_init},
{.compatible = "arm,cortex-a725-pmu", .data = armv9_cortex_a725_pmu_init},
{.compatible = "arm,cortex-x1-pmu", .data = armv8_cortex_x1_pmu_init},
{.compatible = "arm,cortex-x2-pmu", .data = armv9_cortex_x2_pmu_init},
{.compatible = "arm,cortex-x3-pmu", .data = armv9_cortex_x3_pmu_init},
{.compatible = "arm,cortex-x4-pmu", .data = armv9_cortex_x4_pmu_init},
{.compatible = "arm,cortex-x925-pmu", .data = armv9_cortex_x925_pmu_init},
{.compatible = "arm,neoverse-e1-pmu", .data = armv8_neoverse_e1_pmu_init},
{.compatible = "arm,neoverse-n1-pmu", .data = armv8_neoverse_n1_pmu_init},
{.compatible = "arm,neoverse-n2-pmu", .data = armv9_neoverse_n2_pmu_init},
{.compatible = "arm,neoverse-n3-pmu", .data = armv9_neoverse_n3_pmu_init},
{.compatible = "arm,neoverse-v1-pmu", .data = armv8_neoverse_v1_pmu_init},
{.compatible = "arm,neoverse-v2-pmu", .data = armv8_neoverse_v2_pmu_init},
{.compatible = "arm,neoverse-v3-pmu", .data = armv8_neoverse_v3_pmu_init},
{.compatible = "arm,neoverse-v3ae-pmu", .data = armv8_neoverse_v3ae_pmu_init},
{.compatible = "cavium,thunder-pmu", .data = armv8_cavium_thunder_pmu_init},
{.compatible = "brcm,vulcan-pmu", .data = armv8_brcm_vulcan_pmu_init},
{.compatible = "nvidia,carmel-pmu", .data = armv8_nvidia_carmel_pmu_init},


@ -31,8 +31,6 @@
* enable the interrupt.
*/
#if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_V6K)
#include <asm/cputype.h>
#include <asm/irq_regs.h>
@ -403,13 +401,6 @@ static int armv6_1136_pmu_init(struct arm_pmu *cpu_pmu)
return 0;
}
static int armv6_1156_pmu_init(struct arm_pmu *cpu_pmu)
{
armv6pmu_init(cpu_pmu);
cpu_pmu->name = "armv6_1156";
return 0;
}
static int armv6_1176_pmu_init(struct arm_pmu *cpu_pmu)
{
armv6pmu_init(cpu_pmu);
@ -423,17 +414,9 @@ static const struct of_device_id armv6_pmu_of_device_ids[] = {
{ /* sentinel value */ }
};
static const struct pmu_probe_info armv6_pmu_probe_table[] = {
ARM_PMU_PROBE(ARM_CPU_PART_ARM1136, armv6_1136_pmu_init),
ARM_PMU_PROBE(ARM_CPU_PART_ARM1156, armv6_1156_pmu_init),
ARM_PMU_PROBE(ARM_CPU_PART_ARM1176, armv6_1176_pmu_init),
{ /* sentinel value */ }
};
static int armv6_pmu_device_probe(struct platform_device *pdev)
{
return arm_pmu_device_probe(pdev, armv6_pmu_of_device_ids,
armv6_pmu_probe_table);
return arm_pmu_device_probe(pdev, armv6_pmu_of_device_ids, NULL);
}
static struct platform_driver armv6_pmu_driver = {
@ -445,4 +428,3 @@ static struct platform_driver armv6_pmu_driver = {
};
builtin_platform_driver(armv6_pmu_driver);
#endif /* CONFIG_CPU_V6 || CONFIG_CPU_V6K */


@ -17,8 +17,6 @@
* counter and all 4 performance counters together can be reset separately.
*/
#ifdef CONFIG_CPU_V7
#include <asm/cp15.h>
#include <asm/cputype.h>
#include <asm/irq_regs.h>
@ -1979,17 +1977,9 @@ static const struct of_device_id armv7_pmu_of_device_ids[] = {
{},
};
static const struct pmu_probe_info armv7_pmu_probe_table[] = {
ARM_PMU_PROBE(ARM_CPU_PART_CORTEX_A8, armv7_a8_pmu_init),
ARM_PMU_PROBE(ARM_CPU_PART_CORTEX_A9, armv7_a9_pmu_init),
{ /* sentinel value */ }
};
static int armv7_pmu_device_probe(struct platform_device *pdev)
{
return arm_pmu_device_probe(pdev, armv7_pmu_of_device_ids,
armv7_pmu_probe_table);
return arm_pmu_device_probe(pdev, armv7_pmu_of_device_ids, NULL);
}
static struct platform_driver armv7_pmu_driver = {
@ -2002,4 +1992,3 @@ static struct platform_driver armv7_pmu_driver = {
};
builtin_platform_driver(armv7_pmu_driver);
#endif /* CONFIG_CPU_V7 */


@ -13,8 +13,6 @@
* PMU structures.
*/
#ifdef CONFIG_CPU_XSCALE
#include <asm/cputype.h>
#include <asm/irq_regs.h>
@ -745,4 +743,3 @@ static struct platform_driver xscale_pmu_driver = {
};
builtin_platform_driver(xscale_pmu_driver);
#endif /* CONFIG_CPU_XSCALE */


@ -972,6 +972,7 @@ static __exit void cxl_pmu_exit(void)
cpuhp_remove_multi_state(cxl_pmu_cpuhp_state_num);
}
MODULE_DESCRIPTION("CXL Performance Monitor Driver");
MODULE_LICENSE("GPL");
MODULE_IMPORT_NS(CXL);
module_init(cxl_pmu_init);


@ -850,4 +850,5 @@ static struct platform_driver imx_ddr_pmu_driver = {
};
module_platform_driver(imx_ddr_pmu_driver);
MODULE_DESCRIPTION("Freescale i.MX8 DDR Performance Monitor Driver");
MODULE_LICENSE("GPL v2");


@ -11,14 +11,24 @@
#include <linux/perf_event.h>
/* Performance monitor configuration */
#define PMCFG1 0x00
#define PMCFG1_RD_TRANS_FILT_EN BIT(31)
#define PMCFG1_WR_TRANS_FILT_EN BIT(30)
#define PMCFG1_RD_BT_FILT_EN BIT(29)
#define PMCFG1_ID_MASK GENMASK(17, 0)
#define PMCFG1 0x00
#define MX93_PMCFG1_RD_TRANS_FILT_EN BIT(31)
#define MX93_PMCFG1_WR_TRANS_FILT_EN BIT(30)
#define MX93_PMCFG1_RD_BT_FILT_EN BIT(29)
#define MX93_PMCFG1_ID_MASK GENMASK(17, 0)
#define PMCFG2 0x04
#define PMCFG2_ID GENMASK(17, 0)
#define MX95_PMCFG1_WR_BEAT_FILT_EN BIT(31)
#define MX95_PMCFG1_RD_BEAT_FILT_EN BIT(30)
#define PMCFG2 0x04
#define MX93_PMCFG2_ID GENMASK(17, 0)
#define PMCFG3 0x08
#define PMCFG4 0x0C
#define PMCFG5 0x10
#define PMCFG6 0x14
#define MX95_PMCFG_ID_MASK GENMASK(9, 0)
#define MX95_PMCFG_ID GENMASK(25, 16)
/* Global control register affects all counters and takes priority over local control registers */
#define PMGC0 0x40
@ -41,6 +51,10 @@
#define NUM_COUNTERS 11
#define CYCLES_COUNTER 0
#define CYCLES_EVENT_ID 0
#define CONFIG_EVENT_MASK GENMASK(7, 0)
#define CONFIG_COUNTER_MASK GENMASK(23, 16)
#define to_ddr_pmu(p) container_of(p, struct ddr_pmu, pmu)
@ -71,8 +85,23 @@ static const struct imx_ddr_devtype_data imx93_devtype_data = {
.identifier = "imx93",
};
static const struct imx_ddr_devtype_data imx95_devtype_data = {
.identifier = "imx95",
};
static inline bool is_imx93(struct ddr_pmu *pmu)
{
return pmu->devtype_data == &imx93_devtype_data;
}
static inline bool is_imx95(struct ddr_pmu *pmu)
{
return pmu->devtype_data == &imx95_devtype_data;
}
static const struct of_device_id imx_ddr_pmu_dt_ids[] = {
{.compatible = "fsl,imx93-ddr-pmu", .data = &imx93_devtype_data},
{ .compatible = "fsl,imx93-ddr-pmu", .data = &imx93_devtype_data },
{ .compatible = "fsl,imx95-ddr-pmu", .data = &imx95_devtype_data },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, imx_ddr_pmu_dt_ids);
@ -118,21 +147,40 @@ static const struct attribute_group ddr_perf_cpumask_attr_group = {
.attrs = ddr_perf_cpumask_attrs,
};
struct imx9_pmu_events_attr {
struct device_attribute attr;
u64 id;
const void *devtype_data;
};
static ssize_t ddr_pmu_event_show(struct device *dev,
struct device_attribute *attr, char *page)
{
struct perf_pmu_events_attr *pmu_attr;
struct imx9_pmu_events_attr *pmu_attr;
pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr);
pmu_attr = container_of(attr, struct imx9_pmu_events_attr, attr);
return sysfs_emit(page, "event=0x%02llx\n", pmu_attr->id);
}
#define IMX9_DDR_PMU_EVENT_ATTR(_name, _id) \
(&((struct perf_pmu_events_attr[]) { \
#define COUNTER_OFFSET_IN_EVENT 8
#define ID(counter, id) ((counter << COUNTER_OFFSET_IN_EVENT) | id)
#define DDR_PMU_EVENT_ATTR_COMM(_name, _id, _data) \
(&((struct imx9_pmu_events_attr[]) { \
{ .attr = __ATTR(_name, 0444, ddr_pmu_event_show, NULL),\
.id = _id, } \
.id = _id, \
.devtype_data = _data, } \
})[0].attr.attr)
#define IMX9_DDR_PMU_EVENT_ATTR(_name, _id) \
DDR_PMU_EVENT_ATTR_COMM(_name, _id, NULL)
#define IMX93_DDR_PMU_EVENT_ATTR(_name, _id) \
DDR_PMU_EVENT_ATTR_COMM(_name, _id, &imx93_devtype_data)
#define IMX95_DDR_PMU_EVENT_ATTR(_name, _id) \
DDR_PMU_EVENT_ATTR_COMM(_name, _id, &imx95_devtype_data)
static struct attribute *ddr_perf_events_attrs[] = {
/* counter0 cycles event */
IMX9_DDR_PMU_EVENT_ATTR(cycles, 0),
@ -159,90 +207,114 @@ static struct attribute *ddr_perf_events_attrs[] = {
IMX9_DDR_PMU_EVENT_ATTR(ddrc_pm_29, 63),
/* counter1 specific events */
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_riq_0, 64),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_riq_1, 65),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_riq_2, 66),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_riq_3, 67),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_riq_4, 68),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_riq_5, 69),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_riq_6, 70),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_riq_7, 71),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_riq_0, ID(1, 64)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_riq_1, ID(1, 65)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_riq_2, ID(1, 66)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_riq_3, ID(1, 67)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_riq_4, ID(1, 68)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_riq_5, ID(1, 69)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_riq_6, ID(1, 70)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_riq_7, ID(1, 71)),
/* counter2 specific events */
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_wiq_0, 64),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_wiq_1, 65),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_wiq_2, 66),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_wiq_3, 67),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_wiq_4, 68),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_wiq_5, 69),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_wiq_6, 70),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_wiq_7, 71),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_empty, 72),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pm_rd_trans_filt, 73),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_wiq_0, ID(2, 64)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_wiq_1, ID(2, 65)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_wiq_2, ID(2, 66)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_wiq_3, ID(2, 67)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_wiq_4, ID(2, 68)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_wiq_5, ID(2, 69)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_wiq_6, ID(2, 70)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_ld_wiq_7, ID(2, 71)),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_empty, ID(2, 72)),
IMX93_DDR_PMU_EVENT_ATTR(eddrtq_pm_rd_trans_filt, ID(2, 73)), /* imx93 specific*/
IMX95_DDR_PMU_EVENT_ATTR(eddrtq_pm_wr_beat_filt, ID(2, 73)), /* imx95 specific*/
/* counter3 specific events */
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_collision_0, 64),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_collision_1, 65),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_collision_2, 66),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_collision_3, 67),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_collision_4, 68),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_collision_5, 69),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_collision_6, 70),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_collision_7, 71),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_full, 72),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pm_wr_trans_filt, 73),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_collision_0, ID(3, 64)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_collision_1, ID(3, 65)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_collision_2, ID(3, 66)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_collision_3, ID(3, 67)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_collision_4, ID(3, 68)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_collision_5, ID(3, 69)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_collision_6, ID(3, 70)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_collision_7, ID(3, 71)),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_full, ID(3, 72)),
IMX93_DDR_PMU_EVENT_ATTR(eddrtq_pm_wr_trans_filt, ID(3, 73)), /* imx93 specific*/
IMX95_DDR_PMU_EVENT_ATTR(eddrtq_pm_rd_beat_filt2, ID(3, 73)), /* imx95 specific*/
/* counter4 specific events */
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_open_0, 64),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_open_1, 65),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_open_2, 66),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_open_3, 67),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_open_4, 68),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_open_5, 69),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_open_6, 70),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_open_7, 71),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_ld_rdq2_rmw, 72),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pm_rd_beat_filt, 73),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_open_0, ID(4, 64)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_open_1, ID(4, 65)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_open_2, ID(4, 66)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_open_3, ID(4, 67)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_open_4, ID(4, 68)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_open_5, ID(4, 69)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_open_6, ID(4, 70)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_row_open_7, ID(4, 71)),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_ld_rdq2_rmw, ID(4, 72)),
IMX93_DDR_PMU_EVENT_ATTR(eddrtq_pm_rd_beat_filt, ID(4, 73)), /* imx93 specific*/
IMX95_DDR_PMU_EVENT_ATTR(eddrtq_pm_rd_beat_filt1, ID(4, 73)), /* imx95 specific*/
/* counter5 specific events */
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_valid_start_0, 64),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_valid_start_1, 65),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_valid_start_2, 66),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_valid_start_3, 67),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_valid_start_4, 68),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_valid_start_5, 69),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_valid_start_6, 70),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_valid_start_7, 71),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_ld_rdq1, 72),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_valid_start_0, ID(5, 64)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_valid_start_1, ID(5, 65)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_valid_start_2, ID(5, 66)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_valid_start_3, ID(5, 67)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_valid_start_4, ID(5, 68)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_valid_start_5, ID(5, 69)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_valid_start_6, ID(5, 70)),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_valid_start_7, ID(5, 71)),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_ld_rdq1, ID(5, 72)),
IMX95_DDR_PMU_EVENT_ATTR(eddrtq_pm_rd_beat_filt0, ID(5, 73)), /* imx95 specific*/
/* counter6 specific events */
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_valid_end_0, 64),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_ld_rdq2, 72),
IMX9_DDR_PMU_EVENT_ATTR(ddrc_qx_valid_end_0, ID(6, 64)),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_ld_rdq2, ID(6, 72)),
/* counter7 specific events */
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_1_2_full, 64),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_ld_wrq0, 65),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_1_2_full, ID(7, 64)),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_ld_wrq0, ID(7, 65)),
/* counter8 specific events */
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_bias_switched, 64),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_1_4_full, 65),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_bias_switched, ID(8, 64)),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_1_4_full, ID(8, 65)),
/* counter9 specific events */
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_ld_wrq1, 65),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_3_4_full, 66),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_ld_wrq1, ID(9, 65)),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_3_4_full, ID(9, 66)),
/* counter10 specific events */
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_misc_mrk, 65),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_ld_rdq0, 66),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_misc_mrk, ID(10, 65)),
IMX9_DDR_PMU_EVENT_ATTR(eddrtq_pmon_ld_rdq0, ID(10, 66)),
NULL,
};
static umode_t
ddr_perf_events_attrs_is_visible(struct kobject *kobj,
struct attribute *attr, int unused)
{
struct pmu *pmu = dev_get_drvdata(kobj_to_dev(kobj));
struct ddr_pmu *ddr_pmu = to_ddr_pmu(pmu);
struct imx9_pmu_events_attr *eattr;
eattr = container_of(attr, typeof(*eattr), attr.attr);
if (!eattr->devtype_data)
return attr->mode;
if (eattr->devtype_data != ddr_pmu->devtype_data)
return 0;
return attr->mode;
}
static const struct attribute_group ddr_perf_events_attr_group = {
.name = "events",
.attrs = ddr_perf_events_attrs,
.is_visible = ddr_perf_events_attrs_is_visible,
};
PMU_FORMAT_ATTR(event, "config:0-7");
PMU_FORMAT_ATTR(event, "config:0-7,16-23");
PMU_FORMAT_ATTR(counter, "config:8-15");
PMU_FORMAT_ATTR(axi_id, "config1:0-17");
PMU_FORMAT_ATTR(axi_mask, "config2:0-17");
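Decoded, the new encoding works like this: ID(counter, id) stores the fixed counter in bits 15:8 of the 16-bit event value shown in sysfs, and the "config:0-7,16-23" format string scatters those two bytes into config bits 7:0 (event) and 23:16 (counter), leaving config bits 15:8 free for the user-selectable counter field. A sketch of the round trip:

#include <stdint.h>
#include <stdio.h>

#define CONFIG_EVENT_MASK	0x000000ffull	/* GENMASK(7, 0)   */
#define CONFIG_COUNTER_MASK	0x00ff0000ull	/* GENMASK(23, 16) */

/* Mirrors the driver's ID() macro: counter in the upper byte. */
#define ID(counter, id)		(((counter) << 8) | (id))

int main(void)
{
	uint16_t attr_val = ID(2, 73);	/* sysfs shows "event=0x249" */

	/* "config:0-7,16-23" splits the value's two bytes into config. */
	uint64_t config = (attr_val & 0xff) |
			  ((uint64_t)(attr_val >> 8) << 16);

	printf("event=%llu counter=%llu\n",
	       (unsigned long long)(config & CONFIG_EVENT_MASK),
	       (unsigned long long)((config & CONFIG_COUNTER_MASK) >> 16));
	return 0;	/* prints: event=73 counter=2 */
}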
@ -339,8 +411,10 @@ static void ddr_perf_counter_local_config(struct ddr_pmu *pmu, int config,
int counter, bool enable)
{
u32 ctrl_a;
int event;
ctrl_a = readl_relaxed(pmu->base + PMLCA(counter));
event = FIELD_GET(CONFIG_EVENT_MASK, config);
if (enable) {
ctrl_a |= PMLCA_FC;
@ -352,7 +426,7 @@ static void ddr_perf_counter_local_config(struct ddr_pmu *pmu, int config,
ctrl_a &= ~PMLCA_FC;
ctrl_a |= PMLCA_CE;
ctrl_a &= ~FIELD_PREP(PMLCA_EVENT, 0x7F);
ctrl_a |= FIELD_PREP(PMLCA_EVENT, (config & 0x000000FF));
ctrl_a |= FIELD_PREP(PMLCA_EVENT, event);
writel(ctrl_a, pmu->base + PMLCA(counter));
} else {
/* Freeze counter. */
@ -361,39 +435,79 @@ static void ddr_perf_counter_local_config(struct ddr_pmu *pmu, int config,
}
}
static void ddr_perf_monitor_config(struct ddr_pmu *pmu, int cfg, int cfg1, int cfg2)
static void imx93_ddr_perf_monitor_config(struct ddr_pmu *pmu, int event,
int counter, int axi_id, int axi_mask)
{
u32 pmcfg1, pmcfg2;
int event, counter;
event = cfg & 0x000000FF;
counter = (cfg & 0x0000FF00) >> 8;
u32 mask[] = { MX93_PMCFG1_RD_TRANS_FILT_EN,
MX93_PMCFG1_WR_TRANS_FILT_EN,
MX93_PMCFG1_RD_BT_FILT_EN };
pmcfg1 = readl_relaxed(pmu->base + PMCFG1);
if (counter == 2 && event == 73)
pmcfg1 |= PMCFG1_RD_TRANS_FILT_EN;
else if (counter == 2 && event != 73)
pmcfg1 &= ~PMCFG1_RD_TRANS_FILT_EN;
if (counter >= 2 && counter <= 4)
pmcfg1 = event == 73 ? pmcfg1 | mask[counter - 2] :
pmcfg1 & ~mask[counter - 2];
if (counter == 3 && event == 73)
pmcfg1 |= PMCFG1_WR_TRANS_FILT_EN;
else if (counter == 3 && event != 73)
pmcfg1 &= ~PMCFG1_WR_TRANS_FILT_EN;
if (counter == 4 && event == 73)
pmcfg1 |= PMCFG1_RD_BT_FILT_EN;
else if (counter == 4 && event != 73)
pmcfg1 &= ~PMCFG1_RD_BT_FILT_EN;
pmcfg1 &= ~FIELD_PREP(PMCFG1_ID_MASK, 0x3FFFF);
pmcfg1 |= FIELD_PREP(PMCFG1_ID_MASK, cfg2);
writel(pmcfg1, pmu->base + PMCFG1);
pmcfg1 &= ~FIELD_PREP(MX93_PMCFG1_ID_MASK, 0x3FFFF);
pmcfg1 |= FIELD_PREP(MX93_PMCFG1_ID_MASK, axi_mask);
writel_relaxed(pmcfg1, pmu->base + PMCFG1);
pmcfg2 = readl_relaxed(pmu->base + PMCFG2);
pmcfg2 &= ~FIELD_PREP(PMCFG2_ID, 0x3FFFF);
pmcfg2 |= FIELD_PREP(PMCFG2_ID, cfg1);
writel(pmcfg2, pmu->base + PMCFG2);
pmcfg2 &= ~FIELD_PREP(MX93_PMCFG2_ID, 0x3FFFF);
pmcfg2 |= FIELD_PREP(MX93_PMCFG2_ID, axi_id);
writel_relaxed(pmcfg2, pmu->base + PMCFG2);
}
static void imx95_ddr_perf_monitor_config(struct ddr_pmu *pmu, int event,
int counter, int axi_id, int axi_mask)
{
u32 pmcfg1, pmcfg, offset = 0;
pmcfg1 = readl_relaxed(pmu->base + PMCFG1);
if (event == 73) {
switch (counter) {
case 2:
pmcfg1 |= MX95_PMCFG1_WR_BEAT_FILT_EN;
offset = PMCFG3;
break;
case 3:
pmcfg1 |= MX95_PMCFG1_RD_BEAT_FILT_EN;
offset = PMCFG4;
break;
case 4:
pmcfg1 |= MX95_PMCFG1_RD_BEAT_FILT_EN;
offset = PMCFG5;
break;
case 5:
pmcfg1 |= MX95_PMCFG1_RD_BEAT_FILT_EN;
offset = PMCFG6;
break;
}
} else {
switch (counter) {
case 2:
pmcfg1 &= ~MX95_PMCFG1_WR_BEAT_FILT_EN;
break;
case 3:
case 4:
case 5:
pmcfg1 &= ~MX95_PMCFG1_RD_BEAT_FILT_EN;
break;
}
}
writel_relaxed(pmcfg1, pmu->base + PMCFG1);
if (offset) {
pmcfg = readl_relaxed(pmu->base + offset);
pmcfg &= ~(FIELD_PREP(MX95_PMCFG_ID_MASK, 0x3FF) |
FIELD_PREP(MX95_PMCFG_ID, 0x3FF));
pmcfg |= (FIELD_PREP(MX95_PMCFG_ID_MASK, axi_mask) |
FIELD_PREP(MX95_PMCFG_ID, axi_id));
writel_relaxed(pmcfg, pmu->base + offset);
}
}
static void ddr_perf_event_update(struct perf_event *event)
@ -460,6 +574,28 @@ static void ddr_perf_event_start(struct perf_event *event, int flags)
hwc->state = 0;
}
static int ddr_perf_alloc_counter(struct ddr_pmu *pmu, int event, int counter)
{
int i;
if (event == CYCLES_EVENT_ID) {
// The cycles counter is dedicated to the cycle event.
if (pmu->events[CYCLES_COUNTER] == NULL)
return CYCLES_COUNTER;
} else if (counter != 0) {
// Counter-specific events use their dedicated counter.
if (pmu->events[counter] == NULL)
return counter;
} else {
// Auto-allocate a counter for reference events.
for (i = 1; i < NUM_COUNTERS; i++)
if (pmu->events[i] == NULL)
return i;
}
return -ENOENT;
}
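
The allocation policy, in words: the cycle event always claims the dedicated cycles counter; an event whose config carries a non-zero counter field claims exactly that counter; anything else is auto-allocated from the remaining counters. Illustrative calls, assuming an idle PMU (the non-cycle event IDs here are arbitrary):

  ddr_perf_alloc_counter(pmu, CYCLES_EVENT_ID, 0);  /* -> CYCLES_COUNTER */
  ddr_perf_alloc_counter(pmu, 73, 2);   /* claims counter 2, or -ENOENT if busy */
  ddr_perf_alloc_counter(pmu, 35, 0);   /* -> first free of counters 1..NUM_COUNTERS-1 */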
static int ddr_perf_event_add(struct perf_event *event, int flags)
{
struct ddr_pmu *pmu = to_ddr_pmu(event->pmu);
@@ -467,21 +603,33 @@ static int ddr_perf_event_add(struct perf_event *event, int flags)
int cfg = event->attr.config;
int cfg1 = event->attr.config1;
int cfg2 = event->attr.config2;
int counter;
int event_id, counter;
counter = (cfg & 0x0000FF00) >> 8;
event_id = FIELD_GET(CONFIG_EVENT_MASK, cfg);
counter = FIELD_GET(CONFIG_COUNTER_MASK, cfg);
counter = ddr_perf_alloc_counter(pmu, event_id, counter);
if (counter < 0) {
dev_dbg(pmu->dev, "There are not enough counters\n");
return -EOPNOTSUPP;
}
pmu->events[counter] = event;
pmu->active_events++;
hwc->idx = counter;
hwc->state |= PERF_HES_STOPPED;
if (is_imx93(pmu))
/* read trans, write trans, read beat */
imx93_ddr_perf_monitor_config(pmu, event_id, counter, cfg1, cfg2);
if (is_imx95(pmu))
/* write beat, read beat2, read beat1, read beat */
imx95_ddr_perf_monitor_config(pmu, event_id, counter, cfg1, cfg2);
if (flags & PERF_EF_START)
ddr_perf_event_start(event, flags);
/* read trans, write trans, read beat */
ddr_perf_monitor_config(pmu, cfg, cfg1, cfg2);
return 0;
}
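
User space selects both the event and, optionally, a specific counter through perf_event_attr::config, and the driver splits them with FIELD_GET(). A sketch of the matching encode side, assuming CONFIG_EVENT_MASK and CONFIG_COUNTER_MASK cover bits [7:0] and [15:8] respectively, as the open-coded shifts they replace did (the real mask definitions live elsewhere in the patch):

  #include <linux/bits.h>
  #include <linux/bitfield.h>

  /* Hypothetical mirror of the driver's masks, for illustration. */
  #define CONFIG_EVENT_MASK	GENMASK(7, 0)
  #define CONFIG_COUNTER_MASK	GENMASK(15, 8)

  static u64 ddr_event_config(u8 event_id, u8 counter)
  {
  	return FIELD_PREP(CONFIG_EVENT_MASK, event_id) |
  	       FIELD_PREP(CONFIG_COUNTER_MASK, counter);	/* 0 = auto-allocate */
  }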
@@ -501,9 +649,11 @@ static void ddr_perf_event_del(struct perf_event *event, int flags)
{
struct ddr_pmu *pmu = to_ddr_pmu(event->pmu);
struct hw_perf_event *hwc = &event->hw;
int counter = hwc->idx;
ddr_perf_event_stop(event, PERF_EF_UPDATE);
pmu->events[counter] = NULL;
pmu->active_events--;
hwc->idx = -1;
}

View File

@@ -537,4 +537,5 @@ void hisi_pmu_init(struct hisi_pmu *hisi_pmu, struct module *module)
}
EXPORT_SYMBOL_GPL(hisi_pmu_init);
MODULE_DESCRIPTION("HiSilicon SoC uncore Performance Monitor driver framework");
MODULE_LICENSE("GPL v2");

View File

@@ -763,4 +763,5 @@ module_init(cn10k_ddr_pmu_init);
module_exit(cn10k_ddr_pmu_exit);
MODULE_AUTHOR("Bharat Bhushan <bbhushan2@marvell.com>");
MODULE_DESCRIPTION("Marvell CN10K DRAM Subsystem (DSS) Performance Monitor Driver");
MODULE_LICENSE("GPL v2");

View File

@@ -134,6 +134,7 @@ struct acpi_scan_handler {
bool (*match)(const char *idstr, const struct acpi_device_id **matchid);
int (*attach)(struct acpi_device *dev, const struct acpi_device_id *id);
void (*detach)(struct acpi_device *dev);
void (*post_eject)(struct acpi_device *dev);
void (*bind)(struct device *phys_dev);
void (*unbind)(struct device *phys_dev);
struct acpi_hotplug_profile hotplug;
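
The new post_eject callback gives scan handlers a hook that runs only after the device has actually been ejected, rather than at detach time. A minimal sketch of a handler opting in (all names here are hypothetical):

  static void example_post_eject(struct acpi_device *adev)
  {
  	/* Tear down state that must survive until the eject completes. */
  }

  static struct acpi_scan_handler example_handler = {
  	.ids        = example_ids,
  	.attach     = example_attach,
  	.detach     = example_detach,
  	.post_eject = example_post_eject,	/* new member in this series */
  };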

View File

@@ -217,7 +217,7 @@ struct acpi_processor_flags {
u8 has_lpi:1;
u8 power_setup_done:1;
u8 bm_rld_set:1;
u8 need_hotplug_init:1;
u8 previously_online:1;
};
struct acpi_processor {

View File

@@ -237,11 +237,6 @@ acpi_table_parse_cedt(enum acpi_cedt_type id,
int acpi_parse_mcfg (struct acpi_table_header *header);
void acpi_table_print_madt_entry (struct acpi_subtable_header *madt);
static inline bool acpi_gicc_is_usable(struct acpi_madt_generic_interrupt *gicc)
{
return gicc->flags & ACPI_MADT_ENABLED;
}
#if defined(CONFIG_X86) || defined(CONFIG_LOONGARCH)
void acpi_numa_processor_affinity_init (struct acpi_srat_cpu_affinity *pa);
#else
@@ -304,6 +299,8 @@ int acpi_map_cpu(acpi_handle handle, phys_cpuid_t physid, u32 acpi_id,
int acpi_unmap_cpu(int cpu);
#endif /* CONFIG_ACPI_HOTPLUG_CPU */
acpi_handle acpi_get_processor_handle(int cpu);
#ifdef CONFIG_ACPI_HOTPLUG_IOAPIC
int acpi_get_ioapic_id(acpi_handle handle, u32 gsi_base, u64 *phys_addr);
#endif
@@ -1076,6 +1073,11 @@ static inline bool acpi_sleep_state_supported(u8 sleep_state)
return false;
}
static inline acpi_handle acpi_get_processor_handle(int cpu)
{
return NULL;
}
#endif /* !CONFIG_ACPI */
extern void arch_post_acpi_subsys_init(void);

View File

@@ -93,6 +93,7 @@ static inline void set_nr_cpu_ids(unsigned int nr)
*
* cpu_possible_mask - has bit 'cpu' set iff cpu is populatable
* cpu_present_mask - has bit 'cpu' set iff cpu is populated
* cpu_enabled_mask - has bit 'cpu' set iff cpu can be brought online
* cpu_online_mask - has bit 'cpu' set iff cpu available to scheduler
* cpu_active_mask - has bit 'cpu' set iff cpu available to migration
*
@@ -125,11 +126,13 @@ static inline void set_nr_cpu_ids(unsigned int nr)
extern struct cpumask __cpu_possible_mask;
extern struct cpumask __cpu_online_mask;
extern struct cpumask __cpu_enabled_mask;
extern struct cpumask __cpu_present_mask;
extern struct cpumask __cpu_active_mask;
extern struct cpumask __cpu_dying_mask;
#define cpu_possible_mask ((const struct cpumask *)&__cpu_possible_mask)
#define cpu_online_mask ((const struct cpumask *)&__cpu_online_mask)
#define cpu_enabled_mask ((const struct cpumask *)&__cpu_enabled_mask)
#define cpu_present_mask ((const struct cpumask *)&__cpu_present_mask)
#define cpu_active_mask ((const struct cpumask *)&__cpu_active_mask)
#define cpu_dying_mask ((const struct cpumask *)&__cpu_dying_mask)
@@ -1075,6 +1078,7 @@ extern const DECLARE_BITMAP(cpu_all_bits, NR_CPUS);
#else
#define for_each_possible_cpu(cpu) for_each_cpu((cpu), cpu_possible_mask)
#define for_each_online_cpu(cpu) for_each_cpu((cpu), cpu_online_mask)
#define for_each_enabled_cpu(cpu) for_each_cpu((cpu), cpu_enabled_mask)
#define for_each_present_cpu(cpu) for_each_cpu((cpu), cpu_present_mask)
#endif
@@ -1092,6 +1096,15 @@ set_cpu_possible(unsigned int cpu, bool possible)
cpumask_clear_cpu(cpu, &__cpu_possible_mask);
}
static inline void
set_cpu_enabled(unsigned int cpu, bool can_be_onlined)
{
if (can_be_onlined)
cpumask_set_cpu(cpu, &__cpu_enabled_mask);
else
cpumask_clear_cpu(cpu, &__cpu_enabled_mask);
}
static inline void
set_cpu_present(unsigned int cpu, bool present)
{
@@ -1173,6 +1186,7 @@ static __always_inline unsigned int num_online_cpus(void)
return raw_atomic_read(&__num_online_cpus);
}
#define num_possible_cpus() cpumask_weight(cpu_possible_mask)
#define num_enabled_cpus() cpumask_weight(cpu_enabled_mask)
#define num_present_cpus() cpumask_weight(cpu_present_mask)
#define num_active_cpus() cpumask_weight(cpu_active_mask)
@@ -1181,6 +1195,11 @@ static inline bool cpu_online(unsigned int cpu)
return cpumask_test_cpu(cpu, cpu_online_mask);
}
static inline bool cpu_enabled(unsigned int cpu)
{
return cpumask_test_cpu(cpu, cpu_enabled_mask);
}
static inline bool cpu_possible(unsigned int cpu)
{
return cpumask_test_cpu(cpu, cpu_possible_mask);
@@ -1205,6 +1224,7 @@ static inline bool cpu_dying(unsigned int cpu)
#define num_online_cpus() 1U
#define num_possible_cpus() 1U
#define num_enabled_cpus() 1U
#define num_present_cpus() 1U
#define num_active_cpus() 1U
@@ -1218,6 +1238,11 @@ static inline bool cpu_possible(unsigned int cpu)
return cpu == 0;
}
static inline bool cpu_enabled(unsigned int cpu)
{
return cpu == 0;
}
static inline bool cpu_present(unsigned int cpu)
{
return cpu == 0;
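
cpu_enabled_mask slots in between present and online: a CPU can be described by firmware (present) while not yet being eligible to boot, which is the state the arm64 virtual CPU hotplug work needs to represent. A short sketch using only the accessors added above:

  #include <linux/cpumask.h>
  #include <linux/printk.h>

  static void report_enabled_cpus(void)
  {
  	unsigned int cpu;

  	pr_info("%u of %u present CPUs may be brought online\n",
  		num_enabled_cpus(), num_present_cpus());

  	for_each_enabled_cpu(cpu)
  		pr_info("cpu%u is enabled\n", cpu);
  }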

View File

@@ -10,10 +10,6 @@
#include <linux/irqchip/arm-vgic-info.h>
#define GICD_INT_DEF_PRI 0xa0
#define GICD_INT_DEF_PRI_X4 ((GICD_INT_DEF_PRI << 24) |\
(GICD_INT_DEF_PRI << 16) |\
(GICD_INT_DEF_PRI << 8) |\
GICD_INT_DEF_PRI)
struct irq_domain;
struct fwnode_handle;

View File

@@ -0,0 +1,52 @@
/* SPDX-License-Identifier: GPL-2.0-only */
#ifndef __LINUX_IRQCHIP_ARM_GIC_V3_PRIO_H
#define __LINUX_IRQCHIP_ARM_GIC_V3_PRIO_H
/*
* GIC priorities from the view of the PMR/RPR.
*
* These values are chosen to be valid in either the absolute priority space or
* the NS view of the priority space. The value programmed into the distributor
* and ITS will be chosen at boot time such that these values appear in the
* PMR/RPR.
*
* GICV3_PRIO_UNMASKED is the PMR view of the priority to use to permit both
* IRQs and pseudo-NMIs.
*
* GICV3_PRIO_IRQ is the PMR view of the priority of regular interrupts. This
* can be written to the PMR to mask regular IRQs.
*
* GICV3_PRIO_NMI is the PMR view of the priority of pseudo-NMIs. This can be
* written to the PMR to mask pseudo-NMIs.
*
* On arm64, some code sections either automatically switch back to PSR.I or
* explicitly require that priority masking not be used. If the bit
* GICV3_PRIO_PSR_I_SET is included in the priority mask, it indicates that
* PSR.I should be set and that interrupt disabling temporarily does not rely
* on IRQ priorities.
*/
#define GICV3_PRIO_UNMASKED 0xe0
#define GICV3_PRIO_IRQ 0xc0
#define GICV3_PRIO_NMI 0x80
#define GICV3_PRIO_PSR_I_SET (1 << 4)
#ifndef __ASSEMBLER__
#define __gicv3_prio_to_ns(p) (0xff & ((p) << 1))
#define __gicv3_ns_to_prio(ns) (0x80 | ((ns) >> 1))
#define __gicv3_prio_valid_ns(p) \
(__gicv3_ns_to_prio(__gicv3_prio_to_ns(p)) == (p))
static_assert(__gicv3_prio_valid_ns(GICV3_PRIO_NMI));
static_assert(__gicv3_prio_valid_ns(GICV3_PRIO_IRQ));
static_assert(GICV3_PRIO_NMI < GICV3_PRIO_IRQ);
static_assert(GICV3_PRIO_IRQ < GICV3_PRIO_UNMASKED);
static_assert(GICV3_PRIO_IRQ < (GICV3_PRIO_IRQ | GICV3_PRIO_PSR_I_SET));
#endif /* __ASSEMBLER__ */
#endif /* __LINUX_IRQCHIP_ARM_GIC_V3_PRIO_H */
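
The conversion macros make the assertions above easy to check by hand. Worked through for the three priorities (lower value means higher priority):

  /*
   * GICV3_PRIO_NMI      0x80: NS view (0x80 << 1) & 0xff = 0x00,
   *                           back: 0x80 | (0x00 >> 1) = 0x80, round-trips
   * GICV3_PRIO_IRQ      0xc0: NS view 0x80, back: 0x80 | 0x40 = 0xc0, round-trips
   * GICV3_PRIO_UNMASKED 0xe0: NS view 0xc0, back: 0x80 | 0x60 = 0xe0
   */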

View File

@@ -638,7 +638,7 @@ struct fwnode_handle;
int __init its_lpi_memreserve_init(void);
int its_cpu_init(void);
int its_init(struct fwnode_handle *handle, struct rdists *rdists,
struct irq_domain *domain);
struct irq_domain *domain, u8 irq_prio);
int mbi_init(struct fwnode_handle *fwnode, struct irq_domain *parent);
static inline bool gic_enable_sre(void)
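
its_init() now takes the LPI priority as a parameter instead of hard-coding a default, so the ITS inherits whatever value the GICv3 driver settled on at boot. A hedged example of a call site; the dist_prio_irq name is assumed from the gic-v3 driver rather than shown in this hunk:

  err = its_init(handle, &gic_data.rdists, gic_data.domain, dist_prio_irq);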

View File

@@ -309,4 +309,6 @@
} \
} while (0)
#include <asm/arm_pmuv3.h>
#endif

View File

@@ -39,6 +39,14 @@
*/
#define REPEAT_BYTE(x) ((~0ul / 0xff) * (x))
/**
* REPEAT_BYTE_U32 - repeat the value @x multiple times as a u32 value
* @x: value to repeat
*
* NOTE: @x is not checked for > 0xff; larger values produce odd results.
*/
#define REPEAT_BYTE_U32(x) lower_32_bits(REPEAT_BYTE(x))
/* Set bits in the first 'n' bytes when loaded from memory */
#ifdef __LITTLE_ENDIAN
# define aligned_byte_mask(n) ((1UL << 8*(n))-1)
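
This is what replaces the hand-rolled GICD_INT_DEF_PRI_X4 macro removed from the GIC header above: replicating a one-byte priority across a 32-bit register becomes a single expression. A worked example (0x0101010101010101 * 0xa0 = 0xa0a0a0a0a0a0a0a0, with the low 32 bits kept):

  static_assert(REPEAT_BYTE_U32(0xa0) == 0xa0a0a0a0U);	/* == old GICD_INT_DEF_PRI_X4 */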

View File

@@ -3072,6 +3072,9 @@ EXPORT_SYMBOL(__cpu_possible_mask);
struct cpumask __cpu_online_mask __read_mostly;
EXPORT_SYMBOL(__cpu_online_mask);
struct cpumask __cpu_enabled_mask __read_mostly;
EXPORT_SYMBOL(__cpu_enabled_mask);
struct cpumask __cpu_present_mask __read_mostly;
EXPORT_SYMBOL(__cpu_present_mask);

View File

@@ -47,7 +47,7 @@ static void test_tpidr(pid_t child)
/* ...write a new value.. */
write_iov.iov_len = sizeof(uint64_t);
write_val[0] = read_val[0]++;
write_val[0] = read_val[0] + 1;
ret = ptrace(PTRACE_SETREGSET, child, NT_ARM_TLS, &write_iov);
ksft_test_result(ret == 0, "write_tpidr_one\n");

View File

@@ -2,6 +2,7 @@ fp-pidbench
fp-ptrace
fp-stress
fpsimd-test
kernel-test
rdvl-sme
rdvl-sve
sve-probe-vls

View File

@@ -12,6 +12,7 @@ TEST_GEN_PROGS := \
vec-syscfg \
za-fork za-ptrace
TEST_GEN_PROGS_EXTENDED := fp-pidbench fpsimd-test \
kernel-test \
rdvl-sme rdvl-sve \
sve-test \
ssve-test \

View File

@@ -319,6 +319,19 @@ static void start_fpsimd(struct child_data *child, int cpu, int copy)
ksft_print_msg("Started %s\n", child->name);
}
static void start_kernel(struct child_data *child, int cpu, int copy)
{
int ret;
ret = asprintf(&child->name, "KERNEL-%d-%d", cpu, copy);
if (ret == -1)
ksft_exit_fail_msg("asprintf() failed\n");
child_start(child, "./kernel-test");
ksft_print_msg("Started %s\n", child->name);
}
static void start_sve(struct child_data *child, int vl, int cpu)
{
int ret;
@@ -438,7 +451,7 @@ int main(int argc, char **argv)
int ret;
int timeout = 10;
int cpus, i, j, c;
int sve_vl_count, sme_vl_count, fpsimd_per_cpu;
int sve_vl_count, sme_vl_count;
bool all_children_started = false;
int seen_children;
int sve_vls[MAX_VLS], sme_vls[MAX_VLS];
@@ -482,12 +495,7 @@
have_sme2 = false;
}
/* Force context switching if we only have FPSIMD */
if (!sve_vl_count && !sme_vl_count)
fpsimd_per_cpu = 2;
else
fpsimd_per_cpu = 1;
tests += cpus * fpsimd_per_cpu;
tests += cpus * 2;
ksft_print_header();
ksft_set_plan(tests);
@@ -542,8 +550,8 @@
tests);
for (i = 0; i < cpus; i++) {
for (j = 0; j < fpsimd_per_cpu; j++)
start_fpsimd(&children[num_children++], i, j);
start_fpsimd(&children[num_children++], i, 0);
start_kernel(&children[num_children++], i, 0);
for (j = 0; j < sve_vl_count; j++)
start_sve(&children[num_children++], sve_vls[j], i);

View File

@@ -0,0 +1,324 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2024 ARM Limited.
*/
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/kernel.h>
#include <linux/if_alg.h>
#define DATA_SIZE (16 * 4096)
static int base, sock;
static int digest_len;
static char *ref;
static char *digest;
static char *alg_name;
static struct iovec data_iov;
static int zerocopy[2];
static int sigs;
static int iter;
static void handle_exit_signal(int sig, siginfo_t *info, void *context)
{
printf("Terminated by signal %d, iterations=%d, signals=%d\n",
sig, iter, sigs);
exit(0);
}
static void handle_kick_signal(int sig, siginfo_t *info, void *context)
{
sigs++;
}
static char *drivers[] = {
"crct10dif-arm64-ce",
/* "crct10dif-arm64-neon", - Same priority as generic */
"sha1-ce",
"sha224-arm64",
"sha224-arm64-neon",
"sha224-ce",
"sha256-arm64",
"sha256-arm64-neon",
"sha256-ce",
"sha384-ce",
"sha512-ce",
"sha3-224-ce",
"sha3-256-ce",
"sha3-384-ce",
"sha3-512-ce",
"sm3-ce",
"sm3-neon",
};
static bool create_socket(void)
{
FILE *proc;
struct sockaddr_alg addr;
char buf[1024];
char *c, *driver_name;
bool is_shash, match;
int ret, i;
ret = socket(AF_ALG, SOCK_SEQPACKET, 0);
if (ret < 0) {
if (errno == EAFNOSUPPORT) {
printf("AF_ALG not supported\n");
return false;
}
printf("Failed to create AF_ALG socket: %s (%d)\n",
strerror(errno), errno);
return false;
}
base = ret;
memset(&addr, 0, sizeof(addr));
addr.salg_family = AF_ALG;
strncpy((char *)addr.salg_type, "hash", sizeof(addr.salg_type));
proc = fopen("/proc/crypto", "r");
if (!proc) {
printf("Unable to open /proc/crypto\n");
return false;
}
driver_name = NULL;
is_shash = false;
match = false;
/* Look through /proc/crypto for a driver with kernel mode FP usage */
while (!match) {
c = fgets(buf, sizeof(buf), proc);
if (!c) {
if (feof(proc)) {
printf("Nothing found in /proc/crypto\n");
return false;
}
continue;
}
/* Algorithm descriptions are separated by a blank line */
if (*c == '\n') {
if (is_shash && driver_name) {
for (i = 0; i < ARRAY_SIZE(drivers); i++) {
if (strcmp(drivers[i],
driver_name) == 0) {
match = true;
}
}
}
if (!match) {
digest_len = 0;
free(driver_name);
driver_name = NULL;
free(alg_name);
alg_name = NULL;
is_shash = false;
}
continue;
}
/* Remove trailing newline */
c = strchr(buf, '\n');
if (c)
*c = '\0';
/* Find the field/value separator and start of the value */
c = strchr(buf, ':');
if (!c)
continue;
c += 2;
if (strncmp(buf, "digestsize", strlen("digestsize")) == 0)
sscanf(c, "%d", &digest_len);
if (strncmp(buf, "name", strlen("name")) == 0)
alg_name = strdup(c);
if (strncmp(buf, "driver", strlen("driver")) == 0)
driver_name = strdup(c);
if (strncmp(buf, "type", strlen("type")) == 0)
if (strncmp(c, "shash", strlen("shash")) == 0)
is_shash = true;
}
strncpy((char *)addr.salg_name, alg_name,
sizeof(addr.salg_name) - 1);
ret = bind(base, (struct sockaddr *)&addr, sizeof(addr));
if (ret < 0) {
printf("Failed to bind %s: %s (%d)\n",
addr.salg_name, strerror(errno), errno);
return false;
}
ret = accept(base, NULL, 0);
if (ret < 0) {
printf("Failed to accept %s: %s (%d)\n",
addr.salg_name, strerror(errno), errno);
return false;
}
sock = ret;
ret = pipe(zerocopy);
if (ret != 0) {
printf("Failed to create zerocopy pipe: %s (%d)\n",
strerror(errno), errno);
return false;
}
ref = malloc(digest_len);
if (!ref) {
printf("Failed to allocated %d byte reference\n", digest_len);
return false;
}
digest = malloc(digest_len);
if (!digest) {
printf("Failed to allocated %d byte digest\n", digest_len);
return false;
}
return true;
}
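
For reference, the parser above walks the blank-line-separated records of /proc/crypto; a record it would accept looks roughly like this (values illustrative, fields abbreviated):

  name         : sha256
  driver       : sha256-ce
  module       : sha256_arm64
  priority     : 200
  type         : shash
  digestsize   : 32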
static bool compute_digest(void *buf)
{
struct iovec iov;
int ret, wrote;
iov = data_iov;
while (iov.iov_len) {
ret = vmsplice(zerocopy[1], &iov, 1, SPLICE_F_GIFT);
if (ret < 0) {
printf("Failed to send buffer: %s (%d)\n",
strerror(errno), errno);
return false;
}
wrote = ret;
ret = splice(zerocopy[0], NULL, sock, NULL, wrote, 0);
if (ret < 0) {
printf("Failed to splice buffer: %s (%d)\n",
strerror(errno), errno);
return false;
} else if (ret != wrote) {
printf("Short splice: %d < %d\n", ret, wrote);
}
iov.iov_len -= wrote;
iov.iov_base += wrote;
}
reread:
ret = recv(sock, buf, digest_len, 0);
if (ret == 0) {
printf("No digest returned\n");
return false;
}
if (ret != digest_len) {
if (errno == EAGAIN)
goto reread;
printf("Failed to get digest: %s (%d)\n",
strerror(errno), errno);
return false;
}
return true;
}
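
compute_digest() deliberately uses the vmsplice()/splice() zerocopy path so the kernel hashes the caller's pages directly, maximising the time spent in kernel-mode NEON. For contrast, a plain copying variant would be a single send() of the buffer followed by reading the digest back (a sketch reusing the file's globals; the function name is hypothetical):

  static bool compute_digest_copy(void *buf)
  {
  	if (send(sock, data_iov.iov_base, data_iov.iov_len, 0) < 0)
  		return false;
  	return recv(sock, buf, digest_len, 0) == digest_len;
  }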
int main(void)
{
char *data;
struct sigaction sa;
int ret;
/* Ensure we have unbuffered output */
setvbuf(stdout, NULL, _IOLBF, 0);
/* The parent will communicate with us via signals */
memset(&sa, 0, sizeof(sa));
sa.sa_sigaction = handle_exit_signal;
sa.sa_flags = SA_RESTART | SA_SIGINFO;
sigemptyset(&sa.sa_mask);
ret = sigaction(SIGTERM, &sa, NULL);
if (ret < 0)
printf("Failed to install SIGTERM handler: %s (%d)\n",
strerror(errno), errno);
sa.sa_sigaction = handle_kick_signal;
ret = sigaction(SIGUSR2, &sa, NULL);
if (ret < 0)
printf("Failed to install SIGUSR2 handler: %s (%d)\n",
strerror(errno), errno);
data = malloc(DATA_SIZE);
if (!data) {
printf("Failed to allocate data buffer\n");
return EXIT_FAILURE;
}
memset(data, 0, DATA_SIZE);
data_iov.iov_base = data;
data_iov.iov_len = DATA_SIZE;
/*
* If we can't create a socket, assume it's a lack of system
* support and fall back to a basic FPSIMD test for the
* benefit of fp-stress.
*/
if (!create_socket()) {
execl("./fpsimd-test", "./fpsimd-test", NULL);
printf("Failed to fall back to fspimd-test: %d (%s)\n",
errno, strerror(errno));
return EXIT_FAILURE;
}
/*
* Compute a reference digest that we hope is repeatable; we do
* this at runtime partly to make it easier to play with the
* parameters.
*/
if (!compute_digest(ref)) {
printf("Failed to compute reference digest\n");
return EXIT_FAILURE;
}
printf("AF_ALG using %s\n", alg_name);
while (true) {
if (!compute_digest(digest)) {
printf("Failed to compute digest, iter=%d\n", iter);
return EXIT_FAILURE;
}
if (memcmp(ref, digest, digest_len) != 0) {
printf("Digest mismatch, iter=%d\n", iter);
return EXIT_FAILURE;
}
iter++;
}
return EXIT_FAILURE;
}

View File

@@ -2,6 +2,5 @@
CFLAGS += $(KHDR_INCLUDES)
TEST_GEN_PROGS := tags_test
TEST_PROGS := run_tags_test.sh
include ../../lib.mk

View File

@@ -1,12 +0,0 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
echo "--------------------"
echo "running tags test"
echo "--------------------"
./tags_test
if [ $? -ne 0 ]; then
echo "[FAIL]"
else
echo "[PASS]"
fi

View File

@@ -17,19 +17,21 @@ int main(void)
static int tbi_enabled = 0;
unsigned long tag = 0;
struct utsname *ptr;
int err;
ksft_print_header();
ksft_set_plan(1);
if (prctl(PR_SET_TAGGED_ADDR_CTRL, PR_TAGGED_ADDR_ENABLE, 0, 0, 0) == 0)
tbi_enabled = 1;
ptr = (struct utsname *)malloc(sizeof(*ptr));
if (!ptr)
ksft_exit_fail_msg("Failed to allocate utsname buffer\n");
ksft_exit_fail_perror("Failed to allocate utsname buffer");
if (tbi_enabled)
tag = 0x42;
ptr = (struct utsname *)SET_TAG(ptr, tag);
err = uname(ptr);
ksft_test_result(!uname(ptr), "Syscall successful with tagged address\n");
free(ptr);
return err;
ksft_finished();
}