Compare commits


No commits in common. "49e734ecec1aaa9835769605715558ca425a4356" and "955853cf83657faa58572ef3f08b44f0f88885c1" have entirely different histories.

807 changed files with 5347 additions and 9329 deletions


@ -223,8 +223,6 @@ Dmitry Safonov <0x7f454c46@gmail.com> <d.safonov@partner.samsung.com>
Dmitry Safonov <0x7f454c46@gmail.com> <dsafonov@virtuozzo.com> Dmitry Safonov <0x7f454c46@gmail.com> <dsafonov@virtuozzo.com>
Domen Puncer <domen@coderock.org> Domen Puncer <domen@coderock.org>
Douglas Gilbert <dougg@torque.net> Douglas Gilbert <dougg@torque.net>
Drew Fustini <fustini@kernel.org> <drew@pdp7.com>
<duje@dujemihanovic.xyz> <duje.mihanovic@skole.hr>
Ed L. Cashin <ecashin@coraid.com> Ed L. Cashin <ecashin@coraid.com>
Elliot Berman <quic_eberman@quicinc.com> <eberman@codeaurora.org> Elliot Berman <quic_eberman@quicinc.com> <eberman@codeaurora.org>
Enric Balletbo i Serra <eballetbo@kernel.org> <enric.balletbo@collabora.com> Enric Balletbo i Serra <eballetbo@kernel.org> <enric.balletbo@collabora.com>
@ -416,7 +414,6 @@ Kenneth W Chen <kenneth.w.chen@intel.com>
Kenneth Westfield <quic_kwestfie@quicinc.com> <kwestfie@codeaurora.org> Kenneth Westfield <quic_kwestfie@quicinc.com> <kwestfie@codeaurora.org>
Kiran Gunda <quic_kgunda@quicinc.com> <kgunda@codeaurora.org> Kiran Gunda <quic_kgunda@quicinc.com> <kgunda@codeaurora.org>
Kirill Tkhai <tkhai@ya.ru> <ktkhai@virtuozzo.com> Kirill Tkhai <tkhai@ya.ru> <ktkhai@virtuozzo.com>
Kirill A. Shutemov <kas@kernel.org> <kirill.shutemov@linux.intel.com>
Kishon Vijay Abraham I <kishon@kernel.org> <kishon@ti.com> Kishon Vijay Abraham I <kishon@kernel.org> <kishon@ti.com>
Konrad Dybcio <konradybcio@kernel.org> <konrad.dybcio@linaro.org> Konrad Dybcio <konradybcio@kernel.org> <konrad.dybcio@linaro.org>
Konrad Dybcio <konradybcio@kernel.org> <konrad.dybcio@somainline.org> Konrad Dybcio <konradybcio@kernel.org> <konrad.dybcio@somainline.org>
@ -833,6 +830,3 @@ Yosry Ahmed <yosry.ahmed@linux.dev> <yosryahmed@google.com>
Yusuke Goda <goda.yusuke@renesas.com> Yusuke Goda <goda.yusuke@renesas.com>
Zack Rusin <zack.rusin@broadcom.com> <zackr@vmware.com> Zack Rusin <zack.rusin@broadcom.com> <zackr@vmware.com>
Zhu Yanjun <zyjzyj2000@gmail.com> <yanjunz@nvidia.com> Zhu Yanjun <zyjzyj2000@gmail.com> <yanjunz@nvidia.com>
Zijun Hu <zijun.hu@oss.qualcomm.com> <quic_zijuhu@quicinc.com>
Zijun Hu <zijun.hu@oss.qualcomm.com> <zijuhu@codeaurora.org>
Zijun Hu <zijun_hu@htc.com>


@ -2981,11 +2981,6 @@ S: 521 Pleasant Valley Road
S: Potsdam, New York 13676 S: Potsdam, New York 13676
S: USA S: USA
N: Shannon Nelson
E: sln@onemain.com
D: Worked on several network drivers including
D: ixgbe, i40e, ionic, pds_core, pds_vdpa, pds_fwctl
N: Dave Neuer N: Dave Neuer
E: dave.neuer@pobox.com E: dave.neuer@pobox.com
D: Helped implement support for Compaq's H31xx series iPAQs D: Helped implement support for Compaq's H31xx series iPAQs


@ -56,7 +56,7 @@ Date: January 2009
Contact: Rafael J. Wysocki <rjw@rjwysocki.net> Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description: Description:
The /sys/devices/.../async attribute allows the user space to The /sys/devices/.../async attribute allows the user space to
enable or disable the device's suspend and resume callbacks to enable or diasble the device's suspend and resume callbacks to
be executed asynchronously (ie. in separate threads, in parallel be executed asynchronously (ie. in separate threads, in parallel
with the main suspend/resume thread) during system-wide power with the main suspend/resume thread) during system-wide power
transitions (eg. suspend to RAM, hibernation). transitions (eg. suspend to RAM, hibernation).


@ -584,7 +584,6 @@ What: /sys/devices/system/cpu/vulnerabilities
/sys/devices/system/cpu/vulnerabilities/spectre_v1 /sys/devices/system/cpu/vulnerabilities/spectre_v1
/sys/devices/system/cpu/vulnerabilities/spectre_v2 /sys/devices/system/cpu/vulnerabilities/spectre_v2
/sys/devices/system/cpu/vulnerabilities/srbds /sys/devices/system/cpu/vulnerabilities/srbds
/sys/devices/system/cpu/vulnerabilities/tsa
/sys/devices/system/cpu/vulnerabilities/tsx_async_abort /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
Date: January 2018 Date: January 2018
Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org> Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>


@ -711,7 +711,7 @@ Description: This file shows the thin provisioning type. This is one of
The file is read only. The file is read only.
What: /sys/class/scsi_device/*/device/unit_descriptor/physical_memory_resource_count What: /sys/class/scsi_device/*/device/unit_descriptor/physical_memory_resourse_count
Date: February 2018 Date: February 2018
Contact: Stanislav Nijnikov <stanislav.nijnikov@wdc.com> Contact: Stanislav Nijnikov <stanislav.nijnikov@wdc.com>
Description: This file shows the total physical memory resources. This is Description: This file shows the total physical memory resources. This is


@ -49,12 +49,6 @@ Description:
(RO) Supported minimum scrub cycle duration in seconds (RO) Supported minimum scrub cycle duration in seconds
by the memory scrubber. by the memory scrubber.
Device-based scrub: returns the minimum scrub cycle
supported by the memory device.
Region-based scrub: returns the max of minimum scrub cycles
supported by individual memory devices that back the region.
What: /sys/bus/edac/devices/<dev-name>/scrubX/max_cycle_duration What: /sys/bus/edac/devices/<dev-name>/scrubX/max_cycle_duration
Date: March 2025 Date: March 2025
KernelVersion: 6.15 KernelVersion: 6.15
@ -63,16 +57,6 @@ Description:
(RO) Supported maximum scrub cycle duration in seconds (RO) Supported maximum scrub cycle duration in seconds
by the memory scrubber. by the memory scrubber.
Device-based scrub: returns the maximum scrub cycle supported
by the memory device.
Region-based scrub: returns the min of maximum scrub cycles
supported by individual memory devices that back the region.
If the memory device does not provide maximum scrub cycle
information, return the maximum supported value of the scrub
cycle field.
What: /sys/bus/edac/devices/<dev-name>/scrubX/current_cycle_duration What: /sys/bus/edac/devices/<dev-name>/scrubX/current_cycle_duration
Date: March 2025 Date: March 2025
KernelVersion: 6.15 KernelVersion: 6.15


@ -1732,6 +1732,12 @@ The following nested keys are defined.
numa_hint_faults (npn) numa_hint_faults (npn)
Number of NUMA hinting faults. Number of NUMA hinting faults.
numa_task_migrated (npn)
Number of task migration by NUMA balancing.
numa_task_swapped (npn)
Number of task swap by NUMA balancing.
pgdemote_kswapd pgdemote_kswapd
Number of pages demoted by kswapd. Number of pages demoted by kswapd.


@ -157,7 +157,9 @@ This is achieved by using the otherwise unused and obsolete VERW instruction in
combination with a microcode update. The microcode clears the affected CPU combination with a microcode update. The microcode clears the affected CPU
buffers when the VERW instruction is executed. buffers when the VERW instruction is executed.
Kernel does the buffer clearing with x86_clear_cpu_buffers(). Kernel reuses the MDS function to invoke the buffer clearing:
mds_clear_cpu_buffers()
On MDS affected CPUs, the kernel already invokes CPU buffer clear on On MDS affected CPUs, the kernel already invokes CPU buffer clear on
kernel/userspace, hypervisor/guest and C-state (idle) transitions. No kernel/userspace, hypervisor/guest and C-state (idle) transitions. No


@ -7488,19 +7488,6 @@
having this key zero'ed is acceptable. E.g. in testing having this key zero'ed is acceptable. E.g. in testing
scenarios. scenarios.
tsa= [X86] Control mitigation for Transient Scheduler
Attacks on AMD CPUs. Search the following in your
favourite search engine for more details:
"Technical guidance for mitigating transient scheduler
attacks".
off - disable the mitigation
on - enable the mitigation (default)
user - mitigate only user/kernel transitions
vm - mitigate only guest/host transitions
tsc= Disable clocksource stability checks for TSC. tsc= Disable clocksource stability checks for TSC.
Format: <string> Format: <string>
[x86] reliable: mark tsc clocksource as reliable, this [x86] reliable: mark tsc clocksource as reliable, this


@ -93,7 +93,7 @@ enters a C-state.
The kernel provides a function to invoke the buffer clearing: The kernel provides a function to invoke the buffer clearing:
x86_clear_cpu_buffers() mds_clear_cpu_buffers()
Also macro CLEAR_CPU_BUFFERS can be used in ASM late in exit-to-user path. Also macro CLEAR_CPU_BUFFERS can be used in ASM late in exit-to-user path.
Other than CFLAGS.ZF, this macro doesn't clobber any registers. Other than CFLAGS.ZF, this macro doesn't clobber any registers.
@ -185,9 +185,9 @@ Mitigation points
idle clearing would be a window dressing exercise and is therefore not idle clearing would be a window dressing exercise and is therefore not
activated. activated.
The invocation is controlled by the static key cpu_buf_idle_clear which is The invocation is controlled by the static key mds_idle_clear which is
switched depending on the chosen mitigation mode and the SMT state of the switched depending on the chosen mitigation mode and the SMT state of
system. the system.
The buffer clear is only invoked before entering the C-State to prevent The buffer clear is only invoked before entering the C-State to prevent
that stale data from the idling CPU from spilling to the Hyper-Thread that stale data from the idling CPU from spilling to the Hyper-Thread


@ -233,16 +233,10 @@ attempts in order to enforce the LRU property which have increasing impacts on
other CPUs involved in the following operation attempts: other CPUs involved in the following operation attempts:
- Attempt to use CPU-local state to batch operations - Attempt to use CPU-local state to batch operations
- Attempt to fetch ``target_free`` free nodes from global lists - Attempt to fetch free nodes from global lists
- Attempt to pull any node from a global list and remove it from the hashmap - Attempt to pull any node from a global list and remove it from the hashmap
- Attempt to pull any node from any CPU's list and remove it from the hashmap - Attempt to pull any node from any CPU's list and remove it from the hashmap
The number of nodes to borrow from the global list in a batch, ``target_free``,
depends on the size of the map. Larger batch size reduces lock contention, but
may also exhaust the global structure. The value is computed at map init to
avoid exhaustion, by limiting aggregate reservation by all CPUs to half the map
size. With a minimum of a single element and maximum budget of 128 at a time.
This algorithm is described visually in the following diagram. See the This algorithm is described visually in the following diagram. See the
description in commit 3a08c2fd7634 ("bpf: LRU List") for a full explanation of description in commit 3a08c2fd7634 ("bpf: LRU List") for a full explanation of
the corresponding operations: the corresponding operations:


@ -35,18 +35,18 @@ digraph {
fn_bpf_lru_list_pop_free_to_local [shape=rectangle,fillcolor=2, fn_bpf_lru_list_pop_free_to_local [shape=rectangle,fillcolor=2,
label="Flush local pending, label="Flush local pending,
Rotate Global list, move Rotate Global list, move
target_free LOCAL_FREE_TARGET
from global -> local"] from global -> local"]
// Also corresponds to: // Also corresponds to:
// fn__local_list_flush() // fn__local_list_flush()
// fn_bpf_lru_list_rotate() // fn_bpf_lru_list_rotate()
fn___bpf_lru_node_move_to_free[shape=diamond,fillcolor=2, fn___bpf_lru_node_move_to_free[shape=diamond,fillcolor=2,
label="Able to free\ntarget_free\nnodes?"] label="Able to free\nLOCAL_FREE_TARGET\nnodes?"]
fn___bpf_lru_list_shrink_inactive [shape=rectangle,fillcolor=3, fn___bpf_lru_list_shrink_inactive [shape=rectangle,fillcolor=3,
label="Shrink inactive list label="Shrink inactive list
up to remaining up to remaining
target_free LOCAL_FREE_TARGET
(global LRU -> local)"] (global LRU -> local)"]
fn___bpf_lru_list_shrink [shape=diamond,fillcolor=2, fn___bpf_lru_list_shrink [shape=diamond,fillcolor=2,
label="> 0 entries in\nlocal free list?"] label="> 0 entries in\nlocal free list?"]


@ -52,9 +52,6 @@ properties:
'#clock-cells': '#clock-cells':
const: 1 const: 1
'#reset-cells':
const: 1
required: required:
- compatible - compatible
- reg - reg


@ -118,11 +118,15 @@ $defs:
ti,lvds-vod-swing-clock-microvolt: ti,lvds-vod-swing-clock-microvolt:
description: LVDS diferential output voltage <min max> for clock description: LVDS diferential output voltage <min max> for clock
lanes in microvolts. lanes in microvolts.
$ref: /schemas/types.yaml#/definitions/uint32-array
minItems: 2
maxItems: 2 maxItems: 2
ti,lvds-vod-swing-data-microvolt: ti,lvds-vod-swing-data-microvolt:
description: LVDS diferential output voltage <min max> for data description: LVDS diferential output voltage <min max> for data
lanes in microvolts. lanes in microvolts.
$ref: /schemas/types.yaml#/definitions/uint32-array
minItems: 2
maxItems: 2 maxItems: 2
allOf: allOf:


@ -26,8 +26,7 @@ properties:
- const: realtek,rtl9301-i2c - const: realtek,rtl9301-i2c
reg: reg:
items: description: Register offset and size this I2C controller.
- description: Register offset and size this I2C controller.
"#address-cells": "#address-cells":
const: 1 const: 1


@ -4,14 +4,14 @@
$id: http://devicetree.org/schemas/input/elan,ekth6915.yaml# $id: http://devicetree.org/schemas/input/elan,ekth6915.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml# $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Elan I2C-HID touchscreen controllers title: Elan eKTH6915 touchscreen controller
maintainers: maintainers:
- Douglas Anderson <dianders@chromium.org> - Douglas Anderson <dianders@chromium.org>
description: description:
Supports the Elan eKTH6915 and other I2C-HID touchscreen controllers. Supports the Elan eKTH6915 touchscreen controller.
These touchscreen controller use the i2c-hid protocol with a reset GPIO. This touchscreen controller uses the i2c-hid protocol with a reset GPIO.
allOf: allOf:
- $ref: /schemas/input/touchscreen/touchscreen.yaml# - $ref: /schemas/input/touchscreen/touchscreen.yaml#
@ -23,14 +23,12 @@ properties:
- enum: - enum:
- elan,ekth5015m - elan,ekth5015m
- const: elan,ekth6915 - const: elan,ekth6915
- items:
- const: elan,ekth8d18
- const: elan,ekth6a12nay
- enum: - enum:
- elan,ekth6915 - elan,ekth6915
- elan,ekth6a12nay - elan,ekth6a12nay
reg: true reg:
const: 0x10
interrupts: interrupts:
maxItems: 1 maxItems: 1


@ -23,7 +23,7 @@ properties:
- allwinner,sun20i-d1-emac - allwinner,sun20i-d1-emac
- allwinner,sun50i-h6-emac - allwinner,sun50i-h6-emac
- allwinner,sun50i-h616-emac0 - allwinner,sun50i-h616-emac0
- allwinner,sun55i-a523-gmac0 - allwinner,sun55i-a523-emac0
- const: allwinner,sun50i-a64-emac - const: allwinner,sun50i-a64-emac
reg: reg:


@ -80,8 +80,6 @@ examples:
interrupt-parent = <&intc>; interrupt-parent = <&intc>;
interrupts = <296 IRQ_TYPE_LEVEL_HIGH>; interrupts = <296 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "macirq"; interrupt-names = "macirq";
phy-handle = <&phy0>;
phy-mode = "rgmii-id";
resets = <&rst 30>; resets = <&rst 30>;
reset-names = "stmmaceth"; reset-names = "stmmaceth";
snps,multicast-filter-bins = <0>; snps,multicast-filter-bins = <0>;
@ -93,6 +91,7 @@ examples:
snps,mtl-rx-config = <&gmac0_mtl_rx_setup>; snps,mtl-rx-config = <&gmac0_mtl_rx_setup>;
snps,mtl-tx-config = <&gmac0_mtl_tx_setup>; snps,mtl-tx-config = <&gmac0_mtl_tx_setup>;
snps,axi-config = <&gmac0_stmmac_axi_setup>; snps,axi-config = <&gmac0_stmmac_axi_setup>;
status = "disabled";
gmac0_mtl_rx_setup: rx-queues-config { gmac0_mtl_rx_setup: rx-queues-config {
snps,rx-queues-to-use = <8>; snps,rx-queues-to-use = <8>;


@ -45,7 +45,7 @@ allOf:
- ns16550 - ns16550
- ns16550a - ns16550a
then: then:
oneOf: anyOf:
- required: [ clock-frequency ] - required: [ clock-frequency ]
- required: [ clocks ] - required: [ clocks ]


@ -0,0 +1,5 @@
Altera JTAG UART
Required properties:
- compatible : should be "ALTR,juart-1.0" <DEPRECATED>
- compatible : should be "altr,juart-1.0"


@ -0,0 +1,8 @@
Altera UART
Required properties:
- compatible : should be "ALTR,uart-1.0" <DEPRECATED>
- compatible : should be "altr,uart-1.0"
Optional properties:
- clock-frequency : frequency of the clock input to the UART


@ -1,19 +0,0 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/serial/altr,juart-1.0.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Altera JTAG UART
maintainers:
- Dinh Nguyen <dinguyen@kernel.org>
properties:
compatible:
const: altr,juart-1.0
required:
- compatible
additionalProperties: false


@ -1,25 +0,0 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/serial/altr,uart-1.0.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Altera UART
maintainers:
- Dinh Nguyen <dinguyen@kernel.org>
allOf:
- $ref: /schemas/serial/serial.yaml#
properties:
compatible:
const: altr,uart-1.0
clock-frequency:
description: Frequency of the clock input to the UART.
required:
- compatible
unevaluatedProperties: false


@ -1,7 +1,7 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2 %YAML 1.2
--- ---
$id: http://devicetree.org/schemas/soc/fsl/fsl,ls1028a-reset.yaml# $id: http://devicetree.org/schemas//soc/fsl/fsl,ls1028a-reset.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml# $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Freescale Layerscape Reset Registers Module title: Freescale Layerscape Reset Registers Module


@ -1249,12 +1249,3 @@ Using try_lookup_noperm() will require linux/namei.h to be included.
Calling conventions for ->d_automount() have changed; we should *not* grab Calling conventions for ->d_automount() have changed; we should *not* grab
an extra reference to new mount - it should be returned with refcount 1. an extra reference to new mount - it should be returned with refcount 1.
---
collect_mounts()/drop_collected_mounts()/iterate_mounts() are gone now.
Replacement is collect_paths()/drop_collected_path(), with no special
iterator needed. Instead of a cloned mount tree, the new interface returns
an array of struct path, one for each mount collect_mounts() would've
created. These struct path point to locations in the caller's namespace
that would be roots of the cloned mounts.


@ -6,9 +6,6 @@ $schema: https://json-schema.org/draft-07/schema
# Common defines # Common defines
$defs: $defs:
name:
type: string
pattern: ^[0-9a-z-]+$
uint: uint:
type: integer type: integer
minimum: 0 minimum: 0
@ -79,7 +76,7 @@ properties:
additionalProperties: False additionalProperties: False
properties: properties:
name: name:
$ref: '#/$defs/name' type: string
header: header:
description: For C-compatible languages, header which already defines this value. description: For C-compatible languages, header which already defines this value.
type: string type: string
@ -106,7 +103,7 @@ properties:
additionalProperties: False additionalProperties: False
properties: properties:
name: name:
$ref: '#/$defs/name' type: string
value: value:
type: integer type: integer
doc: doc:
@ -135,7 +132,7 @@ properties:
additionalProperties: False additionalProperties: False
properties: properties:
name: name:
$ref: '#/$defs/name' type: string
type: type:
description: The netlink attribute type description: The netlink attribute type
enum: [ u8, u16, u32, u64, s8, s16, s32, s64, string, binary ] enum: [ u8, u16, u32, u64, s8, s16, s32, s64, string, binary ]
@ -172,7 +169,7 @@ properties:
name: name:
description: | description: |
Name used when referring to this space in other definitions, not used outside of the spec. Name used when referring to this space in other definitions, not used outside of the spec.
$ref: '#/$defs/name' type: string
name-prefix: name-prefix:
description: | description: |
Prefix for the C enum name of the attributes. Default family[name]-set[name]-a- Prefix for the C enum name of the attributes. Default family[name]-set[name]-a-
@ -209,7 +206,7 @@ properties:
additionalProperties: False additionalProperties: False
properties: properties:
name: name:
$ref: '#/$defs/name' type: string
type: &attr-type type: &attr-type
description: The netlink attribute type description: The netlink attribute type
enum: [ unused, pad, flag, binary, bitfield32, enum: [ unused, pad, flag, binary, bitfield32,
@ -351,7 +348,7 @@ properties:
properties: properties:
name: name:
description: Name of the operation, also defining its C enum value in uAPI. description: Name of the operation, also defining its C enum value in uAPI.
$ref: '#/$defs/name' type: string
doc: doc:
description: Documentation for the command. description: Documentation for the command.
type: string type: string


@ -6,9 +6,6 @@ $schema: https://json-schema.org/draft-07/schema
# Common defines # Common defines
$defs: $defs:
name:
type: string
pattern: ^[0-9a-z-]+$
uint: uint:
type: integer type: integer
minimum: 0 minimum: 0
@ -32,7 +29,7 @@ additionalProperties: False
properties: properties:
name: name:
description: Name of the genetlink family. description: Name of the genetlink family.
$ref: '#/$defs/name' type: string
doc: doc:
type: string type: string
protocol: protocol:
@ -51,7 +48,7 @@ properties:
additionalProperties: False additionalProperties: False
properties: properties:
name: name:
$ref: '#/$defs/name' type: string
header: header:
description: For C-compatible languages, header which already defines this value. description: For C-compatible languages, header which already defines this value.
type: string type: string
@ -78,7 +75,7 @@ properties:
additionalProperties: False additionalProperties: False
properties: properties:
name: name:
$ref: '#/$defs/name' type: string
value: value:
type: integer type: integer
doc: doc:
@ -99,7 +96,7 @@ properties:
name: name:
description: | description: |
Name used when referring to this space in other definitions, not used outside of the spec. Name used when referring to this space in other definitions, not used outside of the spec.
$ref: '#/$defs/name' type: string
name-prefix: name-prefix:
description: | description: |
Prefix for the C enum name of the attributes. Default family[name]-set[name]-a- Prefix for the C enum name of the attributes. Default family[name]-set[name]-a-
@ -124,7 +121,7 @@ properties:
additionalProperties: False additionalProperties: False
properties: properties:
name: name:
$ref: '#/$defs/name' type: string
type: &attr-type type: &attr-type
enum: [ unused, pad, flag, binary, enum: [ unused, pad, flag, binary,
uint, sint, u8, u16, u32, u64, s8, s16, s32, s64, uint, sint, u8, u16, u32, u64, s8, s16, s32, s64,
@ -246,7 +243,7 @@ properties:
properties: properties:
name: name:
description: Name of the operation, also defining its C enum value in uAPI. description: Name of the operation, also defining its C enum value in uAPI.
$ref: '#/$defs/name' type: string
doc: doc:
description: Documentation for the command. description: Documentation for the command.
type: string type: string
@ -330,7 +327,7 @@ properties:
name: name:
description: | description: |
The name for the group, used to form the define and the value of the define. The name for the group, used to form the define and the value of the define.
$ref: '#/$defs/name' type: string
flags: *cmd_flags flags: *cmd_flags
kernel-family: kernel-family:


@ -6,12 +6,6 @@ $schema: https://json-schema.org/draft-07/schema
# Common defines # Common defines
$defs: $defs:
name:
type: string
pattern: ^[0-9a-z-]+$
name-cap:
type: string
pattern: ^[0-9a-zA-Z-]+$
uint: uint:
type: integer type: integer
minimum: 0 minimum: 0
@ -77,7 +71,7 @@ properties:
additionalProperties: False additionalProperties: False
properties: properties:
name: name:
$ref: '#/$defs/name' type: string
header: header:
description: For C-compatible languages, header which already defines this value. description: For C-compatible languages, header which already defines this value.
type: string type: string
@ -104,7 +98,7 @@ properties:
additionalProperties: False additionalProperties: False
properties: properties:
name: name:
$ref: '#/$defs/name' type: string
value: value:
type: integer type: integer
doc: doc:
@ -130,7 +124,7 @@ properties:
additionalProperties: False additionalProperties: False
properties: properties:
name: name:
$ref: '#/$defs/name-cap' type: string
type: type:
description: | description: |
The netlink attribute type. Members of type 'binary' or 'pad' The netlink attribute type. Members of type 'binary' or 'pad'
@ -189,7 +183,7 @@ properties:
name: name:
description: | description: |
Name used when referring to this space in other definitions, not used outside of the spec. Name used when referring to this space in other definitions, not used outside of the spec.
$ref: '#/$defs/name' type: string
name-prefix: name-prefix:
description: | description: |
Prefix for the C enum name of the attributes. Default family[name]-set[name]-a- Prefix for the C enum name of the attributes. Default family[name]-set[name]-a-
@ -226,7 +220,7 @@ properties:
additionalProperties: False additionalProperties: False
properties: properties:
name: name:
$ref: '#/$defs/name' type: string
type: &attr-type type: &attr-type
description: The netlink attribute type description: The netlink attribute type
enum: [ unused, pad, flag, binary, bitfield32, enum: [ unused, pad, flag, binary, bitfield32,
@ -414,7 +408,7 @@ properties:
properties: properties:
name: name:
description: Name of the operation, also defining its C enum value in uAPI. description: Name of the operation, also defining its C enum value in uAPI.
$ref: '#/$defs/name' type: string
doc: doc:
description: Documentation for the command. description: Documentation for the command.
type: string type: string


@ -38,15 +38,15 @@ definitions:
- -
name: dsa name: dsa
- -
name: pci-pf name: pci_pf
- -
name: pci-vf name: pci_vf
- -
name: virtual name: virtual
- -
name: unused name: unused
- -
name: pci-sf name: pci_sf
- -
type: enum type: enum
name: port-fn-state name: port-fn-state
@ -220,7 +220,7 @@ definitions:
- -
name: flag name: flag
- -
name: nul-string name: nul_string
value: 10 value: 10
- -
name: binary name: binary


@ -188,7 +188,7 @@ definitions:
value: 10000 value: 10000
- -
type: const type: const
name: pin-frequency-77-5-khz name: pin-frequency-77_5-khz
value: 77500 value: 77500
- -
type: const type: const


@ -48,7 +48,7 @@ definitions:
name: started name: started
doc: The firmware flashing process has started. doc: The firmware flashing process has started.
- -
name: in-progress name: in_progress
doc: The firmware flashing process is in progress. doc: The firmware flashing process is in progress.
- -
name: completed name: completed
@ -1422,7 +1422,7 @@ attribute-sets:
name: hkey name: hkey
type: binary type: binary
- -
name: input-xfrm name: input_xfrm
type: u32 type: u32
- -
name: start-context name: start-context
@ -2238,7 +2238,7 @@ operations:
- hfunc - hfunc
- indir - indir
- hkey - hkey
- input-xfrm - input_xfrm
dump: dump:
request: request:
attributes: attributes:


@ -15,7 +15,7 @@ kernel-policy: global
definitions: definitions:
- -
type: enum type: enum
name: encap-type name: encap_type
name-prefix: fou-encap- name-prefix: fou-encap-
enum-name: enum-name:
entries: [ unspec, direct, gue ] entries: [ unspec, direct, gue ]
@ -43,26 +43,26 @@ attribute-sets:
name: type name: type
type: u8 type: u8
- -
name: remcsum-nopartial name: remcsum_nopartial
type: flag type: flag
- -
name: local-v4 name: local_v4
type: u32 type: u32
- -
name: local-v6 name: local_v6
type: binary type: binary
checks: checks:
min-len: 16 min-len: 16
- -
name: peer-v4 name: peer_v4
type: u32 type: u32
- -
name: peer-v6 name: peer_v6
type: binary type: binary
checks: checks:
min-len: 16 min-len: 16
- -
name: peer-port name: peer_port
type: u16 type: u16
byte-order: big-endian byte-order: big-endian
- -
@ -90,12 +90,12 @@ operations:
- port - port
- ipproto - ipproto
- type - type
- remcsum-nopartial - remcsum_nopartial
- local-v4 - local_v4
- peer-v4 - peer_v4
- local-v6 - local_v6
- peer-v6 - peer_v6
- peer-port - peer_port
- ifindex - ifindex
- -
@ -112,11 +112,11 @@ operations:
- af - af
- ifindex - ifindex
- port - port
- peer-port - peer_port
- local-v4 - local_v4
- peer-v4 - peer_v4
- local-v6 - local_v6
- peer-v6 - peer_v6
- -
name: get name: get


@ -57,21 +57,21 @@ definitions:
doc: >- doc: >-
A new subflow has been established. 'error' should not be set. A new subflow has been established. 'error' should not be set.
Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 |
daddr6, sport, dport, backup, if-idx [, error]. daddr6, sport, dport, backup, if_idx [, error].
- -
name: sub-closed name: sub-closed
doc: >- doc: >-
A subflow has been closed. An error (copy of sk_err) could be set if an A subflow has been closed. An error (copy of sk_err) could be set if an
error has been detected for this subflow. error has been detected for this subflow.
Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 |
daddr6, sport, dport, backup, if-idx [, error]. daddr6, sport, dport, backup, if_idx [, error].
- -
name: sub-priority name: sub-priority
value: 13 value: 13
doc: >- doc: >-
The priority of a subflow has changed. 'error' should not be set. The priority of a subflow has changed. 'error' should not be set.
Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 |
daddr6, sport, dport, backup, if-idx [, error]. daddr6, sport, dport, backup, if_idx [, error].
- -
name: listener-created name: listener-created
value: 15 value: 15
@ -255,7 +255,7 @@ attribute-sets:
name: timeout name: timeout
type: u32 type: u32
- -
name: if-idx name: if_idx
type: u32 type: u32
- -
name: reset-reason name: reset-reason


@ -27,7 +27,7 @@ attribute-sets:
name: proc name: proc
type: u32 type: u32
- -
name: service-time name: service_time
type: s64 type: s64
- -
name: pad name: pad
@ -139,7 +139,7 @@ operations:
- prog - prog
- version - version
- proc - proc
- service-time - service_time
- saddr4 - saddr4
- daddr4 - daddr4
- saddr6 - saddr6


@ -216,7 +216,7 @@ definitions:
type: struct type: struct
members: members:
- -
name: nd-target name: nd_target
type: binary type: binary
len: 16 len: 16
byte-order: big-endian byte-order: big-endian
@ -258,12 +258,12 @@ definitions:
type: struct type: struct
members: members:
- -
name: vlan-tpid name: vlan_tpid
type: u16 type: u16
byte-order: big-endian byte-order: big-endian
doc: Tag protocol identifier (TPID) to push. doc: Tag protocol identifier (TPID) to push.
- -
name: vlan-tci name: vlan_tci
type: u16 type: u16
byte-order: big-endian byte-order: big-endian
doc: Tag control identifier (TCI) to push. doc: Tag control identifier (TCI) to push.

View File

@ -603,7 +603,7 @@ definitions:
name: optmask name: optmask
type: u32 type: u32
- -
name: if-stats-msg name: if_stats_msg
type: struct type: struct
members: members:
- -
@ -2486,7 +2486,7 @@ operations:
name: getstats name: getstats
doc: Get / dump link stats. doc: Get / dump link stats.
attribute-set: stats-attrs attribute-set: stats-attrs
fixed-header: if-stats-msg fixed-header: if_stats_msg
do: do:
request: request:
value: 94 value: 94


@ -232,7 +232,7 @@ definitions:
type: u8 type: u8
doc: log(P_max / (qth-max - qth-min)) doc: log(P_max / (qth-max - qth-min))
- -
name: Scell-log name: Scell_log
type: u8 type: u8
doc: cell size for idle damping doc: cell size for idle damping
- -
@ -253,7 +253,7 @@ definitions:
name: DPs name: DPs
type: u32 type: u32
- -
name: def-DP name: def_DP
type: u32 type: u32
- -
name: grio name: grio


@ -66,7 +66,7 @@ Admin Function driver
As mentioned above RVU PF0 is called the admin function (AF), this driver As mentioned above RVU PF0 is called the admin function (AF), this driver
supports resource provisioning and configuration of functional blocks. supports resource provisioning and configuration of functional blocks.
Doesn't handle any I/O. It sets up few basic stuff but most of the Doesn't handle any I/O. It sets up few basic stuff but most of the
functionality is achieved via configuration requests from PFs and VFs. funcionality is achieved via configuration requests from PFs and VFs.
PF/VFs communicates with AF via a shared memory region (mailbox). Upon PF/VFs communicates with AF via a shared memory region (mailbox). Upon
receiving requests AF does resource provisioning and other HW configuration. receiving requests AF does resource provisioning and other HW configuration.


@ -16,13 +16,11 @@ User interface
Creating a TLS connection Creating a TLS connection
------------------------- -------------------------
First create a new TCP socket and once the connection is established set the First create a new TCP socket and set the TLS ULP.
TLS ULP.
.. code-block:: c .. code-block:: c
sock = socket(AF_INET, SOCK_STREAM, 0); sock = socket(AF_INET, SOCK_STREAM, 0);
connect(sock, addr, addrlen);
setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls")); setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls"));
Setting the TLS ULP allows us to set/get TLS socket options. Currently Setting the TLS ULP allows us to set/get TLS socket options. Currently


@ -312,7 +312,7 @@ Posting as one thread is discouraged because it confuses patchwork
(as of patchwork 2.2.2). (as of patchwork 2.2.2).
Co-posting selftests Co-posting selftests
~~~~~~~~~~~~~~~~~~~~ --------------------
Selftests should be part of the same series as the code changes. Selftests should be part of the same series as the code changes.
Specifically for fixes both code change and related test should go into Specifically for fixes both code change and related test should go into


@ -7196,10 +7196,6 @@ The valid value for 'flags' is:
u64 leaf; u64 leaf;
u64 r11, r12, r13, r14; u64 r11, r12, r13, r14;
} get_tdvmcall_info; } get_tdvmcall_info;
struct {
u64 ret;
u64 vector;
} setup_event_notify;
}; };
} tdx; } tdx;
@ -7230,9 +7226,6 @@ inputs and outputs of the TDVMCALL. Currently the following values of
placed in fields from ``r11`` to ``r14`` of the ``get_tdvmcall_info`` placed in fields from ``r11`` to ``r14`` of the ``get_tdvmcall_info``
field of the union. field of the union.
* ``TDVMCALL_SETUP_EVENT_NOTIFY_INTERRUPT``: the guest has requested to
set up a notification interrupt for vector ``vector``.
KVM may add support for more values in the future that may cause a userspace KVM may add support for more values in the future that may cause a userspace
exit, even without calls to ``KVM_ENABLE_CAP`` or similar. In this case, exit, even without calls to ``KVM_ENABLE_CAP`` or similar. In this case,
it will enter with output fields already valid; in the common case, the it will enter with output fields already valid; in the common case, the


@ -79,20 +79,7 @@ to be configured to the TDX guest.
struct kvm_tdx_capabilities { struct kvm_tdx_capabilities {
__u64 supported_attrs; __u64 supported_attrs;
__u64 supported_xfam; __u64 supported_xfam;
__u64 reserved[254];
/* TDG.VP.VMCALL hypercalls executed in kernel and forwarded to
* userspace, respectively
*/
__u64 kernel_tdvmcallinfo_1_r11;
__u64 user_tdvmcallinfo_1_r11;
/* TDG.VP.VMCALL instruction executions subfunctions executed in kernel
* and forwarded to userspace, respectively
*/
__u64 kernel_tdvmcallinfo_1_r12;
__u64 user_tdvmcallinfo_1_r12;
__u64 reserved[250];
/* Configurable CPUID bits for userspace */ /* Configurable CPUID bits for userspace */
struct kvm_cpuid2 cpuid; struct kvm_cpuid2 cpuid;


@ -36,7 +36,7 @@ Offset Size (in bytes) Content
The WMI object flags control whether the method or notification ID is used: The WMI object flags control whether the method or notification ID is used:
- 0x1: Data block is expensive to collect. - 0x1: Data block usage is expensive and must be explicitly enabled/disabled.
- 0x2: Data block contains WMI methods. - 0x2: Data block contains WMI methods.
- 0x4: Data block contains ASCIZ string. - 0x4: Data block contains ASCIZ string.
- 0x8: Data block describes a WMI event, use notification ID instead - 0x8: Data block describes a WMI event, use notification ID instead
@ -83,18 +83,14 @@ event as hexadecimal value. Their first parameter is an integer with a value
of 0 if the WMI event should be disabled, other values will enable of 0 if the WMI event should be disabled, other values will enable
the WMI event. the WMI event.
Those ACPI methods are always called even for WMI events not registered as
being expensive to collect to match the behavior of the Windows driver.
WCxx ACPI methods WCxx ACPI methods
----------------- -----------------
Similar to the ``WExx`` ACPI methods, except that instead of WMI events it controls Similar to the ``WExx`` ACPI methods, except that it controls data collection
data collection of data blocks registered as being expensive to collect. Thus the instead of events and thus the last two characters of the ACPI method name are
last two characters of the ACPI method name are the method ID of the data block the method ID of the data block to enable/disable.
to enable/disable.
Those ACPI methods are also called before setting data blocks to match the Those ACPI methods are also called before setting data blocks to match the
behavior of the Windows driver. behaviour of the Windows driver.
_WED ACPI method _WED ACPI method
---------------- ----------------


@ -4181,7 +4181,6 @@ F: include/linux/cpumask_types.h
F: include/linux/find.h F: include/linux/find.h
F: include/linux/nodemask.h F: include/linux/nodemask.h
F: include/linux/nodemask_types.h F: include/linux/nodemask_types.h
F: include/uapi/linux/bits.h
F: include/vdso/bits.h F: include/vdso/bits.h
F: lib/bitmap-str.c F: lib/bitmap-str.c
F: lib/bitmap.c F: lib/bitmap.c
@ -4194,7 +4193,6 @@ F: tools/include/linux/bitfield.h
F: tools/include/linux/bitmap.h F: tools/include/linux/bitmap.h
F: tools/include/linux/bits.h F: tools/include/linux/bits.h
F: tools/include/linux/find.h F: tools/include/linux/find.h
F: tools/include/uapi/linux/bits.h
F: tools/include/vdso/bits.h F: tools/include/vdso/bits.h
F: tools/lib/bitmap.c F: tools/lib/bitmap.c
F: tools/lib/find_bit.c F: tools/lib/find_bit.c
@ -10506,7 +10504,7 @@ S: Maintained
F: block/partitions/efi.* F: block/partitions/efi.*
HABANALABS PCI DRIVER HABANALABS PCI DRIVER
M: Yaron Avizrat <yaron.avizrat@intel.com> M: Ofir Bitton <obitton@habana.ai>
L: dri-devel@lists.freedesktop.org L: dri-devel@lists.freedesktop.org
S: Supported S: Supported
C: irc://irc.oftc.net/dri-devel C: irc://irc.oftc.net/dri-devel
@ -11157,8 +11155,7 @@ F: include/linux/platform_data/huawei-gaokun-ec.h
HUGETLB SUBSYSTEM HUGETLB SUBSYSTEM
M: Muchun Song <muchun.song@linux.dev> M: Muchun Song <muchun.song@linux.dev>
M: Oscar Salvador <osalvador@suse.de> R: Oscar Salvador <osalvador@suse.de>
R: David Hildenbrand <david@redhat.com>
L: linux-mm@kvack.org L: linux-mm@kvack.org
S: Maintained S: Maintained
F: Documentation/ABI/testing/sysfs-kernel-mm-hugepages F: Documentation/ABI/testing/sysfs-kernel-mm-hugepages
@ -11169,7 +11166,6 @@ F: fs/hugetlbfs/
F: include/linux/hugetlb.h F: include/linux/hugetlb.h
F: include/trace/events/hugetlbfs.h F: include/trace/events/hugetlbfs.h
F: mm/hugetlb.c F: mm/hugetlb.c
F: mm/hugetlb_cgroup.c
F: mm/hugetlb_cma.c F: mm/hugetlb_cma.c
F: mm/hugetlb_cma.h F: mm/hugetlb_cma.h
F: mm/hugetlb_vmemmap.c F: mm/hugetlb_vmemmap.c
@ -13349,7 +13345,6 @@ M: Alexander Graf <graf@amazon.com>
M: Mike Rapoport <rppt@kernel.org> M: Mike Rapoport <rppt@kernel.org>
M: Changyuan Lyu <changyuanl@google.com> M: Changyuan Lyu <changyuanl@google.com>
L: kexec@lists.infradead.org L: kexec@lists.infradead.org
L: linux-mm@kvack.org
S: Maintained S: Maintained
F: Documentation/admin-guide/mm/kho.rst F: Documentation/admin-guide/mm/kho.rst
F: Documentation/core-api/kho/* F: Documentation/core-api/kho/*
@ -15552,7 +15547,6 @@ F: drivers/net/ethernet/mellanox/mlx4/en_*
MELLANOX ETHERNET DRIVER (mlx5e) MELLANOX ETHERNET DRIVER (mlx5e)
M: Saeed Mahameed <saeedm@nvidia.com> M: Saeed Mahameed <saeedm@nvidia.com>
M: Tariq Toukan <tariqt@nvidia.com> M: Tariq Toukan <tariqt@nvidia.com>
M: Mark Bloch <mbloch@nvidia.com>
L: netdev@vger.kernel.org L: netdev@vger.kernel.org
S: Maintained S: Maintained
W: https://www.nvidia.com/networking/ W: https://www.nvidia.com/networking/
@ -15622,7 +15616,6 @@ MELLANOX MLX5 core VPI driver
M: Saeed Mahameed <saeedm@nvidia.com> M: Saeed Mahameed <saeedm@nvidia.com>
M: Leon Romanovsky <leonro@nvidia.com> M: Leon Romanovsky <leonro@nvidia.com>
M: Tariq Toukan <tariqt@nvidia.com> M: Tariq Toukan <tariqt@nvidia.com>
M: Mark Bloch <mbloch@nvidia.com>
L: netdev@vger.kernel.org L: netdev@vger.kernel.org
L: linux-rdma@vger.kernel.org L: linux-rdma@vger.kernel.org
S: Maintained S: Maintained
@ -15680,16 +15673,11 @@ MEMBLOCK AND MEMORY MANAGEMENT INITIALIZATION
M: Mike Rapoport <rppt@kernel.org> M: Mike Rapoport <rppt@kernel.org>
L: linux-mm@kvack.org L: linux-mm@kvack.org
S: Maintained S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock.git for-next
T: git git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock.git fixes
F: Documentation/core-api/boot-time-mm.rst F: Documentation/core-api/boot-time-mm.rst
F: Documentation/core-api/kho/bindings/memblock/* F: Documentation/core-api/kho/bindings/memblock/*
F: include/linux/memblock.h F: include/linux/memblock.h
F: mm/bootmem_info.c
F: mm/memblock.c F: mm/memblock.c
F: mm/memtest.c
F: mm/mm_init.c F: mm/mm_init.c
F: mm/rodata_test.c
F: tools/testing/memblock/ F: tools/testing/memblock/
MEMORY ALLOCATION PROFILING MEMORY ALLOCATION PROFILING
@ -15744,6 +15732,7 @@ F: Documentation/admin-guide/mm/
F: Documentation/mm/ F: Documentation/mm/
F: include/linux/gfp.h F: include/linux/gfp.h
F: include/linux/gfp_types.h F: include/linux/gfp_types.h
F: include/linux/memfd.h
F: include/linux/memory_hotplug.h F: include/linux/memory_hotplug.h
F: include/linux/memory-tiers.h F: include/linux/memory-tiers.h
F: include/linux/mempolicy.h F: include/linux/mempolicy.h
@ -15803,10 +15792,6 @@ S: Maintained
W: http://www.linux-mm.org W: http://www.linux-mm.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm T: git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
F: mm/gup.c F: mm/gup.c
F: mm/gup_test.c
F: mm/gup_test.h
F: tools/testing/selftests/mm/gup_longterm.c
F: tools/testing/selftests/mm/gup_test.c
MEMORY MANAGEMENT - KSM (Kernel Samepage Merging) MEMORY MANAGEMENT - KSM (Kernel Samepage Merging)
M: Andrew Morton <akpm@linux-foundation.org> M: Andrew Morton <akpm@linux-foundation.org>
@ -15854,17 +15839,6 @@ F: mm/numa.c
F: mm/numa_emulation.c F: mm/numa_emulation.c
F: mm/numa_memblks.c F: mm/numa_memblks.c
MEMORY MANAGEMENT - OOM KILLER
M: Michal Hocko <mhocko@suse.com>
R: David Rientjes <rientjes@google.com>
R: Shakeel Butt <shakeel.butt@linux.dev>
L: linux-mm@kvack.org
S: Maintained
F: include/linux/oom.h
F: include/trace/events/oom.h
F: include/uapi/linux/oom.h
F: mm/oom_kill.c
MEMORY MANAGEMENT - PAGE ALLOCATOR MEMORY MANAGEMENT - PAGE ALLOCATOR
M: Andrew Morton <akpm@linux-foundation.org> M: Andrew Morton <akpm@linux-foundation.org>
M: Vlastimil Babka <vbabka@suse.cz> M: Vlastimil Babka <vbabka@suse.cz>
@ -15879,17 +15853,8 @@ F: include/linux/compaction.h
F: include/linux/gfp.h F: include/linux/gfp.h
F: include/linux/page-isolation.h F: include/linux/page-isolation.h
F: mm/compaction.c F: mm/compaction.c
F: mm/debug_page_alloc.c
F: mm/fail_page_alloc.c
F: mm/page_alloc.c F: mm/page_alloc.c
F: mm/page_ext.c
F: mm/page_frag_cache.c
F: mm/page_isolation.c F: mm/page_isolation.c
F: mm/page_owner.c
F: mm/page_poison.c
F: mm/page_reporting.c
F: mm/show_mem.c
F: mm/shuffle.c
MEMORY MANAGEMENT - RECLAIM MEMORY MANAGEMENT - RECLAIM
M: Andrew Morton <akpm@linux-foundation.org> M: Andrew Morton <akpm@linux-foundation.org>
@ -15903,7 +15868,6 @@ L: linux-mm@kvack.org
S: Maintained S: Maintained
F: mm/pt_reclaim.c F: mm/pt_reclaim.c
F: mm/vmscan.c F: mm/vmscan.c
F: mm/workingset.c
MEMORY MANAGEMENT - RMAP (REVERSE MAPPING) MEMORY MANAGEMENT - RMAP (REVERSE MAPPING)
M: Andrew Morton <akpm@linux-foundation.org> M: Andrew Morton <akpm@linux-foundation.org>
@ -15916,7 +15880,6 @@ R: Harry Yoo <harry.yoo@oracle.com>
L: linux-mm@kvack.org L: linux-mm@kvack.org
S: Maintained S: Maintained
F: include/linux/rmap.h F: include/linux/rmap.h
F: mm/page_vma_mapped.c
F: mm/rmap.c F: mm/rmap.c
MEMORY MANAGEMENT - SECRETMEM MEMORY MANAGEMENT - SECRETMEM
@ -15949,9 +15912,9 @@ F: mm/swapfile.c
MEMORY MANAGEMENT - THP (TRANSPARENT HUGE PAGE) MEMORY MANAGEMENT - THP (TRANSPARENT HUGE PAGE)
M: Andrew Morton <akpm@linux-foundation.org> M: Andrew Morton <akpm@linux-foundation.org>
M: David Hildenbrand <david@redhat.com> M: David Hildenbrand <david@redhat.com>
M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Zi Yan <ziy@nvidia.com> R: Zi Yan <ziy@nvidia.com>
R: Baolin Wang <baolin.wang@linux.alibaba.com> R: Baolin Wang <baolin.wang@linux.alibaba.com>
R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Liam R. Howlett <Liam.Howlett@oracle.com> R: Liam R. Howlett <Liam.Howlett@oracle.com>
R: Nico Pache <npache@redhat.com> R: Nico Pache <npache@redhat.com>
R: Ryan Roberts <ryan.roberts@arm.com> R: Ryan Roberts <ryan.roberts@arm.com>
@ -16009,14 +15972,11 @@ S: Maintained
W: http://www.linux-mm.org W: http://www.linux-mm.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm T: git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
F: include/trace/events/mmap.h F: include/trace/events/mmap.h
F: mm/mincore.c
F: mm/mlock.c F: mm/mlock.c
F: mm/mmap.c F: mm/mmap.c
F: mm/mprotect.c F: mm/mprotect.c
F: mm/mremap.c F: mm/mremap.c
F: mm/mseal.c F: mm/mseal.c
F: mm/msync.c
F: mm/nommu.c
F: mm/vma.c F: mm/vma.c
F: mm/vma.h F: mm/vma.h
F: mm/vma_exec.c F: mm/vma_exec.c
@ -16824,8 +16784,8 @@ F: include/dt-bindings/clock/mobileye,eyeq5-clk.h
MODULE SUPPORT MODULE SUPPORT
M: Luis Chamberlain <mcgrof@kernel.org> M: Luis Chamberlain <mcgrof@kernel.org>
M: Petr Pavlu <petr.pavlu@suse.com> M: Petr Pavlu <petr.pavlu@suse.com>
M: Daniel Gomez <da.gomez@kernel.org>
R: Sami Tolvanen <samitolvanen@google.com> R: Sami Tolvanen <samitolvanen@google.com>
R: Daniel Gomez <da.gomez@samsung.com>
L: linux-modules@vger.kernel.org L: linux-modules@vger.kernel.org
L: linux-kernel@vger.kernel.org L: linux-kernel@vger.kernel.org
S: Maintained S: Maintained
@ -17224,10 +17184,10 @@ F: drivers/rtc/rtc-ntxec.c
F: include/linux/mfd/ntxec.h F: include/linux/mfd/ntxec.h
NETRONOME ETHERNET DRIVERS NETRONOME ETHERNET DRIVERS
M: Louis Peens <louis.peens@corigine.com>
R: Jakub Kicinski <kuba@kernel.org> R: Jakub Kicinski <kuba@kernel.org>
R: Simon Horman <horms@kernel.org>
L: oss-drivers@corigine.com L: oss-drivers@corigine.com
S: Odd Fixes S: Maintained
F: drivers/net/ethernet/netronome/ F: drivers/net/ethernet/netronome/
NETWORK BLOCK DEVICE (NBD) NETWORK BLOCK DEVICE (NBD)
@ -19603,7 +19563,8 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/pinctrl/intel.git
F: drivers/pinctrl/intel/ F: drivers/pinctrl/intel/
PIN CONTROLLER - KEEMBAY PIN CONTROLLER - KEEMBAY
S: Orphan M: Lakshmi Sowjanya D <lakshmi.sowjanya.d@intel.com>
S: Supported
F: drivers/pinctrl/pinctrl-keembay* F: drivers/pinctrl/pinctrl-keembay*
PIN CONTROLLER - MEDIATEK PIN CONTROLLER - MEDIATEK
@ -20156,15 +20117,21 @@ S: Supported
F: Documentation/devicetree/bindings/soc/qcom/qcom,apr* F: Documentation/devicetree/bindings/soc/qcom/qcom,apr*
F: Documentation/devicetree/bindings/sound/qcom,* F: Documentation/devicetree/bindings/sound/qcom,*
F: drivers/soc/qcom/apr.c F: drivers/soc/qcom/apr.c
F: drivers/soundwire/qcom.c F: include/dt-bindings/sound/qcom,wcd9335.h
F: include/dt-bindings/sound/qcom,wcd93* F: include/dt-bindings/sound/qcom,wcd934x.h
F: sound/soc/codecs/lpass-*.* F: sound/soc/codecs/lpass-rx-macro.*
F: sound/soc/codecs/lpass-tx-macro.*
F: sound/soc/codecs/lpass-va-macro.c
F: sound/soc/codecs/lpass-wsa-macro.*
F: sound/soc/codecs/msm8916-wcd-analog.c F: sound/soc/codecs/msm8916-wcd-analog.c
F: sound/soc/codecs/msm8916-wcd-digital.c F: sound/soc/codecs/msm8916-wcd-digital.c
F: sound/soc/codecs/wcd-clsh-v2.* F: sound/soc/codecs/wcd-clsh-v2.*
F: sound/soc/codecs/wcd-mbhc-v2.* F: sound/soc/codecs/wcd-mbhc-v2.*
F: sound/soc/codecs/wcd93*.* F: sound/soc/codecs/wcd9335.*
F: sound/soc/codecs/wsa88*.* F: sound/soc/codecs/wcd934x.c
F: sound/soc/codecs/wsa881x.c
F: sound/soc/codecs/wsa883x.c
F: sound/soc/codecs/wsa884x.c
F: sound/soc/qcom/ F: sound/soc/qcom/
QCOM EMBEDDED USB DEBUGGER (EUD) QCOM EMBEDDED USB DEBUGGER (EUD)
@ -21195,7 +21162,7 @@ M: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
L: netdev@vger.kernel.org L: netdev@vger.kernel.org
L: linux-renesas-soc@vger.kernel.org L: linux-renesas-soc@vger.kernel.org
S: Maintained S: Maintained
F: Documentation/devicetree/bindings/net/renesas,rzv2h-gbeth.yaml F: Documentation/devicetree/bindings/net/renesas,r9a09g057-gbeth.yaml
F: drivers/net/ethernet/stmicro/stmmac/dwmac-renesas-gbeth.c F: drivers/net/ethernet/stmicro/stmmac/dwmac-renesas-gbeth.c
RENESAS RZ/V2H(P) USB2PHY PORT RESET DRIVER RENESAS RZ/V2H(P) USB2PHY PORT RESET DRIVER
@ -21407,7 +21374,7 @@ N: spacemit
K: spacemit K: spacemit
RISC-V THEAD SoC SUPPORT RISC-V THEAD SoC SUPPORT
M: Drew Fustini <fustini@kernel.org> M: Drew Fustini <drew@pdp7.com>
M: Guo Ren <guoren@kernel.org> M: Guo Ren <guoren@kernel.org>
M: Fu Wei <wefu@redhat.com> M: Fu Wei <wefu@redhat.com>
L: linux-riscv@lists.infradead.org L: linux-riscv@lists.infradead.org
@ -22583,11 +22550,9 @@ S: Maintained
F: drivers/misc/sgi-xp/ F: drivers/misc/sgi-xp/
SHARED MEMORY COMMUNICATIONS (SMC) SOCKETS SHARED MEMORY COMMUNICATIONS (SMC) SOCKETS
M: D. Wythe <alibuda@linux.alibaba.com>
M: Dust Li <dust.li@linux.alibaba.com>
M: Sidraya Jayagond <sidraya@linux.ibm.com>
M: Wenjia Zhang <wenjia@linux.ibm.com> M: Wenjia Zhang <wenjia@linux.ibm.com>
R: Mahanta Jambigi <mjambigi@linux.ibm.com> M: Jan Karcher <jaka@linux.ibm.com>
R: D. Wythe <alibuda@linux.alibaba.com>
R: Tony Lu <tonylu@linux.alibaba.com> R: Tony Lu <tonylu@linux.alibaba.com>
R: Wen Gu <guwen@linux.alibaba.com> R: Wen Gu <guwen@linux.alibaba.com>
L: linux-rdma@vger.kernel.org L: linux-rdma@vger.kernel.org
@ -24098,7 +24063,6 @@ M: Bin Du <bin.du@amd.com>
L: linux-i2c@vger.kernel.org L: linux-i2c@vger.kernel.org
S: Maintained S: Maintained
F: drivers/i2c/busses/i2c-designware-amdisp.c F: drivers/i2c/busses/i2c-designware-amdisp.c
F: include/linux/soc/amd/isp4_misc.h
SYNOPSYS DESIGNWARE MMC/SD/SDIO DRIVER SYNOPSYS DESIGNWARE MMC/SD/SDIO DRIVER
M: Jaehoon Chung <jh80.chung@samsung.com> M: Jaehoon Chung <jh80.chung@samsung.com>
@ -25063,11 +25027,8 @@ M: Hugh Dickins <hughd@google.com>
R: Baolin Wang <baolin.wang@linux.alibaba.com> R: Baolin Wang <baolin.wang@linux.alibaba.com>
L: linux-mm@kvack.org L: linux-mm@kvack.org
S: Maintained S: Maintained
F: include/linux/memfd.h
F: include/linux/shmem_fs.h F: include/linux/shmem_fs.h
F: mm/memfd.c
F: mm/shmem.c F: mm/shmem.c
F: mm/shmem_quota.c
TOMOYO SECURITY MODULE TOMOYO SECURITY MODULE
M: Kentaro Takeda <takedakn@nttdata.co.jp> M: Kentaro Takeda <takedakn@nttdata.co.jp>
@ -26939,7 +26900,7 @@ F: arch/x86/kernel/stacktrace.c
F: arch/x86/kernel/unwind_*.c F: arch/x86/kernel/unwind_*.c
X86 TRUST DOMAIN EXTENSIONS (TDX) X86 TRUST DOMAIN EXTENSIONS (TDX)
M: Kirill A. Shutemov <kas@kernel.org> M: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
R: Dave Hansen <dave.hansen@linux.intel.com> R: Dave Hansen <dave.hansen@linux.intel.com>
L: x86@kernel.org L: x86@kernel.org
L: linux-coco@lists.linux.dev L: linux-coco@lists.linux.dev
@ -27308,6 +27269,13 @@ S: Supported
W: http://www.marvell.com W: http://www.marvell.com
F: drivers/i2c/busses/i2c-xlp9xx.c F: drivers/i2c/busses/i2c-xlp9xx.c
XRA1403 GPIO EXPANDER
M: Nandor Han <nandor.han@ge.com>
L: linux-gpio@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/gpio/gpio-xra1403.txt
F: drivers/gpio/gpio-xra1403.c
XTENSA XTFPGA PLATFORM SUPPORT XTENSA XTFPGA PLATFORM SUPPORT
M: Max Filippov <jcmvbkbc@gmail.com> M: Max Filippov <jcmvbkbc@gmail.com>
S: Maintained S: Maintained


@ -2,7 +2,7 @@
VERSION = 6 VERSION = 6
PATCHLEVEL = 16 PATCHLEVEL = 16
SUBLEVEL = 0 SUBLEVEL = 0
EXTRAVERSION = -rc6 EXTRAVERSION = -rc3
NAME = Baby Opossum Posse NAME = Baby Opossum Posse
# *DOCUMENTATION* # *DOCUMENTATION*


@ -256,7 +256,6 @@ config ARM64
select HOTPLUG_SMT if HOTPLUG_CPU select HOTPLUG_SMT if HOTPLUG_CPU
select IRQ_DOMAIN select IRQ_DOMAIN
select IRQ_FORCED_THREADING select IRQ_FORCED_THREADING
select JUMP_LABEL
select KASAN_VMALLOC if KASAN select KASAN_VMALLOC if KASAN
select LOCK_MM_AND_FIND_VMA select LOCK_MM_AND_FIND_VMA
select MODULES_USE_ELF_RELA select MODULES_USE_ELF_RELA


@ -20,6 +20,8 @@ flash@0 {
compatible = "jedec,spi-nor"; compatible = "jedec,spi-nor";
reg = <0x0>; reg = <0x0>;
spi-max-frequency = <25000000>; spi-max-frequency = <25000000>;
#address-cells = <1>;
#size-cells = <1>;
partitions { partitions {
compatible = "fixed-partitions"; compatible = "fixed-partitions";


@ -100,8 +100,6 @@ dfr_mipi_out_panel: endpoint@0 {
&displaydfr_mipi { &displaydfr_mipi {
status = "okay"; status = "okay";
#address-cells = <1>;
#size-cells = <0>;
dfr_panel: panel@0 { dfr_panel: panel@0 {
compatible = "apple,j293-summit", "apple,summit"; compatible = "apple,j293-summit", "apple,summit";


@ -71,7 +71,7 @@ hpm1: usb-pd@3f {
*/ */
&port00 { &port00 {
bus-range = <1 1>; bus-range = <1 1>;
wifi0: wifi@0,0 { wifi0: network@0,0 {
compatible = "pci14e4,4425"; compatible = "pci14e4,4425";
reg = <0x10000 0x0 0x0 0x0 0x0>; reg = <0x10000 0x0 0x0 0x0 0x0>;
/* To be filled by the loader */ /* To be filled by the loader */


@ -405,6 +405,8 @@ displaydfr_mipi: dsi@228600000 {
compatible = "apple,t8103-display-pipe-mipi", "apple,h7-display-pipe-mipi"; compatible = "apple,t8103-display-pipe-mipi", "apple,h7-display-pipe-mipi";
reg = <0x2 0x28600000 0x0 0x100000>; reg = <0x2 0x28600000 0x0 0x100000>;
power-domains = <&ps_mipi_dsi>; power-domains = <&ps_mipi_dsi>;
#address-cells = <1>;
#size-cells = <0>;
status = "disabled"; status = "disabled";
ports { ports {


@ -63,8 +63,6 @@ dfr_mipi_out_panel: endpoint@0 {
&displaydfr_mipi { &displaydfr_mipi {
status = "okay"; status = "okay";
#address-cells = <1>;
#size-cells = <0>;
dfr_panel: panel@0 { dfr_panel: panel@0 {
compatible = "apple,j493-summit", "apple,summit"; compatible = "apple,j493-summit", "apple,summit";


@ -420,6 +420,8 @@ displaydfr_mipi: dsi@228600000 {
compatible = "apple,t8112-display-pipe-mipi", "apple,h7-display-pipe-mipi"; compatible = "apple,t8112-display-pipe-mipi", "apple,h7-display-pipe-mipi";
reg = <0x2 0x28600000 0x0 0x100000>; reg = <0x2 0x28600000 0x0 0x100000>;
power-domains = <&ps_mipi_dsi>; power-domains = <&ps_mipi_dsi>;
#address-cells = <1>;
#size-cells = <0>;
status = "disabled"; status = "disabled";
ports { ports {


@ -1573,7 +1573,6 @@ CONFIG_RESET_QCOM_AOSS=y
CONFIG_RESET_QCOM_PDC=m CONFIG_RESET_QCOM_PDC=m
CONFIG_RESET_RZG2L_USBPHY_CTRL=y CONFIG_RESET_RZG2L_USBPHY_CTRL=y
CONFIG_RESET_TI_SCI=y CONFIG_RESET_TI_SCI=y
CONFIG_PHY_SNPS_EUSB2=m
CONFIG_PHY_XGENE=y CONFIG_PHY_XGENE=y
CONFIG_PHY_CAN_TRANSCEIVER=m CONFIG_PHY_CAN_TRANSCEIVER=m
CONFIG_PHY_NXP_PTN3222=m CONFIG_PHY_NXP_PTN3222=m
@ -1598,6 +1597,7 @@ CONFIG_PHY_QCOM_EDP=m
CONFIG_PHY_QCOM_PCIE2=m CONFIG_PHY_QCOM_PCIE2=m
CONFIG_PHY_QCOM_QMP=m CONFIG_PHY_QCOM_QMP=m
CONFIG_PHY_QCOM_QUSB2=m CONFIG_PHY_QCOM_QUSB2=m
CONFIG_PHY_QCOM_SNPS_EUSB2=m
CONFIG_PHY_QCOM_EUSB2_REPEATER=m CONFIG_PHY_QCOM_EUSB2_REPEATER=m
CONFIG_PHY_QCOM_M31_USB=m CONFIG_PHY_QCOM_M31_USB=m
CONFIG_PHY_QCOM_USB_HS=m CONFIG_PHY_QCOM_USB_HS=m


@ -287,6 +287,17 @@
.Lskip_fgt2_\@: .Lskip_fgt2_\@:
.endm .endm
.macro __init_el2_gcs
mrs_s x1, SYS_ID_AA64PFR1_EL1
ubfx x1, x1, #ID_AA64PFR1_EL1_GCS_SHIFT, #4
cbz x1, .Lskip_gcs_\@
/* Ensure GCS is not enabled when we start trying to do BLs */
msr_s SYS_GCSCR_EL1, xzr
msr_s SYS_GCSCRE0_EL1, xzr
.Lskip_gcs_\@:
.endm
/** /**
* Initialize EL2 registers to sane values. This should be called early on all * Initialize EL2 registers to sane values. This should be called early on all
* cores that were booted in EL2. Note that everything gets initialised as * cores that were booted in EL2. Note that everything gets initialised as
@ -308,6 +319,7 @@
__init_el2_cptr __init_el2_cptr
__init_el2_fgt __init_el2_fgt
__init_el2_fgt2 __init_el2_fgt2
__init_el2_gcs
.endm .endm
#ifndef __KVM_NVHE_HYPERVISOR__ #ifndef __KVM_NVHE_HYPERVISOR__
@ -359,13 +371,6 @@
msr_s SYS_MPAMHCR_EL2, xzr // clear TRAP_MPAMIDR_EL1 -> EL2 msr_s SYS_MPAMHCR_EL2, xzr // clear TRAP_MPAMIDR_EL1 -> EL2
.Lskip_mpam_\@: .Lskip_mpam_\@:
check_override id_aa64pfr1, ID_AA64PFR1_EL1_GCS_SHIFT, .Linit_gcs_\@, .Lskip_gcs_\@, x1, x2
.Linit_gcs_\@:
msr_s SYS_GCSCR_EL1, xzr
msr_s SYS_GCSCRE0_EL1, xzr
.Lskip_gcs_\@:
check_override id_aa64pfr0, ID_AA64PFR0_EL1_SVE_SHIFT, .Linit_sve_\@, .Lskip_sve_\@, x1, x2 check_override id_aa64pfr0, ID_AA64PFR0_EL1_SVE_SHIFT, .Linit_sve_\@, .Lskip_sve_\@, x1, x2
.Linit_sve_\@: /* SVE register access */ .Linit_sve_\@: /* SVE register access */
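Annotation: the hunk above moves the early GCS disable behind a direct ID_AA64PFR1_EL1.GCS field test (mrs/ubfx/cbz) on one side, while the other side keeps it in the override-checked path. A minimal userspace sketch of that field test; the shift value below is illustrative, not taken from the generated sysreg headers.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative field position; the real value is ID_AA64PFR1_EL1_GCS_SHIFT
 * from the generated sysreg headers. */
#define GCS_SHIFT   44
#define FIELD_WIDTH 4

/* Equivalent of the "ubfx x1, x1, #shift, #4" in the hunk above:
 * extract an unsigned 4-bit field from an ID register value. */
static unsigned int id_field(uint64_t reg, unsigned int shift)
{
	return (reg >> shift) & ((1u << FIELD_WIDTH) - 1);
}

int main(void)
{
	uint64_t id_aa64pfr1 = (uint64_t)1 << GCS_SHIFT; /* pretend GCS = 0b0001 */

	if (id_field(id_aa64pfr1, GCS_SHIFT) == 0)
		printf("GCS not implemented: skip clearing GCSCR_EL1/GCSCRE0_EL1\n");
	else
		printf("GCS present: clear GCSCR_EL1 and GCSCRE0_EL1 before any BL\n");
	return 0;
}
```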

View File

@ -1480,6 +1480,7 @@ int kvm_vm_ioctl_get_reg_writable_masks(struct kvm *kvm,
struct reg_mask_range *range); struct reg_mask_range *range);
/* Guest/host FPSIMD coordination helpers */ /* Guest/host FPSIMD coordination helpers */
int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu); void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu); void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu);
void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu); void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);

View File

@ -34,7 +34,7 @@ obj-y := debug-monitors.o entry.o irq.o fpsimd.o \
cpufeature.o alternative.o cacheinfo.o \ cpufeature.o alternative.o cacheinfo.o \
smp.o smp_spin_table.o topology.o smccc-call.o \ smp.o smp_spin_table.o topology.o smccc-call.o \
syscall.o proton-pack.o idle.o patching.o pi/ \ syscall.o proton-pack.o idle.o patching.o pi/ \
rsi.o jump_label.o rsi.o
obj-$(CONFIG_COMPAT) += sys32.o signal32.o \ obj-$(CONFIG_COMPAT) += sys32.o signal32.o \
sys_compat.o sys_compat.o
@ -47,6 +47,7 @@ obj-$(CONFIG_PERF_EVENTS) += perf_regs.o perf_callchain.o
obj-$(CONFIG_HARDLOCKUP_DETECTOR_PERF) += watchdog_hld.o obj-$(CONFIG_HARDLOCKUP_DETECTOR_PERF) += watchdog_hld.o
obj-$(CONFIG_HAVE_HW_BREAKPOINT) += hw_breakpoint.o obj-$(CONFIG_HAVE_HW_BREAKPOINT) += hw_breakpoint.o
obj-$(CONFIG_CPU_PM) += sleep.o suspend.o obj-$(CONFIG_CPU_PM) += sleep.o suspend.o
obj-$(CONFIG_JUMP_LABEL) += jump_label.o
obj-$(CONFIG_KGDB) += kgdb.o obj-$(CONFIG_KGDB) += kgdb.o
obj-$(CONFIG_EFI) += efi.o efi-rt-wrapper.o obj-$(CONFIG_EFI) += efi.o efi-rt-wrapper.o
obj-$(CONFIG_PCI) += pci.o obj-$(CONFIG_PCI) += pci.o

View File

@ -3135,13 +3135,6 @@ static bool has_sve_feature(const struct arm64_cpu_capabilities *cap, int scope)
} }
#endif #endif
#ifdef CONFIG_ARM64_SME
static bool has_sme_feature(const struct arm64_cpu_capabilities *cap, int scope)
{
return system_supports_sme() && has_user_cpuid_feature(cap, scope);
}
#endif
static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = { static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
HWCAP_CAP(ID_AA64ISAR0_EL1, AES, PMULL, CAP_HWCAP, KERNEL_HWCAP_PMULL), HWCAP_CAP(ID_AA64ISAR0_EL1, AES, PMULL, CAP_HWCAP, KERNEL_HWCAP_PMULL),
HWCAP_CAP(ID_AA64ISAR0_EL1, AES, AES, CAP_HWCAP, KERNEL_HWCAP_AES), HWCAP_CAP(ID_AA64ISAR0_EL1, AES, AES, CAP_HWCAP, KERNEL_HWCAP_AES),
@ -3230,31 +3223,31 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
HWCAP_CAP(ID_AA64ISAR2_EL1, BC, IMP, CAP_HWCAP, KERNEL_HWCAP_HBC), HWCAP_CAP(ID_AA64ISAR2_EL1, BC, IMP, CAP_HWCAP, KERNEL_HWCAP_HBC),
#ifdef CONFIG_ARM64_SME #ifdef CONFIG_ARM64_SME
HWCAP_CAP(ID_AA64PFR1_EL1, SME, IMP, CAP_HWCAP, KERNEL_HWCAP_SME), HWCAP_CAP(ID_AA64PFR1_EL1, SME, IMP, CAP_HWCAP, KERNEL_HWCAP_SME),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, FA64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_FA64), HWCAP_CAP(ID_AA64SMFR0_EL1, FA64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_FA64),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, LUTv2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_LUTV2), HWCAP_CAP(ID_AA64SMFR0_EL1, LUTv2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_LUTV2),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SMEver, SME2p2, CAP_HWCAP, KERNEL_HWCAP_SME2P2), HWCAP_CAP(ID_AA64SMFR0_EL1, SMEver, SME2p2, CAP_HWCAP, KERNEL_HWCAP_SME2P2),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SMEver, SME2p1, CAP_HWCAP, KERNEL_HWCAP_SME2P1), HWCAP_CAP(ID_AA64SMFR0_EL1, SMEver, SME2p1, CAP_HWCAP, KERNEL_HWCAP_SME2P1),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SMEver, SME2, CAP_HWCAP, KERNEL_HWCAP_SME2), HWCAP_CAP(ID_AA64SMFR0_EL1, SMEver, SME2, CAP_HWCAP, KERNEL_HWCAP_SME2),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, I16I64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I64), HWCAP_CAP(ID_AA64SMFR0_EL1, I16I64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I64),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F64F64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F64F64), HWCAP_CAP(ID_AA64SMFR0_EL1, F64F64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F64F64),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, I16I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I32), HWCAP_CAP(ID_AA64SMFR0_EL1, I16I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I32),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, B16B16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16B16), HWCAP_CAP(ID_AA64SMFR0_EL1, B16B16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16B16),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F16F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F16), HWCAP_CAP(ID_AA64SMFR0_EL1, F16F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F16),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F8F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F8F16), HWCAP_CAP(ID_AA64SMFR0_EL1, F8F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F8F16),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F8F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F8F32), HWCAP_CAP(ID_AA64SMFR0_EL1, F8F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F8F32),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, I8I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I8I32), HWCAP_CAP(ID_AA64SMFR0_EL1, I8I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I8I32),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F32), HWCAP_CAP(ID_AA64SMFR0_EL1, F16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F32),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, B16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16F32), HWCAP_CAP(ID_AA64SMFR0_EL1, B16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16F32),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, BI32I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_BI32I32), HWCAP_CAP(ID_AA64SMFR0_EL1, BI32I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_BI32I32),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F32F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F32F32), HWCAP_CAP(ID_AA64SMFR0_EL1, F32F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F32F32),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SF8FMA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8FMA), HWCAP_CAP(ID_AA64SMFR0_EL1, SF8FMA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8FMA),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SF8DP4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP4), HWCAP_CAP(ID_AA64SMFR0_EL1, SF8DP4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP4),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SF8DP2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP2), HWCAP_CAP(ID_AA64SMFR0_EL1, SF8DP2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP2),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SBitPerm, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SBITPERM), HWCAP_CAP(ID_AA64SMFR0_EL1, SBitPerm, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SBITPERM),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, AES, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_AES), HWCAP_CAP(ID_AA64SMFR0_EL1, AES, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_AES),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SFEXPA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SFEXPA), HWCAP_CAP(ID_AA64SMFR0_EL1, SFEXPA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SFEXPA),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, STMOP, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_STMOP), HWCAP_CAP(ID_AA64SMFR0_EL1, STMOP, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_STMOP),
HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SMOP4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SMOP4), HWCAP_CAP(ID_AA64SMFR0_EL1, SMOP4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SMOP4),
#endif /* CONFIG_ARM64_SME */ #endif /* CONFIG_ARM64_SME */
HWCAP_CAP(ID_AA64FPFR0_EL1, F8CVT, IMP, CAP_HWCAP, KERNEL_HWCAP_F8CVT), HWCAP_CAP(ID_AA64FPFR0_EL1, F8CVT, IMP, CAP_HWCAP, KERNEL_HWCAP_F8CVT),
HWCAP_CAP(ID_AA64FPFR0_EL1, F8FMA, IMP, CAP_HWCAP, KERNEL_HWCAP_F8FMA), HWCAP_CAP(ID_AA64FPFR0_EL1, F8FMA, IMP, CAP_HWCAP, KERNEL_HWCAP_F8FMA),
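Annotation: one side of the cpufeature.c hunk above routes the SME sub-feature hwcaps through a has_sme_feature() match callback instead of the plain ID-field check. A sketch of the table-with-optional-match-callback pattern; the helpers here (field_reports_feature, sme_gated) are stand-ins, not kernel APIs.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the kernel's feature queries. */
static bool system_supports_sme(void) { return false; }
static bool field_reports_feature(const char *field) { (void)field; return true; }

struct hwcap_entry {
	const char *name;
	const char *id_field;
	/* Optional extra gate, like has_sme_feature() in the hunk above. */
	bool (*match)(const char *id_field);
};

static bool sme_gated(const char *field)
{
	/* Only trust the SME sub-feature fields when SME itself is usable. */
	return system_supports_sme() && field_reports_feature(field);
}

static const struct hwcap_entry caps[] = {
	{ "sme",       "ID_AA64PFR1_EL1.SME",     NULL },
	{ "smef64f64", "ID_AA64SMFR0_EL1.F64F64", sme_gated },
};

int main(void)
{
	for (unsigned i = 0; i < sizeof(caps) / sizeof(caps[0]); i++) {
		bool on = caps[i].match ? caps[i].match(caps[i].id_field)
					: field_reports_feature(caps[i].id_field);
		printf("%-10s %s\n", caps[i].name, on ? "set" : "hidden");
	}
	return 0;
}
```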

View File

@ -15,7 +15,6 @@
#include <asm/efi.h> #include <asm/efi.h>
#include <asm/stacktrace.h> #include <asm/stacktrace.h>
#include <asm/vmap_stack.h>
static bool region_is_misaligned(const efi_memory_desc_t *md) static bool region_is_misaligned(const efi_memory_desc_t *md)
{ {
@ -215,13 +214,9 @@ static int __init arm64_efi_rt_init(void)
if (!efi_enabled(EFI_RUNTIME_SERVICES)) if (!efi_enabled(EFI_RUNTIME_SERVICES))
return 0; return 0;
if (!IS_ENABLED(CONFIG_VMAP_STACK)) { p = __vmalloc_node(THREAD_SIZE, THREAD_ALIGN, GFP_KERNEL,
clear_bit(EFI_RUNTIME_SERVICES, &efi.flags); NUMA_NO_NODE, &&l);
return -ENOMEM; l: if (!p) {
}
p = arch_alloc_vmap_stack(THREAD_SIZE, NUMA_NO_NODE);
if (!p) {
pr_warn("Failed to allocate EFI runtime stack\n"); pr_warn("Failed to allocate EFI runtime stack\n");
clear_bit(EFI_RUNTIME_SERVICES, &efi.flags); clear_bit(EFI_RUNTIME_SERVICES, &efi.flags);
return -ENOMEM; return -ENOMEM;

View File

@ -673,11 +673,6 @@ static void permission_overlay_switch(struct task_struct *next)
current->thread.por_el0 = read_sysreg_s(SYS_POR_EL0); current->thread.por_el0 = read_sysreg_s(SYS_POR_EL0);
if (current->thread.por_el0 != next->thread.por_el0) { if (current->thread.por_el0 != next->thread.por_el0) {
write_sysreg_s(next->thread.por_el0, SYS_POR_EL0); write_sysreg_s(next->thread.por_el0, SYS_POR_EL0);
/*
* No ISB required as we can tolerate spurious Overlay faults -
* the fault handler will check again based on the new value
* of POR_EL0.
*/
} }
} }

View File

@ -1143,7 +1143,7 @@ static inline unsigned int num_other_online_cpus(void)
void smp_send_stop(void) void smp_send_stop(void)
{ {
static unsigned long stop_in_progress; static unsigned long stop_in_progress;
static cpumask_t mask; cpumask_t mask;
unsigned long timeout; unsigned long timeout;
/* /*

View File

@ -825,6 +825,10 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
if (!kvm_arm_vcpu_is_finalized(vcpu)) if (!kvm_arm_vcpu_is_finalized(vcpu))
return -EPERM; return -EPERM;
ret = kvm_arch_vcpu_run_map_fp(vcpu);
if (ret)
return ret;
if (likely(vcpu_has_run_once(vcpu))) if (likely(vcpu_has_run_once(vcpu)))
return 0; return 0;
@ -2125,7 +2129,7 @@ static void cpu_hyp_init(void *discard)
static void cpu_hyp_uninit(void *discard) static void cpu_hyp_uninit(void *discard)
{ {
if (!is_protected_kvm_enabled() && __this_cpu_read(kvm_hyp_initialized)) { if (__this_cpu_read(kvm_hyp_initialized)) {
cpu_hyp_reset(); cpu_hyp_reset();
__this_cpu_write(kvm_hyp_initialized, 0); __this_cpu_write(kvm_hyp_initialized, 0);
} }
@ -2341,13 +2345,8 @@ static void __init teardown_hyp_mode(void)
free_hyp_pgds(); free_hyp_pgds();
for_each_possible_cpu(cpu) { for_each_possible_cpu(cpu) {
if (per_cpu(kvm_hyp_initialized, cpu))
continue;
free_pages(per_cpu(kvm_arm_hyp_stack_base, cpu), NVHE_STACK_SHIFT - PAGE_SHIFT); free_pages(per_cpu(kvm_arm_hyp_stack_base, cpu), NVHE_STACK_SHIFT - PAGE_SHIFT);
free_pages(kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu], nvhe_percpu_order());
if (!kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu])
continue;
if (free_sve) { if (free_sve) {
struct cpu_sve_state *sve_state; struct cpu_sve_state *sve_state;
@ -2355,9 +2354,6 @@ static void __init teardown_hyp_mode(void)
sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state; sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state;
free_pages((unsigned long) sve_state, pkvm_host_sve_state_order()); free_pages((unsigned long) sve_state, pkvm_host_sve_state_order());
} }
free_pages(kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu], nvhe_percpu_order());
} }
} }

View File

@ -14,6 +14,32 @@
#include <asm/kvm_mmu.h> #include <asm/kvm_mmu.h>
#include <asm/sysreg.h> #include <asm/sysreg.h>
/*
* Called on entry to KVM_RUN unless this vcpu previously ran at least
* once and the most recent prior KVM_RUN for this vcpu was called from
* the same task as current (highly likely).
*
* This is guaranteed to execute before kvm_arch_vcpu_load_fp(vcpu),
* such that on entering hyp the relevant parts of current are already
* mapped.
*/
int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
{
struct user_fpsimd_state *fpsimd = &current->thread.uw.fpsimd_state;
int ret;
/* pKVM has its own tracking of the host fpsimd state. */
if (is_protected_kvm_enabled())
return 0;
/* Make sure the host task fpsimd state is visible to hyp: */
ret = kvm_share_hyp(fpsimd, fpsimd + 1);
if (ret)
return ret;
return 0;
}
/* /*
* Prepare vcpu for saving the host's FPSIMD state and loading the guest's. * Prepare vcpu for saving the host's FPSIMD state and loading the guest's.
* The actual loading is done by the FPSIMD access trap taken to hyp. * The actual loading is done by the FPSIMD access trap taken to hyp.

View File

@ -479,7 +479,6 @@ static int host_stage2_adjust_range(u64 addr, struct kvm_mem_range *range)
{ {
struct kvm_mem_range cur; struct kvm_mem_range cur;
kvm_pte_t pte; kvm_pte_t pte;
u64 granule;
s8 level; s8 level;
int ret; int ret;
@ -497,23 +496,20 @@ static int host_stage2_adjust_range(u64 addr, struct kvm_mem_range *range)
return -EPERM; return -EPERM;
} }
for (; level <= KVM_PGTABLE_LAST_LEVEL; level++) { do {
if (!kvm_level_supports_block_mapping(level)) u64 granule = kvm_granule_size(level);
continue;
granule = kvm_granule_size(level);
cur.start = ALIGN_DOWN(addr, granule); cur.start = ALIGN_DOWN(addr, granule);
cur.end = cur.start + granule; cur.end = cur.start + granule;
if (!range_included(&cur, range)) level++;
continue; } while ((level <= KVM_PGTABLE_LAST_LEVEL) &&
!(kvm_level_supports_block_mapping(level) &&
range_included(&cur, range)));
*range = cur; *range = cur;
return 0; return 0;
} }
WARN_ON(1);
return -EINVAL;
}
int host_stage2_idmap_locked(phys_addr_t addr, u64 size, int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
enum kvm_pgtable_prot prot) enum kvm_pgtable_prot prot)
{ {
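Annotation: both sides of host_stage2_adjust_range() above walk the page-table levels looking for the largest block-sized granule that is aligned around the faulting address and still contained in the allowed range; they differ mainly in loop shape and in how the no-fit case is handled. A self-contained sketch of that search, assuming illustrative 4K/2M/1G granule sizes.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct mem_range { uint64_t start, end; };

/* Illustrative granule sizes for a 4K, 4-level layout: level 0 cannot be
 * block-mapped, mirroring the kvm_level_supports_block_mapping() check. */
static const uint64_t granules[] = { 0, 1ULL << 30, 1ULL << 21, 1ULL << 12 };

static bool range_included(const struct mem_range *inner,
			   const struct mem_range *outer)
{
	return inner->start >= outer->start && inner->end <= outer->end;
}

/* Shrink *range to the largest block-sized region around addr that still
 * fits inside it, walking from coarse to fine levels. */
static int adjust_range(uint64_t addr, struct mem_range *range)
{
	for (unsigned level = 0; level < 4; level++) {
		struct mem_range cur;

		if (!granules[level])
			continue;		/* level cannot be block-mapped */

		cur.start = addr & ~(granules[level] - 1);
		cur.end = cur.start + granules[level];
		if (!range_included(&cur, range))
			continue;		/* block would spill outside range */

		*range = cur;
		return 0;
	}
	return -1;				/* smallest granule always fits here */
}

int main(void)
{
	struct mem_range r = { 0, 1ULL << 21 };	/* a 2M-aligned 2M window */

	if (!adjust_range(0x1000, &r))
		printf("mapped as one block: 0x%llx-0x%llx\n",
		       (unsigned long long)r.start, (unsigned long long)r.end);
	return 0;
}
```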

View File

@ -1402,21 +1402,6 @@ static void kvm_map_l1_vncr(struct kvm_vcpu *vcpu)
} }
} }
#define has_tgran_2(__r, __sz) \
({ \
u64 _s1, _s2, _mmfr0 = __r; \
\
_s2 = SYS_FIELD_GET(ID_AA64MMFR0_EL1, \
TGRAN##__sz##_2, _mmfr0); \
\
_s1 = SYS_FIELD_GET(ID_AA64MMFR0_EL1, \
TGRAN##__sz, _mmfr0); \
\
((_s2 != ID_AA64MMFR0_EL1_TGRAN##__sz##_2_NI && \
_s2 != ID_AA64MMFR0_EL1_TGRAN##__sz##_2_TGRAN##__sz) || \
(_s2 == ID_AA64MMFR0_EL1_TGRAN##__sz##_2_TGRAN##__sz && \
_s1 != ID_AA64MMFR0_EL1_TGRAN##__sz##_NI)); \
})
/* /*
* Our emulated CPU doesn't support all the possible features. For the * Our emulated CPU doesn't support all the possible features. For the
* sake of simplicity (and probably mental sanity), wipe out a number * sake of simplicity (and probably mental sanity), wipe out a number
@ -1426,8 +1411,6 @@ static void kvm_map_l1_vncr(struct kvm_vcpu *vcpu)
*/ */
u64 limit_nv_id_reg(struct kvm *kvm, u32 reg, u64 val) u64 limit_nv_id_reg(struct kvm *kvm, u32 reg, u64 val)
{ {
u64 orig_val = val;
switch (reg) { switch (reg) {
case SYS_ID_AA64ISAR0_EL1: case SYS_ID_AA64ISAR0_EL1:
/* Support everything but TME */ /* Support everything but TME */
@ -1497,15 +1480,12 @@ u64 limit_nv_id_reg(struct kvm *kvm, u32 reg, u64 val)
*/ */
switch (PAGE_SIZE) { switch (PAGE_SIZE) {
case SZ_4K: case SZ_4K:
if (has_tgran_2(orig_val, 4))
val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN4_2, IMP); val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN4_2, IMP);
fallthrough; fallthrough;
case SZ_16K: case SZ_16K:
if (has_tgran_2(orig_val, 16))
val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN16_2, IMP); val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN16_2, IMP);
fallthrough; fallthrough;
case SZ_64K: case SZ_64K:
if (has_tgran_2(orig_val, 64))
val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN64_2, IMP); val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN64_2, IMP);
break; break;
} }
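Annotation: one side of the hunk above carries a has_tgran_2() helper that decodes the three-way TGRANx_2 stage-2 granule fields of ID_AA64MMFR0_EL1 (not implemented / implemented / defer to the stage-1 field). The same decision in plain C, with illustrative enum names in place of the generated field constants.

```c
#include <stdbool.h>
#include <stdio.h>

/*
 * Stage-2 granule fields (TGRAN4_2, TGRAN16_2, TGRAN64_2) encode:
 *   S2_AS_STAGE1 - look at the corresponding stage-1 field instead
 *   S2_NI        - not implemented at stage 2
 *   anything else - implemented at stage 2
 */
enum s2_gran { S2_AS_STAGE1, S2_NI, S2_IMP };
enum s1_gran { S1_NI, S1_IMP };

static bool stage2_granule_supported(enum s2_gran s2, enum s1_gran s1)
{
	if (s2 == S2_NI)
		return false;
	if (s2 == S2_AS_STAGE1)
		return s1 != S1_NI;	/* defer to the stage-1 field */
	return true;			/* explicitly implemented at stage 2 */
}

int main(void)
{
	printf("defer + stage1 present -> %d\n",
	       stage2_granule_supported(S2_AS_STAGE1, S1_IMP));
	printf("explicit NI            -> %d\n",
	       stage2_granule_supported(S2_NI, S1_IMP));
	return 0;
}
```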

View File

@ -401,7 +401,9 @@ void vgic_v3_nested_update_mi(struct kvm_vcpu *vcpu)
{ {
bool level; bool level;
level = (__vcpu_sys_reg(vcpu, ICH_HCR_EL2) & ICH_HCR_EL2_En) && vgic_v3_get_misr(vcpu); level = __vcpu_sys_reg(vcpu, ICH_HCR_EL2) & ICH_HCR_EL2_En;
if (level)
level &= vgic_v3_get_misr(vcpu);
kvm_vgic_inject_irq(vcpu->kvm, vcpu, kvm_vgic_inject_irq(vcpu->kvm, vcpu,
vcpu->kvm->arch.vgic.mi_intid, level, vcpu); vcpu->kvm->arch.vgic.mi_intid, level, vcpu);
} }

View File

@ -487,29 +487,17 @@ static void do_bad_area(unsigned long far, unsigned long esr,
} }
} }
static bool fault_from_pkey(struct vm_area_struct *vma, unsigned int mm_flags) static bool fault_from_pkey(unsigned long esr, struct vm_area_struct *vma,
unsigned int mm_flags)
{ {
unsigned long iss2 = ESR_ELx_ISS2(esr);
if (!system_supports_poe()) if (!system_supports_poe())
return false; return false;
/* if (esr_fsc_is_permission_fault(esr) && (iss2 & ESR_ELx_Overlay))
* We do not check whether an Overlay fault has occurred because we return true;
* cannot make a decision based solely on its value:
*
* - If Overlay is set, a fault did occur due to POE, but it may be
* spurious in those cases where we update POR_EL0 without ISB (e.g.
* on context-switch). We would then need to manually check POR_EL0
* against vma_pkey(vma), which is exactly what
* arch_vma_access_permitted() does.
*
* - If Overlay is not set, we may still need to report a pkey fault.
* This is the case if an access was made within a mapping but with no
* page mapped, and POR_EL0 forbids the access (according to
* vma_pkey()). Such access will result in a SIGSEGV regardless
* because core code checks arch_vma_access_permitted(), but in order
* to report the correct error code - SEGV_PKUERR - we must handle
* that case here.
*/
return !arch_vma_access_permitted(vma, return !arch_vma_access_permitted(vma,
mm_flags & FAULT_FLAG_WRITE, mm_flags & FAULT_FLAG_WRITE,
mm_flags & FAULT_FLAG_INSTRUCTION, mm_flags & FAULT_FLAG_INSTRUCTION,
@ -647,7 +635,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
goto bad_area; goto bad_area;
} }
if (fault_from_pkey(vma, mm_flags)) { if (fault_from_pkey(esr, vma, mm_flags)) {
pkey = vma_pkey(vma); pkey = vma_pkey(vma);
vma_end_read(vma); vma_end_read(vma);
fault = 0; fault = 0;
@ -691,7 +679,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
goto bad_area; goto bad_area;
} }
if (fault_from_pkey(vma, mm_flags)) { if (fault_from_pkey(esr, vma, mm_flags)) {
pkey = vma_pkey(vma); pkey = vma_pkey(vma);
mmap_read_unlock(mm); mmap_read_unlock(mm);
fault = 0; fault = 0;
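Annotation: the fault_from_pkey() hunk above is about when a permission fault should be reported as a pkey (POE) fault: one side trusts the ESR Overlay bit, the other re-checks the vma's key against POR_EL0 via arch_vma_access_permitted(). A deliberately simplified model of that per-key lookup; the 4-bit R/W slots below are illustrative only, not the full POR_EL0 encoding.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* One 4-bit slot per protection key, bit 0 = read, bit 1 = write allowed. */
#define SLOT_BITS 4
#define PERM_R    0x1
#define PERM_W    0x2

static unsigned int key_perms(uint64_t por, unsigned int pkey)
{
	return (por >> (pkey * SLOT_BITS)) & 0xf;
}

static bool access_permitted(uint64_t por, unsigned int pkey, bool write)
{
	unsigned int p = key_perms(por, pkey);

	return write ? (p & PERM_W) : (p & PERM_R);
}

int main(void)
{
	uint64_t por = 0;

	por |= (uint64_t)PERM_R << (3 * SLOT_BITS);	/* key 3: read-only */

	printf("key 3 read : %s\n", access_permitted(por, 3, false) ? "ok" : "SEGV_PKUERR");
	printf("key 3 write: %s\n", access_permitted(por, 3, true)  ? "ok" : "SEGV_PKUERR");
	return 0;
}
```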

View File

@ -518,6 +518,7 @@ alternative_else_nop_endif
msr REG_PIR_EL1, x0 msr REG_PIR_EL1, x0
orr tcr2, tcr2, TCR2_EL1_PIE orr tcr2, tcr2, TCR2_EL1_PIE
msr REG_TCR2_EL1, x0
.Lskip_indirection: .Lskip_indirection:

View File

@ -50,6 +50,12 @@ struct kvm_vm_stat {
struct kvm_vm_stat_generic generic; struct kvm_vm_stat_generic generic;
u64 pages; u64 pages;
u64 hugepages; u64 hugepages;
u64 ipi_read_exits;
u64 ipi_write_exits;
u64 eiointc_read_exits;
u64 eiointc_write_exits;
u64 pch_pic_read_exits;
u64 pch_pic_write_exits;
}; };
struct kvm_vcpu_stat { struct kvm_vcpu_stat {
@ -59,12 +65,6 @@ struct kvm_vcpu_stat {
u64 cpucfg_exits; u64 cpucfg_exits;
u64 signal_exits; u64 signal_exits;
u64 hypercall_exits; u64 hypercall_exits;
u64 ipi_read_exits;
u64 ipi_write_exits;
u64 eiointc_read_exits;
u64 eiointc_write_exits;
u64 pch_pic_read_exits;
u64 pch_pic_write_exits;
}; };
#define KVM_MEM_HUGEPAGE_CAPABLE (1UL << 0) #define KVM_MEM_HUGEPAGE_CAPABLE (1UL << 0)

View File

@ -289,11 +289,9 @@ static int kvm_trap_handle_gspr(struct kvm_vcpu *vcpu)
er = EMULATE_FAIL; er = EMULATE_FAIL;
switch (((inst.word >> 24) & 0xff)) { switch (((inst.word >> 24) & 0xff)) {
case 0x0: /* CPUCFG GSPR */ case 0x0: /* CPUCFG GSPR */
trace_kvm_exit_cpucfg(vcpu, KVM_TRACE_EXIT_CPUCFG);
er = kvm_emu_cpucfg(vcpu, inst); er = kvm_emu_cpucfg(vcpu, inst);
break; break;
case 0x4: /* CSR{RD,WR,XCHG} GSPR */ case 0x4: /* CSR{RD,WR,XCHG} GSPR */
trace_kvm_exit_csr(vcpu, KVM_TRACE_EXIT_CSR);
er = kvm_handle_csr(vcpu, inst); er = kvm_handle_csr(vcpu, inst);
break; break;
case 0x6: /* Cache, Idle and IOCSR GSPR */ case 0x6: /* Cache, Idle and IOCSR GSPR */

View File

@ -9,7 +9,7 @@
static void eiointc_set_sw_coreisr(struct loongarch_eiointc *s) static void eiointc_set_sw_coreisr(struct loongarch_eiointc *s)
{ {
int ipnum, cpu, cpuid, irq; int ipnum, cpu, cpuid, irq_index, irq_mask, irq;
struct kvm_vcpu *vcpu; struct kvm_vcpu *vcpu;
for (irq = 0; irq < EIOINTC_IRQS; irq++) { for (irq = 0; irq < EIOINTC_IRQS; irq++) {
@ -18,6 +18,8 @@ static void eiointc_set_sw_coreisr(struct loongarch_eiointc *s)
ipnum = count_trailing_zeros(ipnum); ipnum = count_trailing_zeros(ipnum);
ipnum = (ipnum >= 0 && ipnum < 4) ? ipnum : 0; ipnum = (ipnum >= 0 && ipnum < 4) ? ipnum : 0;
} }
irq_index = irq / 32;
irq_mask = BIT(irq & 0x1f);
cpuid = s->coremap.reg_u8[irq]; cpuid = s->coremap.reg_u8[irq];
vcpu = kvm_get_vcpu_by_cpuid(s->kvm, cpuid); vcpu = kvm_get_vcpu_by_cpuid(s->kvm, cpuid);
@ -25,16 +27,16 @@ static void eiointc_set_sw_coreisr(struct loongarch_eiointc *s)
continue; continue;
cpu = vcpu->vcpu_id; cpu = vcpu->vcpu_id;
if (test_bit(irq, (unsigned long *)s->coreisr.reg_u32[cpu])) if (!!(s->coreisr.reg_u32[cpu][irq_index] & irq_mask))
__set_bit(irq, s->sw_coreisr[cpu][ipnum]); set_bit(irq, s->sw_coreisr[cpu][ipnum]);
else else
__clear_bit(irq, s->sw_coreisr[cpu][ipnum]); clear_bit(irq, s->sw_coreisr[cpu][ipnum]);
} }
} }
static void eiointc_update_irq(struct loongarch_eiointc *s, int irq, int level) static void eiointc_update_irq(struct loongarch_eiointc *s, int irq, int level)
{ {
int ipnum, cpu, found; int ipnum, cpu, found, irq_index, irq_mask;
struct kvm_vcpu *vcpu; struct kvm_vcpu *vcpu;
struct kvm_interrupt vcpu_irq; struct kvm_interrupt vcpu_irq;
@ -46,16 +48,19 @@ static void eiointc_update_irq(struct loongarch_eiointc *s, int irq, int level)
cpu = s->sw_coremap[irq]; cpu = s->sw_coremap[irq];
vcpu = kvm_get_vcpu(s->kvm, cpu); vcpu = kvm_get_vcpu(s->kvm, cpu);
irq_index = irq / 32;
irq_mask = BIT(irq & 0x1f);
if (level) { if (level) {
/* if not enable return false */ /* if not enable return false */
if (!test_bit(irq, (unsigned long *)s->enable.reg_u32)) if (((s->enable.reg_u32[irq_index]) & irq_mask) == 0)
return; return;
__set_bit(irq, (unsigned long *)s->coreisr.reg_u32[cpu]); s->coreisr.reg_u32[cpu][irq_index] |= irq_mask;
found = find_first_bit(s->sw_coreisr[cpu][ipnum], EIOINTC_IRQS); found = find_first_bit(s->sw_coreisr[cpu][ipnum], EIOINTC_IRQS);
__set_bit(irq, s->sw_coreisr[cpu][ipnum]); set_bit(irq, s->sw_coreisr[cpu][ipnum]);
} else { } else {
__clear_bit(irq, (unsigned long *)s->coreisr.reg_u32[cpu]); s->coreisr.reg_u32[cpu][irq_index] &= ~irq_mask;
__clear_bit(irq, s->sw_coreisr[cpu][ipnum]); clear_bit(irq, s->sw_coreisr[cpu][ipnum]);
found = find_first_bit(s->sw_coreisr[cpu][ipnum], EIOINTC_IRQS); found = find_first_bit(s->sw_coreisr[cpu][ipnum], EIOINTC_IRQS);
} }
@ -105,14 +110,159 @@ void eiointc_set_irq(struct loongarch_eiointc *s, int irq, int level)
unsigned long flags; unsigned long flags;
unsigned long *isr = (unsigned long *)s->isr.reg_u8; unsigned long *isr = (unsigned long *)s->isr.reg_u8;
level ? set_bit(irq, isr) : clear_bit(irq, isr);
spin_lock_irqsave(&s->lock, flags); spin_lock_irqsave(&s->lock, flags);
level ? __set_bit(irq, isr) : __clear_bit(irq, isr);
eiointc_update_irq(s, irq, level); eiointc_update_irq(s, irq, level);
spin_unlock_irqrestore(&s->lock, flags); spin_unlock_irqrestore(&s->lock, flags);
} }
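Annotation: the eiointc hunks above differ in whether pending bits are tracked with the generic bitmap helpers on a cast pointer or with a hand-computed u32 word index and mask; both reduce to the same irq / 32 and irq & 0x1f arithmetic, sketched here outside the kernel.

```c
#include <stdint.h>
#include <stdio.h>

#define BIT(n)       (1u << (n))
#define EIOINTC_IRQS 256

/* Per-word addressing of an irq number inside a u32 register file, as in
 * the irq_index/irq_mask computation in the hunk above. */
static void set_irq_bit(uint32_t *regs, unsigned int irq)
{
	regs[irq / 32] |= BIT(irq & 0x1f);
}

static int test_irq_bit(const uint32_t *regs, unsigned int irq)
{
	return !!(regs[irq / 32] & BIT(irq & 0x1f));
}

int main(void)
{
	uint32_t coreisr[EIOINTC_IRQS / 32] = { 0 };

	set_irq_bit(coreisr, 70);		/* word 2, bit 6 */
	printf("irq 70 pending: %d (word %u, mask 0x%08x)\n",
	       test_irq_bit(coreisr, 70), 70 / 32, BIT(70 & 0x1f));
	return 0;
}
```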
static int loongarch_eiointc_read(struct kvm_vcpu *vcpu, struct loongarch_eiointc *s, static inline void eiointc_enable_irq(struct kvm_vcpu *vcpu,
gpa_t addr, unsigned long *val) struct loongarch_eiointc *s, int index, u8 mask, int level)
{
u8 val;
int irq;
val = mask & s->isr.reg_u8[index];
irq = ffs(val);
while (irq != 0) {
/*
* enable bit change from 0 to 1,
* need to update irq by pending bits
*/
eiointc_update_irq(s, irq - 1 + index * 8, level);
val &= ~BIT(irq - 1);
irq = ffs(val);
}
}
static int loongarch_eiointc_readb(struct kvm_vcpu *vcpu, struct loongarch_eiointc *s,
gpa_t addr, int len, void *val)
{
int index, ret = 0;
u8 data = 0;
gpa_t offset;
offset = addr - EIOINTC_BASE;
switch (offset) {
case EIOINTC_NODETYPE_START ... EIOINTC_NODETYPE_END:
index = offset - EIOINTC_NODETYPE_START;
data = s->nodetype.reg_u8[index];
break;
case EIOINTC_IPMAP_START ... EIOINTC_IPMAP_END:
index = offset - EIOINTC_IPMAP_START;
data = s->ipmap.reg_u8[index];
break;
case EIOINTC_ENABLE_START ... EIOINTC_ENABLE_END:
index = offset - EIOINTC_ENABLE_START;
data = s->enable.reg_u8[index];
break;
case EIOINTC_BOUNCE_START ... EIOINTC_BOUNCE_END:
index = offset - EIOINTC_BOUNCE_START;
data = s->bounce.reg_u8[index];
break;
case EIOINTC_COREISR_START ... EIOINTC_COREISR_END:
index = offset - EIOINTC_COREISR_START;
data = s->coreisr.reg_u8[vcpu->vcpu_id][index];
break;
case EIOINTC_COREMAP_START ... EIOINTC_COREMAP_END:
index = offset - EIOINTC_COREMAP_START;
data = s->coremap.reg_u8[index];
break;
default:
ret = -EINVAL;
break;
}
*(u8 *)val = data;
return ret;
}
static int loongarch_eiointc_readw(struct kvm_vcpu *vcpu, struct loongarch_eiointc *s,
gpa_t addr, int len, void *val)
{
int index, ret = 0;
u16 data = 0;
gpa_t offset;
offset = addr - EIOINTC_BASE;
switch (offset) {
case EIOINTC_NODETYPE_START ... EIOINTC_NODETYPE_END:
index = (offset - EIOINTC_NODETYPE_START) >> 1;
data = s->nodetype.reg_u16[index];
break;
case EIOINTC_IPMAP_START ... EIOINTC_IPMAP_END:
index = (offset - EIOINTC_IPMAP_START) >> 1;
data = s->ipmap.reg_u16[index];
break;
case EIOINTC_ENABLE_START ... EIOINTC_ENABLE_END:
index = (offset - EIOINTC_ENABLE_START) >> 1;
data = s->enable.reg_u16[index];
break;
case EIOINTC_BOUNCE_START ... EIOINTC_BOUNCE_END:
index = (offset - EIOINTC_BOUNCE_START) >> 1;
data = s->bounce.reg_u16[index];
break;
case EIOINTC_COREISR_START ... EIOINTC_COREISR_END:
index = (offset - EIOINTC_COREISR_START) >> 1;
data = s->coreisr.reg_u16[vcpu->vcpu_id][index];
break;
case EIOINTC_COREMAP_START ... EIOINTC_COREMAP_END:
index = (offset - EIOINTC_COREMAP_START) >> 1;
data = s->coremap.reg_u16[index];
break;
default:
ret = -EINVAL;
break;
}
*(u16 *)val = data;
return ret;
}
static int loongarch_eiointc_readl(struct kvm_vcpu *vcpu, struct loongarch_eiointc *s,
gpa_t addr, int len, void *val)
{
int index, ret = 0;
u32 data = 0;
gpa_t offset;
offset = addr - EIOINTC_BASE;
switch (offset) {
case EIOINTC_NODETYPE_START ... EIOINTC_NODETYPE_END:
index = (offset - EIOINTC_NODETYPE_START) >> 2;
data = s->nodetype.reg_u32[index];
break;
case EIOINTC_IPMAP_START ... EIOINTC_IPMAP_END:
index = (offset - EIOINTC_IPMAP_START) >> 2;
data = s->ipmap.reg_u32[index];
break;
case EIOINTC_ENABLE_START ... EIOINTC_ENABLE_END:
index = (offset - EIOINTC_ENABLE_START) >> 2;
data = s->enable.reg_u32[index];
break;
case EIOINTC_BOUNCE_START ... EIOINTC_BOUNCE_END:
index = (offset - EIOINTC_BOUNCE_START) >> 2;
data = s->bounce.reg_u32[index];
break;
case EIOINTC_COREISR_START ... EIOINTC_COREISR_END:
index = (offset - EIOINTC_COREISR_START) >> 2;
data = s->coreisr.reg_u32[vcpu->vcpu_id][index];
break;
case EIOINTC_COREMAP_START ... EIOINTC_COREMAP_END:
index = (offset - EIOINTC_COREMAP_START) >> 2;
data = s->coremap.reg_u32[index];
break;
default:
ret = -EINVAL;
break;
}
*(u32 *)val = data;
return ret;
}
static int loongarch_eiointc_readq(struct kvm_vcpu *vcpu, struct loongarch_eiointc *s,
gpa_t addr, int len, void *val)
{ {
int index, ret = 0; int index, ret = 0;
u64 data = 0; u64 data = 0;
@ -148,7 +298,7 @@ static int loongarch_eiointc_read(struct kvm_vcpu *vcpu, struct loongarch_eioint
ret = -EINVAL; ret = -EINVAL;
break; break;
} }
*val = data; *(u64 *)val = data;
return ret; return ret;
} }
@ -158,7 +308,7 @@ static int kvm_eiointc_read(struct kvm_vcpu *vcpu,
gpa_t addr, int len, void *val) gpa_t addr, int len, void *val)
{ {
int ret = -EINVAL; int ret = -EINVAL;
unsigned long flags, data, offset; unsigned long flags;
struct loongarch_eiointc *eiointc = vcpu->kvm->arch.eiointc; struct loongarch_eiointc *eiointc = vcpu->kvm->arch.eiointc;
if (!eiointc) { if (!eiointc) {
@ -171,115 +321,355 @@ static int kvm_eiointc_read(struct kvm_vcpu *vcpu,
return -EINVAL; return -EINVAL;
} }
offset = addr & 0x7; vcpu->kvm->stat.eiointc_read_exits++;
addr -= offset;
vcpu->stat.eiointc_read_exits++;
spin_lock_irqsave(&eiointc->lock, flags); spin_lock_irqsave(&eiointc->lock, flags);
ret = loongarch_eiointc_read(vcpu, eiointc, addr, &data);
spin_unlock_irqrestore(&eiointc->lock, flags);
if (ret)
return ret;
data = data >> (offset * 8);
switch (len) { switch (len) {
case 1: case 1:
*(long *)val = (s8)data; ret = loongarch_eiointc_readb(vcpu, eiointc, addr, len, val);
break; break;
case 2: case 2:
*(long *)val = (s16)data; ret = loongarch_eiointc_readw(vcpu, eiointc, addr, len, val);
break; break;
case 4: case 4:
*(long *)val = (s32)data; ret = loongarch_eiointc_readl(vcpu, eiointc, addr, len, val);
break;
case 8:
ret = loongarch_eiointc_readq(vcpu, eiointc, addr, len, val);
break; break;
default: default:
*(long *)val = (long)data; WARN_ONCE(1, "%s: Abnormal address access: addr 0x%llx, size %d\n",
break; __func__, addr, len);
}
spin_unlock_irqrestore(&eiointc->lock, flags);
return ret;
} }
return 0; static int loongarch_eiointc_writeb(struct kvm_vcpu *vcpu,
}
static int loongarch_eiointc_write(struct kvm_vcpu *vcpu,
struct loongarch_eiointc *s, struct loongarch_eiointc *s,
gpa_t addr, u64 value, u64 field_mask) gpa_t addr, int len, const void *val)
{ {
int index, irq, ret = 0; int index, irq, bits, ret = 0;
u8 cpu; u8 cpu;
u64 data, old, mask; u8 data, old_data;
u8 coreisr, old_coreisr;
gpa_t offset; gpa_t offset;
offset = addr & 7; data = *(u8 *)val;
mask = field_mask << (offset * 8);
data = (value & field_mask) << (offset * 8);
addr -= offset;
offset = addr - EIOINTC_BASE; offset = addr - EIOINTC_BASE;
switch (offset) { switch (offset) {
case EIOINTC_NODETYPE_START ... EIOINTC_NODETYPE_END: case EIOINTC_NODETYPE_START ... EIOINTC_NODETYPE_END:
index = (offset - EIOINTC_NODETYPE_START) >> 3; index = (offset - EIOINTC_NODETYPE_START);
old = s->nodetype.reg_u64[index]; s->nodetype.reg_u8[index] = data;
s->nodetype.reg_u64[index] = (old & ~mask) | data;
break; break;
case EIOINTC_IPMAP_START ... EIOINTC_IPMAP_END: case EIOINTC_IPMAP_START ... EIOINTC_IPMAP_END:
/* /*
* ipmap cannot be set at runtime, can be set only at the beginning * ipmap cannot be set at runtime, can be set only at the beginning
* of irqchip driver, need not update upper irq level * of irqchip driver, need not update upper irq level
*/ */
old = s->ipmap.reg_u64; index = (offset - EIOINTC_IPMAP_START);
s->ipmap.reg_u64 = (old & ~mask) | data; s->ipmap.reg_u8[index] = data;
break; break;
case EIOINTC_ENABLE_START ... EIOINTC_ENABLE_END: case EIOINTC_ENABLE_START ... EIOINTC_ENABLE_END:
index = (offset - EIOINTC_ENABLE_START) >> 3; index = (offset - EIOINTC_ENABLE_START);
old = s->enable.reg_u64[index]; old_data = s->enable.reg_u8[index];
s->enable.reg_u64[index] = (old & ~mask) | data; s->enable.reg_u8[index] = data;
/* /*
* 1: enable irq. * 1: enable irq.
* update irq when isr is set. * update irq when isr is set.
*/ */
data = s->enable.reg_u64[index] & ~old & s->isr.reg_u64[index]; data = s->enable.reg_u8[index] & ~old_data & s->isr.reg_u8[index];
while (data) { eiointc_enable_irq(vcpu, s, index, data, 1);
irq = __ffs(data); /*
eiointc_update_irq(s, irq + index * 64, 1); * 0: disable irq.
data &= ~BIT_ULL(irq); * update irq when isr is set.
*/
data = ~s->enable.reg_u8[index] & old_data & s->isr.reg_u8[index];
eiointc_enable_irq(vcpu, s, index, data, 0);
break;
case EIOINTC_BOUNCE_START ... EIOINTC_BOUNCE_END:
/* do not emulate hw bounced irq routing */
index = offset - EIOINTC_BOUNCE_START;
s->bounce.reg_u8[index] = data;
break;
case EIOINTC_COREISR_START ... EIOINTC_COREISR_END:
index = (offset - EIOINTC_COREISR_START);
/* use attrs to get current cpu index */
cpu = vcpu->vcpu_id;
coreisr = data;
old_coreisr = s->coreisr.reg_u8[cpu][index];
/* write 1 to clear interrupt */
s->coreisr.reg_u8[cpu][index] = old_coreisr & ~coreisr;
coreisr &= old_coreisr;
bits = sizeof(data) * 8;
irq = find_first_bit((void *)&coreisr, bits);
while (irq < bits) {
eiointc_update_irq(s, irq + index * bits, 0);
bitmap_clear((void *)&coreisr, irq, 1);
irq = find_first_bit((void *)&coreisr, bits);
}
break;
case EIOINTC_COREMAP_START ... EIOINTC_COREMAP_END:
irq = offset - EIOINTC_COREMAP_START;
index = irq;
s->coremap.reg_u8[index] = data;
eiointc_update_sw_coremap(s, irq, data, sizeof(data), true);
break;
default:
ret = -EINVAL;
break;
}
return ret;
}
static int loongarch_eiointc_writew(struct kvm_vcpu *vcpu,
struct loongarch_eiointc *s,
gpa_t addr, int len, const void *val)
{
int i, index, irq, bits, ret = 0;
u8 cpu;
u16 data, old_data;
u16 coreisr, old_coreisr;
gpa_t offset;
data = *(u16 *)val;
offset = addr - EIOINTC_BASE;
switch (offset) {
case EIOINTC_NODETYPE_START ... EIOINTC_NODETYPE_END:
index = (offset - EIOINTC_NODETYPE_START) >> 1;
s->nodetype.reg_u16[index] = data;
break;
case EIOINTC_IPMAP_START ... EIOINTC_IPMAP_END:
/*
* ipmap cannot be set at runtime, can be set only at the beginning
* of irqchip driver, need not update upper irq level
*/
index = (offset - EIOINTC_IPMAP_START) >> 1;
s->ipmap.reg_u16[index] = data;
break;
case EIOINTC_ENABLE_START ... EIOINTC_ENABLE_END:
index = (offset - EIOINTC_ENABLE_START) >> 1;
old_data = s->enable.reg_u16[index];
s->enable.reg_u16[index] = data;
/*
* 1: enable irq.
* update irq when isr is set.
*/
data = s->enable.reg_u16[index] & ~old_data & s->isr.reg_u16[index];
for (i = 0; i < sizeof(data); i++) {
u8 mask = (data >> (i * 8)) & 0xff;
eiointc_enable_irq(vcpu, s, index * 2 + i, mask, 1);
} }
/* /*
* 0: disable irq. * 0: disable irq.
* update irq when isr is set. * update irq when isr is set.
*/ */
data = ~s->enable.reg_u64[index] & old & s->isr.reg_u64[index]; data = ~s->enable.reg_u16[index] & old_data & s->isr.reg_u16[index];
while (data) { for (i = 0; i < sizeof(data); i++) {
irq = __ffs(data); u8 mask = (data >> (i * 8)) & 0xff;
eiointc_update_irq(s, irq + index * 64, 0); eiointc_enable_irq(vcpu, s, index * 2 + i, mask, 0);
data &= ~BIT_ULL(irq); }
break;
case EIOINTC_BOUNCE_START ... EIOINTC_BOUNCE_END:
/* do not emulate hw bounced irq routing */
index = (offset - EIOINTC_BOUNCE_START) >> 1;
s->bounce.reg_u16[index] = data;
break;
case EIOINTC_COREISR_START ... EIOINTC_COREISR_END:
index = (offset - EIOINTC_COREISR_START) >> 1;
/* use attrs to get current cpu index */
cpu = vcpu->vcpu_id;
coreisr = data;
old_coreisr = s->coreisr.reg_u16[cpu][index];
/* write 1 to clear interrupt */
s->coreisr.reg_u16[cpu][index] = old_coreisr & ~coreisr;
coreisr &= old_coreisr;
bits = sizeof(data) * 8;
irq = find_first_bit((void *)&coreisr, bits);
while (irq < bits) {
eiointc_update_irq(s, irq + index * bits, 0);
bitmap_clear((void *)&coreisr, irq, 1);
irq = find_first_bit((void *)&coreisr, bits);
}
break;
case EIOINTC_COREMAP_START ... EIOINTC_COREMAP_END:
irq = offset - EIOINTC_COREMAP_START;
index = irq >> 1;
s->coremap.reg_u16[index] = data;
eiointc_update_sw_coremap(s, irq, data, sizeof(data), true);
break;
default:
ret = -EINVAL;
break;
}
return ret;
}
static int loongarch_eiointc_writel(struct kvm_vcpu *vcpu,
struct loongarch_eiointc *s,
gpa_t addr, int len, const void *val)
{
int i, index, irq, bits, ret = 0;
u8 cpu;
u32 data, old_data;
u32 coreisr, old_coreisr;
gpa_t offset;
data = *(u32 *)val;
offset = addr - EIOINTC_BASE;
switch (offset) {
case EIOINTC_NODETYPE_START ... EIOINTC_NODETYPE_END:
index = (offset - EIOINTC_NODETYPE_START) >> 2;
s->nodetype.reg_u32[index] = data;
break;
case EIOINTC_IPMAP_START ... EIOINTC_IPMAP_END:
/*
* ipmap cannot be set at runtime, can be set only at the beginning
* of irqchip driver, need not update upper irq level
*/
index = (offset - EIOINTC_IPMAP_START) >> 2;
s->ipmap.reg_u32[index] = data;
break;
case EIOINTC_ENABLE_START ... EIOINTC_ENABLE_END:
index = (offset - EIOINTC_ENABLE_START) >> 2;
old_data = s->enable.reg_u32[index];
s->enable.reg_u32[index] = data;
/*
* 1: enable irq.
* update irq when isr is set.
*/
data = s->enable.reg_u32[index] & ~old_data & s->isr.reg_u32[index];
for (i = 0; i < sizeof(data); i++) {
u8 mask = (data >> (i * 8)) & 0xff;
eiointc_enable_irq(vcpu, s, index * 4 + i, mask, 1);
}
/*
* 0: disable irq.
* update irq when isr is set.
*/
data = ~s->enable.reg_u32[index] & old_data & s->isr.reg_u32[index];
for (i = 0; i < sizeof(data); i++) {
u8 mask = (data >> (i * 8)) & 0xff;
eiointc_enable_irq(vcpu, s, index * 4 + i, mask, 0);
}
break;
case EIOINTC_BOUNCE_START ... EIOINTC_BOUNCE_END:
/* do not emulate hw bounced irq routing */
index = (offset - EIOINTC_BOUNCE_START) >> 2;
s->bounce.reg_u32[index] = data;
break;
case EIOINTC_COREISR_START ... EIOINTC_COREISR_END:
index = (offset - EIOINTC_COREISR_START) >> 2;
/* use attrs to get current cpu index */
cpu = vcpu->vcpu_id;
coreisr = data;
old_coreisr = s->coreisr.reg_u32[cpu][index];
/* write 1 to clear interrupt */
s->coreisr.reg_u32[cpu][index] = old_coreisr & ~coreisr;
coreisr &= old_coreisr;
bits = sizeof(data) * 8;
irq = find_first_bit((void *)&coreisr, bits);
while (irq < bits) {
eiointc_update_irq(s, irq + index * bits, 0);
bitmap_clear((void *)&coreisr, irq, 1);
irq = find_first_bit((void *)&coreisr, bits);
}
break;
case EIOINTC_COREMAP_START ... EIOINTC_COREMAP_END:
irq = offset - EIOINTC_COREMAP_START;
index = irq >> 2;
s->coremap.reg_u32[index] = data;
eiointc_update_sw_coremap(s, irq, data, sizeof(data), true);
break;
default:
ret = -EINVAL;
break;
}
return ret;
}
static int loongarch_eiointc_writeq(struct kvm_vcpu *vcpu,
struct loongarch_eiointc *s,
gpa_t addr, int len, const void *val)
{
int i, index, irq, bits, ret = 0;
u8 cpu;
u64 data, old_data;
u64 coreisr, old_coreisr;
gpa_t offset;
data = *(u64 *)val;
offset = addr - EIOINTC_BASE;
switch (offset) {
case EIOINTC_NODETYPE_START ... EIOINTC_NODETYPE_END:
index = (offset - EIOINTC_NODETYPE_START) >> 3;
s->nodetype.reg_u64[index] = data;
break;
case EIOINTC_IPMAP_START ... EIOINTC_IPMAP_END:
/*
* ipmap cannot be set at runtime, can be set only at the beginning
* of irqchip driver, need not update upper irq level
*/
index = (offset - EIOINTC_IPMAP_START) >> 3;
s->ipmap.reg_u64 = data;
break;
case EIOINTC_ENABLE_START ... EIOINTC_ENABLE_END:
index = (offset - EIOINTC_ENABLE_START) >> 3;
old_data = s->enable.reg_u64[index];
s->enable.reg_u64[index] = data;
/*
* 1: enable irq.
* update irq when isr is set.
*/
data = s->enable.reg_u64[index] & ~old_data & s->isr.reg_u64[index];
for (i = 0; i < sizeof(data); i++) {
u8 mask = (data >> (i * 8)) & 0xff;
eiointc_enable_irq(vcpu, s, index * 8 + i, mask, 1);
}
/*
* 0: disable irq.
* update irq when isr is set.
*/
data = ~s->enable.reg_u64[index] & old_data & s->isr.reg_u64[index];
for (i = 0; i < sizeof(data); i++) {
u8 mask = (data >> (i * 8)) & 0xff;
eiointc_enable_irq(vcpu, s, index * 8 + i, mask, 0);
} }
break; break;
case EIOINTC_BOUNCE_START ... EIOINTC_BOUNCE_END: case EIOINTC_BOUNCE_START ... EIOINTC_BOUNCE_END:
/* do not emulate hw bounced irq routing */ /* do not emulate hw bounced irq routing */
index = (offset - EIOINTC_BOUNCE_START) >> 3; index = (offset - EIOINTC_BOUNCE_START) >> 3;
old = s->bounce.reg_u64[index]; s->bounce.reg_u64[index] = data;
s->bounce.reg_u64[index] = (old & ~mask) | data;
break; break;
case EIOINTC_COREISR_START ... EIOINTC_COREISR_END: case EIOINTC_COREISR_START ... EIOINTC_COREISR_END:
index = (offset - EIOINTC_COREISR_START) >> 3; index = (offset - EIOINTC_COREISR_START) >> 3;
/* use attrs to get current cpu index */ /* use attrs to get current cpu index */
cpu = vcpu->vcpu_id; cpu = vcpu->vcpu_id;
old = s->coreisr.reg_u64[cpu][index]; coreisr = data;
old_coreisr = s->coreisr.reg_u64[cpu][index];
/* write 1 to clear interrupt */ /* write 1 to clear interrupt */
s->coreisr.reg_u64[cpu][index] = old & ~data; s->coreisr.reg_u64[cpu][index] = old_coreisr & ~coreisr;
data &= old; coreisr &= old_coreisr;
while (data) { bits = sizeof(data) * 8;
irq = __ffs(data); irq = find_first_bit((void *)&coreisr, bits);
eiointc_update_irq(s, irq + index * 64, 0); while (irq < bits) {
data &= ~BIT_ULL(irq); eiointc_update_irq(s, irq + index * bits, 0);
bitmap_clear((void *)&coreisr, irq, 1);
irq = find_first_bit((void *)&coreisr, bits);
} }
break; break;
case EIOINTC_COREMAP_START ... EIOINTC_COREMAP_END: case EIOINTC_COREMAP_START ... EIOINTC_COREMAP_END:
index = (offset - EIOINTC_COREMAP_START) >> 3; irq = offset - EIOINTC_COREMAP_START;
old = s->coremap.reg_u64[index]; index = irq >> 3;
s->coremap.reg_u64[index] = (old & ~mask) | data; s->coremap.reg_u64[index] = data;
data = s->coremap.reg_u64[index]; eiointc_update_sw_coremap(s, irq, data, sizeof(data), true);
eiointc_update_sw_coremap(s, index * 8, data, sizeof(data), true);
break; break;
default: default:
ret = -EINVAL; ret = -EINVAL;
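Annotation: the ENABLE-register write paths in the hunk above only propagate level changes for bits that both changed state and have a pending interrupt, i.e. new & ~old & isr for enables and ~new & old & isr for disables. A sketch of that delta computation over a single 32-bit enable word (uses the GCC/Clang __builtin_ctz builtin).

```c
#include <stdint.h>
#include <stdio.h>

static void update_irq(unsigned int irq, int level)
{
	printf("irq %u -> %d\n", irq, level);
}

static void enable_write(uint32_t *enable, uint32_t isr, uint32_t new_val)
{
	uint32_t old = *enable;
	uint32_t raise, lower;

	*enable = new_val;

	raise = new_val & ~old & isr;	/* just enabled, already pending */
	lower = ~new_val & old & isr;	/* just disabled, still pending  */

	while (raise) {
		unsigned int irq = __builtin_ctz(raise);

		update_irq(irq, 1);
		raise &= raise - 1;	/* clear lowest set bit */
	}
	while (lower) {
		unsigned int irq = __builtin_ctz(lower);

		update_irq(irq, 0);
		lower &= lower - 1;
	}
}

int main(void)
{
	uint32_t enable = 0x0f;		/* irqs 0-3 enabled      */
	uint32_t isr    = 0x33;		/* irqs 0,1,4,5 pending  */

	enable_write(&enable, isr, 0x3c);	/* enable 2-5, disable 0-1 */
	return 0;
}
```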
@ -294,7 +684,7 @@ static int kvm_eiointc_write(struct kvm_vcpu *vcpu,
gpa_t addr, int len, const void *val) gpa_t addr, int len, const void *val)
{ {
int ret = -EINVAL; int ret = -EINVAL;
unsigned long flags, value; unsigned long flags;
struct loongarch_eiointc *eiointc = vcpu->kvm->arch.eiointc; struct loongarch_eiointc *eiointc = vcpu->kvm->arch.eiointc;
if (!eiointc) { if (!eiointc) {
@ -307,25 +697,24 @@ static int kvm_eiointc_write(struct kvm_vcpu *vcpu,
return -EINVAL; return -EINVAL;
} }
vcpu->stat.eiointc_write_exits++; vcpu->kvm->stat.eiointc_write_exits++;
spin_lock_irqsave(&eiointc->lock, flags); spin_lock_irqsave(&eiointc->lock, flags);
switch (len) { switch (len) {
case 1: case 1:
value = *(unsigned char *)val; ret = loongarch_eiointc_writeb(vcpu, eiointc, addr, len, val);
ret = loongarch_eiointc_write(vcpu, eiointc, addr, value, 0xFF);
break; break;
case 2: case 2:
value = *(unsigned short *)val; ret = loongarch_eiointc_writew(vcpu, eiointc, addr, len, val);
ret = loongarch_eiointc_write(vcpu, eiointc, addr, value, USHRT_MAX);
break; break;
case 4: case 4:
value = *(unsigned int *)val; ret = loongarch_eiointc_writel(vcpu, eiointc, addr, len, val);
ret = loongarch_eiointc_write(vcpu, eiointc, addr, value, UINT_MAX); break;
case 8:
ret = loongarch_eiointc_writeq(vcpu, eiointc, addr, len, val);
break; break;
default: default:
value = *(unsigned long *)val; WARN_ONCE(1, "%s: Abnormal address access: addr 0x%llx, size %d\n",
ret = loongarch_eiointc_write(vcpu, eiointc, addr, value, ULONG_MAX); __func__, addr, len);
break;
} }
spin_unlock_irqrestore(&eiointc->lock, flags); spin_unlock_irqrestore(&eiointc->lock, flags);
@ -600,7 +989,7 @@ static int kvm_eiointc_create(struct kvm_device *dev, u32 type)
{ {
int ret; int ret;
struct loongarch_eiointc *s; struct loongarch_eiointc *s;
struct kvm_io_device *device; struct kvm_io_device *device, *device1;
struct kvm *kvm = dev->kvm; struct kvm *kvm = dev->kvm;
/* eiointc has been created */ /* eiointc has been created */
@ -628,10 +1017,10 @@ static int kvm_eiointc_create(struct kvm_device *dev, u32 type)
return ret; return ret;
} }
device = &s->device_vext; device1 = &s->device_vext;
kvm_iodevice_init(device, &kvm_eiointc_virt_ops); kvm_iodevice_init(device1, &kvm_eiointc_virt_ops);
ret = kvm_io_bus_register_dev(kvm, KVM_IOCSR_BUS, ret = kvm_io_bus_register_dev(kvm, KVM_IOCSR_BUS,
EIOINTC_VIRT_BASE, EIOINTC_VIRT_SIZE, device); EIOINTC_VIRT_BASE, EIOINTC_VIRT_SIZE, device1);
if (ret < 0) { if (ret < 0) {
kvm_io_bus_unregister_dev(kvm, KVM_IOCSR_BUS, &s->device); kvm_io_bus_unregister_dev(kvm, KVM_IOCSR_BUS, &s->device);
kfree(s); kfree(s);

View File

@ -268,16 +268,36 @@ static int kvm_ipi_read(struct kvm_vcpu *vcpu,
struct kvm_io_device *dev, struct kvm_io_device *dev,
gpa_t addr, int len, void *val) gpa_t addr, int len, void *val)
{ {
vcpu->stat.ipi_read_exits++; int ret;
return loongarch_ipi_readl(vcpu, addr, len, val); struct loongarch_ipi *ipi;
ipi = vcpu->kvm->arch.ipi;
if (!ipi) {
kvm_err("%s: ipi irqchip not valid!\n", __func__);
return -EINVAL;
}
ipi->kvm->stat.ipi_read_exits++;
ret = loongarch_ipi_readl(vcpu, addr, len, val);
return ret;
} }
static int kvm_ipi_write(struct kvm_vcpu *vcpu, static int kvm_ipi_write(struct kvm_vcpu *vcpu,
struct kvm_io_device *dev, struct kvm_io_device *dev,
gpa_t addr, int len, const void *val) gpa_t addr, int len, const void *val)
{ {
vcpu->stat.ipi_write_exits++; int ret;
return loongarch_ipi_writel(vcpu, addr, len, val); struct loongarch_ipi *ipi;
ipi = vcpu->kvm->arch.ipi;
if (!ipi) {
kvm_err("%s: ipi irqchip not valid!\n", __func__);
return -EINVAL;
}
ipi->kvm->stat.ipi_write_exits++;
ret = loongarch_ipi_writel(vcpu, addr, len, val);
return ret;
} }
static const struct kvm_io_device_ops kvm_ipi_ops = { static const struct kvm_io_device_ops kvm_ipi_ops = {

View File

@ -196,7 +196,7 @@ static int kvm_pch_pic_read(struct kvm_vcpu *vcpu,
} }
/* statistics of pch pic reading */ /* statistics of pch pic reading */
vcpu->stat.pch_pic_read_exits++; vcpu->kvm->stat.pch_pic_read_exits++;
ret = loongarch_pch_pic_read(s, addr, len, val); ret = loongarch_pch_pic_read(s, addr, len, val);
return ret; return ret;
@ -303,7 +303,7 @@ static int kvm_pch_pic_write(struct kvm_vcpu *vcpu,
} }
/* statistics of pch pic writing */ /* statistics of pch pic writing */
vcpu->stat.pch_pic_write_exits++; vcpu->kvm->stat.pch_pic_write_exits++;
ret = loongarch_pch_pic_write(s, addr, len, val); ret = loongarch_pch_pic_write(s, addr, len, val);
return ret; return ret;

View File

@ -46,15 +46,11 @@ DEFINE_EVENT(kvm_transition, kvm_out,
/* Further exit reasons */ /* Further exit reasons */
#define KVM_TRACE_EXIT_IDLE 64 #define KVM_TRACE_EXIT_IDLE 64
#define KVM_TRACE_EXIT_CACHE 65 #define KVM_TRACE_EXIT_CACHE 65
#define KVM_TRACE_EXIT_CPUCFG 66
#define KVM_TRACE_EXIT_CSR 67
/* Tracepoints for VM exits */ /* Tracepoints for VM exits */
#define kvm_trace_symbol_exit_types \ #define kvm_trace_symbol_exit_types \
{ KVM_TRACE_EXIT_IDLE, "IDLE" }, \ { KVM_TRACE_EXIT_IDLE, "IDLE" }, \
{ KVM_TRACE_EXIT_CACHE, "CACHE" }, \ { KVM_TRACE_EXIT_CACHE, "CACHE" }
{ KVM_TRACE_EXIT_CPUCFG, "CPUCFG" }, \
{ KVM_TRACE_EXIT_CSR, "CSR" }
DECLARE_EVENT_CLASS(kvm_exit, DECLARE_EVENT_CLASS(kvm_exit,
TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason), TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
@ -86,14 +82,6 @@ DEFINE_EVENT(kvm_exit, kvm_exit_cache,
TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason), TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
TP_ARGS(vcpu, reason)); TP_ARGS(vcpu, reason));
DEFINE_EVENT(kvm_exit, kvm_exit_cpucfg,
TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
TP_ARGS(vcpu, reason));
DEFINE_EVENT(kvm_exit, kvm_exit_csr,
TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
TP_ARGS(vcpu, reason));
DEFINE_EVENT(kvm_exit, kvm_exit, DEFINE_EVENT(kvm_exit, kvm_exit,
TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason), TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
TP_ARGS(vcpu, reason)); TP_ARGS(vcpu, reason));

View File

@ -20,13 +20,7 @@ const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
STATS_DESC_COUNTER(VCPU, idle_exits), STATS_DESC_COUNTER(VCPU, idle_exits),
STATS_DESC_COUNTER(VCPU, cpucfg_exits), STATS_DESC_COUNTER(VCPU, cpucfg_exits),
STATS_DESC_COUNTER(VCPU, signal_exits), STATS_DESC_COUNTER(VCPU, signal_exits),
STATS_DESC_COUNTER(VCPU, hypercall_exits), STATS_DESC_COUNTER(VCPU, hypercall_exits)
STATS_DESC_COUNTER(VCPU, ipi_read_exits),
STATS_DESC_COUNTER(VCPU, ipi_write_exits),
STATS_DESC_COUNTER(VCPU, eiointc_read_exits),
STATS_DESC_COUNTER(VCPU, eiointc_write_exits),
STATS_DESC_COUNTER(VCPU, pch_pic_read_exits),
STATS_DESC_COUNTER(VCPU, pch_pic_write_exits)
}; };
const struct kvm_stats_header kvm_vcpu_stats_header = { const struct kvm_stats_header kvm_vcpu_stats_header = {

View File

@ -63,8 +63,7 @@ config RISCV
select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
select ARCH_STACKWALK select ARCH_STACKWALK
select ARCH_SUPPORTS_ATOMIC_RMW select ARCH_SUPPORTS_ATOMIC_RMW
# clang >= 17: https://github.com/llvm/llvm-project/commit/62fa708ceb027713b386c7e0efda994f8bdc27e2 select ARCH_SUPPORTS_CFI_CLANG
select ARCH_SUPPORTS_CFI_CLANG if CLANG_VERSION >= 170000
select ARCH_SUPPORTS_DEBUG_PAGEALLOC if MMU select ARCH_SUPPORTS_DEBUG_PAGEALLOC if MMU
select ARCH_SUPPORTS_HUGE_PFNMAP if TRANSPARENT_HUGEPAGE select ARCH_SUPPORTS_HUGE_PFNMAP if TRANSPARENT_HUGEPAGE
select ARCH_SUPPORTS_HUGETLBFS if MMU select ARCH_SUPPORTS_HUGETLBFS if MMU

View File

@ -1075,6 +1075,7 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
*/ */
#ifdef CONFIG_64BIT #ifdef CONFIG_64BIT
#define TASK_SIZE_64 (PGDIR_SIZE * PTRS_PER_PGD / 2) #define TASK_SIZE_64 (PGDIR_SIZE * PTRS_PER_PGD / 2)
#define TASK_SIZE_MAX LONG_MAX
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
#define TASK_SIZE_32 (_AC(0x80000000, UL) - PAGE_SIZE) #define TASK_SIZE_32 (_AC(0x80000000, UL) - PAGE_SIZE)

View File

@ -206,7 +206,7 @@ static inline void __runtime_fixup_32(__le16 *lui_parcel, __le16 *addi_parcel, u
addi_insn_mask &= 0x07fff; addi_insn_mask &= 0x07fff;
} }
if (lower_immediate & 0x00000fff || lui_insn == RISCV_INSN_NOP4) { if (lower_immediate & 0x00000fff) {
/* replace upper 12 bits of addi with lower 12 bits of val */ /* replace upper 12 bits of addi with lower 12 bits of val */
addi_insn &= addi_insn_mask; addi_insn &= addi_insn_mask;
addi_insn |= (lower_immediate & 0x00000fff) << 20; addi_insn |= (lower_immediate & 0x00000fff) << 20;
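Annotation: the __runtime_fixup_32() hunk above patches a value into a lui/addi pair, which materialises a 32-bit constant as (upper << 12) + sign-extended 12-bit lower. A sketch of the usual split, including the +0x800 rounding that compensates for addi sign-extending its immediate; the NOP special-case that differs between the two sides is not modelled.

```c
#include <stdint.h>
#include <stdio.h>

static void split_imm(uint32_t val, uint32_t *upper, int32_t *lower)
{
	*upper = (val + 0x800) >> 12;	/* 20-bit lui immediate */
	*lower = val & 0xfff;		/* low 12 bits...        */
	if (*lower >= 0x800)
		*lower -= 0x1000;	/* ...as addi sign-extends them */
}

int main(void)
{
	uint32_t val = 0x12345FFF, upper;
	int32_t lower;

	split_imm(val, &upper, &lower);
	printf("lui 0x%05x, addi %d -> 0x%08x\n",
	       upper, lower, (upper << 12) + (uint32_t)lower);
	return 0;
}
```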

View File

@ -127,7 +127,6 @@ do { \
#ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT #ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
#define __get_user_8(x, ptr, label) \ #define __get_user_8(x, ptr, label) \
do { \
u32 __user *__ptr = (u32 __user *)(ptr); \ u32 __user *__ptr = (u32 __user *)(ptr); \
u32 __lo, __hi; \ u32 __lo, __hi; \
asm_goto_output( \ asm_goto_output( \
@ -142,7 +141,7 @@ do { \
: : label); \ : : label); \
(x) = (__typeof__(x))((__typeof__((x) - (x)))( \ (x) = (__typeof__(x))((__typeof__((x) - (x)))( \
(((u64)__hi << 32) | __lo))); \ (((u64)__hi << 32) | __lo))); \
} while (0)
#else /* !CONFIG_CC_HAS_ASM_GOTO_OUTPUT */ #else /* !CONFIG_CC_HAS_ASM_GOTO_OUTPUT */
#define __get_user_8(x, ptr, label) \ #define __get_user_8(x, ptr, label) \
do { \ do { \
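Annotation: the only difference in the asm-goto __get_user_8 hunk above is the do { ... } while (0) wrapper around the macro body. A small illustration of why multi-statement macros want that wrapper, using a throwaway SWAP macro rather than the kernel's accessor.

```c
#include <stdio.h>

#define SWAP_BAD(a, b)	int tmp = (a); (a) = (b); (b) = tmp
#define SWAP_OK(a, b)	do { int tmp = (a); (a) = (b); (b) = tmp; } while (0)

int main(void)
{
	int x = 1, y = 2;

	/*
	 * SWAP_BAD(x, y) would break here: it does not expand to a single
	 * statement, so the trailing else would no longer pair (and the bare
	 * declaration is not even a valid un-braced if body).  The
	 * do/while(0) form always behaves as exactly one statement:
	 */
	if (x < y)
		SWAP_OK(x, y);
	else
		printf("already ordered\n");

	printf("x=%d y=%d\n", x, y);
	return 0;
}
```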

View File

@ -18,7 +18,7 @@ static __always_inline ssize_t getrandom_syscall(void *_buffer, size_t _len, uns
register unsigned int flags asm("a2") = _flags; register unsigned int flags asm("a2") = _flags;
asm volatile ("ecall\n" asm volatile ("ecall\n"
: "=r" (ret) : "+r" (ret)
: "r" (nr), "r" (buffer), "r" (len), "r" (flags) : "r" (nr), "r" (buffer), "r" (len), "r" (flags)
: "memory"); : "memory");

View File

@ -205,11 +205,11 @@ static inline void __riscv_v_vstate_save(struct __riscv_v_ext_state *save_to,
THEAD_VSETVLI_T4X0E8M8D1 THEAD_VSETVLI_T4X0E8M8D1
THEAD_VSB_V_V0T0 THEAD_VSB_V_V0T0
"add t0, t0, t4\n\t" "add t0, t0, t4\n\t"
THEAD_VSB_V_V8T0 THEAD_VSB_V_V0T0
"add t0, t0, t4\n\t" "add t0, t0, t4\n\t"
THEAD_VSB_V_V16T0 THEAD_VSB_V_V0T0
"add t0, t0, t4\n\t" "add t0, t0, t4\n\t"
THEAD_VSB_V_V24T0 THEAD_VSB_V_V0T0
: : "r" (datap) : "memory", "t0", "t4"); : : "r" (datap) : "memory", "t0", "t4");
} else { } else {
asm volatile ( asm volatile (
@ -241,11 +241,11 @@ static inline void __riscv_v_vstate_restore(struct __riscv_v_ext_state *restore_
THEAD_VSETVLI_T4X0E8M8D1 THEAD_VSETVLI_T4X0E8M8D1
THEAD_VLB_V_V0T0 THEAD_VLB_V_V0T0
"add t0, t0, t4\n\t" "add t0, t0, t4\n\t"
THEAD_VLB_V_V8T0 THEAD_VLB_V_V0T0
"add t0, t0, t4\n\t" "add t0, t0, t4\n\t"
THEAD_VLB_V_V16T0 THEAD_VLB_V_V0T0
"add t0, t0, t4\n\t" "add t0, t0, t4\n\t"
THEAD_VLB_V_V24T0 THEAD_VLB_V_V0T0
: : "r" (datap) : "memory", "t0", "t4"); : : "r" (datap) : "memory", "t0", "t4");
} else { } else {
asm volatile ( asm volatile (

View File

@ -18,10 +18,10 @@ const struct cpu_operations cpu_ops_sbi;
/* /*
* Ordered booting via HSM brings one cpu at a time. However, cpu hotplug can * Ordered booting via HSM brings one cpu at a time. However, cpu hotplug can
* be invoked from multiple threads in parallel. Define an array of boot data * be invoked from multiple threads in parallel. Define a per cpu data
* to handle that. * to handle that.
*/ */
static struct sbi_hart_boot_data boot_data[NR_CPUS]; static DEFINE_PER_CPU(struct sbi_hart_boot_data, boot_data);
static int sbi_hsm_hart_start(unsigned long hartid, unsigned long saddr, static int sbi_hsm_hart_start(unsigned long hartid, unsigned long saddr,
unsigned long priv) unsigned long priv)
@ -67,7 +67,7 @@ static int sbi_cpu_start(unsigned int cpuid, struct task_struct *tidle)
unsigned long boot_addr = __pa_symbol(secondary_start_sbi); unsigned long boot_addr = __pa_symbol(secondary_start_sbi);
unsigned long hartid = cpuid_to_hartid_map(cpuid); unsigned long hartid = cpuid_to_hartid_map(cpuid);
unsigned long hsm_data; unsigned long hsm_data;
struct sbi_hart_boot_data *bdata = &boot_data[cpuid]; struct sbi_hart_boot_data *bdata = &per_cpu(boot_data, cpuid);
/* Make sure tidle is updated */ /* Make sure tidle is updated */
smp_mb(); smp_mb();

View File

@ -50,7 +50,6 @@ atomic_t hart_lottery __section(".sdata")
#endif #endif
; ;
unsigned long boot_cpu_hartid; unsigned long boot_cpu_hartid;
EXPORT_SYMBOL_GPL(boot_cpu_hartid);
/* /*
* Place kernel memory regions on the resource tree so that * Place kernel memory regions on the resource tree so that

View File

@ -454,7 +454,7 @@ static int handle_scalar_misaligned_load(struct pt_regs *regs)
val.data_u64 = 0; val.data_u64 = 0;
if (user_mode(regs)) { if (user_mode(regs)) {
if (copy_from_user(&val, (u8 __user *)addr, len)) if (copy_from_user_nofault(&val, (u8 __user *)addr, len))
return -1; return -1;
} else { } else {
memcpy(&val, (u8 *)addr, len); memcpy(&val, (u8 *)addr, len);
@ -555,7 +555,7 @@ static int handle_scalar_misaligned_store(struct pt_regs *regs)
return -EOPNOTSUPP; return -EOPNOTSUPP;
if (user_mode(regs)) { if (user_mode(regs)) {
if (copy_to_user((u8 __user *)addr, &val, len)) if (copy_to_user_nofault((u8 __user *)addr, &val, len))
return -1; return -1;
} else { } else {
memcpy((u8 *)addr, &val, len); memcpy((u8 *)addr, &val, len);

View File

@ -30,7 +30,7 @@ SECTIONS
*(.data .data.* .gnu.linkonce.d.*) *(.data .data.* .gnu.linkonce.d.*)
*(.dynbss) *(.dynbss)
*(.bss .bss.* .gnu.linkonce.b.*) *(.bss .bss.* .gnu.linkonce.b.*)
} :text }
.note : { *(.note.*) } :text :note .note : { *(.note.*) } :text :note

View File

@ -8,7 +8,7 @@
#include <linux/types.h> #include <linux/types.h>
/* All SiFive vendor extensions supported in Linux */ /* All SiFive vendor extensions supported in Linux */
static const struct riscv_isa_ext_data riscv_isa_vendor_ext_sifive[] = { const struct riscv_isa_ext_data riscv_isa_vendor_ext_sifive[] = {
__RISCV_ISA_EXT_DATA(xsfvfnrclipxfqf, RISCV_ISA_VENDOR_EXT_XSFVFNRCLIPXFQF), __RISCV_ISA_EXT_DATA(xsfvfnrclipxfqf, RISCV_ISA_VENDOR_EXT_XSFVFNRCLIPXFQF),
__RISCV_ISA_EXT_DATA(xsfvfwmaccqqq, RISCV_ISA_VENDOR_EXT_XSFVFWMACCQQQ), __RISCV_ISA_EXT_DATA(xsfvfwmaccqqq, RISCV_ISA_VENDOR_EXT_XSFVFWMACCQQQ),
__RISCV_ISA_EXT_DATA(xsfvqmaccdod, RISCV_ISA_VENDOR_EXT_XSFVQMACCDOD), __RISCV_ISA_EXT_DATA(xsfvqmaccdod, RISCV_ISA_VENDOR_EXT_XSFVQMACCDOD),

View File

@ -38,7 +38,6 @@ static int s390_sha1_init(struct shash_desc *desc)
sctx->state[4] = SHA1_H4; sctx->state[4] = SHA1_H4;
sctx->count = 0; sctx->count = 0;
sctx->func = CPACF_KIMD_SHA_1; sctx->func = CPACF_KIMD_SHA_1;
sctx->first_message_part = 0;
return 0; return 0;
} }
@ -61,7 +60,6 @@ static int s390_sha1_import(struct shash_desc *desc, const void *in)
sctx->count = ictx->count; sctx->count = ictx->count;
memcpy(sctx->state, ictx->state, sizeof(ictx->state)); memcpy(sctx->state, ictx->state, sizeof(ictx->state));
sctx->func = CPACF_KIMD_SHA_1; sctx->func = CPACF_KIMD_SHA_1;
sctx->first_message_part = 0;
return 0; return 0;
} }

View File

@ -32,7 +32,6 @@ static int sha512_init(struct shash_desc *desc)
ctx->count = 0; ctx->count = 0;
ctx->sha512.count_hi = 0; ctx->sha512.count_hi = 0;
ctx->func = CPACF_KIMD_SHA_512; ctx->func = CPACF_KIMD_SHA_512;
ctx->first_message_part = 0;
return 0; return 0;
} }
@ -58,7 +57,6 @@ static int sha512_import(struct shash_desc *desc, const void *in)
memcpy(sctx->state, ictx->state, sizeof(ictx->state)); memcpy(sctx->state, ictx->state, sizeof(ictx->state));
sctx->func = CPACF_KIMD_SHA_512; sctx->func = CPACF_KIMD_SHA_512;
sctx->first_message_part = 0;
return 0; return 0;
} }
@ -99,7 +97,6 @@ static int sha384_init(struct shash_desc *desc)
ctx->count = 0; ctx->count = 0;
ctx->sha512.count_hi = 0; ctx->sha512.count_hi = 0;
ctx->func = CPACF_KIMD_SHA_512; ctx->func = CPACF_KIMD_SHA_512;
ctx->first_message_part = 0;
return 0; return 0;
} }


@@ -265,7 +265,7 @@ static __always_inline unsigned long regs_get_kernel_stack_nth(struct pt_regs *r
addr = kernel_stack_pointer(regs) + n * sizeof(long); addr = kernel_stack_pointer(regs) + n * sizeof(long);
if (!regs_within_kernel_stack(regs, addr)) if (!regs_within_kernel_stack(regs, addr))
return 0; return 0;
return READ_ONCE_NOCHECK(*(unsigned long *)addr); return READ_ONCE_NOCHECK(addr);
} }
/** /**
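
The only functional difference between the two sides of this hunk is whether READ_ONCE_NOCHECK() is applied to the value stored at the computed stack address or to the address itself. A minimal sketch of that distinction; nth_slot_value() is an illustrative name, not kernel code.

#include <linux/compiler.h>     /* READ_ONCE_NOCHECK() */

static unsigned long nth_slot_value(unsigned long sp, unsigned int n)
{
        unsigned long addr = sp + n * sizeof(long);

        /*
         * Dereferencing returns the contents of the n-th stack slot;
         * READ_ONCE_NOCHECK(addr) would merely return the address value.
         */
        return READ_ONCE_NOCHECK(*(unsigned long *)addr);
}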


@@ -54,7 +54,6 @@ static inline bool ers_result_indicates_abort(pci_ers_result_t ers_res)
case PCI_ERS_RESULT_CAN_RECOVER: case PCI_ERS_RESULT_CAN_RECOVER:
case PCI_ERS_RESULT_RECOVERED: case PCI_ERS_RESULT_RECOVERED:
case PCI_ERS_RESULT_NEED_RESET: case PCI_ERS_RESULT_NEED_RESET:
case PCI_ERS_RESULT_NONE:
return false; return false;
default: default:
return true; return true;
@@ -79,6 +78,10 @@ static bool is_driver_supported(struct pci_driver *driver)
return false; return false;
if (!driver->err_handler->error_detected) if (!driver->err_handler->error_detected)
return false; return false;
if (!driver->err_handler->slot_reset)
return false;
if (!driver->err_handler->resume)
return false;
return true; return true;
} }
@@ -103,10 +106,6 @@ static pci_ers_result_t zpci_event_do_error_state_clear(struct pci_dev *pdev,
struct zpci_dev *zdev = to_zpci(pdev); struct zpci_dev *zdev = to_zpci(pdev);
int rc; int rc;
/* The underlying device may have been disabled by the event */
if (!zdev_enabled(zdev))
return PCI_ERS_RESULT_NEED_RESET;
pr_info("%s: Unblocking device access for examination\n", pci_name(pdev)); pr_info("%s: Unblocking device access for examination\n", pci_name(pdev));
rc = zpci_reset_load_store_blocked(zdev); rc = zpci_reset_load_store_blocked(zdev);
if (rc) { if (rc) {
@@ -115,11 +114,8 @@ static pci_ers_result_t zpci_event_do_error_state_clear(struct pci_dev *pdev,
return PCI_ERS_RESULT_NEED_RESET; return PCI_ERS_RESULT_NEED_RESET;
} }
if (driver->err_handler->mmio_enabled) if (driver->err_handler->mmio_enabled) {
ers_res = driver->err_handler->mmio_enabled(pdev); ers_res = driver->err_handler->mmio_enabled(pdev);
else
ers_res = PCI_ERS_RESULT_NONE;
if (ers_result_indicates_abort(ers_res)) { if (ers_result_indicates_abort(ers_res)) {
pr_info("%s: Automatic recovery failed after MMIO re-enable\n", pr_info("%s: Automatic recovery failed after MMIO re-enable\n",
pci_name(pdev)); pci_name(pdev));
@@ -128,6 +124,7 @@ static pci_ers_result_t zpci_event_do_error_state_clear(struct pci_dev *pdev,
pr_debug("%s: Driver needs reset to recover\n", pci_name(pdev)); pr_debug("%s: Driver needs reset to recover\n", pci_name(pdev));
return ers_res; return ers_res;
} }
}
pr_debug("%s: Unblocking DMA\n", pci_name(pdev)); pr_debug("%s: Unblocking DMA\n", pci_name(pdev));
rc = zpci_clear_error_state(zdev); rc = zpci_clear_error_state(zdev);
@@ -153,12 +150,7 @@ static pci_ers_result_t zpci_event_do_reset(struct pci_dev *pdev,
return ers_res; return ers_res;
} }
pdev->error_state = pci_channel_io_normal; pdev->error_state = pci_channel_io_normal;
if (driver->err_handler->slot_reset)
ers_res = driver->err_handler->slot_reset(pdev); ers_res = driver->err_handler->slot_reset(pdev);
else
ers_res = PCI_ERS_RESULT_NONE;
if (ers_result_indicates_abort(ers_res)) { if (ers_result_indicates_abort(ers_res)) {
pr_info("%s: Automatic recovery failed after slot reset\n", pci_name(pdev)); pr_info("%s: Automatic recovery failed after slot reset\n", pci_name(pdev));
return ers_res; return ers_res;
@@ -222,7 +214,7 @@ static pci_ers_result_t zpci_event_attempt_error_recovery(struct pci_dev *pdev)
goto out_unlock; goto out_unlock;
} }
if (ers_res != PCI_ERS_RESULT_NEED_RESET) { if (ers_res == PCI_ERS_RESULT_CAN_RECOVER) {
ers_res = zpci_event_do_error_state_clear(pdev, driver); ers_res = zpci_event_do_error_state_clear(pdev, driver);
if (ers_result_indicates_abort(ers_res)) { if (ers_result_indicates_abort(ers_res)) {
status_str = "failed (abort on MMIO enable)"; status_str = "failed (abort on MMIO enable)";
@@ -233,16 +225,6 @@ static pci_ers_result_t zpci_event_attempt_error_recovery(struct pci_dev *pdev)
if (ers_res == PCI_ERS_RESULT_NEED_RESET) if (ers_res == PCI_ERS_RESULT_NEED_RESET)
ers_res = zpci_event_do_reset(pdev, driver); ers_res = zpci_event_do_reset(pdev, driver);
/*
* ers_res can be PCI_ERS_RESULT_NONE either because the driver
* decided to return it, indicating that it abstains from voting
* on how to recover, or because it didn't implement the callback.
* Both cases assume, that if there is nothing else causing a
* disconnect, we recovered successfully.
*/
if (ers_res == PCI_ERS_RESULT_NONE)
ers_res = PCI_ERS_RESULT_RECOVERED;
if (ers_res != PCI_ERS_RESULT_RECOVERED) { if (ers_res != PCI_ERS_RESULT_RECOVERED) {
pr_err("%s: Automatic recovery failed; operator intervention is required\n", pr_err("%s: Automatic recovery failed; operator intervention is required\n",
pci_name(pdev)); pci_name(pdev));
@@ -291,8 +273,6 @@ static void __zpci_event_error(struct zpci_ccdf_err *ccdf)
struct zpci_dev *zdev = get_zdev_by_fid(ccdf->fid); struct zpci_dev *zdev = get_zdev_by_fid(ccdf->fid);
struct pci_dev *pdev = NULL; struct pci_dev *pdev = NULL;
pci_ers_result_t ers_res; pci_ers_result_t ers_res;
u32 fh = 0;
int rc;
zpci_dbg(3, "err fid:%x, fh:%x, pec:%x\n", zpci_dbg(3, "err fid:%x, fh:%x, pec:%x\n",
ccdf->fid, ccdf->fh, ccdf->pec); ccdf->fid, ccdf->fh, ccdf->pec);
@@ -301,15 +281,6 @@ static void __zpci_event_error(struct zpci_ccdf_err *ccdf)
if (zdev) { if (zdev) {
mutex_lock(&zdev->state_lock); mutex_lock(&zdev->state_lock);
rc = clp_refresh_fh(zdev->fid, &fh);
if (rc)
goto no_pdev;
if (!fh || ccdf->fh != fh) {
/* Ignore events with stale handles */
zpci_dbg(3, "err fid:%x, fh:%x (stale %x)\n",
ccdf->fid, fh, ccdf->fh);
goto no_pdev;
}
zpci_update_fh(zdev, ccdf->fh); zpci_update_fh(zdev, ccdf->fh);
if (zdev->zbus->bus) if (zdev->zbus->bus)
pdev = pci_get_slot(zdev->zbus->bus, zdev->devfn); pdev = pci_get_slot(zdev->zbus->bus, zdev->devfn);
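
One side of the hunks above treats the driver's mmio_enabled and slot_reset error handlers as optional, substituting PCI_ERS_RESULT_NONE when a hook is missing and folding a NONE vote into "recovered" at the end, while the other side requires the hooks up front. A condensed sketch of the optional-callback pattern; call_optional_reset() is hypothetical and omits the surrounding recovery flow.

#include <linux/pci.h>

/*
 * Sketch: invoke an optional pci_error_handlers callback.  A driver
 * without the hook abstains from voting (PCI_ERS_RESULT_NONE), which
 * is treated like a successful recovery if nothing else objects.
 */
static pci_ers_result_t call_optional_reset(struct pci_dev *pdev,
                                            struct pci_driver *driver)
{
        pci_ers_result_t res = PCI_ERS_RESULT_NONE;

        if (driver->err_handler && driver->err_handler->slot_reset)
                res = driver->err_handler->slot_reset(pdev);

        if (res == PCI_ERS_RESULT_NONE)
                res = PCI_ERS_RESULT_RECOVERED;

        return res;
}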


@@ -41,7 +41,7 @@ int start_io_thread(struct os_helper_thread **td_out, int *fd_out)
*fd_out = fds[1]; *fd_out = fds[1];
err = os_set_fd_block(*fd_out, 0); err = os_set_fd_block(*fd_out, 0);
err |= os_set_fd_block(kernel_fd, 0); err = os_set_fd_block(kernel_fd, 0);
if (err) { if (err) {
printk("start_io_thread - failed to set nonblocking I/O.\n"); printk("start_io_thread - failed to set nonblocking I/O.\n");
goto out_close; goto out_close;
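
The two sides of this hunk differ only in whether the second os_set_fd_block() result is OR-ed into err or overwrites it; with a plain assignment, a failure from the first call can be silently lost. A small user-space illustration of the accumulate-with-|= pattern; set_nonblocking() and make_both_nonblocking() are stand-ins, not the UML helpers themselves.

#include <fcntl.h>

/* Stand-in for os_set_fd_block(fd, 0): make fd non-blocking, 0 on success. */
static int set_nonblocking(int fd)
{
        int flags = fcntl(fd, F_GETFL);

        if (flags < 0)
                return -1;
        return fcntl(fd, F_SETFL, flags | O_NONBLOCK) < 0 ? -1 : 0;
}

static int make_both_nonblocking(int out_fd, int kernel_fd)
{
        int err;

        err  = set_nonblocking(out_fd);
        err |= set_nonblocking(kernel_fd);      /* '=' here would mask the first failure */

        return err;
}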


@@ -1625,19 +1625,35 @@ static void vector_eth_configure(
device->dev = dev; device->dev = dev;
INIT_LIST_HEAD(&vp->list); *vp = ((struct vector_private)
vp->dev = dev; {
vp->unit = n; .list = LIST_HEAD_INIT(vp->list),
vp->options = get_transport_options(def); .dev = dev,
vp->parsed = def; .unit = n,
vp->max_packet = get_mtu(def) + ETH_HEADER_OTHER; .options = get_transport_options(def),
/* .rx_irq = 0,
* TODO - we need to calculate headroom so that ip header .tx_irq = 0,
.parsed = def,
.max_packet = get_mtu(def) + ETH_HEADER_OTHER,
/* TODO - we need to calculate headroom so that ip header
* is 16 byte aligned all the time * is 16 byte aligned all the time
*/ */
vp->headroom = get_headroom(def); .headroom = get_headroom(def),
vp->coalesce = 2; .form_header = NULL,
vp->req_size = get_req_size(def); .verify_header = NULL,
.header_rxbuffer = NULL,
.header_txbuffer = NULL,
.header_size = 0,
.rx_header_size = 0,
.rexmit_scheduled = false,
.opened = false,
.transport_data = NULL,
.in_write_poll = false,
.coalesce = 2,
.req_size = get_req_size(def),
.in_error = false,
.bpf = NULL
});
dev->features = dev->hw_features = (NETIF_F_SG | NETIF_F_FRAGLIST); dev->features = dev->hw_features = (NETIF_F_SG | NETIF_F_FRAGLIST);
INIT_WORK(&vp->reset_tx, vector_reset_tx); INIT_WORK(&vp->reset_tx, vector_reset_tx);
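
The two sides of this hunk initialize the freshly allocated private structure either field by field or by assigning a single compound literal; with a compound literal, every member not named in the initializer is implicitly zeroed, which is why one style can drop (and the other must spell out) the long run of NULL/false/0 assignments. A minimal sketch of the two styles with a made-up struct; demo_priv is not the real vector_private.

#include <stdbool.h>

struct demo_priv {
        int unit;
        int coalesce;
        void *transport_data;
        bool opened;
};

static void init_by_field(struct demo_priv *p, int unit)
{
        /* relies on the caller having zeroed *p beforehand (e.g. a kzalloc-style allocation) */
        p->unit = unit;
        p->coalesce = 2;
}

static void init_by_literal(struct demo_priv *p, int unit)
{
        /* members not mentioned (transport_data, opened) are zero-initialized */
        *p = (struct demo_priv){
                .unit = unit,
                .coalesce = 2,
        };
}

Either style works when the allocation already zeroes the memory; the compound literal simply makes the zeroing explicit in one place.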


@@ -570,17 +570,6 @@ static void uml_vfio_release_device(struct uml_vfio_device *dev)
kfree(dev); kfree(dev);
} }
static struct uml_vfio_device *uml_vfio_find_device(const char *device)
{
struct uml_vfio_device *dev;
list_for_each_entry(dev, &uml_vfio_devices, list) {
if (!strcmp(dev->name, device))
return dev;
}
return NULL;
}
static int uml_vfio_cmdline_set(const char *device, const struct kernel_param *kp) static int uml_vfio_cmdline_set(const char *device, const struct kernel_param *kp)
{ {
struct uml_vfio_device *dev; struct uml_vfio_device *dev;
@@ -593,9 +582,6 @@ static int uml_vfio_cmdline_set(const char *device, const struct kernel_param *k
uml_vfio_container.fd = fd; uml_vfio_container.fd = fd;
} }
if (uml_vfio_find_device(device))
return -EEXIST;
dev = kzalloc(sizeof(*dev), GFP_KERNEL); dev = kzalloc(sizeof(*dev), GFP_KERNEL);
if (!dev) if (!dev)
return -ENOMEM; return -ENOMEM;
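
One side of these hunks adds a lookup helper and rejects a device string that is already registered with -EEXIST. A compact sketch of that check-before-insert pattern on a kernel linked list; demo_dev, demo_find() and demo_add() are illustrative names only.

#include <linux/errno.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/string.h>

struct demo_dev {
        struct list_head list;
        const char *name;
};

static LIST_HEAD(demo_devices);

static struct demo_dev *demo_find(const char *name)
{
        struct demo_dev *dev;

        list_for_each_entry(dev, &demo_devices, list)
                if (!strcmp(dev->name, name))
                        return dev;
        return NULL;
}

static int demo_add(const char *name)
{
        struct demo_dev *dev;

        if (demo_find(name))
                return -EEXIST; /* refuse to register the same name twice */

        dev = kzalloc(sizeof(*dev), GFP_KERNEL);
        if (!dev)
                return -ENOMEM;

        dev->name = name;
        list_add_tail(&dev->list, &demo_devices);
        return 0;
}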


@@ -147,7 +147,7 @@ config X86
select ARCH_WANTS_DYNAMIC_TASK_STRUCT select ARCH_WANTS_DYNAMIC_TASK_STRUCT
select ARCH_WANTS_NO_INSTR select ARCH_WANTS_NO_INSTR
select ARCH_WANT_GENERAL_HUGETLB select ARCH_WANT_GENERAL_HUGETLB
select ARCH_WANT_HUGE_PMD_SHARE if X86_64 select ARCH_WANT_HUGE_PMD_SHARE
select ARCH_WANT_LD_ORPHAN_WARN select ARCH_WANT_LD_ORPHAN_WARN
select ARCH_WANT_OPTIMIZE_DAX_VMEMMAP if X86_64 select ARCH_WANT_OPTIMIZE_DAX_VMEMMAP if X86_64
select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP if X86_64 select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP if X86_64
@@ -2695,15 +2695,6 @@ config MITIGATION_ITS
disabled, mitigation cannot be enabled via cmdline. disabled, mitigation cannot be enabled via cmdline.
See <file:Documentation/admin-guide/hw-vuln/indirect-target-selection.rst> See <file:Documentation/admin-guide/hw-vuln/indirect-target-selection.rst>
config MITIGATION_TSA
bool "Mitigate Transient Scheduler Attacks"
depends on CPU_SUP_AMD
default y
help
Enable mitigation for Transient Scheduler Attacks. TSA is a hardware
security vulnerability on AMD CPUs which can lead to forwarding of
invalid info to subsequent instructions and thus can affect their
timing and thereby cause a leakage.
endif endif
config ARCH_HAS_ADD_PAGES config ARCH_HAS_ADD_PAGES


@@ -88,7 +88,7 @@ static const char * const sev_status_feat_names[] = {
*/ */
static u64 snp_tsc_scale __ro_after_init; static u64 snp_tsc_scale __ro_after_init;
static u64 snp_tsc_offset __ro_after_init; static u64 snp_tsc_offset __ro_after_init;
static unsigned long snp_tsc_freq_khz __ro_after_init; static u64 snp_tsc_freq_khz __ro_after_init;
DEFINE_PER_CPU(struct sev_es_runtime_data*, runtime_data); DEFINE_PER_CPU(struct sev_es_runtime_data*, runtime_data);
DEFINE_PER_CPU(struct sev_es_save_area *, sev_vmsa); DEFINE_PER_CPU(struct sev_es_save_area *, sev_vmsa);
@@ -2167,31 +2167,15 @@ static unsigned long securetsc_get_tsc_khz(void)
void __init snp_secure_tsc_init(void) void __init snp_secure_tsc_init(void)
{ {
struct snp_secrets_page *secrets; unsigned long long tsc_freq_mhz;
unsigned long tsc_freq_mhz;
void *mem;
if (!cc_platform_has(CC_ATTR_GUEST_SNP_SECURE_TSC)) if (!cc_platform_has(CC_ATTR_GUEST_SNP_SECURE_TSC))
return; return;
mem = early_memremap_encrypted(sev_secrets_pa, PAGE_SIZE);
if (!mem) {
pr_err("Unable to get TSC_FACTOR: failed to map the SNP secrets page.\n");
sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_SECURE_TSC);
}
secrets = (__force struct snp_secrets_page *)mem;
setup_force_cpu_cap(X86_FEATURE_TSC_KNOWN_FREQ); setup_force_cpu_cap(X86_FEATURE_TSC_KNOWN_FREQ);
rdmsrq(MSR_AMD64_GUEST_TSC_FREQ, tsc_freq_mhz); rdmsrq(MSR_AMD64_GUEST_TSC_FREQ, tsc_freq_mhz);
snp_tsc_freq_khz = (unsigned long)(tsc_freq_mhz * 1000);
/* Extract the GUEST TSC MHZ from BIT[17:0], rest is reserved space */
tsc_freq_mhz &= GENMASK_ULL(17, 0);
snp_tsc_freq_khz = SNP_SCALE_TSC_FREQ(tsc_freq_mhz * 1000, secrets->tsc_factor);
x86_platform.calibrate_cpu = securetsc_get_tsc_khz; x86_platform.calibrate_cpu = securetsc_get_tsc_khz;
x86_platform.calibrate_tsc = securetsc_get_tsc_khz; x86_platform.calibrate_tsc = securetsc_get_tsc_khz;
early_memunmap(mem, PAGE_SIZE);
} }
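
One side of this hunk masks the MSR_AMD64_GUEST_TSC_FREQ value with GENMASK_ULL(17, 0) before converting MHz to kHz and then applies a scale factor read from the SNP secrets page. The sketch below shows only the bit-field extraction and unit conversion; the SNP_SCALE_TSC_FREQ() step is omitted because its formula is not visible in this diff, and guest_tsc_mhz_to_khz() is an illustrative name.

#include <linux/bits.h>
#include <linux/types.h>

/* Bits [17:0] of the MSR carry the guest TSC frequency in MHz; the rest is reserved. */
static unsigned long guest_tsc_mhz_to_khz(u64 msr_val)
{
        u64 tsc_freq_mhz = msr_val & GENMASK_ULL(17, 0);

        return (unsigned long)(tsc_freq_mhz * 1000);
}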


@@ -36,20 +36,20 @@ EXPORT_SYMBOL_GPL(write_ibpb);
/* /*
* Define the VERW operand that is disguised as entry code so that * Define the VERW operand that is disguised as entry code so that
* it can be referenced with KPTI enabled. This ensures VERW can be * it can be referenced with KPTI enabled. This ensure VERW can be
* used late in exit-to-user path after page tables are switched. * used late in exit-to-user path after page tables are switched.
*/ */
.pushsection .entry.text, "ax" .pushsection .entry.text, "ax"
.align L1_CACHE_BYTES, 0xcc .align L1_CACHE_BYTES, 0xcc
SYM_CODE_START_NOALIGN(x86_verw_sel) SYM_CODE_START_NOALIGN(mds_verw_sel)
UNWIND_HINT_UNDEFINED UNWIND_HINT_UNDEFINED
ANNOTATE_NOENDBR ANNOTATE_NOENDBR
.word __KERNEL_DS .word __KERNEL_DS
.align L1_CACHE_BYTES, 0xcc .align L1_CACHE_BYTES, 0xcc
SYM_CODE_END(x86_verw_sel); SYM_CODE_END(mds_verw_sel);
/* For KVM */ /* For KVM */
EXPORT_SYMBOL_GPL(x86_verw_sel); EXPORT_SYMBOL_GPL(mds_verw_sel);
.popsection .popsection


@@ -456,7 +456,6 @@
#define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* No Nested Data Breakpoints */ #define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* No Nested Data Breakpoints */
#define X86_FEATURE_WRMSR_XX_BASE_NS (20*32+ 1) /* WRMSR to {FS,GS,KERNEL_GS}_BASE is non-serializing */ #define X86_FEATURE_WRMSR_XX_BASE_NS (20*32+ 1) /* WRMSR to {FS,GS,KERNEL_GS}_BASE is non-serializing */
#define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* LFENCE always serializing / synchronizes RDTSC */ #define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* LFENCE always serializing / synchronizes RDTSC */
#define X86_FEATURE_VERW_CLEAR (20*32+ 5) /* The memory form of VERW mitigates TSA */
#define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* Null Selector Clears Base */ #define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* Null Selector Clears Base */
#define X86_FEATURE_AUTOIBRS (20*32+ 8) /* Automatic IBRS */ #define X86_FEATURE_AUTOIBRS (20*32+ 8) /* Automatic IBRS */
#define X86_FEATURE_NO_SMM_CTL_MSR (20*32+ 9) /* SMM_CTL MSR is not present */ #define X86_FEATURE_NO_SMM_CTL_MSR (20*32+ 9) /* SMM_CTL MSR is not present */
@@ -488,9 +487,6 @@
#define X86_FEATURE_PREFER_YMM (21*32+ 8) /* Avoid ZMM registers due to downclocking */ #define X86_FEATURE_PREFER_YMM (21*32+ 8) /* Avoid ZMM registers due to downclocking */
#define X86_FEATURE_APX (21*32+ 9) /* Advanced Performance Extensions */ #define X86_FEATURE_APX (21*32+ 9) /* Advanced Performance Extensions */
#define X86_FEATURE_INDIRECT_THUNK_ITS (21*32+10) /* Use thunk for indirect branches in lower half of cacheline */ #define X86_FEATURE_INDIRECT_THUNK_ITS (21*32+10) /* Use thunk for indirect branches in lower half of cacheline */
#define X86_FEATURE_TSA_SQ_NO (21*32+11) /* AMD CPU not vulnerable to TSA-SQ */
#define X86_FEATURE_TSA_L1_NO (21*32+12) /* AMD CPU not vulnerable to TSA-L1 */
#define X86_FEATURE_CLEAR_CPU_BUF_VM (21*32+13) /* Clear CPU buffers using VERW before VMRUN */
/* /*
* BUG word(s) * BUG word(s)
@@ -546,5 +542,5 @@
#define X86_BUG_OLD_MICROCODE X86_BUG( 1*32+ 6) /* "old_microcode" CPU has old microcode, it is surely vulnerable to something */ #define X86_BUG_OLD_MICROCODE X86_BUG( 1*32+ 6) /* "old_microcode" CPU has old microcode, it is surely vulnerable to something */
#define X86_BUG_ITS X86_BUG( 1*32+ 7) /* "its" CPU is affected by Indirect Target Selection */ #define X86_BUG_ITS X86_BUG( 1*32+ 7) /* "its" CPU is affected by Indirect Target Selection */
#define X86_BUG_ITS_NATIVE_ONLY X86_BUG( 1*32+ 8) /* "its_native_only" CPU is affected by ITS, VMX is not affected */ #define X86_BUG_ITS_NATIVE_ONLY X86_BUG( 1*32+ 8) /* "its_native_only" CPU is affected by ITS, VMX is not affected */
#define X86_BUG_TSA X86_BUG( 1*32+ 9) /* "tsa" CPU is affected by Transient Scheduler Attacks */
#endif /* _ASM_X86_CPUFEATURES_H */ #endif /* _ASM_X86_CPUFEATURES_H */


@@ -9,14 +9,6 @@
#include <asm/cpufeature.h> #include <asm/cpufeature.h>
#include <asm/msr.h> #include <asm/msr.h>
/*
* Define bits that are always set to 1 in DR7, only bit 10 is
* architecturally reserved to '1'.
*
* This is also the init/reset value for DR7.
*/
#define DR7_FIXED_1 0x00000400
DECLARE_PER_CPU(unsigned long, cpu_dr7); DECLARE_PER_CPU(unsigned long, cpu_dr7);
#ifndef CONFIG_PARAVIRT_XXL #ifndef CONFIG_PARAVIRT_XXL
@@ -108,8 +100,8 @@ static __always_inline void native_set_debugreg(int regno, unsigned long value)
static inline void hw_breakpoint_disable(void) static inline void hw_breakpoint_disable(void)
{ {
/* Reset the control register for HW Breakpoint */ /* Zero the control register for HW Breakpoint */
set_debugreg(DR7_FIXED_1, 7); set_debugreg(0UL, 7);
/* Zero-out the individual HW breakpoint address registers */ /* Zero-out the individual HW breakpoint address registers */
set_debugreg(0UL, 0); set_debugreg(0UL, 0);
@@ -133,12 +125,9 @@ static __always_inline unsigned long local_db_save(void)
return 0; return 0;
get_debugreg(dr7, 7); get_debugreg(dr7, 7);
dr7 &= ~0x400; /* architecturally set bit */
/* Architecturally set bit */
dr7 &= ~DR7_FIXED_1;
if (dr7) if (dr7)
set_debugreg(DR7_FIXED_1, 7); set_debugreg(0, 7);
/* /*
* Ensure the compiler doesn't lower the above statements into * Ensure the compiler doesn't lower the above statements into
* the critical section; disabling breakpoints late would not * the critical section; disabling breakpoints late would not


@@ -44,13 +44,13 @@ static __always_inline void native_safe_halt(void)
static __always_inline void native_safe_halt(void) static __always_inline void native_safe_halt(void)
{ {
x86_idle_clear_cpu_buffers(); mds_idle_clear_cpu_buffers();
asm volatile("sti; hlt": : :"memory"); asm volatile("sti; hlt": : :"memory");
} }
static __always_inline void native_halt(void) static __always_inline void native_halt(void)
{ {
x86_idle_clear_cpu_buffers(); mds_idle_clear_cpu_buffers();
asm volatile("hlt": : :"memory"); asm volatile("hlt": : :"memory");
} }


@@ -31,7 +31,6 @@
#include <asm/apic.h> #include <asm/apic.h>
#include <asm/pvclock-abi.h> #include <asm/pvclock-abi.h>
#include <asm/debugreg.h>
#include <asm/desc.h> #include <asm/desc.h>
#include <asm/mtrr.h> #include <asm/mtrr.h>
#include <asm/msr-index.h> #include <asm/msr-index.h>
@@ -250,6 +249,7 @@ enum x86_intercept_stage;
#define DR7_BP_EN_MASK 0x000000ff #define DR7_BP_EN_MASK 0x000000ff
#define DR7_GE (1 << 9) #define DR7_GE (1 << 9)
#define DR7_GD (1 << 13) #define DR7_GD (1 << 13)
#define DR7_FIXED_1 0x00000400
#define DR7_VOLATILE 0xffff2bff #define DR7_VOLATILE 0xffff2bff
#define KVM_GUESTDBG_VALID_MASK \ #define KVM_GUESTDBG_VALID_MASK \
@@ -700,13 +700,8 @@ struct kvm_vcpu_hv {
struct kvm_vcpu_hv_tlb_flush_fifo tlb_flush_fifo[HV_NR_TLB_FLUSH_FIFOS]; struct kvm_vcpu_hv_tlb_flush_fifo tlb_flush_fifo[HV_NR_TLB_FLUSH_FIFOS];
/* /* Preallocated buffer for handling hypercalls passing sparse vCPU set */
* Preallocated buffers for handling hypercalls that pass sparse vCPU
* sets (for high vCPU counts, they're too large to comfortably fit on
* the stack).
*/
u64 sparse_banks[HV_MAX_SPARSE_VCPU_BANKS]; u64 sparse_banks[HV_MAX_SPARSE_VCPU_BANKS];
DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
struct hv_vp_assist_page vp_assist_page; struct hv_vp_assist_page vp_assist_page;
@@ -769,7 +764,6 @@ enum kvm_only_cpuid_leafs {
CPUID_8000_0022_EAX, CPUID_8000_0022_EAX,
CPUID_7_2_EDX, CPUID_7_2_EDX,
CPUID_24_0_EBX, CPUID_24_0_EBX,
CPUID_8000_0021_ECX,
NR_KVM_CPU_CAPS, NR_KVM_CPU_CAPS,
NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS, NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS,

Some files were not shown because too many files have changed in this diff.