mirror of
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
synced 2025-08-17 12:41:51 +00:00

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull kvm updates from Paolo Bonzini:
 "ARM:

   - Host driver for GICv5, the next generation interrupt controller for
     arm64, including support for interrupt routing, MSIs, interrupt
     translation and wired interrupts

   - Use FEAT_GCIE_LEGACY on GICv5 systems to virtualize GICv3 VMs on
     GICv5 hardware, leveraging the legacy VGIC interface

   - Userspace control of the 'nASSGIcap' GICv3 feature, allowing
     userspace to disable support for SGIs w/o an active state on
     hardware that previously advertised it unconditionally

   - Map supporting endpoints with cacheable memory attributes on
     systems with FEAT_S2FWB and DIC where KVM no longer needs to
     perform cache maintenance on the address range

   - Nested support for FEAT_RAS and FEAT_DoubleFault2, allowing the
     guest hypervisor to inject external aborts into an L2 VM and take
     traps of masked external aborts to the hypervisor

   - Convert more system register sanitization to the config-driven
     implementation

   - Fixes to the visibility of EL2 registers, namely making VGICv3
     system registers accessible through the VGIC device instead of the
     ONE_REG vCPU ioctls

   - Various cleanups and minor fixes

  LoongArch:

   - Add stat information for in-kernel irqchip

   - Add tracepoints for CPUCFG and CSR emulation exits

   - Enhance in-kernel irqchip emulation

   - Various cleanups

  RISC-V:

   - Enable ring-based dirty memory tracking

   - Improve perf kvm stat to report interrupt events

   - Delegate illegal instruction trap to VS-mode

   - MMU improvements related to upcoming nested virtualization

  s390x:

   - Fixes

  x86:

   - Add CONFIG_KVM_IOAPIC for x86 to allow disabling support for I/O
     APIC, PIC, and PIT emulation at compile time

   - Share device posted IRQ code between SVM and VMX and harden it
     against bugs and runtime errors

   - Use vcpu_idx, not vcpu_id, for GA log tag/metadata, to make lookups
     O(1) instead of O(n)

   - For MMIO stale data mitigation, track whether or not a vCPU has
     access to (host) MMIO based on whether the page tables have MMIO
     pfns mapped; using VFIO is prone to false negatives

   - Rework the MSR interception code so that the SVM and VMX APIs are
     more or less identical

   - Recalculate all MSR intercepts from scratch on MSR filter changes,
     instead of maintaining shadow bitmaps

   - Advertise support for LKGS (Load Kernel GS base), a new instruction
     that's loosely related to FRED, but is supported and enumerated
     independently

   - Fix a user-triggerable WARN that syzkaller found by setting the
     vCPU in INIT_RECEIVED state (aka wait-for-SIPI), and then putting
     the vCPU into VMX Root Mode (post-VMXON). Trying to detect every
     possible path leading to architecturally forbidden states is hard
     and even risks breaking userspace (if it goes from valid to valid
     state but passes through invalid states), so just wait until
     KVM_RUN to detect that the vCPU state isn't allowed

   - Add KVM_X86_DISABLE_EXITS_APERFMPERF to allow disabling
     interception of APERF/MPERF reads, so that a "properly" configured
     VM can access APERF/MPERF. This has many caveats (APERF/MPERF
     cannot be zeroed on vCPU creation or saved/restored on suspend and
     resume, or preserved over thread migration, let alone VM migration)
     but can be useful whenever you're interested in letting Linux
     guests see the effective physical CPU frequency in /proc/cpuinfo

   - Reject KVM_SET_TSC_KHZ for vm file descriptors if vCPUs have been
     created, as there's no known use case for changing the default
     frequency for other VM types and it goes counter to the very reason
     why the ioctl was added to the vm file descriptor. And also, there
     would be no way to make it work for confidential VMs with a
     "secure" TSC, so kill two birds with one stone

   - Dynamically allocate the shadow MMU's hashed page list, and defer
     allocating the hashed list until it's actually needed (the TDP MMU
     doesn't use the list)

   - Extract many of KVM's helpers for accessing architectural local
     APIC state to common x86 so that they can be shared by guest-side
     code for Secure AVIC

   - Various cleanups and fixes

  x86 (Intel):

   - Preserve the host's DEBUGCTL.FREEZE_IN_SMM when running the guest.
     Failure to honor FREEZE_IN_SMM can leak host state into guests

   - Explicitly check vmcs12.GUEST_DEBUGCTL on nested VM-Enter to
     prevent L1 from running L2 with features that KVM doesn't support,
     e.g. BTF

  x86 (AMD):

   - WARN and reject loading kvm-amd.ko instead of panicking the kernel
     if the nested SVM MSRPM offsets tracker can't handle an MSR (which
     is pretty much a static condition and therefore should never
     happen, but still)

   - Fix a variety of flaws and bugs in the AVIC device posted IRQ code

   - Inhibit AVIC if a vCPU's ID is too big (relative to what hardware
     supports) instead of rejecting vCPU creation

   - Extend enable_ipiv module param support to SVM, by simply leaving
     IsRunning clear in the vCPU's physical ID table entry

   - Disable IPI virtualization, via enable_ipiv, if the CPU is affected
     by erratum #1235, to allow (safely) enabling AVIC on such CPUs

   - Request GA Log interrupts if and only if the target vCPU is
     blocking, i.e. only if KVM needs a notification in order to wake
     the vCPU

   - Intercept SPEC_CTRL on AMD if the MSR shouldn't exist according to
     the vCPU's CPUID model

   - Accept any SNP policy that is accepted by the firmware with respect
     to SMT and single-socket restrictions. An incompatible policy
     doesn't put the kernel at risk in any way, so there's no reason for
     KVM to care

   - Drop a superfluous WBINVD (on all CPUs!) when destroying a VM, and
     use WBNOINVD instead of WBINVD when possible for SEV cache
     maintenance

   - When reclaiming memory from an SEV guest, only do cache flushes on
     CPUs that have ever run a vCPU for the guest, i.e. don't flush the
     caches for CPUs that can't possibly have cache lines with dirty,
     encrypted data

  Generic:

   - Rework irqbypass to track/match producers and consumers via an
     xarray instead of a linked list. Using a linked list leads to
     O(n^2) insertion times, which is hugely problematic for use cases
     that create large numbers of VMs. Such use cases typically don't
     actually use irqbypass, but eliminating the pointless registration
     is a future problem to solve as it likely requires new uAPI

   - Track irqbypass's "token" as "struct eventfd_ctx *" instead of a
     "void *", to avoid making a simple concept unnecessarily difficult
     to understand

   - Decouple device posted IRQs from VFIO device assignment, as binding
     a VM to a VFIO group is not a requirement for enabling device
     posted IRQs

   - Clean up and document/comment the irqfd assignment code

   - Disallow binding multiple irqfds to an eventfd with a priority
     waiter, i.e. ensure an eventfd is bound to at most one irqfd
     through the entire host, and add a selftest to verify eventfd:irqfd
     bindings are globally unique

   - Add a tracepoint for KVM_SET_MEMORY_ATTRIBUTES to help debug issues
     related to private <=> shared memory conversions

   - Drop guest_memfd's .getattr() implementation as the VFS layer will
     call generic_fillattr() if inode_operations.getattr is NULL

   - Fix issues with dirty ring harvesting where KVM doesn't bound the
     processing of entries in any way, which allows userspace to keep
     KVM in a tight loop indefinitely

   - Kill off kvm_arch_{start,end}_assignment() and x86's associated
     tracking, now that KVM no longer uses assigned_device_count as a
     heuristic for either irqbypass usage or MDS mitigation

  Selftests:

   - Fix a comment typo

   - Verify KVM is loaded when getting any KVM module param so that
     attempting to run a selftest without kvm.ko loaded results in a
     SKIP message about KVM not being loaded/enabled (versus some random
     parameter not existing)

   - Skip tests that hit EACCES when attempting to access a file, and
     print a "Root required?" help message. In most cases, the test just
     needs to be run with elevated permissions"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (340 commits)
  Documentation: KVM: Use unordered list for pre-init VGIC registers
  RISC-V: KVM: Avoid re-acquiring memslot in kvm_riscv_gstage_map()
  RISC-V: KVM: Use find_vma_intersection() to search for intersecting VMAs
  RISC-V: perf/kvm: Add reporting of interrupt events
  RISC-V: KVM: Enable ring-based dirty memory tracking
  RISC-V: KVM: Fix inclusion of Smnpm in the guest ISA bitmap
  RISC-V: KVM: Delegate illegal instruction fault to VS mode
  RISC-V: KVM: Pass VMID as parameter to kvm_riscv_hfence_xyz() APIs
  RISC-V: KVM: Factor-out g-stage page table management
  RISC-V: KVM: Add vmid field to struct kvm_riscv_hfence
  RISC-V: KVM: Introduce struct kvm_gstage_mapping
  RISC-V: KVM: Factor-out MMU related declarations into separate headers
  RISC-V: KVM: Use ncsr_xyz() in kvm_riscv_vcpu_trap_redirect()
  RISC-V: KVM: Implement kvm_arch_flush_remote_tlbs_range()
  RISC-V: KVM: Don't flush TLB when PTE is unchanged
  RISC-V: KVM: Replace KVM_REQ_HFENCE_GVMA_VMID_ALL with KVM_REQ_TLB_FLUSH
  RISC-V: KVM: Rename and move kvm_riscv_local_tlb_sanitize()
  RISC-V: KVM: Drop the return value of kvm_riscv_vcpu_aia_init()
  RISC-V: KVM: Check kvm_riscv_vcpu_alloc_vector_context() return value
  KVM: arm64: selftests: Add FEAT_RAS EL2 registers to get-reg-list
  ...
472 lines
13 KiB
C
// SPDX-License-Identifier: GPL-2.0
/*
 * PCI Message Signaled Interrupt (MSI) - irqdomain support
 */
#include <linux/acpi_iort.h>
#include <linux/irqdomain.h>
#include <linux/of_irq.h>

#include "msi.h"
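
/*
 * Three setups coexist in this file: the legacy architecture specific
 * fallbacks (pci_msi_legacy_*()), global PCI/MSI irq domains created via
 * pci_msi_create_irq_domain(), and per device MSI[-X] domains built from
 * the templates further down.
 */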

int pci_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
{
        struct irq_domain *domain;

        domain = dev_get_msi_domain(&dev->dev);
        if (domain && irq_domain_is_hierarchy(domain))
                return msi_domain_alloc_irqs_all_locked(&dev->dev, MSI_DEFAULT_DOMAIN, nvec);

        return pci_msi_legacy_setup_msi_irqs(dev, nvec, type);
}

void pci_msi_teardown_msi_irqs(struct pci_dev *dev)
{
        struct irq_domain *domain;

        domain = dev_get_msi_domain(&dev->dev);
        if (domain && irq_domain_is_hierarchy(domain)) {
                msi_domain_free_irqs_all_locked(&dev->dev, MSI_DEFAULT_DOMAIN);
        } else {
                pci_msi_legacy_teardown_msi_irqs(dev);
                msi_free_msi_descs(&dev->dev);
        }
}

/**
 * pci_msi_domain_write_msg - Helper to write MSI message to PCI config space
 * @irq_data: Pointer to interrupt data of the MSI interrupt
 * @msg: Pointer to the message
 */
static void pci_msi_domain_write_msg(struct irq_data *irq_data, struct msi_msg *msg)
{
        struct msi_desc *desc = irq_data_get_msi_desc(irq_data);

        /*
         * For MSI-X desc->irq is always equal to irq_data->irq. For
         * MSI only the first interrupt of MULTI MSI passes the test.
         */
        if (desc->irq == irq_data->irq)
                __pci_write_msi_msg(desc, msg);
}
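
/*
 * The hwirq built below packs three fields (layout follows from the
 * shifts used in this file): bits 0-10 carry the MSI index within the
 * device, bits 11-26 the requester ID from pci_dev_id() (bus << 8 |
 * devfn), and bits 27 and up the PCI domain number.
 */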

/**
 * pci_msi_domain_calc_hwirq - Generate a unique ID for an MSI source
 * @desc: Pointer to the MSI descriptor
 *
 * The ID number is only used within the irqdomain.
 */
static irq_hw_number_t pci_msi_domain_calc_hwirq(struct msi_desc *desc)
{
        struct pci_dev *dev = msi_desc_to_pci_dev(desc);

        return (irq_hw_number_t)desc->msi_index |
                pci_dev_id(dev) << 11 |
                ((irq_hw_number_t)(pci_domain_nr(dev->bus) & 0xFFFFFFFF)) << 27;
}

static void pci_msi_domain_set_desc(msi_alloc_info_t *arg,
                                    struct msi_desc *desc)
{
        arg->desc = desc;
        arg->hwirq = pci_msi_domain_calc_hwirq(desc);
}

static struct msi_domain_ops pci_msi_domain_ops_default = {
        .set_desc = pci_msi_domain_set_desc,
};

static void pci_msi_domain_update_dom_ops(struct msi_domain_info *info)
{
        struct msi_domain_ops *ops = info->ops;

        if (ops == NULL) {
                info->ops = &pci_msi_domain_ops_default;
        } else {
                if (ops->set_desc == NULL)
                        ops->set_desc = pci_msi_domain_set_desc;
        }
}

static void pci_msi_domain_update_chip_ops(struct msi_domain_info *info)
{
        struct irq_chip *chip = info->chip;

        BUG_ON(!chip);
        if (!chip->irq_write_msi_msg)
                chip->irq_write_msi_msg = pci_msi_domain_write_msg;
        if (!chip->irq_mask)
                chip->irq_mask = pci_msi_mask_irq;
        if (!chip->irq_unmask)
                chip->irq_unmask = pci_msi_unmask_irq;
}

/**
 * pci_msi_create_irq_domain - Create a MSI interrupt domain
 * @fwnode: Optional fwnode of the interrupt controller
 * @info: MSI domain info
 * @parent: Parent irq domain
 *
 * Updates the domain and chip ops and creates a MSI interrupt domain.
 *
 * Returns:
 * A domain pointer or NULL in case of failure.
 */
struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode,
                                             struct msi_domain_info *info,
                                             struct irq_domain *parent)
{
        if (WARN_ON(info->flags & MSI_FLAG_LEVEL_CAPABLE))
                info->flags &= ~MSI_FLAG_LEVEL_CAPABLE;

        if (info->flags & MSI_FLAG_USE_DEF_DOM_OPS)
                pci_msi_domain_update_dom_ops(info);
        if (info->flags & MSI_FLAG_USE_DEF_CHIP_OPS)
                pci_msi_domain_update_chip_ops(info);

        /* Let the core code free MSI descriptors when freeing interrupts */
        info->flags |= MSI_FLAG_FREE_MSI_DESCS;

        info->flags |= MSI_FLAG_ACTIVATE_EARLY | MSI_FLAG_DEV_SYSFS;
        if (IS_ENABLED(CONFIG_GENERIC_IRQ_RESERVATION_MODE))
                info->flags |= MSI_FLAG_MUST_REACTIVATE;

        /* PCI-MSI is oneshot-safe */
        info->chip->flags |= IRQCHIP_ONESHOT_SAFE;
        /* Let the core update the bus token */
        info->bus_token = DOMAIN_BUS_PCI_MSI;

        return msi_create_irq_domain(fwnode, info, parent);
}
EXPORT_SYMBOL_GPL(pci_msi_create_irq_domain);
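
/*
 * Typical usage by an irqchip driver (sketch only; my_msi_chip and
 * my_msi_info are hypothetical):
 *
 *	static struct msi_domain_info my_msi_info = {
 *		.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS,
 *		.chip  = &my_msi_chip,
 *	};
 *
 *	domain = pci_msi_create_irq_domain(fwnode, &my_msi_info, parent);
 */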

/*
 * Per device MSI[-X] domain functionality
 */
static void pci_device_domain_set_desc(msi_alloc_info_t *arg, struct msi_desc *desc)
{
        arg->desc = desc;
        arg->hwirq = desc->msi_index;
}

static __always_inline void cond_mask_parent(struct irq_data *data)
{
        struct msi_domain_info *info = data->domain->host_data;

        if (unlikely(info->flags & MSI_FLAG_PCI_MSI_MASK_PARENT))
                irq_chip_mask_parent(data);
}

static __always_inline void cond_unmask_parent(struct irq_data *data)
{
        struct msi_domain_info *info = data->domain->host_data;

        if (unlikely(info->flags & MSI_FLAG_PCI_MSI_MASK_PARENT))
                irq_chip_unmask_parent(data);
}
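
/*
 * The mask/unmask callbacks below are deliberately symmetric in reverse:
 * masking disables the PCI side first and then (conditionally) the parent
 * domain, while unmasking undoes this in the opposite order. For
 * multi-MSI the Linux interrupt numbers of a block are consecutive, so
 * data->irq - desc->irq is the index of the vector within the block and
 * BIT() turns it into the corresponding bit of the MSI mask register.
 */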

static void pci_irq_mask_msi(struct irq_data *data)
{
        struct msi_desc *desc = irq_data_get_msi_desc(data);

        pci_msi_mask(desc, BIT(data->irq - desc->irq));
        cond_mask_parent(data);
}

static void pci_irq_unmask_msi(struct irq_data *data)
{
        struct msi_desc *desc = irq_data_get_msi_desc(data);

        cond_unmask_parent(data);
        pci_msi_unmask(desc, BIT(data->irq - desc->irq));
}
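
/*
 * MSI_COMMON_FLAGS below mirrors what pci_msi_create_irq_domain() above
 * forces on global domains: core-managed descriptor freeing, early
 * activation, sysfs entries and, when reservation mode is configured,
 * mandatory reactivation.
 */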

#ifdef CONFIG_GENERIC_IRQ_RESERVATION_MODE
# define MSI_REACTIVATE		MSI_FLAG_MUST_REACTIVATE
#else
# define MSI_REACTIVATE		0
#endif

#define MSI_COMMON_FLAGS	(MSI_FLAG_FREE_MSI_DESCS |	\
				 MSI_FLAG_ACTIVATE_EARLY |	\
				 MSI_FLAG_DEV_SYSFS |		\
				 MSI_REACTIVATE)

static const struct msi_domain_template pci_msi_template = {
        .chip = {
                .name			= "PCI-MSI",
                .irq_mask		= pci_irq_mask_msi,
                .irq_unmask		= pci_irq_unmask_msi,
                .irq_write_msi_msg	= pci_msi_domain_write_msg,
                .flags			= IRQCHIP_ONESHOT_SAFE,
        },

        .ops = {
                .set_desc		= pci_device_domain_set_desc,
        },

        .info = {
                .flags			= MSI_COMMON_FLAGS | MSI_FLAG_MULTI_PCI_MSI,
                .bus_token		= DOMAIN_BUS_PCI_DEVICE_MSI,
        },
};

static void pci_irq_mask_msix(struct irq_data *data)
{
        pci_msix_mask(irq_data_get_msi_desc(data));
        cond_mask_parent(data);
}

static void pci_irq_unmask_msix(struct irq_data *data)
{
        cond_unmask_parent(data);
        pci_msix_unmask(irq_data_get_msi_desc(data));
}
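
/*
 * Descriptors preallocated when MSI-X was enabled have pci.mask_base set
 * already; only descriptors created later by dynamic MSI-X allocation
 * (MSI_FLAG_PCI_MSIX_ALLOC_DYN) still need to be prepared by
 * pci_msix_prepare_desc() below.
 */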

void pci_msix_prepare_desc(struct irq_domain *domain, msi_alloc_info_t *arg,
                           struct msi_desc *desc)
{
        /* Don't fiddle with preallocated MSI descriptors */
        if (!desc->pci.mask_base)
                msix_prepare_msi_desc(to_pci_dev(desc->dev), desc);
}
EXPORT_SYMBOL_GPL(pci_msix_prepare_desc);

static const struct msi_domain_template pci_msix_template = {
        .chip = {
                .name			= "PCI-MSIX",
                .irq_mask		= pci_irq_mask_msix,
                .irq_unmask		= pci_irq_unmask_msix,
                .irq_write_msi_msg	= pci_msi_domain_write_msg,
                .flags			= IRQCHIP_ONESHOT_SAFE,
        },

        .ops = {
                .prepare_desc		= pci_msix_prepare_desc,
                .set_desc		= pci_device_domain_set_desc,
        },

        .info = {
                .flags			= MSI_COMMON_FLAGS | MSI_FLAG_PCI_MSIX |
                                          MSI_FLAG_PCI_MSIX_ALLOC_DYN,
                .bus_token		= DOMAIN_BUS_PCI_DEVICE_MSIX,
        },
};

static bool pci_match_device_domain(struct pci_dev *pdev, enum irq_domain_bus_token bus_token)
{
        return msi_match_device_irq_domain(&pdev->dev, MSI_DEFAULT_DOMAIN, bus_token);
}
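
/*
 * Note the early "return true" below: if the device has no MSI parent
 * domain, no per device domain is created and the legacy architecture
 * specific or global PCI/MSI domain models are used instead.
 */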

static bool pci_create_device_domain(struct pci_dev *pdev, const struct msi_domain_template *tmpl,
                                     unsigned int hwsize)
{
        struct irq_domain *domain = dev_get_msi_domain(&pdev->dev);

        if (!domain || !irq_domain_is_msi_parent(domain))
                return true;

        return msi_create_device_irq_domain(&pdev->dev, MSI_DEFAULT_DOMAIN, tmpl,
                                            hwsize, NULL, NULL);
}
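
/*
 * Only one of the two per device domains (MSI or MSI-X) can exist at a
 * time: the setup functions below tear down the other variant's domain,
 * if present, before creating their own.
 */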

/**
 * pci_setup_msi_device_domain - Setup a device MSI interrupt domain
 * @pdev: The PCI device to create the domain on
 * @hwsize: The maximum number of MSI vectors
 *
 * Return:
 * True when:
 *   - The device does not have a MSI parent irq domain associated,
 *     which keeps the legacy architecture specific and the global
 *     PCI/MSI domain models working
 *   - The MSI domain exists already
 *   - The MSI domain was successfully allocated
 * False when:
 *   - MSI-X is enabled
 *   - The domain creation fails.
 *
 * The created MSI domain is preserved until:
 *   - The device is removed
 *   - MSI is disabled and a MSI-X domain is created
 */
bool pci_setup_msi_device_domain(struct pci_dev *pdev, unsigned int hwsize)
{
        if (WARN_ON_ONCE(pdev->msix_enabled))
                return false;

        if (pci_match_device_domain(pdev, DOMAIN_BUS_PCI_DEVICE_MSI))
                return true;
        if (pci_match_device_domain(pdev, DOMAIN_BUS_PCI_DEVICE_MSIX))
                msi_remove_device_irq_domain(&pdev->dev, MSI_DEFAULT_DOMAIN);

        return pci_create_device_domain(pdev, &pci_msi_template, hwsize);
}

/**
 * pci_setup_msix_device_domain - Setup a device MSI-X interrupt domain
 * @pdev: The PCI device to create the domain on
 * @hwsize: The size of the MSI-X vector table
 *
 * Return:
 * True when:
 *   - The device does not have a MSI parent irq domain associated,
 *     which keeps the legacy architecture specific and the global
 *     PCI/MSI domain models working
 *   - The MSI-X domain exists already
 *   - The MSI-X domain was successfully allocated
 * False when:
 *   - MSI is enabled
 *   - The domain creation fails.
 *
 * The created MSI-X domain is preserved until:
 *   - The device is removed
 *   - MSI-X is disabled and a MSI domain is created
 */
bool pci_setup_msix_device_domain(struct pci_dev *pdev, unsigned int hwsize)
{
        if (WARN_ON_ONCE(pdev->msi_enabled))
                return false;

        if (pci_match_device_domain(pdev, DOMAIN_BUS_PCI_DEVICE_MSIX))
                return true;
        if (pci_match_device_domain(pdev, DOMAIN_BUS_PCI_DEVICE_MSI))
                msi_remove_device_irq_domain(&pdev->dev, MSI_DEFAULT_DOMAIN);

        return pci_create_device_domain(pdev, &pci_msix_template, hwsize);
}

/**
 * pci_msi_domain_supports - Check for support of a particular feature flag
 * @pdev: The PCI device to operate on
 * @feature_mask: The feature mask to check for (full match)
 * @mode: If ALLOW_LEGACY this grants the feature when there is no irq domain
 *        associated to the device. If DENY_LEGACY the lack of an irq domain
 *        makes the feature unsupported
 */
bool pci_msi_domain_supports(struct pci_dev *pdev, unsigned int feature_mask,
                             enum support_mode mode)
{
        struct msi_domain_info *info;
        struct irq_domain *domain;
        unsigned int supported;

        domain = dev_get_msi_domain(&pdev->dev);

        if (!domain || !irq_domain_is_hierarchy(domain)) {
                if (IS_ENABLED(CONFIG_PCI_MSI_ARCH_FALLBACKS))
                        return mode == ALLOW_LEGACY;
                return false;
        }

        if (!irq_domain_is_msi_parent(domain)) {
                /*
                 * For "global" PCI/MSI interrupt domains the associated
                 * msi_domain_info::flags is the authoritative source of
                 * information.
                 */
                info = domain->host_data;
                supported = info->flags;
        } else {
                /*
                 * For MSI parent domains the supported feature set
                 * is available in the parent ops. This makes checks
                 * possible before actually instantiating the
                 * per device domain because the parent is never
                 * expanding the PCI/MSI functionality.
                 */
                supported = domain->msi_parent_ops->supported_flags;
        }

        return (supported & feature_mask) == feature_mask;
}
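
/*
 * Example check, as used e.g. by pci_msix_can_alloc_dyn() in the PCI
 * core:
 *
 *	pci_msi_domain_supports(pdev, MSI_FLAG_PCI_MSIX_ALLOC_DYN, DENY_LEGACY)
 *
 * DENY_LEGACY is the right mode there because the legacy arch fallbacks
 * cannot allocate MSI-X vectors dynamically.
 */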

/*
 * Users of the generic MSI infrastructure expect a device to have a single ID,
 * so with DMA aliases we have to pick the least-worst compromise. Devices with
 * DMA phantom functions tend to still emit MSIs from the real function number,
 * so we ignore those and only consider topological aliases where either the
 * alias device or RID appears on a different bus number. We also make the
 * reasonable assumption that bridges are walked in an upstream direction (so
 * the last one seen wins), and the much braver assumption that the most likely
 * case is that of PCI->PCIe so we should always use the alias RID. This echoes
 * the logic from intel_irq_remapping's set_msi_sid(), which presumably works
 * well enough in practice; in the face of the horrible PCIe<->PCI-X conditions
 * for taking ownership all we can really do is close our eyes and hope...
 */
static int get_msi_id_cb(struct pci_dev *pdev, u16 alias, void *data)
{
        u32 *pa = data;
        u8 bus = PCI_BUS_NUM(*pa);

        if (pdev->bus->number != bus || PCI_BUS_NUM(alias) != bus)
                *pa = alias;

        return 0;
}
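
/*
 * Illustration (hypothetical topology): for a conventional PCI device
 * 0000:07:05.0 behind a PCIe-to-PCI bridge at 0000:06:00.0, the alias
 * walk reports the bridge's RID on a different bus number, so the RID
 * used for MSI lookups becomes that of the bridge rather than that of
 * the device itself.
 */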

/**
 * pci_msi_domain_get_msi_rid - Get the MSI requester id (RID)
 * @domain: The interrupt domain
 * @pdev: The PCI device.
 *
 * The RID for a device is formed from the alias, with a firmware
 * supplied mapping applied
 *
 * Returns: The RID.
 */
u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev)
{
        struct device_node *of_node;
        u32 rid = pci_dev_id(pdev);

        pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid);

        of_node = irq_domain_get_of_node(domain);
        rid = of_node ? of_msi_map_id(&pdev->dev, of_node, rid) :
                        iort_msi_map_id(&pdev->dev, rid);

        return rid;
}
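
/*
 * The firmware supplied mapping above comes from the "msi-map" property
 * on OF systems and from IORT ID mappings on ACPI systems; if neither
 * applies, the topological RID is returned unmodified.
 */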

/**
 * pci_msi_map_rid_ctlr_node - Get the MSI controller node and MSI requester id (RID)
 * @pdev: The PCI device
 * @node: Pointer to store the MSI controller device node
 *
 * Use the firmware data to find the MSI controller node for @pdev.
 * If found map the RID and initialize @node with it. @node value must
 * be set to NULL on entry.
 *
 * Returns: The RID.
 */
u32 pci_msi_map_rid_ctlr_node(struct pci_dev *pdev, struct device_node **node)
{
        u32 rid = pci_dev_id(pdev);

        pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid);

        return of_msi_xlate(&pdev->dev, node, rid);
}

/**
 * pci_msi_get_device_domain - Get the MSI domain for a given PCI device
 * @pdev: The PCI device
 *
 * Use the firmware data to find a device-specific MSI domain
 * (i.e. not one that is set as a default).
 *
 * Returns: The corresponding MSI domain or NULL if none has been found.
 */
struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev)
{
        struct irq_domain *dom;
        u32 rid = pci_dev_id(pdev);

        pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid);
        dom = of_msi_map_get_device_domain(&pdev->dev, rid, DOMAIN_BUS_PCI_MSI);
        if (!dom)
                dom = iort_get_device_domain(&pdev->dev, rid,
                                             DOMAIN_BUS_PCI_MSI);
        return dom;
}
|