For v7, bits [12:5] are ignored for !imm.
For v8, those same bits are reserved, but are not trapped.
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit df663ac0a4)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
For v7, bits [18:0] are ignored.
For v8, bits [18:14] are reserved and bits [13:0] are ignored.
Fixes: e8325dc02d ("target/sparc: Move RDTBR, FLUSHW to decodetree")
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 6ff52f9dee)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
For v7, bits [18:0] are ignored.
For v8, bits [18:14] are reserved and bits [13:0] are ignored.
Fixes: 5d617bfba0 ("target/sparc: Move RDWIM, RDPR to decodetree")
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit dc9678cc97)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
For v7, bits [18:0] are ignored.
For v8, bits [18:14] are reserved and bits [13:0] are ignored.
Fixes: 668bb9b755 ("target/sparc: Move RDPSR, RDHPR to decodetree")
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit a0345f6283)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Bits [18:0] are not decoded with v7, and for v8 unused values
of rs1 simply produce undefined results.
Fixes: af25071c1d ("target/sparc: Move RDASR, STBAR, MEMBAR to decodetree")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
(cherry picked from commit 49d669ccf3)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Solaris 8 appears to have a bug whereby it executes v9 MEMBAR
instructions when booting a freshly installed image. According
to the SPARC v8 architecture manual, whilst bit 13 and bits 12-0
of the "Read State Register Instructions" are notionally zero,
they are marked as unused (i.e. ignored).
Fixes: af25071c1d ("target/sparc: Move RDASR, STBAR, MEMBAR to decodetree")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/3097
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
(cherry picked from commit b6cdd6c605)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Use ## to drop the preceding comma if __VA_ARGS__ is empty.
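For reference, a minimal standalone illustration of the preprocessor idiom in question (a generic example, not the patched QEMU macro itself):
```c
#include <stdio.h>

/* The GNU ", ##__VA_ARGS__" extension deletes the preceding comma when no
 * variadic arguments are supplied, so both uses below compile cleanly. */
#define LOG(fmt, ...)  printf(fmt, ##__VA_ARGS__)

int main(void)
{
    LOG("plain message\n");      /* empty __VA_ARGS__: the comma is dropped */
    LOG("value = %d\n", 42);
    return 0;
}
```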
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit b7cd0a1821)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Define X86ASIdx as an enum, like ARM's ARMASIdx, so that it's clear index 0
is for memory and index 1 is for SMM.
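A minimal sketch of the kind of definition described, with member names assumed from the description above (index 0 for normal memory, index 1 for SMM):
```c
/* Sketch only: mirrors the ARMASIdx pattern referenced above. */
typedef enum X86ASIdx {
    X86ASIdx_MEM = 0,   /* normal memory address space */
    X86ASIdx_SMM = 1,   /* SMM address space */
} X86ASIdx;
```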
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Tested-By: Kirill Martynov <stdcalllevi@yandex-team.ru>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Link: https://lore.kernel.org/r/20250730095253.1833411-3-xiaoyao.li@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 591f817d81)
(Mjt: pick this change for completeness with the previous one)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Kirill Martynov reported an assertion in cpu_asidx_from_attrs() being hit
when x86_cpu_dump_state() is called to dump the CPU state[*]. It happens
when the CPU is in SMM and KVM hits an emulation failure due to a
misbehaving guest.
The root cause is that QEMU i386 has never enabled the SMM address space
for the CPU since KVM SMM support was added.
Enable the SMM CPU address space under KVM when SMM is enabled for the
x86 machine.
[*] https://lore.kernel.org/qemu-devel/20250523154431.506993-1-stdcalllevi@yandex-team.ru/
Reported-by: Kirill Martynov <stdcalllevi@yandex-team.ru>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Tested-by: Kirill Martynov <stdcalllevi@yandex-team.ru>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Link: https://lore.kernel.org/r/20250730095253.1833411-2-xiaoyao.li@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 0516f4b702)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
This patch replaces uses of the generic TRANS macro with TRANS64 for
instructions that are only valid when 64-bit support is available.
This improves correctness and avoids potential assertion failures or
undefined behavior during translation on 32-bit-only configurations.
Signed-off-by: WANG Rui <wangrui@loongson.cn>
Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
(cherry picked from commit 96e7448c1f)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Currently when CONFIG_POWERNV is not enabled, the build fails, such as
with --without-default-devices:
$ ./configure --without-default-devices
$ make
[281/283] Linking target qemu-system-ppc64
FAILED: qemu-system-ppc64
cc -m64 @qemu-system-ppc64.rsp
/usr/bin/ld: libqemu-ppc64-softmmu.a.p/target_ppc_misc_helper.c.o: in function `helper_load_sprd':
.../target/ppc/misc_helper.c:335:(.text+0xcdc): undefined reference to `pnv_chip_find_core'
/usr/bin/ld: libqemu-ppc64-softmmu.a.p/target_ppc_misc_helper.c.o: in function `helper_store_sprd':
.../target/ppc/misc_helper.c:375:(.text+0xdf4): undefined reference to `pnv_chip_find_core'
collect2: error: ld returned 1 exit status
...
This is because target/ppc/misc_helper.c references the PowerNV-specific
'pnv_chip_find_core' call.
Split the PowerNV-specific SPRD code out of the generic PowerPC code by
moving the SPRD code to pnv.c.
Fixes: 9808ce6d5c ("target/ppc: Big-core scratch register fix")
Cc: Philippe Mathieu-Daudé <philmd@linaro.org>
Reported-by: Thomas Huth <thuth@redhat.com>
Suggested-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Aditya Gupta <adityag@linux.ibm.com>
Acked-by: Cédric Le Goater <clg@redhat.com>
Message-ID: <20250820122516.949766-2-adityag@linux.ibm.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
(cherry picked from commit 46d03bb23d)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Use extract64(entry, psn, 1) instead of (entry & (1 << psn)) to avoid
undefined behavior for shifts by 32–63 and to make bit extraction intent explicit.
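For illustration, QEMU's extract64() (include/qemu/bitops.h) is essentially the standalone helper below, which stays well defined for any start bit up to 63:
```c
#include <assert.h>
#include <stdint.h>

/* Standalone equivalent of extract64() for illustration: the shift is always
 * performed on a 64-bit value, so psn in [32, 63] is well defined, unlike
 * (entry & (1 << psn)), where 1 << psn is undefined for psn >= 32. */
static inline uint64_t extract64_demo(uint64_t value, int start, int length)
{
    assert(start >= 0 && length > 0 && length <= 64 - start);
    return (value >> start) & (~0ULL >> (64 - length));
}
```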
Found by Linux Verification Center (linuxtesting.org) with SVACE.
Signed-off-by: Denis Rastyogin <gerben@altlinux.org>
Message-ID: <20250814104914.13101-1-gerben@altlinux.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
(cherry picked from commit 1f82ca7234)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
vmsr_open_socket() leaks the Error set by
qio_channel_socket_connect_sync(). Plug the leak by not creating the
Error.
Fixes: 0418f90809 (Add support for RAPL MSRs in KVM/Qemu)
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-ID: <20250723133257.1497640-2-armbru@redhat.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
(cherry picked from commit b2e4534a2c)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Trap PMCR_EL0 or PMCR accesses to EL2 when MDCR_EL2.TPMCR is set.
Similar to MDCR_EL2.TPM, MDCR_EL2.TPMCR allows trapping EL0 and EL1
accesses to the PMCR register to EL2.
Cc: qemu-stable@nongnu.org
Signed-off-by: Smail AIDER <smail.aider@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250811112143.1577055-2-smail.aider@huawei.com
Message-Id: <20250722131925.2119169-1-smail.aider@huawei.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 186db6a73b)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
For all 32-bit systems and 64-bit Windows systems, "long" is 4 bytes long.
Due to the use of "long" for a linear address, svm_canonicalization would
set all high bits to 1 when (assuming a 48-bit linear address) the segment
base is bigger than 0x7FFF.
This fixes booting guests under TCG when the guest IDT and GDT bases are
above 0x7FFF, which previously resulted in incorrect bases. When an
interrupt arrived, it would trigger a #PF exception; the #PF would trigger
again, resulting in a #DF exception; the #PF would then trigger a third
time, resulting in a triple fault and eventually a shutdown VM-Exit to the
hypervisor right after guest boot.
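A standalone sketch of the failure mode, assuming a 48-bit linear address width; it mirrors the sign-extending canonicalization pattern described above rather than quoting the exact QEMU code:
```c
#include <stdint.h>
#include <stdio.h>

#define SHIFT (64 - 48)   /* assumed 48-bit linear address width */

/* With a 32-bit "long" (ILP32, or LLP64 Windows), bit 15 of the base becomes
 * the sign bit after the shifts, so bases above 0x7FFF get all high bits set.
 * This deliberately mirrors the problematic pattern being fixed. */
static uint64_t canonicalize_long(uint64_t base)
{
    return (uint64_t)((((long)base) << SHIFT) >> SHIFT);
}

/* Using a fixed-width int64_t gives the intended behaviour on every host. */
static uint64_t canonicalize_int64(uint64_t base)
{
    return (uint64_t)((((int64_t)base) << SHIFT) >> SHIFT);
}

int main(void)
{
    uint64_t gdt_base = 0x8000;   /* differs only on hosts with 32-bit long */
    printf("long: %#llx  int64_t: %#llx\n",
           (unsigned long long)canonicalize_long(gdt_base),
           (unsigned long long)canonicalize_int64(gdt_base));
    return 0;
}
```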
Cc: qemu-stable@nongnu.org
Signed-off-by: Zero Tang <zero.tangptr@gmail.com>
(cherry picked from commit c12cbaa007)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
This reverts commit 00268e0002.
(The only conflict is in the !is_tdx_vm() part of the condition,
which is safe to keep).
mark_unavailable_features() actively blocks usage of the feature,
so it is a functional change, not merely the emission of a warning.
The commit was intended to merely warn if PDCM was enabled when
the performance counters are not, so revert it.
Reported-by: Christian A. Ehrhardt <christian.ehrhardt@canonical.com>
Analyzed-by: Daniel P. Berrangé <berrange@redhat.com>
Analyzed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-ID: <20250819150235.785559-1-pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
According to the specification, [X]VLDI should trigger an invalid instruction
exception only when Bit[12] is 1 and Bit[11:8] > 12. This patch fixes an issue
where an exception was incorrectly raised even when Bit[12] was 0.
Test case:
```
.global main
main:
vldi $vr0, 3328
ret
```
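The corrected condition, written out as a small standalone predicate (illustrative; not the literal QEMU decode helper):
```c
#include <stdbool.h>
#include <stdint.h>

/* An exception is warranted only when bit 12 is set AND bits [11:8] > 12.
 * The test case above uses imm = 3328 (0xD00): bit 12 is clear, so it must
 * decode normally. */
static bool vldi_imm_is_invalid(uint32_t imm)
{
    return ((imm >> 12) & 1) && (((imm >> 8) & 0xf) > 12);
}
```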
Reported-by: Zhou Qiankang <wszqkzqk@qq.com>
Signed-off-by: WANG Rui <wangrui@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-ID: <20250804132212.4816-1-wangrui@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Currently, the addressable ID encoding for CPUID[0x1].EBX[bits 16-23]
(Maximum number of addressable IDs for logical processors in this
physical package) is covered by vendor_cpuid_only_v2 compat property.
The previous consideration was to avoid breaking migration, but this
compat property makes it unfriendly to backport the commit f985a1195b
("i386/cpu: Fix number of addressable IDs field for CPUID.01H.EBX
[23:16]").
However, NetBSD booting has been broken since the commit 88dd4ca06c
("i386/cpu: Use APIC ID info to encode cache topo in CPUID[4]"),
because NetBSD calculates SMT information via `lp_max` / `core_max` for
legacy Intel CPUs which don't support the 0xb leaf, where `lp_max` is from
CPUID[0x1].EBX.bits[16-23] and `core_max` is from CPUID[0x4].0x0.bits[26-31].
The commit 88dd4ca0 changed the encoding rule of `core_max` but didn't
update `lp_max`, so NetBSD would get the wrong SMT information, which
leads to a module loading failure.
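For reference, the guest-side arithmetic described above boils down to something like this (a sketch of the calculation, not NetBSD's literal code):
```c
#include <stdint.h>

/* lp_max comes from CPUID.01H:EBX[23:16]; core_max comes from
 * CPUID.04H(index 0):EAX[31:26], which the SDM defines as "maximum IDs
 * for processor cores minus one", hence the + 1. */
static uint32_t threads_per_core(uint32_t cpuid1_ebx, uint32_t cpuid4_eax)
{
    uint32_t lp_max   = (cpuid1_ebx >> 16) & 0xff;
    uint32_t core_max = ((cpuid4_eax >> 26) & 0x3f) + 1;
    return lp_max / core_max;
}
```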
Luckily, the commit f985a1195b ("i386/cpu: Fix number of addressable
IDs field for CPUID.01H.EBX[23:16]") updated the encoding rule for
`lp_max` and accidentally fixed the NetBSD issue too. This also shows
that using CPUID[0x1] and CPUID[0x4].0x0 to calculate HT/SMT information
is a common practice to detect CPU topology on legacy Intel CPUs.
Therefore, it's necessary to backport the commit f985a1195b to earlier
stable QEMU releases to help address similar issues as well. The compat
property is then no longer needed, since all stable QEMUs will follow the
same encoding.
So, in CPUID[0x1], move addressable ID encoding out of compat property.
Reported-by: Michael Tokarev <mjt@tls.msk.ru>
Inspired-by: Chuang Xu <xuchuangxclwt@bytedance.com>
Fixes: f985a1195b ("i386/cpu: Fix number of addressable IDs field for CPUID.01H.EBX[23:16]")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/3061
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Tested-by: Michael Tokarev <mjt@tls.msk.ru>
Message-ID: <20250804053548.1808629-1-zhao1.liu@intel.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
The code to handle setting SVE registers via the gdbstub is broken:
* it sets each pair of elements in the zregs[].d[] array in the
wrong order for the most common (little endian) case: the least
significant 64-bit value comes first
* it makes no attempt to handle target_endian()
* it does a simple copy out of the (target endian) gdbstub buffer
into the (host endian) zregs data structure, which is wrong on
big endian hosts
Fix all these problems:
* use ldq_p() to read from the gdbstub buffer
* check target_big_endian() to see if we need to handle the
128-bit values the opposite way around
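A sketch of the corrected per-element store, using the QEMU helpers named above but with the surrounding gdbstub code simplified (not the literal patch):
```c
/* Deposit one 128-bit Z-register element from the target-endian gdbstub
 * buffer into the two host-endian 64-bit halves d[0] (least significant)
 * and d[1] (most significant). */
static void set_zreg_quad(uint64_t *d, const uint8_t *buf, bool target_be)
{
    uint64_t first  = ldq_p(buf);        /* ldq_p() reads in target byte order */
    uint64_t second = ldq_p(buf + 8);

    if (target_be) {
        d[0] = second;                   /* big-endian target: MS half comes first */
        d[1] = first;
    } else {
        d[0] = first;                    /* little-endian target: LS half comes first */
        d[1] = second;
    }
}
```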
Cc: qemu-stable@nongnu.org
Signed-off-by: Vacha Bhavsar <vacha.bhavsar@oss.qualcomm.com>
Message-id: 20250722173736.2332529-3-vacha.bhavsar@oss.qualcomm.com
[PMM: adjusted commit message, fixed spacing]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In the code for allowing the gdbstub to set the value of an AArch64
FP/SIMD register, we weren't accounting for target_big_endian()
being true. This meant that for aarch64_be-linux-user we would
set the two halves of the FP register the wrong way around.
The much more common case of a little-endian guest is not affected;
nor are big-endian hosts.
Correct the handling of this case.
Cc: qemu-stable@nongnu.org
Signed-off-by: Vacha Bhavsar <vacha.bhavsar@oss.qualcomm.com>
Message-id: 20250722173736.2332529-2-vacha.bhavsar@oss.qualcomm.com
[PMM: added comment, expanded commit message, fixed missing space]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In commit 655659a74a we fixed some bugs in the encoding of the
Debug Communications Channel registers, including that we were
incorrectly exposing an AArch32 register at p14, 3, c0, c5, 0.
Unfortunately removing a register is a break of forwards migration
compatibility for TCG, because we will fail the migration if the
source QEMU passes us a cpreg which the destination QEMU does not
have. We don't have a mechanism for saying "it's OK to ignore this
sysreg in the inbound data", so for the 10.1 release reinstate the
incorrect AArch32 register.
(We probably have had other cases in the past of breaking migration
compatibility like this, but we didn't notice because we didn't test
and in any case not that many people care about TCG migration
compatibility. KVM migration compat is not affected because for KVM
we treat the kernel as the source of truth for what system registers
are present.)
Fixes: 655659a74a ("target/arm: Correct encoding of Debug Communications Channel registers")
Reported-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250731134338.250203-1-peter.maydell@linaro.org
In the PMUv3, a new AArch32 64-bit (MCRR/MRRC) accessor for the
PMCCNTR was added. In QEMU we forgot to implement this, so only
provide the 32-bit accessor. Since we have a 64-bit PMCCNTR
sysreg for AArch64, adding the 64-bit AArch32 version is easy.
We add the PMCCNTR to the v8_cp_reginfo because PMUv3 was added
in the ARMv8 architecture. This is consistent with how we
handle the existing PMCCNTR support, where we always implement
it for all v7 CPUs. This is arguably something we should
clean up so it is gated on ARM_FEATURE_PMU and/or an ID
register check for the relevant PMU version, but we should
do that as its own tidyup rather than being inconsistent between
this PMCCNTR accessor and the others.
Since the register name is the same as the 32-bit PMCCNTR, we set
ARM_CP_NO_GDB on the 32-bit one to avoid generating an invalid GDB XML.
See https://developer.arm.com/documentation/ddi0601/2024-06/AArch32-Registers/PMCCNTR--Performance-Monitors-Cycle-Count-Register?lang=en
Note for potential backporting:
* this code in cpregs-pmu.c will be in helper.c on stable
branches that don't have commit ae2086426d
Cc: qemu-stable@nongnu.org
Signed-off-by: Alex Richardson <alexrichardson@google.com>
Message-id: 20250725170136.145116-1-alexrichardson@google.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
On a LoongArch64 system, the high 32 bits of a 64-bit virtual address should
be 0x0000[0-7]yyy or 0xffff[8-f]yyy: bits 47 to 63 must be all 0 or all 1.
Function get_physical_address() only checks bits 48 to 63, so there is a
problem with the following test case. On a physical machine, a bus error is
reported and the program exits abnormally. However, in QEMU TCG system
emulation mode, the program runs normally: the virtual addresses
0xffff000000000000ULL + addr and addr are treated the same during TLB entry
checking. This patch fixes this issue.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void)
{
    void *addr, *addr1;
    int val;

    addr = malloc(100);
    *(int *)addr = 1;
    addr1 = (void *)((uintptr_t)addr + 0xffff000000000000ULL); /* non-canonical alias */
    val = *(int *)addr1;
    printf("val %d\n", val);
    return 0;
}
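A standalone sketch of the canonical-address check implied above (call it with va_bits = 48 for the layout described; not the literal get_physical_address() change):
```c
#include <stdbool.h>
#include <stdint.h>

/* Bits [63 : va_bits - 1] must be all zeros or all ones; anything else,
 * such as 0xffff000000000000 | low_addr, is rejected. */
static bool va_is_canonical(uint64_t va, int va_bits)
{
    int64_t upper = (int64_t)va >> (va_bits - 1);   /* arithmetic shift */
    return upper == 0 || upper == -1;
}
```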
Cc: qemu-stable@nongnu.org
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Acked-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-ID: <20250714015446.746163-1-maobibo@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
RISC-V AIA Spec states:
"For a machine-level environment, extension Smaia encompasses all added
CSRs and all modifications to interrupt response behavior that the AIA
specifies for a hart, over all privilege levels. For a supervisor-level
environment, extension Ssaia is essentially the same as Smaia except
excluding the machine-level CSRs and behavior not directly visible to
supervisor level."
Since midelegh is an AIA machine-mode CSR, add Smaia extension check in
aia_smode32 predicate.
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Jay Chang <jay.chang@sifive.com>
Reviewed-by: Nutty Liu <liujingqi@lanxincomputing.com>
Message-ID: <20250701030021.99218-3-jay.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
RISC-V Privileged Spec states:
"In harts with S-mode, the medeleg and mideleg registers must exist, and
setting a bit in medeleg or mideleg will delegate the corresponding trap,
when occurring in S-mode or U-mode, to the S-mode trap handler. In
harts without S-mode, the medeleg and mideleg registers should not
exist."
Add smode predicate to ensure these CSRs are only accessible when S-mode
is supported.
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Jay Chang <jay.chang@sifive.com>
Reviewed-by: Nutty Liu <liujingqi@lanxincomputing.com>
Message-ID: <20250701030021.99218-2-jay.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
When supervisor CSRs are accessed from VU-mode, a virtual instruction
exception should be raised instead of an illegal instruction.
Fixes: c1fbcecb3a (target/riscv: Fix csr number based privilege checking)
Signed-off-by: Xu Lu <luxu.kernel@bytedance.com>
Reviewed-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Nutty Liu <liujingqi@lanxincomputing.com>
Message-ID: <20250708060720.7030-1-luxu.kernel@bytedance.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
This reverts commit 28c12c1f2f.
As reported in [1] this commit is breaking Linux vector code, and
although a simpler reproducer was provided, the fix itself isn't trivial
due to the amount and the nature of the changes. And we really do not
want to keep Linux broken while we work on it.
The revert will fix Linux and will give us time to do a proper fix.
[1] https://mail.gnu.org/archive/html/qemu-devel/2025-07/msg02525.html
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Tested-by: Eric Biggers <ebiggers@kernel.org>
Message-ID: <20250710100525.372985-1-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
GETPC() should always be called from the top-level helper, i.e. the
first helper that is called by the translation code. We stopped doing
that in commit 3157a553ec, and then we introduced problems when
unwinding the exceptions being thrown by helper_mret(), as reported by
[1].
Call GETPC() at the top level helper and pass the value along.
[1] https://gitlab.com/qemu-project/qemu/-/issues/3020
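The resulting pattern looks roughly like this (hypothetical helper and inner-function names; only GETPC() and riscv_raise_exception() are the real interfaces referred to above):
```c
/* Capture the host return address in the outermost helper only, then pass
 * it down so any exception raised deeper in the call chain unwinds to the
 * correct guest instruction. */
static void do_priv_return(CPURISCVState *env, uintptr_t ra)
{
    /* ... on error: riscv_raise_exception(env, excp, ra); ... */
}

void helper_example_ret(CPURISCVState *env)   /* hypothetical top-level helper */
{
    do_priv_return(env, GETPC());             /* GETPC() is only valid here */
}
```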
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Fixes: 3157a553ec ("target/riscv: Add Smrnmi mnret instruction")
Closes: https://gitlab.com/qemu-project/qemu/-/issues/3020
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Nutty Liu <liujingqi@lanxincomputing.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20250714133739.1248296-1-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
pmp_is_in_range() prefers to match addresses within the interval
[start, end]. To achieve this, pmpaddrX is decremented during the end
address update.
In TOR mode, a rule is ignored if its start address is greater than or
equal to its end address.
However, if pmpaddrX is set to 0, this decrement operation causes the
calculated end address to wrap around to UINT_MAX. In this scenario, the
address guard for this PMP entry would become ineffective.
This patch addresses the issue by moving the guard check earlier,
preventing the problematic wraparound when pmpaddrX is zero.
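A standalone sketch of the reordering, under the assumption that TOR start/end addresses are derived as shown (names and shifts are illustrative, not the literal pmp.c code):
```c
#include <stdbool.h>
#include <stdint.h>

/* Reject the ignored TOR rule before computing the inclusive end address,
 * so a pmpaddrX of 0 can no longer wrap the end around to all-ones. */
static bool pmp_tor_range(uint64_t prev_pmpaddr, uint64_t pmpaddr,
                          uint64_t *start, uint64_t *end)
{
    uint64_t sa = prev_pmpaddr << 2;     /* assumed TOR semantics: addr = pmpaddr << 2 */
    uint64_t ea = pmpaddr << 2;

    if (sa >= ea) {                      /* start >= end: rule is ignored */
        return false;
    }
    *start = sa;
    *end = ea - 1;                       /* safe: cannot underflow here */
    return true;
}
```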
Signed-off-by: Vac Chen <vacantron@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20250706065554.42953-1-vacantron@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
According to the 'MIPS MT Application-Specific Extension' manual:
If the VPE executing the instruction is not a Master VPE,
with the MVP bit of the VPEConf0 register set, the EVP bit
is unchanged by the instruction.
Modify the DVPE/EVPE opcodes to only update the MVPControl.EVP bit
if executed on a master VPE.
Cc: qemu-stable@nongnu.org
Reported-by: Hansni Bu
Buglink: https://bugs.launchpad.net/qemu/+bug/1926277
Fixes: f249412c74 ("mips: Add MT halting and waking of VPEs")
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Message-ID: <20210427133343.159718-1-f4bug@amsat.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Table A-5 of the Intel manual incorrectly lists the third operand of
VINSERTx128 as Wqq, but it is actually a 128-bit value. This is
visible when W is a memory operand close to the end of the page.
Fixes the recently-added poly1305_kunit test in linux-next.
(No testcase yet, but I plan to modify test-avx2 to use memory
close to the end of the page. This would work because the test
vectors correctly have the memory operand as xmm2/m128).
Reported-by: Eric Biggers <ebiggers@kernel.org>
Tested-by: Eric Biggers <ebiggers@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: qemu-stable@nongnu.org
Fixes: 7906847768 ("target/i386: reimplement 0x0f 0x3a, add AVX", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Linux zeroes LORC_EL1 on boot at EL2, without further interaction with FEAT_LOR afterwards.
Stub out LORC_EL1 accesses as FEAT_LOR is a mandatory extension on Armv8.1+.
Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In our implementation of the SVE2p1 contiguous load to 128-bit
element insns such as LD1D (scalar plus scalar, single register), we
got the order of the arguments to the DO_LD1_2() macro wrong. Here
the first argument is the element size and the second is the memory
size, and the element size is always the same size or larger than
the memory size.
For the 128-bit versions, we want to load either 32-bit or 64-bit
values from memory and extend them to the 128-bit vector element, but
were trying to load 128 bit values and then stuff them into 32-bit or
64-bit vector elements. Correct the macro ordering.
Fixes: fc5f060bcb ("target/arm: Implement {LD1, ST1}{W, D} (128-bit element) for SVE2p1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250723165458.3509150-7-peter.maydell@linaro.org
Our implementation of the helper functions for the LD1Q and ST1Q
insns reused the existing DO_LD1_ZPZ_D and DO_ST1_ZPZ_D macros. This
passes the wrong esize (8, not 16) to sve_ldl_z().
Create new macros DO_LD1_ZPZ_Q and DO_ST1_ZPZ_Q which pass the
correct esize, and use them for the LD1Q and ST1Q helpers.
Fixes: d2aa9a804e ("target/arm: Implement LD1Q, ST1Q for SVE2p1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250723165458.3509150-6-peter.maydell@linaro.org
Unlike the "LD1D (scalar + vector)" etc instructions, LD1Q is
vector + scalar. This means that:
* the vector and the scalar register are in opposite fields
in the encoding
* 31 in the scalar register field is XZR, not XSP
The same applies for ST1Q.
This means we can't reuse the trans_LD1_zprz() and trans_ST1_zprz()
functions for LD1Q and ST1Q. Split them out to use their own
trans functions.
Note that the change made here to sve.decode requires the decodetree
bugfix "decodetree: Infer argument set before inferring format" to
avoid a spurious compile-time error about "dtype".
Fixes: d2aa9a804e ("target/arm: Implement LD1Q, ST1Q for SVE2p1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250723165458.3509150-5-peter.maydell@linaro.org
Instead of trying to pack mtedesc into the upper 17 bits of a 32-bit
gvec descriptor, pass the gvec descriptor in the lower 32 bits and
the mte descriptor in the upper 32 bits of a 64-bit operand.
This fixes two bugs:
(1) gen_sve_ldr() and gen_sve_str() call gen_mte_checkN() with a
length value which is the SVE vector length and can be up to 256
bytes. We don't assert there that it fits in the descriptor, so
we would just fail to do the MTE checks on the right length of memory
if the VL is more than 32 bytes
(2) the new-in-SVE2p1 insns LD3Q, LD4Q, ST3Q, ST4Q also involve
transfers of more than 32 bytes of memory. In this case we would
assert at translate time.
(Note for potential backporting: this commit depends on the previous
"target/arm: Expand the descriptor for SME/SVE memory ops to i64".)
Fixes: 7b1613a102 ("target/arm: Enable FEAT_SME2p1 on -cpu max")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20250723165458.3509150-3-peter.maydell@linaro.org
[PMM: expand commit message to clarify that we are fixing bugs here]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We have run out of room attempting to pack both the gvec
descriptor and the mte descriptor into 32 bits.
Here, change nothing except the parameter type, which
affects all declarations, the function typedefs, and the
type used with tcg expansion.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20250723165458.3509150-2-peter.maydell@linaro.org
Commit a2260983c6 ("hvf: arm: Add support for GICv3") added GICv3 support
by implementing emulation for a few system registers. ICC_RPR_EL1 was
defined but not plugged in the sysreg handlers (for no good reason).
Fix it.
Fixes: a2260983c6 ("hvf: arm: Add support for GICv3")
Signed-off-by: Zenghui Yu <zenghui.yu@linux.dev>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20250714160139.10404-3-zenghui.yu@linux.dev
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Quoting Peter Maydell:
" hvf_sysreg_read_cp() and hvf_sysreg_write_cp() do not check the .access
field of the ARMCPRegInfo to ensure that they forbid writes to registers
that are marked with a .access field that says they're read-only (and
ditto reads to write-only registers). "
Before we add more registers in GIC sysreg handlers, let's get it correct
by adding the .access checks to hvf_sysreg_read_cp() and
hvf_sysreg_write_cp(). With that, a sysreg access with invalid permission
will result in an UNDEFINED exception.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Zenghui Yu <zenghui.yu@linux.dev>
Message-id: 20250714160139.10404-2-zenghui.yu@linux.dev
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For the LD1Q instruction (gather load of quadwords) we use the
LD1_zprz pattern with MO_128 elements. At this element size there is
no signed vs unsigned distinction, and we only set the 'u' bit in the
arg_LD1_zprz struct because we share the code and decode struct with
smaller element sizes.
However, we set u=0 in the decode pattern line but then accidentally
asserted that it was 1 in the trans function. Since our usual convention
is that the "default" is unsigned and we only mark operations as signed
when they really do need to extend, change the decode pattern line to
set u=1 to match the assert.
Fixes: d2aa9a804e ("target/arm: Implement LD1Q, ST1Q for SVE2p1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-11-peter.maydell@linaro.org
The FMAXNMQV and FMINNMQV insns use the default NaN as their identity
value for inactive source vector elements. We open-coded this in
sve_helper.c, hoping to avoid a function call. However, this fails
to account for FPCR.AH=1 changing the default NaN value to set the
sign bit. Use a call to floatN_default_nan() to obtain this value.
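The gist of the change, as a one-line sketch (surrounding context omitted; float64_default_nan() is the real softfloat helper, the variable name is illustrative):
```c
/* Take the identity value from the live float_status so that FPCR.AH = 1,
 * which flips the default NaN's sign bit, is respected. */
float64 identity = float64_default_nan(status);
```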
Fixes: 1de7ecfc12 ("target/arm: Implement FADDQV, F{MIN, MAX}{NM}QV for SVE2p1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-10-peter.maydell@linaro.org
In the part of the SVE DO_REDUCE macro used by the SVE2p1 FMAXQV,
FMINQV, etc insns, we incorrectly applied the H() macro twice when
calculating an offset to add to the vn pointer. This has no effect
on little-endian hosts but on big-endian hosts the two invocations
will cancel each other out and we will access the wrong part of the
array.
The "s * 16" part of the expression is already aligned, so we only
need to use the H macro on the "e". Correct the macro usage.
Fixes: 1de7ecfc12 ("target/arm: Implement FADDQV, F{MIN, MAX}{NM}QV for SVE2p1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-9-peter.maydell@linaro.org
When we implemented the FMAXQV and FMINQV insns we accidentally
inverted the sense of the FPCR.AH test, so we gave the AH=1 behaviour
when FPCR.AH was zero, and vice-versa. (The difference is limited to
handling of negative zero and NaN inputs.)
Fixes: 1de7ecfc12 ("target/arm: Implement FADDQV, F{MIN, MAX}{NM}QV for SVE2p1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20250718173032.2498900-8-peter.maydell@linaro.org
FEAT_SVE_B16B16 adds bfloat16 versions of the FMLA and FMLS insns in
the SVE floating-point multiply-add (indexed) insn group. Implement
these.
Fixes: 7b1613a102 ("target/arm: Enable FEAT_SME2p1 on -cpu max")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-7-peter.maydell@linaro.org
FEAT_SVE_B16B16 adds bfloat16 versions of the FMLA and FMLS insns in
the "SVE floating-point multiply-accumulate writing addend" group,
encoded as sz=0b00.
Fixes: 7b1613a102 ("target/arm: Enable FEAT_SME2p1 on -cpu max")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-6-peter.maydell@linaro.org
FEAT_SVE_B16B16 adds a bfloat16 version of the FMUL insn in the
floating-point multiply (indexed) instruction group. The encoding
is slightly bespoke; in our implementation we use MO_8 to indicate
bfloat16, as with the other B16B16 insns.
Fixes: 7b1613a102 ("target/arm: Enable FEAT_SME2p1 on -cpu max")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-5-peter.maydell@linaro.org
FEAT_SVE_B16B16 adds bfloat16 versions of the SVE floating point
(predicated) instructions, which are encoded via sz=0b00. Add the
BFMAX and BFMIN insns. These have separate behaviour for AH=1 and
AH=0; we have already implemented the AH=1 helper for the SME2
versions of these insns.
Fixes: 7b1613a102 ("target/arm: Enable FEAT_SME2p1 on -cpu max")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250718173032.2498900-4-peter.maydell@linaro.org