A user in enterprise support reported a low (<1/50) chance of a
network crash (with soft lockups reported) when exiting ovs-tcpdump (a
tcpdump wrapper provided by Open vSwitch), which they could only
resolve by rebooting the host.
After reporting the issue upstream with a reproducer [1], an OVS
developer submitted a kernel patch which is now included in 6.13 and
some stable kernels. With this patch, the reproducer does not seem to
trigger the issue anymore. Hence, backport the patch.
[1] https://mail.openvswitch.org/pipermail/ovs-discuss/2025-January/053423.html
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Cherry-pick upstream commit f24f669d03f884a6ef95cca84317d0f329e93961
to avoid an unnecessary performance penalty for setups with a new enough
CPU microcode update applied.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adapted to a context change in "arch/x86/kvm/cpuid.h" caused by the
vcpu_supports_xsave_pkru() function that was added by the Proxmox VE
downstream patch "kvm: xsave set: mask-out PKRU bit in xfeatures if
vCPU has no support". Otherwise a clean cherry-pick from linux-next,
no functional changes.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
git automatically adjusts the length of the abbreviated index hashes
when formatting a patch, depending on which references are present in
the submodule. After pulling in the stable tags today, git wanted to
add a character to all hashes for me. Use --full-index when generating
the patches to avoid such issues in the future.
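For example, something along these lines (illustrative invocation; the
exact revision range depends on the tree):
> git format-patch --full-index <revision-range>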
No functional change intended.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This change breaks passthrough of the iGPU on older Intel platforms
(Skylake):
https://forum.proxmox.com/threads/.157266
The patch was originally applied by Ubuntu upstream for an issue
unrelated to passthrough: flickering of the display with these chips.
Some comments suggest that setting intel_iommu=igfx_off does not fix
that issue, while the patch explicitly says it does the same as
setting intel_iommu=igfx_off; my quick glance at the code agrees with
the patch author. The downside is that with the patch you cannot
enable the IOMMU for the iGPU again via the kernel command line.
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2062951
As this is a regression, and our user base does not seem to have
encountered the flickering-display issue, simply revert the change for
now. A proper fix seems to be in the making in Linux upstream
(according to the Launchpad issue).
I tested this on an old machine we had lying around; reverting the
patch suppressed the message:
pci 0000:00:02.0: DMAR: Disabling IOMMU for graphics on this chipset
(I also did not notice any flickering in a short graphical session
(Wayland+KDE)).
I'd suggest pulling this also into our 6.8 kernel (but this can also
happen after we get some feedback that it indeed fixes the issue for
the reporters in the forum).
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
The latest linux-stable pull I found in ubuntu-oracular was for
6.11.5; this fix here seems targeted enough. See also the discussion
upstream:
https://lore.kernel.org/all/20241029163317.GA216411@nvidia.com/
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Fixes rare read-corruption issues when using the in-kernel Ceph
client.
On incomplete read requests, the clean-tail flag should make sure the
remaining bytes of the subrequest are zero-filled.
However, if the iov iterator is not at the correct position, e.g.
because subreq->transferred was not yet updated, this can zero-fill
already-downloaded data, corrupting the read content.
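Conceptually, the clean-tail handling boils down to something like the
following (a simplified sketch using the field names of the upstream
struct netfs_io_subrequest, not the verbatim code):
> /* zero-fill the part of the subrequest that was not downloaded */
> if (test_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags))
>         /* only correct if io_iter already sits behind the
>          * subreq->transferred downloaded bytes, otherwise this
>          * wipes valid data */
>         iov_iter_zero(subreq->len - subreq->transferred,
>                       &subreq->io_iter);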
Link to issue:
https://bugzilla.proxmox.com/show_bug.cgi?id=5683
Link to upstream issue:
https://bugzilla.kernel.org/show_bug.cgi?id=219237
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
[ TL: mention a specific example for subreq misalignment ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Reported in the community forum [0].
This fixes an issue with read/write operations done on ocfs2 with
io_uring. It caused QEMU guests to be unable to determine the image
file format at [1] because of an unsuccessful read, and therefore to
fail to boot; this patch resolves that.
This patch is already merged in Jens Axboe's linux-block tree and also
merged in the mainline v6.12 prepatch kernels:
> # git tag --contains c0a9d496e0fece67db777bd48550376cf2960c47
> v6.12-rc1
> v6.12-rc2
> v6.13-rc3
[0] https://forum.proxmox.com/threads/140273/post-702007
[1] https://elixir.bootlin.com/qemu/v9.0.2/source/block.c#L1031
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Cherry-picked from kernel.org upstream commit
5560a612c20d3daacbf5da7913deefa5c31742f4.
The issue was reported in the enterprise support. The customer
contacted the ledmon maintainer, who found that it is not an issue
with ledmon, bisected the kernel, and came up with this fix.
Originally-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Picked from v6.9.8; the bug can cause lost NFS connections according
to upstream, and possibly corrupted backups according to our user
report.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
As reported in the community forum [0], there currently is a memory
leak in the CIFS client code. Reproduced by running a backup with CIFS
target storage:
> while true; do vzdump 101 --storage cifs --prune-backups keep-last=1; echo 3 > /proc/sys/vm/drop_caches; done
A fix was found on the kernel mailing list, tagged for stable v6.6+;
it does solve the issue, but is not yet included in any (stable)
kernels.
[0]: https://forum.proxmox.com/threads/147603/post-682388
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
It's in master-next of the current Ubuntu Noble kernel git tree, and a
NULL check cannot really hurt.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The reporter has an Adaptec 5805 controller (using the aacraid
driver) which reports a byteswapped page length for VPD page 0: it
reports "02 00" as page length instead of "00 02".
This stopped working with kernel 6.8.4 due to commit b5fc07a5fb56
("scsi: core: Consult supported VPD page list prior to fetching
page").
To address that issue, limit the page search scope to the size of our
VPD buffer, to guard against devices returning a larger page count
than requested.
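In essence (a simplified sketch, not the verbatim upstream diff), the
big-endian length field in bytes 2-3 of page 0 is no longer trusted
beyond what was actually fetched into the buffer:
> /* a byteswapped length like 0x0200 would point far past the
>  * data we requested, so clamp it to the buffer size */
> vpd_len = min_t(int, get_unaligned_be16(&buf[2]) + 4, buf_len);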
Reported-by: Peter Schneider <pschneider1968@googlemail.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The patch from commit e5731f4 ("backport fix for managing block flush
queue list") caused some fallout when used with LVM on root, as that
uses a rather odd (but previously fine) PREFLUSH | POSTFLUSH format
that now caused the list to be used without being initialized,
resulting in freezes.
Link: https://lore.kernel.org/all/20240608143115.972486-1-chengming.zhou@linux.dev/
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Reported in the community forum [0] and easy to reproduce by doing
e.g.
> while true; do mount -t nfs 192.168.20.148:/rpool/data /mnt/test; done
from another node for a share that does not exist or for which the
client has no permissions.
[0]: https://forum.proxmox.com/threads/146649
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The original fix disabled the XSAVES feature for Zen 1/2. The issue
has since been fixed in the CPU's microcode, and this patch keeps the
feature enabled if the microcode version is recent enough to contain
the fix.
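The gist of the upstream patch, as a sketch (function and table names
as in the upstream commit, reproduced from memory):
> static void fix_erratum_1386(struct cpuinfo_x86 *c)
> {
>         /* keep XSAVES enabled when the microcode revision
>          * already contains the erratum 1386 fix */
>         if (x86_cpu_has_min_microcode_rev(erratum_1386_microcode))
>                 return;
>         clear_cpu_cap(c, X86_FEATURE_XSAVES);
> }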
Signed-off-by: Folke Gleumes <f.gleumes@proxmox.com>
With apparmor 4, when recvmsg() calls are checked by the apparmor LSM,
they always return EINVAL.
This causes very weird issues when apparmor profiles are in use, and a
lot of networking issues in containers (which are always using
apparmor).
When coming from sys_recvmsg, msg->msg_namelen is explicitly set to
zero early on (see ____sys_recvmsg in net/socket.c).
We still end up in 'map_addr', where the assumption is that addr !=
NULL means addrlen has a valid size.
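Illustratively, the problematic pattern has roughly this shape (a
hypothetical simplification, not the actual AppArmor sources):
> /* coming from sys_recvmsg: addr != NULL but addrlen == 0, so a
>  * validity check keyed only on addr rejects the call */
> if (addr && addrlen < sizeof(sa_family_t))
>         return -EINVAL;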
This is likely not a final fix; it was suggested by jjohansen on IRC
to get things going until this is resolved properly.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This reverts commit 29cb6fcbb7; user feedback wasn't showing any
positive impact of this patch, and upstream still has no fix for older
stable releases (only for 6.8), so for now rather revert this and wait
for either a better (well, actual) fix or for updating to 6.8 or
newer.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Users have been reporting [1] that VMs occasionally become
unresponsive with high CPU usage for some time (varying between ~1 and
more than 60 seconds). After that time, the guests come back and
continue running. Windows VMs seem most affected (not responding to
pings during the hang, RDP sessions time out), but we also got reports
about Linux VMs (reporting soft lockups). The issue was not present on
host kernel 5.15 and was first reported with kernel 6.2. Users
reported that the issue becomes easier to trigger the more memory is
assigned to the guests. Setting mitigations=off was reported to
alleviate (but not eliminate) the issue. For most users the issue
seems to disappear after (also) disabling KSM [2], but some users
reported freezes even with KSM disabled [3].
It turned out the reports concerned NUMA hosts only, and that the
freezes correlated with runs of the NUMA balancer [4]. Users reported
that disabling the NUMA balancer resolves the issue (even with KSM
enabled).
We put together a Linux VM reproducer, ran a git-bisect on the kernel
to find the commit introducing the issue and asked upstream for help
[5]. As it turned out, an upstream bug report was recently opened [6]
and a preliminary fix to the KVM TDP MMU was proposed [7]. With that
patch [7] on top of kernel 6.7, the reproducer does not trigger
freezes anymore. As of now, the patch (or its v2 [8]) is not yet
merged in the mainline kernel, and backporting it may be difficult due
to dependencies on other KVM changes [9].
However, the bug report [6] also prompted an upstream developer to
propose a patch to the kernel scheduler logic that decides whether a
contended spinlock/rwlock should be dropped [10]. Without the patch,
PREEMPT_DYNAMIC kernels (such as ours) would always drop contended
locks. With the patch, the kernel only drops contended locks if the
kernel is currently set to preempt=full. As noted in the commit
message [10], this can (counter-intuitively) improve KVM performance.
Our kernel defaults to preempt=voluntary (according to
/sys/kernel/debug/sched/preempt), so with the patch it does not drop
contended locks anymore, and the reproducer does not trigger freezes
anymore. Hence, backport [10] to our kernel.
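The core of the backported patch [10] is roughly the following (sketch
of the upstream change, reproduced from the submission):
> static inline int spin_needbreak(spinlock_t *lock)
> {
>         /* only drop a contended lock when the (possibly dynamic)
>          * preemption model is actually preempt=full */
>         if (!preempt_model_preemptible())
>                 return 0;
>         return spin_is_contended(lock);
> }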
[1] https://forum.proxmox.com/threads/130727/
[2] https://forum.proxmox.com/threads/130727/page-4#post-575886
[3] https://forum.proxmox.com/threads/130727/page-8#post-617587
[4] https://www.kernel.org/doc/html/latest/admin-guide/sysctl/kernel.html#numa-balancing
[5] https://lore.kernel.org/kvm/832697b9-3652-422d-a019-8c0574a188ac@proxmox.com/
[6] https://bugzilla.kernel.org/show_bug.cgi?id=218259
[7] https://lore.kernel.org/all/20230825020733.2849862-1-seanjc@google.com/
[8] https://lore.kernel.org/all/20240110012045.505046-1-seanjc@google.com/
[9] https://lore.kernel.org/kvm/Zaa654hwFKba_7pf@google.com/
[10] https://lore.kernel.org/all/20240110214723.695930-1-seanjc@google.com/
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
to silence array-index-out-of-bounds warnings for dynamically-sized
arrays. All commits applied cleanly and just replace array[1] with
array[].
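I.e. the usual conversion (generic illustration, not one of the actual
structs touched):
> struct example {
>         int count;
>         struct item entries[1]; /* before: fake one-element array */
> };
> struct example {
>         int count;
>         struct item entries[];  /* after: C99 flexible array member */
> };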
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This is generating far too much noise in the logs, so keep it at once
per boot until we (and other user-space tools) have adapted to the
kernel wanting user space to choose memfd execution behavior very
explicitly.
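I.e. a change of this shape (illustrative only, not the exact call
site or message):
> /* before: logged for every offending memfd_create() call */
> pr_warn_ratelimited("memfd_create() without MFD_EXEC ...\n");
> /* after: logged once per boot */
> pr_warn_once("memfd_create() without MFD_EXEC ...\n");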
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This improves compatibility for guests w.r.t. live migration, or live
snapshot rollback, to hosts with fewer supported (FPU) xfeatures, as
long as the set of features that was actually exposed to the guest is
still supported.
This improves on the ad856280ddea ("x86/kvm/fpu: Limit guest
user_xfeatures to supported bits of XCR0") bug fix.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This exposes the FLUSHBYASID CPU flag to nested VMs when running on an
AMD CPU. It also reverts a made-up check that would advertise
FLUSHBYASID as not supported. This enables certain modern hypervisors
such as VMware ESXi 7 and Workstation 17 to run nested VMs properly
again.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>