As reported in the community forum [0], containers requiring cgroup v1
would not start anymore, even when systemd.unified_cgroup_hierarchy=0
was set on the kernel command line. The error message would be:
> cgfsng_setup_limits_legacy: 3442 No such file or directory - Failed to set "memory.limit_in_bytes" to "536870912"
Kernel commit e93d4166b40a ("mm: memcg: put cgroup v1-specific code
under a config option"), which was first shipped in v6.11, made it
necessary to explicitly enable the newly added Kconfig option.
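Whether a given kernel build still offers the v1 memory controller can
be checked directly (a quick sketch, assuming the option introduced by
that commit is CONFIG_MEMCG_V1 and the config is available in /boot):
> grep MEMCG_V1 /boot/config-$(uname -r)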
[0]: https://forum.proxmox.com/threads/156830/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[ TL: note that the commit was first shipped with v6.11 ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fixes rare read corruption issues when using the in-kernel Ceph client.
On incomplete read requests, the clean-tail flag is supposed to
zero-fill the remaining bytes of the subrequest. However, if the iov
iterator is not at the correct position, e.g. because
subreq->transferred was not yet updated, this can zero-fill already
downloaded data, corrupting the read content.
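A rough way to spot such corruption (a sketch only; assumes a CephFS
kernel mount at /mnt/cephfs and a test file with a known-good checksum)
is to re-read the file with cold caches and compare checksums:
> echo 3 > /proc/sys/vm/drop_caches; sha256sum /mnt/cephfs/testfile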
Link to issue:
https://bugzilla.proxmox.com/show_bug.cgi?id=5683
Link to upstream issue:
https://bugzilla.kernel.org/show_bug.cgi?id=219237
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
[ TL: mention a specific example for subreq misalignment ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Reported in the community forum [0].
This fixes an issue with read/write operations on ocfs2 when using
io_uring. The bug caused QEMU to fail to probe the image file format
at [1] because of an unsuccessful read, leaving affected guests
unable to boot.
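The affected read path can also be exercised directly with fio's
io_uring engine (a sketch; assumes fio is installed, /mnt/ocfs2 is an
ocfs2 mount and testfile exists there):
> fio --name=probe --ioengine=io_uring --rw=read --bs=4k --size=1M --filename=/mnt/ocfs2/testfile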
This patch is already merged in Jens Axboe's linux-block tree and is
also included in the mainline v6.12 prepatch kernels:
> # git tag --contains c0a9d496e0fece67db777bd48550376cf2960c47
> v6.12-rc1
> v6.12-rc2
[0] https://forum.proxmox.com/threads/140273/post-702007
[1] https://elixir.bootlin.com/qemu/v9.0.2/source/block.c#L1031
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Cherry-picked from the kernel.org upstream commit
5560a612c20d3daacbf5da7913deefa5c31742f4
The issue was reported through our enterprise support. The customer
contacted the ledmon maintainer, who found that it is not an issue
with ledmon, bisected the kernel, and came up with this fix.
Originally-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Picked from v6.9.8. The bug can cause lost NFS connections according
to upstream, and possibly corrupt backups according to our user
report.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
As reported in the community forum [0], there currently is a memory
leak in the CIFS client code. It can be reproduced by repeatedly
running a backup with CIFS target storage:
> while true; do vzdump 101 --storage cifs --prune-backups keep-last=1; echo 3 > /proc/sys/vm/drop_caches; done
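The leak can be observed alongside the loop, e.g. by watching
available memory shrink between iterations (illustrative only):
> watch -n 10 'grep -E "MemFree|MemAvailable" /proc/meminfo'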
A fix tagged for stable v6.6+ was found on the kernel mailing list.
It does solve the issue, but is not yet included in any (stable)
kernels.
[0]: https://forum.proxmox.com/threads/147603/post-682388
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
It's in master-next of the current Ubuntu Noble kernel git tree, and
a null check cannot really hurt.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The reporter has an Adaptec 5805 controller (using the aacraid
driver), which reports a byteswapped page length for VPD page 0. It
reports "02 00" as page length instead of "00 02".
This stopped working with kernel 6.8.4 due to commit b5fc07a5fb56
("scsi: core: Consult supported VPD page list prior to fetching page").
To address that issue, limit the page search scope to the size of our
VPD buffer, guarding against devices that return a larger page count
than requested.
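What the device actually returns for VPD page 0 can be inspected with
sg3_utils (a sketch; /dev/sdX is a placeholder for the affected
device):
> sg_vpd --hex --page=0 /dev/sdX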
Reported-by: Peter Schneider <pschneider1968@googlemail.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The patch from commit e5731f4 ("backport fix for managing block flush
queue list") caused some fallout when used with LVM on root, as that
uses a rather odd (but previously working) PREFLUSH | POSTFLUSH
request format, which now caused the list to be used without being
initialized, resulting in freezes.
Link: https://lore.kernel.org/all/20240608143115.972486-1-chengming.zhou@linux.dev/
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Reported in the community forum [0] and easy to reproduce by running
e.g.
> while true; do mount -t nfs 192.168.20.148:/rpool/data /mnt/test; done
from another node for a share that does not exist or for which the
client has no permissions.
[0]: https://forum.proxmox.com/threads/146649
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>