QMP and monitor helpers are moved from QemuServer.pm.
By using only vm_running_locally instead of check_running, a cyclic
dependency on QemuConfig is avoided. This also means that the $nocheck
parameter no longer serves a purpose and has thus been removed along
with vm_mon_cmd_nocheck.
Care has been taken to avoid errors resulting from this; where
necessary, a manual check for the VM's existence has been inserted at
the call site.
Methods have been renamed to avoid redundant naming:
* vm_qmp_command -> qmp_cmd
* vm_mon_cmd -> mon_cmd
* vm_human_monitor_command -> hmp_cmd
mon_cmd is exported since it has many users. This patch also changes all
non-package users of vm_qmp_command to use the mon_cmd helper. Includes
mocking for tests.
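For illustration, a typical call site changes roughly like this
(assuming the helpers land in a module such as PVE::QemuServer::Monitor;
the command and arguments are only examples):

    # before: fully qualified call into QemuServer.pm
    my $res = PVE::QemuServer::vm_mon_cmd($vmid, 'query-status');

    # after: mon_cmd is exported, so a plain import suffices
    # (module name assumed for this sketch)
    use PVE::QemuServer::Monitor qw(mon_cmd);
    my $res = mon_cmd($vmid, 'query-status');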
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
The codepath for "any" hugepages did not check whether the memory size
was even, so the code below it tried to allocate half a hugepage (e.g. a
VM with 2049MiB of RAM would lead to 1024.5 2MB hugepages).
Also improve the error message for systems with only 1GB hugepages
enabled.
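A guard along these lines (identifiers hypothetical, not the exact
patch) catches the odd-size case up front:

    # reject odd memory sizes before computing 2MB page counts
    my $memory = 2049;    # MiB, as read from the VM config
    die "memory size ($memory MB) must be even when using 2MB hugepages\n"
        if $memory % 2;   # 2049MiB / 2MiB = 1024.5 pages otherwise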
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
As reported in bug #2402, a system started with "default_hugepagesz=1G
hugepagesz=1G" does not have a /sys/kernel/mm/hugepages/hugepages-2048kB
directory.
To fix, ignore the missing directory in hugepages_mount (since it might
not be needed anyway), and correctly check if the requested hugepage
size is available in hugepages_size instead.
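A sketch of the availability check (the sysfs layout is standard; the
surrounding code is illustrative):

    # hugepage sizes supported by the running kernel show up as
    # /sys/kernel/mm/hugepages/hugepages-<size>kB directories
    my $size_kb = 2048;   # requested hugepage size in kB
    die "hugepage size ${size_kb}kB not supported by the kernel\n"
        if ! -d "/sys/kernel/mm/hugepages/hugepages-${size_kb}kB";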
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
when no numaX config options were present, we returned the hash as a
list instead of as a hash reference...
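In miniature, the fix amounts to always handing back a reference
(sketch only, not the actual function):

    sub numa_options {
        my ($conf) = @_;
        my $numa = {};
        # ... populated from numaX entries; may legitimately stay empty ...
        return $numa;   # always a hash reference, even when empty
    }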
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Currently Proxmox VE always deallocates hugepages (HugeTLB) when
starting a new machine, which makes it impossible to preconfigure a
persistent allocation via the kernel command line.
This change makes deallocation prefer the defaults set in
/proc/cmdline, by parsing the cmdline and respecting hugepages= and
hugepagesz=.
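A sketch of the parsing step (regexes illustrative; the real patch may
differ):

    # e.g. "... default_hugepagesz=1G hugepagesz=1G hugepages=16 ..."
    open(my $fh, '<', '/proc/cmdline') or die "failed to read /proc/cmdline: $!\n";
    my $cmdline = <$fh> // '';
    close($fh);
    my ($boot_pages) = $cmdline =~ /\bhugepages=(\d+)/;
    my ($boot_size)  = $cmdline =~ /\bhugepagesz=(\d+[kKmMgG])/;
    # keep at least the boot-time reservation, deallocate only the excess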
Signed-off-by: Kamil Trzciński <ayufan@ayufan.eu>
foreach_dimm() provides a guest numa node index; when used in
conjunction with the guest-to-host numa node topology mapping, one has
to make sure that the correct host-side indices are used.
This covers situations where the user defines numaX with hostnodes=Y
where Y != X.
For instance:
cores: 2
hotplug: disk,network,cpu,memory
hugepages: 2
memory: 2048
numa: 1
numa1: memory=512,hostnodes=0,cpus=0-1,policy=bind
numa2: memory=512,hostnodes=0,cpus=2-3,policy=bind
Both numa IDs 1 and 2 passed by foreach_dimm() have to be
mapped to host node 0.
Note that this also reverses the foreach_reverse_dimm() numa node
numbering: the current code, while walking sizes backwards, walked the
numa IDs inside each size forward. Reversing the IDs as well makes more
sense. (Memory hot-unplug still works with this.)
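A sketch of the corrected lookup (names hypothetical, condensed from
the example above):

    my $conf = { numa1 => 'memory=512,hostnodes=0,cpus=0-1,policy=bind' };
    my $guest_node = 1;   # index as provided by foreach_dimm()
    # the host node must be read from the numaX option, not assumed
    # equal to the guest index
    my ($host_node) = $conf->{"numa$guest_node"} =~ /hostnodes=(\d+)/;
    # both numa1 and numa2 in the example resolve to host node 0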
vm configuration
----------------
hugepages: (any|2|1024)
any: we'll try to allocate 1GB hugepages if possible; if not, we fall back to 2MB hugepages
2: we want to use 2MB hugepages
1024: we want to use 1GB hugepages (memory needs to be a multiple of 1GB in this case)
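example vm config requesting 1GB hugepages (memory is a multiple of 1024):

hugepages: 1024
memory: 4096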
optional host configuration for 1GB hugepages
---------------------------------------------
1GB hugepages can be allocated at boot if the user wants them.
hugepages need to be contiguous, so sometimes it's not possible to
reserve them on the fly
/etc/default/grub : GRUB_CMDLINE_LINUX_DEFAULT="quiet hugepagesz=1G hugepages=x"
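after reboot, the 1GB reservation can be verified with:
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages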
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>