Moved code so that initialization of drivedesc_hash stays a single block.
Avoid autovivification in parse_drive.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
E.g.: if a feature requires 4.1+pveN and we're using machine version 4.2,
we don't need to increase the pve version to N (4.2+pve0 is enough).
We check this by doing a min_version call against a non-existent, higher
pve-version for the major/minor tuple we want to test for, which can
only succeed if the major/minor version alone is already high enough.
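A minimal sketch of that trick, assuming the min_version($verstr, $major, $minor, $pve) helper signature:

    # if the machine version satisfies a +pve revision that cannot exist,
    # then the major/minor tuple alone must already be high enough
    if (min_version($machine_version, 4, 1, 100000)) {
        # a feature requiring 4.1+pveN is covered, no need to bump +pveN
    }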
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Clarify why a cancel is not actually canceling anything here: we're
already finished with the storage migration, the block jobs are all in
ready state, and we (the source) are going to stop soon to hand over to
the target.
> Note that if you issue 'block-job-cancel' after 'drive-mirror' has
> indicated (via the event BLOCK_JOB_READY) that the source and
> destination are synchronized, then the event triggered by this
> command changes to BLOCK_JOB_COMPLETED, to indicate that the
> mirroring has ended and the destination now has a point-in-time
> copy tied to the time of the cancellation
-- qapi/block-core.json (QEMU 4.2)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The change to the prefixed version broke migration from new to old
qemu-server versions. This reverts the change and adds a TODO comment to
switch to the prefixed version with 7.0.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
...instead of booting with an invalid config once and then silently
changing the memory size for subsequent VM starts.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Tested-by: Alwin Antreich <a.antreich@proxmox.com>
This cannot work, since we adjust the 'memory' property of the VM config
on hotplugging, but then the user-defined NUMA topology won't match for
the next start attempt.
The check needs to happen here, since it otherwise fails early with
"total memory for NUMA nodes must be equal to vm static memory".
With this change the error message reflects what is actually happening,
and it doesn't allow VMs with exactly 1GB of RAM either.
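A rough sketch of such a guard (hypothetical names and message, not the exact code):

    # refuse memory hotplug when any user-defined numaX entry exists, since
    # hotplugging rewrites 'memory' and would invalidate that topology
    my $has_custom_numa = grep { /^numa\d+$/ } keys %$conf;
    die "memory hotplug is not possible with a custom NUMA topology\n"
        if $has_custom_numa;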
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Tested-by: Alwin Antreich <a.antreich@proxmox.com>
Reusing the tunnel, which we open to communicate with the target node
and to forward the unix socket for the state migration, for the NBD unix
socket as well requires support for forwarding an array of sockets, not
just a single one. We also have to turn the $sock_addr variable into an
array for the cleanup of the socket files, as SSH does not remove them.
To tell the target node that we support unix sockets for NBD storage
migration, we specify an nbd_protocol_version, which is set to 1. This
version is then passed to the target node via STDIN. Because we don't
want to depend on the order of the arguments passed via STDIN, we also
prefix the spice ticket with 'spice_ticket: '. The target side handles
both the spice ticket and the nbd protocol version, with a fallback for
old source nodes that pass the spice ticket without a prefix.
All arguments are line-based, separated by newlines.
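On the source side the handover could look roughly like this (a sketch following the commit text, not the exact code):

    # line-based, prefixed arguments written to the remote command's STDIN
    my $input = "nbd_protocol_version: 1\n";
    $input .= "spice_ticket: $spice_ticket\n" if $spice_ticket;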
When the NBD server on the target node is started with a unix socket, we
get a different line containing all the information required to start
the drive-mirror. This contains the unix socket path used on the target node
which we require for forwarding and cleanup.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
For secure live migration with local disks via NBD over a unix socket,
the source node has to somehow tell the target node whether it supports
this. This is necessary because there can only be one NBD server, bound
to exactly one socket.
The source node passes that information via STDIN. Support for
'spice_ticket: (...)' is added in addition to 'nbd_protocol_version:
<version>'. As old source nodes send the spice ticket without a prefix,
we still need a fallback for this case. New information should always be
passed with a prefix that is matched explicitly; otherwise it will be
interpreted as a spice ticket.
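A sketch of the parsing on the target side under these assumptions (variable handling simplified, names made up):

    my ($spice_ticket, $nbd_protocol_version);
    while (defined(my $line = <STDIN>)) {
        chomp $line;
        if ($line =~ /^spice_ticket: (.+)$/) {
            $spice_ticket = $1;
        } elsif ($line =~ /^nbd_protocol_version: (\d+)$/) {
            $nbd_protocol_version = $1;
        } else {
            # fallback: old source nodes send the spice ticket unprefixed
            $spice_ticket = $line;
        }
    }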
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
As the NBD server spawned by QEMU can only listen on a single socket,
we depend on a version being passed to vm_start that indicates which
protocol the source node can use, TCP or unix.
The change in socket type (TCP to Unix) comes with a different URI. For
unix sockets it has the form: 'nbd:unix:<path/to/socket>:exportname=<device>'.
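For example, building the URI for a unix socket could look like this (path and export name are placeholders):

    # NBD over a unix socket addresses the server by path instead of host:port
    my $nbd_uri = "nbd:unix:$socket_path:exportname=drive-scsi0";
    # TCP counterpart: "nbd:$host:$port:exportname=drive-scsi0"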
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
With QEMU 4.2 we encountered a problem with unix sockets and SSH socket
forwarding for drive-mirror. It seems the socket gets reopened again and
again after it closes, for some reason. This can be worked around by
issuing 'block-job-cancel' instead of 'block-job-complete' when we're
not interested in switching the disks back from NBD to their original
protocol, which is always the case when we use drive-mirror for
live-migrating a VM.
qemu_drive_mirror is used for migration and for clone_disk, so all in
all we have 3 cases to handle: the 'skip' case, which skips the
completion of the job; the 'wait' case, which was the default before and
still is when $completion is undefined; and the new 'wait_noswap' case,
which is used for live migration.
If 'wait_noswap' is specified, we issue a 'block-job-cancel' once the block
job is in 'ready' state. This completes the block job without swapping the
disks.
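A rough sketch of the resulting dispatch once a job reaches 'ready' state (simplified; mon_cmd and the job id are placeholders):

    if ($completion eq 'wait_noswap') {
        # finish the mirror without switching over to the NBD target
        mon_cmd($vmid, "block-job-cancel", device => $job_id);
    } elsif ($completion eq 'wait') {
        mon_cmd($vmid, "block-job-complete", device => $job_id);
    }
    # 'skip' leaves the completion of the job to the caller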
clone_disk always uses 'block-job-cancel' via the qemu_blockjobs_cancel
sub.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
and make it match what parse_drive does. Even though the 'real' format
was pve-volume-id, callers already expected that parse_drive returns a hash
with a valid 'file' key (e.g. PVE/API2/Qemu.pm:1147ff).
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Since macOS Mojave, Apple ships AppleQEMUGuestAgent by default.
However, it does not fully adhere to the QGA spec, as it expects each
command to be newline-delimited.
This change makes each command newline-delimited, which is harmless for
all other systems (Windows, Linux), but makes the guest agent work out
of the box on OSX without any changes.
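Conceptually the change is tiny (a hedged sketch of the client-side serialization, not the exact code; the socket handle is a placeholder):

    use JSON;
    my $qga_cmd = { execute => 'guest-ping' };  # example command
    # append a newline to each serialized QGA command; standard agents ignore
    # it, while AppleQEMUGuestAgent needs it to detect the end of a command
    my $cmdstr = to_json($qga_cmd) . "\n";
    syswrite($agent_socket, $cmdstr);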
Signed-off-by: Kamil Trzcinski <ayufan@ayufan.eu>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
avoids genisoimage output like:
> Total translation table size: 0
> Total rockridge attributes bytes: 417
> Total directory bytes: 0
> Path table size(bytes): 10
> Max brk space used 0
> 178 extents written (0 MB)
on every VM start.
Rather than showing that useless output, tell genisoimage to be quiet;
it then still prints errors but nothing else. Additionally, print a
short single line noting that we're generating the cloud-init ISO.
Reformat while at it.
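For example (a sketch; the actual invocation, label and paths may differ):

    print "generating cloud-init ISO\n";
    # run_command as provided by PVE::Tools; -quiet suppresses the statistics
    run_command([
        'genisoimage', '-quiet', '-R', '-V', 'cidata',
        '-o', $iso_path, $config_dir,
    ]);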
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Removes the safe_string_ne and safe_num_ne code, which is now shared in
GuestHelpers, and changes all calls to use the shared definitions.
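Call sites then use the shared implementation along these lines (assuming the module is PVE::GuestHelpers):

    use PVE::GuestHelpers;
    # compare old and new value of an option, treating undef consistently
    if (PVE::GuestHelpers::safe_string_ne($conf->{$opt}, $newconf->{$opt})) {
        # option changed
    }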
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
This fixes an issue when migrating a VM with an unused volume of format
qcow2 or vmdk. Since 'snapshots' wasn't set, storage_migrate wanted to
export/import with format raw+size instead. Therefore it used
'qemu-img convert' (instead of just 'dd'), which fails when its output
goes through a pipe. Upon importing, a second error shows up, because
the format from the volume ID doesn't match the format of the stream and
there is no conversion yet.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
LGTM-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
If, for whatever reason, there is no size in the property string of a
drive, 'qm rescan' would do nothing for that drive and live migration
would also fail.
Also adds a check to avoid potential autovivification of
volid_hash->{$volid}.
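The autovivification guard is essentially the following pattern (a generic sketch, not the exact code):

    # a bare $volid_hash->{$volid}->{size} lookup would silently create an
    # empty $volid_hash->{$volid} entry; check the outer key first
    return if !defined($volid_hash->{$volid});
    my $size = $volid_hash->{$volid}->{size};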
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The initialization for the drive keys in $confdesc is changed to be a
single for-loop iterating over the keys of $drivedesc_hash, and the
initialization of the unusedN keys is moved to directly below it.
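In essence (a sketch along the lines of the commit text):

    # register every drive key from the submodule's description hash
    for my $key (keys %$drivedesc_hash) {
        $confdesc->{$key} = $drivedesc_hash->{$key};
    }
    # ...followed directly by the unusedN key initialization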
To avoid the need to change all the call sites, functions with more than
a few callers are exported from the submodule and imported into QemuServer.pm.
For callers of the now imported functions within QemuServer.pm, the prefix
PVE::QemuServer is dropped, because it is unnecessary and now even confusing.
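A hedged sketch of the export/import pattern, with PVE::QemuServer::Drive as the assumed submodule and an example function list:

    # in the submodule
    package PVE::QemuServer::Drive;
    use base qw(Exporter);
    our @EXPORT_OK = qw(parse_drive print_drive drive_is_cdrom);

    # in QemuServer.pm
    use PVE::QemuServer::Drive qw(parse_drive print_drive drive_is_cdrom);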
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
which contains the full descriptions of the drives, and
make parse_drive not depend on $confdesc anymore.
In preparation for moving drive-related code to its own module.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Allow a user to add a virtio-rng-pci (an emulated hardware random
number generator) to a VM with the rng0 setting. The setting is
version_guard()-ed.
Limit the selection of entropy source to one of three:
/dev/urandom (preferred): Non-blocking kernel entropy source
/dev/random: Blocking kernel source
/dev/hwrng: Hardware RNG on the host for passthrough
QEMU itself defaults to /dev/urandom (or the equivalent getrandom()
call) if no source file is given, but I don't fully trust that
behaviour to stay constant, considering the documentation [0] already
disagrees with the code [1], so let's always specify the file ourselves.
/dev/urandom is preferred, since it prevents host entropy starvation.
The quality of randomness is still good enough to emulate a hwrng, since
a) it's still seeded from the kernel's true entropy pool periodically
and b) it's mixed with true entropy in the guest as well.
Additionally, all sources about entropy prediction attacks I could find
mention that to predict /dev/urandom results, /dev/random has to be
accessed or manipulated in one way or another; this is not possible from
a VM, however, as the entropy we're talking about comes from the
*host's* blocking pool.
More about the entropy and security implications of the non-blocking
interface in [2] and [3].
Note further that only one /dev/hwrng exists at any given time; if
multiple RNGs are available, only the one selected in
'/sys/devices/virtual/misc/hw_random/rng_current' will feed the file.
Selecting this is left as an exercise to the user, if at all required.
We limit the available entropy to 1 KiB/s by default, but allow the user
to override this. Interesting to note is that the limiter does not work
linearly, i.e. max_bytes=1024/period=1000 means that up to 1 KiB of data
becomes available on a 1000 millisecond timer, not that 1 KiB is
streamed to the guest over the course of one second, hence the
configurable period.
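As an illustration, a VM config entry with the defaults discussed here (the exact property-string syntax is an assumption, not quoted from the docs):

    rng0: source=/dev/urandom,max_bytes=1024,period=1000

which should roughly map to QEMU arguments of the form
'-object rng-random,filename=/dev/urandom,id=rng0' and
'-device virtio-rng-pci,rng=rng0,max-bytes=1024,period=1000'.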
The default used here is the same as given in the QEMU documentation [0]
and has been verified to affect entropy availability in a guest by
measuring /dev/random throughput. 1 KiB/s is enough to avoid any
early-boot entropy shortages, and already has a significant impact on
/dev/random availability in the guest.
[0] https://wiki.qemu.org/Features/VirtIORNG
[1] https://git.qemu.org/?p=qemu.git;a=blob;f=crypto/random-platform.c;h=f92f96987d7d262047c7604b169a7fdf11236107;hb=HEAD
[2] https://lwn.net/Articles/261804/
[3] https://lwn.net/Articles/808575/
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>