This will create a new drive for each local drive found,
and start the VM with these new drives.
If targetstorage == 1, we use the same storage ID (sid) as the original VM disk.
An NBD server is started in QEMU and exposes the local volumes on a network port.
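for illustration, the QMP side of exporting a local drive over NBD looks roughly like this (address, port and device name are placeholders, not taken from this patch):
{ "execute": "nbd-server-start",
  "arguments": { "addr": { "type": "inet",
                           "data": { "host": "0.0.0.0", "port": "60000" } } } }
{ "execute": "nbd-server-add",
  "arguments": { "device": "drive-scsi0", "writable": true } }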
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
we can use multiple drive_mirror jobs in parallel.
block-job-complete can be skipped if we want to add more mirror jobs later.
also add support for NBD URIs to qemu_drive_mirror
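as a sketch, a mirror job targeting such an NBD export could be started with something like this (values illustrative, format and error handling omitted):
{ "execute": "drive-mirror",
  "arguments": { "device": "drive-scsi0",
                 "target": "nbd:10.0.0.2:60000:exportname=drive-scsi0",
                 "sync": "full",
                 "mode": "existing" } }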
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
otherwise we end up with undeletable VM configs in case
vdisk_free fails (which could happen because of cluster-wide
lock contention, storage problems, ...).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Output before this patch:
INFO: include disk 'scsi0' 'file=pve4tank:vm-402-disk-1'
Output after this patch:
INFO: include disk 'scsi0' 'file=pve4tank:vm-402-disk-1' 64G
we're mainly interested in the volume size here; it was requested in #351
we can migrate local snapshots when on zfs, or on dir storage with qcow2,
but the check was incorrect:
we checked for if (zfs && !qcow2) instead of if (zfs || qcow2)
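in other words, the corrected condition is roughly this (variable names illustrative, not the exact code):
# snapshots are migratable if the volume is on zfspool OR is a qcow2 image
my $migratable = ($scfg->{type} eq 'zfspool') || ($format eq 'qcow2');
die "cannot migrate local snapshots\n" if !$migratable;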
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
When trying to migrate a VM from a node with qemu-server <= 4.0-92 to
a node with qemu-server >= 4.0-93 we failed, as the remote qemu-server
got no explicit 'migration_type' from the older qemu-server on the
source.
Check if migration_type is defined on an incoming migration start; if
not, set it.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The qmp command 'guest-fsfreeze-freeze' issues a FIFREEZE ioctl call
in Linux on all mounted guest filesystems.
This ioctl call locks the filesystem and gets it into a consistent
state. For this, all caches must be synced after blocking new writes
to the FS, which may need a relatively long time, especially under high
IO load on the backing storage.
In Windows a VSS (Volume Shadow Copy Service) request_freeze is
issued. Due to the closed nature of Windows the exact mechanism cannot
be checked, but some Microsoft blog posts and other forum posts suggest
that it should return fast; certain workloads can still trigger a
long delay, resulting in similar problems.
Thus try to minimize the error probability and increase the timeout
significantly.
We use 60 minutes as timeout, as this seems to be a limit which should
not be exceeded on a somewhat healthy system.
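conceptually the change boils down to something like this ($timeout and $cmd are illustrative names, not the exact code):
# pick a much larger timeout for the potentially slow freeze command
$timeout = 60*60 if $cmd->{execute} eq 'guest-fsfreeze-freeze';   # 60 minutes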
See:
https://forum.proxmox.com/threads/22192/
see the 'freeze_super' and 'thaw_super' functions in fs/super.c from
the Linux kernel tree for more details on the freeze behavior in
Linux guests.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
All special flags for Windows 8 and Windows 2012 (win8 type)
are kept the same, since we set flags based on checking whether the
version matched by /^win(\d+)$/ is greater than 6 or 7
because these allow adding arbitrary devices to VMs (and
other potentially dangerous things).
whitelist 'info *' and 'help' as usable with just
VM.Monitor; if more are desired and requested, they can be
added later.
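a rough sketch of such a whitelist check ($command and $has_full_monitor_access are hypothetical names, not the exact code):
# with only VM.Monitor, allow nothing but 'info ...' and 'help'
if (!$has_full_monitor_access) {
    die "permission denied - command not allowed with VM.Monitor only\n"
        if $command !~ m/^\s*(info(\s+.*)?|help)\s*$/;
}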
like for snapshots, we need to check if krbd is enabled, to know
whether we need to use the QMP delete-drive-snapshot command or the storage command directly
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This cleans up Windows guest OS version handling,
normalizing ostype to a number in a new windows_version sub:
sub windows_version {
    my ($ostype) = @_;
    return 0 if !$ostype;
    my $winversion = 0;
    if ($ostype eq 'wxp' || $ostype eq 'w2k3' || $ostype eq 'w2k') {
        $winversion = 5;
    } elsif ($ostype eq 'w2k8' || $ostype eq 'wvista') {
        $winversion = 6;
    } elsif ($ostype =~ m/^win(\d+)$/) {
        $winversion = $1;
    }
    return $winversion;
}
so we can simply test against the Windows version with lower/greater comparisons.
Hyper-V enlightenments configuration is centralized
in a new add_hyperv_enlighments sub.
Also disable Hyper-V for win < 8 with OVMF.
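checks then become simple numeric comparisons, e.g. ($conf and $ovmf are illustrative names only):
my $winversion = windows_version($conf->{ostype});
if ($winversion >= 6) {
    # Vista / Server 2008 or newer
}
if ($winversion < 8 && $ovmf) {
    # skip the Hyper-V enlightenments in this case
}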
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Without this patch we use the network where the cluster traffic runs
for sending migration traffic. This is not ideal as it may hinder
cluster traffic. Further, some users have a powerful network which
would be perfect for migrations; with this patch they can run the
migration traffic over such a network without having the corosync
traffic on the same network.
The network is configurable through /etc/pve/datacenter.cfg which
got a new property, namely migration. migration has two
subproperties: type (replaces the old migration_unsecure property)
and network.
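for example, in /etc/pve/datacenter.cfg (the network value here is illustrative):
migration: type=secure,network=10.20.30.0/24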
For the case of a network failure, or when a VM has to be moved over
another network for arbitrary other reasons, I added the
migration_type and migration_network parameters to qm migrate (and
to vm_start, as this gets used on migration).
They allow overriding the datacenter.cfg settings.
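for example (vmid, target node and network are placeholders):
qm migrate 100 node2 --online --migration_type insecure --migration_network 10.99.0.0/24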
Fixes bug #1177
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Before patch:
INFO: exclude disk 'scsi1' (backup=no)
INFO: skip unused drive 'local:401/vm-401-disk-3.raw' (not included into backup)
INFO: skip unused drive 'local:401/vm-401-disk-1.raw' (not included into backup)
After patch applied:
INFO: include disk 'scsi0' local:401/vm-401-disk-4.qcow2
INFO: exclude disk 'scsi1' local:401/vm-401-disk-2.raw (backup=no)
INFO: include disk 'scsi2' pve4tank:vm-401-disk-1
INFO: skip unused drive 'local:401/vm-401-disk-3.raw' (not included into backup)
INFO: skip unused drive 'local:401/vm-401-disk-1.raw' (not included into backup)
Let 'cdrom' use the pve-qm-ide format, as it's supposed to
be an alias to ide2.
We're not using the 'alias' schema property since the qemu
configs still use a custom parser (due to the
pending-changes system and the filename-to-volume-id
conversion for legacy support) which does not deal with
schema aliases.
before copying the efidisk image to a storage,
we first have to activate the volume.
also, add the -n flag to qemu-img convert (prevents
creation of the target volume)
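roughly, the order of operations is (paths and variable names illustrative, not the exact code):
# make sure the target volume is activated/mapped before writing to it
PVE::Storage::activate_volumes($storecfg, [$volid]);
# copy into the already existing volume; -n prevents (re)creation of the target
PVE::Tools::run_command(['qemu-img', 'convert', '-n', '-f', 'raw', '-O', 'raw',
                         $efivars_src, $target_path]);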
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when restoring into an existing VM, we don't want to die
half-way through because we can't delete one of the existing
volumes. instead, warn about the deletion failure, but
continue anyway. the disk that was not deleted is then added as
unused automatically.
this adds a bootsplash image in /usr/share/qemu-server
and, if this file exists, uses it for SeaBIOS
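on the kvm command line this ends up as a -boot option along the lines of (file name illustrative):
-boot menu=on,splash=/usr/share/qemu-server/bootsplash.jpg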
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if efidisk0 is defined, use it as an efivars disk
to permanently store EFI variables (such as boot options).
we check if the files exist, and act accordingly
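a common way to wire this up on the kvm command line are two pflash drives, roughly (file paths are placeholders, not necessarily what this patch generates):
-drive if=pflash,format=raw,readonly=on,file=<OVMF firmware image>
-drive if=pflash,format=raw,file=<efidisk0 volume path>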
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when we create the efidisk0 over the API,
we discard the size, create a 128 KB vdisk
and copy the image there with qemu-img convert
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
just a simple disk entry (only size, format and volid) for the
efivars disk.
also, do not add it to the command line in foreach_drive
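in the VM config this is a minimal drive entry, e.g. (volid illustrative):
efidisk0: local:100/vm-100-disk-1.raw,size=128K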
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
drive-mirror is not working with qemu 2.6 when iothread is enabled.
with virtio-blk: mirror works, but block-job-complete crashes the VM.
with virtio-scsi: mirror hangs at start.
This should be fixed in qemu 2.7
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
foreach_dimm() provides a guest numa node index; when used
in conjunction with the guest-to-host numa node topology
mapping, one has to make sure that the correct host-side
indices are used.
This covers situations where the user defines a numaX with
hostnodes=Y with Y != X.
For instance:
cores: 2
hotplug: disk,network,cpu,memory
hugepages: 2
memory: 2048
numa: 1
numa1: memory=512,hostnodes=0,cpus=0-1,policy=bind
numa2: memory=512,hostnodes=0,cpus=2-3,policy=bind
Both numa IDs 1 and 2 passed by foreach_dimm() have to be
mapped to host node 0.
Note that this also reverses the foreach_reverse_dimm() numa
node numbering as the current code, while walking sizes
backwards, walked the numa IDs inside each size forward,
which makes more sense. (Memory hot-unplug is still working
with this.)
we did not check if some values were hash refs in
the verbose output.
this patch adds a recursive hash print sub and uses it
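a minimal sketch of such a sub (illustrative, not the exact code from this patch):
sub print_recursive_hash {
    my ($prefix, $value, $key) = @_;
    if (ref($value) eq 'HASH') {
        print "$prefix$key:\n" if defined($key);
        foreach my $k (sort keys %$value) {
            print_recursive_hash("$prefix    ", $value->{$k}, $k);
        }
    } elsif (!ref($value)) {
        print "$prefix$key: $value\n";
    }
}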
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this API call changes the config quite drastically, and as
such should not be possible while an operation that holds a
lock is ongoing (e.g., migration, backup, snapshot).
we have a few problems with hotplug at the moment:
qemu may add usb hubs when adding usb devices, but fails to remove them
when removing the usb device (this is a qemu bug).
also, when starting a guest with a usb device, we add ehci and uhci
controllers, which we cannot hot-unplug.
with those devices it is impossible to live-migrate the guest
to another host, meaning even if you remove all usb devices,
the migration fails.
so we deactivate usb hotplugging for now
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this patch introduces working usb hotplugging:
you can now add a usb device while a vm is running.
this does not work with spice at the moment, only
with usb passthrough
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since usb devices do not have their own
"query" command in qmp, we have to use
qom-list /machine/peripheral
which essentially gets a list of the peripheral devices of
the vm; from those we then pick only the usb devices
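i.e. roughly this QMP request, whose result is then filtered for the usb devices:
{ "execute": "qom-list",
  "arguments": { "path": "/machine/peripheral" } }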
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
vm configuration
----------------
hugepages: (any|2|1024)
any: we'll try to allocate 1GB hugepages if possible; if not, we use 2MB hugepages
2: we want to use 2MB hugepages
1024: we want to use 1GB hugepages (memory needs to be a multiple of 1GB in this case)
optional host configuration for 1GB hugepages
---------------------------------------------
1GB hugepages can be allocated at boot if the user wants it.
hugepages need to be contiguous, so sometimes it's not possible to reserve them on the fly
/etc/default/grub : GRUB_CMDLINE_LINUX_DEFAULT="quiet hugepagesz=1G hugepages=x"
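to verify the reservation after boot, the kernel's hugepage counters can be checked, e.g.:
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages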
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
foreach_volid recurses over snapshots as well, resulting in
lots of repeated checks (especially for VMs with lots of
snapshots and disks).
a potential vmstate volume must be checked explicitly,
because foreach_drive does not care about those.