adds a new class (intended to be used under nodes in pve-manager)
which adds the three API calls: list, smart and init

list being a general list of the available disks with info
smart being a call to get the SMART data from a given device
init being a call to write a GPT header to an unused disk
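for illustration, the new calls could be exercised via pvesh roughly
like this (exact paths and parameter names are assumed here):

    # assumed invocations, parameter name may differ
    pvesh get /nodes/<node>/disks/list
    pvesh get /nodes/<node>/disks/smart -disk /dev/sdX
    pvesh create /nodes/<node>/disks/init -disk /dev/sdX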
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this adds functions for listing the disks (mostly copied from
the ceph code), for checking whether a disk is a valid block device
and whether it is in use (in a ZFS pool or as an LVM PV), and an
init function which just adds a GPT header; this is important if one
wants to use a fresh disk for ceph journals
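a minimal sketch of the init part, assuming sgdisk(8) does the
actual work (helper name is hypothetical):

    use PVE::Tools qw(run_command);

    # writing a random disk GUID makes sgdisk create the GPT
    # structures on a disk that does not have any yet
    sub init_disk_sketch {
        my ($devpath) = @_;
        run_command(['sgdisk', $devpath, '-U', 'R']);
    }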
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
If you want to use different ceph storages, they sometimes have
different option values, like ms_nocrc = true (there are others as
well). The client needs to specify these special options to be able
to connect.

This patch allows creating a ceph config file for each storeid in
/etc/pve/priv/ceph/$storeid.conf
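Such a file could then look like this, for example (only the option
named above is shown):

    # /etc/pve/priv/ceph/<storeid>.conf
    [global]
         ms_nocrc = true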
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
The PVE team cannot support specialized vendor-specific storage
plugins because of a lack of hardware. But we can allow users to
add their own plugins for their storages without the need to rewrite
any PVE code, and thus ease PVE updates for them.

The idea of this patch is to add the folder /usr/share/perl5/PVE/Storage/Custom
where users can place their plugins; PVE will automatically load
them on start, or warn and continue if it cannot. Maybe we could
even load all plugins (except PVE::Storage::Plugin itself) this way,
because the current storage plugins are not really plugins if they
need to be explicitly loaded in PVE code :-).
Custom plugins MUST have an api() method returning the version they
were designed for. If the API changes on the PVE side, the module is
simply not registered and a warning message is printed to the log, so
the user has to update the module. Until the module is updated, the
corresponding storage will just disappear from PVE, so an API change
should not cause any data damage.
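A minimal skeleton of such a custom plugin might look like this
(package name, type and returned version are just placeholders):

    package PVE::Storage::Custom::ExamplePlugin;

    use strict;
    use warnings;

    use base qw(PVE::Storage::Plugin);

    # storage API version this plugin was written against; if it
    # does not match PVE's, the plugin is skipped with a warning
    sub api {
        return 1;
    }

    sub type {
        return 'example';
    }

    # ... the usual plugin methods (properties, options, path, ...)

    1;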
This approach works (with some limitations) if the plugin works in
the generic PVE way, with full control of the volume lifecycle. It
will currently not work for custom plugins like iSCSI, which need to
select pre-existing volumes. Maybe someone will add a more flexible
way for pve-manager to select input elements for storage plugins to
address this.
Currently tested with my NetApp plugin.
Signed-off-by: Dmitry Petuhov <mityapetuhov@gmail.com>
ssh(1) mentions that compression is only desirable on slow
connections.
since migration from cluster node to cluster node needs a
fast network anyway, we can drop the compression for
a speed improvement
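the change boils down to dropping the compression switch from the
generated command, roughly (exact command line assumed):

    # before: /usr/bin/ssh -C -o BatchMode=yes root@<target> ...
    # after:  /usr/bin/ssh    -o BatchMode=yes root@<target> ...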
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This way we get parameter verification on monitor addresses
as well as the ability to pass multiple `--monhost`
arguments to `pvesm add`.
Since our '-list' schemas default to using commas, we now
need to properly support these as well, so all uses of the monhost
property now replace any of semicolon, space or comma with the
currently required separator character.
This should fix the issues reported by Alwin Antreich on the
pve-user list.
Since this schema supports both ipv6+port notations, we need
to make sure we convert to the bracket-enclosed variant.
Added a helper for this.
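A rough sketch of both steps (function names hypothetical):

    use PVE::Tools;

    # split on any of comma, semicolon or whitespace and rejoin
    # with the separator the consumer currently requires
    sub normalize_monhost_sketch {
        my ($monhost, $sep) = @_;
        return join($sep, PVE::Tools::split_list($monhost));
    }

    # enclose a plain IPv6 address in brackets so that an
    # appended port cannot be misparsed
    sub bracketize_sketch {
        my ($addr) = @_;
        return $addr if $addr =~ /^\[/;      # already bracketed
        return "[$addr]" if $addr =~ /:.*:/; # simplification: two or more colons means IPv6
        return $addr;
    }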
"ceph version" retrieves the version from the cluster (i.e.,
from the queried monitor), but what is needed here is the
local ceph version, which is returned by "ceph --version".
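The difference at the shell:

    # version of the cluster (asks a monitor):
    $ ceph version
    # version of the locally installed binaries:
    $ ceph --version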
Ideally we wouldn't need this, but with the directory storage this
is a user-input field which gets returned by the storage's path()
method, which in turn is used in various external command calls.
By default a directory storage creates its path. In some
cases this can be undesired, mostly when storages have
nested paths (e.g. a dir storage on a ZFS path or in an NFS
share, or inside custom mount points).
As a simple fix to this the 'mkdir' option (default ON)
can now be used to disable this behavior.
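A nested directory storage could then be configured e.g. like this
in storage.cfg (storage id and path invented):

    dir: nfs-backed-dir
         path /mnt/pve/nfs/dir
         mkdir 0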
otherwise mapping those images will fail. disabling the
features only needs to be done once per image, so it makes
sense to do this when creating the images.
unfortunately, the command does not work in hammer, so
it needs a version check for jewel or higher.
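on the command line the per-image step corresponds to something
like this (the feature list is only exemplary):

    # jewel or later; hammer's rbd lacks this subcommand
    rbd feature disable <pool>/<image> deep-flatten fast-diff object-map exclusive-lock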
extract_vzdump_config_tar is an adapted combination
of tar_archive_search_conf() and the first part of
recover_config(), both from PVE::LXC::Create.
a compressed vma backup file needs special error
handling because vma exits as soon as it has found the config
file, which the decompressors used treat as an error.
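a sketch of that special case, assuming the decompressor is piped
into vma (helper name hypothetical):

    use PVE::Tools qw(run_command);

    # vma exits right after printing the config, so the
    # decompressor may die with a broken pipe; only treat the
    # error as fatal if no config was extracted
    sub extract_vma_config_sketch {
        my ($archive) = @_;
        my $raw = '';
        eval {
            run_command([['zcat', $archive], ['vma', 'config', '-']],
                outfunc => sub { $raw .= "$_[0]\n" });
        };
        die $@ if $@ && !length($raw);
        return $raw;
    }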