Do not try to scan (and thus activate) storages which aren't
configured to support (or cannot support) "vdisks" anyway.
Avoids seemingly strange failures of VM migrations due to a backup storage
not currently being online, even if that storage isn't referenced anywhere
in the VM config.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This is a hack and we should get rid of `run_client_cmd` and
`run_raw_client_cmd` as API entries!
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
also `pvesm set` and `pvesm add` should behave the same with
respect to how configuration options are treated
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
we already differentiate between standard and non-standard names and don't
detect or return the VMID in the latter case anyway. Drop it from the RE as
well to allow names like 'vzdump-qemu-template.vma.lzo' without the need
for a fake VMID.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Because we always have 4-digit years, we can simply pass
the year itself to timelocal instead of subtracting 1900.
Like this it will also work for years not in the range 2000-2999.
See also:
https://perldoc.perl.org/Time/Local.html#Year-Value-Interpretation
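For illustration, a minimal sketch of the change (made-up values, not the
actual code):
    use Time::Local qw(timelocal);
    my ($year, $mon, $mday, $hour, $min, $sec) = (2020, 8, 20, 10, 30, 0);
    # Time::Local treats year values >= 1000 as the actual year,
    # so no '- 1900' offset is needed for our 4-digit years
    my $epoch = timelocal($sec, $min, $hour, $mday, $mon - 1, $year);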
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
With Debian Buster it looks like the 'scsi-' method is no longer
reliable. In addition to that method, which is also used for non-multipath
systems, add the 'dm-uuid-mpath-' method as a fallback. This is also used
by openstack (see os-brick
39b201160b/os_brick/initiator/linuxscsi.py (L400))
Also sort the output of readdir so that 'scsi-' always comes after
'dm-uuid-mpath-' and the output of 'pvesm list' does not change for systems
that worked before.
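Roughly, the selection works like this (illustrative sketch only, not the
actual implementation):
    opendir(my $dh, '/dev/disk/by-id') or die "cannot open by-id dir: $!\n";
    my @entries = sort readdir($dh);    # 'dm-uuid-mpath-' sorts before 'scsi-'
    closedir($dh);
    foreach my $entry (@entries) {
        next if $entry =~ m/nvme-eui/;                 # skip nvme-eui links
        next if $entry !~ m/^(dm-uuid-mpath-|scsi-)/;  # only the two methods above
        print "/dev/disk/by-id/$entry\n";              # first match wins
        last;
    }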
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
It would be s/bps/pbs/, but as we already have "proxmox-backup-client"
included in the log through the executable name, it should already be
clear that this is a PBS command - so drop that part entirely.
Now using:
> run: /usr/bin/proxmox-backup-client ...
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Only expect the logfilename if the archive has a standard name.
This also gives a mechanism to get an untainted filename.
archive_info can take either a volume ID or a path as it's
currently implemented. This is useful for vzdump when there
is no storage (i.e. for 'vzdump --dumpdir'). Add a test case for this.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
This reverts commit 95015dbbf2.
parse_volname always gives 'images' and not 'rootdir'. In most
cases the volume name alone does not contain the needed information,
e.g. vm-123-disk-0 can be both a VM volume or a container volume.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
where 'is_std_name' shows whether the backup name uses the standard naming
schema and most likely was created by our tools.
Also adds a '^' to the existing filename matching regex, which
should be fine since basename() is used beforehand.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
when compiling the disk list, add a property with a stable
/dev/disk/by-id/ path for a block device when available.
This is needed to create zpools with the stable by-id links.
The /dev/disk/by-id/ directory can contain multiple links to the same device
(e.g. when it's used as an LVM PV, or one for the wwn/nvme-eui in addition
to the one with vendor and serial). We take the first one which matches
the bus where the disk is attached. For nvme disks we exclude the one
containing the nvme-eui.
The patch assumes that not all disks need to have such a link (e.g.
virtio-block devices as we pass them to guests).
Additionally the tests were adapted to run successfully.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
we want to enforce at least enough strictness that our tools can do
something with a backup archive.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
* run_command is already imported, use that fact
* avoid useless comments just describing what the code tells one
anyway
* restructure a few parts into a more concise/easier to read
implementation.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The rework of the backup file detection logic missed the non-standard
file name case. This patch allows restoring backups with different file
names, though the config extraction fails, since the type is unknown.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
on an undefined value at /usr/share/perl5/PVE/Storage/Plugin.pm line 928
This error message crops up when a file is deleted after the file list was
fetched and before the loop reaches that file entry.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
since it is a valid content type and adapt the path_to_volume_id_test.
Also adds an extra check if all vtype_subdirs are returned.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
The vzdump file name was passed to the regex with its full path. That regex
captures the time from the file name, to calculate the epoch.
As the regex didn't match, the ctime from stat was taken instead. This
resulted in the shown ctime being the time the file was last changed, not
the time the backup was made.
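For illustration, the intended matching on the basename looks roughly like
this (assumed standard name, not the actual regex):
    use File::Basename qw(basename);
    use Time::Local qw(timelocal);
    my $fn = basename('/mnt/backup/dump/vzdump-qemu-123-2019_08_20-10_00_00.vma.lzo');
    if ($fn =~ m/(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\./) {
        my $ctime = timelocal($6, $5, $4, $3, $2 - 1, $1);    # backup creation time
    }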
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Test to reduce the potential for accidental breakage on regex changes.
And to make sure that all vtype_subdirs are parsed.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
with File::stat::stat to minimize variable declarations. And allow mocking
this method in tests instead of the Perl built-in stat.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
detection into separate functions so they are reusable and easier to
modify. This patch also adds the test for archive_info.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
A version newer than Luminous is shipped with Buster, and our
ceph repos are on Nautilus (14.2) in PVE 6.
Allows dropping a check for really old ceph versions (< 10, so
Infernalis and older).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
dmesg: libceph: bad option at 'conf=/etc/pve/ceph.conf'
After the upgrade to PVE 6 with Ceph Luminous, the mount.ceph helper
doesn't understand the conf= option yet, and the CephFS mount with the
kernel client fails. After upgrading to Ceph Nautilus the option exists
in the mount.ceph helper.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
the '.*' was greedy, also consuming all but one digit of the real percentage
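Illustration with a made-up progress line (not the real output):
    my ($p1) = 'rewriting: 95% complete' =~ m/.*(\d+)%/;   # greedy '.*' leaves only '5'
    my ($p2) = 'rewriting: 95% complete' =~ m/.*?(\d+)%/;  # non-greedy captures '95'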
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
switch to \s* instead of .*?, to prevent mis-interpreting potential
strings like '< 50%' or '0-50%'
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
ZFS has supported the -p flag in the list command for a few years now.
Let us use the real byte values and avoid the error-prone calculation
from human-readable numbers, which can lead to incorrect results if the
reported human-readable value is a rounded number.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Getting the volume sizes as byte values instead of values converted to
human-readable units helps to avoid rounding errors in the further
processing if the volume size is more on the odd side.
The `zfs list` command has supported the -p (parseable) flag for a few
years now.
When returning the size in bytes there is no calculation performed and
thus we need to explicitly cast the size to an integer before returning
it.
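For illustration (values made up, -H added here just to skip the header),
the parseable output now consumed looks like:
    > zfs list -Hp -o name,used,available rpool/data/vm-100-disk-0
    rpool/data/vm-100-disk-0        34359738368     107374182400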
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
to guess a valid volname for a targetstorage of a different type.
This makes it possible to migrate raw volumes between 'dir' and 'lvm'
storages.
It is only used when the storage types for the source storage X
and target storage Y differ and should work as long as Y uses
the standard naming scheme (VMID/vm-VMID-name.fmt or vm-VMID-name, respectively).
If it doesn't, we get an invalid name and fail, which is the old
behavior (except if X and Y have different types but the same
non-standard naming scheme, where the old behavior did work).
The original name is preserved, except for a possible extension,
and it is also checked whether the format is valid for the target storage.
Example: mylvm:vm-123-disk-4 <-> mydir:123/vm-123-disk-4.raw
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and also return the ID of the allocated volume. This option
allows plugins to choose a new name if there is a collision.
In storage_migrate, the API version of the receiving side is checked.
In Storage.pm's volume_import, when a plugin returns 'undef',
it can be assumed that the import with the requested volid was
successful (it should've died otherwise) and so volid is returned.
This is done for backwards compatibility with foreign plugins.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Instead of relying on list_volumes of Plugin.pm (which filters by
the content types set in the config), use our own to always
show the LUNs of an iSCSI storage.
This makes sense here, since we need it to show the LUNs when using
it as base storage for LVM (where we have content type 'none' set).
It does not interfere with the rest of the GUI, since on e.g. disk
creation, we already filter the storages in the dropdown by content
type; in other words, an iSCSI storage used this way still does not show
up when trying to create a disk.
This also shows the LUNs now in the 'Content' tab, but this is also
OK, since the user cannot actually do anything there with the LUNs
(besides looking at them).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
With the option valid_target_formats it's possible
to let the caller specify possible formats for the target
of an operation.
[0]: If the option is not set, assume that every format is valid.
In most cases the format of the target and the format
of the source will agree (and therefore assumption [0] is
not actually assuming very much and ensures backwards
compatibility). But when cloning a volume on a storage
using Plugin.pm's implementation (e.g. directory based
storages), the result is always a qcow2 image.
When cloning containers, the new option can be used to detect
that qcow2 is not valid and hence the clone feature is not
available.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
a small grammar fix; and since we now return the ctime of all files and
the remaining storages are planned for the future, omit this hint
completely.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this way the content listing API also returns the vmid, which, among
other things, is useful for filtering in the GUI
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Do not pollute top-level private directory, use "storage" folder but
with backward compatibility.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
1. Avoids the error
qemu-img: The new size must be a multiple of 512
for qcow2 disks.
2. Because volume_import expects disk sizes to be a multiple of 1 KiB.
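As a small illustrative sketch, the size alignment amounts to:
    use POSIX qw(ceil);
    my $size = 4_294_968_123;                   # requested size in bytes (made up)
    my $aligned = ceil($size / 1024) * 1024;    # next multiple of 1 KiB (and thus of 512)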
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
As /etc/pve/priv is already pretty polluted, having a
"<storage-id>.pw" file there smells like it could cause problems in
the future.
So let the PBS password file generator use /etc/pve/priv/storages as
base path.
Other storages should also move to that path in the future, if they
save such secrets anywhere in /etc/pve.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
since we redirect the output to our (insecure) socket, logfunc is only
used for STDERR anyway, so we might as well make it explicit on the
caller side.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
On the first write bringing the unit file into existence we can just
start it; after that we need to tell systemd that we want to actively
reload it.
While this is slightly shaky due to the fact that we do not check all
paths where such a unit could reside, it is something we can do,
because earlier one couldn't have a unit/override anyway (units
generated from procfs mountinfo do not support that), and so adding
such overrides from now on should work.
Also note that we can only get here in the "user does no weird stuff"
case when "cephfs_is_mounted" actively tells us that there is no cephfs
mounted at the $mountpoint - at which time we can safely re-write the
potentially updated unit file, reload and mount again.
So let's make our life a bit easier here until a user actually
complains about a rational issue with this; maybe we have PVE 7.0 by then
and can get rid of that anyway :)
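Conceptually (illustrative sketch, not the actual helper code):
    use PVE::Tools;
    # check before (re-)writing the unit file whether it already existed
    my $unit = 'mnt-pve-cephfs.mount';    # example unit name
    my $unit_existed = -e "/run/systemd/system/$unit";
    # ... write or overwrite the unit file here ...
    PVE::Tools::run_command(['systemctl', 'daemon-reload']) if $unit_existed;
    PVE::Tools::run_command(['systemctl', 'start', $unit]);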
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This fixes a potential race where fuse gets unmounted too late in the
shutdown process, i.e., at a time when the network was already down and
it could not talk to any MDS or monitor anymore.
We could fix it the same way we did once with the kernel-based mount,
i.e., adding _netdev, but doing so would require switching over from
"ceph-fuse" to "mount.fuse.ceph", which has better compatibility with
the common mount tool API.
As that helper exists we can reuse the newer systemd_netmount
ephemeral unit generator; only some options differ in name between the
fuse and kernel variants.
So besides solving a potential issue we get a more unified handling
of those two cases.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
commit 54e0b0034b introduced the
"_netdev" option for PVE 5.3. The systemd generator then correctly
resolved that into the following order dependencies:
> Wants=network-online.target
> Before=umount.target remote-fs.target
> After=remote-fs-pre.target system.slice network.target network-online.target -.mount
This worked well and all were happy. With the current systemd in 6.0
we sometimes get the local-fs ones generated there too. This is a
fallout from an attempt to better handle nested mount hierarchies, where
a .mount unit needs to be mounted or unmounted, before or after,
respectively, the parent mount was processed. It seems that sometimes
that glitches and thus a "RequiresMountsFor=/mnt/pve" gets thrown in,
which sometimes results in the local-fs order constraints being added.
The issue now is that one must not have ordering dependencies on all of
local-fs, local-fs-pre, remote-fs and remote-fs-pre, as that gets you an
ordering cycle. Systemd tries to solve that cycle by randomly
dropping one constraint and retrying. With luck the dropped one is a not
so important unit, and all goes on well. Most of the time one isn't that
lucky and something important gets dropped, for example:
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found ordering cycle on systemd-timesyncd.service/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found dependency on systemd-tmpfiles-setup.service/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found dependency on local-fs.target/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found dependency on mnt-pve-cephfs.mount/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found dependency on remote-fs-pre.target/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found dependency on rbdmap.service/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found dependency on sysinit.target/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Job remote-fs-pre.target/stop deleted to break ordering cycle starting with sysinit.target/stop
Then, most of the time the host reboot hangs for ~10 minutes, often
showing scapegoat units like the pve-ha-lrm as the cause of the
hang (even if no HA is configured >.<).
This behavior is fixed with newer systemd versions, e.g., the v244
from buster-backports, but that is not a real option for us for now.
So until 7.0 we generate the unit with the correct dependencies
directly in the ephemeral /run/ tmpfs backed systemd/system path and
start it.
While FUSE gets only the local-fs ordering constraint, it seems to cope
very well regarding such symptoms. But it _is_ racy and probably only
works because systemd stops it early, as it has hardly any ordering
constraints at all. It should be moved in the future nonetheless, as
there's a mount.fuse.ceph helper, so that should not be an issue.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This is but a hack, but we have no general helper/tools module here
and I do not want to do versioned dependencies for this fast-tracked
bugfix to pve-common, so I'll have to live with the shame for now.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
when listing volumes, otherwise an empty hash can be persisted into the
current worker's $vmlist, which could cause issues at various other API
endpoints.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
We can use 'list_images' to get the desired volume IDs in
'find_free_diskname' for most plugins. For the two LVM plugins, 'list_images'
potentially skips untagged volumes, so we keep the custom version. For the
RBD plugin, 'list_images' is much more costly than the custom version, so we
keep the custom version.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
we need to unprotect more snapshots than just the base one, since we
allow linked clones of regular VM snapshots. Unprotection will only work
if no linked clones exist anymore.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The size is required to be a multiple of volblocksize. Make sure
that the requirement is always met, so ZFS won't complain when we do
things like 'qm resize 102 scsi1 +0.01G'.
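For illustration, the rounding that satisfies this requirement (sketch, with
made-up values):
    my $volblocksize = 8 * 1024;          # example: 8k volblocksize
    my $size = 10_747_907_123;            # requested size in bytes (made up)
    if (my $rest = $size % $volblocksize) {
        $size += $volblocksize - $rest;   # round up to the next multiple
    }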
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Letting LVM set the metadata size internally was not a good idea, as
it produces really small metadata LVs. Adopt the same logic as the
installer.
Signed-off-by: Tim Marx <t.marx@proxmox.com>
Reviewed-By: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Dominik Csapak <d.csapak@proxmox.com>
Those normally come from virtual devices, like an IPMI disk, if no
media is attached. They spam the log really often on operations like
migrate, and are quite scare-mongering. So filter them out.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
it does not work:
disable RBD image features this kernel RBD drivers is not compatible with: fast-diff,object-map,deep-flatten
clone failed: could not disable krbd-incompatible image features 'fast-diff,object-map,deep-flatten' for rbd image: vm-123123123-disk-0@test: rbd: snapshot name specified for a command that doesn't use it
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
since 'pvesm export' and 'pvesm import' are connected via a pipe and
SSH, a fatal error in the former can lead to no valid header being
written to the pipe. handle this more gracefully by printing an easier
to understand error message, instead of uninitialized warnings with no
context.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Modern kernels, like 5.3, support all those features ('fast-diff',
'object-map', 'deep-flatten'), so we do not want to disable them
there. 5.0 already supports exclusive-locks, so no need to disable
exclusive locking there.
Further, we also want to profit from newly available features, so let's
enable those which can be enabled "live" (i.e., after image creation)
if they're available.
While we could also parse the kernel information directly from:
/sys/module/libceph/parameters/supported_features
there's not much advantage to that; features cannot be disabled with
KConfig, and they're also very dependent on the kernel version booted.
So for us it's enough to check that one.
This only affects containers and VMs backed by a storage with KRBD
explicitly enabled. But as the enabling and disabling happens
transparently, it has no effect on the running guest.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The bugfix for #2317 introduced a kind of odd API behavior, where
each volume was returned twice from our API if a storage has both
'rootdir' & 'images' content types enabled. To give the content type
of the volume an actual meaning, it is now inferred from the
associated guest; if there's no guest or we don't have an owner for
that volume, we default to 'images'.
At the volume level, there is no option to list volumes based on
content types, since the volumes do not know what type they are
actually used for.
Signed-off-by: Tim Marx <t.marx@proxmox.com>
When adding a zfspool storage with 'pvesm add' the mount point is now
added automatically to the storage configuration if it can be
determined. path() does not assume the default mountpoint anymore,
fixing #2085.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
When working with several ZFS over iSCSI / LIO storages, we might look
them up within less than the 15 sec cache interval.
Previously, the cache of the previous storage was used, which broke
disk move, for example.
Signed-off-by: Daniel Berteaud <daniel@firewall-services.com>
The common ZFSPlugin was missing volume name parsing
in a few places. This was not a problem for standard
volumes, but broke functionality (like resize,
snapshot, rollback) with linked clones, as the name of
the zvol must be extracted from the entry in the config
(remove the base-X-disk-Y prefix).
Signed-off-by: Daniel Berteaud <daniel@firewall-services.com>
In the default config, emulate_tpu is set to 0, which disables
unmap support. Once enabled, trim can run from the guest to reclaim free
space.
Signed-off-by: Daniel Berteaud <daniel@firewall-services.com>
It's not needed, LIO sees the new size automatically.
And it was broken anyway. Partially fixes #2335
Signed-off-by: Daniel Berteaud <daniel@firewall-services.com>
Using the JSON output, as suggested by Thomas, we now die if the decoding
fails and, if not, all return values are set to the corresponding decoded
values. That should prevent any unforeseen null size values, except if
qemu-img info reports one, which we then consider valid.
Signed-off-by: Tim Marx <t.marx@proxmox.com>
Migration with --targetstorage was broken because of this.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
$1 and $2 get set to undef by the vmid filter regex, so we have to apply
the name/format regex afterwards, else we get errors like:
'use of uninitialized value $1[...]'
and the listing is empty
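Illustration of the pitfall (simplified, made-up names):
    # a later regex match can clobber $1/$2, so copy the captures into
    # lexicals right after the match that produced them
    my $volname = 'vm-100-disk-0';
    if ($volname =~ m/^vm-(\d+)-(\S+)$/) {
        my ($vmid, $name) = ($1, $2);
        # ... only then apply the vmid filter or other regexes ...
    }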
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The patch uses the value from the field 'stored' if it is available.
In Ceph 14.2.2 the storage calculation changed to a per-pool basis. This
introduced an additional field 'stored' that holds the amount of data
that has been written to the pool, while the field 'used' now has the
data after replication for the pool.
The new calculation will be used only if all OSDs are running with the
on-disk format introduced by Ceph 14.2.2.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
To maintain full (backwards) compatibility, leave the type name as
'iso' - this makes this patch work without changing every consumer of
storage APIs.
Note that currently these files can only be attached as a CDROM/DVD
drive, so USB-only images can be uploaded but might not work in VMs.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
and actually do that not just for creating zvols, but also when
activating them. This should fix a range of issues/races that sometimes
occurred on bootup, snapshot rollback or similar operations.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
plugins can still override list_volumes if they want separate methods to
list rootdir and images content.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Previously, the web GUI timed out when removing content (e.g. backup) took
too long. Doing the main part of the API DELETE call in a fork_worker solves
this.
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
since list_volumes is only supposed to be called with filtered content
types, this should ensure that get_subdir is only called for plugins
that have a defined 'path' property, like the old code in
PVE::Storage::template_list did.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
we can only do this here, since the ceph cluster is not aware of
OSD encryption, only the local node is (via ceph-volume and LV tags).
This way, we are able to show an 'encrypted' flag in the disk GUI at least.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
previously ceph included a udev rule to populate
/dev/disk/by-parttypeuuid/
but not anymore, so we now use 'lsblk --json -o path,parttype' to
get a mapping between parttype UUID and partition.
Fix the test by simulating empty lsblk output.
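A rough sketch of building that mapping (illustrative, not the exact code):
    use JSON qw(decode_json);
    my $parttypemap = {};
    my $collect;
    $collect = sub {
        for my $dev (@{ $_[0] // [] }) {
            push @{ $parttypemap->{$dev->{parttype}} }, $dev->{path}
                if defined($dev->{parttype});
            $collect->($dev->{children});    # partitions are nested below their disk
        }
    };
    $collect->(decode_json(`lsblk --json -o path,parttype`)->{blockdevices});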
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
To allow getting closer to finally dropping "pvecm mtunnel".
Code parts were taken from pipe_socket_to_command.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
[regex fixup]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
checking '$server:$subdir' is too strict to work in all circumstances,
e.g. adding/removing a monitor would mean that it is not the same
anymore, same if one is adding/removing the ports from the config.
Check only if the subdir is the same and if it is a CephFS;
this way, it still returns true if someone changes the config.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since, beginning with Nautilus, we only write the mon_host entry to the
config, we have to get the monitor IPs from there as well
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
';' is the beginning of a comment, but in some configuration settings
it is also valid syntax, e.g. for mon_host it is a valid
separator for hosts (sigh ...).
Only remove lines when they start with a ';'.
Since we remove all comments anyway any time we write the ceph conf,
it should not really matter for our users.
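Illustratively, the stricter check boils down to something like:
    my @lines = ('; a comment', 'mon_host = 10.0.0.1;10.0.0.2;10.0.0.3');
    for my $line (@lines) {
        next if $line =~ m/^\s*;/;    # drop only lines starting with ';'
        print "$line\n";              # keep everything else, including ';' separators
    }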
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we want a consistent config hash, regardless of how the user or a tool
adds it to the config, so we map ' ' and '-' to '_' in the keys.
This way we can always access the correct key without trying multiple
times.
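For example (sketch):
    my $raw = 'mon allow pool delete';
    (my $key = $raw) =~ s/[- ]/_/g;    # -> 'mon_allow_pool_delete'
    # 'mon-allow-pool-delete' and 'mon_allow_pool_delete' normalize to the same key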
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
don't require any specific file types. If something is here which can
be requested for deletion over API/CLI, it either comes from an API/CLI
operation and should thus be OK to delete, or a user caused the
creation of the special file, and then it either works and all is good,
or the user gets notified, as we check whether unlink succeeded anyway.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Symlinks with a non-existing target fail Perl's '-f' test and were thus
not deletable via the API (failing with '$path does not exist').
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Broken symlinks (and other files without a size) will now show up as 0
byte instead of causing a format validation error in the API.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Since ceph-fuse is called directly in the CephFS storage plugin, which
cannot process the _netdev option, mounting the CephFS storage fails
when fuse is set in the storage.cfg.
This patch moves the _netdev option into the else branch of the 'if fuse
is set' statement. _netdev is only added if the CephFS kernel client
mounts the storage.
It seems _netdev is not needed anyway for the fuse mount, as the
connection is closed once the fuse process gets killed on shutdown.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
simply chmod the temp file to the "correct" permission mode before
copying, so that all users with access to the directory can read the
file, to mirror the behavior one gets for an apl_download call.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
as already announced over two months ago[0], now remove the unofficial
SheepDog plugin completely. Besides it never being fully
supported in Proxmox VE, one of its main developers and ex-maintainer
declared it abandoned[1], so let's just remove it; git
allows resurrecting it any time if a miracle happens anyway.
[0]: https://pve.proxmox.com/pipermail/pve-user/2019-March/170497.html
[1]: http://lists.wpkg.org/pipermail/sheepdog/2019-March/068449.html
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we will use this for adding a partition to a disk when using a device
for a ceph OSD db/wal which already has partitions on it.
First we search for the highest partition number, then add the partition
and search for the resulting device (we cannot simply
append the number, e.g. from /dev/nvme0n1 we get /dev/nvme0n1pX).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we now expect the first parameter to be either a string with a single
disk, or an array ref with a list of disks.
This way we can get the info of multiple disks simultaneously while
not iterating over all disks.
This will be used to get the info for the OSD/db/wal disks.
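A minimal sketch of accepting either form (illustrative, names made up):
    sub get_disks_info {    # hypothetical helper, not the real function
        my ($disks) = @_;
        $disks = [$disks] if defined($disks) && !ref($disks);    # single disk as string
        for my $disk (@$disks) {
            print "querying $disk\n";    # gather the info for $disk here
        }
    }
    get_disks_info('sdb');                  # one disk
    get_disks_info(['sdb', 'nvme0n1']);     # several disks with one scan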
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Less reading, and the variable's own name should help to grasp
more quickly what it should contain.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
ceph-volume creates OSDs/journals/etc. on LVM instead of partitions,
so to detect them, we have to parse the lv_tags of the LVs and
match them with the underlying device.
Also add tests for this detection.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
With this, users can select disks with LVM on them for journal/db devices,
which will be necessary for Ceph Nautilus, since there
journals/db/wal will be put on an LV.
Of course, when creating an OSD, we have to detect if that
is OK (probably based on the VG name on it).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the delete parameter gets injected by the SectionConfig's
updateSchema, but we need to handle it ourselves in the code.
This makes the following possible:
pvesm set STORAGEID --delete property
Also the API equivalent is now possible. Adapted from the HA
manager's Resource update API call.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>