Try to guess a valid volname for a target storage of a different type.
This makes it possible to migrate raw volumes between 'dir' and 'lvm'
storages.
It is only used when the storage types of the source storage X and
the target storage Y differ, and it should work as long as Y uses the
standard naming scheme (VMID/vm-VMID-name.fmt and vm-VMID-name,
respectively).
If it doesn't, we get an invalid name and fail, which is the old
behavior (except if X and Y have different types but the same
non-standard naming scheme, where the old behavior did work).
The original name is preserved, except for a possible extension, and
it is also checked whether the format is valid for the target storage.
Example: mylvm:vm-123-disk-4 <-> mydir:123/vm-123-disk-4.raw
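A minimal sketch of the guessing, assuming the standard naming
schemes above (the function name and scheme details are illustrative):

    sub guess_volname_for_storage {
        my ($target_type, $volname) = @_;

        # standard schemes: "VMID/vm-VMID-name.fmt" (dir) or "vm-VMID-name" (lvm)
        my ($vmid, $name, $fmt) =
            $volname =~ m!^(?:\d+/)?vm-(\d+)-([^/.]+)(?:\.(raw|qcow2|vmdk))?$!;
        die "cannot parse volname '$volname'\n" if !defined($vmid);

        if ($target_type eq 'dir') {
            $fmt //= 'raw'; # e.g. an LVM source volume is always raw
            return "$vmid/vm-$vmid-$name.$fmt";
        } elsif ($target_type eq 'lvm') {
            die "format '$fmt' is not valid for LVM\n"
                if defined($fmt) && $fmt ne 'raw';
            return "vm-$vmid-$name";
        }
        die "no guess for storage type '$target_type'\n";
    }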
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Let volume_import optionally rename volumes on collision and also
return the ID of the allocated volume. This option allows plugins to
choose a new name if there is a collision.
In storage_migrate, the API version of the receiving side is checked.
In Storage.pm's volume_import, when a plugin returns 'undef', it can
be assumed that the import with the requested volid was successful
(it should have died otherwise), and so volid is returned.
This is done for backwards compatibility with foreign plugins.
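For illustration, the fallback in Storage.pm's volume_import then
boils down to roughly this (the exact argument list is assumed):

    my $imported_volid = $plugin->volume_import(
        $scfg, $storeid, $fh, $volname, $format, $base_snapshot,
        $with_snapshots, $allow_rename);
    # foreign plugins written against the old API return undef on
    # success, in which case the import used the requested volid
    return $imported_volid // $volid;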
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Instead of relying on list_volumes of Plugin.pm (which filters by the
content types set in the config), use our own implementation to
always show the LUNs of an iSCSI storage.
This makes sense here, since we need it to show the LUNs when using
it as a base storage for LVM (where we have content type 'none' set).
It does not interfere with the rest of the GUI, since on, e.g., disk
creation, we already filter the storages in the dropdown by content
type; in other words, an iSCSI storage used this way still does not
show up when trying to create a disk.
This also shows the LUNs in the 'Content' tab now, but that is OK
too, since the user cannot actually do anything with the LUNs there
(besides looking at them).
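Sketched, the plugin-local override looks roughly like this (the
body is illustrative; only the bypass of the content-type filter
matters):

    sub list_volumes {
        my ($class, $storeid, $scfg, $vmid, $content_types) = @_;
        # deliberately ignore $content_types: the LUNs should always
        # be listed, e.g. when the storage is an LVM base (content 'none')
        my $res = $class->list_images($storeid, $scfg);
        $_->{content} = 'images' for @$res;
        return $res;
    }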
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
With the option 'valid_target_formats', it is possible to let the
caller specify the possible formats for the target of an operation.
[0]: If the option is not set, assume that every format is valid.
In most cases the format of the target and the format of the source
will agree (and therefore assumption [0] does not actually assume
very much and ensures backwards compatibility). But when cloning a
volume on a storage using Plugin.pm's implementation (e.g. directory
based storages), the result is always a qcow2 image.
When cloning containers, the new option can be used to detect
that qcow2 is not valid and hence the clone feature is not
available.
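A sketch of a call site when checking the clone feature for a
container (the concrete format list is illustrative):

    my $opts = { valid_target_formats => ['raw', 'subvol'] };
    # clone is only offered if the target can end up in a CT-usable format
    my $can_clone = PVE::Storage::volume_has_feature(
        $cfg, 'clone', $volid, $snapname, $running, $opts);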
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
A small grammar fix; and, as we now return the ctime of all files and
support for the remaining storages is planned for the future, omit
this hint completely.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This way the content listing API also returns the vmid on content
listings, which, among other things, is useful for filtering in the
GUI.
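An entry in the listing now looks roughly like this (values are
illustrative):

    my $entry = {
        volid   => 'local:103/vm-103-disk-0.qcow2',
        content => 'images',
        format  => 'qcow2',
        size    => 4294967296,
        vmid    => 103, # newly returned, used by the GUI for filtering
    };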
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Do not pollute the top-level private directory; use a "storage"
folder instead, but with backward compatibility.
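A sketch of the backward compatible lookup (the helper name and the
old location are assumptions for illustration):

    sub storage_password_file {
        my ($storeid) = @_;
        my $new = "/etc/pve/priv/storage/${storeid}.pw";
        my $old = "/etc/pve/priv/${storeid}.pw"; # pre-existing setups
        return (-e $old && !-e $new) ? $old : $new;
    }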
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
1. Avoids the error
qemu-img: The new size must be a multiple of 512
for qcow2 disks.
2. volume_import expects disk sizes to be a multiple of 1 KiB.
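The rounding itself can be done as a simple align-up to the next
1 KiB boundary:

    my $padding = (1024 - $size % 1024) % 1024;
    $size = $size + $padding;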
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
As /etc/pve/priv is already pretty polluted, having a
"<storage-id>.pw" file there smells like it could cause problems in
the future.
So let the PBS pw file generator use /etc/pve/priv/storages as the
base path.
Other storages should also move to that path in the future if they
save such secrets anywhere in /etc/pve.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
since we redirect the output to our (insecure) socket, logfunc is only
used for STDERR anyway, so we might as well make it explicit on the
caller side.
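The caller side then looks roughly like this (the shape of the call
is assumed, not a verbatim excerpt):

    run_command($cmd,
        output  => '>&'.fileno($socket),      # volume data goes to the socket
        errfunc => sub { $logfunc->($_[0]) }, # logfunc only sees STDERR
    );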
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
On the first write, which brings the unit file into existence, we can
just start it; after that, we need to tell systemd that we want to
actively reload it.
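As a sketch (helper and variable names assumed), the
write-and-activate flow is:

    my $unit_existed = -e $unit_path;
    PVE::Tools::file_set_contents($unit_path, $unit_content);
    # a freshly created unit can be started directly; for an updated
    # one, systemd first has to re-read the unit file
    PVE::Tools::run_command(['systemctl', 'daemon-reload']) if $unit_existed;
    PVE::Tools::run_command(['systemctl', 'start', $unit_name]);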
While this is slightly shaky, due to the fact that we do not check
all paths where such a unit could reside, it is something we can do,
because earlier one couldn't have had a unit/override anyway (units
generated from procfs mountinfo do not support that), so adding such
override units from now on should work.
Also note that we can only get here, in the "user does no weird
stuff" case, when cephfs_is_mounted actively tells us that there is
no cephfs mounted at the $mountpoint - at which point we can safely
re-write the potentially updated unit file, reload, and mount again.
So let's make our life a bit easier here until a user actually
complains about a rational issue with this; maybe we'll have PVE 7.0
by then and can get rid of all that anyway :)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This fixes a potential race where FUSE gets unmounted too late in the
shutdown process, i.e., at a time when the network is already down
and it cannot talk to any MDS or monitor anymore.
We could fix it the same way we did once with the kernel based mount,
i.e., by adding _netdev, but doing so would require switching over
from "ceph-fuse" to "mount.fuse.ceph", which has better compatibility
with the common mount tool API.
As that helper exists, we can reuse the newer systemd_netmount
ephemeral unit generator; only some options differ in name between
the FUSE and kernel variants.
So, besides solving a potential issue, we get a more unified handling
of those two cases.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Commit 54e0b0034b introduced the "_netdev" option for PVE 5.3. The
systemd generator then correctly resolved that into the following
ordering dependencies:
> Wants=network-online.target
> Before=umount.target remote-fs.target
> After=remote-fs-pre.target system.slice network.target network-online.target -.mount
This worked well and all were happy. With the current systemd in 6.0,
we sometimes get the local-fs ones generated there too. This is
fallout from an attempt to better handle nested mount hierarchies,
where a .mount unit needs to be mounted or unmounted before or after,
respectively, the parent mount is processed. It seems that this
sometimes glitches, and thus a "RequiresMountsFor=/mnt/pve" gets
thrown in, which sometimes results in the local-fs ordering
constraints being added.
The issue now is that one must not have ordering dependencies on all
of local-fs, local-fs-pre, remote-fs, and remote-fs-pre, as that gets
you an ordering cycle. Systemd tries to solve that cycle by randomly
dropping one constraint and retrying. With luck, the dropped one is a
not-so-important unit and all goes on well. Most of the time one
isn't that lucky, and something important gets dropped, for example:
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found ordering cycle on systemd-timesyncd.service/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found dependency on systemd-tmpfiles-setup.service/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found dependency on local-fs.target/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found dependency on mnt-pve-cephfs.mount/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found dependency on remote-fs-pre.target/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found dependency on rbdmap.service/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Found dependency on sysinit.target/stop
> Jan 24 18:43:05 prod1 systemd[1]: sysinit.target: Job remote-fs-pre.target/stop deleted to break ordering cycle starting with sysinit.target/stop
Then, most of the time, the host reboot hangs for ~10 minutes, often
showing scapegoat units like pve-ha-lrm as the cause of the hang
(even if no HA is configured >.<).
This behavior is fixed with newer systemd versions, e.g., the v244
from buster-backports, but that is not a real option for us for now.
So until 7.0 we generate the unit with the correct dependencies
directly in the ephemeral /run/ tmpfs backed systemd/system path and
start it.
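A generated unit could look roughly like this (What, Where, and
Options are illustrative; the ordering lines mirror the correct ones
quoted above):

    [Unit]
    Description=PVE CephFS mount /mnt/pve/cephfs
    Wants=network-online.target
    Before=umount.target remote-fs.target
    After=remote-fs-pre.target network.target network-online.target

    [Mount]
    What=10.0.0.1,10.0.0.2:/
    Where=/mnt/pve/cephfs
    Type=ceph
    Options=name=admin,secretfile=/etc/pve/priv/ceph/cephfs.secret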
While FUSE gets only the local-fs ordering constraint, it seems to
cope very well with such symptoms. But it _is_ racy and probably only
works because systemd stops it early, as it has barely any ordering
constraints at all. It should be moved over in the future
nonetheless; as there's a mount.fuse.ceph helper, that should not be
an issue.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>