Currently, the iSCSI plugin regex patterns only match IPv4 and IPv6
addresses, causing session parsing to fail when portals use hostnames
(like nas.example.com:3260).
This patch updates ISCSI_TARGET_RE and session parsing regex to accept
any non-whitespace characters before the port, allowing hostname-based
portals to work correctly.
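For illustration, a pattern along these lines accepts both forms (a sketch, not the exact patch hunk):

    # sketch: match a "<portal>:<port>,<tpgt> <target>" session line, where the
    # portal may be an IPv4/IPv6 address or a hostname
    my $line = 'nas.example.com:3260,1 iqn.2005-10.org.example:storage.disk1';
    if ($line =~ m/^(\S+):(\d+),(\d+)\s+(\S+)$/) {
        my ($portal, $port, $tpgt, $target) = ($1, $2, $3, $4);
    }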
Tested with IP and hostname-based portals on Proxmox VE 8.2, 8.3, and 8.4.1
Signed-off-by: Stelios Vailakakis <stelios@libvirt.dev>
Link: https://lore.proxmox.com/20250626022920.1323623-1-stelios@libvirt.dev
(cherry picked from commit 6bf171ec54)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
After removing an ESXi storage, a zombie process is generated because
the forked FUSE process (esxi-folder-fuse) is not properly reaped.
This patch implements a double-fork mechanism to ensure the FUSE process
is reparented to init (PID 1), which will properly reap it when it
exits. Additionally, it adds the missing waitpid() call to reap the
intermediate child process.
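Roughly, the technique looks like this (a simplified sketch, not the plugin's exact code):

    use POSIX ();

    # sketch: double fork so the long-running FUSE process is reparented to
    # init (PID 1), which will reap it when it exits
    my $pid = fork() // die "fork failed: $!\n";
    if (!$pid) {
        # intermediate child: fork the FUSE process and exit immediately
        my $pid2 = fork() // die "fork failed: $!\n";
        if (!$pid2) {
            exec('esxi-folder-fuse');   # arguments omitted in this sketch
            POSIX::_exit(1);
        }
        POSIX::_exit(0);
    }
    waitpid($pid, 0);   # reap the intermediate child so no zombie is left behind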
Tested on Proxmox VE 8.4.1 with ESXi 8.0U3e storage.
Signed-off-by: Stelios Vailakakis <stelios@libvirt.dev>
Link: https://lore.proxmox.com/20250701154135.2387872-1-stelios@libvirt.dev
(cherry picked from commit c33abdf062)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This reduces the potential breakage from commit "fix #5071: zfs over
iscsi: add 'zfs-base-path' configuration option". Only setups where
'/dev/zvol' exists, but is not a valid base, will still be affected.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Link: https://lore.proxmox.com/20250605111109.52712-2-f.ebner@proxmox.com
(cherry picked from commit 4fb733a9ac)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Use '/dev/zvol' as a base path for new storages for providers 'iet'
and 'LIO', because that is what modern distributions use.
This is a breaking change regarding the addition of new storages on
older distributions, but it's enough to specify the base path '/dev'
explicitly for setups that require it.
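For example, such a setup could pin the old base path explicitly in storage.cfg (illustrative values, option name as referenced above):

    zfs: legacy-zfs-over-iscsi
        pool tank
        portal 192.0.2.10
        target iqn.2001-04.com.example:storage
        iscsiprovider iet
        zfs-base-path /dev
        content images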
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Link: https://lore.proxmox.com/20250605111109.52712-1-f.ebner@proxmox.com
(cherry picked from commit d181d0b1ee)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
using the new top-level `make tidy` target, which calls perltidy via
our wrapper to enforce the desired style as closely as possible.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
See pve-common's commit 5ae1f2e ("buildsys: add tidy make target")
for details about the chosen xargs parameters.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
No other plugin activates the storage inside the path() method either.
The caller needs to ensure that the storage is activated before using
the result of path().
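A caller-side sketch of the expected usage (using the existing PVE::Storage helpers):

    # sketch: activate the volume first, then use the result of path()
    my $cfg = PVE::Storage::config();
    PVE::Storage::activate_volumes($cfg, [$volid]);
    my $path = PVE::Storage::path($cfg, $volid);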
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
since the former was just a wrapper around the latter, and the only call
site..
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-By: Aaron Lauterer <a.lauterer@proxmox.com>
Tested-By: Aaron Lauterer <a.lauterer@proxmox.com>
all librados interaction is now via our XS binding; the last usage was
removed in 41aacc6cde
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-By: Aaron Lauterer <a.lauterer@proxmox.com>
Tested-By: Aaron Lauterer <a.lauterer@proxmox.com>
so users can upload qcow2/raw/vmdk files directly in the UI.
Check the uploaded file with 'file_size_info' and the untrusted flag.
This checks the file format, existence of backing files, etc.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250407101310.3196974-3-d.csapak@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In the BTRFS plugin, resize_volume() for a subvolume currently fails
with "failed to get btrfs subvolume ID from: ". This is because the
btrfs 'subvol show' command is invoked with '-q', so there is no
output.
As btrfs quotas are currently not implemented, die early with a clean
error instead and comment out the unused code for now.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250303092445.13873-6-f.ebner@proxmox.com
The result from the file_size_info() call is not used by
volume_export_formats() and most failure scenarios of file_size_info()
lead to an undefined return value rather than a failure. This includes
the case for a non-existent file. The default path() implementation
doesn't do any existence check either.
An interesting scenario where file_size_info() does fail, is when the
volume is corrupted or not in the queried format. But this is a rare
edge case, so an early check doesn't seem worth it. It will be caught
by volume_export() itself, or in case of VM migration, also when
querying the size during scanning of local volumes.
While checking for the definedness of $size could serve as an early
sanity check, it is not currently done and other plugins don't do such
early checks in their implementation of volume_export_formats()
either. Keep the implementation abstract in Plugin.pm too and avoid
doing IO. Callers that want to do early existence checks or similar
can do so themselves explicitly, covering all plugins.
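For example, a caller wanting an early check could do so along these lines (a sketch; the error message is made up):

    # sketch: explicit caller-side existence check, covering all plugins
    my $path = PVE::Storage::path($cfg, $volid);
    die "volume '$volid' does not exist\n" if !-e $path;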
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250303092445.13873-4-f.ebner@proxmox.com
Return the same size as in list context. See also commit "plugin: file
size info: be consistent about size of directory subvol".
Fixes cloning containers with unsized subvolumes on BTRFS. Before the
change, this would fail with "mkfs.ext4: Device size reported to be
zero.". That is because with non-zero size, the allocation of the
volume for the clone will be done with 'raw' format by the
alloc_disk() helper in LXC.pm rather than 'subvol'. This change will
make cloning containers with unsized subvol directories possible.
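Conceptually, the fixed behavior looks like this (a sketch; the return tuple of file_size_info() is abbreviated):

    # sketch: report the same size (0) for a subvol directory in both
    # scalar and list context, instead of 0 (list) vs. 1 (scalar)
    if (-d $filename) {
        my $size = 0;
        return wantarray ? ($size, 'subvol', $size, undef, (stat($filename))[10]) : $size;
    }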
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250303092445.13873-3-f.ebner@proxmox.com
In list context, the file_size_info() function in Plugin.pm would
return 0 for the size of a subvol directory, but in scalar context 1.
As reported in the community forum [0], the change in commit e50dde0
("volume export: rely on storage plugin's format"), changing the
caller in volume_export() to scalar context exposed the inconsistency
in the return value for the size. This led to breakage of migration
with unsized btrfs subvolumes.
Align the returned values to avoid such surprises. Callers that would
treat 0 as an error should use list context and check whether it is a
subvol, if they are able to handle them.
NOTE: it is not possible to attach either a subvol or a volume with
zero size to a VM.
Existing callers of file_size_info() do not break with this change:
GuestImport/OVF.pm - parse_ovf() +
API/Qemu.pm - create_disks():
Doesn't support subvol directories, dies if size is 0.
Storage/Plugin.pm - create_base() +
Storage/{BTRFS,}Plugin.pm - list_images():
Checks for definedness of $size.
Storage/{BTRFS,ESXi,}Plugin.pm - volume_size_info():
Transitive, see below.
Storage/Plugin.pm - volume_export():
Regressed by commit e50dde0, this change will restore previous
behavior.
Storage/Plugin.pm - volume_export_formats() +
GuestImport.pm - extract_disk_from_import_file() +
Storage.pm - assert_iso_content():
Doesn't use the result.
CLI/qm.pm - importdisk:
Dies early if source is a directory.
CLI/qm.pm - importovf:
Calls parse_ovf() earlier.
Existing callers of volume_size_info() do not break with this change:
API2/Storage/Content.pm - info:
Uses list context, not affected by change.
QemuMigrate.pm - scan_local_volumes() +
API2/Qemu.pm - create_disks(), first call:
Uses list context, not affected by change, does not support subvol
directories.
API2/Qemu.pm - create_disks(), second call +
API2/Qemu.pm - import_from_volid():
Doesn't support subvol directories, dies if size is 0.
API2/Qemu.pm - resize_vm +
VZDump/QemuServer.pm - prepare() +
QemuServer.pm - clone_disk() +
QemuServer.pm - create_efidisk():
Doesn't support subvol directories (see NOTE above), but would not
die directly if size is 0. (In case of create_efidisk(), the size of
the just created disk is queried.)
API2/LXC.pm - resize_vm:
For directory plugins, the subsequent call to resize_volume() fails
with "can't resize this image format".
For BTRFS, quotas are currently not supported and the call to
resize_volume() fails with "failed to get btrfs subvolume ID from:".
This is because the btrfs 'subvol show' command is invoked with
'-q', so there is no output. Even if it would work, it would be more
correct to use 0 as the current size to add to the new quota rather
than 1.
LXC/Config.pm - rescan_volume():
This will happily use size=1 from before this change, but that is
not correct. A subsequent 'pct rescan' will correct the size to 0.
It is only used when hotplugging an existing subvol directory.
LXC.pm - copy_volume():
This will happily use size=1 from before this change, but that is
not correct and will lead to failure "mkfs.ext4: Device size
reported to be zero.". That is because with non-zero size, the
allocation of the volume for the clone will be done with 'raw'
format by the alloc_disk() helper in LXC.pm rather than 'subvol'.
This change will make cloning containers with unsized subvol
directories possible.
QemuServer/Cloudinit.pm - commit_cloudinit_disk():
Doesn't support subvol directories (see NOTE above), will allocate
a new volume when size is 0.
[0]: https://forum.proxmox.com/threads/162943/
Fixes: e50dde0 ("volume export: rely on storage plugin's format")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250303092445.13873-2-f.ebner@proxmox.com
Currently, setting up a directory storage with trailing slashes in the
path results in log messages with double slashes if this path gets
expanded by an action like vzdump. While this is just a cosmetic
issue, it looks odd, and some users might think it is a bug if they
are currently investigating some other issue and see such paths.
This patch removes those trailing slashes once the directory storage
class config gets updated.
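The normalization itself is simple, roughly (a sketch that keeps a bare '/' path intact):

    # sketch: strip trailing slashes when the storage config is updated
    my $path = $scfg->{path};
    $path =~ s!/+$!! if $path !~ m!^/+$!;   # don't reduce "/" to an empty string
    $scfg->{path} = $path;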
Signed-off-by: Daniel Herzig <d.herzig@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The subvolume itself cannot be included if there is a base snapshot
or the command would fail with e.g.
> ERROR: subvolume /mnt/btrfs/images/400/vm-400-disk-0 is not read-only
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
For a 'raw' volume, the path includes the '/disk.raw' suffix, but the
check expects the containing subvolume directory.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The split_list() helper will return a list, and assignment in scalar
context would result in the number of elements instead of the desired
array reference that the BTRFS plugin expects.
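For illustration, the difference in context (a small sketch with placeholder variables):

    # wrong: scalar context, $count ends up as the number of elements
    my $count = PVE::Tools::split_list($text);
    # right: capture the returned list and take a reference, as the plugin expects
    my $list = [ PVE::Tools::split_list($text) ];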
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Changes for version 11:
* Allow declaring storage features via plugin data.
* Introduce new_backup_provider() plugin method.
* Allow declaring sensitive properties via plugin data.
See the api changelog file for details.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Link: https://lore.proxmox.com/20250404133204.239783-7-f.ebner@proxmox.com
Hard-coding a list of sensitive properties means that custom plugins
cannot define their own sensitive properties for the on_add/on_update
hooks.
Have plugins declare the list of their sensitive properties in the
plugin data. For backwards compatibility, return the previously
hard-coded list if no such declaration is present.
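A custom plugin could then declare something like the following in its plugindata() (a sketch; key name and structure as assumed here):

    sub plugindata {
        return {
            content => [ { images => 1 }, { images => 1 } ],
            'sensitive-properties' => { 'my-api-token' => 1 },   # hypothetical property
        };
    }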
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Link: https://lore.proxmox.com/20250404133204.239783-6-f.ebner@proxmox.com
The new_backup_provider() method can be used by storage plugins for
external backup providers. If the method returns a provider, Proxmox
VE will use callbacks to that provider for backups and restore instead
of using its usual backup/restore mechanisms.
The backup provider API is split into two parts, both of which again
need different implementations for VM and LXC guests:
1. Backup API
In Proxmox VE, a backup job consists of backup tasks for individual
guests. There are methods for initialization and cleanup of the job,
i.e. job_init() and job_cleanup() and for each guest backup, i.e.
backup_init() and backup_cleanup().
The backup_get_mechanism() method is used to decide on the backup
mechanism. Currently, 'file-handle' or 'nbd' for VMs, and 'directory'
for containers are possible. The method also lets the plugin indicate
whether to use a bitmap for incremental VM backup or not. It is enough
to implement one mechanism for VMs and one mechanism for containers.
Next, there are methods for backing up the guest's configuration and
data, backup_vm() for VM backup and backup_container() for container
backup, with the latter running as the (unprivileged) container root
user (see below).
Finally, there are some helpers, e.g. for getting the provider name or
the volume ID for the backup target, as well as for handling the
backup log.
The backup transaction looks as follows:
First, job_init() is called that can be used to check backup server
availability and prepare the connection. Then for each guest
backup_init() followed by backup_vm() or backup_container() and finally
backup_cleanup(). Afterwards job_cleanup() is called. For containers,
there is an additional backup_container_prepare() call while still
privileged. The actual backup_container() call happens as the
(unprivileged) container root user, so that the file owner and group IDs
match the container's perspective.
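Put together, the call sequence for one job looks roughly like this (a sketch; arguments are omitted):

    my $provider = $plugin->new_backup_provider();   # arguments omitted
    $provider->job_init();
    for my $guest (@guests) {
        $provider->backup_init();
        if ($guest->{type} eq 'qemu') {
            $provider->backup_vm();                 # mechanism: 'file-handle' or 'nbd'
        } else {
            $provider->backup_container_prepare();  # still privileged
            $provider->backup_container();          # as unprivileged container root
        }
        $provider->backup_cleanup();
    }
    $provider->job_cleanup();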
1.1 Backup Mechanisms
VM:
Access to the data on the VM's disk from the time the backup started
is made available via a so-called "snapshot access". This is either
the full image, or in case a bitmap is used, the dirty parts of the
image since the last time the bitmap was used for a successful backup.
Reading outside of the dirty parts will result in an error. After
backing up each part of the disk, it should be discarded in the export
to avoid unnecessary space usage on the Proxmox VE side (there is an
associated fleecing image).
VM mechanism 'file-handle':
The snapshot access is exposed via a file descriptor. A subroutine to
read the dirty regions for incremental backup is provided as well.
VM mechanism 'nbd':
The snapshot access and, if used, bitmap are exported via NBD.
Container mechanism 'directory':
A copy or snapshot of the container's filesystem state is made
available as a directory. The method is executed inside the user
namespace associated to the container.
2. Restore API
The restore_get_mechanism() method is used to decide on the restore
mechanism. Currently, 'qemu-img' for VMs, and 'directory' or 'tar' for
containers are possible. It is enough to implement one mechanism for
VMs and one mechanism for containers.
Next, there are methods for extracting the guest and firewall
configuration, and the restore mechanism itself is implemented via a
pair of methods: an init method for making the data available to
Proxmox VE, and a cleanup method that is called after the restore.
2.1. Restore Mechanisms
VM mechanism 'qemu-img':
The backup provider gives a path to the disk image that will be
restored. The path needs to be something 'qemu-img' can deal with,
e.g. can also be an NBD URI or similar.
Container mechanism 'directory':
The backup provider gives the path to a directory with the full
filesystem structure of the container.
Container mechanism 'tar':
The backup provider gives the path to a (potentially compressed) tar
archive with the full filesystem structure of the container.
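As a rough caller-side sketch for the 'qemu-img' case (method names here are illustrative; see the module documentation referenced below for the real API):

    my ($mechanism) = $provider->restore_get_mechanism();
    if ($mechanism eq 'qemu-img') {
        my $source = $provider->restore_vm_init();   # e.g. a file path or NBD URI
        # ... restore the disk with qemu-img from $source ...
        $provider->restore_vm_cleanup();
    }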
See the PVE::BackupProvider::Plugin module for the full API
documentation.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[WB: replace backup_vm_available_bitmaps with
backup_vm_query_incremental, which instead of a bitmap name provides
a bitmap mode that is 'new' (create or *recreate* a bitmap) or 'use'
(use an existing bitmap, or create one if none exists)]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Tested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Link: https://lore.proxmox.com/20250404133204.239783-5-f.ebner@proxmox.com
For punching holes via fallocate. This will be useful for the external
backup provider API to discard parts of the source. The 'file-handle'
mechanism there uses a fuse mount, which does not implement the
BLKDISCARD ioctl, but does implement fallocate.
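Consumer-side usage would then look roughly like this (a sketch; the helper and iterator names are hypothetical, not taken from this commit):

    # sketch: back up a chunk, then punch a hole at the source (fallocate with
    # PUNCH_HOLE + KEEP_SIZE) to discard the already-backed-up data
    while (my ($offset, $length) = next_dirty_region()) {    # hypothetical iterator
        sysseek($fh, $offset, 0) or die "seek failed: $!\n";
        sysread($fh, my $data, $length) // die "read failed: $!\n";
        backup_chunk($offset, $data);                        # hypothetical provider call
        punch_hole($fh, $offset, $length);                   # hypothetical fallocate wrapper
    }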
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Link: https://lore.proxmox.com/20250404133204.239783-4-f.ebner@proxmox.com
Which looks up whether a storage supports a given feature in its
'plugindata'. This is intentionally kept simple and not implemented
as a plugin method for now. Should it ever become more complex,
requiring plugins to override the default implementation, it can
later be changed to a method.
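Conceptually (a sketch; the function name and the 'plugindata' key used for the lookup are assumptions based on this description):

    sub storage_has_feature {
        my ($cfg, $storeid, $feature) = @_;
        my $scfg = PVE::Storage::storage_config($cfg, $storeid);
        my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
        my $features = $plugin->plugindata()->{features} // {};   # key name assumed
        return $features->{$feature} ? 1 : 0;
    }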
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Link: https://lore.proxmox.com/20250404133204.239783-3-f.ebner@proxmox.com
The web UI uses the download-url endpoint for downloading an ISO, VZ
template, or OVA file via wget. In a setup where this request has to
go over a proxy (configured in the http_proxy datacenter option), the
download only works for http:// URLs, not https:// URLs. The reason is
that the download-url handler does not pass the https_proxy option to
the download_file_from_url helper, hence the helper only sets the
http_proxy environment variable for wget, not the https_proxy one.
Fix this by also passing the https_proxy option to the
download_file_from_url helper.
This will break setups that rely on http_proxy not being respected for
https:// URLs. For example, setups that have a proxy for external
connections, but download e.g. ISO files (only) via https from an
internal repository that the proxy doesn't serve.
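The fix boils down to passing the proxy for both schemes so the helper sets both environment variables (a sketch; hash key names are assumed):

    # sketch: hand the datacenter proxy to the download helper for both schemes
    my $dcconf = PVE::Cluster::cfs_read_file('datacenter.cfg');
    if (my $proxy = $dcconf->{http_proxy}) {
        $opts->{http_proxy} = $proxy;
        $opts->{https_proxy} = $proxy;   # previously not passed along
    }
    PVE::Tools::download_file_from_url($dest, $url, $opts);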
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.kernel.org/r/20250326105108.34911-2-f.weber@proxmox.com
The new 'pve-storage-image-format' standard option uses a simple enum
instead of a subroutine verifier. Since the 'pve-storage-format'
format that is replaced by it was used in pve-guest-common's
StorageTunnel, the format cannot be removed without a versioned
Breaks.
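The new standard option is then essentially a plain enum (a sketch; the exact value list is assumed):

    # sketch: enum-based standard option instead of a subroutine verifier
    PVE::JSONSchema::register_standard_option('pve-storage-image-format', {
        type => 'string',
        enum => ['raw', 'qcow2', 'subvol', 'vmdk'],   # value list assumed
        description => 'Format of the image.',
    });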
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Acked-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The API endpoint will automatically detect the format from the
extension for raw, qcow2 and vmdk, but it was not yet possible to
specify the format explicitly via the parameter. This could be
annoying/surprising to users. There also might be third-party plugins
that want to use vmdk, but not require a suffix in the name. Add
'vmdk' as an allowed format to avoid these issues and for consistency
by using the 'pve-storage-format' format.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Acked-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The format was dropped in QEMU binary version 2.2 with commit
550830f935 ("block: delete cow block driver").
This follows qemu-server commit "drive: remove ancient 'cow' from
formats".
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Acked-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Check if there is already a logged-in session present and fall back to
the previous TCP port connection check.
pvestatd calls check_connection() every 10 seconds. This check
produces a lot of noise in the iSCSI server's logs.
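In pseudocode, the check becomes (a sketch with simplified helper names):

    # sketch: prefer an already logged-in session; only fall back to opening
    # a TCP connection to the portal if no session is present
    my $sessions = iscsi_session($cache, $scfg->{target});
    return 1 if defined($sessions) && scalar(@$sessions);
    return PVE::Network::tcp_ping($host, $port, 2);   # previous behavior as fallback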
Signed-off-by: Victor Seva <linuxmaniac@torreviejawireless.org>
Tested-by: Mira Limbeck <m.limbeck@proxmox.com>
Reviewed-by: Mira Limbeck <m.limbeck@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Reviewed-by: Friedrich Weber <f.weber@proxmox.com>