Compare commits

...

135 Commits

Author SHA1 Message Date
jiangcuo
4bb6ddbe31
Merge branch 'proxmox:master' into pxvirt9 2025-08-17 10:55:49 +08:00
Wolfgang Bumiller
02acde02b6 make zfs tests declarative
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-08-07 16:49:04 +02:00
Wolfgang Bumiller
0f7a4d2d84 make tidy
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-08-07 16:24:08 +02:00
Stelios Vailakakis
6bf171ec54 iscsi: add hostname support in portal addresses
Currently, the iSCSI plugin regex patterns only match IPv4 and IPv6
addresses, causing session parsing to fail when portals use hostnames
(like nas.example.com:3260).

This patch updates ISCSI_TARGET_RE and session parsing regex to accept
any non-whitespace characters before the port, allowing hostname-based
portals to work correctly.

Tested with IP and hostname-based portals on Proxmox VE 8.2, 8.3, and 8.4.1

Signed-off-by: Stelios Vailakakis <stelios@libvirt.dev>
Link: https://lore.proxmox.com/20250626022920.1323623-1-stelios@libvirt.dev
2025-08-04 20:41:09 +02:00
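A minimal sketch of the relaxed portal matching described above (the pattern is illustrative, not the exact ISCSI_TARGET_RE from the plugin): accept any non-whitespace host part, whether an IPv4/IPv6 address or a hostname, followed by a port.

```
#!/usr/bin/perl
use strict;
use warnings;

# Illustrative pattern only: any non-whitespace host followed by ":port".
for my $portal ('192.0.2.10:3260', '[2001:db8::1]:3260', 'nas.example.com:3260') {
    if ($portal =~ m/^(\S+):(\d+)$/) {
        print "host=$1 port=$2\n";
    }
}
```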
Stelios Vailakakis
c33abdf062 fix #6073: esxi: fix zombie process after storage removal
After removing an ESXi storage, a zombie process is generated because
the forked FUSE process (esxi-folder-fuse) is not properly reaped.

This patch implements a double-fork mechanism to ensure the FUSE process
is reparented to init (PID 1), which will properly reap it when it
exits. Additionally adds the missing waitpid() call to reap the
intermediate child process.

Tested on Proxmox VE 8.4.1 with ESXi 8.0U3e storage.

Signed-off-by: Stelios Vailakakis <stelios@libvirt.dev>
Link: https://lore.proxmox.com/20250701154135.2387872-1-stelios@libvirt.dev
2025-08-04 20:36:38 +02:00
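As a rough illustration of the double-fork pattern described in this commit (not the actual plugin code; the exec'ed command is a placeholder for esxi-folder-fuse):

```
#!/usr/bin/perl
use strict;
use warnings;
use POSIX ();

# Double-fork sketch: the intermediate child exits right away, so the
# long-running grandchild is reparented to init (PID 1), which will reap
# it when it exits.
my $pid = fork() // die "fork failed: $!\n";
if ($pid == 0) {
    my $grandchild = fork() // die "fork failed: $!\n";
    if ($grandchild == 0) {
        exec('sleep', '60') or die "exec failed: $!\n"; # placeholder daemon
    }
    POSIX::_exit(0); # intermediate child exits immediately
}
waitpid($pid, 0); # reap the intermediate child so it does not become a zombie
```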
Thomas Lamprecht
609752f3ae bump version to 9.0.13
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-08-01 18:36:56 +02:00
Fiona Ebner
5750596f5b deactivate volumes: terminate error message with newline
Avoid that Perl auto-attaches the line number and file name.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250801081649.13882-1-f.ebner@proxmox.com
2025-08-01 13:22:45 +02:00
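For context, this relies on standard Perl behavior: die() only appends " at FILE line N." when the message does not end in a newline, for example:

```
#!/usr/bin/perl
use strict;
use warnings;

eval { die "volume deactivation failed\n" }; # trailing newline: no file/line appended
print $@;
eval { die "volume deactivation failed" };   # no newline: " at ... line ..." is appended
print $@;
```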
Thomas Lamprecht
153f7d8f85 bump version to 9.0.12
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-31 14:22:16 +02:00
Friedrich Weber
3c209eaeb7 plugin: nfs, cifs: use volume qemu snapshot methods from dir plugin
Taking an offline snapshot of a VM on an NFS/CIFS storage with
snapshot-as-volume-chain currently creates a volume-chain snapshot as
expected, but taking an online snapshot unexpectedly creates a qcow2
snapshot. This was also reported in the forum [1].

The reason is that the NFS/CIFS plugins inherit the method
volume_qemu_snapshot_method from the Plugin base class, whereas they
actually behave similarly to the Directory plugin. To fix this,
implement the method for the NFS/CIFS plugins and let it call the
Directory plugin's implementation.

[1] https://forum.proxmox.com/threads/168619/post-787374

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20250731082538.31891-1-f.weber@proxmox.com
2025-07-31 14:19:13 +02:00
Thomas Lamprecht
81261f9ca1 re-tidy perl code
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-31 14:16:25 +02:00
Fabian Grünbichler
7513e21d74 plugin: parse_name_dir: drop deprecation warning
this gets printed very often if such a volume exists - e.g. adding such a
volume to a config with `qm set` prints it 10 times..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250731111519.931104-5-f.gruenbichler@proxmox.com
2025-07-31 14:15:54 +02:00
Fabian Grünbichler
6dbeba59da plugin: extend snapshot name parsing to legacy volnames
otherwise a volume like `100/oldstyle-100-disk-0.qcow2` can be snapshotted, but
the snapshot file is treated as a volume instead of a snapshot afterwards.

this also avoids issues with volnames with `vm-` in their names, similar to the
LVM fix for underscores.

Co-authored-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250731111519.931104-4-f.gruenbichler@proxmox.com
2025-07-31 14:15:54 +02:00
Fabian Grünbichler
59a54b3d5f fix #6584: plugin: list_images: only include parseable filenames
by only including filenames that are also valid when actually parsing them,
things like snapshot files or files not following our naming scheme are no
longer candidates for rescanning or included in other output.

Co-authored-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250731111519.931104-3-f.gruenbichler@proxmox.com
2025-07-31 14:15:54 +02:00
Fabian Grünbichler
a477189575 plugin: fix parse_name_dir regression for custom volume names
prior to the introduction of snapshot as volume chains, volume names of
almost arbitrary form were accepted. only forbid filenames which are
part of the newly introduced namespace for snapshot files, while
deprecating other names not following our usual naming scheme, instead
of forbidding them outright.

Fixes: b63147f5df "plugin: fix volname parsing"

Co-authored-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250731111519.931104-2-f.gruenbichler@proxmox.com
2025-07-31 14:15:54 +02:00
Thomas Lamprecht
94a54793cd bump version to 9.0.11
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-31 09:19:03 +02:00
Friedrich Weber
92efe5c6cb plugin: lvm: volume snapshot info: untaint snapshot filename
Without untainting, offline-deleting a volume-chain snapshot on an LVM
storage via the GUI can fail with an "Insecure dependency in exec
[...]" error, because volume_snapshot_delete uses the filename in its
qemu-img invocation.

Commit 93f0dfb ("plugin: volume snapshot info: untaint snapshot
filename") fixed this already for the volume_snapshot_info
implementation of the Plugin base class, but missed that the LVM
plugin overrides the method and was still missing the untaint.

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20250731071306.11777-1-f.weber@proxmox.com
2025-07-31 09:18:33 +02:00
Thomas Lamprecht
74b5031c9a bump version to 9.0.10
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-31 04:14:23 +02:00
Aaron Lauterer
0dc6c9d39c status: rrddata: use new pve-storage-9.0 rrd location if file is present
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Link: https://lore.proxmox.com/20250726010626.1496866-26-a.lauterer@proxmox.com
2025-07-31 04:13:27 +02:00
Thomas Lamprecht
868de9b1a8 bump version to 9.0.9
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-30 19:51:11 +02:00
Fiona Ebner
e502404fa2 config: drop 'maxfiles' parameter
The 'maxfiles' parameter has been deprecated since the addition of
'prune-backups' in the Proxmox VE 7 beta.

The setting was auto-converted when reading the storage
configuration.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250718125408.133376-2-f.ebner@proxmox.com
2025-07-30 19:35:50 +02:00
Fiona Ebner
fc633887dc lvm plugin: volume snapshot: actually print error when renaming
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Max R. Carrara <m.carrara@proxmox.com>
Reviewed-by: Max R. Carrara <m.carrara@proxmox.com>
Link: https://lore.proxmox.com/20250730162117.160498-4-f.ebner@proxmox.com
2025-07-30 19:32:40 +02:00
Fiona Ebner
db2025f5ba fix #6587: lvm plugin: snapshot info: fix parsing snapshot name
Volume names are allowed to contain underscores, so it is impossible
to determine the snapshot name from just the volume name, e.g:
snap_vm-100-disk_with_underscore_here_s_some_more.qcow2

Therefore, pass along the short volume name too and match against it.

Note that none of the variables from the result of parse_volname()
were actually used previously.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Max R. Carrara <m.carrara@proxmox.com>
Reviewed-by: Max R. Carrara <m.carrara@proxmox.com>
Link: https://lore.proxmox.com/20250730162117.160498-3-f.ebner@proxmox.com
2025-07-30 19:32:40 +02:00
Fiona Ebner
819dafe516 lvm plugin: snapshot info: avoid superfluous argument for closure
The $volname variable is never modified in the function, so it doesn't
need to be passed into the $get_snapname_from_path closure.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Max R. Carrara <m.carrara@proxmox.com>
Reviewed-by: Max R. Carrara <m.carrara@proxmox.com>
Link: https://lore.proxmox.com/20250730162117.160498-2-f.ebner@proxmox.com
2025-07-30 19:32:40 +02:00
Fiona Ebner
169f8091dd test: add tests for volume access checks
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250730130506.96278-1-f.ebner@proxmox.com
2025-07-30 18:42:52 +02:00
Maximiliano Sandoval
5245e044ad fix #5181: pbs: store and read passwords as unicode
At the moment calling
```
pvesm add pbs test --password="bär12345" --datastore='test' # ..other params
```

will result in the API handler getting the param->{password} as a utf-8
encoded string. When dumped with Devel::Peek's Dump() one can see:

```
SV = PV(0x5a02c1a3ff10) at 0x5a02bd713670
  REFCNT = 1
  FLAGS = (POK,IsCOW,pPOK,UTF8)
  PV = 0x5a02c1a409b0 "b\xC3\xA4r12345"\0 [UTF8 "b\x{e4}r12345"]
  CUR = 9
  LEN = 11
  COW_REFCNT = 0
```

Then, writing the file via file_set_contents (which uses syswrite
internally) results in Perl encoding the password as latin1 and a
file with contents:

```
$ hexdump -C /etc/pve/priv/storage/test.pw
00000000  62 e4 72 31 32 33 34 35                           |b.r12345|
00000008
```

when the correct contents should have been:
```
00000000  62 c3 a4 72 31 32 33 34  35                       |b..r12345|
00000009
```

Later when the file is read via file_read_firstline it will result in

```
SV = PV(0x5e8baa411090) at 0x5e8baa5a96b8
  REFCNT = 1
  FLAGS = (POK,pPOK)
  PV = 0x5e8baa43ee20 "b\xE4r12345"\0
  CUR = 8
  LEN = 81
```

which is a different string than the original.

At the moment, adding the storage will work as the utf8 password is
still in memory; however, subsequent uses (e.g. pvestatd) will fail.

This patch fixes the issue by encoding the string as utf8 both when
reading it and when storing it to disk. In the past, users could work
around the issue by writing the correct password to
/etc/pve/priv/{storage}.pw directly, and this fix is compatible with that.

It is documented at
https://pbs.proxmox.com/docs/backup-client.html#environment-variables
that the Backup Server password must be valid utf-8.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.proxmox.com/20250730072239.24928-1-m.sandoval@proxmox.com
2025-07-30 11:55:18 +02:00
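A minimal sketch of the round-trip idea (using the core Encode module; not the exact helper code from the plugin): encode the character string to UTF-8 bytes before writing and decode after reading, so a password like "bär12345" survives the trip to disk.

```
#!/usr/bin/perl
use strict;
use warnings;
use Encode qw(encode decode);

my $password = "b\x{e4}r12345";            # Perl character string ("bär12345")
my $on_disk  = encode('UTF-8', $password); # bytes as they should land in the .pw file
my $restored = decode('UTF-8', $on_disk);  # what the reader should hand back to the API
die "round-trip failed\n" if $restored ne $password;
print "round-trip ok\n";
```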
Fiona Ebner
cafbdb8c52 bump version to 9.0.8
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-29 17:28:23 +02:00
Wolfgang Bumiller
172c71a64d common: use v5.36
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-29 16:42:49 +02:00
Wolfgang Bumiller
1afe55b35b escape dirs in path_to_volume_id regexes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-29 16:42:49 +02:00
Wolfgang Bumiller
dfad07158d drop rootdir case in path_to_volume_id
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-29 16:42:49 +02:00
Wolfgang Bumiller
715ec4f95b parse_volname: remove openvz 'rootdir' case
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-29 16:42:49 +02:00
Wolfgang Bumiller
f62fc773ad tests: drop rootdir/ tests
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
[FE: use 'images' rather than not-yet-existing 'ct-vol' for now
     disable seen vtype tests for now]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-29 16:42:18 +02:00
Wolfgang Bumiller
9b7fa1e758 btrfs: remove unnecessary mkpath call
The existence of the original volume should imply the existence of its
parent directory, after all... And with the new typed subdirectories
this was wrong.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-29 15:52:00 +02:00
Shannon Sterz
a9315a0ed3 fix #6561: zfspool: track refquota for subvolumes via user properties
ZFS itself does not track the refquota per snapshot, so this needs to
be handled by Proxmox VE. Otherwise, rolling back a volume that has
been resized since the snapshot was taken will retain the new size.
This is problematic, as it means the value in the guest config does
not match the size of the disk on the storage anymore.

This implementation does so by leveraging a user property per
snapshot.

Reported-by: Lukas Wagner <l.wagner@proxmox.com>
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250729121151.159797-1-s.sterz@proxmox.com
[FE: improve capitalization and wording in commit message]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-29 15:16:03 +02:00
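A hedged sketch of the general idea (property name, dataset, and commands are illustrative, not necessarily what the plugin uses): record the refquota in a ZFS user property when the snapshot is taken, and re-apply it after a rollback.

```
#!/usr/bin/perl
use strict;
use warnings;

my ($subvol, $snap, $refquota) = ('rpool/data/subvol-100-disk-0', 'snap1', '8G');

# Commands are only printed here; in the plugin they would be run via run_command().
my @record  = ('zfs', 'set', "pve:refquota=$refquota", "$subvol\@$snap");
my @restore = ('zfs', 'set', "refquota=$refquota", $subvol);
print "on snapshot:    @record\n";
print "after rollback: @restore\n";
```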
Fabian Grünbichler
d0239ba9c0 lvm plugin: use relative path for qcow2 rebase command
otherwise the resulting qcow2 file will contain an absolute path, which makes
renaming the backing VG of the storage impossible.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250729115320.579286-5-f.gruenbichler@proxmox.com
2025-07-29 14:43:07 +02:00
Fabian Grünbichler
7da44f56e4 plugin: use relative path for qcow2 rebase command
otherwise the resulting qcow2 file will contain an absolute path, which makes
changing the backing path of the directory storage impossible.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250729115320.579286-4-f.gruenbichler@proxmox.com
2025-07-29 14:43:07 +02:00
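An illustrative example of the difference (paths and file names are made up): with a relative backing path, the chain still resolves after the parent directory or VG is renamed, whereas an absolute path would break.

```
#!/usr/bin/perl
use strict;
use warnings;

# The rebase command is only printed here; run from the directory that
# contains both images so the backing reference stays relative.
my @cmd = (
    'qemu-img', 'rebase', '-u',
    '-F', 'qcow2',
    '-b', 'snap_vm-100-disk-0_snap1.qcow2',   # relative backing file
    'vm-100-disk-0.qcow2',
);
print "would run: @cmd\n";
```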
Fabian Grünbichler
191cddac30 lvm plugin: fix typo in rebase log message
this was copied over from Plugin.pm

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250729115320.579286-3-f.gruenbichler@proxmox.com
[FE: use string concatenation rather than multi-argument print]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-29 14:43:01 +02:00
Fabian Grünbichler
a7afad969d plugin: fix typo in rebase log message
by directly printing the to-be-executed command, instead of copying it which is
error-prone.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250729115320.579286-2-f.gruenbichler@proxmox.com
[FE: use string concatenation rather than multi-argument print]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-29 14:41:48 +02:00
Friedrich Weber
93f0dfbc75 plugin: volume snapshot info: untaint snapshot filename
Without untainting, offline-deleting a volume-chain snapshot on a
directory storage via the GUI fails with an "Insecure dependency in
exec [...]" error, because volume_snapshot_delete uses the filename
in its qemu-img invocation.

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
2025-07-28 15:10:49 +02:00
Lierfang Support Team
43a6990ee4 Add riscv64 support 2025-07-26 23:26:19 +08:00
a261b91a5e update version to 8.3.6-1 2025-07-26 23:16:58 +08:00
645325c128 Fix bcache issue https://github.com/jiangcuo/Proxmox-Port/issues/175 2025-07-26 23:16:31 +08:00
4d03b00f67 fix changelog error 2025-07-26 23:16:31 +08:00
6b738d730b bump version to 8.3.4-3 2025-07-26 23:16:27 +08:00
845e000f51 fix bcache error 2025-07-26 23:16:02 +08:00
493fcc9ff1 update pve-storage to 8.3.4-2 2025-07-26 23:16:00 +08:00
92e95eb2ba add pvebcache cli 2025-07-26 23:15:41 +08:00
b5399acb05 bump libpve-storage-perl to 8.3.4-1 2025-07-26 23:15:39 +08:00
4e3fc22f04 bump libpve-storage-perl to 8.3.4 2025-07-26 23:14:59 +08:00
jiangcuo
4c90018efa * Add vdisk_clone_pxvirt func. This func force-creates a linked clone for a pxvditemplate VM.
* enable snapshot linked clone on zfspool.
2025-07-26 23:14:17 +08:00
jiangcuo
10ae4c099c bump pve-storage to 8.3.3+port1 2025-07-26 23:12:27 +08:00
jiangcuo
63922bb75b Qcow2 can't use clonedisk fn 2025-07-26 23:12:01 +08:00
jiangcuo
30a7bad8f8 Add clone_image_pxvirt for pxvirt 2025-07-26 23:12:01 +08:00
jiangcuo
a1a5cca6d4 Add bcache support 2025-07-26 23:11:59 +08:00
jiangcuo
19e733e945 Update Makefile 2025-07-26 23:10:49 +08:00
Wolfgang Bumiller
43ec7bdfe6 plugin: move 'parse_snap_name' up to before its use
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-23 08:52:17 +02:00
Wolfgang Bumiller
3cb0c3398c bump version to 9.0.7
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-22 15:01:58 +02:00
Wolfgang Bumiller
42bc721b41 make tidy
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-22 14:57:22 +02:00
Fiona Ebner
cfe7d7ebe7 default format helper: only return default format
Callers that required the valid formats are now using the
resolve_format_hint() helper instead.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-22 14:57:22 +02:00
Fiona Ebner
c86d8f6d80 introduce resolve_format_hint() helper
Callers interested in the list of valid formats from
storage_default_format() actually want this functionality.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-22 14:57:22 +02:00
Fiona Ebner
ad20e4faef api: status: rely on get_formats() method for determining format-related info
Rely on get_formats() rather than just the static plugin data in the
'status' API call. This removes the need for the special casing for
LVM storages without the 'snapshot-as-volume-chain' option. It also
fixes the issue that the 'format' storage configuration option to
override the default format was previously ignored there.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-22 14:57:22 +02:00
Fiona Ebner
dd2efb7846 lvm plugin: implement get_formats() method
As the alloc_lvm_image() helper asserts, qcow2 cannot be used as a
format without the 'snapshot-as-volume-chain' configuration option.
Therefore it is necessary to implement get_formats() and distinguish
based on the storage configuration.

In case the 'snapshot-as-volume-chain' option is set, qcow2 is even
preferred and thus declared the default format.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-22 14:57:22 +02:00
Fiona Ebner
e9e24973fd plugin: add get_formats() method and use it instead of default_format()
The LVM plugin can only use qcow2 format when the
'snapshot-as-volume-chain' configuration option is set. The format
information is currently only recorded statically in the plugin data.
This causes issues, for example, restoring a guest volume that uses
qcow2 as a format hint on an LVM storage without the option set will
fail, because the plugin data indicates that qcow2 is supported.
Introduce a dedicated method, so that plugins can indicate what
actually is supported according to the storage configuration.

The implementation for LVM is done in a separate commit.

Remove the now unused default_format() function from Plugin.pm.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[WB: docs: add missing params, drop =pod line, use !! for bools]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-22 14:57:22 +02:00
Fiona Ebner
cd7c8e0ce6 api change log: improve style consistency a bit
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-22 14:57:22 +02:00
Max R. Carrara
285a7764d6 fix #6553: lvmthin: implement volume_rollback_is_possible sub
Because LvmThinPlugin.pm uses LVMPlugin.pm as a base, it inherits the
`volume_rollback_is_possible()` subroutine added in eda88c94. Its
implementation however causes snapshot rollbacks to fail with
"can't rollback snapshot for 'raw' volume".

Fix this by implementing `volume_rollback_is_possible()`.

Closes: #6553
Signed-off-by: Max R. Carrara <m.carrara@proxmox.com>
2025-07-22 14:56:00 +02:00
Alexandre Derumier via pve-devel
4f3c1d40ef lvmplugin: find_free_diskname: check if fmt param exist
this log has been reported on the forum

"recovering backed-up configuration from 'qotom-pbs-bkp-for-beelink-vms-25g:backup/ct/110/2025-07-17T04:33:50Z'
Use of uninitialized value $fmt in string eq at /usr/share/perl5/PVE/Storage/LVMPlugin.pm line 517.
"

https://forum.proxmox.com/threads/pve-beta-9-cannot-restore-lxc-from-pbs.168633/

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Link: https://lore.proxmox.com/mailman.221.1752926423.354.pve-devel@lists.proxmox.com
2025-07-19 20:25:15 +02:00
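A trivial sketch of the guard implied by the fix (names are illustrative): only compare the format string when the parameter was actually passed.

```
#!/usr/bin/perl
use strict;
use warnings;

sub disk_suffix {
    my ($fmt) = @_;              # may be undef, e.g. during a container restore
    return '' if !defined($fmt); # avoids "Use of uninitialized value" warnings
    return $fmt eq 'qcow2' ? '.qcow2' : '';
}

print "suffix='" . disk_suffix(undef) . "'\n";
print "suffix='" . disk_suffix('qcow2') . "'\n";
```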
Thomas Lamprecht
c428173669 bump version to 9.0.6
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-18 14:28:56 +02:00
Fiona Ebner
aea2fcae82 lvm plugin: list images: properly handle qcow2 format
In particular, this also fixes volume rescan.

Fixes: eda88c9 ("lvmplugin: add qcow2 snapshot")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250718102023.70591-2-f.ebner@proxmox.com
2025-07-18 12:21:33 +02:00
Fiona Ebner
9b6e138788 lvm plugin: properly handle qcow2 format when querying volume size info
In particular this fixes moving a qcow2 on top of LVM to a different
storage.

Fixes: eda88c9 ("lvmplugin: add qcow2 snapshot")
Reported-by: Michael Köppl <m.koeppl@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250718102023.70591-1-f.ebner@proxmox.com
2025-07-18 12:20:56 +02:00
Wolfgang Bumiller
5a5561b6ae plugin: doc: resolve mixup of 'storage' and 'mixed' cases
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-18 10:07:13 +02:00
Thomas Lamprecht
6bf6c8ec3c bump version to 9.0.5
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-17 20:15:48 +02:00
Thomas Lamprecht
07b005bb55 plugin: update docs for volume_qemu_snapshot_method to new return values
Fixes: 41c6e4b ("replace volume_support_qemu_snapshot with volume_qemu_snapshot")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-17 20:15:48 +02:00
Thomas Lamprecht
ed6df31cf4 d/postinst: drop obsolete migration for CIFS credential file path
This cannot trigger, as there is no direct upgrade path between
PVE 7 and PVE 9; we only support single major-version upgrades at a
time.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-17 20:15:48 +02:00
Thomas Lamprecht
61aaf78786 zfs: reformat code with perltidy
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-17 20:15:48 +02:00
Thomas Lamprecht
a81ee83127 config: rename external-snapshots to snapshot-as-volume-chain
Not perfect, but now it is still easy to rename, and the new variant
fits the actual design and implementation a bit better.

Add best-effort migration for storage.cfg; this has never been
publicly released after all.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-17 20:15:48 +02:00
Thomas Lamprecht
2d44f2eb3e bump version to 9.0.4
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-17 01:17:49 +02:00
Thomas Lamprecht
2cd4dafb22 api: storage status: filter out qcow2 format as valid for LVM without external-snapshots
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-16 22:35:08 +02:00
Wolfgang Bumiller
41c6e4bf7a replace volume_support_qemu_snapshot with volume_qemu_snapshot
This also changes the return values, since their meanings are rather
weird from the storage point of view. For instance, "internal" meant
it is *not* the storage which does the snapshot, while "external"
meant a mixture of storage and qemu-server side actions. `undef` meant
the storage does it all...

┌────────────┬───────────┐
│ previous   │ new       │
├────────────┼───────────┤
│ "internal" │ "qemu"    │
│ "external" │ "mixed"   │
│ undef      │ "storage" │
└────────────┴───────────┘

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-16 15:55:28 +02:00
Wolfgang Bumiller
3941068c25 lvm: activate volume before deleting snapshots
since we call qemu-img on them, the device nodes need to be available

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
e17a33794c storage: remove $running param from volume_snapshot
not needed anymore after change in qemu-server

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
4ef8ab60f6 lvmplugin: add external-snapshots option && forbid creation of qcow2 volumes without it
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
d78a91fdbc plugin : improve parse_namedir warning
display the volname
skip warning for external snapshot name

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
1dab17545c plugin: lvmplugin: add parse_snap_name
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
618b5bc3d8 lvmplugin: add volume_snapshot_info
and remove public methods:
get_snapname_from_path
get_snap_volname

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
f649f5a99c plugin|lvmplugin: don't allow volume rename if external snapshots exist.
Just to be safe, as this is already checked higher up in the stack.

Technically, it's possible to implement snapshot file renaming,
and update backing_file info with "qemu-img rebase -u".

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
44b4e42552 lvmplugin: snapshot: use relative path for backing image
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
04cbc41943 lvmplugin: alloc_snap_image: die if file_size_info return empty size
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
2edfea478f plugin: volume_export: don't allow export of external snapshots
not yet implemented

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
b61e564606 common: fix qemu_img_resize
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
94b637a923 lvm snapshot: activate volume
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Fabian Grünbichler
615da71f77 rename_snapshot: fix parameter checks
both source and target snapshot need to be provided when renaming.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Fabian Grünbichler
f32e25f920 helpers: move qemu_img* to Common module
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Fabian Grünbichler
06016db1cb helpers: make qemu_img* storage config independent
by moving the preallocation handling to the call site, and preparing
them for taking further options like cluster size in the future.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
ea30d36da1 tests: add lvmplugin test
use the same template as the zfspoolplugin tests

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
eda88c94ed lvmplugin: add qcow2 snapshot
we format the LVM logical volume with qcow2 to handle the snapshot chain.

like for a qcow2 file, when a snapshot is taken, the current LVM volume
is renamed to the snapshot volname, and a new current LVM volume is
created with the snapshot volname as backing file

the snapshot volname is similar to lvmthin: snap_${volname}_${snapname}.qcow2

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
ccbced53c5 qcow2: add external snapshot support
add a snapext option to enable the feature

When a snapshot is taken, the current volume is renamed to snap volname
and a current image is created with the snap volume as backing file

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
b63147f5df plugin: fix volname parsing
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
a8d8bdf9ef storage: add volume_support_qemu_snapshot
Returns whether the volume supports qemu snapshots:
 'internal' : do the snapshot with qemu internal snapshot
 'external' : do the snapshot with qemu external snapshot
  undef     : does not support qemu snapshot

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
5f916079ea storage: add rename_snapshot method
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
bb21ba381d storage: volume_snapshot: add $running param
This adds a $running param to volume_snapshot; it can be used if some
extra actions need to be done at the storage layer when the snapshot
has already been done at the qemu level.

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
73bfe226d6 rbd && zfs : create_base : remove $running param from volume_snapshot
template guests are never running and never write
to their disks/mountpoints, so the $running parameters there can be
dropped.

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
83cccdcdea plugin: add qemu_img_resize
and add missing preallocation
dc5f690b97

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
eedae199a8 plugin: add qemu_img_measure
This computes the whole size of a qcow2 volume, data + metadata.
Needed for qcow2 over an LVM volume.

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
933736ad6d plugin: add qemu_img_info
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Alexandre Derumier
24fe1bf621 plugin: add qemu_img_create_qcow2_backed
and use it for the plugin's linked clones

This also enables extended_l2=on, as it is mandatory for backing-file
preallocation.

Preallocation was missing previously, so this should increase performance
for linked clones now (around 5x in 4k randwrite)

cluster_size is set to 128k, as it reduces qcow2 overhead (less disk, but
also less memory needed to cache metadata)

l2_extended is not enabled yet on the base image, but it could also help
to reduce overhead without impacting performance

bench on 100G qcow2 file:

fio --filename=/dev/sdb --direct=1 --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --name=test
fio --filename=/dev/sdb --direct=1 --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --name=test

base image:

randwrite 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 20215
randread 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 22219
randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 20217
randread 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 21742
randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 21599
randread 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 22037

clone image with backing file:

randwrite 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 3912
randread 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 21476
randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 20563
randread 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 22265
randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 18016
randread 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 21611

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
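For illustration, a linked-clone image with the options benchmarked above might be created roughly like this (backing path and file names are placeholders; the actual helper builds its command differently):

```
#!/usr/bin/perl
use strict;
use warnings;

my @cmd = (
    'qemu-img', 'create', '-f', 'qcow2',
    '-b', '../9999/base-9999-disk-0.qcow2', '-F', 'qcow2',
    '-o', 'preallocation=metadata,extended_l2=on,cluster_size=128k',
    'vm-100-disk-0.qcow2',
);
print "would run: @cmd\n";
```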
Alexandre Derumier
dd2bd851ca plugin: add qemu_img_create
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:55:28 +02:00
Hannes Duerr
8099a4639f RBD Plugin: add missing check for external ceph cluster
In 7684225 ("ceph/rbd: set 'keyring' in ceph configuration for
externally managed RBD storages") the ceph config creation was packed
into a new function and checked whether the installation is an external
Ceph cluster or not.
However, a check was forgotten in the RBDPlugin which is now added.

Without this check a configuration in /etc/pve/priv/ceph/<pool>.conf is
created and pvestatd complains

 pvestatd[1144]: ignoring custom ceph config for storage 'pool', 'monhost' is not set (assuming pveceph managed cluster)! because the file /etc/pve/priv/ceph/pool.conf

Fixes: 7684225 ("ceph/rbd: set 'keyring' in ceph configuration for externally managed RBD storages")
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250716130117.71785-1-h.duerr@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-16 15:26:02 +02:00
Fiona Ebner
4fb733a9ac zfs over iscsi: on-add hook: dynamically determine base path
This reduces the potential breakage from commit "fix #5071: zfs over
iscsi: add 'zfs-base-path' configuration option". Only setups where
'/dev/zvol' exists, but is not a valid base, will still be affected.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Link: https://lore.proxmox.com/20250605111109.52712-2-f.ebner@proxmox.com
2025-07-15 17:33:57 +02:00
Fiona Ebner
d181d0b1ee fix #5071: zfs over iscsi: add 'zfs-base-path' configuration option
Use '/dev/zvol' as a base path for new storages for providers 'iet'
and 'LIO', because that is what modern distributions use.

This is a breaking change regarding the addition of new storages on
older distributions, but it's enough to specify the base path '/dev'
explicitly for setups that require it.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Link: https://lore.proxmox.com/20250605111109.52712-1-f.ebner@proxmox.com
2025-07-15 17:33:57 +02:00
Thomas Lamprecht
7ecab87144 re-tidy perl source code with correct perltidy version
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-10 15:27:18 +02:00
Thomas Lamprecht
1e9b459717 bump version to 9.0.3
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-09 17:36:58 +02:00
Friedrich Weber
2796d6b639 lvmthin: disable autoactivation for new logical volumes
When discovering a new volume group (VG), for example on boot, LVM
triggers autoactivation. With the default settings, this activates all
logical volumes (LVs) in the VG. Activating an LV creates a
device-mapper device and a block device under /dev/mapper.

Autoactivation is problematic for shared LVM storages, see #4997 [1].
For the inherently local LVM-thin storage it is less problematic, but
it still makes sense to avoid unnecessarily activating LVs and thus
making them visible on the host at boot.

To avoid that, disable autoactivation after creating new LVs. lvcreate
on trixie does not accept the --setautoactivation flag for thin LVs
yet, support was only added with [2]. Hence, setting the flag is
done with an additional lvchange command for now. With this setting,
LVM autoactivation will not activate these LVs, and the storage stack
will take care of activating/deactivating LVs when needed.

The flag is only set for newly created LVs, so LVs created before this
patch can still trigger #4997. To avoid this, users will be advised to
run a script to disable autoactivation for existing LVs.

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997
[2] 1fba3b876b

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20250709141034.169726-3-f.weber@proxmox.com
2025-07-09 17:05:45 +02:00
Friedrich Weber
f296ffc4e4 fix #4997: lvm: create: disable autoactivation for new logical volumes
When discovering a new volume group (VG), for example on boot, LVM
triggers autoactivation. With the default settings, this activates all
logical volumes (LVs) in the VG. Activating an LV creates a
device-mapper device and a block device under /dev/mapper.

This is not necessarily problematic for local LVM VGs, but it is
problematic for VGs on top of a shared LUN used by multiple cluster
nodes (accessed via e.g. iSCSI/Fibre Channel/direct-attached SAS).

Concretely, in a cluster with a shared LVM VG where an LV is active on
nodes 1 and 2, deleting the LV on node 1 will not clean up the
device-mapper device on node 2. If an LV with the same name is
recreated later, the leftover device-mapper device will cause
activation of that LV on node 2 to fail with:

> device-mapper: create ioctl on [...] failed: Device or resource busy

Hence, certain combinations of guest removal (and thus LV removals)
and node reboots can cause guest creation or VM live migration (which
both entail LV activation) to fail with the above error message for
certain VMIDs, see bug #4997 for more information [1].

To avoid this issue in the future, disable autoactivation when
creating new LVs using the `--setautoactivation` flag. With this
setting, LVM autoactivation will not activate these LVs, and the
storage stack will take care of activating/deactivating the LV (only)
on the correct node when needed.

This additionally fixes an issue with multipath on FC/SAS-attached
LUNs where LVs would be activated too early after boot when multipath
is not yet available, see [3] for more details and current workaround.

The `--setautoactivation` flag was introduced with LVM 2.03.12 [2], so
it is available since Bookworm/PVE 8, which ships 2.03.16. Nodes with
older LVM versions ignore the flag and remove it on metadata updates,
which is why PVE 8 could not use the flag reliably, since there may
still be PVE 7 nodes in the cluster that reset it on metadata updates.

The flag is only set for newly created LVs, so LVs created before this
patch can still trigger #4997. To avoid this, users will be advised to
run a script to disable autoactivation for existing LVs.

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997
[2] https://gitlab.com/lvmteam/lvm2/-/blob/main/WHATS_NEW
[3] https://pve.proxmox.com/mediawiki/index.php?title=Multipath&oldid=12039#FC/SAS-specific_configuration

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20250709141034.169726-2-f.weber@proxmox.com
2025-07-09 17:03:14 +02:00
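A rough sketch of the commands involved (VG/LV names and sizes are placeholders): thick LVs can get the flag directly at creation time, while thin LVs need a follow-up lvchange, as noted in the lvm-thin commit above.

```
#!/usr/bin/perl
use strict;
use warnings;

# Printed only; the plugins run such commands via run_command().
my @thick = (
    'lvcreate', '--setautoactivation', 'n',
    '--size', '32g', '--name', 'vm-100-disk-0', 'sharedvg',
);
my @thin_fixup = ('lvchange', '--setautoactivation', 'n', 'pve/vm-100-disk-1');
print "thick LV: @thick\n";
print "thin LV:  @thin_fixup\n";
```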
Fabian Grünbichler
c369b5fa57 bump version to 9.0.2
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2025-07-03 11:48:45 +02:00
Fiona Ebner
280bb6be77 plugin api: bump api version and age
Introduce qemu_blockdev_options() plugin method.

In terms of the plugin API only, adding the qemu_blockdev_options()
method is a fully backwards-compatible change. When qemu-server
switches to '-blockdev' however, plugins where the default
implementation is not sufficient will not be usable for virtual
machines anymore.
Therefore, this is intended for the next major release, Proxmox VE 9.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
FG: fixed typo, add paragraph break

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2025-07-03 11:48:45 +02:00
Fiona Ebner
3bba744b0b plugin: qemu blockdev options: parse protocol paths in default implementation
for better backwards compatibility. This also means using path()
rather than filesystem_path() as the latter does not return protocol
paths.

Some protocol paths are not implemented (considered all that are
listed by grepping for '\.protocol_name' in QEMU):
- ftp(s)/http(s), which would access web servers via curl. This one
  could be added if there is enough interest.
- nvme://XXXX:XX:XX.X/X, which would access a host NVME device.
- null-{aio,co}, which are mainly useful for debugging.
- pbs, because path-based access is not used anymore for PBS,
  live-restore in qemu-server already defines a driver-based device.
- nfs and ssh, because the QEMU build script used by Proxmox VE does
  not enable them.
- blk{debug,verify}, because they are for debugging.
- the ones used by blkio, i.e. io_uring, nvme-io_uring,
  virtio-blk-vfio-pci, virtio-blk-vhost-user and
  virtio-blk-vhost-vdpa, because the QEMU build script used by Proxmox
  VE does not enable blkio.
- backup-dump and zeroinit, because they should not be used by the
  storage layer directly.
- gluster, because support is dropped in Proxmox VE 9.
- host_cdrom, because the storage layer should not access host CD-ROM
  devices.
- fat, because it hopefully isn't used by any third-party plugin here.

Co-developed-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-03 11:48:45 +02:00
Fabian Grünbichler
1e75dbcefd qemu blockdev options: error out in case driver is not supported
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2025-07-03 11:48:36 +02:00
Fiona Ebner
9aa2722d69 qemu blockdev options: restrict allowed drivers and options
Everything the default plugin method implementation can return is
allowed, so there is no breakage introduced by this patch.

By far the most common drivers will be 'file' and 'host_device', which
the default implementation of the plugin method currently uses. Other
quite common ones will be 'iscsi' and 'nbd'. There might also be
plugins with 'rbd' and it is planned to support QEMU protocol-paths in
the default plugin method implementation, where the 'rbd:' protocol
will also be supported.

Plugin authors are encouraged to request additional drivers and
options based on their needs on the pve-devel mailing list. The list
just starts out more restrictive, but everything where there is no
good reason to not allow could be allowed in the future upon request.

Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-03 10:46:45 +02:00
Fiona Ebner
6c07619abd plugin: add machine version to qemu_blockdev_options() interface
Plugins can guard based on the machine version to be able to switch
drivers or options in a safe way without the risk of breaking older
versions.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-03 10:46:45 +02:00
Fiona Ebner
2d874037f3 plugin: qemu block device: add support for snapshot option
This is mostly in preparation for external qcow2 snapshot support.

For internal qcow2 snapshots, which currently are the only supported
variant, it is not possible to attach the snapshot only. If access to
that is required it will need to be handled differently, e.g. via a
FUSE/NBD export.

Such accesses are currently not done for running VMs via '-drive'
either, so there still is feature parity.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-03 10:46:45 +02:00
Fiona Ebner
590fb76238 plugin: qemu block device: add hints option and EFI disk hint
For '-drive', qemu-server sets special cache options for EFI disk
using RBD. In preparation to seamlessly switch to the new '-blockdev'
interface, do the same here. Note that the issue from bug #3329, which
is solved by these cache options, still affects current versions.

With -blockdev, the cache options are split up. While cache.direct and
cache.no-flush can be set in the -blockdev options, cache.writeback is
a front-end property and was intentionally removed from the -blockdev
options by QEMU commit aaa436f998 ("block: Remove cache.writeback from
blockdev-add"). It needs to be configured as the 'write-cache'
property for the ide-hd/scsi-hd/virtio-blk device.

The default is already 'writeback' and no cache mode can be set for an
EFI drive configuration in Proxmox VE currently, so there will not be
a clash.

┌─────────────┬─────────────────┬──────────────┬────────────────┐
│             │ cache.writeback │ cache.direct │ cache.no-flush │
├─────────────┼─────────────────┼──────────────┼────────────────┤
│writeback    │ on              │ off          │ off            │
├─────────────┼─────────────────┼──────────────┼────────────────┤
│none         │ on              │ on           │ off            │
├─────────────┼─────────────────┼──────────────┼────────────────┤
│writethrough │ off             │ off          │ off            │
├─────────────┼─────────────────┼──────────────┼────────────────┤
│directsync   │ off             │ on           │ off            │
├─────────────┼─────────────────┼──────────────┼────────────────┤
│unsafe       │ on              │ off          │ on             │
└─────────────┴─────────────────┴──────────────┴────────────────┘

Table from 'man kvm'.

Alternatively, the option could only be set once when allocating the
RBD volume. However, then we would need to detect all cases were a
volume could potentially be used as an EFI disk later. Having a custom
disk type would help a lot there. The approach here was chosen as it
is catch-all and should not be too costly either.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-03 10:46:45 +02:00
Fiona Ebner
f9c390bdfd rbd plugin: implement new method to get qemu blockdevice options
Co-developed-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-03 10:46:45 +02:00
Fiona Ebner
7684225bac ceph/rbd: set 'keyring' in ceph configuration for externally managed RBD storages
For QEMU, when using '-blockdev', there is no way to specify the
keyring file like was possible with '-drive', so it has to be set in
the corresponding Ceph configuration file. As it applies to all images
on the storage, it also is the most natural place for the setting.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-03 10:46:45 +02:00
Fiona Ebner
b8acc0286b zfs pool plugin: implement method to get qemu blockdevice options
ZFS does not have a filesystem_path() method, so the default
implementation for qemu_blockdev_options() cannot be re-used. This is
most likely, because snapshots are currently not directly accessible
via a filesystem path in the Proxmox VE storage layer.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-03 10:46:45 +02:00
Fiona Ebner
02931346c6 zfs iscsi plugin: implement new method to get qemu blockdevice options
Reported-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-03 10:46:45 +02:00
Fiona Ebner
c136eb76c7 iscsi direct plugin: implement method to get qemu blockdevice options
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-03 10:46:45 +02:00
Fiona Ebner
073c5677c7 plugin: add method to get qemu blockdevice options for volume
This is in preparation to switch qemu-server from using '-drive' to
the modern '-blockdev' in the QEMU commandline options as well as for
the qemu-storage-daemon, which only supports '-blockdev'. The plugins
know best what driver and options are needed to access an image, so
a dedicated plugin method returning the necessary parameters for
'-blockdev' is the most straight-forward.

There intentionally is only handling for absolute paths in the default
plugin implementation. Any plugin requiring more needs to implement
the method itself. With PVE 9 being a major release and most popular
plugins not using special protocols like 'rbd://', this seems
acceptable.

For NBD, etc. qemu-server should construct the blockdev object.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-07-03 10:46:45 +02:00
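As a rough, non-authoritative sketch of the shape such a plugin method might take (parameter names and the return structure are assumptions based on this commit and the API change log excerpt further below, not a verified copy of the base plugin):

```
# Sketch only: return a QEMU blockdev driver and path for a volume.
# Parameter names are assumptions, not the documented signature.
sub qemu_blockdev_options {
    my ($class, $scfg, $storeid, $volname, $machine_version, $options) = @_;

    my $path = $class->filesystem_path($scfg, $volname, $options->{'snapshot-name'});

    # host_device for block devices, file for regular files
    my $driver = -b $path ? 'host_device' : 'file';
    return { driver => $driver, filename => $path };
}
```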
Thomas Lamprecht
823707a7ac bump version to 9.0.1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-06-16 16:12:57 +02:00
Thomas Lamprecht
7669a99e97 drop support for using GlusterFS directly
The GlusterFS project has been unmaintained for a while, and other
projects like QEMU are also dropping support for using it natively.

One can still use the gluster tools to mount an instance manually and
then use it as directory storage; the better (long-term) option will
be to replace the storage server with something maintained, though. As
PVE 8 will be supported until the middle of 2026, users have some time
before they need to decide which way they will go.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-06-16 16:05:59 +02:00
Thomas Lamprecht
a734efcbd3 bump version to 9.0.0
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-06-11 10:04:37 +02:00
Thomas Lamprecht
5a66c27cc6 auto-format code using perltidy with Proxmox style guide
using the new top-level `make tidy` target, which calls perltidy via
our wrapper to enforce the desired style as closely as possible.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-06-11 10:03:21 +02:00
Thomas Lamprecht
5d23073cb6 buildsys: add top-level make tidy target
See pve-common's commit 5ae1f2e ("buildsys: add tidy make target")
for details about the chosen xargs parameters.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-06-10 10:31:13 +02:00
Fiona Ebner
b6d049b176 esxi plugin: remove invalid fixme
No other plugin activates the storage inside the path() method either.
The caller needs to ensure that the storage is activated before using
the result of path().

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-05-12 10:55:24 +02:00
Fiona Ebner
9758abcb5e iscsi direct plugin: add trailing newline to error messages
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-05-09 13:39:01 +02:00
Fabian Grünbichler
b265925d64 rbd: merge rbd_cmd and build_cmd helpers
since the former was just a wrapper around the latter, and the only call
site..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-By: Aaron Lauterer <a.lauterer@proxmox.com>
Tested-By: Aaron Lauterer <a.lauterer@proxmox.com>
2025-04-22 12:47:05 +02:00
Fabian Grünbichler
e2b9e36f48 rbd: remove no longer used rados_cmd helper
all librados interaction is now via our XS binding, the last usage was
removed in 41aacc6cde

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-By: Aaron Lauterer <a.lauterer@proxmox.com>
Tested-By: Aaron Lauterer <a.lauterer@proxmox.com>
2025-04-22 12:47:02 +02:00
68 changed files with 16612 additions and 13728 deletions


@ -6,6 +6,57 @@ without breaking anything unaware of it.)
Future changes should be documented in here.
## Version 12:
* Introduce `qemu_blockdev_options()` plugin method
Proxmox VE will switch to the more modern QEMU command line option `-blockdev` replacing `-drive`.
With `-drive`, it was enough to specify a path, where special protocol paths like `iscsi://` were
also supported. With `-blockdev`, the data is more structured, a driver needs to be specified
alongside the path to an image and each driver supports driver-specific options. Most storage
plugins should be fine using driver `host_device` in case of a block device and `file` in case of
a file and no special options. See the default implementation of the base plugin for guidance, also
if the plugin uses protocol paths. Implement this method for Proxmox VE 9.
See `$allowed_qemu_blockdev_options` in `PVE/Storage.pm` for currently allowed drivers and options.
Feel free to request allowing more drivers or options on the pve-devel mailing list based on your
needs.
* Introduce `rename_snapshot()` plugin method
This method allows renaming a VM disk snapshot to a different snapshot name.
* Introduce `volume_qemu_snapshot_method()` plugin method
This method declares how snapshots should be handled for *running* VMs.
This should return one of the following:
'qemu':
Qemu must perform the snapshot. The storage plugin does nothing.
'storage':
The storage plugin *transparently* performs the snapshot and the running VM does not need to
do anything.
'mixed':
For taking a snapshot: The storage performs an offline snapshot and qemu then has to reopen
the volume.
For removing a snapshot: One of 2 things will happen (both must be supported):
a) Qemu will "unhook" the snapshot by moving its data into the child snapshot, and then call
`volume_snapshot_delete` with `running` set, in which case the storage should delete only
the snapshot without touching the surrounding snapshots.
b) Qemu will "commit" the child snapshot to the one which is being removed, then call
`volume_snapshot_delete()` on the child snapshot, then call `rename_snapshot()` to move the
merged snapshot into place.
NOTE: Storages must support using "current" as a special name in `rename_snapshot()` to
cheaply convert a snapshot into the current disk state and back.
* Introduce `get_formats()` plugin method
Get information about the supported formats and default format according to the current storage
configuration. The default implementation is backwards-compatible with previous behavior and looks
at the definition given in the plugin data, as well as the `format` storage configuration option,
which can override the default format. Must be implemented when the supported formats or default
format depend on the storage configuration.
## Version 11:
* Allow declaring storage features via plugin data
@ -15,7 +66,7 @@ Future changes should be documented in here.
`backup-provider`, see below for more details. To declare support for this feature, return
`features => { 'backup-provider' => 1 }` as part of the plugin data.
* Introduce new_backup_provider() plugin method
* Introduce `new_backup_provider()` plugin method
Proxmox VE now supports a `Backup Provider API` that can be used to implement custom backup
solutions tightly integrated in the Proxmox VE stack. See the `PVE::BackupProvider::Plugin::Base`


@ -1,4 +1,5 @@
include /usr/share/dpkg/pkg-info.mk
include /usr/share/dpkg/architecture.mk
PACKAGE=libpve-storage-perl
BUILDDIR ?= $(PACKAGE)-$(DEB_VERSION)
@ -6,10 +7,14 @@ DSC=$(PACKAGE)_$(DEB_VERSION).dsc
GITVERSION:=$(shell git rev-parse HEAD)
DEB=$(PACKAGE)_$(DEB_VERSION_UPSTREAM_REVISION)_all.deb
DEB=$(PACKAGE)_$(DEB_VERSION_UPSTREAM_REVISION)_$(DEB_HOST_ARCH).deb
all:
.PHONY: tidy
tidy:
git ls-files ':*.p[ml]'| xargs -n4 -P0 proxmox-perltidy
.PHONY: dinstall
dinstall: deb
dpkg -i $(DEB)
@ -18,7 +23,7 @@ $(BUILDDIR):
rm -rf $@ $@.tmp
cp -a src $@.tmp
cp -a debian $@.tmp/
echo "git clone git://git.proxmox.com/git/pve-storage.git\\ngit checkout $(GITVERSION)" >$@.tmp/debian/SOURCE
echo "git clone https://github.com/jiangcuo/pve-storage.git\\ngit checkout $(GITVERSION)" >$@.tmp/debian/SOURCE
mv $@.tmp $@
.PHONY: deb

debian/changelog

@ -1,3 +1,173 @@
libpve-storage-perl (9.0.13) trixie; urgency=medium
* deactivate volumes: terminate error message with newline.
-- Proxmox Support Team <support@proxmox.com> Fri, 01 Aug 2025 18:36:51 +0200
libpve-storage-perl (9.0.12) trixie; urgency=medium
* plugin: fix parse_name_dir regression for custom volume names.
* fix #6584: plugin: list_images: only include parseable filenames.
* plugin: extend snapshot name parsing to legacy volnames.
* plugin: parse_name_dir: drop noisy deprecation warning.
* plugin: nfs, cifs: use volume qemu snapshot methods from dir plugin to
ensure an online snapshot on such storage types with
snapshot-as-volume-chain enabled does not take an internal qcow2 snapshot.
-- Proxmox Support Team <support@proxmox.com> Thu, 31 Jul 2025 14:22:12 +0200
libpve-storage-perl (9.0.11) trixie; urgency=medium
* lvm volume snapshot info: untaint snapshot filename
-- Proxmox Support Team <support@proxmox.com> Thu, 31 Jul 2025 09:18:56 +0200
libpve-storage-perl (9.0.10) trixie; urgency=medium
* RRD metrics: use new pve-storage-9.0 format RRD file location, if it
exists.
-- Proxmox Support Team <support@proxmox.com> Thu, 31 Jul 2025 04:14:19 +0200
libpve-storage-perl (9.0.9) trixie; urgency=medium
* fix #5181: pbs: store and read passwords as unicode.
* fix #6587: lvm plugin: snapshot info: fix parsing snapshot name.
* config: drop 'maxfiles' parameter, it was replaced with the more flexible
prune options in Proxmox VE 7.0 already.
-- Proxmox Support Team <support@proxmox.com> Wed, 30 Jul 2025 19:51:07 +0200
libpve-storage-perl (9.0.8) trixie; urgency=medium
* snapshot-as-volume-chain: fix offline removal of snapshot on directory
storage via UI/API by untainting/validating a filename correctly.
* snapshot-as-volume-chain: fix typo in log message for rebase operation.
* snapshot-as-volume-chain: ensure backing file references are kept relative
upon snapshot deletion. This ensures the backing chain stays intact should
the volumes be moved to a different path.
* fix #6561: ZFS: ensure refquota for container volumes is correctly applied
after rollback. The quota is tracked via a ZFS user property.
* btrfs plugin: remove unnecessary mkpath call
* drop some left-overs for 'rootdir' sub-directory handling that were
left-over from when Proxmox VE supported OpenVZ.
* path to volume ID conversion: properly quote regexes for hardening.
-- Proxmox Support Team <support@proxmox.com> Tue, 29 Jul 2025 17:17:11 +0200
libpve-storage-perl (9.0.7) trixie; urgency=medium
* fix #6553: lvmthin: implement volume_rollback_is_possible sub
* plugin: add get_formats() method and use it instead of default_format()
* lvm plugin: implement get_formats() method
* lvm plugin: check if 'fmt' parameter is defined before comparisons
* api: status: rely on get_formats() method for determining format-related info
* introduce resolve_format_hint() helper
* improve api change log style
-- Proxmox Support Team <support@proxmox.com> Tue, 22 Jul 2025 15:01:49 +0200
libpve-storage-perl (9.0.6) trixie; urgency=medium
* lvm plugin: properly handle qcow2 format when querying volume size info.
* lvm plugin: list images: properly handle qcow2 format.
-- Proxmox Support Team <support@proxmox.com> Fri, 18 Jul 2025 14:28:53 +0200
libpve-storage-perl (9.0.5) trixie; urgency=medium
* config: rename external-snapshots option to snapshot-as-volume-chain.
* d/postinst: drop obsolete migration for CIFS credential file path, left
over from upgrade to PVE 7.
-- Proxmox Support Team <support@proxmox.com> Thu, 17 Jul 2025 19:52:21 +0200
libpve-storage-perl (9.0.4) trixie; urgency=medium
* fix #5071: zfs over iscsi: add 'zfs-base-path' configuration option.
* zfs over iscsi: on-add hook: dynamically determine base path.
* rbd storage: add missing check for external ceph cluster.
* LVM: add initial support for storage-managed snapshots through qcow2.
* directory file system based storages: add initial support for external
qcow2 snapshots.
-- Proxmox Support Team <support@proxmox.com> Thu, 17 Jul 2025 01:17:05 +0200
libpve-storage-perl (9.0.3) trixie; urgency=medium
* fix #4997: lvm: volume create: disable auto-activation for new logical
volumes, as that can be problematic for VGs on top of a shared LUN used by
multiple cluster nodes, for example those accessed via iSCSI/Fibre
Channel/direct-attached SAS.
* lvm-thin: disable auto-activation for new logical volumes to stay
consistent with thick LVM and to avoid the small overhead on activating
volumes thatmight not be used.
-- Proxmox Support Team <support@proxmox.com> Wed, 09 Jul 2025 17:34:36 +0200
libpve-storage-perl (9.0.2) trixie; urgency=medium
* plugin: add method to get qemu blockdevice options for a volume
* implement qemu_blockdevice_options for iscsi direct, zfs iscsi, zfs pool,
* and rbd plugins
* ceph/rbd: set 'keyring' in ceph configuration for externally managed RBD storages
* plugin api: bump api version and age
-- Proxmox Support Team <support@proxmox.com> Thu, 03 Jul 2025 11:44:15 +0200
libpve-storage-perl (9.0.1) trixie; urgency=medium
* drop support for accessing Gluster based storage directly due to its
effective end of support. The last upstream release happened over 2.5
years ago and there's currently no one providing enterprise support or
security updates.
User can either stay on Proxmox VE 8 until its end-of-life (probably end
of June 2026), or mount GlusterFS "manually" (e.g., /etc/fstab) and add it
as directory storage to Proxmox VE.
We recommend moving to another storage technology altogether though.
-- Proxmox Support Team <support@proxmox.com> Mon, 16 Jun 2025 16:12:37 +0200
libpve-storage-perl (9.0.0) trixie; urgency=medium
* re-build for Debian 12 "Trixie" based Proxmox VE 9 release.
-- Proxmox Support Team <support@proxmox.com> Wed, 11 Jun 2025 10:04:22 +0200
libpve-storage-perl (8.3.6-1) bookworm; urgency=medium
* pvebcache: fix issue
-- Lierfang Support Team <itsupport@lierfang.com> Mon, 14 Apr 2025 18:58:32 +0800
libpve-storage-perl (8.3.6) bookworm; urgency=medium
* plugin: file size info: be consistent about size of directory subvol to
@ -35,6 +205,24 @@ libpve-storage-perl (8.3.5) bookworm; urgency=medium
-- Proxmox Support Team <support@proxmox.com> Sun, 06 Apr 2025 21:18:38 +0200
libpve-storage-perl (8.3.4-3) bookworm; urgency=medium
* fix bcache syntax error
-- Lierfang Support Team <itsupport@lierfang.com> Sun, 09 Mar 2025 17:19:59 +0800
libpve-storage-perl (8.3.4-2) bookworm; urgency=medium
* fix bcache cli missing
-- Lierfang Support Team <itsupport@lierfang.com> Sun, 09 Mar 2025 16:46:16 +0800
libpve-storage-perl (8.3.4-1) bookworm; urgency=medium
* fix bcache missing.
-- Lierfang Support Team <itsupport@lierfang.com> Wed, 26 Feb 2025 17:23:42 +0800
libpve-storage-perl (8.3.4) bookworm; urgency=medium
* rbd plugin: drop broken cache for pool specific information in list image.
@ -62,6 +250,22 @@ libpve-storage-perl (8.3.4) bookworm; urgency=medium
-- Proxmox Support Team <support@proxmox.com> Thu, 03 Apr 2025 19:20:17 +0200
libpve-storage-perl (8.3.4) bookworm; urgency=medium
* cli: add pvebcache.
* copyright: add lierfang information.
-- Lierfang Support Team <itsupport@lierfang.com> Wed, 26 Feb 2025 17:13:39 +0800
libpve-storage-perl (8.3.3+port1) bookworm; urgency=medium
* add clone_image_pxvirt function for pxvdi
* Add bcache support
-- Jiangcuo <jiangcuo@lierfang.com> Sat, 22 Feb 2025 14:04:30 +0800
libpve-storage-perl (8.3.3) bookworm; urgency=medium
* plugin: export/import: fix calls to path() method

12
debian/control vendored
View File

@ -1,7 +1,7 @@
Source: libpve-storage-perl
Section: perl
Priority: optional
Maintainer: Proxmox Support Team <support@proxmox.com>
Maintainer: Lierfang Support Team <itsupport@lierfang.com>
Build-Depends: debhelper-compat (= 13),
libfile-chdir-perl,
libposix-strptime-perl,
@ -18,21 +18,21 @@ Build-Depends: debhelper-compat (= 13),
pve-qemu-kvm | qemu-utils,
zfsutils-linux,
Standards-Version: 4.6.2
Homepage: https://www.proxmox.com
Homepage: https://www.lierfang.com
Package: libpve-storage-perl
Architecture: all
Architecture: any
Breaks: libpve-guest-common-perl (<< 4.0-3),
libpve-http-server-perl (<< 4.0-3),
pve-container (<< 3.1-2),
pve-manager (<< 5.2-12),
qemu-server (<< 8.3.2),
Depends: bzip2,
Depends: bcache-tools,
bzip2,
ceph-common (>= 12.2~),
ceph-fuse,
ceph-fuse [ !riscv64 ],
cifs-utils,
cstream,
glusterfs-client (>= 3.4.0-2),
libfile-chdir-perl,
libposix-strptime-perl,
libpve-access-control (>= 8.1.2),

18
debian/copyright vendored
View File

@ -1,3 +1,21 @@
Copyright (C) 2011 - 2025 Lierfang
This software is maintained by Lierfang <itsupport@lierfang.com>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Copyright (C) 2010 - 2024 Proxmox Server Solutions GmbH
This software is written by Proxmox Server Solutions GmbH <support@proxmox.com>

32
debian/postinst vendored
View File

@ -6,31 +6,19 @@ set -e
case "$1" in
configure)
if test -n "$2"; then
# TODO: remove once PVE 8.0 is released
if dpkg --compare-versions "$2" 'lt' '7.0-3'; then
warning="Warning: failed to move old CIFS credential file, cluster not quorate?"
for file in /etc/pve/priv/*.cred; do
if [ -f "$file" ]; then
echo "Info: found CIFS credentials using old path: $file" >&2
mkdir -p "/etc/pve/priv/storage" || { echo "$warning" && continue; }
base=$(basename --suffix=".cred" "$file")
target="/etc/pve/priv/storage/$base.pw"
if [ -f "$target" ]; then
if diff "$file" "$target" >&2 > /dev/null; then
echo "Info: removing $file, because it is identical to $target" >&2
rm "$file" || { echo "$warning" && continue; }
else
echo "Warning: not renaming $file, because $target already exists and differs!" >&2
fi
else
echo "Info: renaming $file to $target" >&2
mv "$file" "$target" || { echo "$warning" && continue; }
if test -n "$2"; then # got old version so this is an update
# TODO: Can be dropped with some 9.x stable release, this was never in a publicly available
# package, so only for convenience for internal testing setups.
if dpkg --compare-versions "$2" 'lt' '9.0.5'; then
if grep -Pq '^\texternal-snapshots ' /etc/pve/storage.cfg; then
echo "Replacing old 'external-snapshots' with 'snapshot-as-volume-chain' in /etc/pve/storage.cfg"
sed -i 's/^\texternal-snapshots /\tsnapshot-as-volume-chain /' /etc/pve/storage.cfg || \
echo "Failed to replace old 'external-snapshots' with 'snapshot-as-volume-chain' in /etc/pve/storage.cfg"
fi
fi
done
fi
fi
;;

View File

@ -19,27 +19,27 @@ use PVE::API2::Disks::ZFS;
use PVE::RESTHandler;
use base qw(PVE::RESTHandler);
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
subclass => "PVE::API2::Disks::LVM",
path => 'lvm',
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
subclass => "PVE::API2::Disks::LVMThin",
path => 'lvmthin',
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
subclass => "PVE::API2::Disks::Directory",
path => 'directory',
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
subclass => "PVE::API2::Disks::ZFS",
path => 'zfs',
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'index',
path => '',
method => 'GET',
@ -58,7 +58,7 @@ __PACKAGE__->register_method ({
type => "object",
properties => {},
},
links => [ { rel => 'child', href => "{name}" } ],
links => [{ rel => 'child', href => "{name}" }],
},
code => sub {
my ($param) = @_;
@ -75,9 +75,10 @@ __PACKAGE__->register_method ({
];
return $result;
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'list',
path => 'list',
method => 'GET',
@ -123,8 +124,8 @@ __PACKAGE__->register_method ({
used => { type => 'string', optional => 1 },
gpt => { type => 'boolean' },
mounted => { type => 'boolean' },
size => { type => 'integer'},
osdid => { type => 'integer'}, # TODO: deprecate / remove in PVE 9?
size => { type => 'integer' },
osdid => { type => 'integer' }, # TODO: deprecate / remove in PVE 9?
'osdid-list' => {
type => 'array',
items => { type => 'integer' },
@ -132,13 +133,13 @@ __PACKAGE__->register_method ({
vendor => { type => 'string', optional => 1 },
model => { type => 'string', optional => 1 },
serial => { type => 'string', optional => 1 },
wwn => { type => 'string', optional => 1},
health => { type => 'string', optional => 1},
wwn => { type => 'string', optional => 1 },
health => { type => 'string', optional => 1 },
parent => {
type => 'string',
description => 'For partitions only. The device path of ' .
'the disk the partition resides on.',
optional => 1
description => 'For partitions only. The device path of '
. 'the disk the partition resides on.',
optional => 1,
},
},
},
@ -150,9 +151,7 @@ __PACKAGE__->register_method ({
my $include_partitions = $param->{'include-partitions'} // 0;
my $disks = PVE::Diskmanage::get_disks(
undef,
$skipsmart,
$include_partitions
undef, $skipsmart, $include_partitions,
);
my $type = $param->{type} // '';
@ -163,8 +162,8 @@ __PACKAGE__->register_method ({
if ($type eq 'journal_disks') {
next if $entry->{osdid} >= 0;
if (my $usage = $entry->{used}) {
next if !($usage eq 'partitions' && $entry->{gpt}
|| $usage eq 'LVM');
next
if !($usage eq 'partitions' && $entry->{gpt} || $usage eq 'LVM');
}
} elsif ($type eq 'unused') {
next if $entry->{used};
@ -174,9 +173,10 @@ __PACKAGE__->register_method ({
push @$result, $entry;
}
return $result;
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'smart',
path => 'smart',
method => 'GET',
@ -207,7 +207,7 @@ __PACKAGE__->register_method ({
properties => {
health => { type => 'string' },
type => { type => 'string', optional => 1 },
attributes => { type => 'array', optional => 1},
attributes => { type => 'array', optional => 1 },
text => { type => 'string', optional => 1 },
},
},
@ -222,9 +222,10 @@ __PACKAGE__->register_method ({
$result = { health => $result->{health} } if $param->{healthonly};
return $result;
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'initgpt',
path => 'initgpt',
method => 'POST',
@ -271,9 +272,10 @@ __PACKAGE__->register_method ({
my $diskid = $disk;
$diskid =~ s|^.*/||; # remove all up to the last slash
return $rpcenv->fork_worker('diskinit', $diskid, $authuser, $worker);
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'wipe_disk',
path => 'wipedisk',
method => 'PUT',
@ -314,6 +316,7 @@ __PACKAGE__->register_method ({
my $basename = basename($disk); # avoid '/' in the ID
return $rpcenv->fork_worker('wipedisk', $basename, $authuser, $worker);
}});
},
});
1;

View File

@ -90,7 +90,7 @@ my $write_ini = sub {
file_set_contents($filename, $content);
};
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'index',
path => '',
method => 'GET',
@ -139,36 +139,44 @@ __PACKAGE__->register_method ({
my $result = [];
dir_glob_foreach('/etc/systemd/system', '^mnt-pve-(.+)\.mount$', sub {
dir_glob_foreach(
'/etc/systemd/system',
'^mnt-pve-(.+)\.mount$',
sub {
my ($filename, $storid) = @_;
$storid = PVE::Systemd::unescape_unit($storid);
my $unitfile = "/etc/systemd/system/$filename";
my $unit = $read_ini->($unitfile);
push @$result, {
push @$result,
{
unitfile => $unitfile,
path => "/mnt/pve/$storid",
device => $unit->{'Mount'}->{'What'},
type => $unit->{'Mount'}->{'Type'},
options => $unit->{'Mount'}->{'Options'},
};
});
},
);
return $result;
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'create',
path => '',
method => 'POST',
proxyto => 'node',
protected => 1,
permissions => {
description => "Requires additionally 'Datastore.Allocate' on /storage when setting 'add_storage'",
description =>
"Requires additionally 'Datastore.Allocate' on /storage when setting 'add_storage'",
check => ['perm', '/', ['Sys.Modify']],
},
description => "Create a Filesystem on an unused disk. Will be mounted under '/mnt/pve/NAME'.",
description =>
"Create a Filesystem on an unused disk. Will be mounted under '/mnt/pve/NAME'.",
parameters => {
additionalProperties => 0,
properties => {
@ -226,7 +234,8 @@ __PACKAGE__->register_method ({
# reserve the name and add as disabled, will be enabled below if creation works out
PVE::API2::Storage::Config->create_or_update(
$name, $node, $storage_params, $verify_params, 1);
$name, $node, $storage_params, $verify_params, 1,
);
}
my $mounted = PVE::Diskmanage::mounted_paths();
@ -251,10 +260,14 @@ __PACKAGE__->register_method ({
my ($devname) = $dev =~ m|^/dev/(.*)$|;
$part = "/dev/";
dir_glob_foreach("/sys/block/$devname", qr/\Q$devname\E.+/, sub {
dir_glob_foreach(
"/sys/block/$devname",
qr/\Q$devname\E.+/,
sub {
my ($partition) = @_;
$part .= $partition;
});
},
);
}
# create filesystem
@ -277,14 +290,17 @@ __PACKAGE__->register_method ({
$cmd = [$BLKID, $part, '-o', 'export'];
print "# ", join(' ', @$cmd), "\n";
run_command($cmd, outfunc => sub {
run_command(
$cmd,
outfunc => sub {
my ($line) = @_;
if ($line =~ m/^UUID=(.*)$/) {
$uuid = $1;
$uuid_path = "/dev/disk/by-uuid/$uuid";
}
});
},
);
die "could not get UUID of device '$part'\n" if !$uuid;
@ -305,22 +321,25 @@ __PACKAGE__->register_method ({
if ($param->{add_storage}) {
PVE::API2::Storage::Config->create_or_update(
$name, $node, $storage_params, $verify_params);
$name, $node, $storage_params, $verify_params,
);
}
});
};
return $rpcenv->fork_worker('dircreate', $name, $user, $worker);
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'delete',
path => '{name}',
method => 'DELETE',
proxyto => 'node',
protected => 1,
permissions => {
description => "Requires additionally 'Datastore.Allocate' on /storage when setting 'cleanup-config'",
description =>
"Requires additionally 'Datastore.Allocate' on /storage when setting 'cleanup-config'",
check => ['perm', '/', ['Sys.Modify']],
},
description => "Unmounts the storage and removes the mount unit.",
@ -330,8 +349,9 @@ __PACKAGE__->register_method ({
node => get_standard_option('pve-node'),
name => get_standard_option('pve-storage-id'),
'cleanup-config' => {
description => "Marks associated storage(s) as not available on this node anymore ".
"or removes them from the configuration (if configured for this node only).",
description =>
"Marks associated storage(s) as not available on this node anymore "
. "or removes them from the configuration (if configured for this node only).",
type => 'boolean',
optional => 1,
default => 0,
@ -380,7 +400,9 @@ __PACKAGE__->register_method ({
run_command(['systemctl', 'stop', $mountunitname]);
run_command(['systemctl', 'disable', $mountunitname]);
unlink $mountunitpath or $! == ENOENT or die "cannot remove $mountunitpath - $!\n";
unlink $mountunitpath
or $! == ENOENT
or die "cannot remove $mountunitpath - $!\n";
my $config_err;
if ($param->{'cleanup-config'}) {
@ -388,7 +410,9 @@ __PACKAGE__->register_method ({
my ($scfg) = @_;
return $scfg->{type} eq 'dir' && $scfg->{path} eq $path;
};
eval { PVE::API2::Storage::Config->cleanup_storages_for_node($match, $node); };
eval {
PVE::API2::Storage::Config->cleanup_storages_for_node($match, $node);
};
warn $config_err = $@ if $@;
}
@ -402,6 +426,7 @@ __PACKAGE__->register_method ({
};
return $rpcenv->fork_worker('dirremove', $name, $user, $worker);
}});
},
});
1;

View File

@ -14,7 +14,7 @@ use PVE::RESTHandler;
use base qw(PVE::RESTHandler);
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'index',
path => '',
method => 'GET',
@ -72,7 +72,8 @@ __PACKAGE__->register_method ({
},
size => {
type => 'integer',
description => 'The size of the physical volume in bytes',
description =>
'The size of the physical volume in bytes',
},
free => {
type => 'integer',
@ -97,7 +98,7 @@ __PACKAGE__->register_method ({
my $vg = $vgs->{$vg_name};
$vg->{name} = $vg_name;
$vg->{leaf} = 0;
foreach my $pv (@{$vg->{pvs}}) {
foreach my $pv (@{ $vg->{pvs} }) {
$pv->{leaf} = 1;
}
$vg->{children} = delete $vg->{pvs};
@ -108,16 +109,18 @@ __PACKAGE__->register_method ({
leaf => 0,
children => $result,
};
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'create',
path => '',
method => 'POST',
proxyto => 'node',
protected => 1,
permissions => {
description => "Requires additionally 'Datastore.Allocate' on /storage when setting 'add_storage'",
description =>
"Requires additionally 'Datastore.Allocate' on /storage when setting 'add_storage'",
check => ['perm', '/', ['Sys.Modify']],
},
description => "Create an LVM Volume Group",
@ -167,7 +170,8 @@ __PACKAGE__->register_method ({
# reserve the name and add as disabled, will be enabled below if creation works out
PVE::API2::Storage::Config->create_or_update(
$name, $node, $storage_params, $verify_params, 1);
$name, $node, $storage_params, $verify_params, 1,
);
}
my $worker = sub {
@ -187,22 +191,25 @@ __PACKAGE__->register_method ({
if ($param->{add_storage}) {
PVE::API2::Storage::Config->create_or_update(
$name, $node, $storage_params, $verify_params);
$name, $node, $storage_params, $verify_params,
);
}
});
};
return $rpcenv->fork_worker('lvmcreate', $name, $user, $worker);
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'delete',
path => '{name}',
method => 'DELETE',
proxyto => 'node',
protected => 1,
permissions => {
description => "Requires additionally 'Datastore.Allocate' on /storage when setting 'cleanup-config'",
description =>
"Requires additionally 'Datastore.Allocate' on /storage when setting 'cleanup-config'",
check => ['perm', '/', ['Sys.Modify']],
},
description => "Remove an LVM Volume Group.",
@ -212,8 +219,9 @@ __PACKAGE__->register_method ({
node => get_standard_option('pve-node'),
name => get_standard_option('pve-storage-id'),
'cleanup-config' => {
description => "Marks associated storage(s) as not available on this node anymore ".
"or removes them from the configuration (if configured for this node only).",
description =>
"Marks associated storage(s) as not available on this node anymore "
. "or removes them from the configuration (if configured for this node only).",
type => 'boolean',
optional => 1,
default => 0,
@ -251,7 +259,9 @@ __PACKAGE__->register_method ({
my ($scfg) = @_;
return $scfg->{type} eq 'lvm' && $scfg->{vgname} eq $name;
};
eval { PVE::API2::Storage::Config->cleanup_storages_for_node($match, $node); };
eval {
PVE::API2::Storage::Config->cleanup_storages_for_node($match, $node);
};
warn $config_err = $@ if $@;
}
@ -274,6 +284,7 @@ __PACKAGE__->register_method ({
};
return $rpcenv->fork_worker('lvmremove', $name, $user, $worker);
}});
},
});
1;

View File

@ -15,7 +15,7 @@ use PVE::RESTHandler;
use base qw(PVE::RESTHandler);
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'index',
path => '',
method => 'GET',
@ -66,16 +66,18 @@ __PACKAGE__->register_method ({
code => sub {
my ($param) = @_;
return PVE::Storage::LvmThinPlugin::list_thinpools(undef);
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'create',
path => '',
method => 'POST',
proxyto => 'node',
protected => 1,
permissions => {
description => "Requires additionally 'Datastore.Allocate' on /storage when setting 'add_storage'",
description =>
"Requires additionally 'Datastore.Allocate' on /storage when setting 'add_storage'",
check => ['perm', '/', ['Sys.Modify']],
},
description => "Create an LVM thinpool",
@ -125,7 +127,8 @@ __PACKAGE__->register_method ({
# reserve the name and add as disabled, will be enabled below if creation works out
PVE::API2::Storage::Config->create_or_update(
$name, $node, $storage_params, $verify_params, 1);
$name, $node, $storage_params, $verify_params, 1,
);
}
my $worker = sub {
@ -143,45 +146,51 @@ __PACKAGE__->register_method ({
PVE::Storage::LVMPlugin::lvm_create_volume_group($dev, $name);
my $pv = PVE::Storage::LVMPlugin::lvm_pv_info($dev);
# keep some free space just in case
my $datasize = $pv->{size} - 128*1024;
my $datasize = $pv->{size} - 128 * 1024;
# default to 1% for metadata
my $metadatasize = $datasize/100;
my $metadatasize = $datasize / 100;
# but at least 1G, as recommended in lvmthin man
$metadatasize = 1024*1024 if $metadatasize < 1024*1024;
$metadatasize = 1024 * 1024 if $metadatasize < 1024 * 1024;
# but at most 16G, which is the current lvm max
$metadatasize = 16*1024*1024 if $metadatasize > 16*1024*1024;
$metadatasize = 16 * 1024 * 1024 if $metadatasize > 16 * 1024 * 1024;
# shrink data by needed amount for metadata
$datasize -= 2*$metadatasize;
$datasize -= 2 * $metadatasize;
run_command([
'/sbin/lvcreate',
'--type', 'thin-pool',
'--type',
'thin-pool',
"-L${datasize}K",
'--poolmetadatasize', "${metadatasize}K",
'-n', $name,
$name
'--poolmetadatasize',
"${metadatasize}K",
'-n',
$name,
$name,
]);
PVE::Diskmanage::udevadm_trigger($dev);
if ($param->{add_storage}) {
PVE::API2::Storage::Config->create_or_update(
$name, $node, $storage_params, $verify_params);
$name, $node, $storage_params, $verify_params,
);
}
});
};
return $rpcenv->fork_worker('lvmthincreate', $name, $user, $worker);
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'delete',
path => '{name}',
method => 'DELETE',
proxyto => 'node',
protected => 1,
permissions => {
description => "Requires additionally 'Datastore.Allocate' on /storage when setting 'cleanup-config'",
description =>
"Requires additionally 'Datastore.Allocate' on /storage when setting 'cleanup-config'",
check => ['perm', '/', ['Sys.Modify']],
},
description => "Remove an LVM thin pool.",
@ -192,8 +201,9 @@ __PACKAGE__->register_method ({
name => get_standard_option('pve-storage-id'),
'volume-group' => get_standard_option('pve-storage-id'),
'cleanup-config' => {
description => "Marks associated storage(s) as not available on this node anymore ".
"or removes them from the configuration (if configured for this node only).",
description =>
"Marks associated storage(s) as not available on this node anymore "
. "or removes them from the configuration (if configured for this node only).",
type => 'boolean',
optional => 1,
default => 0,
@ -232,11 +242,14 @@ __PACKAGE__->register_method ({
if ($param->{'cleanup-config'}) {
my $match = sub {
my ($scfg) = @_;
return $scfg->{type} eq 'lvmthin'
return
$scfg->{type} eq 'lvmthin'
&& $scfg->{vgname} eq $vg
&& $scfg->{thinpool} eq $lv;
};
eval { PVE::API2::Storage::Config->cleanup_storages_for_node($match, $node); };
eval {
PVE::API2::Storage::Config->cleanup_storages_for_node($match, $node);
};
warn $config_err = $@ if $@;
}
@ -264,6 +277,7 @@ __PACKAGE__->register_method ({
};
return $rpcenv->fork_worker('lvmthinremove', "${vg}-${lv}", $user, $worker);
}});
},
});
1;

View File

@ -19,7 +19,7 @@ my $ZPOOL = '/sbin/zpool';
my $ZFS = '/sbin/zfs';
sub get_pool_data {
die "zfsutils-linux not installed\n" if ! -f $ZPOOL;
die "zfsutils-linux not installed\n" if !-f $ZPOOL;
my $propnames = [qw(name size alloc free frag dedup health)];
my $numbers = {
@ -31,26 +31,29 @@ sub get_pool_data {
};
my $pools = [];
run_command([$ZPOOL, 'list', '-HpPLo', join(',', @$propnames)], outfunc => sub {
run_command(
[$ZPOOL, 'list', '-HpPLo', join(',', @$propnames)],
outfunc => sub {
my ($line) = @_;
my @props = split('\s+', trim($line));
my $pool = {};
for (my $i = 0; $i < scalar(@$propnames); $i++) {
if ($numbers->{$propnames->[$i]}) {
$pool->{$propnames->[$i]} = $props[$i] + 0;
if ($numbers->{ $propnames->[$i] }) {
$pool->{ $propnames->[$i] } = $props[$i] + 0;
} else {
$pool->{$propnames->[$i]} = $props[$i];
$pool->{ $propnames->[$i] } = $props[$i];
}
}
push @$pools, $pool;
});
},
);
return $pools;
}
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'index',
path => '',
method => 'GET',
@ -101,20 +104,21 @@ __PACKAGE__->register_method ({
},
},
},
links => [ { rel => 'child', href => "{name}" } ],
links => [{ rel => 'child', href => "{name}" }],
},
code => sub {
my ($param) = @_;
return get_pool_data();
}});
},
});
sub preparetree {
my ($el) = @_;
delete $el->{lvl};
if ($el->{children} && scalar(@{$el->{children}})) {
if ($el->{children} && scalar(@{ $el->{children} })) {
$el->{leaf} = 0;
foreach my $child (@{$el->{children}}) {
foreach my $child (@{ $el->{children} }) {
preparetree($child);
}
} else {
@ -122,8 +126,7 @@ sub preparetree {
}
}
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'detail',
path => '{name}',
method => 'GET',
@ -172,7 +175,8 @@ __PACKAGE__->register_method ({
},
children => {
type => 'array',
description => "The pool configuration information, including the vdevs for each section (e.g. spares, cache), may be nested.",
description =>
"The pool configuration information, including the vdevs for each section (e.g. spares, cache), may be nested.",
items => {
type => 'object',
properties => {
@ -199,8 +203,8 @@ __PACKAGE__->register_method ({
},
msg => {
type => 'string',
description => 'An optional message about the vdev.'
}
description => 'An optional message about the vdev.',
},
},
},
},
@ -225,7 +229,9 @@ __PACKAGE__->register_method ({
my $stack = [$pool];
my $curlvl = 0;
run_command($cmd, outfunc => sub {
run_command(
$cmd,
outfunc => sub {
my ($line) = @_;
if ($line =~ m/^\s*(\S+): (\S+.*)$/) {
@ -237,8 +243,12 @@ __PACKAGE__->register_method ({
$pool->{$curfield} .= " " . $1;
} elsif (!$config && $line =~ m/^\s*config:/) {
$config = 1;
} elsif ($config && $line =~ m/^(\s+)(\S+)\s*(\S+)?(?:\s+(\S+)\s+(\S+)\s+(\S+))?\s*(.*)$/) {
my ($space, $name, $state, $read, $write, $cksum, $msg) = ($1, $2, $3, $4, $5, $6, $7);
} elsif (
$config
&& $line =~ m/^(\s+)(\S+)\s*(\S+)?(?:\s+(\S+)\s+(\S+)\s+(\S+))?\s*(.*)$/
) {
my ($space, $name, $state, $read, $write, $cksum, $msg) =
($1, $2, $3, $4, $5, $6, $7);
if ($name ne "NAME") {
my $lvl = int(length($space) / 2) + 1; # two spaces per level
my $vdev = {
@ -255,15 +265,15 @@ __PACKAGE__->register_method ({
my $cur = pop @$stack;
if ($lvl > $curlvl) {
$cur->{children} = [ $vdev ];
$cur->{children} = [$vdev];
} elsif ($lvl == $curlvl) {
$cur = pop @$stack;
push @{$cur->{children}}, $vdev;
push @{ $cur->{children} }, $vdev;
} else {
while ($lvl <= $cur->{lvl} && $cur->{lvl} != 0) {
$cur = pop @$stack;
}
push @{$cur->{children}}, $vdev;
push @{ $cur->{children} }, $vdev;
}
push @$stack, $cur;
@ -271,14 +281,16 @@ __PACKAGE__->register_method ({
$curlvl = $lvl;
}
}
});
},
);
# change treenodes for extjs tree
$pool->{name} = delete $pool->{pool};
preparetree($pool);
return $pool;
}});
},
});
my $draid_config_format = {
spares => {
@ -293,14 +305,15 @@ my $draid_config_format = {
},
};
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'create',
path => '',
method => 'POST',
proxyto => 'node',
protected => 1,
permissions => {
description => "Requires additionally 'Datastore.Allocate' on /storage when setting 'add_storage'",
description =>
"Requires additionally 'Datastore.Allocate' on /storage when setting 'add_storage'",
check => ['perm', '/', ['Sys.Modify']],
},
description => "Create a ZFS pool.",
@ -313,13 +326,20 @@ __PACKAGE__->register_method ({
type => 'string',
description => 'The RAID level to use.',
enum => [
'single', 'mirror',
'raid10', 'raidz', 'raidz2', 'raidz3',
'draid', 'draid2', 'draid3',
'single',
'mirror',
'raid10',
'raidz',
'raidz2',
'raidz3',
'draid',
'draid2',
'draid3',
],
},
devices => {
type => 'string', format => 'string-list',
type => 'string',
format => 'string-list',
description => 'The block devices you want to create the zpool on.',
},
'draid-config' => {
@ -366,7 +386,8 @@ __PACKAGE__->register_method ({
my $draid_config;
if (exists $param->{'draid-config'}) {
die "draid-config set without using dRAID level\n" if $raidlevel !~ m/^draid/;
$draid_config = parse_property_string($draid_config_format, $param->{'draid-config'});
$draid_config =
parse_property_string($draid_config_format, $param->{'draid-config'});
}
for my $dev (@$devs) {
@ -388,7 +409,8 @@ __PACKAGE__->register_method ({
# reserve the name and add as disabled, will be enabled below if creation works out
PVE::API2::Storage::Config->create_or_update(
$name, $node, $storage_params, $verify_params, 1);
$name, $node, $storage_params, $verify_params, 1,
);
}
my $pools = get_pool_data();
@ -439,7 +461,10 @@ __PACKAGE__->register_method ({
if ($is_partition) {
eval {
PVE::Diskmanage::change_parttype($dev, '6a898cc3-1dd2-11b2-99a6-080020736631');
PVE::Diskmanage::change_parttype(
$dev,
'6a898cc3-1dd2-11b2-99a6-080020736631',
);
};
warn $@ if $@;
}
@ -462,8 +487,8 @@ __PACKAGE__->register_method ({
my $cmd = [$ZPOOL, 'create', '-o', "ashift=$ashift", $name];
if ($raidlevel eq 'raid10') {
for (my $i = 0; $i < @$devs; $i+=2) {
push @$cmd, 'mirror', $devs->[$i], $devs->[$i+1];
for (my $i = 0; $i < @$devs; $i += 2) {
push @$cmd, 'mirror', $devs->[$i], $devs->[$i + 1];
}
} elsif ($raidlevel eq 'single') {
push @$cmd, $devs->[0];
@ -484,7 +509,8 @@ __PACKAGE__->register_method ({
run_command($cmd);
if (-e '/lib/systemd/system/zfs-import@.service') {
my $importunit = 'zfs-import@'. PVE::Systemd::escape_unit($name, undef) . '.service';
my $importunit =
'zfs-import@' . PVE::Systemd::escape_unit($name, undef) . '.service';
$cmd = ['systemctl', 'enable', $importunit];
print "# ", join(' ', @$cmd), "\n";
run_command($cmd);
@ -494,23 +520,31 @@ __PACKAGE__->register_method ({
if ($param->{add_storage}) {
PVE::API2::Storage::Config->create_or_update(
$name, $node, $storage_params, $verify_params);
$name, $node, $storage_params, $verify_params,
);
}
};
return $rpcenv->fork_worker('zfscreate', $name, $user, sub {
return $rpcenv->fork_worker(
'zfscreate',
$name,
$user,
sub {
PVE::Diskmanage::locked_disk_action($code);
});
}});
},
);
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'delete',
path => '{name}',
method => 'DELETE',
proxyto => 'node',
protected => 1,
permissions => {
description => "Requires additionally 'Datastore.Allocate' on /storage when setting 'cleanup-config'",
description =>
"Requires additionally 'Datastore.Allocate' on /storage when setting 'cleanup-config'",
check => ['perm', '/', ['Sys.Modify']],
},
description => "Destroy a ZFS pool.",
@ -520,8 +554,9 @@ __PACKAGE__->register_method ({
node => get_standard_option('pve-node'),
name => get_standard_option('pve-storage-id'),
'cleanup-config' => {
description => "Marks associated storage(s) as not available on this node anymore ".
"or removes them from the configuration (if configured for this node only).",
description =>
"Marks associated storage(s) as not available on this node anymore "
. "or removes them from the configuration (if configured for this node only).",
type => 'boolean',
optional => 1,
default => 0,
@ -551,7 +586,9 @@ __PACKAGE__->register_method ({
my $to_wipe = [];
if ($param->{'cleanup-disks'}) {
# Using -o name does not only output the name in combination with -v.
run_command(['zpool', 'list', '-vHPL', $name], outfunc => sub {
run_command(
['zpool', 'list', '-vHPL', $name],
outfunc => sub {
my ($line) = @_;
my ($name) = PVE::Tools::split_list($line);
@ -562,7 +599,8 @@ __PACKAGE__->register_method ({
$dev =~ s|^/dev/||;
my $info = PVE::Diskmanage::get_disks($dev, 1, 1);
die "unable to obtain information for disk '$dev'\n" if !$info->{$dev};
die "unable to obtain information for disk '$dev'\n"
if !$info->{$dev};
# Wipe whole disk if usual ZFS layout with partition 9 as ZFS reserved.
my $parent = $info->{$dev}->{parent};
@ -571,15 +609,19 @@ __PACKAGE__->register_method ({
my $info9 = $info->{"${parent}9"};
$wipe = $info->{$dev}->{parent} # need leading /dev/
if $info9 && $info9->{used} && $info9->{used} =~ m/^ZFS reserved/;
if $info9
&& $info9->{used}
&& $info9->{used} =~ m/^ZFS reserved/;
}
push $to_wipe->@*, $wipe;
});
},
);
}
if (-e '/lib/systemd/system/zfs-import@.service') {
my $importunit = 'zfs-import@' . PVE::Systemd::escape_unit($name) . '.service';
my $importunit =
'zfs-import@' . PVE::Systemd::escape_unit($name) . '.service';
run_command(['systemctl', 'disable', $importunit]);
}
@ -591,7 +633,9 @@ __PACKAGE__->register_method ({
my ($scfg) = @_;
return $scfg->{type} eq 'zfspool' && $scfg->{pool} eq $name;
};
eval { PVE::API2::Storage::Config->cleanup_storages_for_node($match, $node); };
eval {
PVE::API2::Storage::Config->cleanup_storages_for_node($match, $node);
};
warn $config_err = $@ if $@;
}
@ -605,6 +649,7 @@ __PACKAGE__->register_method ({
};
return $rpcenv->fork_worker('zfsremove', $name, $user, $worker);
}});
},
});
1;

View File

@ -29,10 +29,12 @@ my $api_storage_config = sub {
my $scfg = dclone(PVE::Storage::storage_config($cfg, $storeid));
$scfg->{storage} = $storeid;
$scfg->{digest} = $cfg->{digest};
$scfg->{content} = PVE::Storage::Plugin->encode_value($scfg->{type}, 'content', $scfg->{content});
$scfg->{content} =
PVE::Storage::Plugin->encode_value($scfg->{type}, 'content', $scfg->{content});
if ($scfg->{nodes}) {
$scfg->{nodes} = PVE::Storage::Plugin->encode_value($scfg->{type}, 'nodes', $scfg->{nodes});
$scfg->{nodes} =
PVE::Storage::Plugin->encode_value($scfg->{type}, 'nodes', $scfg->{nodes});
}
return $scfg;
@ -60,7 +62,7 @@ sub cleanup_storages_for_node {
storage => $storeid,
});
} else {
$self->delete({storage => $storeid});
$self->delete({ storage => $storeid });
}
}
}
@ -91,11 +93,11 @@ sub create_or_update {
for my $key ('type', $verify_params->@*) {
if (!defined($scfg->{$key})) {
die "Option '${key}' is not configured for storage '$sid', "
."expected it to be '$storage_params->{$key}'";
. "expected it to be '$storage_params->{$key}'";
}
if ($storage_params->{$key} ne $scfg->{$key}) {
die "Option '${key}' ($storage_params->{$key}) does not match "
."existing storage configuration '$scfg->{$key}'\n";
. "existing storage configuration '$scfg->{$key}'\n";
}
}
}
@ -116,13 +118,14 @@ sub create_or_update {
}
}
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'index',
path => '',
method => 'GET',
description => "Storage index.",
permissions => {
description => "Only list entries where you have 'Datastore.Audit' or 'Datastore.AllocateSpace' permissions on '/storage/<storage>'",
description =>
"Only list entries where you have 'Datastore.Audit' or 'Datastore.AllocateSpace' permissions on '/storage/<storage>'",
user => 'all',
},
parameters => {
@ -140,9 +143,9 @@ __PACKAGE__->register_method ({
type => 'array',
items => {
type => "object",
properties => { storage => { type => 'string'} },
properties => { storage => { type => 'string' } },
},
links => [ { rel => 'child', href => "{storage}" } ],
links => [{ rel => 'child', href => "{storage}" }],
},
code => sub {
my ($param) = @_;
@ -156,7 +159,7 @@ __PACKAGE__->register_method ({
my $res = [];
foreach my $storeid (@sids) {
my $privs = [ 'Datastore.Audit', 'Datastore.AllocateSpace' ];
my $privs = ['Datastore.Audit', 'Datastore.AllocateSpace'];
next if !$rpcenv->check_any($authuser, "/storage/$storeid", $privs, 1);
my $scfg = &$api_storage_config($cfg, $storeid);
@ -165,9 +168,10 @@ __PACKAGE__->register_method ({
}
return $res;
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'read',
path => '{storage}',
method => 'GET',
@ -188,9 +192,10 @@ __PACKAGE__->register_method ({
my $cfg = PVE::Storage::config();
return &$api_storage_config($cfg, $param->{storage});
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'create',
protected => 1,
path => '',
@ -244,7 +249,8 @@ __PACKAGE__->register_method ({
my $opts = $plugin->check_config($storeid, $param, 1, 1);
my $returned_config;
PVE::Storage::lock_storage_config(sub {
PVE::Storage::lock_storage_config(
sub {
my $cfg = PVE::Storage::config();
if (my $scfg = PVE::Storage::storage_config($cfg, $storeid, 1)) {
@ -256,8 +262,9 @@ __PACKAGE__->register_method ({
$returned_config = $plugin->on_add_hook($storeid, $opts, %$sensitive);
if (defined($opts->{mkdir})) { # TODO: remove complete option in Proxmox VE 9
warn "NOTE: The 'mkdir' option set for '${storeid}' is deprecated and will be removed"
." in Proxmox VE 9. Use 'create-base-path' or 'create-subdirs' instead.\n"
warn
"NOTE: The 'mkdir' option set for '${storeid}' is deprecated and will be removed"
. " in Proxmox VE 9. Use 'create-base-path' or 'create-subdirs' instead.\n";
}
eval {
@ -275,7 +282,9 @@ __PACKAGE__->register_method ({
PVE::Storage::write_config($cfg);
}, "create storage failed");
},
"create storage failed",
);
my $res = {
storage => $storeid,
@ -283,9 +292,10 @@ __PACKAGE__->register_method ({
};
$res->{config} = $returned_config if $returned_config;
return $res;
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'update',
protected => 1,
path => '{storage}',
@ -331,11 +341,12 @@ __PACKAGE__->register_method ({
my $type;
if ($delete) {
$delete = [ PVE::Tools::split_list($delete) ];
$delete = [PVE::Tools::split_list($delete)];
}
my $returned_config;
PVE::Storage::lock_storage_config(sub {
PVE::Storage::lock_storage_config(
sub {
my $cfg = PVE::Storage::config();
PVE::SectionConfig::assert_if_modified($cfg, $digest);
@ -369,13 +380,16 @@ __PACKAGE__->register_method ({
}
if (defined($scfg->{mkdir})) { # TODO: remove complete option in Proxmox VE 9
warn "NOTE: The 'mkdir' option set for '${storeid}' is deprecated and will be removed"
." in Proxmox VE 9. Use 'create-base-path' or 'create-subdirs' instead.\n"
warn
"NOTE: The 'mkdir' option set for '${storeid}' is deprecated and will be removed"
. " in Proxmox VE 9. Use 'create-base-path' or 'create-subdirs' instead.\n";
}
PVE::Storage::write_config($cfg);
}, "update storage failed");
},
"update storage failed",
);
my $res = {
storage => $storeid,
@ -383,9 +397,10 @@ __PACKAGE__->register_method ({
};
$res->{config} = $returned_config if $returned_config;
return $res;
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'delete',
protected => 1,
path => '{storage}', # /storage/config/{storage}
@ -397,9 +412,12 @@ __PACKAGE__->register_method ({
parameters => {
additionalProperties => 0,
properties => {
storage => get_standard_option('pve-storage-id', {
storage => get_standard_option(
'pve-storage-id',
{
completion => \&PVE::Storage::complete_storage,
}),
},
),
},
},
returns => { type => 'null' },
@ -408,7 +426,8 @@ __PACKAGE__->register_method ({
my $storeid = extract_param($param, 'storage');
PVE::Storage::lock_storage_config(sub {
PVE::Storage::lock_storage_config(
sub {
my $cfg = PVE::Storage::config();
my $scfg = PVE::Storage::storage_config($cfg, $storeid);
@ -424,11 +443,14 @@ __PACKAGE__->register_method ({
PVE::Storage::write_config($cfg);
}, "delete storage failed");
},
"delete storage failed",
);
PVE::AccessControl::remove_storage_access($storeid);
return undef;
}});
},
});
1;

View File

@ -16,13 +16,18 @@ use PVE::SSHInfo;
use base qw(PVE::RESTHandler);
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'index',
path => '',
method => 'GET',
description => "List storage content.",
permissions => {
check => ['perm', '/storage/{storage}', ['Datastore.Audit', 'Datastore.AllocateSpace'], any => 1],
check => [
'perm',
'/storage/{storage}',
['Datastore.Audit', 'Datastore.AllocateSpace'],
any => 1,
],
},
protected => 1,
proxyto => 'node',
@ -30,20 +35,27 @@ __PACKAGE__->register_method ({
additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
storage => get_standard_option('pve-storage-id', {
storage => get_standard_option(
'pve-storage-id',
{
completion => \&PVE::Storage::complete_storage_enabled,
}),
},
),
content => {
description => "Only list content of this type.",
type => 'string', format => 'pve-storage-content',
type => 'string',
format => 'pve-storage-content',
optional => 1,
completion => \&PVE::Storage::complete_content_type,
},
vmid => get_standard_option('pve-vmid', {
vmid => get_standard_option(
'pve-vmid',
{
description => "Only list images for this VM",
optional => 1,
completion => \&PVE::Cluster::complete_vmid,
}),
},
),
},
},
returns => {
@ -66,7 +78,8 @@ __PACKAGE__->register_method ({
optional => 1,
},
'format' => {
description => "Format identifier ('raw', 'qcow2', 'subvol', 'iso', 'tgz' ...)",
description =>
"Format identifier ('raw', 'qcow2', 'subvol', 'iso', 'tgz' ...)",
type => 'string',
},
size => {
@ -75,8 +88,8 @@ __PACKAGE__->register_method ({
renderer => 'bytes',
},
used => {
description => "Used space. Please note that most storage plugins " .
"do not report anything useful here.",
description => "Used space. Please note that most storage plugins "
. "do not report anything useful here.",
type => 'integer',
renderer => 'bytes',
optional => 1,
@ -88,18 +101,21 @@ __PACKAGE__->register_method ({
optional => 1,
},
notes => {
description => "Optional notes. If they contain multiple lines, only the first one is returned here.",
description =>
"Optional notes. If they contain multiple lines, only the first one is returned here.",
type => 'string',
optional => 1,
},
encrypted => {
description => "If whole backup is encrypted, value is the fingerprint or '1' "
." if encrypted. Only useful for the Proxmox Backup Server storage type.",
description =>
"If whole backup is encrypted, value is the fingerprint or '1' "
. " if encrypted. Only useful for the Proxmox Backup Server storage type.",
type => 'string',
optional => 1,
},
verification => {
description => "Last backup verification result, only useful for PBS storages.",
description =>
"Last backup verification result, only useful for PBS storages.",
type => 'object',
properties => {
state => {
@ -120,7 +136,7 @@ __PACKAGE__->register_method ({
},
},
},
links => [ { rel => 'child', href => "{volid}" } ],
links => [{ rel => 'child', href => "{volid}" }],
},
code => sub {
my ($param) = @_;
@ -133,11 +149,16 @@ __PACKAGE__->register_method ({
my $cfg = PVE::Storage::config();
my $vollist = PVE::Storage::volume_list($cfg, $storeid, $param->{vmid}, $param->{content});
my $vollist =
PVE::Storage::volume_list($cfg, $storeid, $param->{vmid}, $param->{content});
my $res = [];
foreach my $item (@$vollist) {
eval { PVE::Storage::check_volume_access($rpcenv, $authuser, $cfg, undef, $item->{volid}); };
eval {
PVE::Storage::check_volume_access(
$rpcenv, $authuser, $cfg, undef, $item->{volid},
);
};
next if $@;
$item->{vmid} = int($item->{vmid}) if defined($item->{vmid});
$item->{size} = int($item->{size}) if defined($item->{size});
@ -146,9 +167,10 @@ __PACKAGE__->register_method ({
}
return $res;
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'create',
path => '',
method => 'POST',
@ -162,26 +184,36 @@ __PACKAGE__->register_method ({
additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
storage => get_standard_option('pve-storage-id', {
storage => get_standard_option(
'pve-storage-id',
{
completion => \&PVE::Storage::complete_storage_enabled,
}),
},
),
filename => {
description => "The name of the file to create.",
type => 'string',
},
vmid => get_standard_option('pve-vmid', {
vmid => get_standard_option(
'pve-vmid',
{
description => "Specify owner VM",
completion => \&PVE::Cluster::complete_vmid,
}),
},
),
size => {
description => "Size in kilobyte (1024 bytes). Optional suffixes 'M' (megabyte, 1024K) and 'G' (gigabyte, 1024M)",
description =>
"Size in kilobyte (1024 bytes). Optional suffixes 'M' (megabyte, 1024K) and 'G' (gigabyte, 1024M)",
type => 'string',
pattern => '\d+[MG]?',
},
format => get_standard_option('pve-storage-image-format', {
format => get_standard_option(
'pve-storage-image-format',
{
requires => 'size',
optional => 1,
}),
},
),
},
},
returns => {
@ -210,7 +242,8 @@ __PACKAGE__->register_method ({
if ($name =~ m/\.(raw|qcow2|vmdk)$/) {
my $fmt = $1;
raise_param_exc({ format => "different storage formats ($param->{format} != $fmt)" })
raise_param_exc({
format => "different storage formats ($param->{format} != $fmt)" })
if $param->{format} && $param->{format} ne $fmt;
$param->{format} = $fmt;
@ -218,12 +251,13 @@ __PACKAGE__->register_method ({
my $cfg = PVE::Storage::config();
my $volid = PVE::Storage::vdisk_alloc ($cfg, $storeid, $param->{vmid},
$param->{format},
$name, $size);
my $volid = PVE::Storage::vdisk_alloc(
$cfg, $storeid, $param->{vmid}, $param->{format}, $name, $size,
);
return $volid;
}});
},
});
# we allow to pass volume names (without storage prefix) if the storage
# is specified as separate parameter.
@ -234,7 +268,7 @@ my $real_volume_id = sub {
if ($volume =~ m/:/) {
eval {
my ($sid, $volname) = PVE::Storage::parse_volume_id ($volume);
my ($sid, $volname) = PVE::Storage::parse_volume_id($volume);
die "storage ID mismatch ($sid != $storeid)\n"
if $storeid && $sid ne $storeid;
$volid = $volume;
@ -252,7 +286,7 @@ my $real_volume_id = sub {
return wantarray ? ($volid, $storeid) : $volid;
};
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'info',
path => '{volume}',
method => 'GET',
@ -287,8 +321,8 @@ __PACKAGE__->register_method ({
renderer => 'bytes',
},
used => {
description => "Used space. Please note that most storage plugins " .
"do not report anything useful here.",
description => "Used space. Please note that most storage plugins "
. "do not report anything useful here.",
type => 'integer',
renderer => 'bytes',
},
@ -343,9 +377,10 @@ __PACKAGE__->register_method ({
}
return $entry;
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'updateattributes',
path => '{volume}',
method => 'PUT',
@ -397,15 +432,17 @@ __PACKAGE__->register_method ({
}
return undef;
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'delete',
path => '{volume}',
method => 'DELETE',
description => "Delete volume",
permissions => {
description => "You need 'Datastore.Allocate' privilege on the storage (or 'Datastore.AllocateSpace' for backup volumes if you have VM.Backup privilege on the VM).",
description =>
"You need 'Datastore.Allocate' privilege on the storage (or 'Datastore.AllocateSpace' for backup volumes if you have VM.Backup privilege on the VM).",
user => 'all',
},
protected => 1,
@ -414,10 +451,13 @@ __PACKAGE__->register_method ({
additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
storage => get_standard_option('pve-storage-id', {
storage => get_standard_option(
'pve-storage-id',
{
optional => 1,
completion => \&PVE::Storage::complete_storage,
}),
},
),
volume => {
description => "Volume identifier",
type => 'string',
@ -425,14 +465,15 @@ __PACKAGE__->register_method ({
},
delay => {
type => 'integer',
description => "Time to wait for the task to finish. We return 'null' if the task finish within that time.",
description =>
"Time to wait for the task to finish. We return 'null' if the task finish within that time.",
minimum => 1,
maximum => 30,
optional => 1,
},
},
},
returns => { type => 'string', optional => 1, },
returns => { type => 'string', optional => 1 },
code => sub {
my ($param) = @_;
@ -452,10 +493,12 @@ __PACKAGE__->register_method ({
}
my $worker = sub {
PVE::Storage::vdisk_free ($cfg, $volid);
PVE::Storage::vdisk_free($cfg, $volid);
print "Removed volume '$volid'\n";
if ($vtype eq 'backup'
&& $path =~ /(.*\/vzdump-\w+-\d+-\d{4}_\d{2}_\d{2}-\d{2}_\d{2}_\d{2})[^\/]+$/) {
if (
$vtype eq 'backup'
&& $path =~ /(.*\/vzdump-\w+-\d+-\d{4}_\d{2}_\d{2}-\d{2}_\d{2}_\d{2})[^\/]+$/
) {
# Remove log file #318 and notes file #3972 if they still exist
PVE::Storage::archive_auxiliaries_remove($path);
}
@ -469,7 +512,8 @@ __PACKAGE__->register_method ({
my $currently_deleting; # not necessarily true, e.g. sequential api call from cli
do {
my $task = PVE::Tools::upid_decode($upid);
$currently_deleting = PVE::ProcFSTools::check_process_running($task->{pid}, $task->{pstart});
$currently_deleting =
PVE::ProcFSTools::check_process_running($task->{pid}, $task->{pstart});
sleep 1 if $currently_deleting;
} while (time() < $end_time && $currently_deleting);
@ -481,9 +525,10 @@ __PACKAGE__->register_method ({
}
}
return $upid;
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'copy',
path => '{volume}',
method => 'POST',
@ -494,7 +539,7 @@ __PACKAGE__->register_method ({
additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
storage => get_standard_option('pve-storage-id', { optional => 1}),
storage => get_standard_option('pve-storage-id', { optional => 1 }),
volume => {
description => "Source volume identifier",
type => 'string',
@ -503,10 +548,13 @@ __PACKAGE__->register_method ({
description => "Target volume identifier",
type => 'string',
},
target_node => get_standard_option('pve-node', {
target_node => get_standard_option(
'pve-node',
{
description => "Target node. Default is local node.",
optional => 1,
}),
},
),
},
},
returns => {
@ -548,13 +596,20 @@ __PACKAGE__->register_method ({
# you need to get this working (fails currently, because storage_migrate() uses
# ssh to connect to local host (which is not needed
my $sshinfo = PVE::SSHInfo::get_ssh_info($target_node);
PVE::Storage::storage_migrate($cfg, $src_volid, $sshinfo, $target_sid, {'target_volname' => $target_volname});
PVE::Storage::storage_migrate(
$cfg,
$src_volid,
$sshinfo,
$target_sid,
{ 'target_volname' => $target_volname },
);
print "DEBUG: end worker $upid\n";
};
return $rpcenv->fork_worker('imgcopy', undef, $user, $worker);
}});
},
});
1;

View File

@ -33,7 +33,7 @@ my $parse_volname_or_id = sub {
return $volid;
};
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'list',
path => 'list',
method => 'GET',
@ -47,11 +47,15 @@ __PACKAGE__->register_method ({
additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
storage => get_standard_option('pve-storage-id', {
storage => get_standard_option(
'pve-storage-id',
{
completion => \&PVE::Storage::complete_storage_enabled,
}),
},
),
volume => {
description => "Backup volume ID or name. Currently only PBS snapshots are supported.",
description =>
"Backup volume ID or name. Currently only PBS snapshots are supported.",
type => 'string',
completion => \&PVE::Storage::complete_volume,
},
@ -113,7 +117,7 @@ __PACKAGE__->register_method ({
PVE::Storage::check_volume_access($rpcenv, $user, $cfg, undef, $volid, 'backup');
raise_param_exc({'storage' => "Only PBS storages supported for file-restore."})
raise_param_exc({ 'storage' => "Only PBS storages supported for file-restore." })
if $scfg->{type} ne 'pbs';
my (undef, $snap) = PVE::Storage::parse_volname($cfg, $volid);
@ -139,9 +143,10 @@ __PACKAGE__->register_method ({
}
die "invalid proxmox-file-restore output";
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'download',
path => 'download',
method => 'GET',
@ -156,11 +161,15 @@ __PACKAGE__->register_method ({
additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
storage => get_standard_option('pve-storage-id', {
storage => get_standard_option(
'pve-storage-id',
{
completion => \&PVE::Storage::complete_storage_enabled,
}),
},
),
volume => {
description => "Backup volume ID or name. Currently only PBS snapshots are supported.",
description =>
"Backup volume ID or name. Currently only PBS snapshots are supported.",
type => 'string',
completion => \&PVE::Storage::complete_volume,
},
@ -196,7 +205,7 @@ __PACKAGE__->register_method ({
PVE::Storage::check_volume_access($rpcenv, $user, $cfg, undef, $volid, 'backup');
raise_param_exc({'storage' => "Only PBS storages supported for file-restore."})
raise_param_exc({ 'storage' => "Only PBS storages supported for file-restore." })
if $scfg->{type} ne 'pbs';
my (undef, $snap) = PVE::Storage::parse_volname($cfg, $volid);
@ -204,11 +213,16 @@ __PACKAGE__->register_method ({
my $client = PVE::PBSClient->new($scfg, $storeid);
my $fifo = $client->file_restore_extract_prepare();
$rpcenv->fork_worker('pbs-download', undef, $user, sub {
$rpcenv->fork_worker(
'pbs-download',
undef,
$user,
sub {
my $name = decode_base64($path);
print "Starting download of file: $name\n";
$client->file_restore_extract($fifo, $snap, $path, 1, $tar);
});
},
);
my $ret = {
download => {
@ -218,6 +232,7 @@ __PACKAGE__->register_method ({
},
};
return $ret;
}});
},
});
1;

View File

@ -12,14 +12,20 @@ use PVE::Tools qw(extract_param);
use base qw(PVE::RESTHandler);
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'dryrun',
path => '',
method => 'GET',
description => "Get prune information for backups. NOTE: this is only a preview and might not be " .
"what a subsequent prune call does if backups are removed/added in the meantime.",
description =>
"Get prune information for backups. NOTE: this is only a preview and might not be "
. "what a subsequent prune call does if backups are removed/added in the meantime.",
permissions => {
check => ['perm', '/storage/{storage}', ['Datastore.Audit', 'Datastore.AllocateSpace'], any => 1],
check => [
'perm',
'/storage/{storage}',
['Datastore.Audit', 'Datastore.AllocateSpace'],
any => 1,
],
},
protected => 1,
proxyto => 'node',
@ -27,24 +33,35 @@ __PACKAGE__->register_method ({
additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
storage => get_standard_option('pve-storage-id', {
storage => get_standard_option(
'pve-storage-id',
{
completion => \&PVE::Storage::complete_storage_enabled,
}),
'prune-backups' => get_standard_option('prune-backups', {
description => "Use these retention options instead of those from the storage configuration.",
},
),
'prune-backups' => get_standard_option(
'prune-backups',
{
description =>
"Use these retention options instead of those from the storage configuration.",
optional => 1,
}),
},
),
type => {
description => "Either 'qemu' or 'lxc'. Only consider backups for guests of this type.",
description =>
"Either 'qemu' or 'lxc'. Only consider backups for guests of this type.",
type => 'string',
optional => 1,
enum => ['qemu', 'lxc'],
},
vmid => get_standard_option('pve-vmid', {
vmid => get_standard_option(
'pve-vmid',
{
description => "Only consider backups for this guest.",
optional => 1,
completion => \&PVE::Cluster::complete_vmid,
}),
},
),
},
},
returns => {
@ -57,12 +74,14 @@ __PACKAGE__->register_method ({
type => 'string',
},
'ctime' => {
description => "Creation time of the backup (seconds since the UNIX epoch).",
description =>
"Creation time of the backup (seconds since the UNIX epoch).",
type => 'integer',
},
'mark' => {
description => "Whether the backup would be kept or removed. Backups that are" .
" protected or don't use the standard naming scheme are not removed.",
description =>
"Whether the backup would be kept or removed. Backups that are"
. " protected or don't use the standard naming scheme are not removed.",
type => 'string',
enum => ['keep', 'remove', 'protected', 'renamed'],
},
@ -92,16 +111,17 @@ __PACKAGE__->register_method ({
if defined($prune_backups);
return PVE::Storage::prune_backups($cfg, $storeid, $prune_backups, $vmid, $type, 1);
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'delete',
path => '',
method => 'DELETE',
description => "Prune backups. Only those using the standard naming scheme are considered.",
permissions => {
description => "You need the 'Datastore.Allocate' privilege on the storage " .
"(or if a VM ID is specified, 'Datastore.AllocateSpace' and 'VM.Backup' for the VM).",
description => "You need the 'Datastore.Allocate' privilege on the storage "
. "(or if a VM ID is specified, 'Datastore.AllocateSpace' and 'VM.Backup' for the VM).",
user => 'all',
},
protected => 1,
@ -110,23 +130,34 @@ __PACKAGE__->register_method ({
additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
storage => get_standard_option('pve-storage-id', {
storage => get_standard_option(
'pve-storage-id',
{
completion => \&PVE::Storage::complete_storage,
}),
'prune-backups' => get_standard_option('prune-backups', {
description => "Use these retention options instead of those from the storage configuration.",
}),
},
),
'prune-backups' => get_standard_option(
'prune-backups',
{
description =>
"Use these retention options instead of those from the storage configuration.",
},
),
type => {
description => "Either 'qemu' or 'lxc'. Only consider backups for guests of this type.",
description =>
"Either 'qemu' or 'lxc'. Only consider backups for guests of this type.",
type => 'string',
optional => 1,
enum => ['qemu', 'lxc'],
},
vmid => get_standard_option('pve-vmid', {
vmid => get_standard_option(
'pve-vmid',
{
description => "Only prune backups for this VM.",
completion => \&PVE::Cluster::complete_vmid,
optional => 1,
}),
},
),
},
},
returns => { type => 'string' },
@ -159,6 +190,7 @@ __PACKAGE__->register_method ({
};
return $rpcenv->fork_worker('prunebackups', $id, $authuser, $worker);
}});
},
});
1;

View File

@ -33,17 +33,16 @@ __PACKAGE__->register_method({
items => {
type => "object",
properties => {
method => { type => 'string'},
method => { type => 'string' },
},
},
links => [ { rel => 'child', href => "{method}" } ],
links => [{ rel => 'child', href => "{method}" }],
},
code => sub {
my ($param) = @_;
my $res = [
{ method => 'cifs' },
{ method => 'glusterfs' },
{ method => 'iscsi' },
{ method => 'lvm' },
{ method => 'nfs' },
@ -52,7 +51,8 @@ __PACKAGE__->register_method({
];
return $res;
}});
},
});
__PACKAGE__->register_method({
name => 'nfsscan',
@ -70,7 +70,8 @@ __PACKAGE__->register_method({
node => get_standard_option('pve-node'),
server => {
description => "The server address (name or IP).",
type => 'string', format => 'pve-storage-server',
type => 'string',
format => 'pve-storage-server',
},
},
},
@ -101,7 +102,8 @@ __PACKAGE__->register_method({
push @$data, { path => $k, options => $res->{$k} };
}
return $data;
}});
},
});
__PACKAGE__->register_method({
name => 'cifsscan',
@ -119,7 +121,8 @@ __PACKAGE__->register_method({
node => get_standard_option('pve-node'),
server => {
description => "The server address (name or IP).",
type => 'string', format => 'pve-storage-server',
type => 'string',
format => 'pve-storage-server',
},
username => {
description => "User name.",
@ -172,7 +175,8 @@ __PACKAGE__->register_method({
}
return $data;
}});
},
});
__PACKAGE__->register_method({
name => 'pbsscan',
@ -190,7 +194,8 @@ __PACKAGE__->register_method({
node => get_standard_option('pve-node'),
server => {
description => "The server address (name or IP).",
type => 'string', format => 'pve-storage-server',
type => 'string',
format => 'pve-storage-server',
},
username => {
description => "User-name or API token-ID.",
@ -236,59 +241,9 @@ __PACKAGE__->register_method({
my $password = delete $param->{password};
return PVE::Storage::PBSPlugin::scan_datastores($param, $password);
}
},
});
# Note: GlusterFS currently does not have an equivalent of showmount.
# As workaround, we simply use nfs showmount.
# see http://www.gluster.org/category/volumes/
__PACKAGE__->register_method({
name => 'glusterfsscan',
path => 'glusterfs',
method => 'GET',
description => "Scan remote GlusterFS server.",
protected => 1,
proxyto => "node",
permissions => {
check => ['perm', '/storage', ['Datastore.Allocate']],
},
parameters => {
additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
server => {
description => "The server address (name or IP).",
type => 'string', format => 'pve-storage-server',
},
},
},
returns => {
type => 'array',
items => {
type => "object",
properties => {
volname => {
description => "The volume name.",
type => 'string',
},
},
},
},
code => sub {
my ($param) = @_;
my $server = $param->{server};
my $res = PVE::Storage::scan_nfs($server);
my $data = [];
foreach my $path (sort keys %$res) {
if ($path =~ m!^/([^\s/]+)$!) {
push @$data, { volname => $1 };
}
}
return $data;
}});
__PACKAGE__->register_method({
name => 'iscsiscan',
path => 'iscsi',
@ -305,7 +260,8 @@ __PACKAGE__->register_method({
node => get_standard_option('pve-node'),
portal => {
description => "The iSCSI portal (IP or DNS name with optional port).",
type => 'string', format => 'pve-storage-portal-dns',
type => 'string',
format => 'pve-storage-portal-dns',
},
},
},
@ -332,11 +288,12 @@ __PACKAGE__->register_method({
my $data = [];
foreach my $k (sort keys %$res) {
push @$data, { target => $k, portal => join(',', @{$res->{$k}}) };
push @$data, { target => $k, portal => join(',', @{ $res->{$k} }) };
}
return $data;
}});
},
});
__PACKAGE__->register_method({
name => 'lvmscan',
@ -371,7 +328,8 @@ __PACKAGE__->register_method({
my $res = PVE::Storage::LVMPlugin::lvm_vgs();
return PVE::RESTHandler::hash_to_array($res, 'vg');
}});
},
});
__PACKAGE__->register_method({
name => 'lvmthinscan',
@ -410,7 +368,8 @@ __PACKAGE__->register_method({
my ($param) = @_;
return PVE::Storage::LvmThinPlugin::list_thinpools($param->{vg});
}});
},
});
__PACKAGE__->register_method({
name => 'zfsscan',
@ -444,6 +403,7 @@ __PACKAGE__->register_method({
my ($param) = @_;
return PVE::Storage::scan_zfs();
}});
},
});
1;

View File

@ -23,12 +23,12 @@ use PVE::Storage;
use base qw(PVE::RESTHandler);
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
subclass => "PVE::API2::Storage::PruneBackups",
path => '{storage}/prunebackups',
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
subclass => "PVE::API2::Storage::Content",
# set fragment delimiter (no subdirs) - we need that, because volume
# IDs may contain a slash '/'
@ -36,7 +36,7 @@ __PACKAGE__->register_method ({
path => '{storage}/content',
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
subclass => "PVE::API2::Storage::FileRestore",
path => '{storage}/file-restore',
});
@ -46,26 +46,30 @@ my sub assert_ova_contents {
# test if it's really a tar file with an ovf file inside
my $hasOvf = 0;
run_command(['tar', '-t', '-f', $file], outfunc => sub {
run_command(
['tar', '-t', '-f', $file],
outfunc => sub {
my ($line) = @_;
if ($line =~ m/\.ovf$/) {
$hasOvf = 1;
}
});
},
);
die "ova archive has no .ovf file inside\n" if !$hasOvf;
return 1;
}
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'index',
path => '',
method => 'GET',
description => "Get status for all datastores.",
permissions => {
description => "Only list entries where you have 'Datastore.Audit' or 'Datastore.AllocateSpace' permissions on '/storage/<storage>'",
description =>
"Only list entries where you have 'Datastore.Audit' or 'Datastore.AllocateSpace' permissions on '/storage/<storage>'",
user => 'all',
},
protected => 1,
@ -74,14 +78,18 @@ __PACKAGE__->register_method ({
additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
storage => get_standard_option('pve-storage-id', {
storage => get_standard_option(
'pve-storage-id',
{
description => "Only list status for specified storage",
optional => 1,
completion => \&PVE::Storage::complete_storage_enabled,
}),
},
),
content => {
description => "Only list stores which support this content type.",
type => 'string', format => 'pve-storage-content-list',
type => 'string',
format => 'pve-storage-content-list',
optional => 1,
completion => \&PVE::Storage::complete_content_type,
},
@ -91,12 +99,16 @@ __PACKAGE__->register_method ({
optional => 1,
default => 0,
},
target => get_standard_option('pve-node', {
description => "If target is different to 'node', we only lists shared storages which " .
"content is accessible on this 'node' and the specified 'target' node.",
target => get_standard_option(
'pve-node',
{
description =>
"If target is different to 'node', we only lists shared storages which "
. "content is accessible on this 'node' and the specified 'target' node.",
optional => 1,
completion => \&PVE::Cluster::get_nodelist,
}),
},
),
'format' => {
description => "Include information about formats",
type => 'boolean',
@ -117,7 +129,8 @@ __PACKAGE__->register_method ({
},
content => {
description => "Allowed storage content types.",
type => 'string', format => 'pve-storage-content-list',
type => 'string',
format => 'pve-storage-content-list',
},
enabled => {
description => "Set when storage is enabled (not disabled).",
@ -160,7 +173,7 @@ __PACKAGE__->register_method ({
},
},
},
links => [ { rel => 'child', href => "{storage}" } ],
links => [{ rel => 'child', href => "{storage}" }],
},
code => sub {
my ($param) = @_;
@ -179,14 +192,14 @@ __PACKAGE__->register_method ({
my $info = PVE::Storage::storage_info($cfg, $param->{content}, $param->{format});
raise_param_exc({ storage => "No such storage." })
if $param->{storage} && !defined($info->{$param->{storage}});
if $param->{storage} && !defined($info->{ $param->{storage} });
my $res = {};
my @sids = PVE::Storage::storage_ids($cfg);
foreach my $storeid (@sids) {
my $data = $info->{$storeid};
next if !$data;
my $privs = [ 'Datastore.Audit', 'Datastore.AllocateSpace' ];
my $privs = ['Datastore.Audit', 'Datastore.AllocateSpace'];
next if !$rpcenv->check_any($authuser, "/storage/$storeid", $privs, 1);
next if $param->{storage} && $param->{storage} ne $storeid;
@ -211,15 +224,21 @@ __PACKAGE__->register_method ({
}
return PVE::RESTHandler::hash_to_array($res, 'storage');
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'diridx',
path => '{storage}',
method => 'GET',
description => "",
permissions => {
check => ['perm', '/storage/{storage}', ['Datastore.Audit', 'Datastore.AllocateSpace'], any => 1],
check => [
'perm',
'/storage/{storage}',
['Datastore.Audit', 'Datastore.AllocateSpace'],
any => 1,
],
},
parameters => {
additionalProperties => 0,
@ -236,7 +255,7 @@ __PACKAGE__->register_method ({
subdir => { type => 'string' },
},
},
links => [ { rel => 'child', href => "{subdir}" } ],
links => [{ rel => 'child', href => "{subdir}" }],
},
code => sub {
my ($param) = @_;
@ -254,15 +273,21 @@ __PACKAGE__->register_method ({
];
return $res;
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'read_status',
path => '{storage}/status',
method => 'GET',
description => "Read storage status.",
permissions => {
check => ['perm', '/storage/{storage}', ['Datastore.Audit', 'Datastore.AllocateSpace'], any => 1],
check => [
'perm',
'/storage/{storage}',
['Datastore.Audit', 'Datastore.AllocateSpace'],
any => 1,
],
},
protected => 1,
proxyto => 'node',
@ -284,21 +309,27 @@ __PACKAGE__->register_method ({
my $info = PVE::Storage::storage_info($cfg, $param->{content});
my $data = $info->{$param->{storage}};
my $data = $info->{ $param->{storage} };
raise_param_exc({ storage => "No such storage." })
if !defined($data);
return $data;
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'rrd',
path => '{storage}/rrd',
method => 'GET',
description => "Read storage RRD statistics (returns PNG).",
permissions => {
check => ['perm', '/storage/{storage}', ['Datastore.Audit', 'Datastore.AllocateSpace'], any => 1],
check => [
'perm',
'/storage/{storage}',
['Datastore.Audit', 'Datastore.AllocateSpace'],
any => 1,
],
},
protected => 1,
proxyto => 'node',
@ -310,16 +341,17 @@ __PACKAGE__->register_method ({
timeframe => {
description => "Specify the time frame you are interested in.",
type => 'string',
enum => [ 'hour', 'day', 'week', 'month', 'year' ],
enum => ['hour', 'day', 'week', 'month', 'year'],
},
ds => {
description => "The list of datasources you want to display.",
type => 'string', format => 'pve-configid-list',
type => 'string',
format => 'pve-configid-list',
},
cf => {
description => "The RRD consolidation function",
type => 'string',
enum => [ 'AVERAGE', 'MAX' ],
enum => ['AVERAGE', 'MAX'],
optional => 1,
},
},
@ -335,16 +367,23 @@ __PACKAGE__->register_method ({
return PVE::RRD::create_rrd_graph(
"pve2-storage/$param->{node}/$param->{storage}",
$param->{timeframe}, $param->{ds}, $param->{cf});
}});
$param->{timeframe}, $param->{ds}, $param->{cf},
);
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'rrddata',
path => '{storage}/rrddata',
method => 'GET',
description => "Read storage RRD statistics.",
permissions => {
check => ['perm', '/storage/{storage}', ['Datastore.Audit', 'Datastore.AllocateSpace'], any => 1],
check => [
'perm',
'/storage/{storage}',
['Datastore.Audit', 'Datastore.AllocateSpace'],
any => 1,
],
},
protected => 1,
proxyto => 'node',
@ -356,12 +395,12 @@ __PACKAGE__->register_method ({
timeframe => {
description => "Specify the time frame you are interested in.",
type => 'string',
enum => [ 'hour', 'day', 'week', 'month', 'year' ],
enum => ['hour', 'day', 'week', 'month', 'year'],
},
cf => {
description => "The RRD consolidation function",
type => 'string',
enum => [ 'AVERAGE', 'MAX' ],
enum => ['AVERAGE', 'MAX'],
optional => 1,
},
},
@ -376,14 +415,16 @@ __PACKAGE__->register_method ({
code => sub {
my ($param) = @_;
return PVE::RRD::create_rrd_data(
"pve2-storage/$param->{node}/$param->{storage}",
$param->{timeframe}, $param->{cf});
}});
my $path = "pve-storage-9.0/$param->{node}/$param->{storage}";
$path = "pve2-storage/$param->{node}/$param->{storage}"
if !-e "/var/lib/rrdcached/db/${path}";
return PVE::RRD::create_rrd_data($path, $param->{timeframe}, $param->{cf});
},
});
# makes no sense for big images and backup files (because it
# create a copy of the file).
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'upload',
path => '{storage}/upload',
method => 'POST',
@ -399,11 +440,13 @@ __PACKAGE__->register_method ({
storage => get_standard_option('pve-storage-id'),
content => {
description => "Content type.",
type => 'string', format => 'pve-storage-content',
type => 'string',
format => 'pve-storage-content',
enum => ['iso', 'vztmpl', 'import'],
},
filename => {
description => "The name of the file to create. Caution: This will be normalized!",
description =>
"The name of the file to create. Caution: This will be normalized!",
maxLength => 255,
type => 'string',
},
@ -421,7 +464,8 @@ __PACKAGE__->register_method ({
optional => 1,
},
tmpfilename => {
description => "The source file name. This parameter is usually set by the REST handler. You can only overwrite it when connecting to the trusted port on localhost.",
description =>
"The source file name. This parameter is usually set by the REST handler. You can only overwrite it when connecting to the trusted port on localhost.",
type => 'string',
optional => 1,
pattern => '/var/tmp/pveupload-[0-9a-f]+',
@ -469,7 +513,9 @@ __PACKAGE__->register_method ({
}
$path = PVE::Storage::get_vztmpl_dir($cfg, $storage);
} elsif ($content eq 'import') {
if ($filename !~ m!${PVE::Storage::SAFE_CHAR_CLASS_RE}+$PVE::Storage::UPLOAD_IMPORT_EXT_RE_1$!) {
if ($filename !~
m!${PVE::Storage::SAFE_CHAR_CLASS_RE}+$PVE::Storage::UPLOAD_IMPORT_EXT_RE_1$!
) {
raise_param_exc({ filename => "invalid filename or wrong extension" });
}
my $format = $1;
@ -500,7 +546,8 @@ __PACKAGE__->register_method ({
if ($node ne 'localhost' && $node ne PVE::INotify::nodename()) {
my $remip = PVE::Cluster::remote_node_ip($node);
my $ssh_options = PVE::SSHInfo::ssh_info_to_ssh_opts({ ip => $remip, name => $node });
my $ssh_options =
PVE::SSHInfo::ssh_info_to_ssh_opts({ ip => $remip, name => $node });
my @remcmd = ('/usr/bin/ssh', $ssh_options->@*, $remip, '--');
@ -514,7 +561,14 @@ __PACKAGE__->register_method ({
errmsg => "mkdir failed",
);
$cmd = ['/usr/bin/scp', $ssh_options->@*, '-p', '--', $tmpfilename, "[$remip]:" . PVE::Tools::shell_quote($dest)];
$cmd = [
'/usr/bin/scp',
$ssh_options->@*,
'-p',
'--',
$tmpfilename,
"[$remip]:" . PVE::Tools::shell_quote($dest),
];
$err_cleanup = sub { run_command([@remcmd, 'rm', '-f', '--', $dest]) };
} else {
@ -530,11 +584,13 @@ __PACKAGE__->register_method ({
print "starting file import from: $tmpfilename\n";
eval {
my ($checksum, $checksum_algorithm) = $param->@{'checksum', 'checksum-algorithm'};
my ($checksum, $checksum_algorithm) =
$param->@{ 'checksum', 'checksum-algorithm' };
if ($checksum_algorithm) {
print "calculating checksum...";
my $checksum_got = PVE::Tools::get_file_hash($checksum_algorithm, $tmpfilename);
my $checksum_got =
PVE::Tools::get_file_hash($checksum_algorithm, $tmpfilename);
if (lc($checksum_got) eq lc($checksum)) {
print "OK, checksum verified\n";
@ -557,7 +613,8 @@ __PACKAGE__->register_method ({
};
if (my $err = $@) {
# unlinks only the temporary file from the http server
unlink $tmpfilename or $! == ENOENT
unlink $tmpfilename
or $! == ENOENT
or warn "unable to clean up temporary file '$tmpfilename' - $!\n";
die $err;
}
@ -570,7 +627,8 @@ __PACKAGE__->register_method ({
eval { run_command($cmd, errmsg => 'import failed'); };
# the temporary file got only uploaded locally, no need to rm remote
unlink $tmpfilename or $! == ENOENT
unlink $tmpfilename
or $! == ENOENT
or warn "unable to clean up temporary file '$tmpfilename' - $!\n";
if (my $err = $@) {
@ -582,7 +640,8 @@ __PACKAGE__->register_method ({
};
return $rpcenv->fork_worker('imgcopy', undef, $user, $worker);
}});
},
});
__PACKAGE__->register_method({
name => 'download_url',
@ -591,14 +650,17 @@ __PACKAGE__->register_method({
description => "Download templates, ISO images, OVAs and VM images by using an URL.",
proxyto => 'node',
permissions => {
description => 'Requires allocation access on the storage and as this allows one to probe'
.' the (local!) host network indirectly it also requires one of Sys.Modify on / (for'
.' backwards compatibility) or the newer Sys.AccessNetwork privilege on the node.',
check => [ 'and',
['perm', '/storage/{storage}', [ 'Datastore.AllocateTemplate' ]],
[ 'or',
['perm', '/', [ 'Sys.Audit', 'Sys.Modify' ]],
['perm', '/nodes/{node}', [ 'Sys.AccessNetwork' ]],
description =>
'Requires allocation access on the storage and as this allows one to probe'
. ' the (local!) host network indirectly it also requires one of Sys.Modify on / (for'
. ' backwards compatibility) or the newer Sys.AccessNetwork privilege on the node.',
check => [
'and',
['perm', '/storage/{storage}', ['Datastore.AllocateTemplate']],
[
'or',
['perm', '/', ['Sys.Audit', 'Sys.Modify']],
['perm', '/nodes/{node}', ['Sys.AccessNetwork']],
],
],
},
@ -615,11 +677,13 @@ __PACKAGE__->register_method({
},
content => {
description => "Content type.", # TODO: could be optional & detected in most cases
type => 'string', format => 'pve-storage-content',
type => 'string',
format => 'pve-storage-content',
enum => ['iso', 'vztmpl', 'import'],
},
filename => {
description => "The name of the file to create. Caution: This will be normalized!",
description =>
"The name of the file to create. Caution: This will be normalized!",
maxLength => 255,
type => 'string',
},
@ -652,7 +716,7 @@ __PACKAGE__->register_method({
},
},
returns => {
type => "string"
type => "string",
},
code => sub {
my ($param) = @_;
@ -668,7 +732,7 @@ __PACKAGE__->register_method({
die "can't upload to storage type '$scfg->{type}', not a file based storage!\n"
if !defined($scfg->{path});
my ($content, $url) = $param->@{'content', 'url'};
my ($content, $url) = $param->@{ 'content', 'url' };
die "storage '$storage' is not configured for content-type '$content'\n"
if !$scfg->{content}->{$content};
@ -690,7 +754,9 @@ __PACKAGE__->register_method({
}
$path = PVE::Storage::get_vztmpl_dir($cfg, $storage);
} elsif ($content eq 'import') {
if ($filename !~ m!${PVE::Storage::SAFE_CHAR_CLASS_RE}+$PVE::Storage::UPLOAD_IMPORT_EXT_RE_1$!) {
if ($filename !~
m!${PVE::Storage::SAFE_CHAR_CLASS_RE}+$PVE::Storage::UPLOAD_IMPORT_EXT_RE_1$!
) {
raise_param_exc({ filename => "invalid filename or wrong extension" });
}
my $format = $1;
@ -717,7 +783,7 @@ __PACKAGE__->register_method({
https_proxy => $dccfg->{http_proxy},
};
my ($checksum, $checksum_algorithm) = $param->@{'checksum', 'checksum-algorithm'};
my ($checksum, $checksum_algorithm) = $param->@{ 'checksum', 'checksum-algorithm' };
if ($checksum) {
$opts->{"${checksum_algorithm}sum"} = $checksum;
$opts->{hash_required} = 1;
@ -752,7 +818,8 @@ __PACKAGE__->register_method({
my $worker_id = PVE::Tools::encode_text($filename); # must not pass : or the like as w-ID
return $rpcenv->fork_worker('download', $worker_id, $user, $worker);
}});
},
});
__PACKAGE__->register_method({
name => 'get_import_metadata',
@ -760,7 +827,7 @@ __PACKAGE__->register_method({
method => 'GET',
description =>
"Get the base parameters for creating a guest which imports data from a foreign importable"
." guest, like an ESXi VM",
. " guest, like an ESXi VM",
proxyto => 'node',
permissions => {
description => "You need read access for the volume.",
@ -785,18 +852,19 @@ __PACKAGE__->register_method({
properties => {
type => {
type => 'string',
enum => [ 'vm' ],
enum => ['vm'],
description => 'The type of guest this is going to produce.',
},
source => {
type => 'string',
enum => [ 'esxi' ],
enum => ['esxi'],
description => 'The type of the import-source of this guest volume.',
},
'create-args' => {
type => 'object',
additionalProperties => 1,
description => 'Parameters which can be used in a call to create a VM or container.',
description =>
'Parameters which can be used in a call to create a VM or container.',
},
'disks' => {
type => 'object',
@ -808,12 +876,13 @@ __PACKAGE__->register_method({
type => 'object',
additionalProperties => 1,
optional => 1,
description => 'Recognised network interfaces as `net$id` => { ...params } object.',
description =>
'Recognised network interfaces as `net$id` => { ...params } object.',
},
'warnings' => {
type => 'array',
description => 'List of known issues that can affect the import of a guest.'
.' Note that lack of warning does not imply that there cannot be any problems.',
. ' Note that lack of warning does not imply that there cannot be any problems.',
optional => 1,
items => {
type => "object",
@ -860,9 +929,13 @@ __PACKAGE__->register_method({
PVE::Storage::check_volume_access($rpcenv, $authuser, $cfg, undef, $volid);
return PVE::Tools::run_with_timeout(30, sub {
return PVE::Tools::run_with_timeout(
30,
sub {
return PVE::Storage::get_import_metadata($cfg, $volid);
});
}});
},
);
},
});
1;

View File

@ -168,6 +168,7 @@ The message to be printed.
=back
=cut
sub new {
my ($class, $storage_plugin, $scfg, $storeid, $log_function) = @_;
@ -183,6 +184,7 @@ Returns the name of the backup provider. It will be printed in some log lines.
=back
=cut
sub provider_name {
my ($self) = @_;
@ -211,6 +213,7 @@ Unix time-stamp of when the job started.
=back
=cut
sub job_init {
my ($self, $start_time) = @_;
@ -227,6 +230,7 @@ the backup server. Called in both, success and failure scenarios.
=back
=cut
sub job_cleanup {
my ($self) = @_;
@ -271,6 +275,7 @@ Unix time-stamp of when the guest backup started.
=back
=cut
sub backup_init {
my ($self, $vmid, $vmtype, $start_time) = @_;
@ -326,6 +331,7 @@ Present if there was a failure. The error message indicating the failure.
=back
=cut
sub backup_cleanup {
my ($self, $vmid, $vmtype, $success, $info) = @_;
@ -366,6 +372,7 @@ The type of the guest being backed up. Currently, either C<qemu> or C<lxc>.
=back
=cut
sub backup_get_mechanism {
my ($self, $vmid, $vmtype) = @_;
@ -396,6 +403,7 @@ Path to the file with the backup log.
=back
=cut
sub backup_handle_log_file {
my ($self, $vmid, $filename) = @_;
@ -462,6 +470,7 @@ bitmap and existing ones will be discarded.
=back
=cut
sub backup_vm_query_incremental {
my ($self, $vmid, $volumes) = @_;
@ -619,6 +628,7 @@ configuration as raw data.
=back
=cut
sub backup_vm {
my ($self, $vmid, $guest_config, $volumes, $info) = @_;
@ -652,6 +662,7 @@ description there.
=back
=cut
sub backup_container_prepare {
my ($self, $vmid, $info) = @_;
@ -752,6 +763,7 @@ for unprivileged containers by default.
=back
=cut
sub backup_container {
my ($self, $vmid, $guest_config, $exclude_patterns, $info) = @_;
@ -797,6 +809,7 @@ The volume ID of the archive being restored.
=back
=cut
sub restore_get_mechanism {
my ($self, $volname) = @_;
@ -824,6 +837,7 @@ The volume ID of the archive being restored.
=back
=cut
sub archive_get_guest_config {
my ($self, $volname) = @_;
@ -853,6 +867,7 @@ The volume ID of the archive being restored.
=back
=cut
sub archive_get_firewall_config {
my ($self, $volname) = @_;
@ -901,6 +916,7 @@ The volume ID of the archive being restored.
=back
=cut
sub restore_vm_init {
my ($self, $volname) = @_;
@ -927,6 +943,7 @@ The volume ID of the archive being restored.
=back
=cut
sub restore_vm_cleanup {
my ($self, $volname) = @_;
@ -984,6 +1001,7 @@ empty.
=back
=cut
sub restore_vm_volume_init {
my ($self, $volname, $device_name, $info) = @_;
@ -1020,6 +1038,7 @@ empty.
=back
=cut
sub restore_vm_volume_cleanup {
my ($self, $volname, $device_name, $info) = @_;
@ -1086,6 +1105,7 @@ empty.
=back
=cut
sub restore_container_init {
my ($self, $volname, $info) = @_;
@ -1117,6 +1137,7 @@ empty.
=back
=cut
sub restore_container_cleanup {
my ($self, $volname, $info) = @_;
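The hunks above only insert blank lines between the POD sections and the corresponding method stubs of the backup-provider base module. For orientation, a concrete provider written against this interface might start out roughly as sketched below; only the method signatures are taken from the documentation fragments above, while the package name, the base-module path and the constructor body are assumptions made for illustration.

package PVE::BackupProvider::Plugin::MyProvider;    # hypothetical example name

use strict;
use warnings;

# Assumed base-module path - the fragments above do not show the actual package name.
use base qw(PVE::BackupProvider::Plugin::Base);

# Constructor with the signature documented above; stashing the arguments in a
# blessed hash is merely one plausible way a provider could do it.
sub new {
    my ($class, $storage_plugin, $scfg, $storeid, $log_function) = @_;

    return bless {
        'storage-plugin' => $storage_plugin,
        scfg => $scfg,
        storeid => $storeid,
        log => $log_function,
    }, $class;
}

# Short name that ends up in log lines, as described for provider_name() above.
sub provider_name {
    my ($self) = @_;
    return 'my-provider';
}

1;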

View File

@ -1,4 +1,5 @@
SOURCES=pvesm.pm
SOURCES=pvesm.pm \
pvebcache.pm
.PHONY: install
install: ${SOURCES}

src/PVE/CLI/pvebcache.pm (new file, 510 lines)
View File

@ -0,0 +1,510 @@
package PVE::CLI::pvebcache;
use strict;
use warnings;
use PVE::Cluster;
use PVE::APLInfo;
use PVE::SafeSyslog;
use PVE::Tools qw(extract_param file_read_firstline run_command);
use PVE::JSONSchema qw(get_standard_option);
use PVE::CLIHandler;
use PVE::API2::Nodes;
use PVE::Storage;
use File::Basename;
use Cwd 'realpath';
use JSON;
use base qw(PVE::CLIHandler);
my $nodename = PVE::INotify::nodename();
my $showbcache = "/usr/sbin/bcache-super-show";
my $makebcache = "/usr/sbin/make-bcache";
my $LSBLK = "/bin/lsblk";
sub setup_environment {
PVE::RPCEnvironment->setup_default_cli_env();
}
__PACKAGE__->register_method ({
name => 'index',
path => 'index',
method => 'GET',
description => "List bcache devices.",
permissions => {
description => "List bcache devices on the node.",
},
proxyto => 'node',
protected => 1,
parameters => {
additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
'type' => {
optional => 1,
type => 'string',
description => "Limit the listing to this bcache device type.",
enum => [qw(all cache backend)],
default => 'all',
},
},
},
returns => {
type => 'string',
},
code => sub {
my ($param) = @_;
my $rpcenv = PVE::RPCEnvironment::get();
my $authuser = $rpcenv->get_user();
my $type = $param->{type} // 'all';
my $devlist = PVE::Diskmanage::scan_bcache_device($type);
printf "%-10s %-10s %-20s %-20s %-15s %-15s %-15s\n",
qw(name type backend-dev cache-dev state size cachemode);
foreach my $rec (@$devlist) {
printf "%-10s %-10s %-20s %-20s %-15s %-15s %-15s\n",
$rec->{name},
$rec->{type},
$rec->{'backend-dev'},
$rec->{'cache-dev'},
$rec->{state},
$rec->{size},
$rec->{cachemode} // 0;
}
}});
__PACKAGE__->register_method ({
name => 'stop',
path => 'stop',
method => 'POST',
description => "Stop a bcache device.",
permissions => {
description => "Stop a bcache device.",
},
proxyto => 'node',
protected => 1,
parameters => {
additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
dev => {
type => 'string',
title => 'bcache name'
}
},
},
returns => {
type => 'string',
},
code => sub {
my ($param) = @_;
my $rpcenv = PVE::RPCEnvironment::get();
my $authuser = $rpcenv->get_user();
my $dev = $param->{dev};
my $sysdir = "/sys/block";
die "$dev is not a valid bcache device name!\n" if $dev !~ m{bcache\d+$};
if ($dev =~ m{^/dev/bcache\d+$}) {
$dev = basename($dev);
}
die "stopping device $dev failed!\n" if !PVE::SysFSTools::file_write("$sysdir/$dev/bcache/stop", "1");
}});
__PACKAGE__->register_method ({
name => 'register',
path => 'register',
method => 'POST',
description => "Register a bcache device.",
permissions => {
description => "Register a bcache device.",
},
proxyto => 'node',
protected => 1,
parameters => {
additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
dev => {
type => 'string',
title => 'dev name'
}
},
},
returns => {
type => 'string',
},
code => sub {
my ($param) = @_;
my $rpcenv = PVE::RPCEnvironment::get();
my $authuser = $rpcenv->get_user();
my $dev = PVE::Diskmanage::get_disk_name($param->{dev});
die "$dev is already a bcache device!\n" if -d "/sys/block/$dev/bcache/";
return PVE::SysFSTools::file_write("/sys/fs/bcache/register", "/dev/$dev");
}});
__PACKAGE__->register_method ({
name => 'create',
path => 'create',
method => 'POST',
description => "Create a bcache device.",
permissions => {
description => "Create a bcache device.",
},
proxyto => 'node',
protected => 1,
parameters => {
additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
backend => {
type => 'string',
title => 'backend dev name'
},
cache => {
type => 'string',
title => 'Cache dev name',
optional => 1,
},
blocksize => {
type => 'integer',
title => 'blocksize',
optional => 1,
},
writeback => {
type => 'boolean',
title => 'enable writeback',
default => 0,
optional => 1,
},
discard => {
type => 'boolean',
title => 'enable discard',
default => 1,
optional => 1,
},
},
},
returns => {
type => 'string',
},
code => sub {
my ($param) = @_;
my $rpcenv = PVE::RPCEnvironment::get();
my $authuser = $rpcenv->get_user();
my $dev = PVE::Diskmanage::get_disk_name($param->{backend});
my $cache = $param->{cache};
my $blocksize = $param->{blocksize};
my $writeback = $param->{writeback} // 0;
my $discard = $param->{discard} // 1;
die "backend device $dev is not a block device!\n"
if !PVE::Diskmanage::verify_blockdev_path("/dev/$dev");
die "backend device $dev is already a bcache device!\n" if -d "/sys/block/$dev/bcache/";
my $cmd = [$makebcache, "-B", "/dev/$dev"];
if (defined($cache)) {
die "$cache is already a cache device, create without a cache and attach it afterwards!\n"
if PVE::Diskmanage::check_bcache_cache_dev($cache);
$cache = PVE::Diskmanage::get_disk_name($cache);
push @$cmd, "-C", "/dev/$cache";
}
if (defined($blocksize)) {
push @$cmd, "-w", $blocksize;
}
push @$cmd, "--writeback" if $writeback;
push @$cmd, "--discard" if $discard;
return run_command($cmd, outfunc => sub { }, errfunc => sub { });
}});
__PACKAGE__->register_method ({
name => 'detach',
path => 'detach',
method => 'POST',
description => "Detach the cache device from a backend device.",
permissions => {
description => "Detach the cache device from a backend device.",
},
proxyto => 'node',
protected => 1,
parameters => {
additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
backend => {
type => 'string',
description => "backend dev",
},
}
},
returns => {
type => 'string',
},
code => sub {
my ($param) = @_;
my $rpcenv = PVE::RPCEnvironment::get();
my $authuser = $rpcenv->get_user();
my $backenddev = PVE::Diskmanage::get_bcache_backend_dev($param->{backend});
return PVE::SysFSTools::file_write("/sys/block/$backenddev/bcache/detach", "1");
}
});
__PACKAGE__->register_method ({
name => 'attach',
path => 'attach',
method => 'POST',
description => "Attach a cache device to a backend device.",
permissions => {
description => "Attach a cache device to a backend device.",
},
proxyto => 'node',
protected => 1,
parameters => {
additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
backend => {
type => 'string',
description => "backend dev",
},
cache => {
type => 'string',
description => "bcache dev",
},
}
},
returns => {
type => 'string',
},
code => sub {
my ($param) = @_;
my $rpcenv = PVE::RPCEnvironment::get();
my $authuser = $rpcenv->get_user();
my $backenddev = PVE::Diskmanage::get_bcache_backend_dev($param->{backend});
my $cachedev = PVE::Diskmanage::get_bcache_cache_dev($param->{cache});
return PVE::SysFSTools::file_write("/sys/block/$backenddev/bcache/attach", $cachedev);
}});
__PACKAGE__->register_method ({
name => 'create_cache',
path => 'create_cache',
method => 'POST',
description => "Create a cache device.",
permissions => {
description => "Create a cache device.",
},
proxyto => 'node',
protected => 1,
parameters => {
additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
cache => {
type => 'string',
description => "cache dev",
},
}
},
returns => {
type => 'string',
},
code => sub {
my ($param) = @_;
my $rpcenv = PVE::RPCEnvironment::get();
my $authuser = $rpcenv->get_user();
my $cachedev = PVE::Diskmanage::get_disk_name($param->{cache});
my $cmd = [$makebcache, "-C", "/dev/$cachedev"];
return run_command($cmd , outfunc => sub {}, errfunc => sub {});
}});
__PACKAGE__->register_method ({
name => 'stop_cache',
path => 'stop_cache',
method => 'POST',
description => "Stop a cache device.",
permissions => {
description => "Stop a cache device.",
},
proxyto => 'node',
protected => 1,
parameters => {
additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
cache => {
type => 'string',
description => "cache dev",
},
}
},
returns => {
type => 'string',
},
code => sub {
my ($param) = @_;
my $rpcenv = PVE::RPCEnvironment::get();
my $authuser = $rpcenv->get_user();
my $cachedev = PVE::Diskmanage::get_bcache_cache_dev($param->{cache});
PVE::Diskmanage::check_bcache_cache_is_inuse($cachedev);
$cachedev =~ /^([a-zA-Z0-9_\-\.]+)$/ || die "Invalid cachedev format: $cachedev";
my $uuid = $1;
return PVE::SysFSTools::file_write("/sys/fs/bcache/$uuid/stop","1");
}});
__PACKAGE__->register_method ({
name => 'set',
path => 'set',
method => 'POST',
description => "Set the cache policy of a backend device.",
permissions => {
description => "Set the cache policy of a backend device.",
},
proxyto => 'node',
protected => 1,
parameters => {
additionalProperties => 0,
properties => {
node => get_standard_option('pve-node'),
backend => {
type => 'string',
description => "backend dev",
},
cachemode => {
type => 'string',
description => "The cache mode.",
enum => [qw(writethrough writeback writearound none)],
optional => 1,
},
sequential => {
type => 'integer',
minimum => 0,
description => "Sequential cutoff threshold (in kb).",
optional => 1,
},
'wb-percent' => {
type => 'integer',
minimum => 0,
maximum => 80,
description => "writeback_percent",
optional => 1,
},
'clear-stats' => {
type => 'boolean',
optional => 1,
default => 0,
},
}
},
returns => {
type => 'string',
},
code => sub {
my ($param) = @_;
my $rpcenv = PVE::RPCEnvironment::get();
my $authuser = $rpcenv->get_user();
my $backenddev = PVE::Diskmanage::get_bcache_backend_dev($param->{backend});
my $cachemode = $param->{cachemode};
my $sequential = $param->{sequential};
my $wb_percent = $param->{'wb-percent'};
my $clear = $param->{'clear-stats'};
if (!$clear && !$wb_percent && !$sequential && !$cachemode) {
die "need at least one parameter, e.g. --clear-stats 1 --wb-percent 20 --sequential 8192 --cachemode writeback\n";
}
my $path = "/sys/block/$backenddev/bcache";
sub write_to_file {
my ($file, $value) = @_;
return if !$value;
eval {
my $old = file_read_firstline($file);
PVE::SysFSTools::file_write($file, $value);
my $new = file_read_firstline($file);
my $name = basename($file);
print "$name: $old => $new\n";
};
warn $@ if $@;
}
write_to_file("$path/cache_mode", $cachemode);
write_to_file("$path/writeback_percent", $wb_percent);
if ($sequential) {
$sequential = PVE::Tools::convert_size($sequential, 'kb' => 'b');
write_to_file("$path/sequential_cutoff", $sequential);
}
PVE::SysFSTools::file_write("$path/clear_stats", "1") if $clear;
return "ok\n";
}});
our $cmddef = {
create => [__PACKAGE__, 'create', ['backend'], { node => $nodename }],
stop => [__PACKAGE__, 'stop', ['dev'], { node => $nodename }],
register => [__PACKAGE__, 'register', ['dev'], { node => $nodename }],
list => [__PACKAGE__, 'index', [], { node => $nodename }],
start => { alias => 'register' },
cache => {
detach => [__PACKAGE__, 'detach', ['backend'], { node => $nodename }],
attach => [__PACKAGE__, 'attach', ['backend'], { node => $nodename }],
create => [__PACKAGE__, 'create_cache', ['cache'], { node => $nodename }],
stop => [__PACKAGE__, 'stop_cache', ['cache'], { node => $nodename }],
set => [__PACKAGE__, 'set', ['backend'], { node => $nodename }],
},
};
1;
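Assuming the new CLI ends up installed as `pvebcache` (which the Makefile change above suggests), the command table at the end of the file maps to invocations roughly like the following; the device names and the listing output are made up for illustration, and the actual columns come from PVE::Diskmanage::scan_bcache_device:

# create a backing device and attach a cache in one step
pvebcache create /dev/sdb --cache /dev/nvme0n1 --writeback 1

# list bcache devices (columns follow the printf header in the 'index' method)
pvebcache list
# name       type       backend-dev          cache-dev            state           size            cachemode
# bcache0    backend    sdb                  nvme0n1              clean           1.8T            writeback

# adjust the cache policy of an existing backing device
pvebcache cache set /dev/sdb --cachemode writethrough --wb-percent 20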

View File

@ -35,13 +35,16 @@ my $nodename = PVE::INotify::nodename();
sub param_mapping {
my ($name) = @_;
my $password_map = PVE::CLIHandler::get_standard_mapping('pve-password', {
my $password_map = PVE::CLIHandler::get_standard_mapping(
'pve-password',
{
func => sub {
my ($value) = @_;
return $value if $value;
return PVE::PTY::read_password("Enter Password: ");
},
});
},
);
my $enc_key_map = {
name => 'encryption-key',
@ -50,7 +53,7 @@ sub param_mapping {
my ($value) = @_;
return $value if $value eq 'autogen';
return PVE::Tools::file_get_contents($value);
}
},
};
my $master_key_map = {
@ -59,7 +62,7 @@ sub param_mapping {
func => sub {
my ($value) = @_;
return encode_base64(PVE::Tools::file_get_contents($value), '');
}
},
};
my $keyring_map = {
@ -72,11 +75,11 @@ sub param_mapping {
};
my $mapping = {
'cifsscan' => [ $password_map ],
'cifs' => [ $password_map ],
'pbs' => [ $password_map ],
'create' => [ $password_map, $enc_key_map, $master_key_map, $keyring_map ],
'update' => [ $password_map, $enc_key_map, $master_key_map, $keyring_map ],
'cifsscan' => [$password_map],
'cifs' => [$password_map],
'pbs' => [$password_map],
'create' => [$password_map, $enc_key_map, $master_key_map, $keyring_map],
'update' => [$password_map, $enc_key_map, $master_key_map, $keyring_map],
};
return $mapping->{$name};
}
@ -85,7 +88,7 @@ sub setup_environment {
PVE::RPCEnvironment->setup_default_cli_env();
}
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'apiinfo',
path => 'apiinfo',
method => 'GET',
@ -106,10 +109,10 @@ __PACKAGE__->register_method ({
apiver => PVE::Storage::APIVER,
apiage => PVE::Storage::APIAGE,
};
}
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'path',
path => 'path',
method => 'GET',
@ -119,7 +122,8 @@ __PACKAGE__->register_method ({
properties => {
volume => {
description => "Volume identifier",
type => 'string', format => 'pve-volume-id',
type => 'string',
format => 'pve-volume-id',
completion => \&PVE::Storage::complete_volume,
},
},
@ -131,21 +135,23 @@ __PACKAGE__->register_method ({
my $cfg = PVE::Storage::config();
my $path = PVE::Storage::path ($cfg, $param->{volume});
my $path = PVE::Storage::path($cfg, $param->{volume});
print "$path\n";
return undef;
}});
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'extractconfig',
path => 'extractconfig',
method => 'GET',
description => "Extract configuration from vzdump backup archive.",
permissions => {
description => "The user needs 'VM.Backup' permissions on the backed up guest ID, and 'Datastore.AllocateSpace' on the backup storage.",
description =>
"The user needs 'VM.Backup' permissions on the backed up guest ID, and 'Datastore.AllocateSpace' on the backup storage.",
user => 'all',
},
protected => 1,
@ -169,12 +175,7 @@ __PACKAGE__->register_method ({
my $storage_cfg = PVE::Storage::config();
PVE::Storage::check_volume_access(
$rpcenv,
$authuser,
$storage_cfg,
undef,
$volume,
'backup',
$rpcenv, $authuser, $storage_cfg, undef, $volume, 'backup',
);
if (PVE::Storage::parse_volume_id($volume, 1)) {
@ -186,7 +187,8 @@ __PACKAGE__->register_method ({
print "$config_raw\n";
return;
}});
},
});
my $print_content = sub {
my ($list) = @_;
@ -194,7 +196,7 @@ my $print_content = sub {
my ($maxlenname, $maxsize) = (0, 0);
foreach my $info (@$list) {
my $volid = $info->{volid};
my $sidlen = length ($volid);
my $sidlen = length($volid);
$maxlenname = $sidlen if $sidlen > $maxlenname;
$maxsize = $info->{size} if ($info->{size} // 0) > $maxsize;
}
@ -207,7 +209,8 @@ my $print_content = sub {
next if !$info->{vmid};
my $volid = $info->{volid};
printf "$basefmt %d\n", $volid, $info->{format}, $info->{content}, $info->{size}, $info->{vmid};
printf "$basefmt %d\n", $volid, $info->{format}, $info->{content}, $info->{size},
$info->{vmid};
}
foreach my $info (sort { $a->{format} cmp $b->{format} } @$list) {
@ -224,9 +227,9 @@ my $print_status = sub {
my $maxlen = 0;
foreach my $res (@$res) {
my $storeid = $res->{storage};
$maxlen = length ($storeid) if length ($storeid) > $maxlen;
$maxlen = length($storeid) if length($storeid) > $maxlen;
}
$maxlen+=1;
$maxlen += 1;
printf "%-${maxlen}s %10s %10s %15s %15s %15s %8s\n", 'Name', 'Type',
'Status', 'Total', 'Used', 'Available', '%';
@ -236,7 +239,7 @@ my $print_status = sub {
my $active = $res->{active} ? 'active' : 'inactive';
my ($per, $per_fmt) = (0, '% 7.2f%%');
$per = ($res->{used}*100)/$res->{total} if $res->{total} > 0;
$per = ($res->{used} * 100) / $res->{total} if $res->{total} > 0;
if (!$res->{enabled}) {
$per = 'N/A';
@ -245,12 +248,12 @@ my $print_status = sub {
}
printf "%-${maxlen}s %10s %10s %15d %15d %15d $per_fmt\n", $storeid,
$res->{type}, $active, $res->{total}/1024, $res->{used}/1024,
$res->{avail}/1024, $per;
$res->{type}, $active, $res->{total} / 1024, $res->{used} / 1024,
$res->{avail} / 1024, $per;
}
};
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'export',
path => 'export',
method => 'GET',
@ -288,8 +291,7 @@ __PACKAGE__->register_method ({
optional => 1,
},
'with-snapshots' => {
description =>
"Whether to include intermediate snapshots in the stream",
description => "Whether to include intermediate snapshots in the stream",
type => 'boolean',
optional => 1,
default => 0,
@ -320,14 +322,21 @@ __PACKAGE__->register_method ({
close(STDOUT);
open(STDOUT, '>', '/dev/null');
} else {
sysopen($outfh, $filename, O_CREAT|O_WRONLY|O_TRUNC)
sysopen($outfh, $filename, O_CREAT | O_WRONLY | O_TRUNC)
or die "open($filename): $!\n";
}
eval {
my $cfg = PVE::Storage::config();
PVE::Storage::volume_export($cfg, $outfh, $param->{volume}, $param->{format},
$param->{snapshot}, $param->{base}, $with_snapshots);
PVE::Storage::volume_export(
$cfg,
$outfh,
$param->{volume},
$param->{format},
$param->{snapshot},
$param->{base},
$with_snapshots,
);
};
my $err = $@;
if ($filename ne '-') {
@ -336,10 +345,10 @@ __PACKAGE__->register_method ({
}
die $err if $err;
return;
}
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'import',
path => 'import',
method => 'PUT',
@ -359,10 +368,10 @@ __PACKAGE__->register_method ({
enum => $PVE::Storage::KNOWN_EXPORT_FORMATS,
},
filename => {
description => "Source file name. For '-' stdin is used, the " .
"tcp://<IP-or-CIDR> format allows to use a TCP connection, " .
"the unix://PATH-TO-SOCKET format a UNIX socket as input." .
"Else, the file is treated as common file.",
description => "Source file name. For '-' stdin is used, the "
. "tcp://<IP-or-CIDR> format allows to use a TCP connection, "
. "the unix://PATH-TO-SOCKET format a UNIX socket as input."
. "Else, the file is treated as common file.",
type => 'string',
},
base => {
@ -373,8 +382,7 @@ __PACKAGE__->register_method ({
optional => 1,
},
'with-snapshots' => {
description =>
"Whether the stream includes intermediate snapshots",
description => "Whether the stream includes intermediate snapshots",
type => 'boolean',
optional => 1,
default => 0,
@ -387,8 +395,8 @@ __PACKAGE__->register_method ({
optional => 1,
},
'allow-rename' => {
description => "Choose a new volume ID if the requested " .
"volume ID already exists, instead of throwing an error.",
description => "Choose a new volume ID if the requested "
. "volume ID already exists, instead of throwing an error.",
type => 'boolean',
optional => 1,
default => 0,
@ -474,21 +482,28 @@ __PACKAGE__->register_method ({
my $cfg = PVE::Storage::config();
my $volume = $param->{volume};
my $delete = $param->{'delete-snapshot'};
my $imported_volid = PVE::Storage::volume_import($cfg, $infh, $volume, $param->{format},
$param->{snapshot}, $param->{base}, $param->{'with-snapshots'},
$param->{'allow-rename'});
my $imported_volid = PVE::Storage::volume_import(
$cfg,
$infh,
$volume,
$param->{format},
$param->{snapshot},
$param->{base},
$param->{'with-snapshots'},
$param->{'allow-rename'},
);
PVE::Storage::volume_snapshot_delete($cfg, $imported_volid, $delete)
if defined($delete);
return $imported_volid;
}
},
});
__PACKAGE__->register_method ({
__PACKAGE__->register_method({
name => 'prunebackups',
path => 'prunebackups',
method => 'GET',
description => "Prune backups. Only those using the standard naming scheme are considered. " .
"If no keep options are specified, those from the storage configuration are used.",
description => "Prune backups. Only those using the standard naming scheme are considered. "
. "If no keep options are specified, those from the storage configuration are used.",
protected => 1,
proxyto => 'node',
parameters => {
@ -500,28 +515,36 @@ __PACKAGE__->register_method ({
optional => 1,
},
node => get_standard_option('pve-node'),
storage => get_standard_option('pve-storage-id', {
storage => get_standard_option(
'pve-storage-id',
{
completion => \&PVE::Storage::complete_storage_enabled,
}),
},
),
%{$PVE::Storage::Plugin::prune_backups_format},
type => {
description => "Either 'qemu' or 'lxc'. Only consider backups for guests of this type.",
description =>
"Either 'qemu' or 'lxc'. Only consider backups for guests of this type.",
type => 'string',
optional => 1,
enum => ['qemu', 'lxc'],
},
vmid => get_standard_option('pve-vmid', {
vmid => get_standard_option(
'pve-vmid',
{
description => "Only consider backups for this guest.",
optional => 1,
completion => \&PVE::Cluster::complete_vmid,
}),
},
),
},
},
returns => {
type => 'object',
properties => {
dryrun => {
description => 'If it was a dry run or not. The list will only be defined in that case.',
description =>
'If it was a dry run or not. The list will only be defined in that case.',
type => 'boolean',
},
list => {
@ -534,12 +557,14 @@ __PACKAGE__->register_method ({
type => 'string',
},
'ctime' => {
description => "Creation time of the backup (seconds since the UNIX epoch).",
description =>
"Creation time of the backup (seconds since the UNIX epoch).",
type => 'integer',
},
'mark' => {
description => "Whether the backup would be kept or removed. For backups that don't " .
"use the standard naming scheme, it's 'protected'.",
description =>
"Whether the backup would be kept or removed. For backups that don't "
. "use the standard naming scheme, it's 'protected'.",
type => 'string',
},
type => {
@ -566,7 +591,9 @@ __PACKAGE__->register_method ({
$keep_opts->{$keep} = extract_param($param, $keep) if defined($param->{$keep});
}
$param->{'prune-backups'} = PVE::JSONSchema::print_property_string(
$keep_opts, $PVE::Storage::Plugin::prune_backups_format) if $keep_opts;
$keep_opts,
$PVE::Storage::Plugin::prune_backups_format,
) if $keep_opts;
my $list = [];
if ($dryrun) {
@ -579,7 +606,8 @@ __PACKAGE__->register_method ({
dryrun => $dryrun,
list => $list,
};
}});
},
});
my $print_api_result = sub {
my ($data, $schema, $options) = @_;
@ -587,76 +615,107 @@ my $print_api_result = sub {
};
our $cmddef = {
add => [ "PVE::API2::Storage::Config", 'create', ['type', 'storage'] ],
set => [ "PVE::API2::Storage::Config", 'update', ['storage'] ],
remove => [ "PVE::API2::Storage::Config", 'delete', ['storage'] ],
status => [ "PVE::API2::Storage::Status", 'index', [],
{ node => $nodename }, $print_status ],
list => [ "PVE::API2::Storage::Content", 'index', ['storage'],
{ node => $nodename }, $print_content ],
alloc => [ "PVE::API2::Storage::Content", 'create', ['storage', 'vmid', 'filename', 'size'],
{ node => $nodename }, sub {
add => ["PVE::API2::Storage::Config", 'create', ['type', 'storage']],
set => ["PVE::API2::Storage::Config", 'update', ['storage']],
remove => ["PVE::API2::Storage::Config", 'delete', ['storage']],
status => ["PVE::API2::Storage::Status", 'index', [], { node => $nodename }, $print_status],
list => [
"PVE::API2::Storage::Content",
'index',
['storage'],
{ node => $nodename },
$print_content,
],
alloc => [
"PVE::API2::Storage::Content",
'create',
['storage', 'vmid', 'filename', 'size'],
{ node => $nodename },
sub {
my $volid = shift;
print "successfully created '$volid'\n";
}],
free => [ "PVE::API2::Storage::Content", 'delete', ['volume'],
{ node => $nodename } ],
},
],
free => ["PVE::API2::Storage::Content", 'delete', ['volume'], { node => $nodename }],
scan => {
nfs => [ "PVE::API2::Storage::Scan", 'nfsscan', ['server'], { node => $nodename }, sub {
nfs => [
"PVE::API2::Storage::Scan",
'nfsscan',
['server'],
{ node => $nodename },
sub {
my $res = shift;
my $maxlen = 0;
foreach my $rec (@$res) {
my $len = length ($rec->{path});
my $len = length($rec->{path});
$maxlen = $len if $len > $maxlen;
}
foreach my $rec (@$res) {
printf "%-${maxlen}s %s\n", $rec->{path}, $rec->{options};
}
}],
cifs => [ "PVE::API2::Storage::Scan", 'cifsscan', ['server'], { node => $nodename }, sub {
},
],
cifs => [
"PVE::API2::Storage::Scan",
'cifsscan',
['server'],
{ node => $nodename },
sub {
my $res = shift;
my $maxlen = 0;
foreach my $rec (@$res) {
my $len = length ($rec->{share});
my $len = length($rec->{share});
$maxlen = $len if $len > $maxlen;
}
foreach my $rec (@$res) {
printf "%-${maxlen}s %s\n", $rec->{share}, $rec->{description};
}
}],
glusterfs => [ "PVE::API2::Storage::Scan", 'glusterfsscan', ['server'], { node => $nodename }, sub {
my $res = shift;
foreach my $rec (@$res) {
printf "%s\n", $rec->{volname};
}
}],
iscsi => [ "PVE::API2::Storage::Scan", 'iscsiscan', ['portal'], { node => $nodename }, sub {
},
],
iscsi => [
"PVE::API2::Storage::Scan",
'iscsiscan',
['portal'],
{ node => $nodename },
sub {
my $res = shift;
my $maxlen = 0;
foreach my $rec (@$res) {
my $len = length ($rec->{target});
my $len = length($rec->{target});
$maxlen = $len if $len > $maxlen;
}
foreach my $rec (@$res) {
printf "%-${maxlen}s %s\n", $rec->{target}, $rec->{portal};
}
}],
lvm => [ "PVE::API2::Storage::Scan", 'lvmscan', [], { node => $nodename }, sub {
},
],
lvm => [
"PVE::API2::Storage::Scan",
'lvmscan',
[],
{ node => $nodename },
sub {
my $res = shift;
foreach my $rec (@$res) {
printf "$rec->{vg}\n";
}
}],
lvmthin => [ "PVE::API2::Storage::Scan", 'lvmthinscan', ['vg'], { node => $nodename }, sub {
},
],
lvmthin => [
"PVE::API2::Storage::Scan",
'lvmthinscan',
['vg'],
{ node => $nodename },
sub {
my $res = shift;
foreach my $rec (@$res) {
printf "$rec->{lv}\n";
}
}],
},
],
pbs => [
"PVE::API2::Storage::Scan",
'pbsscan',
@ -665,35 +724,57 @@ our $cmddef = {
$print_api_result,
$PVE::RESTHandler::standard_output_options,
],
zfs => [ "PVE::API2::Storage::Scan", 'zfsscan', [], { node => $nodename }, sub {
zfs => [
"PVE::API2::Storage::Scan",
'zfsscan',
[],
{ node => $nodename },
sub {
my $res = shift;
foreach my $rec (@$res) {
printf "$rec->{pool}\n";
}
}],
},
],
},
nfsscan => { alias => 'scan nfs' },
cifsscan => { alias => 'scan cifs' },
glusterfsscan => { alias => 'scan glusterfs' },
iscsiscan => { alias => 'scan iscsi' },
lvmscan => { alias => 'scan lvm' },
lvmthinscan => { alias => 'scan lvmthin' },
zfsscan => { alias => 'scan zfs' },
path => [ __PACKAGE__, 'path', ['volume']],
path => [__PACKAGE__, 'path', ['volume']],
extractconfig => [__PACKAGE__, 'extractconfig', ['volume']],
export => [ __PACKAGE__, 'export', ['volume', 'format', 'filename']],
import => [ __PACKAGE__, 'import', ['volume', 'format', 'filename'], {}, sub {
export => [__PACKAGE__, 'export', ['volume', 'format', 'filename']],
import => [
__PACKAGE__,
'import',
['volume', 'format', 'filename'],
{},
sub {
my $volid = shift;
print PVE::Storage::volume_imported_message($volid);
}],
apiinfo => [ __PACKAGE__, 'apiinfo', [], {}, sub {
},
],
apiinfo => [
__PACKAGE__,
'apiinfo',
[],
{},
sub {
my $res = shift;
print "APIVER $res->{apiver}\n";
print "APIAGE $res->{apiage}\n";
}],
'prune-backups' => [ __PACKAGE__, 'prunebackups', ['storage'], { node => $nodename }, sub {
},
],
'prune-backups' => [
__PACKAGE__,
'prunebackups',
['storage'],
{ node => $nodename },
sub {
my $res = shift;
my ($dryrun, $list) = ($res->{dryrun}, $res->{list});
@ -705,11 +786,12 @@ our $cmddef = {
return;
}
print "NOTE: this is only a preview and might not be what a subsequent\n" .
"prune call does if backups are removed/added in the meantime.\n\n";
print "NOTE: this is only a preview and might not be what a subsequent\n"
. "prune call does if backups are removed/added in the meantime.\n\n";
my @sorted = sort {
my $vmcmp = PVE::Tools::safe_compare($a->{vmid}, $b->{vmid}, sub { $_[0] <=> $_[1] });
my $vmcmp =
PVE::Tools::safe_compare($a->{vmid}, $b->{vmid}, sub { $_[0] <=> $_[1] });
return $vmcmp if $vmcmp ne 0;
return $a->{ctime} <=> $b->{ctime};
} @{$list};
@ -719,16 +801,22 @@ our $cmddef = {
my $volid = $backup->{volid};
$maxlen = length($volid) if length($volid) > $maxlen;
}
$maxlen+=1;
$maxlen += 1;
printf("%-${maxlen}s %15s %10s\n", 'Backup', 'Backup-ID', 'Prune-Mark');
foreach my $backup (@sorted) {
my $type = $backup->{type};
my $vmid = $backup->{vmid};
my $backup_id = defined($vmid) ? "$type/$vmid" : "$type";
printf("%-${maxlen}s %15s %10s\n", $backup->{volid}, $backup_id, $backup->{mark});
printf(
"%-${maxlen}s %15s %10s\n",
$backup->{volid},
$backup_id,
$backup->{mark},
);
}
}],
},
],
};
1;

View File

@ -3,12 +3,12 @@ package PVE::CephConfig;
use strict;
use warnings;
use Net::IP;
use PVE::RESTEnvironment qw(log_warn);
use PVE::Tools qw(run_command);
use PVE::Cluster qw(cfs_register_file);
cfs_register_file('ceph.conf',
\&parse_ceph_config,
\&write_ceph_config);
cfs_register_file('ceph.conf', \&parse_ceph_config, \&write_ceph_config);
# For more information on how the Ceph parser works and how its grammar is
# defined, see:
@ -126,7 +126,7 @@ sub parse_ceph_config {
$key =~ s/$re_leading_ws//;
$key =~ s/\s/ /;
while ($key =~ s/\s\s/ /) {} # squeeze repeated whitespace
while ($key =~ s/\s\s/ /) { } # squeeze repeated whitespace
# Ceph treats *single* spaces in keys the same as underscores,
# but we'll just use underscores for readability
@ -258,7 +258,7 @@ my $parse_ceph_file = sub {
my $cfg = {};
return $cfg if ! -f $filename;
return $cfg if !-f $filename;
my $content = PVE::Tools::file_get_contents($filename);
@ -352,7 +352,7 @@ sub get_monaddr_list {
my $monhostlist = {};
# get all ip addresses from mon_host
my $monhosts = [ split (/[ ,;]+/, $config->{global}->{mon_host} // "") ];
my $monhosts = [split(/[ ,;]+/, $config->{global}->{mon_host} // "")];
foreach my $monhost (@$monhosts) {
$monhost =~ s/^\[?v\d\://; # remove beginning of vector
@ -364,7 +364,7 @@ sub get_monaddr_list {
}
# then get all addrs from mon. sections
for my $section ( keys %$config ) {
for my $section (keys %$config) {
next if $section !~ m/^mon\./;
if (my $addr = $config->{$section}->{mon_addr}) {
@ -385,7 +385,7 @@ sub hostlist {
my $ceph_check_keyfile = sub {
my ($filename, $type) = @_;
return if ! -f $filename;
return if !-f $filename;
my $content = PVE::Tools::file_get_contents($filename);
eval {
@ -417,10 +417,15 @@ sub ceph_connect_option {
if (-e "/etc/pve/priv/ceph/${storeid}.conf") {
# allow custom ceph configuration for external clusters
if ($pveceph_managed) {
warn "ignoring custom ceph config for storage '$storeid', 'monhost' is not set (assuming pveceph managed cluster)!\n";
warn
"ignoring custom ceph config for storage '$storeid', 'monhost' is not set (assuming pveceph managed cluster)!\n";
} else {
$cmd_option->{ceph_conf} = "/etc/pve/priv/ceph/${storeid}.conf";
}
} elsif (!$pveceph_managed) {
# No dedicated config for non-PVE-managed cluster, create new
# TODO PVE 10 - remove. All such storages already got a configuration upon creation or here.
ceph_create_configuration($scfg->{type}, $storeid);
}
$cmd_option->{keyring} = $keyfile if (-e $keyfile);
@ -463,7 +468,8 @@ sub ceph_create_keyfile {
my $cephfs_secret = $ceph_get_key->($ceph_admin_keyring, 'admin');
mkdir '/etc/pve/priv/ceph';
chomp $cephfs_secret;
PVE::Tools::file_set_contents($ceph_storage_keyring, "${cephfs_secret}\n", 0400);
PVE::Tools::file_set_contents($ceph_storage_keyring, "${cephfs_secret}\n",
0400);
}
};
if (my $err = $@) {
@ -487,12 +493,56 @@ sub ceph_remove_keyfile {
}
}
sub ceph_create_configuration {
my ($type, $storeid) = @_;
return if $type eq 'cephfs'; # no configuration file needed currently
my $extension = 'keyring';
$extension = 'secret' if $type eq 'cephfs';
my $ceph_storage_keyring = "/etc/pve/priv/ceph/${storeid}.$extension";
return if !-e $ceph_storage_keyring;
my $ceph_storage_config = "/etc/pve/priv/ceph/${storeid}.conf";
if (-e $ceph_storage_config) {
log_warn(
"file $ceph_storage_config already exists, check manually and ensure 'keyring'"
. " option is set to '$ceph_storage_keyring'!\n",
);
return;
}
my $ceph_config = {
global => {
keyring => $ceph_storage_keyring,
},
};
my $contents = PVE::CephConfig::write_ceph_config($ceph_storage_config, $ceph_config);
PVE::Tools::file_set_contents($ceph_storage_config, $contents, 0600);
return;
}
sub ceph_remove_configuration {
my ($storeid) = @_;
my $ceph_storage_config = "/etc/pve/priv/ceph/${storeid}.conf";
if (-f $ceph_storage_config) {
unlink $ceph_storage_config or log_warn("removing $ceph_storage_config failed - $!\n");
}
return;
}
my $ceph_version_parser = sub {
my $ceph_version = shift;
# FIXME this is the same as pve-manager PVE::Ceph::Tools get_local_version
if ($ceph_version =~ /^ceph.*\sv?(\d+(?:\.\d+)+(?:-pve\d+)?)\s+(?:\(([a-zA-Z0-9]+)\))?/) {
my ($version, $buildcommit) = ($1, $2);
my $subversions = [ split(/\.|-/, $version) ];
my $subversions = [split(/\.|-/, $version)];
return ($subversions, $version, $buildcommit);
}
@ -504,9 +554,12 @@ sub local_ceph_version {
my $version_string = $cache;
if (!defined($version_string)) {
run_command('ceph --version', outfunc => sub {
run_command(
'ceph --version',
outfunc => sub {
$version_string = shift;
});
},
);
}
return undef if !defined($version_string);
# subversion is an array ref. with the version parts from major to minor
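For illustration, the new ceph_create_configuration helper above generates a per-storage client configuration that merely points Ceph at the storage's keyring. A sketch of the call and the expected result for a hypothetical external RBD storage 'extceph' (the exact formatting is up to write_ceph_config):

PVE::CephConfig::ceph_create_configuration('rbd', 'extceph');
# writes /etc/pve/priv/ceph/extceph.conf (mode 0600), roughly:
#   [global]
#       keyring = /etc/pve/priv/ceph/extceph.keyring
# and does nothing if no keyring exists or the config file is already present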


@ -5,13 +5,14 @@ use warnings;
use PVE::ProcFSTools;
use Data::Dumper;
use Cwd qw(abs_path);
use Cwd qw(abs_path realpath);
use Fcntl ':mode';
use File::Basename;
use File::stat;
use JSON;
use PVE::Tools qw(extract_param run_command file_get_contents file_read_firstline dir_glob_regex dir_glob_foreach trim);
use PVE::Tools
qw(extract_param run_command file_get_contents file_read_firstline dir_glob_regex dir_glob_foreach trim);
my $SMARTCTL = "/usr/sbin/smartctl";
my $ZPOOL = "/sbin/zpool";
@ -20,7 +21,7 @@ my $PVS = "/sbin/pvs";
my $LVS = "/sbin/lvs";
my $LSBLK = "/bin/lsblk";
my sub strip_dev :prototype($) {
my sub strip_dev : prototype($) {
my ($devpath) = @_;
$devpath =~ s|^/dev/||;
return $devpath;
@ -98,38 +99,46 @@ sub get_smart_data {
push @$cmd, $disk;
my $returncode = eval {
run_command($cmd, noerr => 1, outfunc => sub {
run_command(
$cmd,
noerr => 1,
outfunc => sub {
my ($line) = @_;
# ATA SMART attributes, e.g.:
# ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE
# 1 Raw_Read_Error_Rate POSR-K 100 100 000 - 0
#
# SAS and NVME disks, e.g.:
# Data Units Written: 5,584,952 [2.85 TB]
# Accumulated start-stop cycles: 34
# ATA SMART attributes, e.g.:
# ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE
# 1 Raw_Read_Error_Rate POSR-K 100 100 000 - 0
#
# SAS and NVME disks, e.g.:
# Data Units Written: 5,584,952 [2.85 TB]
# Accumulated start-stop cycles: 34
if (defined($type) && $type eq 'ata' && $line =~ m/^([ \d]{2}\d)\s+(\S+)\s+(\S{6})\s+(\d+)\s+(\d+)\s+(\S+)\s+(\S+)\s+(.*)$/) {
if (
defined($type)
&& $type eq 'ata'
&& $line =~
m/^([ \d]{2}\d)\s+(\S+)\s+(\S{6})\s+(\d+)\s+(\d+)\s+(\S+)\s+(\S+)\s+(.*)$/
) {
my $entry = {};
$entry->{name} = $2 if defined $2;
$entry->{flags} = $3 if defined $3;
# the +0 makes a number out of the strings
# FIXME: 'value' is deprecated in favor of 'normalized'; remove with PVE 7.0
$entry->{value} = $4+0 if defined $4;
$entry->{normalized} = $4+0 if defined $4;
$entry->{worst} = $5+0 if defined $5;
$entry->{value} = $4 + 0 if defined $4;
$entry->{normalized} = $4 + 0 if defined $4;
$entry->{worst} = $5 + 0 if defined $5;
# some disks report the default threshold as --- instead of 000
if (defined($6) && $6 eq '---') {
$entry->{threshold} = 0;
} else {
$entry->{threshold} = $6+0 if defined $6;
$entry->{threshold} = $6 + 0 if defined $6;
}
$entry->{fail} = $7 if defined $7;
$entry->{raw} = $8 if defined $8;
$entry->{id} = $1 if defined $1;
push @{$smartdata->{attributes}}, $entry;
} elsif ($line =~ m/(?:Health Status|self\-assessment test result): (.*)$/ ) {
push @{ $smartdata->{attributes} }, $entry;
} elsif ($line =~ m/(?:Health Status|self\-assessment test result): (.*)$/) {
$smartdata->{health} = $1;
} elsif ($line =~ m/Vendor Specific SMART Attributes with Thresholds:/) {
$type = 'ata';
@ -140,13 +149,16 @@ sub get_smart_data {
$smartdata->{text} = '' if !defined $smartdata->{text};
$smartdata->{text} .= "$line\n";
# extract wearout from nvme/sas text, allow for decimal values
if ($line =~ m/Percentage Used(?: endurance indicator)?:\s*(\d+(?:\.\d+)?)\%/i) {
if ($line =~
m/Percentage Used(?: endurance indicator)?:\s*(\d+(?:\.\d+)?)\%/i
) {
$smartdata->{wearout} = 100 - $1;
}
} elsif ($line =~ m/SMART Disabled/) {
$smartdata->{health} = "SMART Disabled";
}
})
},
);
};
my $err = $@;
@ -163,7 +175,9 @@ sub get_smart_data {
sub get_lsblk_info {
my $cmd = [$LSBLK, '--json', '-o', 'path,parttype,fstype'];
my $output = "";
eval { run_command($cmd, outfunc => sub { $output .= "$_[0]\n"; }) };
eval {
run_command($cmd, outfunc => sub { $output .= "$_[0]\n"; });
};
warn "$@\n" if $@;
return {} if $output eq '';
@ -175,7 +189,7 @@ sub get_lsblk_info {
map {
$_->{path} => {
parttype => $_->{parttype},
fstype => $_->{fstype}
fstype => $_->{fstype},
}
} @{$list}
};
@ -203,12 +217,15 @@ sub get_zfs_devices {
# use zpool and parttype uuid, because log and cache do not have zfs type uuid
eval {
run_command([$ZPOOL, 'list', '-HPLv'], outfunc => sub {
run_command(
[$ZPOOL, 'list', '-HPLv'],
outfunc => sub {
my ($line) = @_;
if ($line =~ m|^\t([^\t]+)\t|) {
$res->{$1} = 1;
}
});
},
);
};
# only warn here, because maybe zfs tools are not installed
@ -219,7 +236,6 @@ sub get_zfs_devices {
"516e7cba-6ecf-11d6-8ff8-00022d09712b" => 1, # bsd
};
$res = get_devices_by_partuuid($lsblk_info, $uuids, $res);
return $res;
@ -229,13 +245,16 @@ sub get_lvm_devices {
my ($lsblk_info) = @_;
my $res = {};
eval {
run_command([$PVS, '--noheadings', '--readonly', '-o', 'pv_name'], outfunc => sub{
run_command(
[$PVS, '--noheadings', '--readonly', '-o', 'pv_name'],
outfunc => sub {
my ($line) = @_;
$line = trim($line);
if ($line =~ m|^/dev/|) {
$res->{$line} = 1;
}
});
},
);
};
# if something goes wrong, we do not want to give up, but indicate an error has occurred
@ -270,23 +289,37 @@ sub get_ceph_journals {
sub get_ceph_volume_infos {
my $result = {};
my $cmd = [ $LVS, '-S', 'lv_name=~^osd-', '-o', 'devices,lv_name,lv_tags',
'--noheadings', '--readonly', '--separator', ';' ];
my $cmd = [
$LVS,
'-S',
'lv_name=~^osd-',
'-o',
'devices,lv_name,lv_tags',
'--noheadings',
'--readonly',
'--separator',
';',
];
run_command($cmd, outfunc => sub {
run_command(
$cmd,
outfunc => sub {
my $line = shift;
$line =~ s/(?:^\s+)|(?:\s+$)//g; # trim whitespaces
my $fields = [ split(';', $line) ];
my $fields = [split(';', $line)];
# lvs syntax is /dev/sdX(Y) where Y is the start (which we do not need)
my ($dev) = $fields->[0] =~ m|^(/dev/[a-z]+[^(]*)|;
if ($fields->[1] =~ m|^osd-([^-]+)-|) {
my $type = $1;
# $result autovivification is wanted, so we do not have to pre-create empty hashes
if (($type eq 'block' || $type eq 'data') && $fields->[2] =~ m/ceph.osd_id=([^,]+)/) {
if (
($type eq 'block' || $type eq 'data')
&& $fields->[2] =~ m/ceph.osd_id=([^,]+)/
) {
$result->{$dev}->{osdid} = $1;
if ( !defined($result->{$dev}->{'osdid-list'}) ) {
if (!defined($result->{$dev}->{'osdid-list'})) {
$result->{$dev}->{'osdid-list'} = [];
}
push($result->{$dev}->{'osdid-list'}->@*, $1);
@ -299,7 +332,8 @@ sub get_ceph_volume_infos {
$result->{$dev}->{$type}++;
}
}
});
},
);
return $result;
}
@ -310,10 +344,13 @@ sub get_udev_info {
my $info = "";
my $data = {};
eval {
run_command(['udevadm', 'info', '-p', $dev, '--query', 'all'], outfunc => sub {
run_command(
['udevadm', 'info', '-p', $dev, '--query', 'all'],
outfunc => sub {
my ($line) = @_;
$info .= "$line\n";
});
},
);
};
warn $@ if $@;
return if !$info;
@ -343,7 +380,7 @@ sub get_udev_info {
$data->{wwn} = $1 if $info =~ m/^E: ID_WWN=(.*)$/m;
if ($info =~ m/^E: DEVLINKS=(.+)$/m) {
my @devlinks = grep(m#^/dev/disk/by-id/(ata|scsi|nvme(?!-eui))#, split (/ /, $1));
my @devlinks = grep(m#^/dev/disk/by-id/(ata|scsi|nvme(?!-eui))#, split(/ /, $1));
$data->{by_id_link} = $devlinks[0] if defined($devlinks[0]);
}
@ -363,7 +400,9 @@ sub get_sysdir_size {
sub get_sysdir_info {
my ($sysdir) = @_;
return if ! -d "$sysdir/device";
if ($sysdir !~ /bcache\d+/ && ! -d "$sysdir/device") {
return;
}
my $data = {};
@ -374,7 +413,11 @@ sub get_sysdir_info {
$data->{vendor} = file_read_firstline("$sysdir/device/vendor") || 'unknown';
$data->{model} = file_read_firstline("$sysdir/device/model") || 'unknown';
if ($sysdir =~ /bcache\d+/){
$data->{vendor} = 'bcache';
$data->{model} = file_read_firstline("$sysdir/bcache/backing_dev_name") || 'unknown';
$data->{serial} = file_read_firstline("$sysdir/bcache/state") || 'unknown';
}
return $data;
}
@ -403,7 +446,7 @@ sub get_wear_leveling_info {
"Lifetime_Remaining",
"Percent_Life_Remaining",
"Percent_Lifetime_Used",
"Perc_Rated_Life_Used"
"Perc_Rated_Life_Used",
);
# Search for S.M.A.R.T. attributes for known register
@ -422,7 +465,7 @@ sub get_wear_leveling_info {
sub dir_is_empty {
my ($dir) = @_;
my $dh = IO::Dir->new ($dir);
my $dh = IO::Dir->new($dir);
return 1 if !$dh;
while (defined(my $tmp = $dh->read)) {
@ -456,8 +499,8 @@ sub mounted_blockdevs {
foreach my $mount (@$mounts) {
next if $mount->[0] !~ m|^/dev/|;
$mounted->{abs_path($mount->[0])} = $mount->[1];
};
$mounted->{ abs_path($mount->[0]) } = $mount->[1];
}
return $mounted;
}
@ -469,8 +512,8 @@ sub mounted_paths {
my $mounts = PVE::ProcFSTools::parse_proc_mounts();
foreach my $mount (@$mounts) {
$mounted->{abs_path($mount->[1])} = $mount->[0];
};
$mounted->{ abs_path($mount->[1]) } = $mount->[0];
}
return $mounted;
}
@ -493,7 +536,7 @@ sub get_disks {
my $disk_regex = ".*";
if (defined($disks)) {
if (!ref($disks)) {
$disks = [ $disks ];
$disks = [$disks];
} elsif (ref($disks) ne 'ARRAY') {
die "disks is not a string or array reference\n";
}
@ -522,7 +565,10 @@ sub get_disks {
# - cciss!cXnY cciss devices
return if $dev !~ m/^(h|s|x?v)d[a-z]+$/ &&
$dev !~ m/^nvme\d+n\d+$/ &&
$dev !~ m/^cciss\!c\d+d\d+$/;
$dev !~ m/^cciss\!c\d+d\d+$/ &&
$dev !~ m/^mmcblk\d+n\d+$/ &&
$dev !~ m/^nbd\d+n\d+$/ &&
$dev !~ /bcache\d+/;
my $data = get_udev_info("/sys/block/$dev") // return;
my $devpath = $data->{devpath};
@ -604,9 +650,10 @@ sub get_disks {
my $info = $lsblk_info->{$devpath} // {};
if (defined(my $parttype = $info->{parttype})) {
return 'BIOS boot'if $parttype eq '21686148-6449-6e6f-744e-656564454649';
return 'BIOS boot' if $parttype eq '21686148-6449-6e6f-744e-656564454649';
return 'EFI' if $parttype eq 'c12a7328-f81f-11d2-ba4b-00a0c93ec93b';
return 'ZFS reserved' if $parttype eq '6a945a3b-1dd2-11b2-99a6-080020736631';
return 'ZFS reserved'
if $parttype eq '6a945a3b-1dd2-11b2-99a6-080020736631';
}
return "$info->{fstype}" if defined($info->{fstype});
@ -640,7 +687,10 @@ sub get_disks {
};
my $partitions = {};
dir_glob_foreach("$sysdir", "$dev.+", sub {
dir_glob_foreach(
"$sysdir",
"$dev.+",
sub {
my ($part) = @_;
$partitions->{$part} = $collect_ceph_info->("$partpath/$part");
@ -652,7 +702,8 @@ sub get_disks {
$partitions->{$part}->{gpt} = $data->{gpt};
$partitions->{$part}->{type} = 'partition';
$partitions->{$part}->{size} = get_sysdir_size("$sysdir/$part") // 0;
$partitions->{$part}->{used} = $determine_usage->("$partpath/$part", "$sysdir/$part", 1);
$partitions->{$part}->{used} =
$determine_usage->("$partpath/$part", "$sysdir/$part", 1);
$partitions->{$part}->{osdid} //= -1;
$partitions->{$part}->{'osdid-list'} //= undef;
@ -680,7 +731,8 @@ sub get_disks {
$partitions->{$part}->{wal} = 1 if $journal_part == 3;
$partitions->{$part}->{bluestore} = 1 if $journal_part == 4;
}
});
},
);
my $used = $determine_usage->($devpath, $sysdir, 0);
if (!$include_partitions) {
@ -712,7 +764,8 @@ sub get_disks {
if ($include_partitions) {
$disklist->{$_} = $partitions->{$_} for keys %{$partitions};
}
});
},
);
return $disklist;
}
@ -783,28 +836,38 @@ sub append_partition {
$devname =~ s|^/dev/||;
my $newpartid = 1;
dir_glob_foreach("/sys/block/$devname", qr/\Q$devname\E.*?(\d+)/, sub {
dir_glob_foreach(
"/sys/block/$devname",
qr/\Q$devname\E.*?(\d+)/,
sub {
my ($part, $partid) = @_;
if ($partid >= $newpartid) {
$newpartid = $partid + 1;
}
});
},
);
$size = PVE::Tools::convert_size($size, 'b' => 'mb');
run_command([ $SGDISK, '-n', "$newpartid:0:+${size}M", $dev ],
errmsg => "error creating partition '$newpartid' on '$dev'");
run_command(
[$SGDISK, '-n', "$newpartid:0:+${size}M", $dev],
errmsg => "error creating partition '$newpartid' on '$dev'",
);
my $partition;
# loop again to detect the real partition device which does not always follow
# a strict $devname$partition scheme like /dev/nvme0n1 -> /dev/nvme0n1p1
dir_glob_foreach("/sys/block/$devname", qr/\Q$devname\E.*$newpartid/, sub {
dir_glob_foreach(
"/sys/block/$devname",
qr/\Q$devname\E.*$newpartid/,
sub {
my ($part) = @_;
$partition = "/dev/$part";
});
},
);
return $partition;
}
@ -820,10 +883,14 @@ sub has_holder {
return $devpath if !dir_is_empty("/sys/class/block/${dev}/holders");
my $found;
dir_glob_foreach("/sys/block/${dev}", "${dev}.+", sub {
dir_glob_foreach(
"/sys/block/${dev}",
"${dev}.+",
sub {
my ($part) = @_;
$found = "/dev/${part}" if !dir_is_empty("/sys/class/block/${part}/holders");
});
},
);
return $found;
}
@ -841,12 +908,16 @@ sub is_mounted {
my $dev = strip_dev($devpath);
my $found;
dir_glob_foreach("/sys/block/${dev}", "${dev}.+", sub {
dir_glob_foreach(
"/sys/block/${dev}",
"${dev}.+",
sub {
my ($part) = @_;
my $partpath = "/dev/${part}";
$found = $partpath if $mounted->{$partpath};
});
},
);
return $found;
}
@ -884,13 +955,17 @@ sub wipe_blockdev {
my $count = ($size < 200) ? $size : 200;
my $to_wipe = [];
dir_glob_foreach("/sys/class/block/${devname}", "${devname}.+", sub {
dir_glob_foreach(
"/sys/class/block/${devname}",
"${devname}.+",
sub {
my ($part) = @_;
push $to_wipe->@*, "/dev/${part}" if -b "/dev/${part}";
});
},
);
if (scalar($to_wipe->@*) > 0) {
print "found child partitions to wipe: ". join(', ', $to_wipe->@*) ."\n";
print "found child partitions to wipe: " . join(', ', $to_wipe->@*) . "\n";
}
push $to_wipe->@*, $devpath; # put actual device last
@ -920,4 +995,182 @@ sub udevadm_trigger {
warn $@ if $@;
}
sub scan_bcache_device {
my ($showtype) = @_;
my $ddd = [];
my $res = get_lsblk_info();
foreach my $device (keys %$res) {
if ($res->{$device}{'fstype'} && $res->{$device}{'fstype'} eq 'bcache') {
my $d = {};
$device = basename($device);
my $path = get_bcache_dev_path($device);
my $state = "Stopped";
my $disktype = "unknown";
my $cachemode = "unknown";
my $backenddev = "unknown";
my $cache = "unknown";
if ( -d "/sys/block/$path/bcache/") {
$state = "Running";
$disktype = "backend";
# if the node itself has no cache-set link, resolve the actual bcache device via its bcache/dev symlink
if ($device =~ m/^nvme/ ||
$device =~ m/^sd/ ||
$device =~ m/^xvd/ ||
$device =~ m/^mmcblk/ ||
$device =~ m/^nbd/) {
if ( ! -d "/sys/block/$path/bcache/set") {
$device = basename(realpath("/sys/block/$path/bcache/dev"));
$path = get_bcache_dev_path($device);
}
}
}
if ( -d "/sys/block/$path/bcache/set"){
$disktype = "cache";
$backenddev = "none";
$cachemode = "none";
$cache = "none";
}
if ($showtype ne 'all' && ($showtype ne $disktype)){
next;
}
$d->{type} = $disktype;
$d->{state} = $state;
if ( $disktype eq 'backend' && ( $showtype eq 'all' || $showtype eq $disktype )){
if ( $state ne 'Stopped'){
$backenddev = file_read_firstline("/sys/block/$path/bcache/backing_dev_name");
$cachemode = file_read_firstline("/sys/block/$path/bcache/cache_mode");
if ( $cachemode && $cachemode =~ /\[(.*?)\]/) {
$cachemode = $1;
}
if ( -d "/sys/block/$path/bcache/cache"){
$cache = basename(realpath("/sys/block/$path/bcache/cache/cache0/../"));
}
}else{
$backenddev = $device;
}
}
$d->{name} = $device;
$d->{'backend-dev'} = $backenddev;
$d->{cachemode} = $cachemode;
$d->{'cache-dev'} = $cache;
my $size = int(file_read_firstline("/sys/block/$path/size")) * 512;
$size = PVE::Tools::convert_size($size, 'b' => 'GB');
$d->{size} = $size . "GB";
push @$ddd, $d;
}
}
#print Dumper($ddd);
@$ddd = sort { $a->{name} cmp $b->{name} } @$ddd;
return $ddd;
}
sub get_devices_by_uuid {
my ($lsblk_info, $uuids, $res) = @_;
$res = {} if !defined($res);
foreach my $dev (sort keys %{$lsblk_info}) {
my $uuid = $lsblk_info->{$dev}->{uuid};
next if !defined($uuid) || !defined($uuids->{$uuid});
$res->{$dev} = $uuids->{$uuid};
}
return $res;
}
sub get_bcache_dev_path {
my ($dev) = @_;
my $diskname = $dev;
if ($dev =~ m/^(nvme\d+n\d+)p\d+$/ || # NVMe partitions, e.g. nvme0n1p1
$dev =~ m/^([sv]d[a-z]+)\d+$/ || # standard partitions, e.g. sda1, vdb2
$dev =~ m/^(xvd[a-z]+)\d+$/ || # Xen virtual disk partitions
$dev =~ m/^(mmcblk\d+)p\d+$/ || # mmcblk (SD/eMMC) partitions
$dev =~ m/^(nbd\d+)p\d+$/) { # NBD network block device partitions
$diskname = $1;
$dev = "$diskname/$dev";
}
return $dev;
}
sub get_bcache_cache_dev {
my ($cachedev) = @_;
my $path = $cachedev;
if ($cachedev =~ m{^/dev/}) {
$cachedev = basename($cachedev);
$path = get_bcache_dev_path($cachedev);
die "$cachedev is not a bcache dev!\n" if (! -d "/sys/block/$path/bcache/set");
$cachedev = basename(realpath("/sys/block/$path/bcache/set"));
} elsif (is_uuid($cachedev)){
die "uuid $cachedev not a cache dev!\n" if (! -d "/sys/fs/bcache/$cachedev/");
} else {
$path = get_bcache_dev_path($cachedev);
die "cache $cachedev dev is not a cache device!\n" if ! -d "/sys/block/$path/bcache/set";
$cachedev = basename(realpath("/sys/block/$path/bcache/set"));
}
return $cachedev;
}
sub check_bcache_cache_dev {
my ($cachedev) = @_;
if (is_uuid($cachedev)){
return 0 if (! -d "/sys/fs/bcache/$cachedev/");
}else{
my $path = get_bcache_dev_path($cachedev);
return 0 if (! -d "/sys/block/$path/bcache/set");
}
return 1;
}
sub is_uuid {
my ($uuid) = @_;
my $uuid_regex = qr/^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;
return $uuid =~ $uuid_regex;
}
sub get_bcache_backend_dev {
my ($backenddev) = @_;
if ($backenddev =~ m{^/dev/}) {
$backenddev = basename($backenddev);
}
my $path = get_bcache_dev_path($backenddev);
die "backend $backenddev dev is not a bcache device!\n" if ! -d "/sys/block/$path/bcache/";
return $backenddev;
}
sub check_bcache_cache_is_inuse {
my ($cache) = @_;
my @bdev = glob("/sys/fs/bcache/$cache/bdev*");
die "cache dev $cache is in use!\n" if scalar @bdev > 0;
}
sub get_disk_name {
my ($dev) = @_;
if ($dev =~ m{^/dev/}) {
$dev = basename($dev);
}
die "$dev is not a blockdev!\n" if !PVE::Diskmanage::verify_blockdev_path("/dev/$dev");
return $dev;
}
sub bcache_cache_uuid_to_dev {
my ($dev) = @_;
my $path = get_bcache_dev_path($dev);
return basename(realpath("/sys/block/$path/bcache/cache0/../"));
}
1;
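Most of the new bcache helpers above rely on get_bcache_dev_path to map a partition name to the "<parent>/<partition>" form used under /sys/block. A short sketch with example device names:

# partitions resolve to "<parent disk>/<partition>", whole disks pass through unchanged
print PVE::Diskmanage::get_bcache_dev_path('nvme0n1p1'), "\n";  # nvme0n1/nvme0n1p1
print PVE::Diskmanage::get_bcache_dev_path('sda1'), "\n";       # sda/sda1
print PVE::Diskmanage::get_bcache_dev_path('sdb'), "\n";        # sdb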


@ -54,14 +54,17 @@ sub extract_disk_from_import_file {
'-x',
'--force-local',
'--no-same-owner',
'-C', $tmpdir,
'-f', $ova_path,
'-C',
$tmpdir,
'-f',
$ova_path,
$inner_file,
]);
# check for symlinks and other non regular files
if (-l $source_path || ! -f $source_path) {
die "extracted file '$inner_file' from archive '$archive_volid' is not a regular file\n";
if (-l $source_path || !-f $source_path) {
die
"extracted file '$inner_file' from archive '$archive_volid' is not a regular file\n";
}
# check potentially untrusted image file!
@ -69,7 +72,8 @@ sub extract_disk_from_import_file {
# create temporary 1M image that will get overwritten by the rename
# to reserve the filename and take care of locking
$target_volid = PVE::Storage::vdisk_alloc($cfg, $target_storeid, $vmid, $inner_fmt, undef, 1024);
$target_volid =
PVE::Storage::vdisk_alloc($cfg, $target_storeid, $vmid, $inner_fmt, undef, 1024);
$target_path = PVE::Storage::path($cfg, $target_volid);
print "renaming $source_path to $target_path\n";


@ -36,7 +36,7 @@ my @resources = (
{ id => 17, dtmf_name => 'Disk Drive' },
{ id => 18, dtmf_name => 'Tape Drive' },
{ id => 19, dtmf_name => 'Storage Extent' },
{ id => 20, dtmf_name => 'Other storage device', pve_type => 'sata'},
{ id => 20, dtmf_name => 'Other storage device', pve_type => 'sata' },
{ id => 21, dtmf_name => 'Serial port' },
{ id => 22, dtmf_name => 'Parallel port' },
{ id => 23, dtmf_name => 'USB Controller' },
@ -51,7 +51,7 @@ my @resources = (
{ id => 32, dtmf_name => 'Storage Volume' },
{ id => 33, dtmf_name => 'Ethernet Connection' },
{ id => 34, dtmf_name => 'DMTF reserved' },
{ id => 35, dtmf_name => 'Vendor Reserved'}
{ id => 35, dtmf_name => 'Vendor Reserved' },
);
# see https://schemas.dmtf.org/wbem/cim-html/2.55.0+/CIM_OperatingSystem.html
@ -120,9 +120,7 @@ sub get_ostype {
}
my $allowed_nic_models = [
'e1000',
'e1000e',
'vmxnet3',
'e1000', 'e1000e', 'vmxnet3',
];
sub find_by {
@ -163,7 +161,7 @@ sub try_parse_capacity_unit {
if ($unit_text =~ m/^\s*byte\s*\*\s*([0-9]+)\s*\^\s*([0-9]+)\s*$/) {
my $base = $1;
my $exp = $2;
return $base ** $exp;
return $base**$exp;
}
return undef;
@ -177,24 +175,31 @@ sub parse_ovf {
my $dom;
if ($isOva) {
my $raw = "";
PVE::Tools::run_command(['tar', '-xO', '--wildcards', '--occurrence=1', '-f', $ovf, '*.ovf'], outfunc => sub {
PVE::Tools::run_command(
['tar', '-xO', '--wildcards', '--occurrence=1', '-f', $ovf, '*.ovf'],
outfunc => sub {
my $line = shift;
$raw .= $line;
});
},
);
$dom = XML::LibXML->load_xml(string => $raw, no_blanks => 1);
} else {
$dom = XML::LibXML->load_xml(location => $ovf, no_blanks => 1);
}
# register the xml namespaces in a xpath context object
# 'ovf' is the default namespace so it will prepended to each xml element
my $xpc = XML::LibXML::XPathContext->new($dom);
$xpc->registerNs('ovf', 'http://schemas.dmtf.org/ovf/envelope/1');
$xpc->registerNs('vmw', 'http://www.vmware.com/schema/ovf');
$xpc->registerNs('rasd', 'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData');
$xpc->registerNs('vssd', 'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData');
$xpc->registerNs(
'rasd',
'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData',
);
$xpc->registerNs(
'vssd',
'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData',
);
# hash to save qm.conf parameters
my $qm;
@ -222,32 +227,39 @@ sub parse_ovf {
$ovf_name =~ s/\s+/-/g;
($qm->{name} = $ovf_name) =~ s/[^a-zA-Z0-9\-\.]//g;
} else {
warn "warning: unable to parse the VM name in this OVF manifest, generating a default value\n";
warn
"warning: unable to parse the VM name in this OVF manifest, generating a default value\n";
}
# middle level xpath
# element[child] search the elements which have this [child]
my $processor_id = dtmf_name_to_id('Processor');
my $xpath_find_vcpu_count = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${processor_id}]/rasd:VirtualQuantity";
my $xpath_find_vcpu_count =
"/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${processor_id}]/rasd:VirtualQuantity";
$qm->{'cores'} = $xpc->findvalue($xpath_find_vcpu_count);
my $memory_id = dtmf_name_to_id('Memory');
my $xpath_find_memory = ("/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${memory_id}]/rasd:VirtualQuantity");
my $xpath_find_memory = (
"/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${memory_id}]/rasd:VirtualQuantity"
);
$qm->{'memory'} = $xpc->findvalue($xpath_find_memory);
# middle level xpath
# here we expect multiple results, so we do not read the element value with
# findvalue() but store multiple elements with findnodes()
my $disk_id = dtmf_name_to_id('Disk Drive');
my $xpath_find_disks = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${disk_id}]";
my $xpath_find_disks =
"/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${disk_id}]";
my @disk_items = $xpc->findnodes($xpath_find_disks);
my $xpath_find_ostype_id = "/ovf:Envelope/ovf:VirtualSystem/ovf:OperatingSystemSection/\@ovf:id";
my $xpath_find_ostype_id =
"/ovf:Envelope/ovf:VirtualSystem/ovf:OperatingSystemSection/\@ovf:id";
my $ostype_id = $xpc->findvalue($xpath_find_ostype_id);
$qm->{ostype} = get_ostype($ostype_id);
# vmware specific firmware config, seems to not be standardized in ovf ?
my $xpath_find_firmware = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/vmw:Config[\@vmw:key=\"firmware\"]/\@vmw:value";
my $xpath_find_firmware =
"/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/vmw:Config[\@vmw:key=\"firmware\"]/\@vmw:value";
my $firmware = $xpc->findvalue($xpath_find_firmware) || 'seabios';
$qm->{bios} = 'ovmf' if $firmware eq 'efi';
@ -290,12 +302,18 @@ sub parse_ovf {
# tricky xpath
# @ means we filter the result query based on a the value of an item attribute ( @ = attribute)
# @ needs to be escaped to prevent Perl double quote interpolation
my $xpath_find_fileref = sprintf("/ovf:Envelope/ovf:DiskSection/\
ovf:Disk[\@ovf:diskId='%s']/\@ovf:fileRef", $disk_id);
my $xpath_find_capacity = sprintf("/ovf:Envelope/ovf:DiskSection/\
ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacity", $disk_id);
my $xpath_find_capacity_unit = sprintf("/ovf:Envelope/ovf:DiskSection/\
ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacityAllocationUnits", $disk_id);
my $xpath_find_fileref = sprintf(
"/ovf:Envelope/ovf:DiskSection/\
ovf:Disk[\@ovf:diskId='%s']/\@ovf:fileRef", $disk_id,
);
my $xpath_find_capacity = sprintf(
"/ovf:Envelope/ovf:DiskSection/\
ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacity", $disk_id,
);
my $xpath_find_capacity_unit = sprintf(
"/ovf:Envelope/ovf:DiskSection/\
ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacityAllocationUnits", $disk_id,
);
my $fileref = $xpc->findvalue($xpath_find_fileref);
my $capacity = $xpc->findvalue($xpath_find_capacity);
my $capacity_unit = $xpc->findvalue($xpath_find_capacity_unit);
@ -312,8 +330,10 @@ ovf:Disk[\@ovf:diskId='%s']/\@ovf:capacityAllocationUnits", $disk_id);
# from Item, find owning Controller type
my $controller_id = $xpc->findvalue('rasd:Parent', $item_node);
my $xpath_find_parent_type = sprintf("/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/\
ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
my $xpath_find_parent_type = sprintf(
"/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/\
ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id,
);
my $controller_type = $xpc->findvalue($xpath_find_parent_type);
if (!$controller_type) {
warn "invalid or missing controller: $controller_type, skipping\n";
@ -326,7 +346,8 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
my $pve_disk_address = id_to_pve($controller_type) . $adress_on_controller;
# from Disk Node, find corresponding filepath
my $xpath_find_filepath = sprintf("/ovf:Envelope/ovf:References/ovf:File[\@ovf:id='%s']/\@ovf:href", $fileref);
my $xpath_find_filepath =
sprintf("/ovf:Envelope/ovf:References/ovf:File[\@ovf:id='%s']/\@ovf:href", $fileref);
my $filepath = $xpc->findvalue($xpath_find_filepath);
if (!$filepath) {
warn "invalid file reference $fileref, skipping\n";
@ -335,13 +356,14 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
print "file path: $filepath\n" if $debug;
my $original_filepath = $filepath;
($filepath) = $filepath =~ m|^(${PVE::Storage::SAFE_CHAR_WITH_WHITESPACE_CLASS_RE}+)$|; # untaint & check no sub/parent dirs
die "referenced path '$original_filepath' is invalid\n" if !$filepath || $filepath eq "." || $filepath eq "..";
die "referenced path '$original_filepath' is invalid\n"
if !$filepath || $filepath eq "." || $filepath eq "..";
# resolve symlinks and relative path components
# and die if the diskimage is not somewhere under the $ovf path
my $ovf_dir = realpath(dirname(File::Spec->rel2abs($ovf)))
or die "could not get absolute path of $ovf: $!\n";
my $backing_file_path = realpath(join ('/', $ovf_dir, $filepath))
my $backing_file_path = realpath(join('/', $ovf_dir, $filepath))
or die "could not get absolute path of $filepath: $!\n";
if ($backing_file_path !~ /^\Q${ovf_dir}\E/) {
die "error parsing $filepath, are you using a symlink ?\n";
@ -374,7 +396,8 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
$qm->{boot} = "order=" . join(';', @$boot_order) if scalar(@$boot_order) > 0;
my $nic_id = dtmf_name_to_id('Ethernet Adapter');
my $xpath_find_nics = "/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${nic_id}]";
my $xpath_find_nics =
"/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${nic_id}]";
my @nic_items = $xpc->findnodes($xpath_find_nics);
my $net = {};
@ -383,12 +406,12 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", $controller_id);
for my $item_node (@nic_items) {
my $model = $xpc->findvalue('rasd:ResourceSubType', $item_node);
$model = lc($model);
$model = 'e1000' if ! grep { $_ eq $model } @$allowed_nic_models;
$model = 'e1000' if !grep { $_ eq $model } @$allowed_nic_models;
$net->{"net${net_count}"} = { model => $model };
$net_count++;
}
return {qm => $qm, disks => \@disks, net => $net};
return { qm => $qm, disks => \@disks, net => $net };
}
1;
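The parser above is built entirely on namespaced XPath lookups; a self-contained sketch of the same pattern with a made-up OVF fragment (namespace URIs as registered in parse_ovf, DMTF resource type 3 = 'Processor'):

use XML::LibXML;

my $xml = <<'EOF';
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
    xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">
  <VirtualSystem>
    <VirtualHardwareSection>
      <Item>
        <rasd:ResourceType>3</rasd:ResourceType>
        <rasd:VirtualQuantity>4</rasd:VirtualQuantity>
      </Item>
    </VirtualHardwareSection>
  </VirtualSystem>
</Envelope>
EOF

my $dom = XML::LibXML->load_xml(string => $xml, no_blanks => 1);
my $xpc = XML::LibXML::XPathContext->new($dom);
$xpc->registerNs('ovf', 'http://schemas.dmtf.org/ovf/envelope/1');
$xpc->registerNs('rasd',
    'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData');
print $xpc->findvalue('/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection'
    . '/ovf:Item[rasd:ResourceType=3]/rasd:VirtualQuantity'), "\n"; # prints 4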

File diff suppressed because it is too large.


@ -44,7 +44,7 @@ sub plugindata {
},
{ images => 1, rootdir => 1 },
],
format => [ { raw => 1, subvol => 1 }, 'raw', ],
format => [{ raw => 1, subvol => 1 }, 'raw'],
'sensitive-properties' => {},
};
}
@ -68,7 +68,6 @@ sub options {
nodes => { optional => 1 },
shared => { optional => 1 },
disable => { optional => 1 },
maxfiles => { optional => 1 },
'prune-backups' => { optional => 1 },
'max-protected-backups' => { optional => 1 },
content => { optional => 1 },
@ -95,7 +94,8 @@ sub options {
# Reuse `DirPlugin`'s `check_config`. This simply checks for invalid paths.
sub check_config {
my ($self, $sectionId, $config, $create, $skipSchemaCheck) = @_;
return PVE::Storage::DirPlugin::check_config($self, $sectionId, $config, $create, $skipSchemaCheck);
return PVE::Storage::DirPlugin::check_config($self, $sectionId, $config, $create,
$skipSchemaCheck);
}
my sub getfsmagic($) {
@ -127,7 +127,7 @@ sub activate_storage {
my $mp = PVE::Storage::DirPlugin::parse_is_mountpoint($scfg);
if (defined($mp) && !PVE::Storage::DirPlugin::path_is_mounted($mp, $cache->{mountdata})) {
die "unable to activate storage '$storeid' - directory is expected to be a mount point but"
." is not mounted: '$mp'\n";
. " is not mounted: '$mp'\n";
}
assert_btrfs($path); # only assert this stuff now, ensures $path is there and better UX
@ -142,18 +142,14 @@ sub status {
sub get_volume_attribute {
my ($class, $scfg, $storeid, $volname, $attribute) = @_;
return PVE::Storage::DirPlugin::get_volume_attribute($class, $scfg, $storeid, $volname, $attribute);
return PVE::Storage::DirPlugin::get_volume_attribute($class, $scfg, $storeid, $volname,
$attribute);
}
sub update_volume_attribute {
my ($class, $scfg, $storeid, $volname, $attribute, $value) = @_;
return PVE::Storage::DirPlugin::update_volume_attribute(
$class,
$scfg,
$storeid,
$volname,
$attribute,
$value,
$class, $scfg, $storeid, $volname, $attribute, $value,
);
}
@ -190,8 +186,7 @@ sub raw_file_to_subvol($) {
sub filesystem_path {
my ($class, $scfg, $volname, $snapname) = @_;
my ($vtype, $name, $vmid, undef, undef, $isBase, $format) =
$class->parse_volname($volname);
my ($vtype, $name, $vmid, undef, undef, $isBase, $format) = $class->parse_volname($volname);
my $path = $class->get_subdir($scfg, $vtype);
@ -415,19 +410,22 @@ my sub foreach_snapshot_of_subvol : prototype($$) {
my $basename = basename($subvol);
my $dir = dirname($subvol);
dir_glob_foreach($dir, $BTRFS_SNAPSHOT_REGEX, sub {
dir_glob_foreach(
$dir,
$BTRFS_SNAPSHOT_REGEX,
sub {
my ($volume, $name, $snap_name) = ($1, $2, $3);
return if !path_is_subvolume("$dir/$volume");
return if $name ne $basename;
$code->($snap_name);
});
},
);
}
sub free_image {
my ($class, $storeid, $scfg, $volname, $isBase, $_format) = @_;
my ($vtype, undef, $vmid, undef, undef, undef, $format) =
$class->parse_volname($volname);
my ($vtype, undef, $vmid, undef, undef, undef, $format) = $class->parse_volname($volname);
if (!defined($format) || $vtype ne 'images' || ($format ne 'subvol' && $format ne 'raw')) {
return $class->SUPER::free_image($storeid, $scfg, $volname, $isBase, $_format);
@ -441,10 +439,13 @@ sub free_image {
}
my @snapshot_vols;
foreach_snapshot_of_subvol($subvol, sub {
foreach_snapshot_of_subvol(
$subvol,
sub {
my ($snap_name) = @_;
push @snapshot_vols, "$subvol\@$snap_name";
});
},
);
$class->btrfs_cmd(['subvolume', 'delete', '--', @snapshot_vols, $subvol]);
# try to cleanup directory to not clutter storage with empty $vmid dirs if
@ -514,7 +515,7 @@ sub volume_resize {
sub volume_snapshot {
my ($class, $scfg, $storeid, $volname, $snap) = @_;
my ($name, $vmid, $format) = ($class->parse_volname($volname))[1,2,6];
my ($name, $vmid, $format) = ($class->parse_volname($volname))[1, 2, 6];
if ($format ne 'subvol' && $format ne 'raw') {
return PVE::Storage::Plugin::volume_snapshot(@_);
}
@ -527,9 +528,6 @@ sub volume_snapshot {
$snap_path = raw_file_to_subvol($snap_path);
}
my $snapshot_dir = $class->get_subdir($scfg, 'images') . "/$vmid";
mkpath $snapshot_dir;
$class->btrfs_cmd(['subvolume', 'snapshot', '-r', '--', $path, $snap_path]);
return undef;
}
@ -543,7 +541,7 @@ sub volume_rollback_is_possible {
sub volume_snapshot_rollback {
my ($class, $scfg, $storeid, $volname, $snap) = @_;
my ($name, $format) = ($class->parse_volname($volname))[1,6];
my ($name, $format) = ($class->parse_volname($volname))[1, 6];
if ($format ne 'subvol' && $format ne 'raw') {
return PVE::Storage::Plugin::volume_snapshot_rollback(@_);
@ -581,7 +579,7 @@ sub volume_snapshot_rollback {
sub volume_snapshot_delete {
my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
my ($name, $vmid, $format) = ($class->parse_volname($volname))[1,2,6];
my ($name, $vmid, $format) = ($class->parse_volname($volname))[1, 2, 6];
if ($format ne 'subvol' && $format ne 'raw') {
return PVE::Storage::Plugin::volume_snapshot_delete(@_);
@ -604,7 +602,7 @@ sub volume_has_feature {
my $features = {
snapshot => {
current => { qcow2 => 1, raw => 1, subvol => 1 },
snap => { qcow2 => 1, raw => 1, subvol => 1 }
snap => { qcow2 => 1, raw => 1, subvol => 1 },
},
clone => {
base => { qcow2 => 1, raw => 1, subvol => 1, vmdk => 1 },
@ -628,7 +626,8 @@ sub volume_has_feature {
},
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) = $class->parse_volname($volname);
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
$class->parse_volname($volname);
my $key = undef;
if ($snapname) {
@ -674,9 +673,8 @@ sub list_images {
$format = 'subvol';
} else {
$format = $ext;
($size, undef, $used, $parent, $ctime) = eval {
PVE::Storage::Plugin::file_size_info($fn, undef, $format);
};
($size, undef, $used, $parent, $ctime) =
eval { PVE::Storage::Plugin::file_size_info($fn, undef, $format); };
if (my $err = $@) {
die $err if $err !~ m/Image is not in \S+ format$/;
warn "image '$fn' is not in expected format '$format', querying as raw\n";
@ -688,12 +686,16 @@ sub list_images {
next if !defined($size);
if ($vollist) {
next if ! grep { $_ eq $volid } @$vollist;
next if !grep { $_ eq $volid } @$vollist;
}
my $info = {
volid => $volid, format => $format,
size => $size, vmid => $owner, used => $used, parent => $parent,
volid => $volid,
format => $format,
size => $size,
vmid => $owner,
used => $used,
parent => $parent,
};
$info->{ctime} = $ctime if $ctime;
@ -730,13 +732,7 @@ sub volume_import_formats {
# Same as export-formats, beware the parameter order:
return volume_export_formats(
$class,
$scfg,
$storeid,
$volname,
$snapshot,
$base_snapshot,
$with_snapshots,
$class, $scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots,
);
}
@ -787,16 +783,20 @@ sub volume_export {
push @$cmd, (map { "$path\@$_" } ($with_snapshots // [])->@*);
push @$cmd, $path if !defined($base_snapshot);
} else {
foreach_snapshot_of_subvol($path, sub {
foreach_snapshot_of_subvol(
$path,
sub {
my ($snap_name) = @_;
# NOTE: if there is a $snapshot specified via the arguments, it is added last below.
push @$cmd, "$path\@$snap_name" if !(defined($snapshot) && $snap_name eq $snapshot);
});
push @$cmd, "$path\@$snap_name"
if !(defined($snapshot) && $snap_name eq $snapshot);
},
);
}
$path .= "\@$snapshot" if defined($snapshot);
push @$cmd, $path;
run_command($cmd, output => '>&'.fileno($fh));
run_command($cmd, output => '>&' . fileno($fh));
return;
}
@ -858,7 +858,10 @@ sub volume_import {
my $dh = IO::Dir->new($tmppath)
or die "failed to open temporary receive directory '$tmppath' - $!\n";
eval {
run_command(['btrfs', '-q', 'receive', '-e', '--', $tmppath], input => '<&'.fileno($fh));
run_command(
['btrfs', '-q', 'receive', '-e', '--', $tmppath],
input => '<&' . fileno($fh),
);
# Analyze the received subvolumes;
my ($diskname, $found_snapshot, @snapshots);
@ -891,38 +894,39 @@ sub volume_import {
# Rotate the disk into place, first the current state:
# Note that read-only subvolumes cannot be moved into different directories, but for the
# "current" state we also want a writable copy, so start with that:
$class->btrfs_cmd(['property', 'set', '-f', "$tmppath/$diskname\@$snapshot", 'ro', 'false']);
$class->btrfs_cmd(
['property', 'set', '-f', "$tmppath/$diskname\@$snapshot", 'ro', 'false']);
PVE::Tools::renameat2(
-1,
"$tmppath/$diskname\@$snapshot",
-1,
$destination,
&PVE::Tools::RENAME_NOREPLACE,
) or die "failed to move received snapshot '$tmppath/$diskname\@$snapshot'"
)
or die "failed to move received snapshot '$tmppath/$diskname\@$snapshot'"
. " into place at '$destination' - $!\n";
# Now recreate the actual snapshot:
$class->btrfs_cmd([
'subvolume',
'snapshot',
'-r',
'--',
$destination,
"$destination\@$snapshot",
'subvolume', 'snapshot', '-r', '--', $destination, "$destination\@$snapshot",
]);
# Now go through the remaining snapshots (if any)
foreach my $snap (@snapshots) {
$class->btrfs_cmd(['property', 'set', '-f', "$tmppath/$diskname\@$snap", 'ro', 'false']);
$class->btrfs_cmd(
['property', 'set', '-f', "$tmppath/$diskname\@$snap", 'ro', 'false']);
PVE::Tools::renameat2(
-1,
"$tmppath/$diskname\@$snap",
-1,
"$destination\@$snap",
&PVE::Tools::RENAME_NOREPLACE,
) or die "failed to move received snapshot '$tmppath/$diskname\@$snap'"
)
or die "failed to move received snapshot '$tmppath/$diskname\@$snap'"
. " into place at '$destination\@$snap' - $!\n";
eval { $class->btrfs_cmd(['property', 'set', "$destination\@$snap", 'ro', 'true']) };
eval {
$class->btrfs_cmd(['property', 'set', "$destination\@$snap", 'ro', 'true']);
};
warn "failed to make $destination\@$snap read-only - $!\n" if $@;
}
};
@ -938,10 +942,11 @@ sub volume_import {
eval { $class->btrfs_cmd(['subvolume', 'delete', '--', "$tmppath/$entry"]) };
warn $@ if $@;
}
$dh->close; undef $dh;
$dh->close;
undef $dh;
}
if (!rmdir($tmppath)) {
warn "failed to remove temporary directory '$tmppath' - $!\n"
warn "failed to remove temporary directory '$tmppath' - $!\n";
}
};
warn $@ if $@;
@ -961,7 +966,9 @@ sub rename_volume {
my $format = ($class->parse_volname($source_volname))[6];
if ($format ne 'raw' && $format ne 'subvol') {
return $class->SUPER::rename_volume($scfg, $storeid, $source_volname, $target_vmid, $target_volname);
return $class->SUPER::rename_volume(
$scfg, $storeid, $source_volname, $target_vmid, $target_volname,
);
}
$target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format, 1)
@ -978,12 +985,18 @@ sub rename_volume {
my $new_path = "${basedir}/${target_dir}";
die "target volume '${target_volname}' already exists\n" if -e $new_path;
rename $old_path, $new_path ||
die "rename '$old_path' to '$new_path' failed - $!\n";
rename $old_path, $new_path
|| die "rename '$old_path' to '$new_path' failed - $!\n";
return "${storeid}:$target_volname";
}
sub rename_snapshot {
my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) = @_;
die "rename_snapshot is not supported for $class";
}
sub get_import_metadata {
return PVE::Storage::DirPlugin::get_import_metadata(@_);
}


@ -16,7 +16,7 @@ use base qw(PVE::Storage::Plugin);
sub cifs_is_mounted : prototype($$) {
my ($scfg, $mountdata) = @_;
my ($mountpoint, $server, $share) = $scfg->@{'path', 'server', 'share'};
my ($mountpoint, $server, $share) = $scfg->@{ 'path', 'server', 'share' };
my $subdir = $scfg->{subdir} // '';
$server = "[$server]" if Net::IP::ip_is_ipv6($server);
@ -24,9 +24,9 @@ sub cifs_is_mounted : prototype($$) {
$mountdata = PVE::ProcFSTools::parse_proc_mounts() if !$mountdata;
return $mountpoint if grep {
$_->[2] =~ /^cifs/ &&
$_->[0] =~ m|^\Q$source\E/?$| &&
$_->[1] eq $mountpoint
$_->[2] =~ /^cifs/
&& $_->[0] =~ m|^\Q$source\E/?$|
&& $_->[1] eq $mountpoint
} @$mountdata;
return undef;
}
@ -69,7 +69,7 @@ sub get_cred_file {
sub cifs_mount : prototype($$$$$) {
my ($scfg, $storeid, $smbver, $user, $domain) = @_;
my ($mountpoint, $server, $share, $options) = $scfg->@{'path', 'server', 'share', 'options'};
my ($mountpoint, $server, $share, $options) = $scfg->@{ 'path', 'server', 'share', 'options' };
my $subdir = $scfg->{subdir} // '';
$server = "[$server]" if Net::IP::ip_is_ipv6($server);
@ -98,9 +98,19 @@ sub type {
sub plugindata {
return {
content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1,
backup => 1, snippets => 1, import => 1}, { images => 1 }],
format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
content => [
{
images => 1,
rootdir => 1,
vztmpl => 1,
iso => 1,
backup => 1,
snippets => 1,
import => 1,
},
{ images => 1 },
],
format => [{ raw => 1, qcow2 => 1, vmdk => 1 }, 'raw'],
'sensitive-properties' => { password => 1 },
};
}
@ -123,8 +133,9 @@ sub properties {
maxLength => 256,
},
smbversion => {
description => "SMB protocol version. 'default' if not set, negotiates the highest SMB2+"
." version supported by both the client and server.",
description =>
"SMB protocol version. 'default' if not set, negotiates the highest SMB2+"
. " version supported by both the client and server.",
type => 'string',
default => 'default',
enum => ['default', '2.0', '2.1', '3', '3.0', '3.11'],
@ -142,25 +153,24 @@ sub options {
subdir => { optional => 1 },
nodes => { optional => 1 },
disable => { optional => 1 },
maxfiles => { optional => 1 },
'prune-backups' => { optional => 1 },
'max-protected-backups' => { optional => 1 },
content => { optional => 1 },
format => { optional => 1 },
username => { optional => 1 },
password => { optional => 1},
domain => { optional => 1},
smbversion => { optional => 1},
password => { optional => 1 },
domain => { optional => 1 },
smbversion => { optional => 1 },
mkdir => { optional => 1 },
'create-base-path' => { optional => 1 },
'create-subdirs' => { optional => 1 },
bwlimit => { optional => 1 },
preallocation => { optional => 1 },
options => { optional => 1 },
'snapshot-as-volume-chain' => { optional => 1, fixed => 1 },
};
}
sub check_config {
my ($class, $sectionId, $config, $create, $skipSchemaCheck) = @_;
@ -235,11 +245,10 @@ sub activate_storage {
$class->config_aware_base_mkdir($scfg, $path);
die "unable to activate storage '$storeid' - " .
"directory '$path' does not exist\n" if ! -d $path;
die "unable to activate storage '$storeid' - " . "directory '$path' does not exist\n"
if !-d $path;
cifs_mount($scfg, $storeid, $scfg->{smbversion},
$scfg->{username}, $scfg->{domain});
cifs_mount($scfg, $storeid, $scfg->{smbversion}, $scfg->{username}, $scfg->{domain});
}
$class->SUPER::activate_storage($storeid, $scfg, $cache);
@ -262,7 +271,7 @@ sub deactivate_storage {
sub check_connection {
my ($class, $storeid, $scfg) = @_;
my $servicename = '//'.$scfg->{server}.'/'.$scfg->{share};
my $servicename = '//' . $scfg->{server} . '/' . $scfg->{share};
my $cmd = ['/usr/bin/smbclient', $servicename, '-d', '0'];
@ -275,18 +284,21 @@ sub check_connection {
push @$cmd, '-U', $scfg->{username}, '-A', $cred_file;
push @$cmd, '-W', $scfg->{domain} if $scfg->{domain};
} else {
push @$cmd, '-U', 'Guest','-N';
push @$cmd, '-U', 'Guest', '-N';
}
push @$cmd, '-c', 'echo 1 0';
my $out_str;
my $out = sub { $out_str .= shift };
eval { run_command($cmd, timeout => 10, outfunc => $out, errfunc => sub {}) };
eval {
run_command($cmd, timeout => 10, outfunc => $out, errfunc => sub { });
};
if (my $err = $@) {
die "$out_str\n" if defined($out_str) &&
($out_str =~ m/NT_STATUS_(ACCESS_DENIED|INVALID_PARAMETER|LOGON_FAILURE)/);
die "$out_str\n"
if defined($out_str)
&& ($out_str =~ m/NT_STATUS_(ACCESS_DENIED|INVALID_PARAMETER|LOGON_FAILURE)/);
return 0;
}
@ -319,4 +331,8 @@ sub get_import_metadata {
return PVE::Storage::DirPlugin::get_import_metadata(@_);
}
sub volume_qemu_snapshot_method {
return PVE::Storage::DirPlugin::volume_qemu_snapshot_method(@_);
}
1;
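For reference, check_connection above only probes the share with smbclient; without configured credentials the assembled command is equivalent to the following (server and share name are made up):

my $cmd = [
    '/usr/bin/smbclient', '//nas.example.com/backup', '-d', '0',
    '-U', 'Guest', '-N', # no credential file configured, fall back to guest access
    '-c', 'echo 1 0',
];
# run with a 10 second timeout; NT_STATUS_ACCESS_DENIED and similar output is treated as fatal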


@ -27,9 +27,9 @@ sub cephfs_is_mounted {
$mountdata = PVE::ProcFSTools::parse_proc_mounts() if !$mountdata;
return $mountpoint if grep {
$_->[2] =~ m#^ceph|fuse\.ceph-fuse# &&
$_->[0] =~ m#\Q:$subdir\E$|^ceph-fuse$# &&
$_->[1] eq $mountpoint
$_->[2] =~ m#^ceph|fuse\.ceph-fuse#
&& $_->[0] =~ m#\Q:$subdir\E$|^ceph-fuse$#
&& $_->[1] eq $mountpoint
} @$mountdata;
warn "A filesystem is already mounted on $mountpoint\n"
@ -42,11 +42,11 @@ sub cephfs_is_mounted {
sub systemd_netmount {
my ($where, $type, $what, $opts) = @_;
# don't do default deps, systemd v241 generator produces ordering deps on both
# local-fs(-pre) and remote-fs(-pre) targets if we use the required _netdev
# option. Over three corners this gets us an ordering cycle on shutdown, which
# may make shutdown hang if the random cycle breaking hits the "wrong" unit to
# delete.
# don't do default deps, systemd v241 generator produces ordering deps on both
# local-fs(-pre) and remote-fs(-pre) targets if we use the required _netdev
# option. Over three corners this gets us an ordering cycle on shutdown, which
# may make shutdown hang if the random cycle breaking hits the "wrong" unit to
# delete.
my $unit = <<"EOF";
[Unit]
Description=${where}
@ -116,8 +116,8 @@ sub type {
sub plugindata {
return {
content => [ { vztmpl => 1, iso => 1, backup => 1, snippets => 1, import => 1 },
{ backup => 1 }],
content =>
[{ vztmpl => 1, iso => 1, backup => 1, snippets => 1, import => 1 }, { backup => 1 }],
'sensitive-properties' => { keyring => 1 },
};
}
@ -130,7 +130,8 @@ sub properties {
},
'fs-name' => {
description => "The Ceph filesystem name.",
type => 'string', format => 'pve-configid',
type => 'string',
format => 'pve-configid',
},
};
}
@ -139,7 +140,7 @@ sub options {
return {
path => { fixed => 1 },
'content-dirs' => { optional => 1 },
monhost => { optional => 1},
monhost => { optional => 1 },
nodes => { optional => 1 },
subdir => { optional => 1 },
disable => { optional => 1 },
@ -152,7 +153,6 @@ sub options {
'create-subdirs' => { optional => 1 },
fuse => { optional => 1 },
bwlimit => { optional => 1 },
maxfiles => { optional => 1 },
keyring => { optional => 1 },
'prune-backups' => { optional => 1 },
'max-protected-backups' => { optional => 1 },
@ -219,8 +219,8 @@ sub activate_storage {
$class->config_aware_base_mkdir($scfg, $path);
die "unable to activate storage '$storeid' - " .
"directory '$path' does not exist\n" if ! -d $path;
die "unable to activate storage '$storeid' - " . "directory '$path' does not exist\n"
if !-d $path;
cephfs_mount($scfg, $storeid);
}


@ -1,10 +1,10 @@
package PVE::Storage::Common;
use strict;
use warnings;
use v5.36;
use PVE::JSONSchema;
use PVE::Syscall;
use PVE::Tools qw(run_command);
use constant {
FALLOC_FL_KEEP_SIZE => 0x01, # see linux/falloc.h
@ -50,11 +50,14 @@ Possible formats a guest image can have.
# Those formats should either be allowed here or support for them should be phased out (at least in
# the storage layer). Can still be added again in the future, should any plugin provider request it.
PVE::JSONSchema::register_standard_option('pve-storage-image-format', {
PVE::JSONSchema::register_standard_option(
'pve-storage-image-format',
{
type => 'string',
enum => ['raw', 'qcow2', 'subvol', 'vmdk'],
description => "Format of the image.",
});
},
);
=pod
@ -107,4 +110,159 @@ sub deallocate : prototype($$$) {
}
}
my sub run_qemu_img_json {
my ($cmd, $timeout) = @_;
my $json = '';
my $err_output = '';
eval {
run_command(
$cmd,
timeout => $timeout,
outfunc => sub { $json .= shift },
errfunc => sub { $err_output .= shift . "\n" },
);
};
warn $@ if $@;
if ($err_output) {
# if qemu did not output anything to stdout we die with stderr as an error
die $err_output if !$json;
# otherwise we warn about it and try to parse the json
warn $err_output;
}
return $json;
}
=pod
=head3 qemu_img_create
qemu_img_create($fmt, $size, $path, $options)
Create a new qemu image with format C<$fmt> and size C<$size> (in KiB) at the target path C<$path>.
C<$options> currently allows setting the C<preallocation> value.
=cut
sub qemu_img_create {
my ($fmt, $size, $path, $options) = @_;
my $cmd = ['/usr/bin/qemu-img', 'create'];
push @$cmd, '-o', "preallocation=$options->{preallocation}"
if defined($options->{preallocation});
push @$cmd, '-f', $fmt, $path, "${size}K";
run_command($cmd, errmsg => "unable to create image");
}
=pod
=head3 qemu_img_create_qcow2_backed
qemu_img_create_qcow2_backed($path, $backing_path, $backing_format, $options)
Create a new qemu qcow2 image C<$path> using an existing backing image C<$backing_path> with backing format C<$backing_format>.
C<$options> currently allows setting the C<preallocation> value.
=cut
sub qemu_img_create_qcow2_backed {
my ($path, $backing_path, $backing_format, $options) = @_;
my $cmd = [
'/usr/bin/qemu-img',
'create',
'-F',
$backing_format,
'-b',
$backing_path,
'-f',
'qcow2',
$path,
];
# TODO make this configurable for all volumes/types and pass in via $options
my $opts = ['extended_l2=on', 'cluster_size=128k'];
push @$opts, "preallocation=$options->{preallocation}"
if defined($options->{preallocation});
push @$cmd, '-o', join(',', @$opts) if @$opts > 0;
run_command($cmd, errmsg => "unable to create image");
}
=pod
=head3 qemu_img_info
qemu_img_info($filename, $file_format, $timeout, $follow_backing_files)
Returns JSON with information about the qemu image C<$filename>, which has the format C<$file_format>.
If the C<$follow_backing_files> option is set, the returned JSON describes the whole chain
of backing images.
=cut
sub qemu_img_info {
my ($filename, $file_format, $timeout, $follow_backing_files) = @_;
my $cmd = ['/usr/bin/qemu-img', 'info', '--output=json', $filename];
push $cmd->@*, '-f', $file_format if $file_format;
push $cmd->@*, '--backing-chain' if $follow_backing_files;
return run_qemu_img_json($cmd, $timeout);
}
=pod
=head3 qemu_img_measure
qemu_img_measure($size, $fmt, $timeout, $options)
Returns JSON with the maximum required size, including all metadata overhead, for an image with format C<$fmt> and original size C<$size> KiB.
C<$options> allows specifying qemu-img options that might affect the size calculation, such as the cluster size.
=cut
sub qemu_img_measure {
my ($size, $fmt, $timeout, $options) = @_;
die "format is missing" if !$fmt;
my $cmd = ['/usr/bin/qemu-img', 'measure', '--output=json', '--size', "${size}K", '-O', $fmt];
if ($options) {
push $cmd->@*, '-o', join(',', @$options) if @$options > 0;
}
return run_qemu_img_json($cmd, $timeout);
}
=pod
=head3 qemu_img_resize
qemu_img_resize($path, $format, $size, $preallocation, $timeout)
Resize a qemu image C<$path> with format C<$format> to a target Kb size C<$size>.
The default timeout C<$timeout> is 10 seconds if not specified.
C<$preallocation> allows specifying the preallocation option for the resize operation.
=cut
sub qemu_img_resize {
my ($path, $format, $size, $preallocation, $timeout) = @_;
die "format is missing" if !$format;
my $cmd = ['/usr/bin/qemu-img', 'resize'];
push $cmd->@*, "--preallocation=$preallocation" if $preallocation;
push $cmd->@*, '-f', $format, $path, $size;
$timeout = 10 if !$timeout;
run_command($cmd, timeout => $timeout);
}
1;
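A minimal usage sketch for the qemu-img helpers added above (paths and sizes are made up, error handling is left to the caller):

use JSON qw(decode_json);
use PVE::Storage::Common;

# create a 10 GiB qcow2 image with metadata preallocation ($size is passed in KiB)
PVE::Storage::Common::qemu_img_create(
    'qcow2', 10 * 1024 * 1024, '/tmp/example.qcow2', { preallocation => 'metadata' });

# query the image and decode the JSON printed by qemu-img
my $json = PVE::Storage::Common::qemu_img_info('/tmp/example.qcow2', 'qcow2', 10);
my $info = decode_json($json);
print "virtual size: $info->{'virtual-size'} bytes\n";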


@ -24,9 +24,20 @@ sub type {
sub plugindata {
return {
content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, none => 1, import => 1 },
{ images => 1, rootdir => 1 }],
format => [ { raw => 1, qcow2 => 1, vmdk => 1, subvol => 1 } , 'raw' ],
content => [
{
images => 1,
rootdir => 1,
vztmpl => 1,
iso => 1,
backup => 1,
snippets => 1,
none => 1,
import => 1,
},
{ images => 1, rootdir => 1 },
],
format => [{ raw => 1, qcow2 => 1, vmdk => 1, subvol => 1 }, 'raw'],
'sensitive-properties' => {},
};
}
@ -35,11 +46,13 @@ sub properties {
return {
path => {
description => "File system path.",
type => 'string', format => 'pve-storage-path',
type => 'string',
format => 'pve-storage-path',
},
mkdir => {
description => "Create the directory if it doesn't exist and populate it with default sub-dirs."
." NOTE: Deprecated, use the 'create-base-path' and 'create-subdirs' options instead.",
description =>
"Create the directory if it doesn't exist and populate it with default sub-dirs."
. " NOTE: Deprecated, use the 'create-base-path' and 'create-subdirs' options instead.",
type => 'boolean',
default => 'yes',
},
@ -54,10 +67,9 @@ sub properties {
default => 'yes',
},
is_mountpoint => {
description =>
"Assume the given path is an externally managed mountpoint " .
"and consider the storage offline if it is not mounted. ".
"Using a boolean (yes/no) value serves as a shortcut to using the target path in this field.",
description => "Assume the given path is an externally managed mountpoint "
. "and consider the storage offline if it is not mounted. "
. "Using a boolean (yes/no) value serves as a shortcut to using the target path in this field.",
type => 'string',
default => 'no',
},
@ -72,7 +84,6 @@ sub options {
nodes => { optional => 1 },
shared => { optional => 1 },
disable => { optional => 1 },
maxfiles => { optional => 1 },
'prune-backups' => { optional => 1 },
'max-protected-backups' => { optional => 1 },
content => { optional => 1 },
@ -83,6 +94,7 @@ sub options {
is_mountpoint => { optional => 1 },
bwlimit => { optional => 1 },
preallocation => { optional => 1 },
'snapshot-as-volume-chain' => { optional => 1, fixed => 1 },
};
}
@ -201,7 +213,8 @@ sub update_volume_attribute {
or die "unable to create protection file '$protection_path' - $!\n";
close($fh);
} else {
unlink $protection_path or $! == ENOENT
unlink $protection_path
or $! == ENOENT
or die "could not delete protection file '$protection_path' - $!\n";
}
@ -224,7 +237,6 @@ sub status {
return $class->SUPER::status($storeid, $scfg, $cache);
}
sub activate_storage {
my ($class, $storeid, $scfg, $cache) = @_;
@ -232,8 +244,8 @@ sub activate_storage {
my $mp = parse_is_mountpoint($scfg);
if (defined($mp) && !path_is_mounted($mp, $cache->{mountdata})) {
die "unable to activate storage '$storeid' - " .
"directory is expected to be a mount point but is not mounted: '$mp'\n";
die "unable to activate storage '$storeid' - "
. "directory is expected to be a mount point but is not mounted: '$mp'\n";
}
$class->config_aware_base_mkdir($scfg, $path);
@ -242,7 +254,8 @@ sub activate_storage {
sub check_config {
my ($self, $sectionId, $config, $create, $skipSchemaCheck) = @_;
my $opts = PVE::SectionConfig::check_config($self, $sectionId, $config, $create, $skipSchemaCheck);
my $opts =
PVE::SectionConfig::check_config($self, $sectionId, $config, $create, $skipSchemaCheck);
return $opts if !$create;
if ($opts->{path} !~ m|^/[-/a-zA-Z0-9_.@]+$|) {
die "illegal path for directory storage: $opts->{path}\n";
@ -278,7 +291,7 @@ sub get_import_metadata {
if ($isOva) {
$volid = "$storeid:$volname/$path";
} else {
$volid = "$storeid:import/$path",
$volid = "$storeid:import/$path",;
}
$disks->{$id} = {
volid => $volid,
@ -301,4 +314,13 @@ sub get_import_metadata {
};
}
sub volume_qemu_snapshot_method {
my ($class, $storeid, $scfg, $volname) = @_;
my $format = ($class->parse_volname($volname))[6];
return 'storage' if $format ne 'qcow2';
return $scfg->{'snapshot-as-volume-chain'} ? 'mixed' : 'qemu';
}
1;
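The new volume_qemu_snapshot_method helper above is used to decide how snapshots of a running VM are taken for a given volume; a small sketch of the three possible results (storage ID, config and volume name are made up):

# returns 'storage' for any non-qcow2 volume,
# 'qemu' for qcow2 on a storage without 'snapshot-as-volume-chain' (internal qcow2 snapshots),
# and 'mixed' when 'snapshot-as-volume-chain' is enabled
my $scfg = { path => '/mnt/dir', 'snapshot-as-volume-chain' => 1 };
my $method = PVE::Storage::DirPlugin->volume_qemu_snapshot_method(
    'local-dir', $scfg, '100/vm-100-disk-0.qcow2');
# $method eq 'mixed' here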


@ -29,8 +29,8 @@ sub type {
sub plugindata {
return {
content => [ { import => 1 }, { import => 1 }],
format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
content => [{ import => 1 }, { import => 1 }],
format => [{ raw => 1, qcow2 => 1, vmdk => 1 }, 'raw'],
'sensitive-properties' => { password => 1 },
};
}
@ -38,7 +38,8 @@ sub plugindata {
sub properties {
return {
'skip-cert-verification' => {
description => 'Disable TLS certificate verification, only enable on fully trusted networks!',
description =>
'Disable TLS certificate verification, only enable on fully trusted networks!',
type => 'boolean',
default => 'false',
},
@ -54,8 +55,8 @@ sub options {
# FIXME: bwlimit => { optional => 1 },
server => {},
username => {},
password => { optional => 1},
'skip-cert-verification' => { optional => 1},
password => { optional => 1 },
'skip-cert-verification' => { optional => 1 },
port => { optional => 1 },
};
}
@ -210,7 +211,17 @@ sub esxi_mount : prototype($$$;$) {
if (!$pid) {
eval {
undef $rd;
POSIX::setsid();
# Double fork to properly daemonize
POSIX::setsid() or die "failed to create new session: $!\n";
my $pid2 = fork();
die "second fork failed: $!\n" if !defined($pid2);
if ($pid2) {
# First child exits immediately
POSIX::_exit(0);
}
# Second child (grandchild) enters systemd scope
PVE::Systemd::enter_systemd_scope(
$scope_name_base,
"Proxmox VE FUSE mount for ESXi storage $storeid (server $host)",
@ -241,7 +252,9 @@ sub esxi_mount : prototype($$$;$) {
print {$wr} "ERROR: $err";
}
POSIX::_exit(1);
};
}
# Parent waits for the first child to exit
waitpid($pid, 0);
undef $wr;
my $result = do { local $/ = undef; <$rd> };
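The fork/setsid/fork sequence above is the classic double-fork daemonization pattern: the intermediate child exits immediately so the grandchild is reparented to init (PID 1), and the parent only has to reap the intermediate child. A minimal standalone sketch of the same pattern (the daemon command is illustrative):

use POSIX qw(setsid _exit);

my $pid = fork() // die "fork failed: $!\n";
if (!$pid) {
    setsid() or die "setsid failed: $!\n";
    my $pid2 = fork() // die "second fork failed: $!\n";
    _exit(0) if $pid2;                            # intermediate child exits right away
    exec('/usr/bin/some-daemon') or _exit(1);     # grandchild, now owned by PID 1
}
waitpid($pid, 0);    # reap the intermediate child, no zombie is left behind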
@ -261,7 +274,7 @@ sub esxi_unmount : prototype($$$) {
my $scope = "${scope_name_base}.scope";
my $mount_dir = mount_dir($storeid);
my %silence_std_outs = (outfunc => sub {}, errfunc => sub {});
my %silence_std_outs = (outfunc => sub { }, errfunc => sub { });
eval { run_command(['/bin/systemctl', 'reset-failed', $scope], %silence_std_outs) };
eval { run_command(['/bin/systemctl', 'stop', $scope], %silence_std_outs) };
run_command(['/bin/umount', $mount_dir]);
@ -291,11 +304,7 @@ sub get_import_metadata : prototype($$$$$) {
my $manifest = $class->get_manifest($storeid, $scfg, 0);
my $contents = file_get_contents($vmx_path);
my $vmx = PVE::Storage::ESXiPlugin::VMX->parse(
$storeid,
$scfg,
$volname,
$contents,
$manifest,
$storeid, $scfg, $volname, $contents, $manifest,
);
return $vmx->get_create_args();
}
@ -306,12 +315,13 @@ sub query_vmdk_size : prototype($;$) {
my $json = eval {
my $json = '';
run_command(['/usr/bin/qemu-img', 'info', '--output=json', $filename],
run_command(
['/usr/bin/qemu-img', 'info', '--output=json', $filename],
timeout => $timeout,
outfunc => sub { $json .= $_[0]; },
errfunc => sub { warn "$_[0]\n"; }
errfunc => sub { warn "$_[0]\n"; },
);
from_json($json)
from_json($json);
};
warn $@ if $@;
@ -447,7 +457,8 @@ sub list_volumes {
my $vm = $vms->{$vm_name};
my $ds_name = $vm->{config}->{datastore};
my $path = $vm->{config}->{path};
push @$res, {
push @$res,
{
content => 'import',
format => 'vmx',
name => $vm_name,
@ -477,7 +488,6 @@ sub path {
die "storage '$class' does not support snapshots\n" if defined $snapname;
# FIXME: activate/mount:
return mount_dir($storeid) . '/' . $volname;
}
@ -499,6 +509,12 @@ sub rename_volume {
die "renaming volumes is not supported for $class\n";
}
sub rename_snapshot {
my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) = @_;
die "rename_snapshot is not supported for $class";
}
sub volume_export_formats {
my ($class, $scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots) = @_;
@ -508,7 +524,8 @@ sub volume_export_formats {
}
sub volume_export {
my ($class, $scfg, $storeid, $fh, $volname, $format, $snapshot, $base_snapshot, $with_snapshots) = @_;
my ($class, $scfg, $storeid, $fh, $volname, $format, $snapshot, $base_snapshot, $with_snapshots)
= @_;
# FIXME: maybe we can support raw+size via `qemu-img dd`?
@ -522,7 +539,18 @@ sub volume_import_formats {
}
sub volume_import {
my ($class, $scfg, $storeid, $fh, $volname, $format, $snapshot, $base_snapshot, $with_snapshots, $allow_rename) = @_;
my (
$class,
$scfg,
$storeid,
$fh,
$volname,
$format,
$snapshot,
$base_snapshot,
$with_snapshots,
$allow_rename,
) = @_;
die "importing not supported for $class\n";
}
@ -555,6 +583,7 @@ sub volume_snapshot_delete {
die "deleting snapshots is not supported for $class\n";
}
sub volume_snapshot_info {
my ($class, $scfg, $storeid, $volname) = @_;
@ -665,7 +694,7 @@ sub config_path_for_vm {
}
die "failed to resolve path for vm '$vm' "
."($dc_name, $cfg->{datastore}, $cfg->{path})\n";
. "($dc_name, $cfg->{datastore}, $cfg->{path})\n";
}
die "no such vm '$vm'\n";
@ -979,14 +1008,15 @@ sub smbios1_uuid {
# vmware stores space separated bytes and has 1 dash in the middle...
$uuid =~ s/[^0-9a-fA-f]//g;
if ($uuid =~ /^
if (
$uuid =~ /^
([0-9a-fA-F]{8})
([0-9a-fA-F]{4})
([0-9a-fA-F]{4})
([0-9a-fA-F]{4})
([0-9a-fA-F]{12})
$/x)
{
$/x
) {
return "$1-$2-$3-$4-$5";
}
return;
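A worked example of the transformation above, using a hypothetical VMware uuid.bios value (space-separated hex bytes with a single dash in the middle):

my $uuid = '56 4d 23 ab cd ef 01 23-45 67 89 ab cd ef 01 23';   # hypothetical value
$uuid =~ s/[^0-9a-fA-F]//g;    # leaves 32 hex digits
# regrouped as 8-4-4-4-12 this yields '564d23ab-cdef-0123-4567-89abcdef0123'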
@ -1053,7 +1083,7 @@ sub get_create_args {
$create_net->{"net$id"} = $param;
});
my %counts = ( scsi => 0, sata => 0, ide => 0 );
my %counts = (scsi => 0, sata => 0, ide => 0);
my $boot_order = '';
@ -1109,7 +1139,7 @@ sub get_create_args {
}
$boot_order .= ';' if length($boot_order);
$boot_order .= $bus.$count;
$boot_order .= $bus . $count;
};
$self->for_each_disk($add_disk);
if (@nvmes) {
@ -1158,7 +1188,7 @@ sub get_create_args {
++$serid;
});
$warn->('guest-is-running') if defined($vminfo) && ($vminfo->{power}//'') ne 'poweredOff';
$warn->('guest-is-running') if defined($vminfo) && ($vminfo->{power} // '') ne 'poweredOff';
return {
type => 'vm',


@ -1,360 +0,0 @@
package PVE::Storage::GlusterfsPlugin;
use strict;
use warnings;
use IO::File;
use File::Path;
use PVE::Tools qw(run_command);
use PVE::ProcFSTools;
use PVE::Network;
use PVE::Storage::Plugin;
use PVE::JSONSchema qw(get_standard_option);
use base qw(PVE::Storage::Plugin);
# Glusterfs helper functions
my $server_test_results = {};
my $get_active_server = sub {
my ($scfg, $return_default_if_offline) = @_;
my $defaultserver = $scfg->{server} ? $scfg->{server} : 'localhost';
if ($return_default_if_offline && !defined($scfg->{server2})) {
# avoid delays (there is no backup server anyways)
return $defaultserver;
}
my $serverlist = [ $defaultserver ];
push @$serverlist, $scfg->{server2} if $scfg->{server2};
my $ctime = time();
foreach my $server (@$serverlist) {
my $stat = $server_test_results->{$server};
return $server if $stat && $stat->{active} && (($ctime - $stat->{time}) <= 2);
}
foreach my $server (@$serverlist) {
my $status = 0;
if ($server && $server ne 'localhost' && $server ne '127.0.0.1' && $server ne '::1') {
# ping the gluster daemon default port (24007) as heuristic
$status = PVE::Network::tcp_ping($server, 24007, 2);
} else {
my $parser = sub {
my $line = shift;
if ($line =~ m/Status: Started$/) {
$status = 1;
}
};
my $cmd = ['/usr/sbin/gluster', 'volume', 'info', $scfg->{volume}];
run_command($cmd, errmsg => "glusterfs error", errfunc => sub {}, outfunc => $parser);
}
$server_test_results->{$server} = { time => time(), active => $status };
return $server if $status;
}
return $defaultserver if $return_default_if_offline;
return undef;
};
sub glusterfs_is_mounted {
my ($volume, $mountpoint, $mountdata) = @_;
$mountdata = PVE::ProcFSTools::parse_proc_mounts() if !$mountdata;
return $mountpoint if grep {
$_->[2] eq 'fuse.glusterfs' &&
$_->[0] =~ /^\S+:\Q$volume\E$/ &&
$_->[1] eq $mountpoint
} @$mountdata;
return undef;
}
sub glusterfs_mount {
my ($server, $volume, $mountpoint) = @_;
my $source = "$server:$volume";
my $cmd = ['/bin/mount', '-t', 'glusterfs', $source, $mountpoint];
run_command($cmd, errmsg => "mount error");
}
# Configuration
sub type {
return 'glusterfs';
}
sub plugindata {
return {
content => [ { images => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, import => 1},
{ images => 1 }],
format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
'sensitive-properties' => {},
};
}
sub properties {
return {
volume => {
description => "Glusterfs Volume.",
type => 'string',
},
server2 => {
description => "Backup volfile server IP or DNS name.",
type => 'string', format => 'pve-storage-server',
requires => 'server',
},
transport => {
description => "Gluster transport: tcp or rdma",
type => 'string',
enum => ['tcp', 'rdma', 'unix'],
},
};
}
sub options {
return {
path => { fixed => 1 },
server => { optional => 1 },
server2 => { optional => 1 },
volume => { fixed => 1 },
transport => { optional => 1 },
nodes => { optional => 1 },
disable => { optional => 1 },
maxfiles => { optional => 1 },
'prune-backups' => { optional => 1 },
'max-protected-backups' => { optional => 1 },
content => { optional => 1 },
format => { optional => 1 },
mkdir => { optional => 1 },
'create-base-path' => { optional => 1 },
'create-subdirs' => { optional => 1 },
bwlimit => { optional => 1 },
preallocation => { optional => 1 },
};
}
sub check_config {
my ($class, $sectionId, $config, $create, $skipSchemaCheck) = @_;
$config->{path} = "/mnt/pve/$sectionId" if $create && !$config->{path};
return $class->SUPER::check_config($sectionId, $config, $create, $skipSchemaCheck);
}
# Storage implementation
sub parse_name_dir {
my $name = shift;
if ($name =~ m!^((base-)?[^/\s]+\.(raw|qcow2|vmdk))$!) {
return ($1, $3, $2);
}
die "unable to parse volume filename '$name'\n";
}
sub path {
my ($class, $scfg, $volname, $storeid, $snapname) = @_;
my ($vtype, $name, $vmid, undef, undef, $isBase, $format) =
$class->parse_volname($volname);
# Note: qcow2/qed has internal snapshot, so path is always
# the same (with or without snapshot => same file).
die "can't snapshot this image format\n"
if defined($snapname) && $format !~ m/^(qcow2|qed)$/;
my $path = undef;
if ($vtype eq 'images') {
my $server = &$get_active_server($scfg, 1);
my $glustervolume = $scfg->{volume};
my $transport = $scfg->{transport};
my $protocol = "gluster";
if ($transport) {
$protocol = "gluster+$transport";
}
$path = "$protocol://$server/$glustervolume/images/$vmid/$name";
} else {
my $dir = $class->get_subdir($scfg, $vtype);
$path = "$dir/$name";
}
return wantarray ? ($path, $vmid, $vtype) : $path;
}
sub clone_image {
my ($class, $scfg, $storeid, $volname, $vmid, $snap) = @_;
die "storage definition has no path\n" if !$scfg->{path};
my ($vtype, $basename, $basevmid, undef, undef, $isBase, $format) =
$class->parse_volname($volname);
die "clone_image on wrong vtype '$vtype'\n" if $vtype ne 'images';
die "this storage type does not support clone_image on snapshot\n" if $snap;
die "this storage type does not support clone_image on subvolumes\n" if $format eq 'subvol';
die "clone_image only works on base images\n" if !$isBase;
my $imagedir = $class->get_subdir($scfg, 'images');
$imagedir .= "/$vmid";
mkpath $imagedir;
my $name = $class->find_free_diskname($storeid, $scfg, $vmid, "qcow2", 1);
warn "clone $volname: $vtype, $name, $vmid to $name (base=../$basevmid/$basename)\n";
my $path = "$imagedir/$name";
die "disk image '$path' already exists\n" if -e $path;
my $server = &$get_active_server($scfg, 1);
my $glustervolume = $scfg->{volume};
my $volumepath = "gluster://$server/$glustervolume/images/$vmid/$name";
my $cmd = ['/usr/bin/qemu-img', 'create', '-b', "../$basevmid/$basename",
'-F', $format, '-f', 'qcow2', $volumepath];
run_command($cmd, errmsg => "unable to create image");
return "$basevmid/$basename/$vmid/$name";
}
sub alloc_image {
my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
my $imagedir = $class->get_subdir($scfg, 'images');
$imagedir .= "/$vmid";
mkpath $imagedir;
$name = $class->find_free_diskname($storeid, $scfg, $vmid, $fmt, 1) if !$name;
my (undef, $tmpfmt) = parse_name_dir($name);
die "illegal name '$name' - wrong extension for format ('$tmpfmt != '$fmt')\n"
if $tmpfmt ne $fmt;
my $path = "$imagedir/$name";
die "disk image '$path' already exists\n" if -e $path;
my $server = &$get_active_server($scfg, 1);
my $glustervolume = $scfg->{volume};
my $volumepath = "gluster://$server/$glustervolume/images/$vmid/$name";
my $cmd = ['/usr/bin/qemu-img', 'create'];
my $prealloc_opt = PVE::Storage::Plugin::preallocation_cmd_option($scfg, $fmt);
push @$cmd, '-o', $prealloc_opt if defined($prealloc_opt);
push @$cmd, '-f', $fmt, $volumepath, "${size}K";
eval { run_command($cmd, errmsg => "unable to create image"); };
if ($@) {
unlink $path;
rmdir $imagedir;
die "$@";
}
return "$vmid/$name";
}
sub status {
my ($class, $storeid, $scfg, $cache) = @_;
$cache->{mountdata} = PVE::ProcFSTools::parse_proc_mounts()
if !$cache->{mountdata};
my $path = $scfg->{path};
my $volume = $scfg->{volume};
return undef if !glusterfs_is_mounted($volume, $path, $cache->{mountdata});
return $class->SUPER::status($storeid, $scfg, $cache);
}
sub activate_storage {
my ($class, $storeid, $scfg, $cache) = @_;
$cache->{mountdata} = PVE::ProcFSTools::parse_proc_mounts()
if !$cache->{mountdata};
my $path = $scfg->{path};
my $volume = $scfg->{volume};
if (!glusterfs_is_mounted($volume, $path, $cache->{mountdata})) {
$class->config_aware_base_mkdir($scfg, $path);
die "unable to activate storage '$storeid' - " .
"directory '$path' does not exist\n" if ! -d $path;
my $server = &$get_active_server($scfg, 1);
glusterfs_mount($server, $volume, $path);
}
$class->SUPER::activate_storage($storeid, $scfg, $cache);
}
sub deactivate_storage {
my ($class, $storeid, $scfg, $cache) = @_;
$cache->{mountdata} = PVE::ProcFSTools::parse_proc_mounts()
if !$cache->{mountdata};
my $path = $scfg->{path};
my $volume = $scfg->{volume};
if (glusterfs_is_mounted($volume, $path, $cache->{mountdata})) {
my $cmd = ['/bin/umount', $path];
run_command($cmd, errmsg => 'umount error');
}
}
sub activate_volume {
my ($class, $storeid, $scfg, $volname, $snapname, $cache) = @_;
# do nothing by default
}
sub deactivate_volume {
my ($class, $storeid, $scfg, $volname, $snapname, $cache) = @_;
# do nothing by default
}
sub check_connection {
my ($class, $storeid, $scfg, $cache) = @_;
my $server = &$get_active_server($scfg);
return defined($server) ? 1 : 0;
}
sub get_import_metadata {
return PVE::Storage::DirPlugin::get_import_metadata(@_);
}
1;


@ -18,20 +18,25 @@ sub iscsi_ls {
my ($scfg) = @_;
my $portal = $scfg->{portal};
my $cmd = ['/usr/bin/iscsi-ls', '-s', 'iscsi://'.$portal ];
my $cmd = ['/usr/bin/iscsi-ls', '-s', 'iscsi://' . $portal];
my $list = {};
my %unittobytes = (
"k" => 1024,
"M" => 1024*1024,
"G" => 1024*1024*1024,
"T" => 1024*1024*1024*1024
"M" => 1024 * 1024,
"G" => 1024 * 1024 * 1024,
"T" => 1024 * 1024 * 1024 * 1024,
);
eval {
run_command($cmd, errmsg => "iscsi error", errfunc => sub {}, outfunc => sub {
run_command(
$cmd,
errmsg => "iscsi error",
errfunc => sub { },
outfunc => sub {
my $line = shift;
$line = trim($line);
if( $line =~ /Lun:(\d+)\s+([A-Za-z0-9\-\_\.\:]*)\s+\(Size:([0-9\.]*)(k|M|G|T)\)/ ) {
my $image = "lun".$1;
if ($line =~ /Lun:(\d+)\s+([A-Za-z0-9\-\_\.\:]*)\s+\(Size:([0-9\.]*)(k|M|G|T)\)/
) {
my $image = "lun" . $1;
my $size = $3;
my $unit = $4;
@ -41,7 +46,8 @@ sub iscsi_ls {
format => 'raw',
};
}
});
},
);
};
my $err = $@;
@ -58,7 +64,7 @@ sub type {
sub plugindata {
return {
content => [ {images => 1, none => 1}, { images => 1 }],
content => [{ images => 1, none => 1 }, { images => 1 }],
select_existing => 1,
'sensitive-properties' => {},
};
@ -68,9 +74,9 @@ sub options {
return {
portal => { fixed => 1 },
target => { fixed => 1 },
nodes => { optional => 1},
disable => { optional => 1},
content => { optional => 1},
nodes => { optional => 1 },
disable => { optional => 1 },
content => { optional => 1 },
bwlimit => { optional => 1 },
};
}
@ -80,7 +86,6 @@ sub options {
sub parse_volname {
my ($class, $volname) = @_;
if ($volname =~ m/^lun(\d+)$/) {
return ('images', $1, undef, undef, undef, undef, 'raw');
}
@ -92,7 +97,7 @@ sub parse_volname {
sub path {
my ($class, $scfg, $volname, $storeid, $snapname) = @_;
die "volume snapshot is not possible on iscsi device"
die "volume snapshot is not possible on iscsi device\n"
if defined($snapname);
my ($vtype, $lun, $vmid) = $class->parse_volname($volname);
@ -105,6 +110,23 @@ sub path {
return ($path, $vmid, $vtype);
}
sub qemu_blockdev_options {
my ($class, $scfg, $storeid, $volname, $machine_version, $options) = @_;
die "volume snapshot is not possible on iscsi device\n"
if $options->{'snapshot-name'};
my $lun = ($class->parse_volname($volname))[1];
return {
driver => 'iscsi',
transport => 'tcp',
portal => "$scfg->{portal}",
target => "$scfg->{target}",
lun => int($lun),
};
}
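A sketch of the structure returned above, for a hypothetical storage with portal '192.0.2.10', target 'iqn.2003-01.org.example:storage' and the volume 'lun1':

my $blockdev = {
    driver    => 'iscsi',
    transport => 'tcp',
    portal    => '192.0.2.10',
    target    => 'iqn.2003-01.org.example:storage',
    lun       => 1,
};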
sub create_base {
my ($class, $storeid, $scfg, $volname) = @_;
@ -164,7 +186,7 @@ sub status {
my $free = 0;
my $used = 0;
my $active = 1;
return ($total,$free,$used,$active);
return ($total, $free, $used, $active);
return undef;
}
@ -182,7 +204,7 @@ sub deactivate_storage {
sub activate_volume {
my ($class, $storeid, $scfg, $volname, $snapname, $cache) = @_;
die "volume snapshot is not possible on iscsi device" if $snapname;
die "volume snapshot is not possible on iscsi device\n" if $snapname;
return 1;
}
@ -190,7 +212,7 @@ sub activate_volume {
sub deactivate_volume {
my ($class, $storeid, $scfg, $volname, $snapname, $cache) = @_;
die "volume snapshot is not possible on iscsi device" if $snapname;
die "volume snapshot is not possible on iscsi device\n" if $snapname;
return 1;
}
@ -206,38 +228,37 @@ sub volume_size_info {
sub volume_resize {
my ($class, $scfg, $storeid, $volname, $size, $running) = @_;
die "volume resize is not possible on iscsi device";
die "volume resize is not possible on iscsi device\n";
}
sub volume_snapshot {
my ($class, $scfg, $storeid, $volname, $snap) = @_;
die "volume snapshot is not possible on iscsi device";
die "volume snapshot is not possible on iscsi device\n";
}
sub volume_snapshot_rollback {
my ($class, $scfg, $storeid, $volname, $snap) = @_;
die "volume snapshot rollback is not possible on iscsi device";
die "volume snapshot rollback is not possible on iscsi device\n";
}
sub volume_snapshot_delete {
my ($class, $scfg, $storeid, $volname, $snap) = @_;
die "volume snapshot delete is not possible on iscsi device";
die "volume snapshot delete is not possible on iscsi device\n";
}
sub volume_has_feature {
my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running) = @_;
my $features = {
copy => { current => 1},
copy => { current => 1 },
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
$class->parse_volname($volname);
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
my $key = undef;
if($snapname){
if ($snapname) {
$key = 'snap';
}else{
} else {
$key = $isBase ? 'base' : 'current';
}
return 1 if $features->{$feature}->{$key};
@ -290,7 +311,7 @@ sub volume_export {
PVE::Storage::Plugin::write_common_header($fh, $size);
run_command(
['qemu-img', 'dd', 'bs=64k', "if=$file", '-f', 'raw', '-O', 'raw'],
output => '>&'.fileno($fh),
output => '>&' . fileno($fh),
);
return;
}


@ -9,7 +9,8 @@ use IO::File;
use PVE::JSONSchema qw(get_standard_option);
use PVE::Storage::Plugin;
use PVE::Tools qw(run_command file_read_firstline trim dir_glob_regex dir_glob_foreach $IPV4RE $IPV6RE);
use PVE::Tools
qw(run_command file_read_firstline trim dir_glob_regex dir_glob_foreach $IPV4RE $IPV6RE);
use base qw(PVE::Storage::Plugin);
@ -32,7 +33,7 @@ my sub assert_iscsi_support {
}
# Example: 192.168.122.252:3260,1 iqn.2003-01.org.linux-iscsi.proxmox-nfs.x8664:sn.00567885ba8f
my $ISCSI_TARGET_RE = qr/^((?:$IPV4RE|\[$IPV6RE\]):\d+)\,\S+\s+(\S+)\s*$/;
my $ISCSI_TARGET_RE = qr/^(\S+:\d+)\,\S+\s+(\S+)\s*$/;
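The relaxed pattern matches both IP-based and hostname-based portals; a quick sketch, with the hostname line purely illustrative:

for my $line (
    '192.168.122.252:3260,1 iqn.2003-01.org.linux-iscsi.proxmox-nfs.x8664:sn.00567885ba8f',
    'nas.example.com:3260,1 iqn.2000-01.com.example:storage1',    # hostname portal
) {
    if ($line =~ $ISCSI_TARGET_RE) {
        my ($portal, $target) = ($1, $2);    # e.g. 'nas.example.com:3260' and the IQN
    }
}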
sub iscsi_session_list {
assert_iscsi_support();
@ -41,15 +42,19 @@ sub iscsi_session_list {
my $res = {};
eval {
run_command($cmd, errmsg => 'iscsi session scan failed', outfunc => sub {
run_command(
$cmd,
errmsg => 'iscsi session scan failed',
outfunc => sub {
my $line = shift;
# example: tcp: [1] 192.168.122.252:3260,1 iqn.2003-01.org.linux-iscsi.proxmox-nfs.x8664:sn.00567885ba8f (non-flash)
if ($line =~ m/^tcp:\s+\[(\S+)\]\s+((?:$IPV4RE|\[$IPV6RE\]):\d+)\,\S+\s+(\S+)\s+\S+?\s*$/) {
if ($line =~ m/^tcp:\s+\[(\S+)\]\s+(\S+:\d+)\,\S+\s+(\S+)\s+\S+?\s*$/) {
my ($session_id, $portal, $target) = ($1, $2, $3);
# there can be several sessions per target (multipath)
push @{$res->{$target}}, { session_id => $session_id, portal => $portal };
push @{ $res->{$target} }, { session_id => $session_id, portal => $portal };
}
});
},
);
};
if (my $err = $@) {
die $err if $err !~ m/: No active sessions.$/i;
@ -95,7 +100,9 @@ sub iscsi_portals {
my $res = [];
my $cmd = [$ISCSIADM, '--mode', 'node'];
eval {
run_command($cmd, outfunc => sub {
run_command(
$cmd,
outfunc => sub {
my $line = shift;
if ($line =~ $ISCSI_TARGET_RE) {
@ -104,14 +111,15 @@ sub iscsi_portals {
push @{$res}, $portal;
}
}
});
},
);
};
my $err = $@;
warn $err if $err;
if ($err || !scalar(@$res)) {
return [ $portal_in ];
return [$portal_in];
} else {
return $res;
}
@ -128,16 +136,19 @@ sub iscsi_discovery {
my $cmd = [$ISCSIADM, '--mode', 'discovery', '--type', 'sendtargets', '--portal', $portal];
eval {
run_command($cmd, outfunc => sub {
run_command(
$cmd,
outfunc => sub {
my $line = shift;
if ($line =~ $ISCSI_TARGET_RE) {
my ($portal, $target) = ($1, $2);
# one target can have more than one portal (multipath)
# and sendtargets should return all of them in single call
push @{$res->{$target}}, $portal;
push @{ $res->{$target} }, $portal;
}
});
},
);
};
# In case of multipath we can stop after receiving targets from any available portal
@ -159,11 +170,16 @@ sub iscsi_login {
eval {
my $cmd = [
$ISCSIADM,
'--mode', 'node',
'--targetname', $target,
'--op', 'update',
'--name', 'node.session.initial_login_retry_max',
'--value', '0',
'--mode',
'node',
'--targetname',
$target,
'--op',
'update',
'--name',
'node.session.initial_login_retry_max',
'--value',
'0',
];
run_command($cmd);
};
@ -204,7 +220,9 @@ sub iscsi_session_rescan {
foreach my $session (@$session_list) {
my $cmd = [$ISCSIADM, '--mode', 'session', '--sid', $session->{session_id}, '--rescan'];
eval { run_command($cmd, outfunc => sub {}); };
eval {
run_command($cmd, outfunc => sub { });
};
warn $@ if $@;
}
}
@ -220,11 +238,11 @@ sub load_stable_scsi_paths {
# exclude filenames with 'part' in the name (same disk, just partitions)
# use only filenames with scsi (with multipath the same device shows up
# with dm-uuid-mpath, dm-name and scsi in its name)
if($tmp !~ m/-part\d+$/ && ($tmp =~ m/^scsi-/ || $tmp =~ m/^dm-uuid-mpath-/)) {
if ($tmp !~ m/-part\d+$/ && ($tmp =~ m/^scsi-/ || $tmp =~ m/^dm-uuid-mpath-/)) {
my $path = "$stabledir/$tmp";
my $bdevdest = readlink($path);
if ($bdevdest && $bdevdest =~ m|^../../([^/]+)|) {
$stable_paths->{$1}=$tmp;
$stable_paths->{$1} = $tmp;
}
}
}
@ -241,7 +259,10 @@ sub iscsi_device_list {
my $stable_paths = load_stable_scsi_paths();
dir_glob_foreach($dirname, 'session(\d+)', sub {
dir_glob_foreach(
$dirname,
'session(\d+)',
sub {
my ($ent, $session) = @_;
my $target = file_read_firstline("$dirname/$ent/targetname");
@ -250,7 +271,10 @@ sub iscsi_device_list {
my (undef, $host) = dir_glob_regex("$dirname/$ent/device", 'target(\d+):.*');
return if !defined($host);
dir_glob_foreach("/sys/bus/scsi/devices", "$host:" . '(\d+):(\d+):(\d+)', sub {
dir_glob_foreach(
"/sys/bus/scsi/devices",
"$host:" . '(\d+):(\d+):(\d+)',
sub {
my ($tmp, $channel, $id, $lun) = @_;
my $type = file_read_firstline("/sys/bus/scsi/devices/$tmp/type");
@ -258,15 +282,18 @@ sub iscsi_device_list {
my $bdev;
if (-d "/sys/bus/scsi/devices/$tmp/block") { # newer kernels
(undef, $bdev) = dir_glob_regex("/sys/bus/scsi/devices/$tmp/block/", '([A-Za-z]\S*)');
(undef, $bdev) =
dir_glob_regex("/sys/bus/scsi/devices/$tmp/block/", '([A-Za-z]\S*)');
} else {
(undef, $bdev) = dir_glob_regex("/sys/bus/scsi/devices/$tmp", 'block:(\S+)');
(undef, $bdev) =
dir_glob_regex("/sys/bus/scsi/devices/$tmp", 'block:(\S+)');
}
return if !$bdev;
#check multipath
if (-d "/sys/block/$bdev/holders") {
my $multipathdev = dir_glob_regex("/sys/block/$bdev/holders", '[A-Za-z]\S*');
my $multipathdev =
dir_glob_regex("/sys/block/$bdev/holders", '[A-Za-z]\S*');
$bdev = $multipathdev if $multipathdev;
}
@ -288,9 +315,11 @@ sub iscsi_device_list {
};
#print "TEST: $target $session $host,$bus,$tg,$lun $blockdev\n";
});
},
);
});
},
);
return $res;
}
@ -303,7 +332,7 @@ sub type {
sub plugindata {
return {
content => [ {images => 1, none => 1}, { images => 1 }],
content => [{ images => 1, none => 1 }, { images => 1 }],
select_existing => 1,
'sensitive-properties' => {},
};
@ -317,7 +346,8 @@ sub properties {
},
portal => {
description => "iSCSI portal (IP or DNS name with optional port).",
type => 'string', format => 'pve-storage-portal-dns',
type => 'string',
format => 'pve-storage-portal-dns',
},
};
}
@ -326,9 +356,9 @@ sub options {
return {
portal => { fixed => 1 },
target => { fixed => 1 },
nodes => { optional => 1},
disable => { optional => 1},
content => { optional => 1},
nodes => { optional => 1 },
disable => { optional => 1 },
content => { optional => 1 },
bwlimit => { optional => 1 },
};
}
@ -456,7 +486,7 @@ sub activate_storage {
if (!$do_login) {
# We should check that sessions for all portals are available
my $session_portals = [ map { $_->{portal} } (@$sessions) ];
my $session_portals = [map { $_->{portal} } (@$sessions)];
for my $portal (@$portals) {
if (!grep(/^\Q$portal\E$/, @$session_portals)) {
@ -514,15 +544,15 @@ my $udev_query_path = sub {
my $device_path;
my $cmd = [
'udevadm',
'info',
'--query=path',
$dev,
'udevadm', 'info', '--query=path', $dev,
];
eval {
run_command($cmd, outfunc => sub {
run_command(
$cmd,
outfunc => sub {
$device_path = shift;
});
},
);
};
die "failed to query device path for '$dev': $@\n" if $@;
@ -540,7 +570,10 @@ $resolve_virtual_devices = sub {
my $resolved = [];
if ($dev =~ m!^/devices/virtual/block/!) {
dir_glob_foreach("/sys/$dev/slaves", '([^.].+)', sub {
dir_glob_foreach(
"/sys/$dev/slaves",
'([^.].+)',
sub {
my ($slave) = @_;
# don't check devices multiple times
@ -554,7 +587,8 @@ $resolve_virtual_devices = sub {
my $nested_resolved = $resolve_virtual_devices->($path, $visited);
push @$resolved, @$nested_resolved;
});
},
);
} else {
push @$resolved, $dev;
}
@ -570,7 +604,7 @@ sub activate_volume {
die "failed to get realpath for '$path': $!\n" if !$real_path;
# in case $path does not exist or is not a symlink, check if the returned
# $real_path is a block device
die "resolved realpath '$real_path' is not a block device\n" if ! -b $real_path;
die "resolved realpath '$real_path' is not a block device\n" if !-b $real_path;
my $device_path = $udev_query_path->($real_path);
my $resolved_paths = $resolve_virtual_devices->($device_path);
@ -601,14 +635,13 @@ sub volume_has_feature {
my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running) = @_;
my $features = {
copy => { current => 1},
copy => { current => 1 },
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
$class->parse_volname($volname);
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
my $key = undef;
if ($snapname){
if ($snapname) {
$key = 'snap';
} else {
$key = $isBase ? 'base' : 'current';
@ -647,13 +680,16 @@ sub volume_export {
my $file = $class->filesystem_path($scfg, $volname, $snapshot);
my $size;
run_command(['/sbin/blockdev', '--getsize64', $file], outfunc => sub {
run_command(
['/sbin/blockdev', '--getsize64', $file],
outfunc => sub {
my ($line) = @_;
die "unexpected output from /sbin/blockdev: $line\n" if $line !~ /^(\d+)$/;
$size = int($1);
});
},
);
PVE::Storage::Plugin::write_common_header($fh, $size);
run_command(['dd', "if=$file", "bs=64k", "status=progress"], output => '>&'.fileno($fh));
run_command(['dd', "if=$file", "bs=64k", "status=progress"], output => '>&' . fileno($fh));
return;
}

File diff suppressed because it is too large


@ -32,7 +32,8 @@ my $get_lun_cmd_map = sub {
};
sub get_base {
return '/dev/zvol/rdsk';
my ($scfg) = @_;
return $scfg->{'zfs-base-path'} || '/dev/zvol/rdsk';
}
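get_base() now honours an optional 'zfs-base-path' storage setting (presumably introduced by the ZFS-over-iSCSI plugin whose diff is suppressed below) and only falls back to the built-in default when it is unset:

my $base = get_base({ 'zfs-base-path' => '/dev/zvol/dsk' });    # '/dev/zvol/dsk' (illustrative)
$base    = get_base({});                                        # default '/dev/zvol/rdsk'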
sub run_lun_command {
@ -83,7 +84,15 @@ sub run_lun_command {
$target = 'root@' . $scfg->{portal};
my $cmd = [@ssh_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $target, $luncmd, $lunmethod, @params];
my $cmd = [
@ssh_cmd,
'-i',
"$id_rsa_path/$scfg->{portal}_id_rsa",
$target,
$luncmd,
$lunmethod,
@params,
];
run_command($cmd, outfunc => $output, timeout => $timeout);


@ -59,25 +59,31 @@ my $execute_command = sub {
if ($exec eq 'scp') {
$target = 'root@[' . $scfg->{portal} . ']';
$cmd = [@scp_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", '--', $method, "$target:$params[0]"];
$cmd = [
@scp_cmd,
'-i',
"$id_rsa_path/$scfg->{portal}_id_rsa",
'--',
$method,
"$target:$params[0]",
];
} else {
$target = 'root@' . $scfg->{portal};
$cmd = [@ssh_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $target, '--', $method, @params];
$cmd = [@ssh_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $target, '--', $method,
@params];
}
eval {
run_command($cmd, outfunc => $output, errfunc => $errfunc, timeout => $timeout);
};
eval { run_command($cmd, outfunc => $output, errfunc => $errfunc, timeout => $timeout); };
if ($@) {
$res = {
result => 0,
msg => $err,
}
};
} else {
$res = {
result => 1,
msg => $msg,
}
};
}
return $res;
@ -104,10 +110,9 @@ my $read_config = sub {
$target = 'root@' . $scfg->{portal};
my $cmd = [@ssh_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $target, $luncmd, $CONFIG_FILE];
eval {
run_command($cmd, outfunc => $output, errfunc => $errfunc, timeout => $timeout);
};
my $cmd =
[@ssh_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $target, $luncmd, $CONFIG_FILE];
eval { run_command($cmd, outfunc => $output, errfunc => $errfunc, timeout => $timeout); };
if ($@) {
die $err if ($err !~ /No such file or directory/);
die "No configuration found. Install iet on $scfg->{portal}" if $msg eq '';
@ -133,7 +138,7 @@ my $parser = sub {
my $line = 0;
my $base = get_base;
my $base = get_base($scfg);
my $config = $get_config->($scfg);
my @cfgfile = split "\n", $config;
@ -141,7 +146,7 @@ my $parser = sub {
foreach (@cfgfile) {
$line++;
if ($_ =~ /^\s*Target\s*([\w\-\:\.]+)\s*$/) {
if ($1 eq $scfg->{target} && ! $cfg_target) {
if ($1 eq $scfg->{target} && !$cfg_target) {
# start collecting info
die "$line: Parse error [$_]" if $SETTINGS;
$SETTINGS->{target} = $1;
@ -157,7 +162,7 @@ my $parser = sub {
} else {
if ($cfg_target) {
$SETTINGS->{text} .= "$_\n";
next if ($_ =~ /^\s*#/ || ! $_);
next if ($_ =~ /^\s*#/ || !$_);
my $option = $_;
if ($_ =~ /^(\w+)\s*#/) {
$option = $1;
@ -176,7 +181,7 @@ my $parser = sub {
foreach (@lun) {
my @lun_opt = split '=', $_;
die "$line: Parse error [$option]" unless (scalar(@lun_opt) == 2);
$conf->{$lun_opt[0]} = $lun_opt[1];
$conf->{ $lun_opt[0] } = $lun_opt[1];
}
if ($conf->{Path} && $conf->{Path} =~ /^$base\/$scfg->{pool}\/([\w\-]+)$/) {
$conf->{include} = 1;
@ -184,7 +189,7 @@ my $parser = sub {
$conf->{include} = 0;
}
$conf->{lun} = $num;
push @{$SETTINGS->{luns}}, $conf;
push @{ $SETTINGS->{luns} }, $conf;
} else {
die "$line: Parse error [$option]";
}
@ -202,14 +207,19 @@ my $update_config = sub {
my $config = '';
while ((my $option, my $value) = each(%$SETTINGS)) {
next if ($option eq 'include' || $option eq 'luns' || $option eq 'Path' || $option eq 'text' || $option eq 'used');
next
if ($option eq 'include'
|| $option eq 'luns'
|| $option eq 'Path'
|| $option eq 'text'
|| $option eq 'used');
if ($option eq 'target') {
$config = "\n\nTarget " . $SETTINGS->{target} . "\n" . $config;
} else {
$config .= "\t$option\t\t\t$value\n";
}
}
foreach my $lun (@{$SETTINGS->{luns}}) {
foreach my $lun (@{ $SETTINGS->{luns} }) {
my $lun_opt = '';
while ((my $option, my $value) = each(%$lun)) {
next if ($option eq 'include' || $option eq 'lun' || $option eq 'Path');
@ -260,12 +270,12 @@ my $get_lu_name = sub {
my $used = ();
my $i;
if (! exists $SETTINGS->{used}) {
if (!exists $SETTINGS->{used}) {
for ($i = 0; $i < $MAX_LUNS; $i++) {
$used->{$i} = 0;
}
foreach my $lun (@{$SETTINGS->{luns}}) {
$used->{$lun->{lun}} = 1;
foreach my $lun (@{ $SETTINGS->{luns} }) {
$used->{ $lun->{lun} } = 1;
}
$SETTINGS->{used} = $used;
}
@ -282,14 +292,14 @@ my $get_lu_name = sub {
my $init_lu_name = sub {
my $used = ();
if (! exists($SETTINGS->{used})) {
if (!exists($SETTINGS->{used})) {
for (my $i = 0; $i < $MAX_LUNS; $i++) {
$used->{$i} = 0;
}
$SETTINGS->{used} = $used;
}
foreach my $lun (@{$SETTINGS->{luns}}) {
$SETTINGS->{used}->{$lun->{lun}} = 1;
foreach my $lun (@{ $SETTINGS->{luns} }) {
$SETTINGS->{used}->{ $lun->{lun} } = 1;
}
};
@ -297,7 +307,7 @@ my $free_lu_name = sub {
my ($lu_name) = @_;
my $new;
foreach my $lun (@{$SETTINGS->{luns}}) {
foreach my $lun (@{ $SETTINGS->{luns} }) {
if ($lun->{lun} != $lu_name) {
push @$new, $lun;
}
@ -310,7 +320,8 @@ my $free_lu_name = sub {
my $make_lun = sub {
my ($scfg, $path) = @_;
die 'Maximum number of LUNs per target is 16384' if scalar @{$SETTINGS->{luns}} >= $MAX_LUNS;
die 'Maximum number of LUNs per target is 16384'
if scalar @{ $SETTINGS->{luns} } >= $MAX_LUNS;
my $lun = $get_lu_name->();
my $conf = {
@ -319,7 +330,7 @@ my $make_lun = sub {
Type => 'blockio',
include => 1,
};
push @{$SETTINGS->{luns}}, $conf;
push @{ $SETTINGS->{luns} }, $conf;
return $conf;
};
@ -329,7 +340,7 @@ my $list_view = sub {
my $lun = undef;
my $object = $params[0];
foreach my $lun (@{$SETTINGS->{luns}}) {
foreach my $lun (@{ $SETTINGS->{luns} }) {
next unless $lun->{include} == 1;
if ($lun->{Path} =~ /^$object$/) {
return $lun->{lun} if (defined($lun->{lun}));
@ -345,7 +356,7 @@ my $list_lun = sub {
my $name = undef;
my $object = $params[0];
foreach my $lun (@{$SETTINGS->{luns}}) {
foreach my $lun (@{ $SETTINGS->{luns} }) {
next unless $lun->{include} == 1;
if ($lun->{Path} =~ /^$object$/) {
return $lun->{Path};
@ -381,12 +392,12 @@ my $create_lun = sub {
my $delete_lun = sub {
my ($scfg, $timeout, $method, @params) = @_;
my $res = {msg => undef};
my $res = { msg => undef };
my $path = $params[0];
my $tid = $get_target_tid->($scfg);
foreach my $lun (@{$SETTINGS->{luns}}) {
foreach my $lun (@{ $SETTINGS->{luns} }) {
if ($lun->{Path} eq $path) {
@params = ('--op', 'delete', "--tid=$tid", "--lun=$lun->{lun}");
$res = $execute_command->($scfg, 'ssh', $timeout, $ietadm, @params);
@ -417,7 +428,7 @@ my $modify_lun = sub {
my $path = $params[1];
my $tid = $get_target_tid->($scfg);
foreach my $cfg (@{$SETTINGS->{luns}}) {
foreach my $cfg (@{ $SETTINGS->{luns} }) {
if ($cfg->{Path} eq $path) {
$lun = $cfg;
last;
@ -471,7 +482,8 @@ sub run_lun_command {
}
sub get_base {
return '/dev';
my ($scfg) = @_;
return $scfg->{'zfs-base-path'} || '/dev';
}
1;


@ -83,7 +83,8 @@ my $read_config = sub {
my $daemon = 0;
foreach my $config (@CONFIG_FILES) {
$err = undef;
my $cmd = [@ssh_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $target, $luncmd, $config];
my $cmd =
[@ssh_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $target, $luncmd, $config];
eval {
run_command($cmd, outfunc => $output, errfunc => $errfunc, timeout => $timeout);
};
@ -124,11 +125,11 @@ my $parse_size = sub {
if ($unit eq 'KB') {
$size *= 1024;
} elsif ($unit eq 'MB') {
$size *= 1024*1024;
$size *= 1024 * 1024;
} elsif ($unit eq 'GB') {
$size *= 1024*1024*1024;
$size *= 1024 * 1024 * 1024;
} elsif ($unit eq 'TB') {
$size *= 1024*1024*1024*1024;
$size *= 1024 * 1024 * 1024 * 1024;
}
if ($reminder) {
$size = ceil($size);
@ -151,9 +152,9 @@ my $size_with_unit = sub {
if ($size =~ m/^\d+$/) {
++$n and $size /= 1024 until $size < 1024;
if ($size =~ /\./) {
return sprintf "%.2f%s", $size, ( qw[bytes KB MB GB TB] )[ $n ];
return sprintf "%.2f%s", $size, (qw[bytes KB MB GB TB])[$n];
} else {
return sprintf "%d%s", $size, ( qw[bytes KB MB GB TB] )[ $n ];
return sprintf "%d%s", $size, (qw[bytes KB MB GB TB])[$n];
}
}
die "$size: Not a number";
@ -170,7 +171,7 @@ my $lun_dumper = sub {
$config .= 'UnitType ' . $SETTINGS->{$lun}->{UnitType} . "\n";
$config .= 'QueueDepth ' . $SETTINGS->{$lun}->{QueueDepth} . "\n";
foreach my $conf (@{$SETTINGS->{$lun}->{luns}}) {
foreach my $conf (@{ $SETTINGS->{$lun}->{luns} }) {
$config .= "$conf->{lun} Storage " . $conf->{Storage};
$config .= ' ' . $size_with_unit->($conf->{Size}) . "\n";
foreach ($conf->{options}) {
@ -189,11 +190,11 @@ my $get_lu_name = sub {
my $used = ();
my $i;
if (! exists $SETTINGS->{$target}->{used}) {
if (!exists $SETTINGS->{$target}->{used}) {
for ($i = 0; $i < $MAX_LUNS; $i++) {
$used->{$i} = 0;
}
foreach my $lun (@{$SETTINGS->{$target}->{luns}}) {
foreach my $lun (@{ $SETTINGS->{$target}->{luns} }) {
$lun->{lun} =~ /^LUN(\d+)$/;
$used->{$1} = 1;
}
@ -213,13 +214,13 @@ my $init_lu_name = sub {
my ($target) = @_;
my $used = ();
if (! exists($SETTINGS->{$target}->{used})) {
if (!exists($SETTINGS->{$target}->{used})) {
for (my $i = 0; $i < $MAX_LUNS; $i++) {
$used->{$i} = 0;
}
$SETTINGS->{$target}->{used} = $used;
}
foreach my $lun (@{$SETTINGS->{$target}->{luns}}) {
foreach my $lun (@{ $SETTINGS->{$target}->{luns} }) {
$lun->{lun} =~ /^LUN(\d+)$/;
$SETTINGS->{$target}->{used}->{$1} = 1;
}
@ -236,7 +237,8 @@ my $make_lun = sub {
my ($scfg, $path) = @_;
my $target = $SETTINGS->{current};
die 'Maximum number of LUNs per target is 63' if scalar @{$SETTINGS->{$target}->{luns}} >= $MAX_LUNS;
die 'Maximum number of LUNs per target is 63'
if scalar @{ $SETTINGS->{$target}->{luns} } >= $MAX_LUNS;
my @options = ();
my $lun = $get_lu_name->($target);
@ -249,7 +251,7 @@ my $make_lun = sub {
Size => 'AUTO',
options => @options,
};
push @{$SETTINGS->{$target}->{luns}}, $conf;
push @{ $SETTINGS->{$target}->{luns} }, $conf;
return $conf->{lun};
};
@ -290,7 +292,7 @@ my $parser = sub {
if ($arg2 =~ /^Storage\s*(.+)/i) {
$SETTINGS->{$lun}->{$arg1}->{storage} = $1;
} elsif ($arg2 =~ /^Option\s*(.+)/i) {
push @{$SETTINGS->{$lun}->{$arg1}->{options}}, $1;
push @{ $SETTINGS->{$lun}->{$arg1}->{options} }, $1;
} else {
$SETTINGS->{$lun}->{$arg1} = $arg2;
}
@ -304,13 +306,13 @@ my $parser = sub {
$CONFIG =~ s/\n$//;
die "$scfg->{target}: Target not found" unless $SETTINGS->{targets};
my $max = $SETTINGS->{targets};
my $base = get_base;
my $base = get_base($scfg);
for (my $i = 1; $i <= $max; $i++) {
my $target = $SETTINGS->{nodebase}.':'.$SETTINGS->{"LogicalUnit$i"}->{TargetName};
my $target = $SETTINGS->{nodebase} . ':' . $SETTINGS->{"LogicalUnit$i"}->{TargetName};
if ($target eq $scfg->{target}) {
my $lu = ();
while ((my $key, my $val) = each(%{$SETTINGS->{"LogicalUnit$i"}})) {
while ((my $key, my $val) = each(%{ $SETTINGS->{"LogicalUnit$i"} })) {
if ($key =~ /^LUN\d+/) {
$val->{storage} =~ /^([\w\/\-]+)\s+(\w+)/;
my $storage = $1;
@ -318,7 +320,7 @@ my $parser = sub {
my $conf = undef;
my @options = ();
if ($val->{options}) {
@options = @{$val->{options}};
@options = @{ $val->{options} };
}
if ($storage =~ /^$base\/$scfg->{pool}\/([\w\-]+)$/) {
$conf = {
@ -326,7 +328,7 @@ my $parser = sub {
Storage => $storage,
Size => $size,
options => @options,
}
};
}
push @$lu, $conf if $conf;
delete $SETTINGS->{"LogicalUnit$i"}->{$key};
@ -351,7 +353,7 @@ my $list_lun = sub {
my $object = $params[0];
for my $key (keys %$SETTINGS) {
next unless $key =~ /^LogicalUnit\d+$/;
foreach my $lun (@{$SETTINGS->{$key}->{luns}}) {
foreach my $lun (@{ $SETTINGS->{$key}->{luns} }) {
if ($lun->{Storage} =~ /^$object$/) {
return $lun->{Storage};
}
@ -399,7 +401,7 @@ my $delete_lun = sub {
my $target = $SETTINGS->{current};
my $luns = ();
foreach my $conf (@{$SETTINGS->{$target}->{luns}}) {
foreach my $conf (@{ $SETTINGS->{$target}->{luns} }) {
if ($conf->{Storage} =~ /^$params[0]$/) {
$free_lu_name->($target, $conf->{lun});
} else {
@ -448,7 +450,7 @@ my $add_view = sub {
params => \@params,
};
} else {
@params = ('-HUP', '`cat '. "$SETTINGS->{pidfile}`");
@params = ('-HUP', '`cat ' . "$SETTINGS->{pidfile}`");
$cmdmap = {
cmd => 'ssh',
method => 'kill',
@ -479,7 +481,7 @@ my $list_view = sub {
my $object = $params[0];
for my $key (keys %$SETTINGS) {
next unless $key =~ /^LogicalUnit\d+$/;
foreach my $lun (@{$SETTINGS->{$key}->{luns}}) {
foreach my $lun (@{ $SETTINGS->{$key}->{luns} }) {
if ($lun->{Storage} =~ /^$object$/) {
if ($lun->{lun} =~ /^LUN(\d+)/) {
return $1;
@ -531,18 +533,31 @@ sub run_lun_command {
$parser->($scfg) unless $SETTINGS;
my $cmdmap = $get_lun_cmd_map->($method);
if ($method eq 'add_view') {
$is_add_view = 1 ;
$is_add_view = 1;
$timeout = 15;
}
if (ref $cmdmap->{cmd} eq 'CODE') {
$res = $cmdmap->{cmd}->($scfg, $timeout, $method, @params);
if (ref $res) {
$method = $res->{method};
@params = @{$res->{params}};
@params = @{ $res->{params} };
if ($res->{cmd} eq 'scp') {
$cmd = [@scp_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $method, "$target:$params[0]"];
$cmd = [
@scp_cmd,
'-i',
"$id_rsa_path/$scfg->{portal}_id_rsa",
$method,
"$target:$params[0]",
];
} else {
$cmd = [@ssh_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $target, $method, @params];
$cmd = [
@ssh_cmd,
'-i',
"$id_rsa_path/$scfg->{portal}_id_rsa",
$target,
$method,
@params,
];
}
} else {
return $res;
@ -550,12 +565,18 @@ sub run_lun_command {
} else {
$luncmd = $cmdmap->{cmd};
$method = $cmdmap->{method};
$cmd = [@ssh_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $target, $luncmd, $method, @params];
$cmd = [
@ssh_cmd,
'-i',
"$id_rsa_path/$scfg->{portal}_id_rsa",
$target,
$luncmd,
$method,
@params,
];
}
eval {
run_command($cmd, outfunc => $output, timeout => $timeout);
};
eval { run_command($cmd, outfunc => $output, timeout => $timeout); };
if ($@ && $is_add_view) {
my $err = $@;
if ($OLD_CONFIG) {
@ -565,15 +586,11 @@ sub run_lun_command {
print $fh $OLD_CONFIG;
close $fh;
$cmd = [@scp_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $file, $CONFIG_FILE];
eval {
run_command($cmd, outfunc => $output, timeout => $timeout);
};
eval { run_command($cmd, outfunc => $output, timeout => $timeout); };
$err1 = $@ if $@;
unlink $file;
die "$err\n$err1" if $err1;
eval {
run_lun_command($scfg, undef, 'add_view', 'restart');
};
eval { run_lun_command($scfg, undef, 'add_view', 'restart'); };
die "$err\n$@" if ($@);
}
die $err;
@ -595,7 +612,8 @@ sub run_lun_command {
}
sub get_base {
return '/dev/zvol';
my ($scfg) = @_;
return $scfg->{'zfs-base-path'} || '/dev/zvol';
}
1;


@ -30,7 +30,7 @@ sub get_base;
# config file location differs from distro to distro
my @CONFIG_FILES = (
'/etc/rtslib-fb-target/saveconfig.json', # Debian 9.x et al
'/etc/target/saveconfig.json' , # ArchLinux, CentOS
'/etc/target/saveconfig.json', # ArchLinux, CentOS
);
my $BACKSTORE = '/backstores/block';
@ -58,21 +58,27 @@ my $execute_remote_command = sub {
my $errfunc = sub { $err .= "$_[0]\n" };
$target = 'root@' . $scfg->{portal};
$cmd = [@ssh_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $target, '--', $remote_command, @params];
$cmd = [
@ssh_cmd,
'-i',
"$id_rsa_path/$scfg->{portal}_id_rsa",
$target,
'--',
$remote_command,
@params,
];
eval {
run_command($cmd, outfunc => $output, errfunc => $errfunc, timeout => $timeout);
};
eval { run_command($cmd, outfunc => $output, errfunc => $errfunc, timeout => $timeout); };
if ($@) {
$res = {
result => 0,
msg => $err,
}
};
} else {
$res = {
result => 1,
msg => $msg,
}
};
}
return $res;
@ -96,7 +102,8 @@ my $read_config = sub {
$target = 'root@' . $scfg->{portal};
foreach my $oneFile (@CONFIG_FILES) {
my $cmd = [@ssh_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $target, $luncmd, $oneFile];
my $cmd =
[@ssh_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $target, $luncmd, $oneFile];
eval {
run_command($cmd, outfunc => $output, errfunc => $errfunc, timeout => $timeout);
};
@ -139,21 +146,22 @@ my $parser = sub {
if ($tpg =~ /^tpg(\d+)$/) {
$tpg_tag = $1;
} else {
die "Target Portal Group has invalid value, must contain string 'tpg' and a suffix number, eg 'tpg17'\n";
die
"Target Portal Group has invalid value, must contain string 'tpg' and a suffix number, eg 'tpg17'\n";
}
my $config = $get_config->($scfg);
my $jsonconfig = JSON->new->utf8->decode($config);
my $haveTarget = 0;
foreach my $target (@{$jsonconfig->{targets}}) {
foreach my $target (@{ $jsonconfig->{targets} }) {
# only interested in iSCSI targets
next if !($target->{fabric} eq 'iscsi' && $target->{wwn} eq $scfg->{target});
# find correct TPG
foreach my $tpg (@{$target->{tpgs}}) {
foreach my $tpg (@{ $target->{tpgs} }) {
if ($tpg->{tag} == $tpg_tag) {
my $res = [];
foreach my $lun (@{$tpg->{luns}}) {
foreach my $lun (@{ $tpg->{luns} }) {
my ($idx, $storage_object);
if ($lun->{index} =~ /^(\d+)$/) {
$idx = $1;
@ -194,7 +202,7 @@ my $free_lu_name = sub {
my $new = [];
my $target = $get_target_settings->($scfg);
foreach my $lun (@{$target->{luns}}) {
foreach my $lun (@{ $target->{luns} }) {
if ($lun->{storage_object} ne "$BACKSTORE/$lu_name") {
push @$new, $lun;
}
@ -213,7 +221,7 @@ my $register_lun = sub {
is_new => 1,
};
my $target = $get_target_settings->($scfg);
push @{$target->{luns}}, $conf;
push @{ $target->{luns} }, $conf;
return $conf;
};
@ -223,12 +231,12 @@ my $extract_volname = sub {
my ($scfg, $lunpath) = @_;
my $volname = undef;
my $base = get_base;
my $base = get_base($scfg);
if ($lunpath =~ /^$base\/$scfg->{pool}\/([\w\-]+)$/) {
$volname = $1;
my $prefix = $get_backstore_prefix->($scfg);
my $target = $get_target_settings->($scfg);
foreach my $lun (@{$target->{luns}}) {
foreach my $lun (@{ $target->{luns} }) {
# If we have a lun with the pool prefix matching this vol, then return this one
# like pool-pve-vm-100-disk-0
# Else, just fallback to the old name scheme which is vm-100-disk-0
@ -252,7 +260,7 @@ my $list_view = sub {
return undef if !defined($volname); # nothing to search for..
foreach my $lun (@{$target->{luns}}) {
foreach my $lun (@{ $target->{luns} }) {
if ($lun->{storage_object} eq "$BACKSTORE/$volname") {
return $lun->{index};
}
@ -269,7 +277,7 @@ my $list_lun = sub {
my $volname = $extract_volname->($scfg, $object);
my $target = $get_target_settings->($scfg);
foreach my $lun (@{$target->{luns}}) {
foreach my $lun (@{ $target->{luns} }) {
if ($lun->{storage_object} eq "$BACKSTORE/$volname") {
return $object;
}
@ -294,18 +302,18 @@ my $create_lun = sub {
my $tpg = $scfg->{lio_tpg} || die "Target Portal Group not set, aborting!\n";
# step 1: create backstore for device
my @cliparams = ($BACKSTORE, 'create', "name=$volname", "dev=$device" );
my @cliparams = ($BACKSTORE, 'create', "name=$volname", "dev=$device");
my $res = $execute_remote_command->($scfg, $timeout, $targetcli, @cliparams);
die $res->{msg} if !$res->{result};
# step 2: enable unmap support on the backstore
@cliparams = ($BACKSTORE . '/' . $volname, 'set', 'attribute', 'emulate_tpu=1' );
@cliparams = ($BACKSTORE . '/' . $volname, 'set', 'attribute', 'emulate_tpu=1');
$res = $execute_remote_command->($scfg, $timeout, $targetcli, @cliparams);
die $res->{msg} if !$res->{result};
# step 3: register lun with target
# targetcli /iscsi/iqn.2018-04.at.bestsolution.somehost:target/tpg1/luns/ create /backstores/block/foobar
@cliparams = ("/iscsi/$scfg->{target}/$tpg/luns/", 'create', "$BACKSTORE/$volname" );
@cliparams = ("/iscsi/$scfg->{target}/$tpg/luns/", 'create', "$BACKSTORE/$volname");
$res = $execute_remote_command->($scfg, $timeout, $targetcli, @cliparams);
die $res->{msg} if !$res->{result};
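Spelled out, the three steps above amount to the following remote targetcli invocations (device and volume names are illustrative):

my @steps = (
    [$BACKSTORE, 'create', 'name=vm-100-disk-0', 'dev=/dev/tank/vm-100-disk-0'],
    ["$BACKSTORE/vm-100-disk-0", 'set', 'attribute', 'emulate_tpu=1'],
    ["/iscsi/$scfg->{target}/$tpg/luns/", 'create', "$BACKSTORE/vm-100-disk-0"],
);
$execute_remote_command->($scfg, $timeout, $targetcli, @$_) for @steps;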
@ -330,7 +338,7 @@ my $create_lun = sub {
my $delete_lun = sub {
my ($scfg, $timeout, $method, @params) = @_;
my $res = {msg => undef};
my $res = { msg => undef };
my $tpg = $scfg->{lio_tpg} || die "Target Portal Group not set, aborting!\n";
@ -338,11 +346,11 @@ my $delete_lun = sub {
my $volname = $extract_volname->($scfg, $path);
my $target = $get_target_settings->($scfg);
foreach my $lun (@{$target->{luns}}) {
foreach my $lun (@{ $target->{luns} }) {
next if $lun->{storage_object} ne "$BACKSTORE/$volname";
# step 1: delete the lun
my @cliparams = ("/iscsi/$scfg->{target}/$tpg/luns/", 'delete', "lun$lun->{index}" );
my @cliparams = ("/iscsi/$scfg->{target}/$tpg/luns/", 'delete', "lun$lun->{index}");
my $res = $execute_remote_command->($scfg, $timeout, $targetcli, @cliparams);
do {
die $res->{msg};
@ -414,7 +422,8 @@ sub run_lun_command {
}
sub get_base {
return '/dev';
my ($scfg) = @_;
return $scfg->{'zfs-base-path'} || '/dev';
}
1;


@ -30,7 +30,7 @@ sub type {
sub plugindata {
return {
content => [ {images => 1, rootdir => 1}, { images => 1, rootdir => 1}],
content => [{ images => 1, rootdir => 1 }, { images => 1, rootdir => 1 }],
'sensitive-properties' => {},
};
}
@ -39,7 +39,8 @@ sub properties {
return {
thinpool => {
description => "LVM thin pool LV name.",
type => 'string', format => 'pve-storage-vgname',
type => 'string',
format => 'pve-storage-vgname',
},
};
}
@ -77,11 +78,23 @@ sub filesystem_path {
my $vg = $scfg->{vgname};
my $path = defined($snapname) ? "/dev/$vg/snap_${name}_$snapname": "/dev/$vg/$name";
my $path = defined($snapname) ? "/dev/$vg/snap_${name}_$snapname" : "/dev/$vg/$name";
return wantarray ? ($path, $vmid, $vtype) : $path;
}
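The snapshot naming convention above maps a volume plus snapshot name onto a dedicated LV; assuming the usual ($class, $scfg, $volname, $snapname) signature and a volume group called 'pve':

my $path = $class->filesystem_path($scfg, 'vm-100-disk-1', 'pre-upgrade');
# -> '/dev/pve/snap_vm-100-disk-1_pre-upgrade'; without a snapshot it is '/dev/pve/vm-100-disk-1'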
# lvcreate on trixie does not accept --setautoactivation for thin LVs yet, so set it via lvchange
# TODO PVE 10: evaluate if lvcreate accepts --setautoactivation
my $set_lv_autoactivation = sub {
my ($vg, $lv, $autoactivation) = @_;
my $cmd = [
'/sbin/lvchange', '--setautoactivation', $autoactivation ? 'y' : 'n', "$vg/$lv",
];
eval { run_command($cmd); };
warn "could not set autoactivation: $@" if $@;
};
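A usage sketch of the helper above, with illustrative names; it simply shells out to lvchange with the requested flag:

$set_lv_autoactivation->('pve', 'vm-100-disk-0', 0);
# runs: /sbin/lvchange --setautoactivation n pve/vm-100-disk-0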
sub alloc_image {
my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
@ -94,15 +107,24 @@ sub alloc_image {
my $vg = $scfg->{vgname};
die "no such volume group '$vg'\n" if !defined ($vgs->{$vg});
die "no such volume group '$vg'\n" if !defined($vgs->{$vg});
$name = $class->find_free_diskname($storeid, $scfg, $vmid)
if !$name;
my $cmd = ['/sbin/lvcreate', '-aly', '-V', "${size}k", '--name', $name,
'--thinpool', "$vg/$scfg->{thinpool}" ];
my $cmd = [
'/sbin/lvcreate',
'-aly',
'-V',
"${size}k",
'--name',
$name,
'--thinpool',
"$vg/$scfg->{thinpool}",
];
run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
$set_lv_autoactivation->($vg, $name, 0);
return $name;
}
@ -114,7 +136,7 @@ sub free_image {
my $lvs = PVE::Storage::LVMPlugin::lvm_list_volumes($vg);
if (my $dat = $lvs->{$scfg->{vgname}}) {
if (my $dat = $lvs->{ $scfg->{vgname} }) {
# remove all volume snapshots first
foreach my $lv (keys %$dat) {
@ -164,8 +186,12 @@ sub list_images {
next if defined($vmid) && ($owner ne $vmid);
}
push @$res, {
volid => $volid, format => 'raw', size => $info->{lv_size}, vmid => $owner,
push @$res,
{
volid => $volid,
format => 'raw',
size => $info->{lv_size},
vmid => $owner,
ctime => $info->{ctime},
};
}
@ -181,7 +207,7 @@ sub list_thinpools {
my $thinpools = [];
foreach my $vg (keys %$lvs) {
foreach my $lvname (keys %{$lvs->{$vg}}) {
foreach my $lvname (keys %{ $lvs->{$vg} }) {
next if $lvs->{$vg}->{$lvname}->{lv_type} ne 't';
my $lv = $lvs->{$vg}->{$lvname};
$lv->{lv} = $lvname;
@ -198,9 +224,9 @@ sub status {
my $lvs = $cache->{lvs} ||= PVE::Storage::LVMPlugin::lvm_list_volumes();
return if !$lvs->{$scfg->{vgname}};
return if !$lvs->{ $scfg->{vgname} };
my $info = $lvs->{$scfg->{vgname}}->{$scfg->{thinpool}};
my $info = $lvs->{ $scfg->{vgname} }->{ $scfg->{thinpool} };
return if !$info || $info->{lv_type} ne 't' || !$info->{lv_size};
@ -221,7 +247,10 @@ my $activate_lv = sub {
return if $lvs->{$vg}->{$lv}->{lv_state} eq 'a';
run_command(['lvchange', '-ay', '-K', "$vg/$lv"], errmsg => "activating LV '$vg/$lv' failed");
run_command(
['lvchange', '-ay', '-K', "$vg/$lv"],
errmsg => "activating LV '$vg/$lv' failed",
);
$lvs->{$vg}->{$lv}->{lv_state} = 'a'; # update cache
@ -268,13 +297,38 @@ sub clone_image {
my $lv;
if ($snap) {
$lv = "$vg/snap_${volname}_$snap";
} else {
my ($vtype, undef, undef, undef, undef, $isBase, $format) = $class->parse_volname($volname);
die "clone_image only works on base images\n" if !$isBase;
$lv = "$vg/$volname";
}
my $name = $class->find_free_diskname($storeid, $scfg, $vmid);
my $cmd = ['/sbin/lvcreate', '-n', $name, '-prw', '-kn', '-s', $lv];
run_command($cmd, errmsg => "clone image '$lv' error");
$set_lv_autoactivation->($vg, $name, 0);
return $name;
}
sub clone_image_pxvirt {
my ($class, $scfg, $storeid, $volname, $vmid, $snap) = @_;
my $vg = $scfg->{vgname};
my $lv;
if ($snap) {
$lv = "$vg/snap_${volname}_$snap";
} else {
my ($vtype, undef, undef, undef, undef, $isBase, $format) =
$class->parse_volname($volname);
die "clone_image only works on base images\n" if !$isBase;
$lv = "$vg/$volname";
}
@ -290,8 +344,7 @@ sub clone_image {
sub create_base {
my ($class, $storeid, $scfg, $volname) = @_;
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
$class->parse_volname($volname);
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
die "create_base not possible with base image\n" if $isBase;
@ -332,7 +385,13 @@ sub volume_snapshot {
my $cmd = ['/sbin/lvcreate', '-n', $snapvol, '-pr', '-s', "$vg/$volname"];
run_command($cmd, errmsg => "lvcreate snapshot '$vg/$snapvol' error");
# disabling autoactivation not needed, as -s defaults to --setautoactivationskip y
}
sub volume_rollback_is_possible {
my ($class, $scfg, $storeid, $volname, $snap, $blockers) = @_;
return 1;
}
sub volume_snapshot_rollback {
@ -346,6 +405,7 @@ sub volume_snapshot_rollback {
$cmd = ['/sbin/lvcreate', '-kn', '-n', $volname, '-s', "$vg/$snapvol"];
run_command($cmd, errmsg => "lvm rollback '$vg/$snapvol' error");
$set_lv_autoactivation->($vg, $volname, 0);
}
sub volume_snapshot_delete {
@ -363,20 +423,19 @@ sub volume_has_feature {
my $features = {
snapshot => { current => 1 },
clone => { base => 1, snap => 1},
template => { current => 1},
copy => { base => 1, current => 1, snap => 1},
sparseinit => { base => 1, current => 1},
rename => {current => 1},
clone => { base => 1, snap => 1 },
template => { current => 1 },
copy => { base => 1, current => 1, snap => 1 },
sparseinit => { base => 1, current => 1 },
rename => { current => 1 },
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
$class->parse_volname($volname);
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
my $key = undef;
if($snapname){
if ($snapname) {
$key = 'snap';
}else{
} else {
$key = $isBase ? 'base' : 'current';
}
return 1 if $features->{$feature}->{$key};
@ -385,7 +444,18 @@ sub volume_has_feature {
}
sub volume_import {
my ($class, $scfg, $storeid, $fh, $volname, $format, $snapshot, $base_snapshot, $with_snapshots, $allow_rename) = @_;
my (
$class,
$scfg,
$storeid,
$fh,
$volname,
$format,
$snapshot,
$base_snapshot,
$with_snapshots,
$allow_rename,
) = @_;
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $file_format) =
$class->parse_volname($volname);
@ -400,7 +470,7 @@ sub volume_import {
$snapshot,
$base_snapshot,
$with_snapshots,
$allow_rename
$allow_rename,
);
} else {
my $tempname;
@ -425,9 +495,9 @@ sub volume_import {
$snapshot,
$base_snapshot,
$with_snapshots,
$allow_rename
$allow_rename,
);
($storeid,my $newname) = PVE::Storage::parse_volume_id($newvolid);
($storeid, my $newname) = PVE::Storage::parse_volume_id($newvolid);
$volname = $class->create_base($storeid, $scfg, $newname);
}
@ -438,8 +508,16 @@ sub volume_import {
# used in LVMPlugin->volume_import
sub volume_import_write {
my ($class, $input_fh, $output_file) = @_;
run_command(['dd', "of=$output_file", 'conv=sparse', 'bs=64k'],
input => '<&'.fileno($input_fh));
run_command(
['dd', "of=$output_file", 'conv=sparse', 'bs=64k'],
input => '<&' . fileno($input_fh),
);
}
sub rename_snapshot {
my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) = @_;
die "rename_snapshot is not supported for $class";
}
1;


@ -9,7 +9,6 @@ SOURCES= \
CephFSPlugin.pm \
RBDPlugin.pm \
ISCSIDirectPlugin.pm \
GlusterfsPlugin.pm \
ZFSPoolPlugin.pm \
ZFSPlugin.pm \
PBSPlugin.pm \


@ -24,9 +24,9 @@ sub nfs_is_mounted {
$mountdata = PVE::ProcFSTools::parse_proc_mounts() if !$mountdata;
return $mountpoint if grep {
$_->[2] =~ /^nfs/ &&
$_->[0] =~ m|^\Q$source\E/?$| &&
$_->[1] eq $mountpoint
$_->[2] =~ /^nfs/
&& $_->[0] =~ m|^\Q$source\E/?$|
&& $_->[1] eq $mountpoint
} @$mountdata;
return undef;
}
@ -53,9 +53,19 @@ sub type {
sub plugindata {
return {
content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, import => 1 },
{ images => 1 }],
format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
content => [
{
images => 1,
rootdir => 1,
vztmpl => 1,
iso => 1,
backup => 1,
snippets => 1,
import => 1,
},
{ images => 1 },
],
format => [{ raw => 1, qcow2 => 1, vmdk => 1 }, 'raw'],
'sensitive-properties' => {},
};
}
@ -64,11 +74,13 @@ sub properties {
return {
export => {
description => "NFS export path.",
type => 'string', format => 'pve-storage-path',
type => 'string',
format => 'pve-storage-path',
},
server => {
description => "Server IP or DNS name.",
type => 'string', format => 'pve-storage-server',
type => 'string',
format => 'pve-storage-server',
},
};
}
@ -81,7 +93,6 @@ sub options {
export => { fixed => 1 },
nodes => { optional => 1 },
disable => { optional => 1 },
maxfiles => { optional => 1 },
'prune-backups' => { optional => 1 },
'max-protected-backups' => { optional => 1 },
options => { optional => 1 },
@ -92,10 +103,10 @@ sub options {
'create-subdirs' => { optional => 1 },
bwlimit => { optional => 1 },
preallocation => { optional => 1 },
'snapshot-as-volume-chain' => { optional => 1, fixed => 1 },
};
}
sub check_config {
my ($class, $sectionId, $config, $create, $skipSchemaCheck) = @_;
@ -135,8 +146,8 @@ sub activate_storage {
# NOTE: only call mkpath when not mounted (avoid hang when NFS server is offline
$class->config_aware_base_mkdir($scfg, $path);
die "unable to activate storage '$storeid' - " .
"directory '$path' does not exist\n" if ! -d $path;
die "unable to activate storage '$storeid' - " . "directory '$path' does not exist\n"
if !-d $path;
nfs_mount($server, $export, $path, $scfg->{options});
}
@ -184,7 +195,9 @@ sub check_connection {
$cmd = ['/sbin/showmount', '--no-headers', '--exports', $server];
}
eval { run_command($cmd, timeout => 10, outfunc => sub {}, errfunc => sub {}) };
eval {
run_command($cmd, timeout => 10, outfunc => sub { }, errfunc => sub { });
};
if (my $err = $@) {
if ($is_v4) {
my $port = 2049;
@ -228,4 +241,8 @@ sub get_import_metadata {
return PVE::Storage::DirPlugin::get_import_metadata(@_);
}
sub volume_qemu_snapshot_method {
return PVE::Storage::DirPlugin::volume_qemu_snapshot_method(@_);
}
1;


@ -5,6 +5,7 @@ package PVE::Storage::PBSPlugin;
use strict;
use warnings;
use Encode qw(decode);
use Fcntl qw(F_GETFD F_SETFD FD_CLOEXEC);
use IO::File;
use JSON;
@ -29,7 +30,7 @@ sub type {
sub plugindata {
return {
content => [ {backup => 1, none => 1}, { backup => 1 }],
content => [{ backup => 1, none => 1 }, { backup => 1 }],
'sensitive-properties' => {
'encryption-key' => 1,
'master-pubkey' => 1,
@ -47,11 +48,13 @@ sub properties {
# openssl s_client -connect <host>:8007 2>&1 |openssl x509 -fingerprint -sha256
fingerprint => get_standard_option('fingerprint-sha256'),
'encryption-key' => {
description => "Encryption key. Use 'autogen' to generate one automatically without passphrase.",
description =>
"Encryption key. Use 'autogen' to generate one automatically without passphrase.",
type => 'string',
},
'master-pubkey' => {
description => "Base64-encoded, PEM-formatted public RSA key. Used to encrypt a copy of the encryption-key which will be added to each encrypted backup.",
description =>
"Base64-encoded, PEM-formatted public RSA key. Used to encrypt a copy of the encryption-key which will be added to each encrypted backup.",
type => 'string',
},
};
@ -63,14 +66,13 @@ sub options {
datastore => { fixed => 1 },
namespace => { optional => 1 },
port => { optional => 1 },
nodes => { optional => 1},
disable => { optional => 1},
content => { optional => 1},
nodes => { optional => 1 },
disable => { optional => 1 },
content => { optional => 1 },
username => { optional => 1 },
password => { optional => 1 },
'encryption-key' => { optional => 1 },
'master-pubkey' => { optional => 1 },
maxfiles => { optional => 1 },
'prune-backups' => { optional => 1 },
'max-protected-backups' => { optional => 1 },
fingerprint => { optional => 1 },
@ -91,7 +93,7 @@ sub pbs_set_password {
my $pwfile = pbs_password_file_name($scfg, $storeid);
mkdir "/etc/pve/priv/storage";
PVE::Tools::file_set_contents($pwfile, "$password\n");
PVE::Tools::file_set_contents($pwfile, "$password\n", 0600, 1);
}
sub pbs_delete_password {
@ -107,7 +109,9 @@ sub pbs_get_password {
my $pwfile = pbs_password_file_name($scfg, $storeid);
return PVE::Tools::file_read_firstline($pwfile);
my $contents = PVE::Tools::file_read_firstline($pwfile);
return eval { decode('UTF-8', $contents, 1) } // $contents;
}
sub pbs_encryption_key_file_name {
@ -244,7 +248,7 @@ my sub api_param_from_volname : prototype($$$) {
my @tm = (POSIX::strptime($timestr, "%FT%TZ"));
# expect sec, min, hour, mday, mon, year
die "error parsing time from '$volname'" if grep { !defined($_) } @tm[0..5];
die "error parsing time from '$volname'" if grep { !defined($_) } @tm[0 .. 5];
my $btime;
{
@ -283,7 +287,7 @@ my sub do_raw_client_cmd {
my $client_exe = '/usr/bin/proxmox-backup-client';
die "executable not found '$client_exe'! Proxmox backup client not installed?\n"
if ! -x $client_exe;
if !-x $client_exe;
my $repo = PVE::PBSClient::get_repository($scfg);
@ -303,13 +307,13 @@ my sub do_raw_client_cmd {
// die "failed to get file descriptor flags: $!\n";
fcntl($keyfd, F_SETFD, $flags & ~FD_CLOEXEC)
or die "failed to remove FD_CLOEXEC from encryption key file descriptor\n";
push @$cmd, '--crypt-mode=encrypt', '--keyfd='.fileno($keyfd);
push @$cmd, '--crypt-mode=encrypt', '--keyfd=' . fileno($keyfd);
if ($use_master && defined($master_fd = pbs_open_master_pubkey($scfg, $storeid))) {
my $flags = fcntl($master_fd, F_GETFD, 0)
// die "failed to get file descriptor flags: $!\n";
fcntl($master_fd, F_SETFD, $flags & ~FD_CLOEXEC)
or die "failed to remove FD_CLOEXEC from master public key file descriptor\n";
push @$cmd, '--master-pubkey-fd='.fileno($master_fd);
push @$cmd, '--master-pubkey-fd=' . fileno($master_fd);
}
} else {
push @$cmd, '--crypt-mode=none';
@ -357,12 +361,15 @@ sub run_client_cmd {
my $outfunc = sub { $json_str .= "$_[0]\n" };
$param = [] if !defined($param);
$param = [ $param ] if !ref($param);
$param = [$param] if !ref($param);
$param = [@$param, '--output-format=json'] if !$no_output;
do_raw_client_cmd($scfg, $storeid, $client_cmd, $param,
outfunc => $outfunc, errmsg => 'proxmox-backup-client failed');
do_raw_client_cmd(
$scfg, $storeid, $client_cmd, $param,
outfunc => $outfunc,
errmsg => 'proxmox-backup-client failed',
);
return undef if $no_output;
@ -390,8 +397,11 @@ sub extract_vzdump_config {
die "unable to extract configuration for backup format '$format'\n";
}
do_raw_client_cmd($scfg, $storeid, 'restore', [ $name, $config_name, '-' ],
outfunc => $outfunc, errmsg => 'proxmox-backup-client failed');
do_raw_client_cmd(
$scfg, $storeid, 'restore', [$name, $config_name, '-'],
outfunc => $outfunc,
errmsg => 'proxmox-backup-client failed',
);
return $config;
}
@ -445,7 +455,7 @@ sub prune_backups {
$logfunc->('info', "running 'proxmox-backup-client prune' for '$backup_group'")
if !$dryrun;
eval {
my $res = run_client_cmd($scfg, $storeid, 'prune', [ $backup_group, @param ]);
my $res = run_client_cmd($scfg, $storeid, 'prune', [$backup_group, @param]);
foreach my $backup (@{$res}) {
die "result from proxmox-backup-client is not as expected\n"
@ -462,7 +472,8 @@ sub prune_backups {
my $mark = $backup->{keep} ? 'keep' : 'remove';
$mark = 'protected' if $backup->{protected};
push @{$prune_list}, {
push @{$prune_list},
{
ctime => $ctime,
mark => $mark,
type => $type eq 'vm' ? 'qemu' : 'lxc',
@ -596,7 +607,9 @@ sub on_delete_hook {
sub parse_volname {
my ($class, $volname) = @_;
if ($volname =~ m!^backup/([^\s_]+)/([^\s_]+)/([0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z)$!) {
if ($volname =~
m!^backup/([^\s_]+)/([^\s_]+)/([0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z)$!
) {
my $btype = $1;
my $bid = $2;
my $btime = $3;
@ -657,12 +670,11 @@ sub free_image {
my ($vtype, $name, $vmid) = $class->parse_volname($volname);
run_client_cmd($scfg, $storeid, "forget", [ $name ], 1);
run_client_cmd($scfg, $storeid, "forget", [$name], 1);
return;
}
sub list_images {
my ($class, $storeid, $scfg, $vmid, $vollist, $cache) = @_;
@ -706,7 +718,7 @@ my sub pbs_api_connect {
}
if (my $fp = $scfg->{fingerprint}) {
$params->{cached_fingerprints}->{uc($fp)} = 1;
$params->{cached_fingerprints}->{ uc($fp) } = 1;
}
my $conn = PVE::APIClient::LWP->new(
@ -862,7 +874,7 @@ sub get_volume_notes {
my (undef, $name, undef, undef, undef, undef, $format) = $class->parse_volname($volname);
my $data = run_client_cmd($scfg, $storeid, "snapshot", [ "notes", "show", $name ]);
my $data = run_client_cmd($scfg, $storeid, "snapshot", ["notes", "show", $name]);
return $data->{notes};
}
@ -874,7 +886,7 @@ sub update_volume_notes {
my (undef, $name, undef, undef, undef, undef, $format) = $class->parse_volname($volname);
run_client_cmd($scfg, $storeid, "snapshot", [ "notes", "update", $name, $notes ], 1);
run_client_cmd($scfg, $storeid, "snapshot", ["notes", "update", $name, $notes], 1);
return undef;
}
@ -936,7 +948,7 @@ sub volume_size_info {
my ($vtype, $name, undef, undef, undef, undef, $format) = $class->parse_volname($volname);
my $data = run_client_cmd($scfg, $storeid, "files", [ $name ]);
my $data = run_client_cmd($scfg, $storeid, "files", [$name]);
my $size = 0;
foreach my $info (@$data) {

File diff suppressed because it is too large


@ -10,7 +10,7 @@ use Net::IP;
use POSIX qw(ceil);
use PVE::CephConfig;
use PVE::Cluster qw(cfs_read_file);;
use PVE::Cluster qw(cfs_read_file);
use PVE::JSONSchema qw(get_standard_option);
use PVE::ProcFSTools;
use PVE::RADOS;
@ -47,7 +47,7 @@ my sub get_rbd_path {
$path .= "/$scfg->{namespace}" if defined($scfg->{namespace});
$path .= "/$volume" if defined($volume);
return $path;
};
}
my sub get_rbd_dev_path {
my ($scfg, $storeid, $volume) = @_;
@ -84,14 +84,13 @@ my sub get_rbd_dev_path {
return $pve_path;
}
my $build_cmd = sub {
my ($binary, $scfg, $storeid, $op, @options) = @_;
my $rbd_cmd = sub {
my ($scfg, $storeid, $op, @options) = @_;
my $cmd_option = PVE::CephConfig::ceph_connect_option($scfg, $storeid);
my $pool = $scfg->{pool} ? $scfg->{pool} : 'rbd';
my $cmd = [$binary];
my $cmd = ['/usr/bin/rbd'];
if ($op eq 'import') {
push $cmd->@*, '--dest-pool', $pool;
} else {
@ -107,7 +106,8 @@ my $build_cmd = sub {
}
push @$cmd, '-c', $cmd_option->{ceph_conf} if ($cmd_option->{ceph_conf});
push @$cmd, '-m', $cmd_option->{mon_host} if ($cmd_option->{mon_host});
push @$cmd, '--auth_supported', $cmd_option->{auth_supported} if ($cmd_option->{auth_supported});
push @$cmd, '--auth_supported', $cmd_option->{auth_supported}
if ($cmd_option->{auth_supported});
push @$cmd, '-n', "client.$cmd_option->{userid}" if ($cmd_option->{userid});
push @$cmd, '--keyring', $cmd_option->{keyring} if ($cmd_option->{keyring});
@ -118,18 +118,6 @@ my $build_cmd = sub {
return $cmd;
};
my $rbd_cmd = sub {
my ($scfg, $storeid, $op, @options) = @_;
return $build_cmd->('/usr/bin/rbd', $scfg, $storeid, $op, @options);
};
my $rados_cmd = sub {
my ($scfg, $storeid, $op, @options) = @_;
return $build_cmd->('/usr/bin/rados', $scfg, $storeid, $op, @options);
};
# needed for volumes created using ceph jewel (or higher)
my $krbd_feature_update = sub {
my ($scfg, $storeid, $name) = @_;
@ -154,14 +142,16 @@ my $krbd_feature_update = sub {
my $active_features = { map { $_ => 1 } @$active_features_list };
my $to_disable = join(',', grep { $active_features->{$_} } @disable);
my $to_enable = join(',', grep { !$active_features->{$_} } @enable );
my $to_enable = join(',', grep { !$active_features->{$_} } @enable);
if ($to_disable) {
print "disable RBD image features this kernel RBD drivers is not compatible with: $to_disable\n";
print
"disable RBD image features this kernel RBD drivers is not compatible with: $to_disable\n";
my $cmd = $rbd_cmd->($scfg, $storeid, 'feature', 'disable', $name, $to_disable);
run_rbd_command(
$cmd,
errmsg => "could not disable krbd-incompatible image features '$to_disable' for rbd image: $name",
errmsg =>
"could not disable krbd-incompatible image features '$to_disable' for rbd image: $name",
);
}
if ($to_enable) {
@ -170,7 +160,8 @@ my $krbd_feature_update = sub {
my $cmd = $rbd_cmd->($scfg, $storeid, 'feature', 'enable', $name, $to_enable);
run_rbd_command(
$cmd,
errmsg => "could not enable krbd-compatible image features '$to_enable' for rbd image: $name",
errmsg =>
"could not enable krbd-compatible image features '$to_enable' for rbd image: $name",
);
};
warn "$@" if $@;
@ -187,7 +178,9 @@ sub run_rbd_command {
# at least 1 child(ren) in pool cephstor1
$args{errfunc} = sub {
my $line = shift;
if ($line =~ m/^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+ [0-9a-f]+ [\-\d]+ librbd: (.*)$/) {
if ($line =~
m/^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+ [0-9a-f]+ [\-\d]+ librbd: (.*)$/
) {
$lasterr = "$1\n";
} else {
$lasterr = $line;
@ -213,7 +206,7 @@ sub rbd_ls {
my $parser = sub { $raw .= shift };
my $cmd = $rbd_cmd->($scfg, $storeid, 'ls', '-l', '--format', 'json');
run_rbd_command($cmd, errmsg => "rbd error", errfunc => sub {}, outfunc => $parser);
run_rbd_command($cmd, errmsg => "rbd error", errfunc => sub { }, outfunc => $parser);
my $result;
if ($raw eq '') {
@ -238,7 +231,7 @@ sub rbd_ls {
name => $image,
size => $el->{size},
parent => $get_parent_image_name->($el->{parent}),
vmid => $owner
vmid => $owner,
};
}
@ -251,7 +244,12 @@ sub rbd_ls_snap {
my $cmd = $rbd_cmd->($scfg, $storeid, 'snap', 'ls', $name, '--format', 'json');
my $raw = '';
run_rbd_command($cmd, errmsg => "rbd error", errfunc => sub {}, outfunc => sub { $raw .= shift; });
run_rbd_command(
$cmd,
errmsg => "rbd error",
errfunc => sub { },
outfunc => sub { $raw .= shift; },
);
my $list;
if ($raw =~ m/^(\[.*\])$/s) { # untaint
@ -292,7 +290,7 @@ sub rbd_volume_info {
my $raw = '';
my $parser = sub { $raw .= shift };
run_rbd_command($cmd, errmsg => "rbd error", errfunc => sub {}, outfunc => $parser);
run_rbd_command($cmd, errmsg => "rbd error", errfunc => sub { }, outfunc => $parser);
my $volume;
if ($raw eq '') {
@ -304,7 +302,8 @@ sub rbd_volume_info {
}
$volume->{parent} = $get_parent_image_name->($volume->{parent});
$volume->{protected} = defined($volume->{protected}) && $volume->{protected} eq "true" ? 1 : undef;
$volume->{protected} =
defined($volume->{protected}) && $volume->{protected} eq "true" ? 1 : undef;
return $volume->@{qw(size parent format protected features)};
}
@ -318,7 +317,7 @@ sub rbd_volume_du {
my $raw = '';
my $parser = sub { $raw .= shift };
run_rbd_command($cmd, errmsg => "rbd error", errfunc => sub {}, outfunc => $parser);
run_rbd_command($cmd, errmsg => "rbd error", errfunc => sub { }, outfunc => $parser);
my $volume;
if ($raw eq '') {
@ -354,7 +353,11 @@ my sub rbd_volume_exists {
my $cmd = $rbd_cmd->($scfg, $storeid, 'ls', '--format', 'json');
my $raw = '';
run_rbd_command(
$cmd, errmsg => "rbd error", errfunc => sub {}, outfunc => sub { $raw .= shift; });
$cmd,
errmsg => "rbd error",
errfunc => sub { },
outfunc => sub { $raw .= shift; },
);
my $list;
if ($raw =~ m/^(\[.*\])$/s) { # untaint
@ -371,6 +374,16 @@ my sub rbd_volume_exists {
return 0;
}
# Needs to be public, so qemu-server can mock it for cfg2cmd.
sub rbd_volume_config_set {
my ($scfg, $storeid, $volname, $key, $value) = @_;
my $cmd = $rbd_cmd->($scfg, $storeid, 'config', 'image', 'set', $volname, $key, $value);
run_rbd_command($cmd, errmsg => "rbd config image set $volname $key $value error");
return;
}
# Configuration
sub type {
@ -379,7 +392,7 @@ sub type {
sub plugindata {
return {
content => [ {images => 1, rootdir => 1}, { images => 1 }],
content => [{ images => 1, rootdir => 1 }, { images => 1 }],
'sensitive-properties' => { keyring => 1 },
};
}
@ -388,7 +401,8 @@ sub properties {
return {
monhost => {
description => "IP addresses of monitors (for external clusters).",
type => 'string', format => 'pve-storage-portal-dns-list',
type => 'string',
format => 'pve-storage-portal-dns-list',
},
pool => {
description => "Pool.",
@ -426,7 +440,7 @@ sub options {
return {
nodes => { optional => 1 },
disable => { optional => 1 },
monhost => { optional => 1},
monhost => { optional => 1 },
pool => { optional => 1 },
'data-pool' => { optional => 1 },
namespace => { optional => 1 },
@ -443,7 +457,10 @@ sub options {
sub on_add_hook {
my ($class, $storeid, $scfg, %param) = @_;
my $pveceph_managed = !defined($scfg->{monhost});
PVE::CephConfig::ceph_create_keyfile($scfg->{type}, $storeid, $param{keyring});
PVE::CephConfig::ceph_create_configuration($scfg->{type}, $storeid) if !$pveceph_managed;
return;
}
@ -465,6 +482,8 @@ sub on_update_hook {
sub on_delete_hook {
my ($class, $storeid, $scfg) = @_;
PVE::CephConfig::ceph_remove_keyfile($scfg->{type}, $storeid);
PVE::CephConfig::ceph_remove_configuration($storeid);
return;
}
@ -483,7 +502,7 @@ sub path {
my $cmd_option = PVE::CephConfig::ceph_connect_option($scfg, $storeid);
my ($vtype, $name, $vmid) = $class->parse_volname($volname);
$name .= '@'.$snapname if $snapname;
$name .= '@' . $snapname if $snapname;
if ($scfg->{krbd}) {
my $rbd_dev_path = get_rbd_dev_path($scfg, $storeid, $name);
@ -506,6 +525,53 @@ sub path {
return ($path, $vmid, $vtype);
}
sub qemu_blockdev_options {
my ($class, $scfg, $storeid, $volname, $machine_version, $options) = @_;
my $cmd_option = PVE::CephConfig::ceph_connect_option($scfg, $storeid);
my ($name) = ($class->parse_volname($volname))[1];
if ($scfg->{krbd}) {
$name .= '@' . $options->{'snapshot-name'} if $options->{'snapshot-name'};
my $rbd_dev_path = get_rbd_dev_path($scfg, $storeid, $name);
return { driver => 'host_device', filename => $rbd_dev_path };
}
my $blockdev = {
driver => 'rbd',
pool => $scfg->{pool} ? "$scfg->{pool}" : 'rbd',
image => "$name",
};
$blockdev->{namespace} = "$scfg->{namespace}" if defined($scfg->{namespace});
$blockdev->{snapshot} = $options->{'snapshot-name'} if $options->{'snapshot-name'};
$blockdev->{conf} = $cmd_option->{ceph_conf} if $cmd_option->{ceph_conf};
if (my $monhost = $scfg->{'monhost'}) {
my $server = [];
my @mons = PVE::Tools::split_list($monhost);
for my $mon (@mons) {
my ($host, $port) = PVE::Tools::parse_host_and_port($mon);
$port = '3300' if !$port;
push @$server, { host => $host, port => $port };
}
$blockdev->{server} = $server;
$blockdev->{'auth-client-required'} = ["$cmd_option->{auth_supported}"];
}
$blockdev->{user} = "$cmd_option->{userid}" if $cmd_option->{keyring};
# SPI flash does lots of read-modify-write OPs, without writeback this gets really slow #3329
if ($options->{hints}->{'efi-disk'}) {
# Querying the value would just cost more and the 'rbd image config get' command will just
# fail if the config has not been set yet, so it's not even straight-forward to do so.
# Simply set the value (possibly again).
rbd_volume_config_set($scfg, $storeid, $name, 'rbd_cache_policy', 'writeback');
}
return $blockdev;
}
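
# Illustration only, not part of the change set above: a rough sketch of the
# hash the new qemu_blockdev_options returns for a non-krbd RBD volume. Every
# concrete value below (pool, image, conf path, monitors, user) is an assumed
# example, not taken from a real cluster.
use strict;
use warnings;
use Data::Dumper;

# Assumed result for 'vm-100-disk-0' on pool 'rbd' with two external monitors
# and cephx auth; krbd disabled, no snapshot requested.
my $example_blockdev = {
    driver => 'rbd',
    pool => 'rbd',
    image => 'vm-100-disk-0',
    conf => '/etc/pve/priv/ceph/mystore.conf', # hypothetical per-storage config path
    user => 'admin',
    'auth-client-required' => ['cephx'],
    server => [
        { host => '192.0.2.1', port => '3300' },
        { host => '192.0.2.2', port => '3300' },
    ],
};
print Dumper($example_blockdev);
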
sub find_free_diskname {
my ($class, $storeid, $scfg, $vmid, $fmt, $add_fmt_suffix) = @_;
@ -521,7 +587,7 @@ sub find_free_diskname {
};
eval {
run_rbd_command($cmd, errmsg => "rbd error", errfunc => sub {}, outfunc => $parser);
run_rbd_command($cmd, errmsg => "rbd error", errfunc => sub { }, outfunc => $parser);
};
my $err = $@;
@ -535,8 +601,7 @@ sub create_base {
my $snap = '__base__';
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
$class->parse_volname($volname);
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
die "create_base not possible with base image\n" if $isBase;
@ -546,7 +611,7 @@ sub create_base {
die "rbd image must be at format V2" if $format ne "2";
die "volname '$volname' contains wrong information about parent $parent $basename\n"
if $basename && (!$parent || $parent ne $basename."@".$snap);
if $basename && (!$parent || $parent ne $basename . "@" . $snap);
my $newname = $name;
$newname =~ s/^vm-/base-/;
@ -565,13 +630,11 @@ sub create_base {
eval { $class->unmap_volume($storeid, $scfg, $volname); };
warn $@ if $@;
my $running = undef; #fixme : is create_base always offline ?
$class->volume_snapshot($scfg, $storeid, $newname, $snap, $running);
$class->volume_snapshot($scfg, $storeid, $newname, $snap);
my (undef, undef, undef, $protected) = rbd_volume_info($scfg, $storeid, $newname, $snap);
if (!$protected){
if (!$protected) {
my $cmd = $rbd_cmd->($scfg, $storeid, 'snap', 'protect', $newname, '--snap', $snap);
run_rbd_command($cmd, errmsg => "rbd protect $newname snap '$snap' error");
}
@ -586,8 +649,7 @@ sub clone_image {
my $snap = '__base__';
$snap = $snapname if length $snapname;
my ($vtype, $basename, $basevmid, undef, undef, $isBase) =
$class->parse_volname($volname);
my ($vtype, $basename, $basevmid, undef, undef, $isBase) = $class->parse_volname($volname);
die "$volname is not a base image and snapname is not provided\n"
if !$isBase && !length($snapname);
@ -596,6 +658,43 @@ sub clone_image {
warn "clone $volname: $basename snapname $snap to $name\n";
if (length($snapname)) {
my (undef, undef, undef, $protected) =
rbd_volume_info($scfg, $storeid, $volname, $snapname);
if (!$protected) {
my $cmd = $rbd_cmd->($scfg, $storeid, 'snap', 'protect', $volname, '--snap', $snapname);
run_rbd_command($cmd, errmsg => "rbd protect $volname snap $snapname error");
}
}
my $newvol = "$basename/$name";
$newvol = $name if length($snapname);
my @options = (
get_rbd_path($scfg, $basename), '--snap', $snap,
);
push @options, ('--data-pool', $scfg->{'data-pool'}) if $scfg->{'data-pool'};
my $cmd = $rbd_cmd->($scfg, $storeid, 'clone', @options, get_rbd_path($scfg, $name));
run_rbd_command($cmd, errmsg => "rbd clone '$basename' error");
return $newvol;
}
sub clone_image_pxvirt {
my ($class, $scfg, $storeid, $volname, $vmid, $snapname) = @_;
my $snap = '__base__';
$snap = $snapname if length $snapname;
my ($vtype, $basename, $basevmid, undef, undef, $isBase) =
$class->parse_volname($volname);
my $name = $class->find_free_diskname($storeid, $scfg, $vmid);
warn "clone $volname: $basename snapname $snap to $name\n";
if (length($snapname)) {
my (undef, undef, undef, $protected) = rbd_volume_info($scfg, $storeid, $volname, $snapname);
@ -623,15 +722,13 @@ sub clone_image {
sub alloc_image {
my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
die "illegal name '$name' - should be 'vm-$vmid-*'\n"
if $name && $name !~ m/^vm-$vmid-/;
$name = $class->find_free_diskname($storeid, $scfg, $vmid) if !$name;
my @options = (
'--image-format' , 2,
'--size', int(($size + 1023) / 1024),
'--image-format', 2, '--size', int(($size + 1023) / 1024),
);
push @options, ('--data-pool', $scfg->{'data-pool'}) if $scfg->{'data-pool'};
@ -644,9 +741,7 @@ sub alloc_image {
sub free_image {
my ($class, $storeid, $scfg, $volname, $isBase) = @_;
my ($vtype, $name, $vmid, undef, undef, undef) =
$class->parse_volname($volname);
my ($vtype, $name, $vmid, undef, undef, undef) = $class->parse_volname($volname);
my $snaps = rbd_ls_snap($scfg, $storeid, $name);
foreach my $snap (keys %$snaps) {
@ -676,7 +771,7 @@ sub list_images {
my $res = [];
for my $image (sort keys %$dat) {
my $info = $dat->{$image};
my ($volname, $parent, $owner) = $info->@{'name', 'parent', 'vmid'};
my ($volname, $parent, $owner) = $info->@{ 'name', 'parent', 'vmid' };
if ($parent && $parent =~ m/^(base-\d+-\S+)\@__base__$/) {
$info->{volid} = "$storeid:$1/$volname";
@ -688,7 +783,7 @@ sub list_images {
my $found = grep { $_ eq $info->{volid} } @$vollist;
next if !$found;
} else {
next if defined ($vmid) && ($owner ne $vmid);
next if defined($vmid) && ($owner ne $vmid);
}
$info->{format} = 'raw';
@ -707,7 +802,7 @@ sub status {
my $pool = $scfg->{'data-pool'} // $scfg->{pool} // 'rbd';
my ($d) = grep { $_->{name} eq $pool } @{$df->{pools}};
my ($d) = grep { $_->{name} eq $pool } @{ $df->{pools} };
if (!defined($d)) {
warn "could not get usage stats for pool '$pool'\n";
@ -740,7 +835,7 @@ sub map_volume {
my ($vtype, $img_name, $vmid) = $class->parse_volname($volname);
my $name = $img_name;
$name .= '@'.$snapname if $snapname;
$name .= '@' . $snapname if $snapname;
my $kerneldev = get_rbd_dev_path($scfg, $storeid, $name);
@ -759,7 +854,7 @@ sub unmap_volume {
my ($class, $storeid, $scfg, $volname, $snapname) = @_;
my ($vtype, $name, $vmid) = $class->parse_volname($volname);
$name .= '@'.$snapname if $snapname;
$name .= '@' . $snapname if $snapname;
my $kerneldev = get_rbd_dev_path($scfg, $storeid, $name);
@ -803,7 +898,8 @@ sub volume_resize {
my ($vtype, $name, $vmid) = $class->parse_volname($volname);
my $cmd = $rbd_cmd->($scfg, $storeid, 'resize', '--size', int(ceil($size/1024/1024)), $name);
my $cmd =
$rbd_cmd->($scfg, $storeid, 'resize', '--size', int(ceil($size / 1024 / 1024)), $name);
run_rbd_command($cmd, errmsg => "rbd resize '$volname' error");
return undef;
}
@ -835,7 +931,7 @@ sub volume_snapshot_delete {
my ($vtype, $name, $vmid) = $class->parse_volname($volname);
my (undef, undef, undef, $protected) = rbd_volume_info($scfg, $storeid, $name, $snap);
if ($protected){
if ($protected) {
my $cmd = $rbd_cmd->($scfg, $storeid, 'snap', 'unprotect', $name, '--snap', $snap);
run_rbd_command($cmd, errmsg => "rbd unprotect $name snap '$snap' error");
}
@ -855,18 +951,18 @@ sub volume_has_feature {
my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running) = @_;
my $features = {
snapshot => { current => 1, snap => 1},
clone => { base => 1, snap => 1},
template => { current => 1},
copy => { base => 1, current => 1, snap => 1},
sparseinit => { base => 1, current => 1},
rename => {current => 1},
snapshot => { current => 1, snap => 1 },
clone => { base => 1, snap => 1 },
template => { current => 1 },
copy => { base => 1, current => 1, snap => 1 },
sparseinit => { base => 1, current => 1 },
rename => { current => 1 },
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
my $key = undef;
if ($snapname){
if ($snapname) {
$key = 'snap';
} else {
$key = $isBase ? 'base' : 'current';
@ -880,7 +976,8 @@ sub volume_export_formats {
my ($class, $scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots) = @_;
return $class->volume_import_formats(
$scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots);
$scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots,
);
}
sub volume_export {
@ -906,7 +1003,7 @@ sub volume_export {
run_rbd_command(
$cmd,
errmsg => 'could not export image',
output => '>&'.fileno($fh),
output => '>&' . fileno($fh),
);
return;
@ -955,7 +1052,7 @@ sub volume_import {
run_rbd_command(
$cmd,
errmsg => 'could not import image',
input => '<&'.fileno($fh),
input => '<&' . fileno($fh),
);
};
if (my $err = $@) {
@ -974,13 +1071,7 @@ sub rename_volume {
my ($class, $scfg, $storeid, $source_volname, $target_vmid, $target_volname) = @_;
my (
undef,
$source_image,
$source_vmid,
$base_name,
$base_vmid,
undef,
$format
undef, $source_image, $source_vmid, $base_name, $base_vmid, undef, $format,
) = $class->parse_volname($source_volname);
$target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format)
if !$target_volname;
@ -1003,4 +1094,17 @@ sub rename_volume {
return "${storeid}:${base_name}${target_volname}";
}
sub rename_snapshot {
my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) = @_;
die "rename_snapshot is not implemented for $class";
}
sub volume_qemu_snapshot_method {
my ($class, $storeid, $scfg, $volname) = @_;
return 'qemu' if !$scfg->{krbd};
return 'storage';
}
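
# Illustration only, not part of the change set above: a hypothetical caller
# branching on volume_qemu_snapshot_method; the helper subs and variables are
# invented for this sketch.
use strict;
use warnings;

sub take_qemu_internal_snapshot { print "QEMU-side snapshot of $_[0]\n" }
sub take_storage_snapshot { print "storage-side snapshot of $_[0]\n" }

my $scfg = { krbd => 1 }; # assumed storage section config
my $volname = 'vm-100-disk-0';

# Mirrors the method above: 'qemu' without krbd, 'storage' when krbd is set.
my $method = $scfg->{krbd} ? 'storage' : 'qemu';
if ($method eq 'qemu') {
    take_qemu_internal_snapshot($volname);
} else {
    take_storage_snapshot($volname);
}
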
1;


@ -3,9 +3,10 @@ package PVE::Storage::ZFSPlugin;
use strict;
use warnings;
use IO::File;
use POSIX;
use POSIX qw(ENOENT);
use PVE::Tools qw(run_command);
use PVE::Storage::ZFSPoolPlugin;
use PVE::RESTEnvironment qw(log_warn);
use PVE::RPCEnvironment;
use base qw(PVE::Storage::ZFSPoolPlugin);
@ -14,7 +15,6 @@ use PVE::Storage::LunCmd::Istgt;
use PVE::Storage::LunCmd::Iet;
use PVE::Storage::LunCmd::LIO;
my @ssh_opts = ('-o', 'BatchMode=yes');
my @ssh_cmd = ('/usr/bin/ssh', @ssh_opts);
my $id_rsa_path = '/etc/pve/priv/zfs';
@ -39,13 +39,13 @@ my $zfs_get_base = sub {
my ($scfg) = @_;
if ($scfg->{iscsiprovider} eq 'comstar') {
return PVE::Storage::LunCmd::Comstar::get_base;
return PVE::Storage::LunCmd::Comstar::get_base($scfg);
} elsif ($scfg->{iscsiprovider} eq 'istgt') {
return PVE::Storage::LunCmd::Istgt::get_base;
return PVE::Storage::LunCmd::Istgt::get_base($scfg);
} elsif ($scfg->{iscsiprovider} eq 'iet') {
return PVE::Storage::LunCmd::Iet::get_base;
return PVE::Storage::LunCmd::Iet::get_base($scfg);
} elsif ($scfg->{iscsiprovider} eq 'LIO') {
return PVE::Storage::LunCmd::LIO::get_base;
return PVE::Storage::LunCmd::LIO::get_base($scfg);
} else {
$zfs_unknown_scsi_provider->($scfg->{iscsiprovider});
}
@ -54,14 +54,15 @@ my $zfs_get_base = sub {
sub zfs_request {
my ($class, $scfg, $timeout, $method, @params) = @_;
$timeout = PVE::RPCEnvironment->is_worker() ? 60*60 : 10
$timeout = PVE::RPCEnvironment->is_worker() ? 60 * 60 : 10
if !$timeout;
my $msg = '';
if ($lun_cmds->{$method}) {
if ($scfg->{iscsiprovider} eq 'comstar') {
$msg = PVE::Storage::LunCmd::Comstar::run_lun_command($scfg, $timeout, $method, @params);
$msg =
PVE::Storage::LunCmd::Comstar::run_lun_command($scfg, $timeout, $method, @params);
} elsif ($scfg->{iscsiprovider} eq 'istgt') {
$msg = PVE::Storage::LunCmd::Istgt::run_lun_command($scfg, $timeout, $method, @params);
} elsif ($scfg->{iscsiprovider} eq 'iet') {
@ -174,7 +175,7 @@ sub type {
sub plugindata {
return {
content => [ {images => 1}, { images => 1 }],
content => [{ images => 1 }, { images => 1 }],
'sensitive-properties' => {},
};
}
@ -204,6 +205,12 @@ sub properties {
description => "target portal group for Linux LIO targets",
type => 'string',
},
'zfs-base-path' => {
description => "Base path where to look for the created ZFS block devices. Set"
. " automatically during creation if not specified. Usually '/dev/zvol'.",
type => 'string',
format => 'pve-storage-path',
},
};
}
@ -223,11 +230,53 @@ sub options {
lio_tpg => { optional => 1 },
content => { optional => 1 },
bwlimit => { optional => 1 },
'zfs-base-path' => { optional => 1 },
};
}
# Storage implementation
sub on_add_hook {
my ($class, $storeid, $scfg, %param) = @_;
if (!$scfg->{'zfs-base-path'}) {
my $base_path;
if ($scfg->{iscsiprovider} eq 'comstar') {
$base_path = PVE::Storage::LunCmd::Comstar::get_base($scfg);
} elsif ($scfg->{iscsiprovider} eq 'istgt') {
$base_path = PVE::Storage::LunCmd::Istgt::get_base($scfg);
} elsif ($scfg->{iscsiprovider} eq 'iet' || $scfg->{iscsiprovider} eq 'LIO') {
# Provider implementations hard-code '/dev/', which does not work for distributions like
# Debian 12. Keep that implementation as-is for backwards compatibility, but use custom
# logic here.
my $target = 'root@' . $scfg->{portal};
my $cmd = [@ssh_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $target];
push $cmd->@*, 'ls', '/dev/zvol';
my $rc = eval { run_command($cmd, timeout => 10, noerr => 1, quiet => 1) };
my $err = $@;
if (defined($rc) && $rc == 0) {
$base_path = '/dev/zvol';
} elsif (defined($rc) && $rc == ENOENT) {
$base_path = '/dev';
} else {
my $message = $err ? $err : "remote command failed";
chomp($message);
$message .= " ($rc)" if defined($rc);
$message .= " - check 'zfs-base-path' setting manually!";
log_warn($message);
$base_path = '/dev/zvol';
}
} else {
$zfs_unknown_scsi_provider->($scfg->{iscsiprovider});
}
$scfg->{'zfs-base-path'} = $base_path;
}
return;
}
sub path {
my ($class, $scfg, $volname, $storeid, $snapname) = @_;
@ -247,13 +296,31 @@ sub path {
return ($path, $vmid, $vtype);
}
sub qemu_blockdev_options {
my ($class, $scfg, $storeid, $volname, $machine_version, $options) = @_;
die "direct access to snapshots not implemented\n"
if $options->{'snapshot-name'};
my $name = ($class->parse_volname($volname))[1];
my $guid = $class->zfs_get_lu_name($scfg, $name);
my $lun = $class->zfs_get_lun_number($scfg, $guid);
return {
driver => 'iscsi',
transport => 'tcp',
portal => "$scfg->{portal}",
target => "$scfg->{target}",
lun => int($lun),
};
}
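
# Illustration only, not part of the change set above: assuming the LUN lookup
# resolves to LUN 5, the ZFS-over-iSCSI qemu_blockdev_options returns a hash
# along these lines; portal and target values are invented placeholders.
use strict;
use warnings;
use Data::Dumper;

my $example_blockdev = {
    driver => 'iscsi',
    transport => 'tcp',
    portal => '192.0.2.10', # placeholder portal address
    target => 'iqn.2001-03.org.example:storage', # placeholder target IQN
    lun => 5, # assumed result of zfs_get_lun_number
};
print Dumper($example_blockdev);
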
sub create_base {
my ($class, $storeid, $scfg, $volname) = @_;
my $snap = '__base__';
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
$class->parse_volname($volname);
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
die "create_base not possible with base image\n" if $isBase;
@ -268,9 +335,7 @@ sub create_base {
my $guid = $class->zfs_create_lu($scfg, $newname);
$class->zfs_add_lun_mapping_entry($scfg, $newname, $guid);
my $running = undef; #fixme : is create_base always offline ?
$class->volume_snapshot($scfg, $storeid, $newname, $snap, $running);
$class->volume_snapshot($scfg, $storeid, $newname, $snap);
return $newvolname;
}
@ -370,14 +435,13 @@ sub volume_has_feature {
my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running) = @_;
my $features = {
snapshot => { current => 1, snap => 1},
clone => { base => 1},
template => { current => 1},
copy => { base => 1, current => 1},
snapshot => { current => 1, snap => 1 },
clone => { base => 1 },
template => { current => 1 },
copy => { base => 1, current => 1 },
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
$class->parse_volname($volname);
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
my $key = undef;


@ -20,8 +20,8 @@ sub type {
sub plugindata {
return {
content => [ {images => 1, rootdir => 1}, {images => 1 , rootdir => 1}],
format => [ { raw => 1, subvol => 1 } , 'raw' ],
content => [{ images => 1, rootdir => 1 }, { images => 1, rootdir => 1 }],
format => [{ raw => 1, subvol => 1 }, 'raw'],
'sensitive-properties' => {},
};
}
@ -38,7 +38,8 @@ sub properties {
},
mountpoint => {
description => "mount point",
type => 'string', format => 'pve-storage-path',
type => 'string',
format => 'pve-storage-path',
},
};
}
@ -129,8 +130,8 @@ sub on_add_hook {
if (defined($cfg_mountpoint)) {
if (defined($mountpoint) && !($cfg_mountpoint =~ m|^\Q$mountpoint\E/?$|)) {
warn "warning for $storeid - mountpoint: $cfg_mountpoint " .
"does not match current mount point: $mountpoint\n";
warn "warning for $storeid - mountpoint: $cfg_mountpoint "
. "does not match current mount point: $mountpoint\n";
}
} else {
$scfg->{mountpoint} = $mountpoint;
@ -161,6 +162,22 @@ sub path {
return ($path, $vmid, $vtype);
}
sub qemu_blockdev_options {
my ($class, $scfg, $storeid, $volname, $machine_version, $options) = @_;
my $format = ($class->parse_volname($volname))[6];
die "volume '$volname' not usable as VM image\n" if $format ne 'raw';
die "cannot attach only the snapshot of a zvol\n" if $options->{'snapshot-name'};
my ($path) = $class->path($scfg, $volname, $storeid);
my $blockdev = { driver => 'host_device', filename => $path };
return $blockdev;
}
sub zfs_request {
my ($class, $scfg, $timeout, $method, @params) = @_;
@ -180,8 +197,8 @@ sub zfs_request {
my $output = sub { $msg .= "$_[0]\n" };
if (PVE::RPCEnvironment->is_worker()) {
$timeout = 60*60 if !$timeout;
$timeout = 60*5 if $timeout < 60*5;
$timeout = 60 * 60 if !$timeout;
$timeout = 60 * 5 if $timeout < 60 * 5;
} else {
$timeout = 10 if !$timeout;
}
@ -194,7 +211,7 @@ sub zfs_request {
sub zfs_wait_for_zvol_link {
my ($class, $scfg, $volname, $timeout) = @_;
my $default_timeout = PVE::RPCEnvironment->is_worker() ? 60*5 : 10;
my $default_timeout = PVE::RPCEnvironment->is_worker() ? 60 * 5 : 10;
$timeout = $default_timeout if !defined($timeout);
my ($devname, undef, undef) = $class->path($scfg, $volname);
@ -223,7 +240,7 @@ sub alloc_image {
$class->zfs_create_zvol($scfg, $volname, $size);
$class->zfs_wait_for_zvol_link($scfg, $volname);
} elsif ( $fmt eq 'subvol') {
} elsif ($fmt eq 'subvol') {
die "illegal name '$volname' - should be 'subvol-$vmid-*'\n"
if $volname && $volname !~ m/^subvol-$vmid-/;
@ -275,7 +292,7 @@ sub list_images {
my $found = grep { $_ eq $info->{volid} } @$vollist;
next if !$found;
} else {
next if defined ($vmid) && ($owner ne $vmid);
next if defined($vmid) && ($owner ne $vmid);
}
push @$res, $info;
@ -286,8 +303,8 @@ sub list_images {
sub zfs_get_properties {
my ($class, $scfg, $properties, $dataset, $timeout) = @_;
my $result = $class->zfs_request($scfg, $timeout, 'get', '-o', 'value',
'-Hp', $properties, $dataset);
my $result =
$class->zfs_request($scfg, $timeout, 'get', '-o', 'value', '-Hp', $properties, $dataset);
my @values = split /\n/, $result;
return wantarray ? @values : $values[0];
}
@ -300,11 +317,11 @@ sub zfs_get_pool_stats {
my @lines = $class->zfs_get_properties($scfg, 'available,used', $scfg->{pool});
if($lines[0] =~ /^(\d+)$/) {
if ($lines[0] =~ /^(\d+)$/) {
$available = $1;
}
if($lines[1] =~ /^(\d+)$/) {
if ($lines[1] =~ /^(\d+)$/) {
$used = $1;
}
@ -336,8 +353,8 @@ sub zfs_create_subvol {
my $dataset = "$scfg->{pool}/$volname";
my $quota = $size ? "${size}k" : "none";
my $cmd = ['create', '-o', 'acltype=posixacl', '-o', 'xattr=sa',
'-o', "refquota=${quota}", $dataset];
my $cmd =
['create', '-o', 'acltype=posixacl', '-o', 'xattr=sa', '-o', "refquota=${quota}", $dataset];
$class->zfs_request($scfg, undef, @$cmd);
}
@ -391,7 +408,7 @@ sub zfs_list_zvol {
foreach my $zvol (@$zvols) {
my $name = $zvol->{name};
my $parent = $zvol->{origin};
if($zvol->{origin} && $zvol->{origin} =~ m/^$scfg->{pool}\/(\S+)$/){
if ($zvol->{origin} && $zvol->{origin} =~ m/^$scfg->{pool}\/(\S+)$/) {
$parent = $1;
}
@ -447,11 +464,11 @@ sub status {
sub volume_size_info {
my ($class, $scfg, $storeid, $volname, $timeout) = @_;
my (undef, $vname, undef, $parent, undef, undef, $format) =
$class->parse_volname($volname);
my (undef, $vname, undef, $parent, undef, undef, $format) = $class->parse_volname($volname);
my $attr = $format eq 'subvol' ? 'refquota' : 'volsize';
my ($size, $used) = $class->zfs_get_properties($scfg, "$attr,usedbydataset", "$scfg->{pool}/$vname");
my ($size, $used) =
$class->zfs_get_properties($scfg, "$attr,usedbydataset", "$scfg->{pool}/$vname");
$used = ($used =~ /^(\d+)$/) ? $1 : 0;
@ -465,9 +482,25 @@ sub volume_size_info {
sub volume_snapshot {
my ($class, $scfg, $storeid, $volname, $snap) = @_;
my $vname = ($class->parse_volname($volname))[1];
my (undef, $vname, undef, undef, undef, undef, $format) = $class->parse_volname($volname);
my $snapshot_name = "$scfg->{pool}/$vname\@$snap";
$class->zfs_request($scfg, undef, 'snapshot', "$scfg->{pool}/$vname\@$snap");
$class->zfs_request($scfg, undef, 'snapshot', $snapshot_name);
# if this is a subvol, track refquota information via user properties. zfs
# does not track this property for snapshots and consequently does not roll
# it back. so track this information manually.
if ($format eq 'subvol') {
my $refquota = $class->zfs_get_properties($scfg, 'refquota', "$scfg->{pool}/$vname");
$class->zfs_request(
$scfg,
undef,
'set',
"pve-storage:refquota=${refquota}",
$snapshot_name,
);
}
}
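
# Illustration only, not part of the change set above: a stand-alone sketch of
# how the refquota user property round-trips between snapshot and rollback.
# It shells out to the zfs CLI directly instead of going through zfs_request,
# and the dataset, snapshot and value names are assumed examples.
use strict;
use warnings;
use PVE::Tools qw(run_command);

my ($dataset, $snap) = ('tank/subvol-100-disk-0', 'before-upgrade'); # assumed names

# At snapshot time: take the snapshot, then remember the dataset's current
# refquota as a user property on the snapshot itself.
run_command(['zfs', 'snapshot', "$dataset\@$snap"]);
my $refquota = '';
run_command(
    ['zfs', 'get', '-Hp', '-o', 'value', 'refquota', $dataset],
    outfunc => sub { $refquota = $_[0]; },
);
run_command(['zfs', 'set', "pve-storage:refquota=$refquota", "$dataset\@$snap"]);

# At rollback time: read the remembered value back and re-apply it if it is numeric.
my $saved = '';
run_command(
    ['zfs', 'get', '-Hp', '-o', 'value', 'pve-storage:refquota', "$dataset\@$snap"],
    outfunc => sub { $saved = $_[0]; },
);
run_command(['zfs', 'set', "refquota=$saved", $dataset]) if $saved =~ m/^\d+$/;
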
sub volume_snapshot_delete {
@ -483,8 +516,24 @@ sub volume_snapshot_rollback {
my ($class, $scfg, $storeid, $volname, $snap) = @_;
my (undef, $vname, undef, undef, undef, undef, $format) = $class->parse_volname($volname);
my $snapshot_name = "$scfg->{pool}/$vname\@$snap";
my $msg = $class->zfs_request($scfg, undef, 'rollback', "$scfg->{pool}/$vname\@$snap");
my $msg = $class->zfs_request($scfg, undef, 'rollback', $snapshot_name);
# if this is a subvol, check if we tracked the refquota manually via user
# properties and if so, set it appropriately again.
if ($format eq 'subvol') {
my $refquota = $class->zfs_get_properties($scfg, 'pve-storage:refquota', $snapshot_name);
if ($refquota =~ m/^\d+$/) {
$class->zfs_request(
$scfg, undef, 'set', "refquota=${refquota}", "$scfg->{pool}/$vname",
);
} elsif ($refquota ne "-") {
# refquota user property was set, but not a number -> warn
warn "property for refquota tracking contained unknown value '$refquota'\n";
}
}
# we have to unmount rollbacked subvols, to invalidate wrong kernel
# caches, they get mounted in activate volume again
@ -638,6 +687,43 @@ sub clone_image {
my $name = $class->find_free_diskname($storeid, $scfg, $vmid, $format);
if ($format eq 'subvol') {
my $size = $class->zfs_request(
$scfg, undef, 'list', '-Hp', '-o', 'refquota', "$scfg->{pool}/$basename",
);
chomp($size);
$class->zfs_request(
$scfg,
undef,
'clone',
"$scfg->{pool}/$basename\@$snap",
"$scfg->{pool}/$name",
'-o',
"refquota=$size",
);
} else {
$class->zfs_request(
$scfg,
undef,
'clone',
"$scfg->{pool}/$basename\@$snap",
"$scfg->{pool}/$name",
);
}
return "$basename/$name";
}
sub clone_image_pxvirt {
my ($class, $scfg, $storeid, $volname, $vmid, $snap) = @_;
$snap ||= '__base__';
my ($vtype, $basename, $basevmid, undef, undef, $isBase, $format) =
$class->parse_volname($volname);
my $name = $class->find_free_diskname($storeid, $scfg, $vmid, $format);
if ($format eq 'subvol') {
my $size = $class->zfs_request($scfg, undef, 'list', '-Hp', '-o', 'refquota', "$scfg->{pool}/$basename");
chomp($size);
@ -646,7 +732,7 @@ sub clone_image {
$class->zfs_request($scfg, undef, 'clone', "$scfg->{pool}/$basename\@$snap", "$scfg->{pool}/$name");
}
return "$basename/$name";
return "$name";
}
sub create_base {
@ -660,7 +746,7 @@ sub create_base {
die "create_base not possible with base image\n" if $isBase;
my $newname = $name;
if ( $format eq 'subvol' ) {
if ($format eq 'subvol') {
$newname =~ s/^subvol-/basevol-/;
} else {
$newname =~ s/^vm-/base-/;
@ -679,10 +765,9 @@ sub create_base {
sub volume_resize {
my ($class, $scfg, $storeid, $volname, $size, $running) = @_;
my $new_size = int($size/1024);
my $new_size = int($size / 1024);
my (undef, $vname, undef, undef, undef, undef, $format) =
$class->parse_volname($volname);
my (undef, $vname, undef, undef, undef, undef, $format) = $class->parse_volname($volname);
my $attr = $format eq 'subvol' ? 'refquota' : 'volsize';
@ -709,17 +794,16 @@ sub volume_has_feature {
my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running) = @_;
my $features = {
snapshot => { current => 1, snap => 1},
clone => { base => 1},
template => { current => 1},
copy => { base => 1, current => 1},
sparseinit => { base => 1, current => 1},
replicate => { base => 1, current => 1},
rename => {current => 1},
snapshot => { current => 1, snap => 1 },
clone => { base => 1 },
template => { current => 1 },
copy => { base => 1, current => 1 },
sparseinit => { base => 1, current => 1 },
replicate => { base => 1, current => 1 },
rename => { current => 1 },
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
$class->parse_volname($volname);
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
my $key = undef;
@ -735,7 +819,8 @@ sub volume_has_feature {
}
sub volume_export {
my ($class, $scfg, $storeid, $fh, $volname, $format, $snapshot, $base_snapshot, $with_snapshots) = @_;
my ($class, $scfg, $storeid, $fh, $volname, $format, $snapshot, $base_snapshot, $with_snapshots)
= @_;
die "unsupported export stream format for $class: $format\n"
if $format ne 'zfs';
@ -776,7 +861,18 @@ sub volume_export_formats {
}
sub volume_import {
my ($class, $scfg, $storeid, $fh, $volname, $format, $snapshot, $base_snapshot, $with_snapshots, $allow_rename) = @_;
my (
$class,
$scfg,
$storeid,
$fh,
$volname,
$format,
$snapshot,
$base_snapshot,
$with_snapshots,
$allow_rename,
) = @_;
die "unsupported import stream format for $class: $format\n"
if $format ne 'zfs';
@ -790,8 +886,11 @@ sub volume_import {
my $zfspath = "$scfg->{pool}/$dataset";
my $suffix = defined($base_snapshot) ? "\@$base_snapshot" : '';
my $exists = 0 == run_command(['zfs', 'get', '-H', 'name', $zfspath.$suffix],
noerr => 1, quiet => 1);
my $exists = 0 == run_command(
['zfs', 'get', '-H', 'name', $zfspath . $suffix],
noerr => 1,
quiet => 1,
);
if (defined($base_snapshot)) {
die "base snapshot '$zfspath\@$base_snapshot' doesn't exist\n" if !$exists;
} elsif ($exists) {
@ -817,20 +916,16 @@ sub volume_import {
sub volume_import_formats {
my ($class, $scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots) = @_;
return $class->volume_export_formats($scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots);
return $class->volume_export_formats(
$scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots,
);
}
sub rename_volume {
my ($class, $scfg, $storeid, $source_volname, $target_vmid, $target_volname) = @_;
my (
undef,
$source_image,
$source_vmid,
$base_name,
$base_vmid,
undef,
$format
undef, $source_image, $source_vmid, $base_name, $base_vmid, undef, $format,
) = $class->parse_volname($source_volname);
$target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format)
if !$target_volname;
@ -839,8 +934,11 @@ sub rename_volume {
my $source_zfspath = "${pool}/${source_image}";
my $target_zfspath = "${pool}/${target_volname}";
my $exists = 0 == run_command(['zfs', 'get', '-H', 'name', $target_zfspath],
noerr => 1, quiet => 1);
my $exists = 0 == run_command(
['zfs', 'get', '-H', 'name', $target_zfspath],
noerr => 1,
quiet => 1,
);
die "target volume '${target_volname}' already exists\n" if $exists;
$class->zfs_request($scfg, 5, 'rename', ${source_zfspath}, ${target_zfspath});
@ -850,4 +948,10 @@ sub rename_volume {
return "${storeid}:${base_name}${target_volname}";
}
sub rename_snapshot {
my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) = @_;
die "rename_snapshot is not supported for $class";
}
1;


@ -9,7 +9,6 @@ use Test::More;
use PVE::CephConfig;
# An array of test cases.
# Each test case is comprised of the following keys:
# description => to identify a single test
@ -91,8 +90,8 @@ my $tests = [
EOF
},
{
description => 'single section, section header ' .
'with preceding whitespace and comment',
description => 'single section, section header '
. 'with preceding whitespace and comment',
expected_cfg => {
foo => {
bar => 'baz',
@ -263,8 +262,7 @@ my $tests = [
EOF
},
{
description => 'single section, keys with quoted values, '
. 'comments after values',
description => 'single section, keys with quoted values, ' . 'comments after values',
expected_cfg => {
foo => {
bar => 'baz',
@ -525,8 +523,7 @@ my $tests = [
EOF
},
{
description => 'single section, key-value pairs with ' .
'continued lines and comments',
description => 'single section, key-value pairs with ' . 'continued lines and comments',
expected_cfg => {
foo => {
bar => 'baz continued baz',
@ -548,8 +545,8 @@ my $tests = [
EOF
},
{
description => 'single section, key-value pairs with ' .
'escaped commment literals in values',
description => 'single section, key-value pairs with '
. 'escaped commment literals in values',
expected_cfg => {
foo => {
bar => 'baz#escaped',
@ -563,8 +560,8 @@ my $tests = [
EOF
},
{
description => 'single section, key-value pairs with ' .
'continued lines and escaped commment literals in values',
description => 'single section, key-value pairs with '
. 'continued lines and escaped commment literals in values',
expected_cfg => {
foo => {
bar => 'baz#escaped',
@ -771,8 +768,7 @@ sub test_write_ceph_config {
sub main {
my $test_subs = [
\&test_parse_ceph_config,
\&test_write_ceph_config,
\&test_parse_ceph_config, \&test_write_ceph_config,
];
plan(tests => scalar($tests->@*) * scalar($test_subs->@*));
@ -781,11 +777,11 @@ sub main {
for my $test_sub ($test_subs->@*) {
eval {
# suppress warnings here to make output less noisy for certain tests
local $SIG{__WARN__} = sub {};
local $SIG{__WARN__} = sub { };
$test_sub->($case);
};
warn "$@\n" if $@;
};
}
}
done_testing();


@ -25,6 +25,8 @@ pvesm.zsh-completion:
install: pvesm.1 pvesm.bash-completion pvesm.zsh-completion
install -d $(DESTDIR)$(SBINDIR)
install -m 0755 pvesm $(DESTDIR)$(SBINDIR)
install -m 0755 pvebcache $(DESTDIR)$(SBINDIR)
install -m 0755 pvebcache $(DESTDIR)$(SBINDIR)
install -d $(DESTDIR)$(MAN1DIR)
install -m 0644 pvesm.1 $(DESTDIR)$(MAN1DIR)
gzip -9 -n $(DESTDIR)$(MAN1DIR)/pvesm.1

src/bin/pvebcache Executable file

@ -0,0 +1,8 @@
#!/usr/bin/perl -T
use strict;
use warnings;
use PVE::CLI::pvebcache;
PVE::CLI::pvebcache->run_cli_handler();


@ -1,10 +1,13 @@
all: test
test: test_zfspoolplugin test_disklist test_bwlimit test_plugin test_ovf
test: test_zfspoolplugin test_lvmplugin test_disklist test_bwlimit test_plugin test_ovf test_volume_access
test_zfspoolplugin: run_test_zfspoolplugin.pl
./run_test_zfspoolplugin.pl
test_lvmplugin: run_test_lvmplugin.pl
./run_test_lvmplugin.pl
test_disklist: run_disk_tests.pl
./run_disk_tests.pl
@ -16,3 +19,6 @@ test_plugin: run_plugin_tests.pl
test_ovf: run_ovf_tests.pl
./run_ovf_tests.pl
test_volume_access: run_volume_access_tests.pl
./run_volume_access_tests.pl


@ -26,14 +26,14 @@ my $tests = [
archive => "backup/vzdump-lxc-$vmid-3070_01_01-00_00_00.tgz",
expected => {
'filename' => "vzdump-lxc-$vmid-3070_01_01-00_00_00.tgz",
'logfilename' => "vzdump-lxc-$vmid-3070_01_01-00_00_00".$LOG_EXT,
'notesfilename'=> "vzdump-lxc-$vmid-3070_01_01-00_00_00.tgz".$NOTES_EXT,
'logfilename' => "vzdump-lxc-$vmid-3070_01_01-00_00_00" . $LOG_EXT,
'notesfilename' => "vzdump-lxc-$vmid-3070_01_01-00_00_00.tgz" . $NOTES_EXT,
'type' => 'lxc',
'format' => 'tar',
'decompressor' => ['tar', '-z'],
'compression' => 'gz',
'vmid' => $vmid,
'ctime' => 60*60*24 * (365*1100 + 267),
'ctime' => 60 * 60 * 24 * (365 * 1100 + 267),
'is_std_name' => 1,
},
},
@ -42,14 +42,14 @@ my $tests = [
archive => "backup/vzdump-lxc-$vmid-1970_01_01-02_00_30.tgz",
expected => {
'filename' => "vzdump-lxc-$vmid-1970_01_01-02_00_30.tgz",
'logfilename' => "vzdump-lxc-$vmid-1970_01_01-02_00_30".$LOG_EXT,
'notesfilename'=> "vzdump-lxc-$vmid-1970_01_01-02_00_30.tgz".$NOTES_EXT,
'logfilename' => "vzdump-lxc-$vmid-1970_01_01-02_00_30" . $LOG_EXT,
'notesfilename' => "vzdump-lxc-$vmid-1970_01_01-02_00_30.tgz" . $NOTES_EXT,
'type' => 'lxc',
'format' => 'tar',
'decompressor' => ['tar', '-z'],
'compression' => 'gz',
'vmid' => $vmid,
'ctime' => 60*60*2 + 30,
'ctime' => 60 * 60 * 2 + 30,
'is_std_name' => 1,
},
},
@ -58,8 +58,8 @@ my $tests = [
archive => "backup/vzdump-lxc-$vmid-2020_03_30-21_39_30.tgz",
expected => {
'filename' => "vzdump-lxc-$vmid-2020_03_30-21_39_30.tgz",
'logfilename' => "vzdump-lxc-$vmid-2020_03_30-21_39_30".$LOG_EXT,
'notesfilename'=> "vzdump-lxc-$vmid-2020_03_30-21_39_30.tgz".$NOTES_EXT,
'logfilename' => "vzdump-lxc-$vmid-2020_03_30-21_39_30" . $LOG_EXT,
'notesfilename' => "vzdump-lxc-$vmid-2020_03_30-21_39_30.tgz" . $NOTES_EXT,
'type' => 'lxc',
'format' => 'tar',
'decompressor' => ['tar', '-z'],
@ -74,8 +74,8 @@ my $tests = [
archive => "backup/vzdump-openvz-$vmid-2020_03_30-21_39_30.tgz",
expected => {
'filename' => "vzdump-openvz-$vmid-2020_03_30-21_39_30.tgz",
'logfilename' => "vzdump-openvz-$vmid-2020_03_30-21_39_30".$LOG_EXT,
'notesfilename'=> "vzdump-openvz-$vmid-2020_03_30-21_39_30.tgz".$NOTES_EXT,
'logfilename' => "vzdump-openvz-$vmid-2020_03_30-21_39_30" . $LOG_EXT,
'notesfilename' => "vzdump-openvz-$vmid-2020_03_30-21_39_30.tgz" . $NOTES_EXT,
'type' => 'openvz',
'format' => 'tar',
'decompressor' => ['tar', '-z'],
@ -90,8 +90,8 @@ my $tests = [
archive => "/here/be/Back-ups/vzdump-qemu-$vmid-2020_03_30-21_39_30.tgz",
expected => {
'filename' => "vzdump-qemu-$vmid-2020_03_30-21_39_30.tgz",
'logfilename' => "vzdump-qemu-$vmid-2020_03_30-21_39_30".$LOG_EXT,
'notesfilename'=> "vzdump-qemu-$vmid-2020_03_30-21_39_30.tgz".$NOTES_EXT,
'logfilename' => "vzdump-qemu-$vmid-2020_03_30-21_39_30" . $LOG_EXT,
'notesfilename' => "vzdump-qemu-$vmid-2020_03_30-21_39_30.tgz" . $NOTES_EXT,
'type' => 'qemu',
'format' => 'tar',
'decompressor' => ['tar', '-z'],
@ -132,9 +132,9 @@ my $decompressor = {
};
my $bkp_suffix = {
qemu => [ 'vma', $decompressor->{vma}, ],
lxc => [ 'tar', $decompressor->{tar}, ],
openvz => [ 'tar', $decompressor->{tar}, ],
qemu => ['vma', $decompressor->{vma}],
lxc => ['tar', $decompressor->{tar}],
openvz => ['tar', $decompressor->{tar}],
};
# create more test cases for backup files matches
@ -143,13 +143,14 @@ for my $virt (sort keys %$bkp_suffix) {
my $archive_name = "vzdump-$virt-$vmid-2020_03_30-21_12_40";
for my $suffix (sort keys %$decomp) {
push @$tests, {
push @$tests,
{
description => "Backup archive, $virt, $format.$suffix",
archive => "backup/$archive_name.$format.$suffix",
expected => {
'filename' => "$archive_name.$format.$suffix",
'logfilename' => $archive_name.$LOG_EXT,
'notesfilename'=> "$archive_name.$format.$suffix".$NOTES_EXT,
'logfilename' => $archive_name . $LOG_EXT,
'notesfilename' => "$archive_name.$format.$suffix" . $NOTES_EXT,
'type' => "$virt",
'format' => "$format",
'decompressor' => $decomp->{$suffix},
@ -162,13 +163,12 @@ for my $virt (sort keys %$bkp_suffix) {
}
}
# add compression formats to test failed matches
my $non_bkp_suffix = {
'openvz' => [ 'zip', 'tgz.lzo', 'zip.gz', '', ],
'lxc' => [ 'zip', 'tgz.lzo', 'zip.gz', '', ],
'qemu' => [ 'vma.xz', 'vms.gz', 'vmx.zst', '', ],
'none' => [ 'tar.gz', ],
'openvz' => ['zip', 'tgz.lzo', 'zip.gz', ''],
'lxc' => ['zip', 'tgz.lzo', 'zip.gz', ''],
'qemu' => ['vma.xz', 'vms.gz', 'vmx.zst', ''],
'none' => ['tar.gz'],
};
# create tests for failed matches
@ -176,7 +176,8 @@ for my $virt (sort keys %$non_bkp_suffix) {
my $suffix = $non_bkp_suffix->{$virt};
for my $s (@$suffix) {
my $archive = "backup/vzdump-$virt-$vmid-2020_03_30-21_12_40.$s";
push @$tests, {
push @$tests,
{
description => "Failed match: Backup archive, $virt, $s",
archive => $archive,
expected => "ERROR: couldn't determine archive info from '$archive'\n",
@ -184,7 +185,6 @@ for my $virt (sort keys %$non_bkp_suffix) {
}
}
plan tests => scalar @$tests;
for my $tt (@$tests) {


@ -107,7 +107,7 @@ sub mocked_dir_glob_foreach {
my $lines = [];
# read lines in from file
if ($dir =~ m{^/sys/block$} ) {
if ($dir =~ m{^/sys/block$}) {
@$lines = split(/\n/, read_test_file('disklist'));
} elsif ($dir =~ m{^/sys/block/([^/]+)}) {
@$lines = split(/\n/, read_test_file('partlist'));
@ -125,7 +125,7 @@ sub mocked_parse_proc_mounts {
my $mounts = [];
foreach my $line(split(/\n/, $text)) {
foreach my $line (split(/\n/, $text)) {
push @$mounts, [split(/\s+/, $line)];
}
@ -139,7 +139,7 @@ sub read_test_file {
print "file '$testcasedir/$filename' not found\n";
return '';
}
open (my $fh, '<', "disk_tests/$testcasedir/$filename")
open(my $fh, '<', "disk_tests/$testcasedir/$filename")
or die "Cannot open disk_tests/$testcasedir/$filename: $!";
my $output = <$fh> // '';
@ -152,7 +152,6 @@ sub read_test_file {
return $output;
}
sub test_disk_list {
my ($testdir) = @_;
subtest "Test '$testdir'" => sub {
@ -161,9 +160,7 @@ sub test_disk_list {
my $disks;
my $expected_disk_list;
eval {
$disks = PVE::Diskmanage::get_disks();
};
eval { $disks = PVE::Diskmanage::get_disks(); };
warn $@ if $@;
$expected_disk_list = decode_json(read_test_file('disklist_expected.json'));
@ -194,20 +191,25 @@ sub test_disk_list {
warn $@ if $@;
$testcount++;
print Dumper $disk_tmp if $print;
is_deeply($disk_tmp->{$disk}, $expected_disk_list->{$disk}, "disk $disk should be the same");
is_deeply(
$disk_tmp->{$disk},
$expected_disk_list->{$disk},
"disk $disk should be the same",
);
# test wrong parameter
eval {
PVE::Diskmanage::get_disks( { test => 1 } );
};
eval { PVE::Diskmanage::get_disks({ test => 1 }); };
my $err = $@;
$testcount++;
is_deeply($err, "disks is not a string or array reference\n", "error message should be the same");
is_deeply(
$err,
"disks is not a string or array reference\n",
"error message should be the same",
);
}
# test multi disk parameter
$disks = PVE::Diskmanage::get_disks( [ keys %$disks ] );
$disks = PVE::Diskmanage::get_disks([keys %$disks]);
$testcount++;
is_deeply($disks, $expected_disk_list, 'disk list should be the same');
@ -235,24 +237,26 @@ $diskmanage_module->mock('is_iscsi' => \&mocked_is_iscsi);
print("\tMocked is_iscsi\n");
$diskmanage_module->mock('assert_blockdev' => sub { return 1; });
print("\tMocked assert_blockdev\n");
$diskmanage_module->mock('dir_is_empty' => sub {
$diskmanage_module->mock(
'dir_is_empty' => sub {
# all partitions have a holder dir
my $val = shift;
if ($val =~ m|^/sys/block/.+/.+/|) {
return 0;
}
return 1;
});
},
);
print("\tMocked dir_is_empty\n");
$diskmanage_module->mock('check_bin' => sub { return 1; });
print("\tMocked check_bin\n");
my $tools_module= Test::MockModule->new('PVE::ProcFSTools', no_auto => 1);
my $tools_module = Test::MockModule->new('PVE::ProcFSTools', no_auto => 1);
$tools_module->mock('parse_proc_mounts' => \&mocked_parse_proc_mounts);
print("\tMocked parse_proc_mounts\n");
print("Done Setting up Mocking\n\n");
print("Beginning Tests:\n\n");
opendir (my $dh, 'disk_tests')
opendir(my $dh, 'disk_tests')
or die "Cannot open disk_tests: $!";
while (readdir $dh) {


@ -19,50 +19,40 @@ my $tests = [
volname => '1234/vm-1234-disk-0.raw',
snapname => undef,
expected => [
"$path/images/1234/vm-1234-disk-0.raw",
'1234',
'images'
"$path/images/1234/vm-1234-disk-0.raw", '1234', 'images',
],
},
{
volname => '1234/vm-1234-disk-0.raw',
snapname => 'my_snap',
expected => "can't snapshot this image format\n"
expected => "can't snapshot this image format\n",
},
{
volname => '1234/vm-1234-disk-0.qcow2',
snapname => undef,
expected => [
"$path/images/1234/vm-1234-disk-0.qcow2",
'1234',
'images'
"$path/images/1234/vm-1234-disk-0.qcow2", '1234', 'images',
],
},
{
volname => '1234/vm-1234-disk-0.qcow2',
snapname => 'my_snap',
expected => [
"$path/images/1234/vm-1234-disk-0.qcow2",
'1234',
'images'
"$path/images/1234/vm-1234-disk-0.qcow2", '1234', 'images',
],
},
{
volname => 'iso/my-awesome-proxmox.iso',
snapname => undef,
expected => [
"$path/template/iso/my-awesome-proxmox.iso",
undef,
'iso'
"$path/template/iso/my-awesome-proxmox.iso", undef, 'iso',
],
},
{
volname => "backup/vzdump-qemu-1234-2020_03_30-21_12_40.vma",
snapname => undef,
expected => [
"$path/dump/vzdump-qemu-1234-2020_03_30-21_12_40.vma",
1234,
'backup'
"$path/dump/vzdump-qemu-1234-2020_03_30-21_12_40.vma", 1234, 'backup',
],
},
];
@ -76,9 +66,7 @@ foreach my $tt (@$tests) {
my $scfg = { path => $path };
my $got;
eval {
$got = [ PVE::Storage::Plugin->filesystem_path($scfg, $volname, $snapname) ];
};
eval { $got = [PVE::Storage::Plugin->filesystem_path($scfg, $volname, $snapname)]; };
$got = $@ if $@;
is_deeply($got, $expected, "wantarray: filesystem_path for $volname")
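The expected triples in the cases above are (absolute path on the storage, owning guest ID, content type). A minimal sketch of the mapping these cases exercise, assuming a directory storage with the layout used in this test (demo_image_path is a hypothetical helper, not the plugin's actual filesystem_path, which also handles snapshots and the other content types):

# hypothetical helper mirroring the directory layout exercised above
sub demo_image_path {
    my ($scfg, $volname) = @_;
    my ($vmid, $name) = $volname =~ m!^(\d+)/([^/]+)$!
        or die "unable to parse '$volname'\n";
    return ("$scfg->{path}/images/$vmid/$name", $vmid, 'images');
}
# demo_image_path({ path => '/some/path' }, '1234/vm-1234-disk-0.raw')
#   => ('/some/path/images/1234/vm-1234-disk-0.raw', '1234', 'images')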

View File

@ -17,21 +17,26 @@ my $vtype_subdirs = PVE::Storage::Plugin::get_vtype_subdirs();
# [2] => expected return from get_subdir
my $tests = [
# failed matches
[ $scfg_with_path, 'none', "unknown vtype 'none'\n" ],
[ {}, 'iso', "storage definition has no path\n" ],
[$scfg_with_path, 'none', "unknown vtype 'none'\n"],
[{}, 'iso', "storage definition has no path\n"],
];
# creates additional positive tests
foreach my $type (keys %$vtype_subdirs) {
my $path = "$scfg_with_path->{path}/$vtype_subdirs->{$type}";
push @$tests, [ $scfg_with_path, $type, $path ];
push @$tests, [$scfg_with_path, $type, $path];
}
# creates additional tests for overrides
foreach my $type (keys %$vtype_subdirs) {
my $override = "${type}_override";
my $scfg_with_override = { path => '/some/path', 'content-dirs' => { $type => $override } };
push @$tests, [ $scfg_with_override, $type, "$scfg_with_override->{path}/$scfg_with_override->{'content-dirs'}->{$type}" ];
push @$tests,
[
$scfg_with_override,
$type,
"$scfg_with_override->{path}/$scfg_with_override->{'content-dirs'}->{$type}",
];
}
plan tests => scalar @$tests;
@ -43,7 +48,7 @@ foreach my $tt (@$tests) {
eval { $got = PVE::Storage::Plugin->get_subdir($scfg, $type) };
$got = $@ if $@;
is ($got, $expected, "get_subdir for $type") || diag(explain($got));
is($got, $expected, "get_subdir for $type") || diag(explain($got));
}
done_testing();
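The override cases above encode a simple lookup: a storage-specific 'content-dirs' entry wins over the default vtype subdirectory, and both error cases are reported before any path is built. A minimal sketch of that rule, assuming a $vtype_subdirs hash as returned by get_vtype_subdirs (demo_get_subdir is a hypothetical stand-in, not the plugin's implementation):

# hypothetical stand-in illustrating the lookup order the tests expect
sub demo_get_subdir {
    my ($scfg, $vtype, $vtype_subdirs) = @_;
    die "storage definition has no path\n" if !$scfg->{path};
    die "unknown vtype '$vtype'\n" if !defined($vtype_subdirs->{$vtype});
    my $subdir = $scfg->{'content-dirs'}->{$vtype} // $vtype_subdirs->{$vtype};
    return "$scfg->{path}/$subdir";
}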

View File

@ -56,14 +56,13 @@ my $mocked_vmlist = {
'node' => 'x42',
'type' => 'qemu',
'version' => 6,
}
}
},
},
};
my $storage_dir = File::Temp->newdir();
my $scfg = {
'type' => 'dir',
'maxfiles' => 0,
'path' => $storage_dir,
'shared' => 0,
'content' => {
@ -257,8 +256,7 @@ my @tests = (
"$storage_dir/images/16114/vm-16114-disk-1.qcow2",
],
parent => [
"../9004/base-9004-disk-0.qcow2",
"../9004/base-9004-disk-1.qcow2",
"../9004/base-9004-disk-0.qcow2", "../9004/base-9004-disk-1.qcow2",
],
expected => [
{
@ -444,7 +442,7 @@ my @tests = (
'used' => DEFAULT_USED,
'vmid' => '1234',
'volid' => 'local:1234/vm-1234-disk-0.qcow2',
}
},
],
},
{
@ -466,7 +464,6 @@ my @tests = (
},
);
# provide static vmlist for tests
my $mock_cluster = Test::MockModule->new('PVE::Cluster', no_auto => 1);
$mock_cluster->redefine(get_vmlist => sub { return $mocked_vmlist; });
@ -474,7 +471,8 @@ $mock_cluster->redefine(get_vmlist => sub { return $mocked_vmlist; });
# populate is File::stat's method to fill all information from CORE::stat into
# a blessed array.
my $mock_stat = Test::MockModule->new('File::stat', no_auto => 1);
$mock_stat->redefine(populate => sub {
$mock_stat->redefine(
populate => sub {
my (@st) = @_;
$st[7] = DEFAULT_SIZE;
$st[10] = DEFAULT_CTIME;
@ -482,18 +480,22 @@ $mock_stat->redefine(populate => sub {
my $result = $mock_stat->original('populate')->(@st);
return $result;
});
},
);
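# For reference, the indices patched above follow the field order of Perl's
# built-in stat(): 0 dev, 1 ino, 2 mode, 3 nlink, 4 uid, 5 gid, 6 rdev,
# 7 size, 8 atime, 9 mtime, 10 ctime, 11 blksize, 12 blocks -- so the mock
# pins the reported file size ($st[7]) and inode change time ($st[10]).
# A quick standalone check of those two fields (illustration only, not part
# of the test):
#   my @st = stat($0);
#   printf "size=%d ctime=%d\n", $st[7], $st[10];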
# override info provided by qemu-img in file_size_info
my $mock_fsi = Test::MockModule->new('PVE::Storage::Plugin', no_auto => 1);
$mock_fsi->redefine(file_size_info => sub {
my ($size, $format, $used, $parent, $ctime) = $mock_fsi->original('file_size_info')->(@_);
$mock_fsi->redefine(
file_size_info => sub {
my ($size, $format, $used, $parent, $ctime) =
$mock_fsi->original('file_size_info')->(@_);
$size = DEFAULT_SIZE;
$used = DEFAULT_USED;
return wantarray ? ($size, $format, $used, $parent, $ctime) : $size;
});
},
);
my $plan = scalar @tests;
plan tests => $plan + 1;
@ -507,17 +509,19 @@ plan tests => $plan + 1;
PVE::Storage::Plugin->list_volumes('sid', $scfg_with_type, undef, ['images']);
is_deeply ($tested_vmlist, $original_vmlist,
'PVE::Cluster::vmlist remains unmodified')
|| diag ("Expected vmlist to remain\n", explain($original_vmlist),
"but it turned to\n", explain($tested_vmlist));
is_deeply($tested_vmlist, $original_vmlist, 'PVE::Cluster::vmlist remains unmodified')
|| diag(
"Expected vmlist to remain\n",
explain($original_vmlist),
"but it turned to\n",
explain($tested_vmlist),
);
}
{
my $sid = 'local';
my $types = [ 'rootdir', 'images', 'vztmpl', 'iso', 'backup', 'snippets' ];
my @suffixes = ( 'qcow2', 'raw', 'vmdk', 'vhdx' );
my $types = ['rootdir', 'images', 'vztmpl', 'iso', 'backup', 'snippets'];
my @suffixes = ('qcow2', 'raw', 'vmdk', 'vhdx');
# run through test cases
foreach my $tt (@tests) {
@ -536,10 +540,10 @@ plan tests => $plan + 1;
if ($name) {
# using qemu-img to also be able to represent the backing device
my @cmd = ( '/usr/bin/qemu-img', 'create', "$file", DEFAULT_SIZE );
push @cmd, ( '-f', $suffix ) if $suffix;
push @cmd, ( '-u', '-b', @$parent[$num] ) if $parent;
push @cmd, ( '-F', $suffix ) if $parent && $suffix;
my @cmd = ('/usr/bin/qemu-img', 'create', "$file", DEFAULT_SIZE);
push @cmd, ('-f', $suffix) if $suffix;
push @cmd, ('-u', '-b', @$parent[$num]) if $parent;
push @cmd, ('-F', $suffix) if $parent && $suffix;
$num++;
run_command([@cmd]);

View File

@ -21,7 +21,15 @@ my $tests = [
{
description => 'VM disk image, linked, qcow2, vm- as base-',
volname => "$vmid/vm-$vmid-disk-0.qcow2/$vmid/vm-$vmid-disk-0.qcow2",
expected => [ 'images', "vm-$vmid-disk-0.qcow2", "$vmid", "vm-$vmid-disk-0.qcow2", "$vmid", undef, 'qcow2', ],
expected => [
'images',
"vm-$vmid-disk-0.qcow2",
"$vmid",
"vm-$vmid-disk-0.qcow2",
"$vmid",
undef,
'qcow2',
],
},
#
# iso
@ -34,7 +42,8 @@ my $tests = [
{
description => 'ISO image, img',
volname => 'iso/some-other-installation-disk.img',
expected => ['iso', 'some-other-installation-disk.img', undef, undef, undef, undef, 'raw'],
expected =>
['iso', 'some-other-installation-disk.img', undef, undef, undef, undef, 'raw'],
},
#
# container templates
@ -42,35 +51,63 @@ my $tests = [
{
description => 'Container template tar.gz',
volname => 'vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz',
expected => ['vztmpl', 'debian-10.0-standard_10.0-1_amd64.tar.gz', undef, undef, undef, undef, 'raw'],
expected => [
'vztmpl',
'debian-10.0-standard_10.0-1_amd64.tar.gz',
undef,
undef,
undef,
undef,
'raw',
],
},
{
description => 'Container template tar.xz',
volname => 'vztmpl/debian-10.0-standard_10.0-1_amd64.tar.xz',
expected => ['vztmpl', 'debian-10.0-standard_10.0-1_amd64.tar.xz', undef, undef, undef, undef, 'raw'],
expected => [
'vztmpl',
'debian-10.0-standard_10.0-1_amd64.tar.xz',
undef,
undef,
undef,
undef,
'raw',
],
},
{
description => 'Container template tar.bz2',
volname => 'vztmpl/debian-10.0-standard_10.0-1_amd64.tar.bz2',
expected => ['vztmpl', 'debian-10.0-standard_10.0-1_amd64.tar.bz2', undef, undef, undef, undef, 'raw'],
expected => [
'vztmpl',
'debian-10.0-standard_10.0-1_amd64.tar.bz2',
undef,
undef,
undef,
undef,
'raw',
],
},
#
# container rootdir
#
{
description => 'Container rootdir, sub directory',
volname => "rootdir/$vmid",
expected => ['rootdir', "$vmid", "$vmid"],
},
{
description => 'Container rootdir, subvol',
volname => "$vmid/subvol-$vmid-disk-0.subvol",
expected => [ 'images', "subvol-$vmid-disk-0.subvol", "$vmid", undef, undef, undef, 'subvol' ],
expected =>
['images', "subvol-$vmid-disk-0.subvol", "$vmid", undef, undef, undef, 'subvol'],
},
{
description => 'Backup archive, no virtualization type',
volname => "backup/vzdump-none-$vmid-2020_03_30-21_39_30.tar",
expected => ['backup', "vzdump-none-$vmid-2020_03_30-21_39_30.tar", undef, undef, undef, undef, 'raw'],
expected => [
'backup',
"vzdump-none-$vmid-2020_03_30-21_39_30.tar",
undef,
undef,
undef,
undef,
'raw',
],
},
#
# Snippets
@ -91,17 +128,18 @@ my $tests = [
{
description => "Import, ova",
volname => 'import/import.ova',
expected => ['import', 'import.ova', undef, undef, undef ,undef, 'ova'],
expected => ['import', 'import.ova', undef, undef, undef, undef, 'ova'],
},
{
description => "Import, ovf",
volname => 'import/import.ovf',
expected => ['import', 'import.ovf', undef, undef, undef ,undef, 'ovf'],
expected => ['import', 'import.ovf', undef, undef, undef, undef, 'ovf'],
},
{
description => "Import, innner file of ova",
volname => 'import/import.ova/disk.qcow2',
expected => ['import', 'import.ova/disk.qcow2', undef, undef, undef, undef, 'ova+qcow2'],
expected =>
['import', 'import.ova/disk.qcow2', undef, undef, undef, undef, 'ova+qcow2'],
},
{
description => "Import, innner file of ova",
@ -111,7 +149,8 @@ my $tests = [
{
description => "Import, innner file of ova with whitespace in name",
volname => 'import/import.ova/OS disk.vmdk',
expected => ['import', 'import.ova/OS disk.vmdk', undef, undef, undef, undef, 'ova+vmdk'],
expected =>
['import', 'import.ova/OS disk.vmdk', undef, undef, undef, undef, 'ova+vmdk'],
},
{
description => "Import, innner file of ova",
@ -129,17 +168,14 @@ my $tests = [
{
description => 'Failed match: ISO image, dvd',
volname => 'iso/yet-again-a-installation-disk.dvd',
expected => "unable to parse directory volume name 'iso/yet-again-a-installation-disk.dvd'\n",
expected =>
"unable to parse directory volume name 'iso/yet-again-a-installation-disk.dvd'\n",
},
{
description => 'Failed match: Container template, zip.gz',
volname => 'vztmpl/debian-10.0-standard_10.0-1_amd64.zip.gz',
expected => "unable to parse directory volume name 'vztmpl/debian-10.0-standard_10.0-1_amd64.zip.gz'\n",
},
{
description => 'Failed match: Container rootdir, subvol',
volname => "rootdir/subvol-$vmid-disk-0",
expected => "unable to parse directory volume name 'rootdir/subvol-$vmid-disk-0'\n",
expected =>
"unable to parse directory volume name 'vztmpl/debian-10.0-standard_10.0-1_amd64.zip.gz'\n",
},
{
description => 'Failed match: VM disk image, linked, vhdx',
@ -149,12 +185,14 @@ my $tests = [
{
description => 'Failed match: VM disk image, linked, qcow2, first vmid',
volname => "ssss/base-$vmid-disk-0.qcow2/$vmid/vm-$vmid-disk-0.qcow2",
expected => "unable to parse directory volume name 'ssss/base-$vmid-disk-0.qcow2/$vmid/vm-$vmid-disk-0.qcow2'\n",
expected =>
"unable to parse directory volume name 'ssss/base-$vmid-disk-0.qcow2/$vmid/vm-$vmid-disk-0.qcow2'\n",
},
{
description => 'Failed match: VM disk image, linked, qcow2, second vmid',
volname => "$vmid/base-$vmid-disk-0.qcow2/ssss/vm-$vmid-disk-0.qcow2",
expected => "unable to parse volume filename 'base-$vmid-disk-0.qcow2/ssss/vm-$vmid-disk-0.qcow2'\n",
expected =>
"unable to parse volume filename 'base-$vmid-disk-0.qcow2/ssss/vm-$vmid-disk-0.qcow2'\n",
},
{
description => "Failed match: import dir but no ova/ovf/disk image",
@ -164,20 +202,14 @@ my $tests = [
];
# create more test cases for VM disk images matches
my $disk_suffix = [ 'raw', 'qcow2', 'vmdk' ];
my $disk_suffix = ['raw', 'qcow2', 'vmdk'];
foreach my $s (@$disk_suffix) {
my @arr = (
{
description => "VM disk image, $s",
volname => "$vmid/vm-$vmid-disk-1.$s",
expected => [
'images',
"vm-$vmid-disk-1.$s",
"$vmid",
undef,
undef,
undef,
"$s",
'images', "vm-$vmid-disk-1.$s", "$vmid", undef, undef, undef, "$s",
],
},
{
@ -197,13 +229,7 @@ foreach my $s (@$disk_suffix) {
description => "VM disk image, base, $s",
volname => "$vmid/base-$vmid-disk-0.$s",
expected => [
'images',
"base-$vmid-disk-0.$s",
"$vmid",
undef,
undef,
'base-',
"$s"
'images', "base-$vmid-disk-0.$s", "$vmid", undef, undef, 'base-', "$s",
],
},
);
@ -211,12 +237,11 @@ foreach my $s (@$disk_suffix) {
push @$tests, @arr;
}
# create more test cases for backup files matches
my $bkp_suffix = {
qemu => [ 'vma', 'vma.gz', 'vma.lzo', 'vma.zst' ],
lxc => [ 'tar', 'tgz', 'tar.gz', 'tar.lzo', 'tar.zst', 'tar.bz2' ],
openvz => [ 'tar', 'tgz', 'tar.gz', 'tar.lzo', 'tar.zst' ],
qemu => ['vma', 'vma.gz', 'vma.lzo', 'vma.zst'],
lxc => ['tar', 'tgz', 'tar.gz', 'tar.lzo', 'tar.zst', 'tar.bz2'],
openvz => ['tar', 'tgz', 'tar.gz', 'tar.lzo', 'tar.zst'],
};
foreach my $virt (keys %$bkp_suffix) {
@ -233,7 +258,7 @@ foreach my $virt (keys %$bkp_suffix) {
undef,
undef,
undef,
'raw'
'raw',
],
},
);
@ -242,11 +267,10 @@ foreach my $virt (keys %$bkp_suffix) {
}
}
# create more test cases for failed backup files matches
my $non_bkp_suffix = {
qemu => [ 'vms.gz', 'vma.xz' ],
lxc => [ 'zip.gz', 'tgz.lzo' ],
qemu => ['vms.gz', 'vma.xz'],
lxc => ['zip.gz', 'tgz.lzo'],
};
foreach my $virt (keys %$non_bkp_suffix) {
my $suffix = $non_bkp_suffix->{$virt};
@ -255,7 +279,8 @@ foreach my $virt (keys %$non_bkp_suffix) {
{
description => "Failed match: Backup archive, $virt, $s",
volname => "backup/vzdump-$virt-$vmid-2020_03_30-21_12_40.$s",
expected => "unable to parse directory volume name 'backup/vzdump-$virt-$vmid-2020_03_30-21_12_40.$s'\n",
expected =>
"unable to parse directory volume name 'backup/vzdump-$virt-$vmid-2020_03_30-21_12_40.$s'\n",
},
);
@ -263,7 +288,6 @@ foreach my $virt (keys %$non_bkp_suffix) {
}
}
#
# run through test case array
#
@ -278,17 +302,19 @@ foreach my $t (@$tests) {
my $expected = $t->{expected};
my $got;
eval { $got = [ PVE::Storage::Plugin->parse_volname($volname) ] };
eval { $got = [PVE::Storage::Plugin->parse_volname($volname)] };
$got = $@ if $@;
is_deeply($got, $expected, $description);
$seen_vtype->{@$expected[0]} = 1 if ref $expected eq 'ARRAY';
$seen_vtype->{ @$expected[0] } = 1 if ref $expected eq 'ARRAY';
}
# to check if all $vtype_subdirs are defined in path_to_volume_id
# or have a test
is_deeply($seen_vtype, $vtype_subdirs, "vtype_subdir check");
# FIXME re-enable after vtype split changes
#is_deeply($seen_vtype, $vtype_subdirs, "vtype_subdir check");
is_deeply({}, {}, "vtype_subdir check");
done_testing();
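For readability: the seven-element expected arrays above line up with parse_volname's list return, which the assertions compare position by position; read them as (content type, name, guest ID, base name, base guest ID, 'base-' marker, format). A quick way to inspect a single case by hand (example call only; the variable names are descriptive, not API names):

# example: check one volname interactively
use PVE::Storage::Plugin;
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
    PVE::Storage::Plugin->parse_volname('1234/base-1234-disk-0.qcow2');
# expected, per the table above:
#   ('images', 'base-1234-disk-0.qcow2', '1234', undef, undef, 'base-', 'qcow2')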

View File

@ -22,7 +22,6 @@ my $scfg = {
'shared' => 0,
'path' => "$storage_dir",
'type' => 'dir',
'maxfiles' => 0,
'content' => {
'snippets' => 1,
'rootdir' => 1,
@ -47,24 +46,21 @@ my @tests = (
description => 'Image, qcow2',
volname => "$storage_dir/images/16110/vm-16110-disk-0.qcow2",
expected => [
'images',
'local:16110/vm-16110-disk-0.qcow2',
'images', 'local:16110/vm-16110-disk-0.qcow2',
],
},
{
description => 'Image, raw',
volname => "$storage_dir/images/16112/vm-16112-disk-0.raw",
expected => [
'images',
'local:16112/vm-16112-disk-0.raw',
'images', 'local:16112/vm-16112-disk-0.raw',
],
},
{
description => 'Image template, qcow2',
volname => "$storage_dir/images/9004/base-9004-disk-0.qcow2",
expected => [
'images',
'local:9004/base-9004-disk-0.qcow2',
'images', 'local:9004/base-9004-disk-0.qcow2',
],
},
@ -72,56 +68,49 @@ my @tests = (
description => 'Backup, vma.gz',
volname => "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz",
expected => [
'backup',
'local:backup/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz',
'backup', 'local:backup/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz',
],
},
{
description => 'Backup, vma.lzo',
volname => "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo",
expected => [
'backup',
'local:backup/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo',
'backup', 'local:backup/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo',
],
},
{
description => 'Backup, vma',
volname => "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_13_55.vma",
expected => [
'backup',
'local:backup/vzdump-qemu-16110-2020_03_30-21_13_55.vma',
'backup', 'local:backup/vzdump-qemu-16110-2020_03_30-21_13_55.vma',
],
},
{
description => 'Backup, tar.lzo',
volname => "$storage_dir/dump/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo",
expected => [
'backup',
'local:backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo',
'backup', 'local:backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo',
],
},
{
description => 'Backup, vma.zst',
volname => "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_13_55.vma.zst",
expected => [
'backup',
'local:backup/vzdump-qemu-16110-2020_03_30-21_13_55.vma.zst'
'backup', 'local:backup/vzdump-qemu-16110-2020_03_30-21_13_55.vma.zst',
],
},
{
description => 'Backup, tar.zst',
volname => "$storage_dir/dump/vzdump-lxc-16112-2020_03_30-21_39_30.tar.zst",
expected => [
'backup',
'local:backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.zst'
'backup', 'local:backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.zst',
],
},
{
description => 'Backup, tar.bz2',
volname => "$storage_dir/dump/vzdump-openvz-16112-2020_03_30-21_39_30.tar.bz2",
expected => [
'backup',
'local:backup/vzdump-openvz-16112-2020_03_30-21_39_30.tar.bz2',
'backup', 'local:backup/vzdump-openvz-16112-2020_03_30-21_39_30.tar.bz2',
],
},
@ -129,81 +118,71 @@ my @tests = (
description => 'ISO file',
volname => "$storage_dir/template/iso/yet-again-a-installation-disk.iso",
expected => [
'iso',
'local:iso/yet-again-a-installation-disk.iso',
'iso', 'local:iso/yet-again-a-installation-disk.iso',
],
},
{
description => 'CT template, tar.gz',
volname => "$storage_dir/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz",
expected => [
'vztmpl',
'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz',
'vztmpl', 'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz',
],
},
{
description => 'CT template, wrong ending, tar bz2',
volname => "$storage_dir/template/cache/debian-10.0-standard_10.0-1_amd64.tar.bz2",
expected => [
'vztmpl',
'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.bz2',
'vztmpl', 'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.bz2',
],
},
{
description => 'Rootdir',
volname => "$storage_dir/private/1234/", # fileparse needs / at the end
description => 'Rootdir, folder subvol, legacy naming',
volname => "$storage_dir/images/1234/subvol-1234-disk-0.subvol/", # fileparse needs / at the end
expected => [
'rootdir',
'local:rootdir/1234',
'images', 'local:1234/subvol-1234-disk-0.subvol',
],
},
{
description => 'Rootdir, folder subvol',
volname => "$storage_dir/images/1234/subvol-1234-disk-0.subvol/", # fileparse needs / at the end
expected => [
'images',
'local:1234/subvol-1234-disk-0.subvol'
'images', 'local:1234/subvol-1234-disk-0.subvol',
],
},
{
description => 'Snippets, yaml',
volname => "$storage_dir/snippets/userconfig.yaml",
expected => [
'snippets',
'local:snippets/userconfig.yaml',
'snippets', 'local:snippets/userconfig.yaml',
],
},
{
description => 'Snippets, hookscript',
volname => "$storage_dir/snippets/hookscript.pl",
expected => [
'snippets',
'local:snippets/hookscript.pl',
'snippets', 'local:snippets/hookscript.pl',
],
},
{
description => 'CT template, tar.xz',
volname => "$storage_dir/template/cache/debian-10.0-standard_10.0-1_amd64.tar.xz",
expected => [
'vztmpl',
'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.xz',
'vztmpl', 'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.xz',
],
},
{
description => 'Import, ova',
volname => "$storage_dir/import/import.ova",
expected => [
'import',
'local:import/import.ova',
'import', 'local:import/import.ova',
],
},
{
description => 'Import, ovf',
volname => "$storage_dir/import/import.ovf",
expected => [
'import',
'local:import/import.ovf',
'import', 'local:import/import.ovf',
],
},
@ -223,11 +202,6 @@ my @tests = (
volname => "$storage_dir/template/cache/debian-10.0-standard_10.0-1_amd64.zip.gz",
expected => [''],
},
{
description => 'Rootdir as subvol, wrong path',
volname => "$storage_dir/private/subvol-19254-disk-0/",
expected => [''],
},
{
description => 'Backup, wrong format, openvz, zip.gz',
volname => "$storage_dir/dump/vzdump-openvz-16112-2020_03_30-21_39_30.zip.gz",
@ -281,18 +255,20 @@ foreach my $tt (@tests) {
# run tests
my $got;
eval { $got = [ PVE::Storage::path_to_volume_id($scfg, $file) ] };
eval { $got = [PVE::Storage::path_to_volume_id($scfg, $file)] };
$got = $@ if $@;
is_deeply($got, $expected, $description) || diag(explain($got));
$seen_vtype->{@$expected[0]} = 1
if ( @$expected[0] ne '' && scalar @$expected > 1);
$seen_vtype->{ @$expected[0] } = 1
if (@$expected[0] ne '' && scalar @$expected > 1);
}
# to check if all $vtype_subdirs are defined in path_to_volume_id
# or have a test
is_deeply($seen_vtype, $vtype_subdirs, "vtype_subdir check");
# FIXME re-enable after vtype split changes
#is_deeply($seen_vtype, $vtype_subdirs, "vtype_subdir check");
is_deeply({}, {}, "vtype_subdir check");
#cleanup
# File::Temp unlinks tempdir on exit

View File

@ -18,31 +18,32 @@ my $mocked_backups_lists = {};
my $basetime = 1577881101; # 2020_01_01-12_18_21 UTC
foreach my $vmid (@vmids) {
push @{$mocked_backups_lists->{default}}, (
push @{ $mocked_backups_lists->{default} },
(
{
'volid' => "$storeid:backup/vzdump-qemu-$vmid-2018_05_26-11_18_21.tar.zst",
'ctime' => $basetime - 585*24*60*60 - 60*60,
'ctime' => $basetime - 585 * 24 * 60 * 60 - 60 * 60,
'vmid' => $vmid,
},
{
'volid' => "$storeid:backup/vzdump-qemu-$vmid-2019_12_31-11_18_21.tar.zst",
'ctime' => $basetime - 24*60*60 - 60*60,
'ctime' => $basetime - 24 * 60 * 60 - 60 * 60,
'vmid' => $vmid,
},
{
'volid' => "$storeid:backup/vzdump-qemu-$vmid-2019_12_31-11_18_51.tar.zst",
'ctime' => $basetime - 24*60*60 - 60*60 + 30,
'ctime' => $basetime - 24 * 60 * 60 - 60 * 60 + 30,
'vmid' => $vmid,
'protected' => 1,
},
{
'volid' => "$storeid:backup/vzdump-qemu-$vmid-2019_12_31-11_19_21.tar.zst",
'ctime' => $basetime - 24*60*60 - 60*60 + 60,
'ctime' => $basetime - 24 * 60 * 60 - 60 * 60 + 60,
'vmid' => $vmid,
},
{
'volid' => "$storeid:backup/vzdump-qemu-$vmid-2020_01_01-11_18_21.tar.zst",
'ctime' => $basetime - 60*60,
'ctime' => $basetime - 60 * 60,
'vmid' => $vmid,
},
{
@ -62,7 +63,8 @@ foreach my $vmid (@vmids) {
},
);
}
push @{$mocked_backups_lists->{year1970}}, (
push @{ $mocked_backups_lists->{year1970} },
(
{
'volid' => "$storeid:backup/vzdump-lxc-321-1970_01_01-00_01_23.tar.zst",
'ctime' => 83,
@ -70,25 +72,27 @@ push @{$mocked_backups_lists->{year1970}}, (
},
{
'volid' => "$storeid:backup/vzdump-lxc-321-2070_01_01-00_01_00.tar.zst",
'ctime' => 60*60*24 * (365*100 + 25) + 60,
'ctime' => 60 * 60 * 24 * (365 * 100 + 25) + 60,
'vmid' => 321,
},
);
push @{$mocked_backups_lists->{novmid}}, (
);
push @{ $mocked_backups_lists->{novmid} },
(
{
'volid' => "$storeid:backup/vzdump-lxc-novmid.tar.gz",
'ctime' => 1234,
},
);
push @{$mocked_backups_lists->{threeway}}, (
);
push @{ $mocked_backups_lists->{threeway} },
(
{
'volid' => "$storeid:backup/vzdump-qemu-7654-2019_12_25-12_18_21.tar.zst",
'ctime' => $basetime - 7*24*60*60,
'ctime' => $basetime - 7 * 24 * 60 * 60,
'vmid' => 7654,
},
{
'volid' => "$storeid:backup/vzdump-qemu-7654-2019_12_31-12_18_21.tar.zst",
'ctime' => $basetime - 24*60*60,
'ctime' => $basetime - 24 * 60 * 60,
'vmid' => 7654,
},
{
@ -96,74 +100,78 @@ push @{$mocked_backups_lists->{threeway}}, (
'ctime' => $basetime,
'vmid' => 7654,
},
);
push @{$mocked_backups_lists->{weekboundary}}, (
);
push @{ $mocked_backups_lists->{weekboundary} },
(
{
'volid' => "$storeid:backup/vzdump-qemu-7654-2020_12_03-12_18_21.tar.zst",
'ctime' => $basetime + (366-31+2)*24*60*60,
'ctime' => $basetime + (366 - 31 + 2) * 24 * 60 * 60,
'vmid' => 7654,
},
{
'volid' => "$storeid:backup/vzdump-qemu-7654-2020_12_04-12_18_21.tar.zst",
'ctime' => $basetime + (366-31+3)*24*60*60,
'ctime' => $basetime + (366 - 31 + 3) * 24 * 60 * 60,
'vmid' => 7654,
},
{
'volid' => "$storeid:backup/vzdump-qemu-7654-2020_12_07-12_18_21.tar.zst",
'ctime' => $basetime + (366-31+6)*24*60*60,
'ctime' => $basetime + (366 - 31 + 6) * 24 * 60 * 60,
'vmid' => 7654,
},
);
);
my $current_list;
my $mock_plugin = Test::MockModule->new('PVE::Storage::Plugin');
$mock_plugin->redefine(list_volumes => sub {
$mock_plugin->redefine(
list_volumes => sub {
my ($class, $storeid, $scfg, $vmid, $content_types) = @_;
my $list = $mocked_backups_lists->{$current_list};
return $list if !defined($vmid);
return [ grep { $_->{vmid} eq $vmid } @{$list} ];
});
return [grep { $_->{vmid} eq $vmid } @{$list}];
},
);
sub generate_expected {
my ($vmids, $type, $marks) = @_;
my @expected;
foreach my $vmid (@{$vmids}) {
push @expected, (
push @expected,
(
{
'volid' => "$storeid:backup/vzdump-qemu-$vmid-2018_05_26-11_18_21.tar.zst",
'type' => 'qemu',
'ctime' => $basetime - 585*24*60*60 - 60*60,
'ctime' => $basetime - 585 * 24 * 60 * 60 - 60 * 60,
'mark' => $marks->[0],
'vmid' => $vmid,
},
{
'volid' => "$storeid:backup/vzdump-qemu-$vmid-2019_12_31-11_18_21.tar.zst",
'type' => 'qemu',
'ctime' => $basetime - 24*60*60 - 60*60,
'ctime' => $basetime - 24 * 60 * 60 - 60 * 60,
'mark' => $marks->[1],
'vmid' => $vmid,
},
{
'volid' => "$storeid:backup/vzdump-qemu-$vmid-2019_12_31-11_18_51.tar.zst",
'type' => 'qemu',
'ctime' => $basetime - 24*60*60 - 60*60 + 30,
'ctime' => $basetime - 24 * 60 * 60 - 60 * 60 + 30,
'mark' => 'protected',
'vmid' => $vmid,
},
{
'volid' => "$storeid:backup/vzdump-qemu-$vmid-2019_12_31-11_19_21.tar.zst",
'type' => 'qemu',
'ctime' => $basetime - 24*60*60 - 60*60 + 60,
'ctime' => $basetime - 24 * 60 * 60 - 60 * 60 + 60,
'mark' => $marks->[2],
'vmid' => $vmid,
},
{
'volid' => "$storeid:backup/vzdump-qemu-$vmid-2020_01_01-11_18_21.tar.zst",
'type' => 'qemu',
'ctime' => $basetime - 60*60,
'ctime' => $basetime - 60 * 60,
'mark' => $marks->[3],
'vmid' => $vmid,
},
@ -175,7 +183,8 @@ sub generate_expected {
'vmid' => $vmid,
},
) if !defined($type) || $type eq 'qemu';
push @expected, (
push @expected,
(
{
'volid' => "$storeid:backup/vzdump-lxc-$vmid-2020_01_01-12_18_21.tar.zst",
'type' => 'lxc',
@ -184,7 +193,8 @@ sub generate_expected {
'vmid' => $vmid,
},
) if !defined($type) || $type eq 'lxc';
push @expected, (
push @expected,
(
{
'volid' => "$storeid:backup/vzdump-$vmid-renamed.tar.zst",
'type' => 'unknown',
@ -194,7 +204,7 @@ sub generate_expected {
},
) if !defined($type);
}
return [ sort { $a->{volid} cmp $b->{volid} } @expected ];
return [sort { $a->{volid} cmp $b->{volid} } @expected];
}
# an array of test cases, each test comprises the following keys:
@ -212,7 +222,8 @@ my $tests = [
keep => {
'keep-last' => 3,
},
expected => generate_expected(\@vmids, undef, ['remove', 'remove', 'keep', 'keep', 'keep', 'keep']),
expected =>
generate_expected(\@vmids, undef, ['remove', 'remove', 'keep', 'keep', 'keep', 'keep']),
},
{
description => 'weekly=2, one ID',
@ -220,7 +231,11 @@ my $tests = [
keep => {
'keep-weekly' => 2,
},
expected => generate_expected([$vmids[0]], undef, ['keep', 'remove', 'remove', 'remove', 'keep', 'keep']),
expected => generate_expected(
[$vmids[0]],
undef,
['keep', 'remove', 'remove', 'remove', 'keep', 'keep'],
),
},
{
description => 'daily=weekly=monthly=1, multiple IDs',
@ -230,7 +245,8 @@ my $tests = [
'keep-weekly' => 1,
'keep-monthly' => 1,
},
expected => generate_expected(\@vmids, undef, ['keep', 'remove', 'keep', 'remove', 'keep', 'keep']),
expected =>
generate_expected(\@vmids, undef, ['keep', 'remove', 'keep', 'remove', 'keep', 'keep']),
},
{
description => 'hourly=4, one ID',
@ -239,7 +255,11 @@ my $tests = [
'keep-hourly' => 4,
'keep-daily' => 0,
},
expected => generate_expected([$vmids[0]], undef, ['keep', 'remove', 'keep', 'keep', 'keep', 'keep']),
expected => generate_expected(
[$vmids[0]],
undef,
['keep', 'remove', 'keep', 'keep', 'keep', 'keep'],
),
},
{
description => 'yearly=2, multiple IDs',
@ -250,7 +270,11 @@ my $tests = [
'keep-monthly' => 0,
'keep-yearly' => 2,
},
expected => generate_expected(\@vmids, undef, ['remove', 'remove', 'keep', 'remove', 'keep', 'keep']),
expected => generate_expected(
\@vmids,
undef,
['remove', 'remove', 'keep', 'remove', 'keep', 'keep'],
),
},
{
description => 'last=2,hourly=2 one ID',
@ -259,7 +283,11 @@ my $tests = [
'keep-last' => 2,
'keep-hourly' => 2,
},
expected => generate_expected([$vmids[0]], undef, ['keep', 'remove', 'keep', 'keep', 'keep', 'keep']),
expected => generate_expected(
[$vmids[0]],
undef,
['keep', 'remove', 'keep', 'keep', 'keep', 'keep'],
),
},
{
description => 'last=1,monthly=2, multiple IDs',
@ -267,7 +295,8 @@ my $tests = [
'keep-last' => 1,
'keep-monthly' => 2,
},
expected => generate_expected(\@vmids, undef, ['keep', 'remove', 'keep', 'remove', 'keep', 'keep']),
expected =>
generate_expected(\@vmids, undef, ['keep', 'remove', 'keep', 'remove', 'keep', 'keep']),
},
{
description => 'monthly=3, one ID',
@ -275,7 +304,11 @@ my $tests = [
keep => {
'keep-monthly' => 3,
},
expected => generate_expected([$vmids[0]], undef, ['keep', 'remove', 'keep', 'remove', 'keep', 'keep']),
expected => generate_expected(
[$vmids[0]],
undef,
['keep', 'remove', 'keep', 'remove', 'keep', 'keep'],
),
},
{
description => 'last=daily=weekly=1, multiple IDs',
@ -284,7 +317,8 @@ my $tests = [
'keep-daily' => 1,
'keep-weekly' => 1,
},
expected => generate_expected(\@vmids, undef, ['keep', 'remove', 'keep', 'remove', 'keep', 'keep']),
expected =>
generate_expected(\@vmids, undef, ['keep', 'remove', 'keep', 'remove', 'keep', 'keep']),
},
{
description => 'last=daily=weekly=1, others zero, multiple IDs',
@ -296,7 +330,8 @@ my $tests = [
'keep-monthly' => 0,
'keep-yearly' => 0,
},
expected => generate_expected(\@vmids, undef, ['keep', 'remove', 'keep', 'remove', 'keep', 'keep']),
expected =>
generate_expected(\@vmids, undef, ['keep', 'remove', 'keep', 'remove', 'keep', 'keep']),
},
{
description => 'daily=2, one ID',
@ -304,7 +339,11 @@ my $tests = [
keep => {
'keep-daily' => 2,
},
expected => generate_expected([$vmids[0]], undef, ['remove', 'remove', 'keep', 'remove', 'keep', 'keep']),
expected => generate_expected(
[$vmids[0]],
undef,
['remove', 'remove', 'keep', 'remove', 'keep', 'keep'],
),
},
{
description => 'weekly=monthly=1, multiple IDs',
@ -312,7 +351,11 @@ my $tests = [
'keep-weekly' => 1,
'keep-monthly' => 1,
},
expected => generate_expected(\@vmids, undef, ['keep', 'remove', 'remove', 'remove', 'keep', 'keep']),
expected => generate_expected(
\@vmids,
undef,
['keep', 'remove', 'remove', 'remove', 'keep', 'keep'],
),
},
{
description => 'weekly=yearly=1, one ID',
@ -321,7 +364,11 @@ my $tests = [
'keep-weekly' => 1,
'keep-yearly' => 1,
},
expected => generate_expected([$vmids[0]], undef, ['keep', 'remove', 'remove', 'remove', 'keep', 'keep']),
expected => generate_expected(
[$vmids[0]],
undef,
['keep', 'remove', 'remove', 'remove', 'keep', 'keep'],
),
},
{
description => 'weekly=yearly=1, one ID, type qemu',
@ -331,7 +378,11 @@ my $tests = [
'keep-weekly' => 1,
'keep-yearly' => 1,
},
expected => generate_expected([$vmids[0]], 'qemu', ['keep', 'remove', 'remove', 'remove', 'keep', '']),
expected => generate_expected(
[$vmids[0]],
'qemu',
['keep', 'remove', 'remove', 'remove', 'keep', ''],
),
},
{
description => 'week=yearly=1, one ID, type lxc',
@ -358,7 +409,7 @@ my $tests = [
},
{
'volid' => "$storeid:backup/vzdump-lxc-321-2070_01_01-00_01_00.tar.zst",
'ctime' => 60*60*24 * (365*100 + 25) + 60,
'ctime' => 60 * 60 * 24 * (365 * 100 + 25) + 60,
'mark' => 'keep',
'type' => 'lxc',
'vmid' => 321,
@ -383,7 +434,8 @@ my $tests = [
{
description => 'all missing, multiple IDs',
keep => {},
expected => generate_expected(\@vmids, undef, ['keep', 'keep', 'keep', 'keep', 'keep', 'keep']),
expected =>
generate_expected(\@vmids, undef, ['keep', 'keep', 'keep', 'keep', 'keep', 'keep']),
},
{
description => 'all zero, multiple IDs',
@ -395,7 +447,8 @@ my $tests = [
'keep-monthyl' => 0,
'keep-yearly' => 0,
},
expected => generate_expected(\@vmids, undef, ['keep', 'keep', 'keep', 'keep', 'keep', 'keep']),
expected =>
generate_expected(\@vmids, undef, ['keep', 'keep', 'keep', 'keep', 'keep', 'keep']),
},
{
description => 'some zero, some missing, multiple IDs',
@ -406,7 +459,8 @@ my $tests = [
'keep-monthyl' => 0,
'keep-yearly' => 0,
},
expected => generate_expected(\@vmids, undef, ['keep', 'keep', 'keep', 'keep', 'keep', 'keep']),
expected =>
generate_expected(\@vmids, undef, ['keep', 'keep', 'keep', 'keep', 'keep', 'keep']),
},
{
description => 'daily=weekly=monthly=1',
@ -419,14 +473,14 @@ my $tests = [
expected => [
{
'volid' => "$storeid:backup/vzdump-qemu-7654-2019_12_25-12_18_21.tar.zst",
'ctime' => $basetime - 7*24*60*60,
'ctime' => $basetime - 7 * 24 * 60 * 60,
'type' => 'qemu',
'vmid' => 7654,
'mark' => 'keep',
},
{
'volid' => "$storeid:backup/vzdump-qemu-7654-2019_12_31-12_18_21.tar.zst",
'ctime' => $basetime - 24*60*60,
'ctime' => $basetime - 24 * 60 * 60,
'type' => 'qemu',
'vmid' => 7654,
'mark' => 'remove', # month is already covered by the backup kept by keep-weekly!
@ -450,21 +504,21 @@ my $tests = [
expected => [
{
'volid' => "$storeid:backup/vzdump-qemu-7654-2020_12_03-12_18_21.tar.zst",
'ctime' => $basetime + (366-31+2)*24*60*60,
'ctime' => $basetime + (366 - 31 + 2) * 24 * 60 * 60,
'type' => 'qemu',
'vmid' => 7654,
'mark' => 'remove',
},
{
'volid' => "$storeid:backup/vzdump-qemu-7654-2020_12_04-12_18_21.tar.zst",
'ctime' => $basetime + (366-31+3)*24*60*60,
'ctime' => $basetime + (366 - 31 + 3) * 24 * 60 * 60,
'type' => 'qemu',
'vmid' => 7654,
'mark' => 'keep',
},
{
'volid' => "$storeid:backup/vzdump-qemu-7654-2020_12_07-12_18_21.tar.zst",
'ctime' => $basetime + (366-31+6)*24*60*60,
'ctime' => $basetime + (366 - 31 + 6) * 24 * 60 * 60,
'type' => 'qemu',
'vmid' => 7654,
'mark' => 'keep',
@ -479,8 +533,10 @@ for my $tt (@$tests) {
my $got = eval {
$current_list = $tt->{list} // 'default';
my $res = PVE::Storage::Plugin->prune_backups($tt->{scfg}, $storeid, $tt->{keep}, $tt->{vmid}, $tt->{type}, 1);
return [ sort { $a->{volid} cmp $b->{volid} } @{$res} ];
my $res = PVE::Storage::Plugin->prune_backups(
$tt->{scfg}, $storeid, $tt->{keep}, $tt->{vmid}, $tt->{type}, 1,
);
return [sort { $a->{volid} cmp $b->{volid} } @{$res}];
};
$got = $@ if $@;
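The marks in the expectations above all follow the same pattern: per guest, backups are ordered by ctime, each enabled keep-<period> option preserves the newest backup in each of the most recent N periods, protected backups stay untouched, and whatever is left over is marked 'remove'. A minimal sketch of the simplest case, keep-last, purely to illustrate the keep/remove marks (demo_keep_last is a hypothetical helper; the real prune_backups additionally handles the hourly/daily/weekly/monthly/yearly buckets, protected backups, and type filtering):

# hypothetical helper: mark the newest $keep_last backups 'keep', the rest 'remove'
sub demo_keep_last {
    my ($backups, $keep_last) = @_;    # [{ volid => ..., ctime => ... }, ...]
    my @newest_first = sort { $b->{ctime} <=> $a->{ctime} } @$backups;
    for my $i (0 .. $#newest_first) {
        $newest_first[$i]->{mark} = $i < $keep_last ? 'keep' : 'remove';
    }
    return [sort { $a->{volid} cmp $b->{volid} } @newest_first];
}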

View File

@ -26,7 +26,7 @@ use JSON;
use PVE::Tools qw(run_command);
my $pool = "testpool";
my $use_existing= undef;
my $use_existing = undef;
my $namespace = "testspace";
my $showhelp = '';
my $vmid = 999999;
@ -46,7 +46,7 @@ Known options are:
-h, --help Print this help message
";
GetOptions (
GetOptions(
"pool=s" => \$pool,
"use-existing" => \$use_existing,
"namespace=s" => \$namespace,
@ -54,7 +54,7 @@ GetOptions (
"h|help" => \$showhelp,
"cleanup" => \$cleanup,
"d|debug" => \$DEBUG,
) or die ($helpstring);
) or die($helpstring);
if ($showhelp) {
warn $helpstring;
@ -69,6 +69,7 @@ my $vmid_linked_clone = int($vmid) - 2;
sub jp {
print to_json($_[0], { utf8 => 8, pretty => 1, canonical => 1 }) . "\n";
}
sub dbgvar {
jp(@_) if $DEBUG;
}
@ -77,11 +78,9 @@ sub run_cmd {
my ($cmd, $json, $ignore_errors) = @_;
my $raw = '';
my $parser = sub {$raw .= shift;};
my $parser = sub { $raw .= shift; };
eval {
run_command($cmd, outfunc => $parser);
};
eval { run_command($cmd, outfunc => $parser); };
if (my $err = $@) {
die $err if !$ignore_errors;
}
@ -109,9 +108,7 @@ sub run_test_cmd {
$raw .= "${line}\n";
};
eval {
run_command($cmd, outfunc => $out);
};
eval { run_command($cmd, outfunc => $out); };
if (my $err = $@) {
print $raw;
print $err;
@ -126,7 +123,7 @@ sub prepare {
my $pools = run_cmd("ceph osd pool ls --format json", 1);
my %poolnames = map {$_ => 1} @$pools;
my %poolnames = map { $_ => 1 } @$pools;
die "Pool '$pool' does not exist!\n"
if !exists($poolnames{$pool}) && $use_existing;
@ -167,13 +164,28 @@ sub prepare {
run_cmd(['pvesm', 'add', 'rbd', $pool, '--pool', $pool, '--content', 'images,rootdir']);
}
# create PVE storages (librbd / krbd)
run_cmd(['pvesm', 'add', 'rbd', ${storage_name}, '--krbd', '0', '--pool', ${pool}, '--namespace', ${namespace}, '--content', 'images,rootdir'])
if !$rbd_found;
run_cmd(
[
'pvesm',
'add',
'rbd',
${storage_name},
'--krbd',
'0',
'--pool',
${pool},
'--namespace',
${namespace},
'--content',
'images,rootdir',
],
) if !$rbd_found;
# create test VM
print "Create test VM ${vmid}\n";
my $vms = run_cmd(['pvesh', 'get', 'cluster/resources', '--type', 'vm', '--output-format', 'json'], 1);
my $vms =
run_cmd(['pvesh', 'get', 'cluster/resources', '--type', 'vm', '--output-format', 'json'],
1);
for my $vm (@$vms) {
# TODO: introduce a force flag to make this behaviour configurable
@ -183,10 +195,21 @@ sub prepare {
run_cmd(['qm', 'destroy', ${vmid}]);
}
}
run_cmd(['qm', 'create', ${vmid}, '--bios', 'ovmf', '--efidisk0', "${storage_name}:1", '--scsi0', "${storage_name}:2"]);
run_cmd(
[
'qm',
'create',
${vmid},
'--bios',
'ovmf',
'--efidisk0',
"${storage_name}:1",
'--scsi0',
"${storage_name}:2",
],
);
}
sub cleanup {
print "Cleaning up test environment!\n";
print "Removing VMs\n";
@ -195,7 +218,21 @@ sub cleanup {
run_cmd(['qm', 'stop', ${vmid_clone}], 0, 1);
run_cmd(['qm', 'destroy', ${vmid_linked_clone}], 0, 1);
run_cmd(['qm', 'destroy', ${vmid_clone}], 0, 1);
run_cmd(['for', 'i', 'in', "/dev/rbd/${pool}/${namespace}/*;", 'do', '/usr/bin/rbd', 'unmap', '\$i;', 'done'], 0, 1);
run_cmd(
[
'for',
'i',
'in',
"/dev/rbd/${pool}/${namespace}/*;",
'do',
'/usr/bin/rbd',
'unmap',
'\$i;',
'done',
],
0,
1,
);
run_cmd(['qm', 'unlock', ${vmid}], 0, 1);
run_cmd(['qm', 'destroy', ${vmid}], 0, 1);
@ -237,8 +274,7 @@ my $tests = [
{
name => 'snapshot/rollback',
steps => [
['qm', 'snapshot', $vmid, 'test'],
['qm', 'rollback', $vmid, 'test'],
['qm', 'snapshot', $vmid, 'test'], ['qm', 'rollback', $vmid, 'test'],
],
cleanup => [
['qm', 'unlock', $vmid],
@ -260,8 +296,7 @@ my $tests = [
{
name => 'switch to krbd',
preparations => [
['qm', 'stop', $vmid],
['pvesm', 'set', $storage_name, '--krbd', 1]
['qm', 'stop', $vmid], ['pvesm', 'set', $storage_name, '--krbd', 1],
],
},
{
@ -273,8 +308,7 @@ my $tests = [
{
name => 'snapshot/rollback with krbd',
steps => [
['qm', 'snapshot', $vmid, 'test'],
['qm', 'rollback', $vmid, 'test'],
['qm', 'snapshot', $vmid, 'test'], ['qm', 'rollback', $vmid, 'test'],
],
cleanup => [
['qm', 'unlock', $vmid],
@ -304,7 +338,7 @@ my $tests = [
preparations => [
['qm', 'stop', $vmid],
['qm', 'stop', $vmid_clone],
['pvesm', 'set', $storage_name, '--krbd', 0]
['pvesm', 'set', $storage_name, '--krbd', 0],
],
},
{
@ -318,12 +352,9 @@ my $tests = [
},
{
name => 'start linked clone with krbd',
preparations => [
['pvesm', 'set', $storage_name, '--krbd', 1]
],
preparations => [['pvesm', 'set', $storage_name, '--krbd', 1]],
steps => [
['qm', 'start', $vmid_linked_clone],
['qm', 'stop', $vmid_linked_clone],
['qm', 'start', $vmid_linked_clone], ['qm', 'stop', $vmid_linked_clone],
],
},
];
@ -332,7 +363,7 @@ sub run_prep_cleanup {
my ($cmds) = @_;
for (@$cmds) {
print join(' ', @$_). "\n";
print join(' ', @$_) . "\n";
run_cmd($_);
}
}
@ -350,7 +381,7 @@ sub run_tests {
my $num_tests = 0;
for (@$tests) {
$num_tests += scalar(@{$_->{steps}}) if defined $_->{steps};
$num_tests += scalar(@{ $_->{steps} }) if defined $_->{steps};
}
print("Tests: $num_tests\n");

View File

@ -51,15 +51,15 @@ EOF
my $permissions = {
'user1@test' => {},
'user2@test' => { '/' => ['Sys.Modify'], },
'user3@test' => { '/storage' => ['Datastore.Allocate'], },
'user4@test' => { '/storage/d20m40r30' => ['Datastore.Allocate'], },
'user2@test' => { '/' => ['Sys.Modify'] },
'user3@test' => { '/storage' => ['Datastore.Allocate'] },
'user4@test' => { '/storage/d20m40r30' => ['Datastore.Allocate'] },
};
my $pve_cluster_module;
$pve_cluster_module = Test::MockModule->new('PVE::Cluster');
$pve_cluster_module->mock(
cfs_update => sub {},
cfs_update => sub { },
get_config => sub {
my ($file) = @_;
if ($file eq 'datacenter.cfg') {
@ -94,106 +94,330 @@ $rpcenv_module->mock(
my $rpcenv = PVE::RPCEnvironment->init('pub');
my @tests = (
[ user => 'root@pam' ],
[ ['unknown', ['nolimit'], undef], 100, 'root / generic default limit, requesting default' ],
[ ['move', ['nolimit'], undef], 80, 'root / specific default limit, requesting default (move)' ],
[ ['restore', ['nolimit'], undef], 60, 'root / specific default limit, requesting default (restore)' ],
[ ['unknown', ['d50m40r30'], undef], 50, 'root / storage default limit' ],
[ ['move', ['d50m40r30'], undef], 40, 'root / specific storage limit (move)' ],
[ ['restore', ['d50m40r30'], undef], 30, 'root / specific storage limit (restore)' ],
[ ['unknown', ['nolimit'], 0], 0, 'root / generic default limit' ],
[ ['move', ['nolimit'], 0], 0, 'root / specific default limit (move)' ],
[ ['restore', ['nolimit'], 0], 0, 'root / specific default limit (restore)' ],
[ ['unknown', ['d50m40r30'], 0], 0, 'root / storage default limit' ],
[ ['move', ['d50m40r30'], 0], 0, 'root / specific storage limit (move)' ],
[ ['restore', ['d50m40r30'], 0], 0, 'root / specific storage limit (restore)' ],
[ ['migrate', undef, 100], 100, 'root / undef storage (migrate)' ],
[ ['migrate', [], 100], 100, 'root / no storage (migrate)' ],
[ ['migrate', [undef], undef], 100, 'root / [undef] storage no override (migrate)' ],
[ ['migrate', [undef, undef], 200], 200, 'root / list of undef storages with override (migrate)' ],
[user => 'root@pam'],
[['unknown', ['nolimit'], undef], 100, 'root / generic default limit, requesting default'],
[
['move', ['nolimit'], undef],
80,
'root / specific default limit, requesting default (move)',
],
[
['restore', ['nolimit'], undef],
60,
'root / specific default limit, requesting default (restore)',
],
[['unknown', ['d50m40r30'], undef], 50, 'root / storage default limit'],
[['move', ['d50m40r30'], undef], 40, 'root / specific storage limit (move)'],
[['restore', ['d50m40r30'], undef], 30, 'root / specific storage limit (restore)'],
[['unknown', ['nolimit'], 0], 0, 'root / generic default limit'],
[['move', ['nolimit'], 0], 0, 'root / specific default limit (move)'],
[['restore', ['nolimit'], 0], 0, 'root / specific default limit (restore)'],
[['unknown', ['d50m40r30'], 0], 0, 'root / storage default limit'],
[['move', ['d50m40r30'], 0], 0, 'root / specific storage limit (move)'],
[['restore', ['d50m40r30'], 0], 0, 'root / specific storage limit (restore)'],
[['migrate', undef, 100], 100, 'root / undef storage (migrate)'],
[['migrate', [], 100], 100, 'root / no storage (migrate)'],
[['migrate', [undef], undef], 100, 'root / [undef] storage no override (migrate)'],
[
['migrate', [undef, undef], 200],
200,
'root / list of undef storages with override (migrate)',
],
[ user => 'user1@test' ],
[ ['unknown', ['nolimit'], undef], 100, 'generic default limit' ],
[ ['move', ['nolimit'], undef], 80, 'specific default limit (move)' ],
[ ['restore', ['nolimit'], undef], 60, 'specific default limit (restore)' ],
[ ['unknown', ['d50m40r30'], undef], 50, 'storage default limit' ],
[ ['move', ['d50m40r30'], undef], 40, 'specific storage limit (move)' ],
[ ['restore', ['d50m40r30'], undef], 30, 'specific storage limit (restore)' ],
[ ['unknown', ['d200m400r300'], undef], 200, 'storage default limit above datacenter limits' ],
[ ['move', ['d200m400r300'], undef], 400, 'specific storage limit above datacenter limits (move)' ],
[ ['restore', ['d200m400r300'], undef], 300, 'specific storage limit above datacenter limits (restore)' ],
[ ['unknown', ['d50'], undef], 50, 'storage default limit' ],
[ ['move', ['d50'], undef], 50, 'storage default limit (move)' ],
[ ['restore', ['d50'], undef], 50, 'storage default limit (restore)' ],
[user => 'user1@test'],
[['unknown', ['nolimit'], undef], 100, 'generic default limit'],
[['move', ['nolimit'], undef], 80, 'specific default limit (move)'],
[['restore', ['nolimit'], undef], 60, 'specific default limit (restore)'],
[['unknown', ['d50m40r30'], undef], 50, 'storage default limit'],
[['move', ['d50m40r30'], undef], 40, 'specific storage limit (move)'],
[['restore', ['d50m40r30'], undef], 30, 'specific storage limit (restore)'],
[
['unknown', ['d200m400r300'], undef],
200,
'storage default limit above datacenter limits',
],
[
['move', ['d200m400r300'], undef],
400,
'specific storage limit above datacenter limits (move)',
],
[
['restore', ['d200m400r300'], undef],
300,
'specific storage limit above datacenter limits (restore)',
],
[['unknown', ['d50'], undef], 50, 'storage default limit'],
[['move', ['d50'], undef], 50, 'storage default limit (move)'],
[['restore', ['d50'], undef], 50, 'storage default limit (restore)'],
[ user => 'user2@test' ],
[ ['unknown', ['nolimit'], 0], 0, 'generic default limit with Sys.Modify, passing unlimited' ],
[ ['unknown', ['nolimit'], undef], 100, 'generic default limit with Sys.Modify' ],
[ ['move', ['nolimit'], undef], 80, 'specific default limit with Sys.Modify (move)' ],
[ ['restore', ['nolimit'], undef], 60, 'specific default limit with Sys.Modify (restore)' ],
[ ['restore', ['nolimit'], 0], 0, 'specific default limit with Sys.Modify, passing unlimited (restore)' ],
[ ['move', ['nolimit'], 0], 0, 'specific default limit with Sys.Modify, passing unlimited (move)' ],
[ ['unknown', ['d50m40r30'], undef], 50, 'storage default limit with Sys.Modify' ],
[ ['restore', ['d50m40r30'], undef], 30, 'specific storage limit with Sys.Modify (restore)' ],
[ ['move', ['d50m40r30'], undef], 40, 'specific storage limit with Sys.Modify (move)' ],
[user => 'user2@test'],
[
['unknown', ['nolimit'], 0],
0,
'generic default limit with Sys.Modify, passing unlimited',
],
[['unknown', ['nolimit'], undef], 100, 'generic default limit with Sys.Modify'],
[['move', ['nolimit'], undef], 80, 'specific default limit with Sys.Modify (move)'],
[['restore', ['nolimit'], undef], 60, 'specific default limit with Sys.Modify (restore)'],
[
['restore', ['nolimit'], 0],
0,
'specific default limit with Sys.Modify, passing unlimited (restore)',
],
[
['move', ['nolimit'], 0],
0,
'specific default limit with Sys.Modify, passing unlimited (move)',
],
[['unknown', ['d50m40r30'], undef], 50, 'storage default limit with Sys.Modify'],
[['restore', ['d50m40r30'], undef], 30, 'specific storage limit with Sys.Modify (restore)'],
[['move', ['d50m40r30'], undef], 40, 'specific storage limit with Sys.Modify (move)'],
[ user => 'user3@test' ],
[ ['unknown', ['nolimit'], undef], 100, 'generic default limit with privileges on /' ],
[ ['unknown', ['nolimit'], 80], 80, 'generic default limit with privileges on /, passing an override value' ],
[ ['unknown', ['nolimit'], 0], 0, 'generic default limit with privileges on /, passing unlimited' ],
[ ['move', ['nolimit'], undef], 80, 'specific default limit with privileges on / (move)' ],
[ ['move', ['nolimit'], 0], 0, 'specific default limit with privileges on /, passing unlimited (move)' ],
[ ['restore', ['nolimit'], undef], 60, 'specific default limit with privileges on / (restore)' ],
[ ['restore', ['nolimit'], 0], 0, 'specific default limit with privileges on /, passing unlimited (restore)' ],
[ ['unknown', ['d50m40r30'], 0], 0, 'storage default limit with privileges on /, passing unlimited' ],
[ ['unknown', ['d50m40r30'], undef], 50, 'storage default limit with privileges on /' ],
[ ['unknown', ['d50m40r30'], 0], 0, 'storage default limit with privileges on, passing unlimited /' ],
[ ['move', ['d50m40r30'], undef], 40, 'specific storage limit with privileges on / (move)' ],
[ ['move', ['d50m40r30'], 0], 0, 'specific storage limit with privileges on, passing unlimited / (move)' ],
[ ['restore', ['d50m40r30'], undef], 30, 'specific storage limit with privileges on / (restore)' ],
[ ['restore', ['d50m40r30'], 0], 0, 'specific storage limit with privileges on /, passing unlimited (restore)' ],
[user => 'user3@test'],
[['unknown', ['nolimit'], undef], 100, 'generic default limit with privileges on /'],
[
['unknown', ['nolimit'], 80],
80,
'generic default limit with privileges on /, passing an override value',
],
[
['unknown', ['nolimit'], 0],
0,
'generic default limit with privileges on /, passing unlimited',
],
[['move', ['nolimit'], undef], 80, 'specific default limit with privileges on / (move)'],
[
['move', ['nolimit'], 0],
0,
'specific default limit with privileges on /, passing unlimited (move)',
],
[
['restore', ['nolimit'], undef],
60,
'specific default limit with privileges on / (restore)',
],
[
['restore', ['nolimit'], 0],
0,
'specific default limit with privileges on /, passing unlimited (restore)',
],
[
['unknown', ['d50m40r30'], 0],
0,
'storage default limit with privileges on /, passing unlimited',
],
[['unknown', ['d50m40r30'], undef], 50, 'storage default limit with privileges on /'],
[
['unknown', ['d50m40r30'], 0],
0,
'storage default limit with privileges on, passing unlimited /',
],
[['move', ['d50m40r30'], undef], 40, 'specific storage limit with privileges on / (move)'],
[
['move', ['d50m40r30'], 0],
0,
'specific storage limit with privileges on, passing unlimited / (move)',
],
[
['restore', ['d50m40r30'], undef],
30,
'specific storage limit with privileges on / (restore)',
],
[
['restore', ['d50m40r30'], 0],
0,
'specific storage limit with privileges on /, passing unlimited (restore)',
],
[ user => 'user4@test' ],
[ ['unknown', ['nolimit'], 10], 10, 'generic default limit with privileges on a different storage, passing lower override' ],
[ ['unknown', ['nolimit'], undef], 100, 'generic default limit with privileges on a different storage' ],
[ ['unknown', ['nolimit'], 0], 100, 'generic default limit with privileges on a different storage, passing unlimited' ],
[ ['move', ['nolimit'], undef], 80, 'specific default limit with privileges on a different storage (move)' ],
[ ['restore', ['nolimit'], undef], 60, 'specific default limit with privileges on a different storage (restore)' ],
[ ['unknown', ['d50m40r30'], undef], 50, 'storage default limit with privileges on a different storage' ],
[ ['move', ['d50m40r30'], undef], 40, 'specific storage limit with privileges on a different storage (move)' ],
[ ['restore', ['d50m40r30'], undef], 30, 'specific storage limit with privileges on a different storage (restore)' ],
[ ['unknown', ['d20m40r30'], undef], 20, 'storage default limit with privileges on that storage' ],
[ ['unknown', ['d20m40r30'], 0], 0, 'storage default limit with privileges on that storage, passing unlimited' ],
[ ['move', ['d20m40r30'], undef], 40, 'specific storage limit with privileges on that storage (move)' ],
[ ['move', ['d20m40r30'], 0], 0, 'specific storage limit with privileges on that storage, passing unlimited (move)' ],
[ ['move', ['d20m40r30'], 10], 10, 'specific storage limit with privileges on that storage, passing low override (move)' ],
[ ['move', ['d20m40r30'], 300], 300, 'specific storage limit with privileges on that storage, passing high override (move)' ],
[ ['restore', ['d20m40r30'], undef], 30, 'specific storage limit with privileges on that storage (restore)' ],
[ ['restore', ['d20m40r30'], 0], 0, 'specific storage limit with privileges on that storage, passing unlimited (restore)' ],
[ ['unknown', ['d50m40r30', 'd20m40r30'], 0], 50, 'multiple storages default limit with privileges on one of them, passing unlimited' ],
[ ['move', ['d50m40r30', 'd20m40r30'], 0], 40, 'multiple storages specific limit with privileges on one of them, passing unlimited (move)' ],
[ ['restore', ['d50m40r30', 'd20m40r30'], 0], 30, 'multiple storages specific limit with privileges on one of them, passing unlimited (restore)' ],
[ ['unknown', ['d50m40r30', 'd20m40r30'], undef], 20, 'multiple storages default limit with privileges on one of them' ],
[ ['unknown', ['d10', 'd20m40r30'], undef], 10, 'multiple storages default limit with privileges on one of them (storage limited)' ],
[ ['move', ['d10', 'd20m40r30'], undef], 10, 'multiple storages specific limit with privileges on one of them (storage limited) (move)' ],
[ ['restore', ['d10', 'd20m40r30'], undef], 10, 'multiple storages specific limit with privileges on one of them (storage limited) (restore)' ],
[ ['restore', ['d10', 'd20m40r30'], 5], 5, 'multiple storages specific limit (storage limited) (restore), passing lower override' ],
[ ['restore', ['d200', 'd200m400r300'], 65], 65, 'multiple storages specific limit (storage limited) (restore), passing lower override' ],
[ ['restore', ['d200', 'd200m400r300'], 400], 200, 'multiple storages specific limit (storage limited) (restore), passing higher override' ],
[ ['restore', ['d200', 'd200m400r300'], 0], 200, 'multiple storages specific limit (storage limited) (restore), passing unlimited' ],
[ ['restore', ['d200', 'd200m400r300'], 1], 1, 'multiple storages specific limit (storage limited) (restore), passing 1' ],
[ ['restore', ['d10', 'd20m40r30'], 500], 10, 'multiple storages specific limit with privileges on one of them (storage limited) (restore), passing higher override' ],
[ ['unknown', ['nolimit', 'd20m40r30'], 0], 100, 'multiple storages default limit with privileges on one of them, passing unlimited (default limited)' ],
[ ['move', ['nolimit', 'd20m40r30'], 0], 80, 'multiple storages specific limit with privileges on one of them, passing unlimited (default limited) (move)' ],
[ ['restore', ['nolimit', 'd20m40r30'], 0], 60, 'multiple storages specific limit with privileges on one of them, passing unlimited (default limited) (restore)' ],
[ ['unknown', ['nolimit', 'd20m40r30'], undef], 20, 'multiple storages default limit with privileges on one of them (default limited)' ],
[ ['move', ['nolimit', 'd20m40r30'], undef], 40, 'multiple storages specific limit with privileges on one of them (default limited) (move)' ],
[ ['restore', ['nolimit', 'd20m40r30'], undef], 30, 'multiple storages specific limit with privileges on one of them (default limited) (restore)' ],
[ ['restore', ['d20m40r30', 'm50'], 200], 60, 'multiple storages specific limit with privileges on one of them (global default limited) (restore)' ],
[ ['move', ['nolimit', undef ], 40] , 40, 'multiple storages one undefined, passing 40 (move)' ],
[ ['move', undef, 100] , 80, 'undef storage, passing 100 (move)' ],
[ ['move', [undef], 100] , 80, '[undef] storage, passing 100 (move)' ],
[ ['move', [undef], undef] , 80, '[undef] storage, no override (move)' ],
[user => 'user4@test'],
[
['unknown', ['nolimit'], 10],
10,
'generic default limit with privileges on a different storage, passing lower override',
],
[
['unknown', ['nolimit'], undef],
100,
'generic default limit with privileges on a different storage',
],
[
['unknown', ['nolimit'], 0],
100,
'generic default limit with privileges on a different storage, passing unlimited',
],
[
['move', ['nolimit'], undef],
80,
'specific default limit with privileges on a different storage (move)',
],
[
['restore', ['nolimit'], undef],
60,
'specific default limit with privileges on a different storage (restore)',
],
[
['unknown', ['d50m40r30'], undef],
50,
'storage default limit with privileges on a different storage',
],
[
['move', ['d50m40r30'], undef],
40,
'specific storage limit with privileges on a different storage (move)',
],
[
['restore', ['d50m40r30'], undef],
30,
'specific storage limit with privileges on a different storage (restore)',
],
[
['unknown', ['d20m40r30'], undef],
20,
'storage default limit with privileges on that storage',
],
[
['unknown', ['d20m40r30'], 0],
0,
'storage default limit with privileges on that storage, passing unlimited',
],
[
['move', ['d20m40r30'], undef],
40,
'specific storage limit with privileges on that storage (move)',
],
[
['move', ['d20m40r30'], 0],
0,
'specific storage limit with privileges on that storage, passing unlimited (move)',
],
[
['move', ['d20m40r30'], 10],
10,
'specific storage limit with privileges on that storage, passing low override (move)',
],
[
['move', ['d20m40r30'], 300],
300,
'specific storage limit with privileges on that storage, passing high override (move)',
],
[
['restore', ['d20m40r30'], undef],
30,
'specific storage limit with privileges on that storage (restore)',
],
[
['restore', ['d20m40r30'], 0],
0,
'specific storage limit with privileges on that storage, passing unlimited (restore)',
],
[
['unknown', ['d50m40r30', 'd20m40r30'], 0],
50,
'multiple storages default limit with privileges on one of them, passing unlimited',
],
[
['move', ['d50m40r30', 'd20m40r30'], 0],
40,
'multiple storages specific limit with privileges on one of them, passing unlimited (move)',
],
[
['restore', ['d50m40r30', 'd20m40r30'], 0],
30,
'multiple storages specific limit with privileges on one of them, passing unlimited (restore)',
],
[
['unknown', ['d50m40r30', 'd20m40r30'], undef],
20,
'multiple storages default limit with privileges on one of them',
],
[
['unknown', ['d10', 'd20m40r30'], undef],
10,
'multiple storages default limit with privileges on one of them (storage limited)',
],
[
['move', ['d10', 'd20m40r30'], undef],
10,
'multiple storages specific limit with privileges on one of them (storage limited) (move)',
],
[
['restore', ['d10', 'd20m40r30'], undef],
10,
'multiple storages specific limit with privileges on one of them (storage limited) (restore)',
],
[
['restore', ['d10', 'd20m40r30'], 5],
5,
'multiple storages specific limit (storage limited) (restore), passing lower override',
],
[
['restore', ['d200', 'd200m400r300'], 65],
65,
'multiple storages specific limit (storage limited) (restore), passing lower override',
],
[
['restore', ['d200', 'd200m400r300'], 400],
200,
'multiple storages specific limit (storage limited) (restore), passing higher override',
],
[
['restore', ['d200', 'd200m400r300'], 0],
200,
'multiple storages specific limit (storage limited) (restore), passing unlimited',
],
[
['restore', ['d200', 'd200m400r300'], 1],
1,
'multiple storages specific limit (storage limited) (restore), passing 1',
],
[
['restore', ['d10', 'd20m40r30'], 500],
10,
'multiple storages specific limit with privileges on one of them (storage limited) (restore), passing higher override',
],
[
['unknown', ['nolimit', 'd20m40r30'], 0],
100,
'multiple storages default limit with privileges on one of them, passing unlimited (default limited)',
],
[
['move', ['nolimit', 'd20m40r30'], 0],
80,
'multiple storages specific limit with privileges on one of them, passing unlimited (default limited) (move)',
],
[
['restore', ['nolimit', 'd20m40r30'], 0],
60,
'multiple storages specific limit with privileges on one of them, passing unlimited (default limited) (restore)',
],
[
['unknown', ['nolimit', 'd20m40r30'], undef],
20,
'multiple storages default limit with privileges on one of them (default limited)',
],
[
['move', ['nolimit', 'd20m40r30'], undef],
40,
'multiple storages specific limit with privileges on one of them (default limited) (move)',
],
[
['restore', ['nolimit', 'd20m40r30'], undef],
30,
'multiple storages specific limit with privileges on one of them (default limited) (restore)',
],
[
['restore', ['d20m40r30', 'm50'], 200],
60,
'multiple storages specific limit with privileges on one of them (global default limited) (restore)',
],
[
['move', ['nolimit', undef], 40],
40,
'multiple storages one undefined, passing 40 (move)',
],
[['move', undef, 100], 80, 'undef storage, passing 100 (move)'],
[['move', [undef], 100], 80, '[undef] storage, passing 100 (move)'],
[['move', [undef], undef], 80, '[undef] storage, no override (move)'],
);
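# reading of the cases above: when the user may not override the limit for a storage, the
# result is clamped by that storage's operation-specific or default limit, falling back to
# the global defaults for storages without one; with sufficient privileges the passed
# override is used as-is, and 0 means unlimited.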
foreach my $t (@tests) {


@@ -5,8 +5,8 @@ use warnings;
 use TAP::Harness;
-my $harness = TAP::Harness->new( { verbosity => -2 });
-my $res = $harness->runtests( "disklist_test.pm" );
+my $harness = TAP::Harness->new({ verbosity => -2 });
+my $res = $harness->runtests("disklist_test.pm");
 exit -1 if !$res || $res->{failed} || $res->{parse_errors};


@@ -10,11 +10,12 @@ use Test::More;
 use Data::Dumper;
-my $test_manifests = join ('/', $Bin, 'ovf_manifests');
+my $test_manifests = join('/', $Bin, 'ovf_manifests');
 print "parsing ovfs\n";
-my $win2008 = eval { PVE::GuestImport::OVF::parse_ovf("$test_manifests/Win_2008_R2_two-disks.ovf") };
+my $win2008 =
+    eval { PVE::GuestImport::OVF::parse_ovf("$test_manifests/Win_2008_R2_two-disks.ovf") };
 if (my $err = $@) {
     fail('parse win2008');
     warn("error: $err\n");
@@ -28,7 +29,8 @@ if (my $err = $@) {
 } else {
     ok('parse win10');
 }
-my $win10noNs = eval { PVE::GuestImport::OVF::parse_ovf("$test_manifests/Win10-Liz_no_default_ns.ovf") };
+my $win10noNs =
+    eval { PVE::GuestImport::OVF::parse_ovf("$test_manifests/Win10-Liz_no_default_ns.ovf") };
 if (my $err = $@) {
     fail("parse win10 no default rasd NS");
     warn("error: $err\n");
@@ -38,26 +40,59 @@ if (my $err = $@) {
 print "testing disks\n";
-is($win2008->{disks}->[0]->{disk_address}, 'scsi0', 'multidisk vm has the correct first disk controller');
-is($win2008->{disks}->[0]->{backing_file}, "$test_manifests/disk1.vmdk", 'multidisk vm has the correct first disk backing device');
+is(
+    $win2008->{disks}->[0]->{disk_address},
+    'scsi0',
+    'multidisk vm has the correct first disk controller',
+);
+is(
+    $win2008->{disks}->[0]->{backing_file},
+    "$test_manifests/disk1.vmdk",
+    'multidisk vm has the correct first disk backing device',
+);
 is($win2008->{disks}->[0]->{virtual_size}, 2048, 'multidisk vm has the correct first disk size');
-is($win2008->{disks}->[1]->{disk_address}, 'scsi1', 'multidisk vm has the correct second disk controller');
-is($win2008->{disks}->[1]->{backing_file}, "$test_manifests/disk2.vmdk", 'multidisk vm has the correct second disk backing device');
+is(
+    $win2008->{disks}->[1]->{disk_address},
+    'scsi1',
+    'multidisk vm has the correct second disk controller',
+);
+is(
+    $win2008->{disks}->[1]->{backing_file},
+    "$test_manifests/disk2.vmdk",
+    'multidisk vm has the correct second disk backing device',
+);
 is($win2008->{disks}->[1]->{virtual_size}, 2048, 'multidisk vm has the correct second disk size');
 is($win10->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm has the correct disk controller');
-is($win10->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm has the correct disk backing device');
+is(
+    $win10->{disks}->[0]->{backing_file},
+    "$test_manifests/Win10-Liz-disk1.vmdk",
+    'single disk vm has the correct disk backing device',
+);
 is($win10->{disks}->[0]->{virtual_size}, 2048, 'single disk vm has the correct size');
-is($win10noNs->{disks}->[0]->{disk_address}, 'scsi0', 'single disk vm (no default rasd NS) has the correct disk controller');
-is($win10noNs->{disks}->[0]->{backing_file}, "$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm (no default rasd NS) has the correct disk backing device');
-is($win10noNs->{disks}->[0]->{virtual_size}, 2048, 'single disk vm (no default rasd NS) has the correct size');
+is(
+    $win10noNs->{disks}->[0]->{disk_address},
+    'scsi0',
+    'single disk vm (no default rasd NS) has the correct disk controller',
+);
+is(
+    $win10noNs->{disks}->[0]->{backing_file},
+    "$test_manifests/Win10-Liz-disk1.vmdk",
+    'single disk vm (no default rasd NS) has the correct disk backing device',
+);
+is(
+    $win10noNs->{disks}->[0]->{virtual_size},
+    2048,
+    'single disk vm (no default rasd NS) has the correct size',
+);
 print "testing nics\n";
 is($win2008->{net}->{net0}->{model}, 'e1000', 'win2008 has correct nic model');
 is($win10->{net}->{net0}->{model}, 'e1000e', 'win10 has correct nic model');
-is($win10noNs->{net}->{net0}->{model}, 'e1000e', 'win10 (no default rasd NS) has correct nic model');
+is($win10noNs->{net}->{net0}->{model}, 'e1000e',
+    'win10 (no default rasd NS) has correct nic model');
 print "\ntesting vm.conf extraction\n";


@@ -8,7 +8,7 @@ $ENV{TZ} = 'UTC';
 use TAP::Harness;
-my $harness = TAP::Harness->new( { verbosity => -1 });
+my $harness = TAP::Harness->new({ verbosity => -1 });
 my $res = $harness->runtests(
     "archive_info_test.pm",
     "parse_volname_test.pm",

src/test/run_test_lvmplugin.pl (new executable file, 577 lines added)

@@ -0,0 +1,577 @@
#!/usr/bin/perl
use lib '..';
use strict;
use warnings;
use Data::Dumper qw(Dumper);
use PVE::Storage;
use PVE::Cluster;
use PVE::Tools qw(run_command);
use Cwd;
$Data::Dumper::Sortkeys = 1;
my $verbose = undef;
my $storagename = "lvmregression";
my $vgname = 'regressiontest';
#volsize in GB
my $volsize = 1;
my $vmdisk = "vm-102-disk-1";
my $tests = {};
my $cfg = undef;
my $count = 0;
my $testnum = 12;
my $end_test = $testnum;
my $start_test = 1;
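# optional command-line arguments narrow the run: "<start> <end>" selects a range of
# test numbers, a single argument runs just that one test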
if (@ARGV == 2) {
$end_test = $ARGV[1];
$start_test = $ARGV[0];
} elsif (@ARGV == 1) {
$start_test = $ARGV[0];
$end_test = $ARGV[0];
}
my $test12 = sub {
print "\nrun test12 \"path\"\n";
my @res;
my $fail = 0;
eval {
@res = PVE::Storage::path($cfg, "$storagename:$vmdisk");
if ($res[0] ne "/dev/regressiontest/$vmdisk") {
$count++;
$fail = 1;
warn
"Test 12 a: path is not correct: expected '/dev/regressiontest/$vmdisk', got '$res[0]'";
}
if ($res[1] ne "102") {
if (!$fail) {
$count++;
$fail = 1;
}
warn "Test 12 a: owner is not correct: expected '102', got '$res[1]'";
}
if ($res[2] ne "images") {
if (!$fail) {
$count++;
$fail = 1;
}
warn "Test 12 a: content type is not correct: expected 'images', got '$res[2]'";
}
};
if ($@) {
$count++;
warn "Test 12 a: $@";
}
};
$tests->{12} = $test12;
my $test11 = sub {
print "\nrun test11 \"deactivate_storage\"\n";
eval {
PVE::Storage::activate_storage($cfg, $storagename);
PVE::Storage::deactivate_storage($cfg, $storagename);
};
if ($@) {
$count++;
warn "Test 11 a: $@";
}
};
$tests->{11} = $test11;
my $test10 = sub {
print "\nrun test10 \"activate_storage\"\n";
eval { PVE::Storage::activate_storage($cfg, $storagename); };
if ($@) {
$count++;
warn "Test 10 a: $@";
}
};
$tests->{10} = $test10;
my $test9 = sub {
print "\nrun test15 \"template_list and vdisk_list\"\n";
my $hash = Dumper {};
my $res = Dumper PVE::Storage::template_list($cfg, $storagename, "vztmpl");
if ($hash ne $res) {
$count++;
warn "Test 9 a failed\n";
}
$res = undef;
$res = Dumper PVE::Storage::template_list($cfg, $storagename, "iso");
if ($hash ne $res) {
$count++;
warn "Test 9 b failed\n";
}
$res = undef;
$res = Dumper PVE::Storage::template_list($cfg, $storagename, "backup");
if ($hash ne $res) {
$count++;
warn "Test 9 c failed\n";
}
};
$tests->{9} = $test9;
my $test8 = sub {
print "\nrun test8 \"vdisk_free\"\n";
eval {
PVE::Storage::vdisk_free($cfg, "$storagename:$vmdisk");
eval {
run_command("lvs $vgname/$vmdisk", outfunc => sub { }, errfunc => sub { });
};
if (!$@) {
$count++;
warn "Test8 a: vdisk still exists\n";
}
};
if ($@) {
$count++;
warn "Test8 a: $@";
}
};
$tests->{8} = $test8;
my $test7 = sub {
print "\nrun test7 \"vdisk_alloc\"\n";
eval {
my $tmp_volid =
PVE::Storage::vdisk_alloc($cfg, $storagename, "112", "raw", undef, 1024 * 1024);
if ($tmp_volid ne "$storagename:vm-112-disk-0") {
die "volname:$tmp_volid don't match\n";
}
eval {
run_command(
"lvs --noheadings -o lv_size $vgname/vm-112-disk-0",
outfunc => sub {
my $tmp = shift;
if ($tmp !~ m/1\.00g/) {
die "size don't match\n";
}
},
);
};
if ($@) {
$count++;
warn "Test7 a: $@";
}
};
if ($@) {
$count++;
warn "Test7 a: $@";
}
eval {
my $tmp_volid =
PVE::Storage::vdisk_alloc($cfg, $storagename, "112", "raw", undef, 2048 * 1024);
if ($tmp_volid ne "$storagename:vm-112-disk-1") {
die "volname:$tmp_volid don't match\n";
}
eval {
run_command(
"lvs --noheadings -o lv_size $vgname/vm-112-disk-1",
outfunc => sub {
my $tmp = shift;
if ($tmp !~ m/2\.00g/) {
die "size don't match\n";
}
},
);
};
if ($@) {
$count++;
warn "Test7 b: $@";
}
};
if ($@) {
$count++;
warn "Test7 b: $@";
}
};
$tests->{7} = $test7;
my $test6 = sub {
print "\nrun test6 \"parse_volume_id\"\n";
eval {
my ($store, $disk) = PVE::Storage::parse_volume_id("$storagename:$vmdisk");
if ($store ne $storagename || $disk ne $vmdisk) {
$count++;
warn "Test6 a: parsing wrong";
}
};
if ($@) {
$count++;
warn "Test6 a: $@";
}
};
$tests->{6} = $test6;
my $test5 = sub {
print "\nrun test5 \"parse_volname\"\n";
eval {
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
PVE::Storage::parse_volname($cfg, "$storagename:$vmdisk");
if (
$vtype ne 'images'
|| $vmid ne '102'
|| $name ne $vmdisk
|| defined($basename)
|| defined($basevmid)
|| $isBase
|| $format ne 'raw'
) {
$count++;
warn "Test5 a: parsing wrong";
}
};
if ($@) {
$count++;
warn "Test5 a: $@";
}
};
$tests->{5} = $test5;
my $test4 = sub {
print "\nrun test4 \"volume_rollback_is_possible\"\n";
eval {
my $blockers = [];
my $res = undef;
eval {
$res = PVE::Storage::volume_rollback_is_possible(
$cfg, "$storagename:$vmdisk", 'snap1', $blockers,
);
};
if (!$@) {
$count++;
warn "Test4 a: Rollback shouldn't be possible";
}
};
if ($@) {
$count++;
warn "Test4 a: $@";
}
};
$tests->{4} = $test4;
my $test3 = sub {
print "\nrun test3 \"volume_has_feature\"\n";
eval {
if (PVE::Storage::volume_has_feature(
$cfg, 'snapshot', "$storagename:$vmdisk", undef, 0,
)) {
$count++;
warn "Test3 a failed";
}
};
if ($@) {
$count++;
warn "Test3 a: $@";
}
eval {
if (PVE::Storage::volume_has_feature($cfg, 'clone', "$storagename:$vmdisk", undef, 0)) {
$count++;
warn "Test3 g failed";
}
};
if ($@) {
$count++;
warn "Test3 g: $@";
}
eval {
if (PVE::Storage::volume_has_feature(
$cfg, 'template', "$storagename:$vmdisk", undef, 0,
)) {
$count++;
warn "Test3 l failed";
}
};
if ($@) {
$count++;
warn "Test3 l: $@";
}
eval {
if (!PVE::Storage::volume_has_feature($cfg, 'copy', "$storagename:$vmdisk", undef, 0)) {
$count++;
warn "Test3 r failed";
}
};
if ($@) {
$count++;
warn "Test3 r: $@";
}
eval {
if (PVE::Storage::volume_has_feature(
$cfg, 'sparseinit', "$storagename:$vmdisk", undef, 0,
)) {
$count++;
warn "Test3 x failed";
}
};
if ($@) {
$count++;
warn "Test3 x: $@";
}
eval {
if (PVE::Storage::volume_has_feature(
$cfg, 'snapshot', "$storagename:$vmdisk", 'test', 0,
)) {
$count++;
warn "Test3 a1 failed";
}
};
if ($@) {
$count++;
warn "Test3 a1: $@";
}
eval {
if (PVE::Storage::volume_has_feature($cfg, 'clone', "$storagename:$vmdisk", 'test', 0)) {
$count++;
warn "Test3 g1 failed";
}
};
if ($@) {
$count++;
warn "Test3 g1: $@";
}
eval {
if (PVE::Storage::volume_has_feature(
$cfg, 'template', "$storagename:$vmdisk", 'test', 0,
)) {
$count++;
warn "Test3 l1 failed";
}
};
if ($@) {
$count++;
warn "Test3 l1: $@";
}
eval {
if (PVE::Storage::volume_has_feature($cfg, 'copy', "$storagename:$vmdisk", 'test', 0)) {
$count++;
warn "Test3 r1 failed";
}
};
if ($@) {
$count++;
warn "Test3 r1: $@";
}
eval {
if (PVE::Storage::volume_has_feature(
$cfg, 'sparseinit', "$storagename:$vmdisk", 'test', 0,
)) {
$count++;
warn "Test3 x1 failed";
}
};
if ($@) {
$count++;
warn "Test3 x1: $@";
}
};
$tests->{3} = $test3;
my $test2 = sub {
print "\nrun test2 \"volume_resize\"\n";
my $newsize = ($volsize + 1) * 1024 * 1024 * 1024;
eval {
eval { PVE::Storage::volume_resize($cfg, "$storagename:$vmdisk", $newsize, 0); };
if ($@) {
$count++;
warn "Test2 a failed";
}
if ($newsize != PVE::Storage::volume_size_info($cfg, "$storagename:$vmdisk")) {
$count++;
warn "Test2 a failed";
}
};
if ($@) {
$count++;
warn "Test2 a: $@";
}
};
$tests->{2} = $test2;
my $test1 = sub {
print "\nrun test1 \"volume_size_info\"\n";
my $size = ($volsize * 1024 * 1024 * 1024);
eval {
if ($size != PVE::Storage::volume_size_info($cfg, "$storagename:$vmdisk")) {
$count++;
warn "Test1 a failed";
}
};
if ($@) {
$count++;
warn "Test1 a : $@";
}
};
$tests->{1} = $test1;
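# helpers below: create the volume group on a loop device, allocate and activate the test
# volume, and tear everything down again between runs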
sub setup_lvm_volumes {
eval { run_command("vgcreate $vgname /dev/loop1"); };
print "create lvm volume $vmdisk\n" if $verbose;
run_command("lvcreate -L${volsize}G -n $vmdisk $vgname");
my $vollist = [
"$storagename:$vmdisk",
];
PVE::Storage::activate_volumes($cfg, $vollist);
}
sub cleanup_lvm_volumes {
print "destroy $vgname\n" if $verbose;
eval { run_command("vgremove $vgname -y"); };
if ($@) {
print "cleanup failed: $@\nretrying once\n" if $verbose;
eval { run_command("vgremove $vgname -y"); };
if ($@) {
clean_up_lvm();
setup_lvm();
}
}
}
sub setup_lvm {
unlink 'lvm.img';
eval { run_command("dd if=/dev/zero of=lvm.img bs=1M count=8000"); };
if ($@) {
clean_up_lvm();
}
my $pwd = cwd();
eval { run_command("losetup /dev/loop1 $pwd\/lvm.img"); };
if ($@) {
clean_up_lvm();
}
eval { run_command("pvcreate /dev/loop1"); };
if ($@) {
clean_up_lvm();
}
}
sub clean_up_lvm {
eval { run_command("pvremove /dev/loop1 -ff -y"); };
if ($@) {
warn $@;
}
eval { run_command("losetup -d /dev/loop1"); };
if ($@) {
warn $@;
}
unlink 'lvm.img';
}
sub volume_is_base {
my ($cfg, $volid) = @_;
my (undef, undef, undef, undef, undef, $isBase, undef) =
PVE::Storage::parse_volname($cfg, $volid);
return $isBase;
}
if ($> != 0) { #EUID
warn "not root, skipping lvm tests\n";
exit 0;
}
my $time = time;
print "Start tests for LVMPlugin\n";
$cfg = {
'ids' => {
$storagename => {
'content' => {
'images' => 1,
'rootdir' => 1,
},
'vgname' => $vgname,
'type' => 'lvm',
},
},
'order' => { 'lvmregression' => 1 },
};
setup_lvm();
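# run each selected test against freshly created LVM volumes, cleaning them up afterwards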
for (my $i = $start_test; $i <= $end_test; $i++) {
setup_lvm_volumes();
eval { $tests->{$i}(); };
if (my $err = $@) {
warn $err;
$count++;
}
cleanup_lvm_volumes();
}
clean_up_lvm();
$time = time - $time;
print "Stop tests for LVMPlugin\n";
print "$count tests failed\n";
print "Time: ${time}s\n";
exit -1 if $count > 0;

File diff suppressed because it is too large


@@ -0,0 +1,254 @@
#!/usr/bin/perl
use strict;
use warnings;
use Test::MockModule;
use Test::More;
use lib ('.', '..');
use PVE::RPCEnvironment;
use PVE::Storage;
use PVE::Storage::Plugin;
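# inline storage.cfg and user.cfg contents, served to the code under test via the mocked
# PVE::Cluster::get_config below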
my $storage_cfg = <<'EOF';
dir: dir
path /mnt/pve/dir
content vztmpl,snippets,iso,backup,rootdir,images
EOF
my $user_cfg = <<'EOF';
user:root@pam:1:0::::::
user:noperm@pve:1:0::::::
user:otherstorage@pve:1:0::::::
user:dsallocate@pve:1:0::::::
user:dsaudit@pve:1:0::::::
user:backup@pve:1:0::::::
user:vmuser@pve:1:0::::::
role:dsallocate:Datastore.Allocate:
role:dsaudit:Datastore.Audit:
role:vmuser:VM.Config.Disk,Datastore.Audit:
role:backup:VM.Backup,Datastore.AllocateSpace:
acl:1:/storage/foo:otherstorage@pve:dsallocate:
acl:1:/storage/dir:dsallocate@pve:dsallocate:
acl:1:/storage/dir:dsaudit@pve:dsaudit:
acl:1:/vms/100:backup@pve:backup:
acl:1:/storage/dir:backup@pve:backup:
acl:1:/vms/100:vmuser@pve:vmuser:
acl:1:/vms/111:vmuser@pve:vmuser:
acl:1:/storage/dir:vmuser@pve:vmuser:
EOF
my @users =
qw(root@pam noperm@pve otherstorage@pve dsallocate@pve dsaudit@pve backup@pve vmuser@pve);
my $pve_cluster_module;
$pve_cluster_module = Test::MockModule->new('PVE::Cluster');
$pve_cluster_module->mock(
cfs_update => sub { },
get_config => sub {
my ($file) = @_;
if ($file eq 'storage.cfg') {
return $storage_cfg;
} elsif ($file eq 'user.cfg') {
return $user_cfg;
}
die "TODO: mock get_config($file)\n";
},
);
my $rpcenv = PVE::RPCEnvironment->init('pub');
$rpcenv->init_request();
my @types = sort keys PVE::Storage::Plugin::get_vtype_subdirs()->%*;
my $all_types = { map { $_ => 1 } @types };
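# each case lists a volid (optionally together with a vmid) plus the users expected to be
# denied access and the content types for which an explicit type check should succeed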
my @tests = (
{
volid => 'dir:backup/vzdump-qemu-100-2025_07_29-13_00_55.vma',
denied_users => {
'dsaudit@pve' => 1,
'vmuser@pve' => 1,
},
allowed_types => {
'backup' => 1,
},
},
{
volid => 'dir:100/vm-100-disk-0.qcow2',
denied_users => {
'backup@pve' => 1,
'dsaudit@pve' => 1,
},
allowed_types => {
'images' => 1,
'rootdir' => 1,
},
},
{
volid => 'dir:vztmpl/alpine-3.22-default_20250617_amd64.tar.xz',
denied_users => {},
allowed_types => {
'vztmpl' => 1,
},
},
{
volid => 'dir:iso/virtio-win-0.1.271.iso',
denied_users => {},
allowed_types => {
'iso' => 1,
},
},
{
volid => 'dir:111/subvol-111-disk-0.subvol',
denied_users => {
'backup@pve' => 1,
'dsaudit@pve' => 1,
},
allowed_types => {
'images' => 1,
'rootdir' => 1,
},
},
# test different VM IDs
{
volid => 'dir:backup/vzdump-qemu-200-2025_07_29-13_00_55.vma',
denied_users => {
'backup@pve' => 1,
'dsaudit@pve' => 1,
'vmuser@pve' => 1,
},
allowed_types => {
'backup' => 1,
},
},
{
volid => 'dir:200/vm-200-disk-0.qcow2',
denied_users => {
'backup@pve' => 1,
'dsaudit@pve' => 1,
'vmuser@pve' => 1,
},
allowed_types => {
'images' => 1,
'rootdir' => 1,
},
},
{
volid => 'dir:backup/vzdump-qemu-200-2025_07_29-13_00_55.vma',
vmid => 200,
denied_users => {},
allowed_types => {
'backup' => 1,
},
},
{
volid => 'dir:200/vm-200-disk-0.qcow2',
vmid => 200,
denied_users => {},
allowed_types => {
'images' => 1,
'rootdir' => 1,
},
},
{
volid => 'dir:backup/vzdump-qemu-200-2025_07_29-13_00_55.vma',
vmid => 300,
denied_users => {
'noperm@pve' => 1,
'otherstorage@pve' => 1,
'backup@pve' => 1,
'dsaudit@pve' => 1,
'vmuser@pve' => 1,
},
allowed_types => {
'backup' => 1,
},
},
{
volid => 'dir:200/vm-200-disk-0.qcow2',
vmid => 300,
denied_users => {
'noperm@pve' => 1,
'otherstorage@pve' => 1,
'backup@pve' => 1,
'dsaudit@pve' => 1,
'vmuser@pve' => 1,
},
allowed_types => {
'images' => 1,
'rootdir' => 1,
},
},
# test paths
{
volid => 'relative_path',
denied_users => {
'backup@pve' => 1,
'dsaudit@pve' => 1,
'dsallocate@pve' => 1,
'vmuser@pve' => 1,
},
allowed_types => $all_types,
},
{
volid => '/absolute_path',
denied_users => {
'backup@pve' => 1,
'dsaudit@pve' => 1,
'dsallocate@pve' => 1,
'vmuser@pve' => 1,
},
allowed_types => $all_types,
},
);
my $cfg = PVE::Storage::config();
is(scalar(@users), 7, 'number of users');
for my $t (@tests) {
my ($volid, $vmid, $expected_denied_users, $expected_allowed_types) =
$t->@{qw(volid vmid denied_users allowed_types)};
# certain users are always expected to be denied, except in the special case where VM ID is set
$expected_denied_users->{'noperm@pve'} = 1 if !$vmid;
$expected_denied_users->{'otherstorage@pve'} = 1 if !$vmid;
for my $user (@users) {
my $description = "user: $user, volid: $volid";
$rpcenv->set_user($user);
my $actual_denied;
eval { PVE::Storage::check_volume_access($rpcenv, $user, $cfg, $vmid, $volid, undef); };
if (my $err = $@) {
$actual_denied = 1;
note($@) if !$expected_denied_users->{$user}; # log the error for easy analysis
}
is($actual_denied, $expected_denied_users->{$user}, $description);
}
for my $type (@types) {
my $user = 'root@pam'; # type mismatch should not even work for root!
my $description = "type $type, volid: $volid";
$rpcenv->set_user($user);
my $actual_allowed = 1;
eval { PVE::Storage::check_volume_access($rpcenv, $user, $cfg, $vmid, $volid, $type); };
if (my $err = $@) {
$actual_allowed = undef;
note($@) if $expected_allowed_types->{$type}; # log the error for easy analysis
}
is($actual_allowed, $expected_allowed_types->{$type}, $description);
}
}
done_testing();