The Perl part of the API methods primarily defines the API schema,
checks for any needed privileges and then calls the actual Rust
implementation exposed via perlmod. Any errors returned by the Rust
code are translated into PVE::Exception, so that the API call fails
with the correct HTTP error code.
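A minimal sketch of this pattern, inside a PVE::RESTHandler-based API module
(the method name, ACL path and perlmod package are placeholders, not the
actual code):

    __PACKAGE__->register_method({
        name => 'example',
        path => 'example',
        method => 'GET',
        description => "Example handler delegating to the Rust implementation.",
        permissions => { check => ['perm', '/some/acl/path', ['Sys.Audit']] },
        parameters => { additionalProperties => 0, properties => {} },
        returns => { type => 'object' },
        code => sub {
            my ($param) = @_;

            my $result = eval {
                PVE::RS::Example::do_something(); # perlmod-exposed Rust fn (placeholder)
            };
            # translate a Rust-side error into a PVE::Exception so the request
            # fails with a proper HTTP status code
            PVE::Exception::raise("$@", code => 500) if $@;

            return $result;
        },
    });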
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
This commit adds a new Perl module, PVE::API2::Cluster::Notification.
The module will contain all API handlers for the new notification
subsystem.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
If the new 'target-replication' option in datacenter.cfg is set to a
notification target, we send notifications that way. If it is not set,
we continue to send a notification to the default target (mail to
root@pam).
There is also a new 'replication' option. It controls whether to send
a notification at all.
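A rough sketch of the resulting dispatch (how the options are looked up in
the parsed datacenter.cfg and the helper names are assumptions):

    # the new 'replication' option controls whether to notify at all
    return if defined($dc_conf->{replication}) && !$dc_conf->{replication};

    if (my $target = $dc_conf->{'target-replication'}) {
        # send via the configured notification target (endpoint or group)
        notify_target($target, $subject, $body);   # hypothetical helper
    } else {
        # previous default behaviour: plain mail to root@pam
        sendmail_to_root($subject, $body);          # hypothetical helper
    }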
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
... instead of using sendmail directly
If the new 'target-package-updates' option is set, we send a notification
to this target. If not, we continue to send a mail to root@pam (if the
mail address is configured).
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
... instead of using sendmail directly.
If the new 'notification-target' parameter is set,
we send the notification to this endpoint or group.
If 'mailto' is set, we add a temporary endpoint and a
temporary group containing both targets.
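Roughly, the target selection can be pictured like this (all helper names
are hypothetical; the actual work happens in the proxmox-notify bindings):

    my @targets;
    push @targets, $param->{'notification-target'} if $param->{'notification-target'};

    if (my $mailto = $param->{mailto}) {
        # register a temporary sendmail endpoint for the legacy 'mailto' list
        push @targets, add_temporary_mailto_endpoint($config, $mailto);  # hypothetical
    }

    # if both are given, group them so a single send reaches both
    my $target = scalar(@targets) > 1
        ? add_temporary_group($config, \@targets)                        # hypothetical
        : $targets[0];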
This commit also refactors the old 'sendmail' sub heavily:
- Use template-based notification text instead of endless
  string concatenations
- Remove the old plaintext/HTML table rendering in favor of
  the new template/property-based approach offered by the
  `proxmox-notify` crate.
- Rename the `sendmail` sub to `send_notification`
- Break out some of the code into helper subs, hopefully
  reducing the spaghetti factor a bit
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
When the newly introduced optional parameter "transfer" is set, the user
can add a VM/container to a pool even if it is already in one. If so, it
will be removed from the old pool.
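In a simplified sketch (pool lookup and membership helpers are placeholders):

    my $old_pool = pool_of_guest($vmid);             # placeholder lookup
    if (defined($old_pool) && $old_pool ne $pool) {
        die "guest $vmid is already a member of pool '$old_pool'\n"
            if !$param->{transfer};
        remove_guest_from_pool($old_pool, $vmid);    # placeholder helper
    }
    add_guest_to_pool($pool, $vmid);                 # placeholder helper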
Signed-off-by: Philipp Hufnagl <p.hufnagl@proxmox.com>
Alter style to make the parameter check more concise
Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
i added it to the pci api call, but forgot to add it for usb.
otherwise, adding a mapped usb device only works on the node that the
gui is connected to.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
actually drop the deprecated ones from the API routes index and
ensure the replacement /pool is returned (/cfg already was)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this adds the typical section config CRUD API calls for
USB and PCI resource mapping to /cluster/mapping/{TYPE}.
the only special thing this series does is that the list call
for both has a special 'check-node' parameter, which uses the
'proxyto_callback' to reroute the API call to the given node
so that it can check the validity of the mapping for that node.
in the future, when we e.g. broadcast the lspci output via pmxcfs,
we can drop the proxyto_callback and directly use the info from
pmxcfs (or drop the parameter and always check all nodes).
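schematically, the list call wires this up roughly as follows (the callback
arguments shown are an assumption about how PVE::RESTHandler invokes
'proxyto_callback'; the ACL path and other schema details are illustrative):

    __PACKAGE__->register_method({
        name => 'index',
        path => '',
        method => 'GET',
        permissions => { check => ['perm', '/mapping', ['Mapping.Audit']] }, # placeholder ACL
        # reroute the request to the node given via 'check-node', if any
        proxyto_callback => sub {
            my ($rpcenv, $proxyto, $uri_param) = @_;
            return $uri_param->{'check-node'} // 'localhost';
        },
        parameters => {
            additionalProperties => 0,
            properties => {
                'check-node' => {
                    type => 'string',
                    format => 'pve-node',
                    optional => 1,
                    description => "Check the validity of the mappings for this node.",
                },
            },
        },
        returns => { type => 'array', items => { type => 'object' } },
        code => sub {
            my ($param) = @_;
            return []; # placeholder
        },
    });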
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This is weird and buggy and breaches the unprivileged/privileged
separation of our API daemons, so it is root-only for now and will
possibly be removed soon.
Note that this already had several limitations anyway, like running in a
sync context and thus failing after 30s.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Rather than failing with an error claiming that the job doesn't exist.
The disabled status will be visible in the result of the call.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
nested conditionals stretching over multiple lines are always a bit hard to
untangle, so let's make it explicit:
1. is the interface a bridge?
2. if it is, are we looking for one?
3. is it something else that we are looking for?
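In code, the untangled version reads roughly like this (variable names are
illustrative, not the actual ones):

    my $result = [];
    for my $name (sort keys %$ifaces) {
        my $type = $ifaces->{$name}->{type};
        if ($type eq 'bridge') {
            # 1 + 2: it is a bridge - skip it unless bridges were requested
            next if !$types_wanted->{bridge};
        } elsif (!$types_wanted->{$type}) {
            # 3: not a bridge, and not a type we are looking for either
            next;
        }
        push @$result, $ifaces->{$name};
    }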
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Like it did here before 9f65a584 ("api: backup: update: check
permissions of delete params too") and like it does in the create
case.
This should not have a practical effect; it's mostly for consistency
and to avoid anybody reading anything into the different order of
checks between update and create.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In particular, this ensures that the user is allowed to remove data on
the storage, because configuring a low retention results in older
backups being removed. Of course, setting the storage itself then also
needs to require the same privilege.
This is a breaking API change, but it seems sensible to require
permissions on the affected storage too.
Jobs with a dumpdir setting can be configured by root only.
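In practice this boils down to an additional check along these lines (the
exact privilege required is not spelled out above, so the name used here is
an assumption):

    my $rpcenv = PVE::RPCEnvironment::get();
    my $user = $rpcenv->get_user();

    # the configured storage may see older backups pruned away, so require
    # a privilege that also covers removing data on it
    my $storage = $param->{storage};
    $rpcenv->check($user, "/storage/$storage", ['Datastore.Allocate'])
        if defined($storage);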
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
With Proxmox VE 8, we'll have support for an enterprise Ceph repo,
accessed through Proxmox VE subscriptions, to provide more broadly
tested Ceph updates for production setups.
Replace the test-repository parameter with an actual enum of
selectable repo types for:
- test (same as previously selected through setting test-repository)
- no-subscription (the previous default, then named "main")
- enterprise (new and the default now, recommended for production)
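As a JSONSchema sketch, the parameter could look something like this
(property name and description are illustrative):

    repository => {
        type => 'string',
        enum => ['enterprise', 'no-subscription', 'test'],
        default => 'enterprise',
        optional => 1,
        description => "Ceph repository to configure.",
    },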
Note that writing the auth-part is a bit hacky and might/should be
improved.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The pve_verify_cidr{,v4,v6} functions were originally intended for
the /etc/network/interfaces API endpoints and thus are a bit
restrictive. For example, as reported in the community forum[0],
pve_verify_cidr() does not consider '0::/0' and '0::/1' to be valid.
The error message in this scenario being
> value does not look like a valid CIDR network
is also confusing, as the first thought of users will be that it comes
from the passed-in monitor address.
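To illustrate the restriction (assuming the verifier is called as a plain
function from PVE::JSONSchema):

    use PVE::JSONSchema;

    # valid CIDR networks, but rejected by the interfaces-oriented check
    for my $cidr ('0::/0', '0::/1') {
        eval { PVE::JSONSchema::pve_verify_cidr($cidr) };
        print "$cidr: $@" if $@;  # "value does not look like a valid CIDR network"
    }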
The public networks are not written here, only read from the Ceph
config and via a RADOS mon command, so there is no need to try and
verify them. If
something really would go wrong during parsing, the
get_local_ip_from_cidr() call would complain afterwards.
[0]: https://forum.proxmox.com/threads/125226/
Suggested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
if a job has no schedule and is executed via "Schedule now" but fails, the
following will be printed to journal/syslog:
Mar 21 13:05:01 host02 pvescheduler[203343]: send/receive failed, cleaning up snapshot(s)..
Mar 21 13:05:01 host02 pvescheduler[203343]: 100-0: got unexpected replication job error - command 'set -o pipefail && pvesm export local-zfs:vm-100-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_100-0_1679400300__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=host03' root@10.0.74.3 -- pvesm import local-zfs:vm-100-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_100-0_1679400300__ -allow-rename 0' failed: exit code 255
Mar 21 13:05:01 host02 pvescheduler[203343]: Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/PVE/API2/Replication.pm line 107.
defaulting to the fallback schedule '*/15' makes the spurious warning go away.
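the fix amounts to something like the following when reading the job config
(a sketch of the relevant line only):

    # fall back to the default schedule if the job has none, so the later
    # logging code never sees an undefined value
    my $schedule = $jobcfg->{schedule} // '*/15';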
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Consolidating the different config paths lets us add more as needed
without polluting our API with too many 'configxxx' endpoints.
The config and configdb paths are moved under the new ceph/cfg path and renamed:
* config -> raw (returns the ceph.conf file as is)
* configdb -> db (returns the ceph config db contents)
The old paths are still available and need to be dropped at some point.
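For example, the new paths can be queried like this (via pvesh; <node> is a
placeholder):

    pvesh get /nodes/<node>/ceph/cfg/raw   # plain ceph.conf content
    pvesh get /nodes/<node>/ceph/cfg/db    # ceph config db contents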
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
/nodes/{node}/ceph/pools/{pool} returns the pool details right away on a
GET, which makes it awkward to add additional sub API endpoints under it.
By deprecating it and replacing it with /nodes/{node}/ceph/pool/{pool}
(singular instead of plural) we can turn that into an index GET
response, making it possible to expand it more in the future.
The GET call returning the pool details is moved into
/nodes/{node}/ceph/pool/{pool}/status
The code in the new Pool.pm is basically a copy of Pools.pm, to avoid
close coupling with the old code, as the two will likely diverge until we
can entirely remove the old code.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
To get more details for a single OSD, we add two new endpoints:
* nodes/{node}/ceph/osd/{osdid}/metadata
* nodes/{node}/ceph/osd/{osdid}/lv-info
The {osdid} endpoint itself gets a new GET handler to return the index.
The metadata endpoint provides various metadata regarding the OSD, such as:
* process id
* memory usage
* info about devices used (bdev/block, db, wal)
* size
* disks used (sdX)
...
* network addresses and ports used
...
Memory usage and PID are retrieved from systemd while the rest can be
retrieved from the metadata provided by Ceph.
The second one (lv-info) returns the following information for a logical
volume:
* creation time
* lv name
* lv path
* lv size
* lv uuid
* vg name
Possible volumes are:
* block (default value if not provided)
* db
* wal
'ceph-volume' is used to gather the information, except for the creation
time of the LV, which is retrieved via 'lvs'.
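Example usage via pvesh (<node> and <osdid> are placeholders; the parameter
selecting the volume for lv-info is assumed to be called 'type' here):

    pvesh get /nodes/<node>/ceph/osd/<osdid>             # new index
    pvesh get /nodes/<node>/ceph/osd/<osdid>/metadata
    pvesh get /nodes/<node>/ceph/osd/<osdid>/lv-info --type db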
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>