These allow one to indirectly access resources from the POV of the
Proxmox VE cluster nodes. While gotify is relatively harmless, smtp
could already cause problems for admins who are not aware of the
implications of allowing users to add targets while having an open
SMTP relay that is only reachable from networks the PVE nodes can
access, but not by the user that can talk to PVE's API. The webhook
target is then pretty much free-form and might cause adverse effects
in environments that are only loosely guarded; while that might point
at general security problems, it's likely that admins will still place
the blame on our projects.
So while the former should not be problematic, the new, not yet fully
released webhooks could have some impact. That said, this currently
requires Mapping.Modify, which is a moderately powerful privilege, so
it's not like any user could use it. Still, hedging towards the safer
side seems the better choice for now; we can still open this up if
there's user feedback and we deem it safe enough to do so.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The API itself allows several list separators, but the network
configuration for bridge_vids expects a space-separated list. We
therefore convert it to a space-separated list up front.
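As a minimal sketch of such a normalization (Python here with a
made-up function name; the actual code is Perl):

    import re

    def normalize_vlan_list(value: str) -> str:
        # collapse commas, semicolons and runs of whitespace into single spaces
        return " ".join(re.split(r"[\s,;]+", value.strip()))

    # normalize_vlan_list("2,3;100-200") == "2 3 100-200"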
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
We already have the information parsed, so it's cheap, and we already
have a mechanism in place that adds 'ha-<hastate>' as a CSS class, so
let's reuse that.
I chose a blue wrench, as wrenches are associated with 'maintenance',
and because the state is different from 'online' and 'offline'; I
didn't make it yellow since it's not really a 'failure' state.
Users mentioned in the forum that this would be nice:
https://forum.proxmox.com/threads/125768/
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
These just call the API implementation via the perl-rs bindings.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-By: Stefan Hanreich <s.hanreich@proxmox.com>
The get_targets API endpoint is now implemented in Rust.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-By: Stefan Hanreich <s.hanreich@proxmox.com>
Stick to the pattern
single: "(for type 'foo')"
multiple: "(for types 'foo', 'bar'(...) and 'last-type')"
Also adapt the line wrapping accordingly (for a 100 column limit) and
fix some minor typos (and one awkward phrasing) while at it.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
The endpoint name is mostly used for calling the API methods as
first-class Perl methods through the autoloader, but manager is a leaf
node w.r.t. dependencies and there is no existing caller for either of
the two.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Currently, when we have a PCI resource mapping, we manually check the
available models only for the first PCI entry. This often works, but
not always, since one could have completely different devices in one
mapping, or, with the new NVIDIA sysfs API, we don't get the generally
available models.
To improve this, extend the parameter for the PCI ID to accept both
PCI IDs and named mappings; for the latter, iterate over all local PCI
devices in the mapping and extract the mdev types.
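A rough sketch of that lookup in Python (the real implementation is
Perl; the helper callables here are purely illustrative):

    from typing import Callable, Iterable

    def mdev_types(
        id_or_mapping: str,
        resolve_mapping: Callable[[str], Iterable[str]],  # mapping name -> local PCI IDs
        types_of: Callable[[str], Iterable[str]],         # PCI ID -> supported mdev types
    ) -> set[str]:
        if ":" in id_or_mapping:  # looks like a raw PCI ID, e.g. "0000:01:00.0"
            return set(types_of(id_or_mapping))
        # named mapping: collect the types of every local device in it
        result: set[str] = set()
        for pci_id in resolve_mapping(id_or_mapping):
            result.update(types_of(pci_id))
        return result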
Also rename the parameter to better reflect what it accepts. While
this changes an API parameter, it's not a breaking change in this
specific case, because the parameter is derived from the URL path;
manually passing the parameter by name is not possible and results in
an error:
duplicate parameter (already defined in URI) with conflicting values!
Since we cannot reach the API handler without already giving the
parameter via the URL, there is no way to pass it by name.
Accepting named mappings directly in this API endpoint also vastly
simplifies the UI code, since we now only have to pass the mapping to
the selector instead of an (arbitrarily selected) PCI ID from that
mapping.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ TL: also split pciid into pci-id for readability and reword message
slightly ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
used the same description as for the guests.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ TL: avoid having netin twice, change to netout once ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This was missing, but it is mostly well-defined since we're using the
Rust bindings here. I copied most descriptions over from the PBS API,
except for the ones only existing here (like sockets and level).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
both the task and system log API endpoints support downloading the log
data. annotate the API method schema accordingly, so that these
endpoints pass the newly introduced checks in the API handler which
limit download functionality to annotated endpoints.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
jobs converted from vzdump.cron have an ID of the format
$digest:$counter
where $digest is the hash of the vzdump.cron file, and $counter is the
position of the job within the crontab.
while the section config schema pretends jobs.cfg's section IDs are
of type pve-configid, that is not enforced anywhere, and the API
endpoints managing such jobs allowed arbitrary strings in the past.
the ':' character is not allowed by `pve-configid`, but it is accepted
by the section config parsers and the Job API.
convert the API schema to use the union of the previous definition
used by the job API and what the section config parser accepts.
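As a purely hypothetical illustration, a unified pattern could accept
both forms somewhat like this (the actual schema definition may
differ):

    import re

    # accepts a config-id style name as well as the "$digest:$counter" form
    JOB_ID_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9_-]*(:\d+)?$")

    assert JOB_ID_RE.match("backup-nightly")   # regular job ID
    assert JOB_ID_RE.match("356af5af:2")       # converted vzdump.cron job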
Fixes: f5a97f1f5 (api: jobs: vzdump: pass job 'job-id' parameter)
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
and put it into PVE::VZDump, because there is a cycle between
PVE::Jobs::VZDump, PVE::API2::VZDump and PVE::API2::Backups that
prevents any of those from containing it for now.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
This new endpoint returns node, storage and guest metrics in JSON
format. The endpoint supports history/max-age parameters, allowing
the caller to query the recent metric history as recorded by the
PVE::PullMetric module.
The returned data format is quite simple: an array of metric records,
each including a value, a metric name, an id identifying the object
(e.g. qemu/100, node/foo), a timestamp and a type ('gauge', 'derive',
...). The latter property makes the format self-describing and helps
the metric collector choose a representation for storing the metric
data.
[
  ...
  {
    "metric": "cpu_avg1",
    "value": 0.12,
    "timestamp": 170053205,
    "id": "node/foo",
    "type": "gauge"
  },
  ...
]
Some experiments were made regarding making the format more
'efficient', e.g. by grouping based on timestamps/ids, resulting in a
much more nested/complicated data format. While that certainly reduces
the size of the raw JSON response by quite a bit, after GZIP
compression the differences are negligible (the simple, flat data
format as described above compresses by a factor of 25 for large
clusters!). Also, the slightly increased CPU load of compressing the
larger amount of data when e.g. polling once a minute is so small that
it's indistinguishable from noise in relation to a usual hypervisor
workload. Thus the simpler format was chosen. One benefit of this
format is that it is more or less the exact same format as the one
Prometheus uses, just encoded as JSON - so adding a Prometheus metric
scraping endpoint should not be much work at all.
The API endpoint collects metrics for the whole cluster by calling
the same endpoint for all cluster nodes. To avoid endless request
recursion, the 'local-only' request parameter is provided. If this
parameter is set, the endpoint implementation will only return metrics
for the local node, avoiding a loop.
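A rough sketch of that fan-out logic (Python, with injected helpers
standing in for the actual Perl code and HTTP calls):

    from typing import Callable

    def collect_cluster_metrics(
        local_records: Callable[[], list],   # recent metrics of this node
        other_nodes: list,
        query_node: Callable[[str], list],   # calls the endpoint with 'local-only' set
        local_only: bool = False,
    ) -> list:
        records = list(local_records())
        if local_only:
            return records  # a remote caller set 'local-only', so don't fan out again
        for node in other_nodes:
            records.extend(query_node(node))
        return records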
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
[WB: remove unused $start_time leftover from benchmarks]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This new API route returns known notification metadata fields and
a list of known possible values. This will be used by the UI to
provide suggestions when adding/modifying match rules.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
This allows us to access the backup job id in the send_notification
function, where we can set it as metadata for the notification.
The 'job-id' parameter can only be used by 'root@pam' to prevent
abuse. This has the side effect that manually triggered backup jobs
cannot have the 'job-id' parameter at the moment. To mitigate that,
manually triggered backup jobs could be changed so that they
are not performed by a direct API call from the UI, but by requesting
pvescheduler to execute the job in the near future (similar to how
manually triggered replication jobs work).
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
[ TL: fleece in d/control bump for guest-common now that the version
is known ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Ceph MDS IDs cannot start with a number [0].
[0] https://docs.ceph.com/en/latest/man/8/ceph-mds/
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Christoph Heiss <c.heiss@proxmox.com>
The field contains the hostname of the host (without any domain part)
that sends the notification. This field can be used in match-field
match rules.
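For example, a matcher could then restrict notifications to a single
sending node roughly like this (matcher and target names are made up):

    matcher: only-from-pve1
        match-field exact:hostname=pve1
        target mail-to-admin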
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
- The man page warns about the usage of `hostname -f`, since a host
may have multiple domains (or none at all)
- The fallback PVE::INotify::nodename() already only returned the
hostname without the domain part
- Fencing notifications didn't include the domain part anyway
This may result in soft-breakage for users who have already relied on
the domain being present. If there is a need for it, an additional
'fqdn' metadata field could be added.
The hostname property used for rendering the notification template
is unaffected for now.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
This allows users to create notification match rules for specific
replication jobs, if they so desire.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
This commit adapts notification sending for
- package update
- replication
- backups
to use named templates (installed in /usr/share/pve-manager/templates)
instead of passing template strings defined in code to the
notification stack.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
Similar to how Datastore.AllocateSpace is required for the backup
storage, it should also be required for the fleecing storage.
Removing a fleecing storage from a job does not require more
permissions than for modifying the job.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Due to Ceph dropping privileges when running the 'ceph-crash' daemon
[0], it is necessary to allow the daemon to authenticate with its
cluster in a safe manner.
In order to avoid exposing sensitive keyrings or somehow escalating
its privileges again, 'ceph-crash' is therefore provided with its own
keyring in the '/etc/pve/ceph' directory. This directory, due to being
on 'pmxcfs', may be read by members of the 'www-data' group, which
'ceph-crash' is made part of [1].
Expected Configuration
----------------------
1. A keyring file named '/etc/pve/ceph/ceph.client.crash.keyring'
exists
2. A section named 'client.crash' exists in '/etc/pve/ceph.conf'
3. The 'client.crash' section has a key named 'keyring' which
references the keyring file as '/etc/pve/ceph/$cluster.$name.keyring'
4. The 'client.crash' section has *no* key named 'key'
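Put together, the expected '/etc/pve/ceph.conf' stanza looks roughly
like this (note the absence of a 'key' entry):

    [client.crash]
         keyring = /etc/pve/ceph/$cluster.$name.keyring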
New Clusters
------------
The keyring file is created and the conf file is updated after the first
monitor has been created (when calling `pveceph mon create`).
Existing Clusters
-----------------
A new helper script creates and configures the 'client.crash' keyring in
`postinst`, if:
* Ceph is installed
* Ceph is initialized ('/etc/pve/ceph.conf' and '/etc/pve/ceph' exist)
* Connection to RADOS is successful
If the above conditions are met, the helper script ensures that the
existing configuration matches the expected configuration mentioned
above.
The configuration is not changed if it is already as expected.
The helper script may be called again manually if the `postinst` hook
fails. It is installed to '/usr/share/pve-manager/helpers/pve-init-ceph-crash'.
Existing `client.crash` Key
---------------------------
If a key named 'client.crash' already exists within the cluster, it is
reused and not regenerated.
[0]: https://github.com/ceph/ceph/pull/48713
[1]: https://git.proxmox.com/?p=ceph.git;a=commitdiff;h=f72c698a55905d93e9a0b7b95674616547deba8a
Signed-off-by: Max Carrara <m.carrara@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
This commit adds the '/etc/pve/ceph' directory to our overall expected
Ceph configuration.
This directory is meant to store cluster-wide, non-private
configuration files used by Ceph applications and services that are
executed with lower privileges, such as 'ceph-crash.service'.
The existence of the directory is now also checked when verifying
whether Ceph is configured correctly. This makes it easier for our
other tooling to rely on the directory's existence, reducing the
number of otherwise needlessly frequent checks.
* For new clusters: `pveceph init` now creates '/etc/pve/ceph' when
called.
* For existing clusters: The 'postinst' hook this commit adds ensures
that '/etc/pve/ceph' is created when updating.
Signed-off-by: Max Carrara <m.carrara@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Allows configuring a custom broadcast address to use when sending a
Wake-on-LAN packet to wake a remote node.
The default behaviour remains falling back to 255.255.255.255.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Allows optionally configuring a local interface name to bind to when
sending a Wake-on-LAN packet to wake a remote node.
The default behaviour remains sending the packet via the interface for
the default gateway.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Moves the wakeonlan property to a property string, with the current
MAC address as the default key. This allows adding further optional
properties such as bind-interface and broadcast-address later.
Adds the `get_wakeonlan_config` helper function to parse the string
when it is read from the node config.
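A node config entry using all of these could then look something like
this (the values are made up for illustration):

    wakeonlan: aa:bb:cc:dd:ee:ff,bind-interface=eno1,broadcast-address=192.168.0.255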
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Max Carrara <m.carrara@proxmox.com>
[ TL: also improve if-expression wrapping ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The Ceph monitor removal assertion contains a condition that checks
whether the given mon ID actually exists and thus may be removed.
The first part of the condition checks whether the hash returned by
`get_services_info` [0] contains the key "mon.$monid". However, the
hash's keys are never prefixed with "mon.", which makes this check
incorrect.
This is fixed by just using "$monid" directly.
The second part checks whether the mon hashes returned by
Ceph contain the "name" key before comparing that key's value with the
given mon ID. This key existence check is also incorrect; in particular:
* If the lookup `$_->{name}` evaluates to e.g. "foo", the check
passes, because "foo" is truthy. [1]
* If the lookup `$_->{name}` evaluates to "0", the check fails,
because "0" is falsy (due to it being equivalent to the number 0,
according to Perl [1]).
This is solved by using the built-in `defined()` instead of relying on
Perl's notion of truthiness.
[0]: https://git.proxmox.com/?p=pve-manager.git;a=blob;f=PVE/Ceph/Services.pm;h=e0f31e8eb6bc9b3777b3d0d548497276efaa5c41;hb=HEAD#l112
[1]: https://perldoc.perl.org/perldata#Scalar-values
Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=5198
Signed-off-by: Max Carrara <m.carrara@proxmox.com>
This was restricted to Sys.Modify + Sys.Audit on the whole cluster to
ensure that only trusted users get access to a method that can scan
the (local) network from the POV of the Proxmox VE node, even if only
through HTTP HEAD requests.
Nowadays there's enough user interest [0] to warrant a separate access
privilege to cover such a use case, and while most of the requests
are for the download-url storage API endpoint, this method here is not
only a bit less powerful than the storage one, but is also rather tied
to the latter anyway (e.g. for querying the metadata of a URL in the
web UI for name and size before downloading it to a storage).
For backwards compatibility, keep the old check and add the new
privilege as an alternative to fulfill the permission requirements of
that API endpoint.
[0]: https://bugzilla.proxmox.com/show_bug.cgi?id=5254
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Tested-by: Hannes Duerr <h.duerr@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
I recently added the same info to PMG, together with the return schema
entries, so copying them over here comes for free; while far from
complete, it's better than nothing.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>