so the frontend has the information readily available.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
for backwards compatibility. Otherwise, e.g. listing backup jobs with
pvesh get /cluster/backup is broken. And suddenly not having the
property anymore would be a breaking API change.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This avoids errors about the use of uninitialized values if the 'pool'
parameter is not present in the storage configuration.
The 'pool' property for an RBD storage config is not mandatory.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Also generalizes the way vzdump property strings are handled for jobs.
Something similar could be done in VZDump.pm, but there the maxfiles
and prune-backups settings are currently coupled, so a dedicated
parse_performance() is used instead. Can be changed once maxfiles is
dropped.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
since the jobs are configured clusterwide in pmxcfs, a user can use any
node to update their config. for some properties (schedule/enabled)
we need to update the last runtime in the state file, but that file
is sadly only node-local.
to also update the state file on the other nodes, we introduce
a new 'detect_changed_runtime_props' function that checks and saves relevant
properties from the config to the statefile each round of the scheduler if they
changed.
this way, we can detect changes in those and update the last runtime too.
the only situation where we don't detect a config change is when the
user changes back to the previous configuration in between iterations.
This can be ignored though, since it would not be scheduled then
anyway.
in 'synchronize_job_states_with_config' we switch from reading the
jobstate unconditionally to checking the existence of the statefile
(which is the only condition that can return undef anyway),
so that we don't read the file multiple times each round.
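for illustration, the existence check is roughly this (path and
variable names are illustrative, not the actual Jobs.pm code):

    # instead of parsing the state file each round, just test for it
    my $statefile = "/var/lib/pve-manager/jobs/$type-$id.json";
    next if -e $statefile; # state already present, nothing to initialize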
Fixes: 0c8d7468 ("fix #4053: don't run vzdump jobs when they change from
disabled->enabled")
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
by extracting the JSON-encoded-string schema and dumping it into the
verbose description, it at least shows up in the API viewer.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
since this was missing a proper return type definition, the api viewer
couldn't display the endpoint (`retinfs.items` was undefined). also
the `pvesh` command would complain that it cannot properly format the
return type because the variable `$item_type` in `CLIFormatter.pm` was
not defined.
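for reference, a minimal return type definition of the shape needed
here looks like this (a sketch, not the exact schema added):

    returns => {
        type => 'array',
        items => {
            type => 'object',
            properties => {}, # actual properties omitted in this sketch
        },
    },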
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
by updating the timestamp in the job state when enabled changes
from 0 -> 1. We do it this way in PBS too, for example, and it is the
more sensible behaviour.
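a sketch of the intended behaviour (variable names are illustrative):

    # when a job flips from disabled to enabled, pretend it just ran, so
    # the scheduler does not immediately fire it for all the missed runs
    if (!$old_cfg->{enabled} && $new_cfg->{enabled}) {
        $state->{last_run} = time();
    }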
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
like systemd-timers' 'Persistent' option, so that the user can configure a
job to not be run after powering up when it was previously missed.
this reverses the default behaviour to not run missed jobs after pvescheduler
was started, since most of the time that's not the desired behaviour.
since we don't use it for updated schedules anymore, rename
'updated_job_schedule' to 'update_last_runtime'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
which can happen when failing to obtain the guest's migration lock.
This led to a lot of mails being sent during migration (timeout for
obtaining lock is only 2 seconds and we run it in a loop).
One could argue that failing to obtain the lock should increase the
fail count, but without the lock, the job state should not be touched,
and even the first three mails upon migration could be considered spam.
Fixes: fa4bb659 ("replication: sent always mail for first three tries and move helper")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Because $mon->{addr} might come with a port attached (affects monitors
created with PVE 5.4 as reported in the community forum [0]), or even
be a hostname (according to the code in Ceph/Services.pm). Although
the latter shouldn't happen for configurations created by PVE.
[0]: https://forum.proxmox.com/threads/105904/
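illustrative normalization along those lines (the regexes are a
sketch, not the committed helper):

    my $addr = $mon->{addr};
    $addr =~ s/^\[(.*)\]:\d+$/$1/; # [2001:db8::1]:6789 -> 2001:db8::1
    $addr =~ s/^([^:]+):\d+$/$1/;  # 192.0.2.1:6789 or host:6789 -> host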
Fixes: 9e989449 ("api: ceph: mon: fix handling of IPv6 addresses in assert_mon_prerequisites")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Mention which optional parameters will be used for the replicated
metadata pool but won't have an effect on the erasure coded data pool.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
The crush rule is an optional parameter which can be used for the
metadata pool, but the erasure coded data pool will always get its own
crush rule. Therefore this parameter cannot be adapted.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
When a schedule only has a limited number of runs
(e.g. 2022-10-01 8:00/30), $next will be undef after the last run.
Exit early in that case.
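a sketch of the early exit, assuming the compute_next_event helper
from PVE::CalendarEvent:

    my $next = PVE::CalendarEvent::compute_next_event($calspec, $last);
    return undef if !defined($next); # limited schedule exhausted, no next run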
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The osd dump already contains the pool type in numerical format.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
The behavior of always adding the storage config was lost in commit
23c407e. But it is more sensible to make it a default that can be
changed if needed.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
moved to a format string 'erasure-coded', which also allows dropping
most of the param existence checking, as we can set the correct
optional'ness in there. Also avoids bloating the API too much for
just this.
Reuse the $rados connection more often to avoid too much
overhead/lingering sockets (the rados connection stays around in the
background to allow efficient reuse)
really should be three separate commits, but too intertwined and too
late for me to care tbh.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
To use erasure coded (EC) pools for RBD storages, we need two pools. One
regular replicated pool that will hold the RBD omap and other metadata
and the EC pool which will hold the image data.
The coupling happens when an RBD image is created by adding the
--data-pool parameter. This is why we have the 'data-pool' parameter in
the storage configuration.
To follow already established semantics, we will create a 'X-metadata'
and 'X-data' pool. The storage configuration is always added as it is
the only thing that links the two together (besides naming schemes).
Different pg_num defaults are chosen for the replicated metadata pool as
it will not hold a lot of data.
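for illustration, the coupling when creating an image then looks
roughly like this (exact flags may differ):

    rbd create --size 1G --data-pool X-data X-metadata/myimage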
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
When removing a pool, we check against any storage that might have that
pool configured.
We need to check if that pool is used as data-pool too.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
commit 89d146f207 introduced permission
checks here that caused all regular bridges to be removed from the
returned list as soon as the SDN package is installed, unless the user
is root@pam or there exists a VNET with the same ID.
this is arguably a breaking change, so limit the priv check to actually
defined VNETs for the time being, and add ALL regular bridges
unconditionally like before.
get_local_vnets already filters by the same privs, so we need to get the
full config to find out which IDs are VNETs and which are not.
once/iff we introduce ACL paths for *all* bridges in the future, we can
limit accordingly here.
CC: Alexandre Derumier <aderumier@odiso.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
but rather multiple times becoming exponentially less frequent.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
on create we require either starttime (+dow) or a schedule, but when
updating an existing job, this is not necessary
before we changed to schedules, the starttime was not optional on
update either, but i think there is no reason to require the user to
send the schedule/starttime along every time.
the gui will send all values every time, so that was never a problem there
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
In preparation to have check_volume_access() always allow access for
users with Datastore.Allocate privilege. As to not automatically give
all such users permission to extract the config too.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
This removes vmbr* from the bridge selector if the user has access to vnets.
if the user also needs access to vmbr, we can add a permission on the
path "/sdn/vnets/vmbrX"
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Using
pvesh create /nodes/pve701/apt/repositories --path
"/etc/apt/sources.list" --index 0 --enabled 1
reliably leads to
error: invalid type: string "0", expected usize
Coerce to int to avoid this. I was not able to trigger the issue with
the "enabled" option being a string here (in PMG I was), but be on the
safe side and coerce there too. Otherwise it might get triggered by a
future, completely unrelated change further up in the API call
handling.
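the coercion itself is a one-liner per parameter, sketched here:

    # make sure perl's internal value is numeric before serialization
    $param->{index} = int($param->{index}) if defined($param->{index});
    $param->{enabled} = int($param->{enabled}) if defined($param->{enabled});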
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
To avoid being blacklisted because of the default, quite popular,
libwww-perl user-agent like reported in community forum [0].
[0]: https://forum.proxmox.com/threads/104081/
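setting a custom agent with libwww-perl is straightforward; the agent
string below is made up:

    use LWP::UserAgent;
    my $ua = LWP::UserAgent->new(agent => 'pve-manager/1.0'); # string illustrative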
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Tested-by: Matthias Heiserer <m.heiserer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
A virtual package does not have SelectedState Install, but the
dependency will still be satisfied if a package providing it has.
Fixes a bug that wrongly showed postfix as about to be installed when a
different mail-transport-agent is installed and a pve-manager update
is available:
https://forum.proxmox.com/threads/103413/
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and calculate it by getting the next event after 'now' since
we currently have no way to get the last run time for jobs only running
on different cluster nodes
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of accumulating the whole output of 'mini-journalreader' in
the api call (this can be quite big), use the download mechanism of the
http-server to stream the output to the client.
we lose some error handling possibilities, but we do not have
to allocate anything here, and since perl does not free memory after
allocating[0] this is our desired behaviour.
to keep api compatibility, we need to give the journalreader the '-j'
flag to let it output json.
also tell the http server that the encoding is gzip and pipe
the output through it.
0: https://perldoc.perl.org/perlfaq3#How-can-I-free-an-array-or-hash-so-my-program-shrinks?
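for illustration, the download mechanism roughly amounts to returning a
hash like the following from the API handler (treat the exact keys as a
sketch):

    return {
        download => {
            fh => $journal_fh,                     # handle to the journalreader pipe
            stream => 1,                           # let the http-server stream it
            'content-type' => 'application/json',  # '-j' makes the output json
            'content-encoding' => 'gzip',          # piped through gzip
        },
    };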
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
had it lying around and it did not feel condensed/code-golfed to me,
rather a bit more expressive (surely biased though)..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the old web ui sends the days as separate parameters, which will
be concatenated by a null-byte in the api, causing it to land this
way in the jobs.cfg.
to fix this, split+join the list to get a well-formed dow list
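a sketch of the fix, assuming PVE::Tools::split_list also splits on
null bytes:

    # normalize the '\0'-joined day list into a well-formed comma-separated one
    $param->{dow} = join(',', PVE::Tools::split_list($param->{dow}));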
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
else this happens:
"Use of inherited AUTOLOAD for non-method PVE::API2::Backup::uuid() is
no longer allowed at /usr/share/perl5/PVE/API2/Backup.pm line 198.
(500)"
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
not only the given parameters; e.g., at the moment the gui will never
send a 'verify-certificate' parameter, even if it is set in the config.
by using the complete resulting config, we test the actual settings.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Note that this not only allows partitions to be used, but also, for DB
and WAL disks, one more type of disk that wasn't allowed before:
namely, GPT-partitioned disks with any partitions detected as used.
The reason is get_disks' behavior:
* Without $include_partitions=1, the disk will have the same usage
as its first used partition, and thus wasn't allowed. (Except in
the case that usage was LVM, where the check was bypassed, but
luckily OSD creation just failed later because no Ceph volume
group would be detected).
* With $include_partitions=1, the disk will have usage 'partitions'
and thus be allowed.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
a simple api call to simulate calendar event triggers
takes a schedule, an optional number (default 10), an optional starttime
(default 'now'), and returns a list with unix timestamps as well as
human-readable utc timestamps.
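a hypothetical invocation (path and parameter names here are
assumptions, not the committed API):

    pvesh create /cluster/jobs/schedule-analyze --schedule 'mon..fri 21:00' --iterations 3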
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
namely whether the fs already exists, and whether there is currently a
standby mds that can be used for the new fs.
previously, only one cephfs was possible, so these checks were not
necessary. now with pacific, it is possible to have multiple cephfs'
and we should check for those.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
in addition to listing the vzdump.cron jobs, also list from the
jobs.cfg file.
updates/creations go into the new jobs.cfg only now
and on update, starttime+dow get converted to a schedule
this transformation is straightforward, since 'dow'
is already in a compatible format (e.g. 'mon,tue') and we simply
append the starttime (if any)
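the conversion amounts to something like:

    # sketch: 'dow' is already schedule-compatible, just append the starttime
    my $schedule = $job->{dow};               # e.g. 'mon,tue'
    $schedule .= " $job->{starttime}" if defined($job->{starttime});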
id on creation is optional for now (for api compat), but will
be autogenerated (uuid). on update, we simply take the id from before
(the ids of the other entries in vzdump.cron will change but they would
anyway)
as long as we have the vzdump.cron file, we must lock both
vzdump.cron and jobs.cfg, since we often update both
we also change the backupinfo api call to read the jobs.cfg too
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Since commit: 8a3a300b ("ceph services: drop broadcasting legacy
version pmxcfs KV")
The 'ceph-version' kv is not broadcast anymore, so we should not
query it; instead, use get_ceph_versions
Also drop the other legacy keys for the versions
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
There is a udev bug [0] which can ultimately lead to the udev database
for certain devices not being actively updated. The Diskmanage package
relies upon lsblk for certain info, and lsblk queries the udev
database. Ensure the information is updated by manually calling
'udevadm trigger' for the changed devices.
Without the fix, and a bit of bad luck, a cleaned up disk could still
show up as an 'LVM2_member' for example.
[0]: https://github.com/systemd/systemd/issues/18525
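a sketch of the workaround, using the run_command helper:

    # ask udev to re-process the changed device, so a later lsblk sees fresh data
    PVE::Tools::run_command(['udevadm', 'trigger', $dev_path]);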
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
which is mostly a copy of the wipe_disks helper with the difference
that it also uses wipefs on the device and its partitions.
Remove the wipe_disks helper as no users remain.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Adds some missing descriptions to the api/man page documentation, for
certain options of the `pvenode` command. Some minor language fix-ups
are also included
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
so that they are documented and get displayed by pvesh/pvenode
all those fields must exist (since they come from the upid)
aside from the exitstatus, so that one is marked as optional
forum user reported that they are missing:
https://forum.proxmox.com/threads/ergebnis-eines-tasks-per-api-abfragen.92267/
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we filtered out devices which belong to the 'Generic System Peripheral'
category, but this can contain actually useful pci devices
users want to pass through, so simply do not filter it by default.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
metadata is obtained using a HEAD request.
Due to the ability of this api endpoint to request files on internal
networks (which would not be visible/accessible from outside) it is
restricted to users with permissions `Sys.Audit` and `Sys.Modify` on
`/`. Users with these permissions are able to alter node (network)
config anyway, so this should not create any further security risk.
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
Reviewed-By: Dominik Csapak <d.csapak@proxmox.com>
And also add the 'backup-info' endpoint to the index.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This had a myriad of issues:
* marked as protected, thus forwarded to the privileged daemon even
if it just returned static information
* did not return a directory index but a "stub" string, which does
  not make sense.
* not named index
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
VM names are returned by the endpoint anyway, therefore it makes sense
to add it to the endpoint specification so it also appears in the API
docs and is visible when using pvesh with text output.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Similar to PBS. The 'errors' filter parameter still takes precedence
(overrides this)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ Thomas: adapt to renamed PVE::Tools helper method ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we have lots of information already parsed and cached, use that and
give the frontend more to work with/display.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
a common function to download arbitrary files from urls has been
defined as PVE::Tools::download_file_from_url and is now used.
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
Multiple public networks can be defined in the ceph.conf. The networks need to
be routed to each other.
Support handling multiple IPs for a single monitor. By default, one address from
each public network is selected for monitor creation, but, as before, it can be
overwritten with the mon-address parameter, now taking a list of addresses.
On removal, make sure that all addresses are removed from the mon_host
entry in the ceph configuration.
Originally-by: Alwin Antreich <a.antreich@proxmox.com>
[handling of multiple addresses]
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
by also comparing the canonical form to decide when to remove an address. When
getting the IP from the rados information, also drop any brackets, so our
existing function can handle it. Add the brackets back within the
remove_addr_from_mon_host function.
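the bracket handling boils down to (sketch):

    $ip =~ s/^\[(.*)\]$/$1/;     # drop any brackets from the rados info
    # ... compare/remove using the bare, canonical address ...
    $ip = "[$ip]" if $ip =~ /:/; # add them back for the mon_host entry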
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Partially based on pve-storage's CephConfig.pm get_monaddr_list, but the
interface is not the best for the use case here.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
in preparation for supporting multiple addresses. The config section does not
allow more than one public_addr.
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
mostly relevant to prepare support for IPv4/IPv6 dual stack mode as a special
case of the planned support for multiple public networks.
As before, only set the false value when we are dealing with the first address,
but also be explicit about the IPv4 case as the defaults might change in the
future.
Then, when an address of a different type comes along later, set the relevant
bind option to true.
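a sketch of the resulting logic (ms_bind_ipv4/ms_bind_ipv6 are the
actual ceph option names, the surrounding code is illustrative):

    if ($is_first_address) {
        # be explicit for both types, as the defaults might change in the future
        $set_option->('ms_bind_ipv4', $is_ipv6 ? 'false' : 'true');
        $set_option->('ms_bind_ipv6', $is_ipv6 ? 'true' : 'false');
    } elsif ($is_ipv6) {
        $set_option->('ms_bind_ipv6', 'true'); # dual stack from here on
    } else {
        $set_option->('ms_bind_ipv4', 'true');
    }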
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The change not to pass the 'upgrade' parameter in the frontend was made in
953f6e9bb3 (the commit doesn't talk about it, it's
likely an accidental squash of two changes)
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The switch to 'cmd' was made by commit af39a6f09651e15d1c83536e25493a2212efd7d3
in the pve-xtermjs repo and is included in 4.7.0
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
everywhere where Pool.Allocate was unnecessarily used it was replaced
with Pool.Audit.
`/cluster/resources` now returns pool information for guests only if
the requesting user has the Pool.Audit permission on the pool.
`/pool/` now returns only pools where the requesting user has the
Pool.Audit permission.
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
on a given node (and storage).
There is no datacenter/storage fallback for the bandwidth limit, so the default
can just be returned as is. While the bandwidth limit is a root-only option when
executing the backup, it still makes sense to return it for all users, so they
can see what's going to be used.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
nautilus 14.2.20 and octopus 15.2.11 fixed a security issue with
reclaiming the global ID auth (CVE-2021-20288). As fixing this issue
means that older client won't be able to connect anymore, the fix was
done behind a switch, with a HEALTH warning if it was not active
(i.e., disallowed connection from older clients).
New installations have this switch also at the insecure level, for
compat reasons, so let's deactivate it ourselves after monitor creation
to avoid the health warning and slightly insecure setup (in default
PVE ceph the whole issue was of rather low impact/risk). But, only do
so when creating the first monitor of a ceph cluster, to avoid
breaking existing setups by accident.
An admin can always switch it back again, e.g., if they're recovering
from some failure and need to set up fresh monitors but still have old
clients.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Currently the check for used ports for bonds and bridges happens
while rendering '/etc/network/interfaces.new' in PVE::Inotify
(pve-common).
However at that stage the new/updated interface is already merged
with the old settings, making it impossible to indicate where a NIC
is currently used.
The code is adapted from the renderer in
PVE::Inotify::__write_etc_network_interfaces.
Tested on a virtual PVE instance.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this was from the time when we had a loop here to add two storages,
one for KRBD-only and one for KRBD-never. Nowadays we can handle the
mixed case just fine, but the patch dropping that forgot to clean up
the error handling..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In Ceph Octopus the device_health_metrics pool is auto-created with 1
PG. Since Ceph has the ability to split/merge PGs, hitting the wrong PG
count is now less of an issue anyhow.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the properties target_size_ratio, target_size_bytes and pg_num_min are
used to fine-tune the pg_autoscaler and are set on a pool. The updated
pool list now shows autoscale settings & status, including the new
(optimal) target PGs, to make it easier for new users to get/set the
correct amount of PGs.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We want to check explicitly for type host, so filter for that first
and create a hash map for easier usage afterwards.
Drop the error when there's no tree, as either RADOS error'd on bad
command already, or there really is no tree (but RADOS worked OK), in
which case we simply return that the OSD did not belong to this node.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allow destroying only OSDs that belong to the node that has been specified in
the API path.
So if
- OSD 1 belongs to node A and
- OSD 2 belongs to node B
then
- pvesh delete nodes/A/ceph/osd/1 is allowed but
- pvesh delete nodes/A/ceph/osd/2 is not
Destroying an OSD via GUI automatically inserts the correct node
into the API path.
pveceph automatically inserts the local node into the API call, too.
Consequently, it can now only destroy local OSDs (fix #2053).
- pveceph osd destroy 1 is allowed on node A but
- pveceph osd destroy 2 is not
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
The guest iteration is slightly confusing as we also handle the
accumulated pool settings there, so we only check the VM.Audit privs.
for a specific VM and skip to the next if the permission is not
there after the pool handling.
So, move operations which are only required when VM privs. are there
below this check.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
makes it easier to compare in API responses, and those lists are not
huge, seldom over a few thousand entries, which is peanut crumbs compared
to all the other things in this perl stack.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
so it doesn't get out of scope too early.
Regression introduced by 5620e5761e as pointed
out by Fabian Grünbichler.
Reported in the community forum:
https://forum.proxmox.com/threads/limit-simultaneous-backup-jobs.87489
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
avoid further crowding the top-level node API path with such
"what can some part of the node currently do" stuff, rather move it
down.
The QEMU cpu stuff should move also down there.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
as 'machine-types', so it is clear this refers to QEMU machines, not the
local machine (as one might think, this being a 'node' API call).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This API call is the predecessor of /nodes/{node}/disks/list, which has seen a
few more improvements. The latter API call should be used instead, and the web
UI already does so.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
like we do for the storage section configs.
we will need this to store the token for the influxdb http api
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We get the device list from ceph-volume lvm list, and decode the json
output, which at that point is tainted (perlsec (1)).
Untaint it here before calling, because it is currently the only
call-site using the information in a problematic way (run_command).
(the only other call-site being in pve5to6)
Alternatively we could untaint while reading the information, but then
should only return a small subset of the ceph-volume output.
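the untaint itself is the usual perl pattern:

    # explicitly untaint the decoded output before it reaches run_command
    my ($untainted_json) = $raw_json =~ /\A(.*)\z/s;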
The issue is most likely due to
cb9db10c1a9855cf40ff13e81f9dd97d6a9b2698 in pve-common ('run_command:
improve performance for logging and long lines'),
Tested on a virtual testsetup by creating OSDs with second DB disk,
and destroying it via GUI (did not manage to get the error without the
DB disk)
Reported via our community forum:
https://forum.proxmox.com/threads/insecure-dependency-in-exec-during-osd-destroy.79574/
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
by deleting it from $ceph_param we deleted it from $param as well,
since it was only a reference.
fix it by extracting it beforehand
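a sketch of the pitfall and the fix (the key name is illustrative):

    # the bug: $ceph_param is just another reference to the same hash
    my $ceph_param = $param;
    delete $ceph_param->{'add-storages'}; # also deletes it from $param!
    # the fix: pull the value out of $param beforehand
    my $add_storages = PVE::Tools::extract_param($param, 'add-storages');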
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
If the command itself allows it, which normally means it has good
verification of passed arguments.
We may want to re-evaluate security here if we allow execution for a
group of non-root users.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the information comes only from the key value store in the pmxcfs, so
we do not actually require ceph to be installed.
So only check if ceph is locally initialized and create the rados
connection after the early return when only versions scope is set.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We actually wanted to use that scheme for more new API paths, lets
see if it is really fitting starting with this.
Use the new widget-toolkit submitUrl helper to add the ID on create.
And unify the edit/create window creation, which may fit better in a
separate commit; it's quite small and was too cumbersome to untangle,
so just go against my own rules here... :(
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
after creation, so that users don't need to go the ceph tooling route.
Separate common pool options to reuse them in other places.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
to reduce code duplication and make it easier to add more options for
pool commands.
Use a new rados object for each 'osd pool set', as each command can set
an option independent of the previous command's success/failure. On
failure a new rados object would need to be created and that will
confuse task tracking of the REST environment.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Like this, there will be a backup task (within the big worker task)
for such IDs, which will then visibly (i.e. also visible in the
notification mail) fail with, e.g.:
unable to find VM '123'
In get_included_guests, the key '' was chosen for the orphaned IDs,
because it cannot possibly denote a nodename.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and make the two options mutually exclusive as long
as they are specified on the same level (e.g. both
from the storage configuration). Otherwise prefer
option > storage config > default (only maxfiles has a default currently).
Defines the backup limit for prune-backups as the sum of all
keep-values.
There is no perfect way to determine whether a
new backup would trigger a removal with prune later:
1. we would need a way to include the not yet existing backup
in a 'prune --dry-run' check.
2. even if we had that check, if it's executed right before
a full hour, and the actual backup happens after the full
hour, the information from the check is not correct.
So in some cases, we allow backup jobs with remove=0, that
will lead to a removal when the next prune is executed.
Still, the job with remove=0 does not execute a prune, so:
1. There is a well-defined limit.
2. A job with remove=0 never removes an old backup.
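the limit computation is then just a sum (sketch):

    use List::Util qw(sum);
    my @keeps = qw(keep-last keep-hourly keep-daily keep-weekly keep-monthly keep-yearly);
    my $limit = sum(0, grep { defined($_) } map { $prune_opts->{$_} } @keeps);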
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and regular users to read all their own tasks as well as those of their
associated tokens.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Else, a user would need to renew it first before being able to revoke
it, which does not make much sense..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this normally just means that the old cert is already expired, we do
not care for that - after all: we got a new (renewed) valid cert
successfully.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If source is missing, pvesr will set it via job_status
on the next run. But the info is already present here,
so it doesn't hurt to use it.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
reload is actually preferred, and even if most of the time this
won't reach the API, allowing to start them is still definitively
fine!
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In some situations Ceph's auto-detection doesn't recognize the device
class correctly. The option allows setting it directly on osd create,
instead of altering it afterwards. This way the cluster doesn't need to
shift data back and forth unnecessarily.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Adds a new api endpoint at cluster/backupinfo for cluster wide backup
stuff. This is necessary because cluster/backup expects a backup job ID
at the next level and thus other endpoints are hard or impossible to
implement under that hierarchy.
The only api endpoint available for now is the `not_backed_up` which
returns a list of all guests which are not covered by any backup job.
The top level index endpoint is left unused for now, to be available
for a more generic summary endpoint in the future.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
This patch adds a new API endpoint that returns a list of included
guests, their volumes and whether they are included in a backup.
The output is formatted to be used with the extJS tree panel.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
The `guest include` logic handling `all` and `exclude` parameters was in
the `PVE::VZDump->exec_backup()` method. Moving this logic into the
`get_included_guests` method allows us to simplify and generalize it.
This helps to make the overall logic easier to test and develop other
features around vzdump backup jobs.
The method now returns a hash with node names as keys mapped to arrays
of VMIDs on these nodes that are included in the vzdump job.
The VZDump API call to create a new backup is adapted to use the new
method to create the list of local VMIDs and the skiplist.
Permission checks are kept where they are to be able to handle missing
permissions according to the current context. The old behavior to die
on a backup job when the user is missing the permission to a guest and
the job is not an 'all' or 'exclude' job is kept.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
otherwise the ACME endpoint might return the ordered domain in lower
case and we fail to find our plugin config.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
allow users with Sys.Modify to modify custom or ACME certificates. those
users can already hose the system in plenty of ways, no reason to
restrict this in particular to being root@pam only.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
As a first step to make the whole guest include logic more testable the
part from the API endpoint has been moved to its own method with as
little changes as possible.
Everything concerning `all` and `exclude` logic is still in the
PVE::VZDump->exec_backup() method.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
to add the pg_autoscale_mode, since it's activated in Ceph Octopus by
default and emits a warning (ceph status) if a pool has too many PGs.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Luminous, Nautilus and Octopus. In Octopus the mon_status was dropped.
Also the ceph status was cleaned up and doesn't provide the mgrmap and
monmap.
The rados queries used in the ceph status API endpoints (cluster / node)
were factored out and merged to one place.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
and do a Breaks on the older network package, as we do not depend on
it due to it being an optional/experimental feature, so reverse the
depends with the breaks.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
since this API endpoint is used for the node selector in the GUI, which
causes quite widespread breakage.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
we need to parse the config even if it does not exist - it will return
the 'standalone' entry that's needed to be backwards compatible with
existing setups.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
which returns a list of challenge api types with the schema of their
required data (if it exists)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ Thomas: adapt to my changes from proxmox-acme schema def and change
path from challengeschema to challenge-schema ]
As proxmox-acme now has a default delay for DNS challenge plugins,
which is the important one. Those are just for not overloading the
acme servers with a lot of requests, but once the challenge was
propagated they have it verified pretty quickly, so reduce the delay
for checking validation after first requesting it down to 10 seconds
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
re-loading it always would mean that we could potentially switch the
config to something completely different, and the mix of the previous
and the new could result in totally bogus actions.
Better to use the same one for one full order; even if it may get
"outdated", it was still valid in the past and, most importantly,
coherent.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Drop various leftovers from the storage content API module this was
based on, e.g., ACME plugins have no fixed options and the like.
Also, the descriptions shouldn't mention "storage".
Further, drop the "update_config" "helper" with its operations
effectively only increasing code complexity and adding another rabbit
hole to jump into.
IF, this should have been factoring out the lock+read+write cycle
only, leaving the rest to a passed CODE-ref, but honestly that saves
only really the read and write config lines, and at this point
nothing is really gained, so just let it be.
Should actually have been three or so separate patches, but too deep
into this rabbit hole to care..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
for now mostly due to the "nice" property of the acmedomains, which
do not use their property key as index but actually the domain.
Without this, one could set up duplicated domain entries just fine,
but once using them -> error.
This is not nice UX, so verify node config before writing an updated
one out, to catch those issues.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
add checks, encoding of loaded data files, update API path, proper inclusion into API tree
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
instead of relying that the authorization URLs and the ordered
identifiers are sorted the same way for already validated
authorizations.
on the contrary, RFC 8555 even says:
"The authorizations required are dictated by server policy; there may
not be a 1:1 relationship between the order identifiers and the
authorizations required."
authorizations MUST always include a single identifier, no matter which
state they are in.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>