Everywhere Pool.Allocate was unnecessarily used, it has been replaced
with Pool.Audit.
`/cluster/resources` now returns pool information for guests only if
the requesting user has the Pool.Audit permission on the pool.
`/pool/` now returns only pools where the requesting user has the
Pool.Audit permission.
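A minimal sketch of the check this boils down to (variable names are
illustrative, the real code differs in structure):

    use PVE::RPCEnvironment;

    my $rpcenv = PVE::RPCEnvironment::get();
    my $authuser = $rpcenv->get_user();

    # only expose the pool membership if the user may audit that pool;
    # the last argument makes check() return instead of die on failure
    for my $entry (@$resources) {
        next if !defined($entry->{pool});
        delete $entry->{pool}
            if !$rpcenv->check($authuser, "/pool/$entry->{pool}", ['Pool.Audit'], 1);
    }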
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
on a given node (and storage).
There is no datacenter/storage fallback for the bandwidth limit, so the default
can just be returned as is. While the bandwidth limit is a root-only option when
executing the backup, it still makes sense to return it for all users, so they
can see what's going to be used.
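Roughly, the handler just hands the configured default through (a
sketch only; the defaults helper name here is hypothetical):

    # no datacenter/storage fallback exists for bwlimit, so whatever the
    # node-level vzdump defaults contain can be returned unmodified, and
    # it is returned for all users, not just root
    my $defaults = get_vzdump_defaults($node);   # hypothetical helper
    return { bwlimit => $defaults->{bwlimit} };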
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
nautilus 14.2.20 and octopus 15.2.11 fixed a security issue with
reclaiming the global ID auth (CVE-2021-20288). As fixing this issue
means that older clients won't be able to connect anymore, the fix was
done behind a switch, with a HEALTH warning if it was not active
(active meaning that connections from older clients are disallowed).
New installations have this switch also at the insecure level, for
compat reasons, so let's deactivate it ourselves after monitor creation
to avoid the health warning and the slightly insecure setup (in default
PVE ceph the whole issue was of rather low impact/risk). But only do
so when creating the first monitor of a ceph cluster, to avoid
breaking existing setups by accident.
An admin can always switch it back again, e.g., if they're recovering
from some failure and need to set up fresh monitors but still have old
clients.
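A sketch of what gets executed after creating the first monitor; the
option name is the upstream one from the CVE fix, the surrounding call
is illustrative:

    use PVE::Tools qw(run_command);

    # opt in to the secure behaviour so the HEALTH warning is not shown;
    # an admin can revert this at any time with the same command and 'true'
    run_command([
        'ceph', 'config', 'set', 'mon',
        'auth_allow_insecure_global_id_reclaim', 'false',
    ]);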
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Currently the check for used ports for bonds and bridges happens
while rendering '/etc/network/interfaces.new' in PVE::Inotify
(pve-common).
However, at that stage the new/updated interface is already merged
with the old settings, making it impossible to indicate where a NIC
is currently used.
The code is adapted from the renderer in
PVE::Inotify::__write_etc_network_interfaces.
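A condensed sketch of that adapted check (structure only; key and
variable names illustrative):

    sub check_ports_usage {
        my ($config, $new_iface, @new_ports) = @_;

        # build a map of which bond/bridge currently uses which port
        my $used_by = {};
        for my $iface (sort keys %$config) {
            my $d = $config->{$iface};
            my @ports;
            push @ports, split(/\s+/, $d->{slaves} // '') if ($d->{type} // '') eq 'bond';
            push @ports, split(/\s+/, $d->{bridge_ports} // '') if ($d->{type} // '') eq 'bridge';
            $used_by->{$_} = $iface for grep { $_ ne 'none' } @ports;
        }

        # unlike the late check in the renderer, this can still tell *where*
        # a NIC is used, as the old and new settings are not merged yet
        for my $port (@new_ports) {
            die "port '$port' is already used on interface '$used_by->{$port}'\n"
                if defined($used_by->{$port}) && $used_by->{$port} ne $new_iface;
        }
    }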
Tested on a virtual PVE instance.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this was from the time when we had a loop here to add two storages,
one for KRBD-only and one for KRBD-never. Nowadays we can handle the
mixed case just fine, but the patch dropping that forgot to clean up
the error handling.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In Ceph Octopus the device_health_metrics pool is auto-created with 1
PG. Since Ceph has the ability to split/merge PGs, hitting the wrong PG
count is now less of an issue anyhow.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the properties target_size_ratio, target_size_bytes and pg_num_min are
used to fine-tune the pg_autoscaler and are set on a pool. The updated
pool list now shows the autoscale settings & status, including the new
(optimal) target PGs, to make it easier for new users to get/set the
correct amount of PGs.
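For reference, the three properties map directly onto per-pool Ceph
options; a sketch of setting them on an example pool via the CLI,
wrapped in run_command:

    use PVE::Tools qw(run_command);

    # example values on an example pool named 'rbd'
    for my $kv ([target_size_ratio => '0.5'],
                [target_size_bytes => '107374182400'],   # 100 GiB
                [pg_num_min => '16']) {
        run_command(['ceph', 'osd', 'pool', 'set', 'rbd', @$kv]);
    }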
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We want to check explicitly for type host, so filter for that first
and create a hash map for easier usage afterwards.
Drop the error when there's no tree, as either RADOS already errored
on a bad command, or there really is no tree (but RADOS worked OK), in
which case we simply return that the OSD did not belong to this node.
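A minimal sketch of that filtering (the node list layout follows the
'ceph osd tree' JSON output; names illustrative):

    sub get_host_map {
        my ($tree) = @_;

        # no tree: RADOS either already errored on a bad command, or there
        # really is none; the caller then treats the OSD as "not on this node"
        return {} if !$tree || !$tree->{nodes};

        # filter for type 'host' first and build a hash map for easier usage
        return {
            map { $_->{name} => $_ }
            grep { ($_->{type} // '') eq 'host' } @{$tree->{nodes}}
        };
    }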
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allow destroying only OSDs that belong to the node that has been specified in
the API path.
So if
- OSD 1 belongs to node A and
- OSD 2 belongs to node B
then
- pvesh delete nodes/A/ceph/osd/1 is allowed but
- pvesh delete nodes/A/ceph/osd/2 is not
Destroying an OSD via GUI automatically inserts the correct node
into the API path.
pveceph automatically inserts the local node into the API call, too.
Consequently, it can now only destroy local OSDs (fix #2053).
- pveceph osd destroy 1 is allowed on node A but
- pveceph osd destroy 2 is not
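The ownership test then boils down to something like this (a sketch;
in the crush tree, a host entry lists its OSD IDs as children):

    sub assert_osd_on_node {
        my ($host_map, $nodename, $osdid) = @_;

        my $host = $host_map->{$nodename};
        my $children = $host ? ($host->{children} // []) : [];

        die "OSD osd.$osdid does not belong to node '$nodename'!\n"
            if !grep { $_ == $osdid } @$children;
    }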
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
The guest iteration is slightly confusing, as we also handle the
accumulated pool settings there, so we only check the VM.Audit
privilege for a specific VM, and skip to the next one after that pool
handling if the permission is not there.
So, move operations which are only required when the VM privileges are
there below this check.
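The resulting loop shape, roughly (a sketch; variable names
illustrative, the pool helper is hypothetical):

    for my $vmid (sort keys %$guests) {
        my $guest = $guests->{$vmid};

        # pool accumulation must happen for every guest, permission or not
        add_pool_usage($pool_data, $guest) if $guest->{pool};   # hypothetical helper

        # everything below is only relevant if the user may audit this VM
        next if !$rpcenv->check($authuser, "/vms/$vmid", ['VM.Audit'], 1);

        # ... per-VM work that requires the privilege goes here ...
    }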
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
makes it easier to compare in API responses, and those lists are not
huge, seldom over a few thousand entries, which is peanut crumbs
compared to all the other things in this perl stack.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
so it doesn't go out of scope too early.
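A generic Perl illustration of the pitfall, not the actual code: a
lexical whose destructor releases a resource must be declared where
the resource is still needed, otherwise it is freed at the end of the
inner block (the helpers are hypothetical):

    sub broken {
        {
            my $guard = acquire_limit_slot();   # hypothetical guard object
        }   # <- guard destroyed here, so the slot is released too early
        run_backup_jobs();                      # hypothetical
    }

    sub fixed {
        my $guard = acquire_limit_slot();
        run_backup_jobs();                      # guard lives until the sub returns
    }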
Regression introduced by 5620e5761e as pointed
out by Fabian Grünbichler.
Reported in the community forum:
https://forum.proxmox.com/threads/limit-simultaneous-backup-jobs.87489
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
avoid further crowding the top-level node API path with such
"what can some part of the node currently do" stuff, rather move it
down.
The QEMU CPU stuff should also move down there.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
as 'machine-types', so it is clear this refers to QEMU machines, not the
local machine (as one might think, this being a 'node' API call).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This API call is the predecessor of /nodes/{node}/disks/list, which has seen a
few more improvements. The latter API call should be used instead, and the web
UI already does so.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
like we do for the storage section configs.
We will need this to store the token for InfluxDB's HTTP API.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We get the device list from ceph-volume lvm list, and decode the json
output, which at that point is tainted (perlsec(1)).
Untaint it here before calling, because it is currently the only
call-site using the information in a problematic way (run_command).
(the only other call-site being in pve5to6)
Alternatively, we could untaint while reading the information, but then
we should only return a small subset of the ceph-volume output.
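A sketch of the untainting pattern this refers to (the exact regex and
the command shown are illustrative, not the real ones):

    use PVE::Tools qw(run_command);

    # values decoded from external JSON are tainted (perlsec(1)); handing
    # them to run_command() under taint mode dies with "Insecure dependency".
    # A regex capture is the canonical way to untaint a value we checked:
    my ($dev) = $tainted_dev =~ m|^(/dev/[a-zA-Z0-9_/\-]+)$|
        or die "invalid device path '$tainted_dev'\n";

    run_command(['lvremove', '-y', $dev]);   # illustrative command only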
The issue is most likely due to
cb9db10c1a9855cf40ff13e81f9dd97d6a9b2698 in pve-common ('run_command:
improve performance for logging and long lines').
Tested on a virtual test setup by creating OSDs with a second DB disk
and destroying them via the GUI (did not manage to get the error
without the DB disk).
Reported via our community forum:
https://forum.proxmox.com/threads/insecure-dependency-in-exec-during-osd-destroy.79574/
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
By deleting it from $ceph_param, we deleted it also from $param, since
it was only a reference.
Fix it by extracting the value beforehand.
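Condensed, the bug and the fix look like this (the key name is just an
example):

    use PVE::Tools qw(extract_param);

    my $ceph_param = $param;   # only another reference to the same hash, NOT a copy

    # broken: this also removes the key from $param
    # delete $ceph_param->{'some-option'};

    # fixed: pull the value into its own variable beforehand, so it stays
    # available even though the key is removed from the (shared) hash
    my $some_option = extract_param($param, 'some-option');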
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
If the command itself allows it, which normally means it has good
verification of passed arguments.
We may want to re-evaluate security here if we allow execution for a
group of non-root users.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the information comes only from the key-value store in the pmxcfs, so
we do not actually require ceph to be installed.
So only check if ceph is locally initialized, and create the rados
connection after the early return for when only the versions scope is
set.
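The resulting order, roughly (a sketch; the kv key name and the exact
return shape are assumptions):

    use PVE::Ceph::Tools;
    use PVE::Cluster;
    use PVE::RADOS;

    # cheap, purely local check - no running ceph services required
    PVE::Ceph::Tools::check_ceph_inited();

    # version info is read from the pmxcfs key-value store
    if ($param->{scope} && $param->{scope} eq 'versions') {
        return { versions => PVE::Cluster::get_node_kv('ceph-version') };   # key assumed
    }

    # only the remaining scopes actually need a RADOS connection
    my $rados = PVE::RADOS->new();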
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We actually wanted to use that scheme for more new API paths, let's
see if it is really fitting, starting with this.
Use the new widget-toolkit submitUrl helper to add the ID on create.
And unify the edit/create window creation, which may have fit better in
a separate commit; it's quite small and was too cumbersome to untangle,
so just go against my own rules here... :(
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>