this was from the time when we had a loop here to add two storages,
one for KRBD-only and one for KRBD-never. Nowadays we can handle the
mixed case just fine, but the patch dropping that loop forgot to clean
up the error handling.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In Ceph Octopus the device_health_metrics pool is auto-created with 1
PG. Since Ceph has the ability to split/merge PGs, hitting the wrong PG
count is now less of an issue anyhow.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the properties target_size_ratio, target_size_bytes and pg_num_min are
used to fine-tune the pg_autoscaler and are set on a pool. The updated
pool list now shows the autoscale settings & status, including the new
(optimal) target PG count, to make it easier for new users to get/set
the correct number of PGs.
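For illustration, the three properties roughly mean the following (the
values are made up and the hash is just a sketch, not the patch's code):

    my $pool_properties = {
        target_size_ratio => 0.2, # pool expected to use ~20% of the cluster capacity
        target_size_bytes => 0,   # alternatively, an absolute expected size in bytes
        pg_num_min        => 16,  # lower bound the autoscaler may not go below
    };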
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We want to check explicitly for type host, so filter for that first
and create a hash map for easier usage afterwards.
Drop the error when there's no tree: either RADOS already errored out
on a bad command, or there really is no tree (but RADOS worked OK), in
which case we simply return that the OSD does not belong to this node.
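A minimal sketch of the idea, assuming the decoded JSON of an
'osd tree' mon command (variable names here are illustrative):

    my $nodes = $tree->{nodes} // [];
    my $host_map = {};
    $host_map->{$_->{name}} = $_ for grep { ($_->{type} // '') eq 'host' } @$nodes;
    # no tree or unknown host -> the OSD simply does not belong to this node
    return 0 if !defined($host_map->{$nodename});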
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allow destroying only OSDs that belong to the node that has been specified in
the API path.
So if
- OSD 1 belongs to node A and
- OSD 2 belongs to node B
then
- pvesh delete nodes/A/ceph/osd/1 is allowed but
- pvesh delete nodes/A/ceph/osd/2 is not
Destroying an OSD via GUI automatically inserts the correct node
into the API path.
pveceph automatically inserts the local node into the API call, too.
Consequently, it can now only destroy local OSDs (fix #2053).
- pveceph osd destroy 1 is allowed on node A but
- pveceph osd destroy 2 is not
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
The guest iteration is slightly confusing, as we also handle the
accumulated pool settings there, so we only check the VM.Audit
privilege for a specific VM, and skip to the next one if the permission
is missing, after that pool handling is done.
So, move the operations which are only required when the VM privilege
is there below this check.
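Roughly (sketch only, assuming the usual $rpcenv from
PVE::RPCEnvironment::get() and $authuser):

    foreach my $vmid (sort keys %$guests) {
        # ... accumulate the pool usage for every guest here ...
        next if !$rpcenv->check($authuser, "/vms/$vmid", ['VM.Audit'], 1);
        # ... per-VM work that actually needs the privilege goes below ...
    }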
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
makes it easier to compare in API responses, and those lists are not
huge, seldom over a few thousand entries, which is peanuts compared to
all the other things in this perl stack.
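A sketch of what this boils down to (variable and sort-key names are
illustrative):

    my $res = [ sort { $a->{name} cmp $b->{name} } values %$entries ];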
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
so it doesn't go out of scope too early.
Regression introduced by 5620e5761e as pointed
out by Fabian Grünbichler.
Reported in the community forum:
https://forum.proxmox.com/threads/limit-simultaneous-backup-jobs.87489
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
avoid further crowding the top-level node API path with such
"what can some part of the node currently do" stuff; rather, move it
down.
The QEMU CPU stuff should also move down there.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
as 'machine-types', so it is clear this refers to QEMU machines, not the
local machine (as one might think, this being a 'node' API call).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This API call is the predecessor of /nodes/{node}/disks/list, which has seen a
few more improvements. The latter API call should be used instead, and the web
UI already does so.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
like we do for the storage section configs.
We will need this to store the token for InfluxDB's HTTP API.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We get the device list from ceph-volume lvm list and decode the JSON
output, which at that point is tainted (see perlsec(1)).
Untaint it here before calling run_command, because this is currently
the only call-site using the information in a problematic way
(the only other call-site being in pve5to6).
Alternatively we could untaint while reading the information, but then
we should only return a small subset of the ceph-volume output.
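The untainting itself is the usual perlsec pattern of validating the
value with a regex and only using the capture; a generic sketch (the
pattern and variable name are illustrative, not the exact ones used):

    if ($device_path =~ m|^(/dev/[a-zA-Z0-9_/\-]+)$|) {
        $device_path = $1; # the capture is untainted
    } else {
        die "invalid device path '$device_path'\n";
    }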
The issue is most likely due to
cb9db10c1a9855cf40ff13e81f9dd97d6a9b2698 in pve-common ('run_command:
improve performance for logging and long lines').
Tested on a virtual test setup by creating OSDs with a second DB disk
and destroying them via the GUI (did not manage to trigger the error
without the DB disk).
Reported via our community forum:
https://forum.proxmox.com/threads/insecure-dependency-in-exec-during-osd-destroy.79574/
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
by deleting it from $ceph_param we also deleted it from $param, since
it was only a reference.
Fix it by extracting it beforehand.
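Illustrative sketch of the pitfall and the fix (variable and key names
are placeholders, not necessarily the ones from this patch):

    # before: $ceph_param was just another reference to the same hash,
    # so deleting a key also removed it from $param
    #   my $ceph_param = $param;
    #   delete $ceph_param->{add_storages};
    #   ...
    #   if ($param->{add_storages}) { ... } # never true anymore
    # after: take the value out of $param up front and use the local copy
    my $add_storages = PVE::Tools::extract_param($param, 'add_storages');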
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
If the command itself allows it, which normally means it has good
verification of passed arguments.
We may want to re-evaluate security here if we allow execution for a
group of non-root users.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the information comes only from the key-value store in the pmxcfs, so
we do not actually require Ceph to be installed.
So, only check whether Ceph is locally initialized, and create the
rados connection, after the early return for the case where just the
'versions' scope is set.
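Sketch of the resulting flow (assuming the versions come from the node
key-value store via PVE::Cluster; key name and return shape are
approximate):

    my $versions = PVE::Cluster::get_node_kv('ceph-versions');
    return { versions => $versions } if $param->{scope} eq 'versions';

    # only the full scope needs a working local Ceph setup
    PVE::Ceph::Tools::check_ceph_inited();
    my $rados = PVE::RADOS->new();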
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We actually wanted to use that scheme for more new API paths; let's
see if it is really fitting, starting with this.
Use the new widget-toolkit submitUrl helper to add the ID on create.
Also unify the edit/create window creation, which may have fit better
in a separate commit, but it's quite small and was too cumbersome to
untangle, so just go against my own rules here... :(
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
after creation, so that users don't need to go the ceph tooling route.
Separate common pool options to reuse them in other places.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
to reduce code duplication and make it easier to add more options for
pool commands.
Use a new rados object for each 'osd pool set', as each command can
set an option independently of the previous command's success or
failure. On failure, a new rados object would need to be created, and
that would confuse the task tracking of the REST environment.
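Roughly, the helper loops over the options like this (sketch, not the
literal patch code):

    foreach my $setting (sort keys %$opts) {
        my $rados = PVE::RADOS->new(); # fresh connection for every command
        eval {
            $rados->mon_command({
                prefix => 'osd pool set',
                pool   => $pool,
                var    => $setting,
                val    => "$opts->{$setting}",
            });
        };
        warn "failed to set pool option '$setting': $@" if $@;
    }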
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Like this, there will be a backup task (within the big worker task)
for such IDs, which will then fail visibly (i.e. also visible in the
notification mail) with, e.g.:
unable to find VM '123'
In get_included_guests, the key '' was chosen for the orphaned IDs,
because it cannot possibly denote a nodename.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and make the two options mutually exclusive as long
as they are specified on the same level (e.g. both
from the storage configuration). Otherwise, prefer
option > storage config > default (only maxfiles has a default currently).
Defines the backup limit for prune-backups as the sum of all
keep-values.
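That is, a sketch of the limit, assuming the usual prune-backups
keep-option names:

    my $limit = 0;
    $limit += $prune_backups->{$_} // 0 for qw(
        keep-last keep-hourly keep-daily keep-weekly keep-monthly keep-yearly
    );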
There is no perfect way to determine whether a
new backup would trigger a removal with prune later:
1. we would need a way to include the not yet existing backup
in a 'prune --dry-run' check.
2. even if we had that check, if it's executed right before
a full hour, and the actual backup happens after the full
hour, the information from the check is not correct.
So, in some cases we allow backup jobs with remove=0 which will lead
to a removal when the next prune is executed.
Still, the job with remove=0 does not execute a prune, so:
1. There is a well-defined limit.
2. A job with remove=0 never removes an old backup.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and regular users to read all their own tasks as well as those of their
associated tokens.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>