VM names are returned by the endpoint anyway, so it makes sense
to add them to the endpoint specification so that they also appear in
the API docs and are visible when using pvesh with text output.
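Roughly, the schema addition could look like this (property details here
are a sketch, not the exact patch):

    # sketch only: expose the guest name in the endpoint's return schema so
    # it shows up in the API docs and in pvesh text output
    returns => {
        type => 'array',
        items => {
            type => 'object',
            properties => {
                vmid => { type => 'integer' },
                name => {
                    type => 'string',
                    optional => 1,
                    description => "The guest's configured name, if any.",
                },
                # ... other existing properties stay unchanged
            },
        },
    },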
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The guest iteration is slightly confusing as we also handle the
accumulated pool settings there, so we only check the VM.Audit
privileges for a specific VM and skip to the next one if they are
missing, after the pool handling is done.
So, move the operations which are only required when the VM privileges
are present below this check.
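Schematically (the helper names here are made up for illustration, only
the ordering matters; $rpcenv/$authuser come from PVE::RPCEnvironment as
usual):

    foreach my $vmid (sort keys %$idlist) {
        # pool accounting has to happen for every guest, even without VM.Audit
        account_pool_usage($pools, $vmid);    # hypothetical helper

        next if !$rpcenv->check($authuser, "/vms/$vmid", ['VM.Audit'], 1);

        # everything below is only needed if the user may actually see the guest
        push @$res, build_guest_entry($vmid); # hypothetical helper
    }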
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
makes it easier to compare in API responses, and those lists are not
huge, seldom over a few thousand entries, which is peanuts compared to
all the other things in this perl stack.
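E.g. something like (the actual sort key is an assumption):

    # sketch: return the list in a stable order so responses are comparable
    my @sorted = sort { $a->{vmid} <=> $b->{vmid} } @$list;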
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We actually wanted to use that scheme for more new API paths, let's
see if it is really fitting, starting with this.
Use the new widget-toolkit submitUrl helper to add the ID on create.
Also unify the edit/create window creation; that may fit better in a
separate commit, but it's quite small and was too cumbersome to
untangle, so just go against my own rules here... :(
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds a new API endpoint at cluster/backupinfo for cluster-wide backup
information. This is necessary because cluster/backup expects a backup
job ID at the next level, and thus other endpoints are hard or
impossible to implement under that hierarchy.
The only API endpoint available for now is `not_backed_up`, which
returns a list of all guests that are not covered by any backup job.
The top-level index endpoint is left unused for now, to be available
for a more generic summary endpoint in the future.
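A rough sketch of the endpoint registration (path spelling, permission
check and return properties are assumptions, not the exact patch):

    __PACKAGE__->register_method({
        name => 'not_backed_up',
        path => 'not_backed_up',
        method => 'GET',
        description => "Shows all guests which are not covered by any backup job.",
        permissions => { check => ['perm', '/', ['Sys.Audit']] },
        parameters => { additionalProperties => 0, properties => {} },
        returns => {
            type => 'array',
            items => {
                type => 'object',
                properties => {
                    vmid => { type => 'integer' },
                    name => { type => 'string', optional => 1 },
                    type => { type => 'string' },
                },
            },
        },
        code => sub {
            # compare the full guest list against the guests selected by all
            # configured vzdump jobs and return the difference
            return []; # sketch only
        },
    });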
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
add checks, encoding of loaded data files, update API path, proper inclusion into API tree
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
With this configuration it is possible to use many different plugins
with different providers and users.
Signed-off-by: Wolfgang Link <w.link@proxmox.com>
using the new get_guest_config_property helper from pve-cluster,
which allows us to get this info with relatively low overhead.
With a somewhat realistic setup of 303 guest configurations here, my
API call timing changes from roughly 24 to 26 ms without this patch to
26 to 28 ms with it applied, which seems reasonable.
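Hedged sketch of the usage; the exact return shape of the helper is an
assumption here, the point is a single cheap pmxcfs call instead of
parsing every guest config:

    my $props = PVE::Cluster::get_guest_config_property('name');
    my $name = $props->{$vmid} ? $props->{$vmid}->{name} : undef;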
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
ceph luminous does not use the 'name' property in the metadata
everywhere, so fall back to 'id'.
This makes the ceph dashboard usable while still running luminous
(relevant for upgrading).
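I.e. roughly (variable name is illustrative):

    # luminous metadata may lack 'name', so fall back to 'id'
    my $name = $service_metadata->{name} // $service_metadata->{id};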
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
and use the broadcast when a service is added/removed
we will use 'get_cluster_service' in the future when we generate a list
of services of a specific type
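Schematically something like (key name and payload format are
assumptions):

    # push the local ceph service list into the cluster-wide key/value store
    # so every node sees it; encode_json comes from the JSON module
    PVE::Cluster::broadcast_node_kv('ceph-services', encode_json($services));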
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
add two new api calls in /cluster/ceph
status:
the same as /nodes/NODE/ceph/status, but accessible without a
nodename, which we do not need here: in the hyperconverged case all
nodes have the ceph.conf which contains the info on how to connect to
the monitors
metadata:
combines data from the cluster filesystem about the services,
as well as the 'ceph YYY metadata' info we get from ceph.
with this info we can conveniently display which services exist,
which are running and which versions they have
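Roughly, the metadata call boils down to something like this (the two
helpers and the sub name are placeholders, not the actual functions):

    sub cluster_ceph_metadata {
        my $rados = PVE::RADOS->new();
        my $res = {};
        foreach my $type (qw(mon mgr mds osd)) {
            # what pmxcfs knows about configured services of this type
            $res->{$type} = get_services_from_pmxcfs($type);            # placeholder
            # what ceph itself reports, e.g. 'mon metadata', 'osd metadata'
            my $ceph_meta = $rados->mon_command({ prefix => "$type metadata" });
            merge_running_and_version_info($res->{$type}, $ceph_meta);  # placeholder
        }
        return $res;
    }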
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
otherwise this potentially returns outdated information (like the
cluster being quorate when corosync has crashed on all nodes 5 minutes
ago).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
It makes sense to not give users without Sys.Audit permissions too
much information about a node, and this is relatively easy and cheap to
check and enforce at those two points.
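Schematically, the check at each of those call sites could look about
like this (sketch):

    my $rpcenv = PVE::RPCEnvironment::get();
    my $authuser = $rpcenv->get_user();

    # only hand out the detailed data if the user may audit the node
    if ($rpcenv->check($authuser, "/nodes/$node", ['Sys.Audit'], 1)) {
        # ... include the privileged parts of the node information
    }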
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Tested-by: Oguz Bektas <o.bektas@proxmox.com>
over from the time when corosync was still based on XML configs
(pre PVE 4.0). This was not used, and XML::Parser is not Exporter
based, so it does not push any methods into the using module's
namespace.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
openvz is deprecated but can still be a return value
maxcpu can be a real number (e.g., for CT if cpulimit is 1.5 and
cores is not set), and may not be an integer
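Sketch of the adjusted schema bits (descriptions are illustrative, not
the exact patch):

    type => {
        type => 'string',
        description => "Resource type; 'openvz' is deprecated but can still be returned.",
    },
    maxcpu => {
        # a CT with cpulimit 1.5 and no 'cores' set yields maxcpu = 1.5,
        # so this must be a 'number' rather than an 'integer'
        type => 'number',
        optional => 1,
    },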
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
for registering, updating, refreshing and deactivating a PVE-managed ACME
account, as well as for retrieving the (optional, but required if
available) terms of service of the ACME API provider / CA.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
For ignored or still queued services, we have no hastate available
in the manager status.
As we use hastate in the web UI to determine if a service is
configured for HA this could lead to confusion there.
For example, the VM/CT 'Manage HA' window tries to add the
service again if it is in the 'ignored' state, and the backend then
errors out because it is already configured.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
extjs cannot "convert" an id from other fields, so the ids in the
diffstore and the realstore are different and we re-add every element on
every update
to mitigate this, we generate the id (which is "uid:hostname") in the
backend, and simply use it in the frontend
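Sketch of the backend side (field names are assumptions based on the
text above):

    # build the row id server-side so the ExtJS diff store and the real
    # store agree on it
    foreach my $entry (@$data) {
        # 'uid:hostname', with 'node' holding the hostname (field name assumed)
        $entry->{id} = "$entry->{uid}:$entry->{node}";
    }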
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this adds a hastate field to all vms/ct which have ha enabled
we will use this for showing the error state in the tree (in the webgui)
and for the cluster dashboard (to count the error state guests)
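Schematically (the HA status lookup is a placeholder):

    # only guests that actually have an HA resource configured get a hastate
    my $hastatus = read_ha_manager_status();    # placeholder helper
    foreach my $entry (@$resources) {
        next if $entry->{type} ne 'qemu' && $entry->{type} ne 'lxc';
        my $sid = "vm:$entry->{vmid}";          # 'ct:<vmid>' for containers, simplified here
        $entry->{hastate} = $hastatus->{$sid} if defined($hastatus->{$sid});
    }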
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
With the new calculation $pe->{maxcpu} was used before being
initialized to zero. Moving the initialization up.
Additionally, setting $pe->{cpu} to $entry->{cpu} if maxcpu
is not set seems pointless, as with its factor (maxcpu)
initialized to zero it is cancelled out anyway.
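I.e., something along the lines of (sketch):

    # make sure the pool accumulators are zero before the first guest is added
    $pe->{cpu} = 0 if !defined($pe->{cpu});
    $pe->{maxcpu} = 0 if !defined($pe->{maxcpu});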
we only added up the % of the vms in a pool,
which led to wrong results.
e.g. having a pool with 3 vms with 4 cores each and a
cpu usage of 50% each (2 cores at 100%)
led to:
vm1  50%
vm2  50%
vm3  50%
pool 150%
instead we now calculate the percentage for the whole pool
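Sketch of the corrected, core-count-weighted accumulation (variable
names follow the text above, the zero guard is an addition here):

    my $new_maxcpu = $pe->{maxcpu} + $entry->{maxcpu};
    if ($new_maxcpu > 0) {
        # weight each guest's CPU fraction by its core count, so the pool
        # value stays a fraction of the pool's total cores
        $pe->{cpu} = ($pe->{cpu} * $pe->{maxcpu} + $entry->{cpu} * $entry->{maxcpu}) / $new_maxcpu;
    }
    $pe->{maxcpu} = $new_maxcpu;
    # with the example above: (0.5*4 + 0.5*4 + 0.5*4) / 12 = 0.5, i.e. pool 50%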
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>