if the hook script command is passed as a string, it might get
interpreted as a shell command with side-effects. this is pretty
harmless, since only root is allowed to set the script parameter
anyway, but making it more robust and future-proof does not hurt.
tested with a reproducer of "/bin/echo $(touch $(whoami))" as script
parameter, with a file with that name existing, being executable and
having the following contents:
----8<----
echo "hello from hook script"
---->8----
without this change, the hookscript itself is not executed, but
'/bin/sh -c "/bin/echo $(touch $(whoami)) job"' and similar calls are,
which cause the file 'root' to be touched in the current working
directory of the vzdump process (or task worker).
with this change, the file is executed as is without any side-effects of
shell commands in the file name, and the 'hello from hook script' lines
are printed whenever the hook script is called by vzdump.
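the underlying mechanism can be illustrated with plain Perl system() -
a minimal sketch, not the actual vzdump code, which goes through
PVE::Tools::run_command (that helper likewise accepts a command as an
array reference to bypass the shell):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $script = '/bin/echo $(touch $(whoami))';

    # string form: Perl hands the whole string to '/bin/sh -c', so the
    # embedded $(...) substitutions run as shell commands (side-effects!)
    system("$script job");

    # list form: the first element is executed directly via execvp(3),
    # with the remaining elements as literal arguments - no shell is
    # involved, so a file named '/bin/echo $(touch $(whoami))' is
    # executed as-is
    system($script, 'job');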
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Also rename disks to blockdevices, as especially with the iSCSI stuff
the listed devices are not necessarily real disks.
Grouping current(ish) load info in its own system-load section seems
to be a better fit in general too.
bios/pci stuff is all hardware, so group it there to avoid adding too
many sections.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Makes all report-def entries hashes, which allows dropping some
special handling for the ARRAY-ref case.
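a rough sketch of the resulting shape (section names and commands are
purely illustrative, not the actual definition):

    # every section is now a hash with the same structure, so consumers
    # no longer need a "ref($entry) eq 'ARRAY'" branch
    my $report_def = {
        general => {
            title => 'general system info',
            cmds  => [ 'hostname', 'pveversion --verbose' ],
        },
        'system-load' => {
            title => 'system load',
            cmds  => [ 'uptime', 'top -b -n 1 | head -n 15' ],
        },
    };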
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
And move the helper methods up and scope them to module-local only.
Uses the fact that Perl subs return the value of their last statement,
so the dir2text sub closures in the command list do not need to be
changed.
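for illustration, a minimal standalone example of that implicit-return
behavior (the helper body here is hypothetical):

    use strict;
    use warnings;

    # the last evaluated expression is the sub's return value, so no
    # explicit 'return' statement is needed in the closure
    my $dir2text = sub {
        my ($dir) = @_;
        join('', map { "$_\n" } glob("$dir/*")); # implicit return
    };

    print $dir2text->('/etc/pve');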
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
PSI (pressure stall information) can be queried in
/proc/pressure/{cpu,io,memory} for the corresponding resources. This
helps us track down disruptions caused by resource overcommitment.
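a minimal sketch of reading those files directly (each line reports
avg10/avg60/avg300 percentages plus a cumulative total in
microseconds):

    use strict;
    use warnings;

    # print the 'some' (and, for io/memory, 'full') pressure lines, e.g.:
    #   cpu: some avg10=0.00 avg60=0.12 avg300=0.15 total=1234567
    for my $resource (qw(cpu io memory)) {
        open(my $fh, '<', "/proc/pressure/$resource") or next;
        print "$resource: $_" while <$fh>;
        close($fh);
    }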
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
This patch fixes a regression for hosts disabling IPv6 via the kernel
command line ('ipv6.disable=1'), introduced in commit
e224b7d2e6
by hardcoding the listening address to '::' (disabling IPv6 via sysctl
did not exhibit these problems). On such hosts, pveproxy and
spiceproxy failed to start with:
'unable to create socket - Address family not supported by protocol'
This patch depends on the commit in pve-common which first tries
binding to '::' and then falls back to '0.0.0.0', and needs a
versioned dependency bump on libpve-common-perl.
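the fallback idea can be sketched in isolation with IO::Socket::IP
(the real change lives in pve-common's socket setup, not in this
standalone form):

    use strict;
    use warnings;
    use IO::Socket::IP;

    # try the IPv6 wildcard first; if the address family is unsupported
    # (e.g. 'ipv6.disable=1' on the kernel command line), fall back to IPv4
    my $socket = IO::Socket::IP->new(
        LocalHost => '::', LocalPort => 8006, Listen => 1, ReuseAddr => 1,
    ) // IO::Socket::IP->new(
        LocalHost => '0.0.0.0', LocalPort => 8006, Listen => 1, ReuseAddr => 1,
    ) or die "unable to create socket - $@\n";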
With this patch the listening addresses are (`ss -tlnp | grep 8006` output):
* ipv6 disabled via kernel cmdline: '0.0.0.0:8006'
* sysctl net.ipv6.conf.all.disable_ipv6=1: '*:8006'
* sysctl net.ipv6.bindv6only=1: '[::]:8006'
* else: '*:8006'
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
on a given node (and storage).
There is no datacenter/storage fallback for the bandwidth limit, so the default
can just be returned as is. While the bandwidth limit is a root-only option when
executing the backup, it still makes sense to return it for all users, so they
can see what's going to be used.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
To make them load the updated librados2, as otherwise they may not be
able to communicate with potentially newer Ceph monitors, as Debian 10
ships Luminous (12.2) by default...
While we could do some more fancy signaling to the workers to reload
the lib, that is rather a PITA and a complex solution for something
that happens once in a blue moon.
We may want to add a trigger in ceph for this on updates though, that
would effectively fix this too - but needs to be thought out better.
So for now let's go with the simplest solution.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
nautilus 14.2.20 and octopus 15.2.11 fixed a security issue with
reclaiming the global ID auth (CVE-2021-20288). As fixing this issue
means that older clients won't be able to connect anymore, the fix was
put behind a switch, with a HEALTH warning if it was not active (i.e.,
if connections from older clients were still allowed).
New installations also have this switch at the insecure level, for
compat reasons, so let's deactivate it ourselves after monitor
creation to avoid the health warning and the slightly insecure setup
(in a default PVE ceph setup the whole issue was of rather low
impact/risk). But, only do so when creating the first monitor of a
ceph cluster, to avoid breaking existing setups by accident.
An admin can always switch it back again, e.g., if they're recovering
from some failure and need to set up fresh monitors but still have old
clients.
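a sketch of what that deactivation amounts to, using the upstream
config option name; whether to run it is decided by the
monitor-creation code, shown here as a plain flag:

    use strict;
    use warnings;
    use PVE::Tools;

    my $is_first_monitor = 1; # assumption: determined by the caller

    # disallow insecure global ID reclaim, clearing the HEALTH warning;
    # only do this when bootstrapping a new cluster's first monitor
    if ($is_first_monitor) {
        PVE::Tools::run_command([
            'ceph', 'config', 'set', 'mon',
            'auth_allow_insecure_global_id_reclaim', 'false',
        ]);
    }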
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The $host variable is set to "::0" by default to listen on the
wildcard address (with 'Domain' => PF_INET6).
If 'LISTEN_IP' is defined in /etc/default/pveproxy, that IP will be used
instead.
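a minimal sketch of such an override lookup (the actual pveproxy
parsing may differ in detail):

    use strict;
    use warnings;

    # fall back to the IPv6 wildcard unless /etc/default/pveproxy
    # defines a line like: LISTEN_IP="192.0.2.1"
    my $host = '::0';
    if (open(my $fh, '<', '/etc/default/pveproxy')) {
        while (my $line = <$fh>) {
            $host = $1 if $line =~ /^\s*LISTEN_IP\s*=\s*"?([^"\s]+)"?\s*$/;
        }
        close($fh);
    }
    print "listening on $host\n";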
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Currently the check for used ports for bonds and bridges happens
while rendering '/etc/network/interfaces.new' in PVE::Inotify
(pve-common).
However at that stage the new/updated interface is already merged
with the old settings, making it impossible to indicate where a NIC
is currently used.
The code is adapted from the renderer in
PVE::Inotify::__write_etc_network_interfaces.
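a hypothetical sketch of such a usage map (key names mirror the parsed
interface config, but the details differ from the real renderer):

    use strict;
    use warnings;

    # record, for every bridge port and bond slave, which interface
    # already uses it, so an error message can name the current user
    sub map_used_ports {
        my ($ifaces) = @_;
        my $used = {};
        for my $name (sort keys %$ifaces) {
            my $d = $ifaces->{$name};
            for my $port (split(/\s+/, $d->{bridge_ports} // '')) {
                $used->{$port} = $name;
            }
            for my $slave (split(/\s+/, $d->{slaves} // '')) {
                $used->{$slave} = $name;
            }
        }
        return $used;
    }

    my $used = map_used_ports({
        vmbr0 => { bridge_ports => 'eno1' },
        bond0 => { slaves => 'eno2 eno3' },
    });
    die "eno1 already in use by $used->{eno1}\n" if $used->{eno1};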
Tested on a virtual PVE instance.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
On file upload, the check for CSRF tokens was already skipped when
performing user authentication. This now also happens for API tokens.
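the rationale can be sketched as a simple predicate (names are
hypothetical, not the actual pve-manager code):

    use strict;
    use warnings;

    # API tokens are sent explicitly with each request, so they are not
    # exposed to cross-site request forgery the way session cookies are
    sub need_csrf_check {
        my ($auth) = @_;
        return $auth->{api_token} ? 0 : 1;
    }

    print need_csrf_check({ api_token => 'root@pam!mytoken' })
        ? "check CSRF token\n" : "skip CSRF check\n";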
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this was from the time where we had a loop here to add two storages,
one for KRBD-only and one for KRBD-never. Nowadays we can handle the
mixed case just fine, but the patch dropping that forgot to clean up
the error handling..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since Ceph Nautilus 14.2.10 and Octopus 15.2.2, the min_size of a pool
is calculated from its size (min_size = round(size / 2)); for example,
a size of 3 yields a min_size of 2. When size is applied to the pool
after min_size, the manually specified min_size will be overwritten.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
In Ceph Octopus the device_health_metrics pool is auto-created with 1
PG. Since Ceph has the ability to split/merge PGs, hitting the wrong PG
count is now less of an issue anyhow.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the properties target_size_ratio, target_size_bytes and pg_num_min are
used to fine-tune the pg_autoscaler and are set on a pool. The updated
pool list now shows the autoscale settings & status, including the new
(optimal) target PGs, to make it easier for new users to get/set the
correct amount of PGs.
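for reference, these knobs map to plain pool properties on the Ceph
side, e.g. (pool name and values are just examples):

    use strict;
    use warnings;
    use PVE::Tools;

    # hint the autoscaler that this pool will hold ~20% of the
    # cluster's data, and never let it scale below 32 PGs
    for my $opt (['target_size_ratio', '0.2'], ['pg_num_min', '32']) {
        PVE::Tools::run_command(['ceph', 'osd', 'pool', 'set', 'mypool', @$opt]);
    }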
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We want to check explicitly for type host, so filter for that first
and create a hash map for easier usage afterwards.
Drop the error when there's no tree, as either RADOS error'd on bad
command already, or there really is no tree (but RADOS worked OK), in
which case we simply return that the OSD did not belong to this node.
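the filtering itself is straightforward; a standalone sketch with
mocked tree data (the real code gets the tree from a RADOS 'osd tree'
call):

    use strict;
    use warnings;

    my $tree = { nodes => [ # mocked 'ceph osd tree' style data
        { type => 'root', name => 'default' },
        { type => 'host', name => 'nodeA', children => [0, 1] },
        { type => 'host', name => 'nodeB', children => [2] },
    ]};

    # keep only 'host' entries, indexed by name for easy lookup
    my $host_map = {};
    for my $node (@{$tree->{nodes} // []}) {
        next if ($node->{type} // '') ne 'host';
        $host_map->{$node->{name}} = $node;
    }

    my ($node, $osdid) = ('nodeA', 2);
    my $belongs = grep { $_ == $osdid } @{$host_map->{$node}{children} // []};
    print "OSD $osdid does", ($belongs ? '' : ' not'), " belong to $node\n";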
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allow destroying only OSDs that belong to the node that has been specified in
the API path.
So if
- OSD 1 belongs to node A and
- OSD 2 belongs to node B
then
- pvesh delete nodes/A/ceph/osd/1 is allowed but
- pvesh delete nodes/A/ceph/osd/2 is not
Destroying an OSD via GUI automatically inserts the correct node
into the API path.
pveceph automatically inserts the local node into the API call, too.
Consequently, it can now only destroy local OSDs (fixes #2053).
- pveceph osd destroy 1 is allowed on node A but
- pveceph osd destroy 2 is not
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
These 2 files can be helpful for issues with multipath. The multipath -v3
output is too large most of the time and not required for analyzing and
solving the issues.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
The guest iteration is slightly confusing, as we also handle the
accumulated pool settings there: we only check the VM.Audit privs.
for a specific VM and skip to the next one if the permission is
missing, but only after that pool handling.
So, move operations which are only required when the VM privs. are
present below this check.
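a sketch of the resulting loop shape ($vmlist is assumed to come from
the surrounding code; the real iteration carries more state):

    use strict;
    use warnings;
    use PVE::RPCEnvironment;

    my $rpcenv = PVE::RPCEnvironment::get();
    my $authuser = $rpcenv->get_user();
    my $vmlist = {}; # assumption: guest list from the cluster resources

    for my $vmid (sort keys %$vmlist) {
        # ...pool accounting here, it must happen for every guest...
        next if !$rpcenv->check($authuser, "/vms/$vmid", ['VM.Audit'], 1);
        # ...per-VM work only below the privilege check...
    }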
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
makes it easier to compare in API responses, and those lists are not
huge, seldom over a few thousand entries, which is peanuts compared to
everything else in this Perl stack.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
so it doesn't go out of scope too early.
Regression introduced by 5620e5761e, as pointed
out by Fabian Grünbichler.
Reported in the community forum:
https://forum.proxmox.com/threads/limit-simultaneous-backup-jobs.87489
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Otherwise storage_info() cannot be used for (at least) PBS storages from an API
call without 'protected => 1', because the password cannot be read from
'/etc/pve/priv'. Note that the function itself does not need the storage to be
active, because it only uses storage_config() and get_backup_dir().
AFAICT new() is the only existing user of this function and can be responsible
for activating the storage itself.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
we set the api prefix by default to '/', so we always triggered
the replacement and added '///', which is wrong and does not
work for the 'health' api path
(influxdb returns 404 for 'https://ip:port///health')
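a sketch of the normalization idea (helper name hypothetical; the
actual fix in the status plugin may be structured differently):

    use strict;
    use warnings;

    # normalize the configured prefix so a default of '/' does not
    # produce URLs like 'https://ip:port///health'
    sub build_url {
        my ($base, $prefix, $path) = @_;
        $prefix =~ s|^/+||; # drop leading slashes
        $prefix =~ s|/+$||; # drop trailing slashes
        return $prefix ne '' ? "$base/$prefix/$path" : "$base/$path";
    }

    print build_url('https://localhost:8086', '/', 'health'), "\n";
    # -> https://localhost:8086/health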
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the forwards compatible api of 1.8 only contains this path
(not api/v2/health) and it is also contained in the v2 api
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
I normally use a reverse proxy in front of my influxdb instances,
proxying everything from the /influx/ path to the influxdb listening
only locally. So here I'd need to set "influx" as api-path-prefix.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Not a hard error; some network box (proxy) down the line could add it
for us, or it could just not be required, so ...
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>