added:
* HA status
* ceph osd df tree
* ceph conf file and conf db
* ceph versions
removed:
* ceph status, as 'pveceph status' now prints the same information
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Errors from storage_info() are newline-terminated, because Perl would
otherwise append the line number. Chomp those errors, because sendmail()
relies on the presence of a newline to decide whether there are multiple
problems or only one.
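Roughly what that looks like (a minimal sketch, not the actual vzdump code;
variable names are made up):

    my $err = "storage 'foo' is not online\n"; # trailing "\n" keeps Perl from appending "at FILE line N."
    chomp $err;                                # strip it again before collecting
    push @problems, $err;                      # sendmail() later uses newlines to tell one problem from many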
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Fixes the case where reading from /etc/vzdump.conf fails.
Also convert the options read from /etc/vzdump.conf before the loop. That
avoids showing an incorrect warning when 'prune-backups' is configured in
/etc/vzdump.conf but 'maxfiles' isn't. Previously, because 'maxfiles' from the
schema defaults was automatically set, the call to parse_prune_backups_maxfiles
after the loop warned that both options are defined.
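Roughly, the intended ordering (the helper names here are made up for
illustration, not the actual code):

    # convert the vzdump.conf options before merging in the schema defaults, so
    # the converter never sees a defaulted 'maxfiles' next to a configured
    # 'prune-backups'
    my $conf_opts = read_vzdump_conf('/etc/vzdump.conf');   # hypothetical helper
    convert_maxfiles_to_prune_backups($conf_opts);          # hypothetical helper
    for my $key (keys %$defaults) {
        $conf_opts->{$key} //= $defaults->{$key};           # only now fill in schema defaults
    }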
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Printing a lot of very detailed JSON output on the CLI is not very useful.
Printing the `ceph -s` overview is much better suited to give an overview of
the Ceph cluster status.
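Conceptually (a minimal sketch, not the actual pveceph code):

    # print ceph's own human-readable summary instead of dumping the raw JSON
    PVE::Tools::run_command(['ceph', '-s']);
    # rather than something like: print to_json($status, { pretty => 1 });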
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
The phrasing left some room for speculation about when this would actually
be run, e.g. after cloning a full VM?
Currently the only instances where it is used are after moving a disk or
migrating a VM.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Since I had to look up the cause of the error message in our source, make
the message itself explain why exactly Ceph operations fail when
/etc/ceph/ceph.conf already exists.
reported via our community forum:
https://forum.proxmox.com/threads/osd-ersetzen-neu-erstellen.80793/
quickly tested on a virtual ceph cluster
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
so that if one disables the plugin (e.g. because it is offline),
it will work even when the server is not reachable
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
and prefer storage, because the storage configuration might contain more
settings. Warning is preferable to dying, because all backups would be
affected (even if they don't use the vzdump.conf parameters) and the settings
could have been compatible (i.e. dumpdir being the storage's dump dir).
Previously, one of the two options was chosen at random in the loop in new(),
because of Perl's hash iteration order.
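As a rough sketch of the now deterministic precedence (names illustrative,
not the exact implementation):

    if (defined($opts->{storage}) && defined($opts->{dumpdir})) {
        warn "both 'storage' and 'dumpdir' defined in /etc/vzdump.conf - ignoring 'dumpdir'\n";
        delete $opts->{dumpdir}; # prefer the storage, its configuration may carry more settings
    }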
Reported here: https://forum.proxmox.com/threads/vzdump-times-out-very-often-on-zfs-storage-pool.80035/post-354277
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Since pve-container commit
c48a25452dccca37b3915e49b7618f6880aeafb1
the code to get the cpuset controller path lives in pve-common's PVE::CGroup.
Use that and improve the logging in case some error happens in the future.
Such an error will only be logged once per pvestatd run,
so it does not spam the log.
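Schematically (the PVE::CGroup helper name and the surrounding code are
assumptions, not the exact pvestatd implementation):

    use PVE::CGroup;

    my $cpuset_warned; # reset at the start of each pvestatd update cycle (not shown)
    my $base = eval { PVE::CGroup::cpuset_controller_path() }; # assumed helper name
    if (my $err = $@) {
        warn "could not get cpuset controller path: $err" if !$cpuset_warned++;
    }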
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of relying on the contentTypeField (which does not need to exist,
e.g. for iscsi), explicitly write it into the panel/icon mapping and check
that.
It would be better to query the backend about storage capabilities, but such
an API call does not exist yet, so this should be OK for now.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
To get a more complete picture, instead of mocking storage_config,
PVE::Cluster's get_config is mocked. This ensures that the prune-backups
validation and the maxfiles conversion are also called.
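For example, roughly along these lines (Test::MockModule based; the mocked
config content and exact details are illustrative):

    use Test::MockModule;

    # mock the cluster filesystem access instead of storage_config() itself,
    # so the real storage.cfg parsing (and its validation) still runs
    my $cluster_module = Test::MockModule->new('PVE::Cluster');
    $cluster_module->mock(get_config => sub {
        my ($path) = @_;
        return "dir: local\n\tpath /var/lib/vz\n\tcontent backup\n\tprune-backups keep-last=2\n"
            if $path eq 'storage.cfg';
        return undef;
    });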
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
so that we can sunset the usbscan from pve-storage's scan API, which never
fit well there anyway.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
findRecord does not match exactly, but only at the beginning and
case-insensitively, by default. Change all calls to be case sensitive and an
exact match (we never want the default behaviour, afaics).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by default, findRecord only anchors at the beginning, e.g.
findRecord('id', 'foo');
could find an entry with id 'foo', 'foobar', 'foobaz', etc.
Extend the call to set 'exactMatch' to true; otherwise we sometimes used the
content types of a storage such as 'local-lvm' for showing the panels for
'local'.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
it is not that short in other languages, e.g. German
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We get the device list from 'ceph-volume lvm list' and decode the JSON
output, which at that point is tainted (see perlsec(1)).
Untaint it here before use, because this is currently the only call-site
using the information in a problematic way (run_command); the only other
call-site is in pve5to6.
Alternatively we could untaint while reading the information, but then we
should only return a small subset of the ceph-volume output.
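The untainting itself boils down to a pattern match, along these lines (the
pattern and the ceph-volume invocation are illustrative; the real code may be
stricter):

    # $device comes from the decoded 'ceph-volume lvm list' JSON and is tainted
    my ($untainted_device) = $device =~ m|^(/dev/[a-zA-Z0-9/_.-]+)$|
        or die "invalid device path '$device'\n";
    PVE::Tools::run_command(['ceph-volume', 'lvm', 'zap', $untainted_device, '--destroy']);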
The issue is most likely due to
cb9db10c1a9855cf40ff13e81f9dd97d6a9b2698 in pve-common ('run_command:
improve performance for logging and long lines').
Tested on a virtual test setup by creating OSDs with a second DB disk and
destroying them via the GUI (did not manage to get the error without the
DB disk).
Reported via our community forum:
https://forum.proxmox.com/threads/insecure-dependency-in-exec-during-osd-destroy.79574/
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>