needs an organization/bucket (previously db) and an optional token
the http client does not fit exactly into the connect/send/disconnect
scheme, so 'connect' simply creates the request, 'send' does the actual
http connection, and 'disconnect' does nothing
max-body-size is set to 25,000,000 bytes by default (the influxdb default)
and the timeout to 1 second (the same as the default graphite tcp timeout)
the token (if given) gets saved in /etc/pve/priv/metricserver/$ID.pw
it is optional, because the 1.8.x compatibility api does not need
authentication (in contrast to influxdb 2.x)
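Roughly, the three phases could map onto an HTTP client like this (a minimal
sketch, not the actual plugin code; the method names, the config fields and
the use of LWP::UserAgent are assumptions for illustration only):

    # hypothetical sketch of the connect/send/disconnect mapping
    use LWP::UserAgent;

    sub _connect {
        my ($class, $cfg, $id) = @_;
        # nothing to connect for plain HTTP; just prepare client and target URL
        my $ua = LWP::UserAgent->new(timeout => $cfg->{timeout} // 1);
        return { ua => $ua, url => $cfg->{url}, token => $cfg->{token} };
    }

    sub _send_data {
        my ($class, $conn, $data) = @_;
        # the actual HTTP request happens here
        my $res = $conn->{ua}->post(
            $conn->{url},
            $conn->{token} ? (Authorization => "Token $conn->{token}") : (),
            Content => $data,
        );
        die "sending metrics failed: " . $res->status_line . "\n" if !$res->is_success;
    }

    sub _disconnect {
        # nothing to tear down for a stateless HTTP client
    }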
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
like we do for the storage section configs
we will need this to store the token for influxdb's http api
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by providing the id or cfg to have better context in those methods
we will need that for influxdb http api
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
while this format is probably not widely used yet, the plan is to make it
the default format in Debian, see 'man 5 sources.list'.
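For reference, a deb822-style entry looks roughly like this (mirror and suite
are just placeholders here):

    Types: deb
    URIs: http://deb.debian.org/debian
    Suites: bullseye
    Components: main contrib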
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
When a job is updated, verify_vzdump_parameters() is called twice. This led to
parse_property_string being called with the already parsed option.
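A minimal, self-contained illustration of the double-parse pattern (the names
here are hypothetical, not the vzdump code):

    use strict;
    use warnings;

    # hypothetical stand-in for a property-string parser
    sub parse_opt {
        my ($value) = @_;
        # guard: a previous call may already have turned the string into a
        # hash; parsing it a second time would fail
        return $value if ref($value) eq 'HASH';
        return { map { split(/=/, $_, 2) } split(/,/, $value) };
    }

    my $opt = 'keep-last=3,keep-daily=7';
    $opt = parse_opt($opt);    # first call: string -> hash
    $opt = parse_opt($opt);    # second call: already parsed, the guard keeps it intact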
Reported on the pve-user mailing list:
https://lists.proxmox.com/pipermail/pve-user/2021-January/172258.html
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Found by having the console open and getting a message that 'me' (the
controller) does not have a method getRootNode()
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
this was previously a display field, where submitValue defaults to
false, so we had to enable it explicitly. As it changed to a
combo box, we can drop that.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In the VM create wizard we automatically set e1000 for Windows and virtio for
Linux. We should also do this when adding a network device in the hardware
view.
OSDefaults.generic.networkCard (=e1000) is always available. Hence, leave this
as the default value for the field, then try to get the ostype via the API and
overwrite the e1000 default.
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
once we exceed the allowed line length, we must move each atom
(e.g., an argument or other value) to a separate line rather than
pairing up a few.
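A hypothetical example of what that means in practice:

    # not like this, pairing up a few atoms per continuation line:
    my $res = do_something($vmid, $node,
        $storage, $format, $timeout);

    # but like this, one atom per line:
    my $res = do_something(
        $vmid,
        $node,
        $storage,
        $format,
        $timeout,
    );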
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
add:
* HA status
* ceph osd df tree
* ceph conf file and conf db
* ceph versions
removed:
* ceph status, as pveceph status is now printing the same information
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Errors from storage_info() are newline-terminated, because perl would append
the line number otherwise. Chomp those errors, because sendmail() relies
on the presence of a newline to decide if it's multiple problems or only one.
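The perl behavior behind this, as a small self-contained example:

    use strict;
    use warnings;

    eval { die "storage 'local' is not online" };
    print $@;    # perl appended " at <file> line <n>." because the message had no newline

    eval { die "storage 'local' is not online\n" };
    print $@;    # the trailing newline keeps the message exactly as given

    chomp(my $err = $@);    # strip it again before collecting the problems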
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Fixes the case where reading from /etc/vzdump.conf fails.
Also convert the options read from /etc/vzdump.conf before the loop. That
avoids showing a wrong warning when 'prune-backups' is configured in
/etc/vzdump.conf, and maxfiles isn't. Previously, because 'maxfiles' from the
schema defaults was automatically set, the call to parse_prune_backups_maxfiles
after the loop threw the warning that both options are defined.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Printing a lot of very detailed JSON output on the CLI is not very
useful.
Printing the `ceph -s` overview is much better suited to give an overview
of the ceph cluster status.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
The phrasing left some room for speculation as to when this would actually be
run, e.g. after cloning a full VM?
Currently, the only instances where it is used are after moving a disk or
migrating a VM.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Since I had to look up the cause of the error message in our source, explain
why exactly ceph operations fail when /etc/ceph/ceph.conf already exists.
reported via our community forum:
https://forum.proxmox.com/threads/osd-ersetzen-neu-erstellen.80793/
quickly tested on a virtual ceph cluster
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
so that disabling the plugin (e.g. because the server is offline) works
even when the server is not reachable
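Schematically, the check moves before any connection attempt (a hypothetical
sketch with dummy data, not the actual code or config layout):

    use strict;
    use warnings;

    my $cfg = { ids => { influx1 => { disable => 1 }, graphite1 => {} } };    # dummy data

    for my $id (sort keys %{$cfg->{ids}}) {
        my $plugin_cfg = $cfg->{ids}->{$id};
        next if $plugin_cfg->{disable};    # skip before ever touching the network
        print "would connect and send metrics for '$id'\n";
    }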
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
and prefer storage, because the storage configuration might contain more
settings. Warning is preferable to dying, because all backups would be
affected (even if they don't use the vzdump.conf parameters) and the settings
could've been compatible (i.e. dumpdir being the storage's dump dir). Previously,
one of the two options would be chosen at random in the loop in new(), because of
perl's hash iteration order.
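For reference, perl does not guarantee any hash key order (and randomizes it
per process since 5.18), so which of the two keys ends up winning in such a
loop is not deterministic:

    use strict;
    use warnings;

    my %opts = (dumpdir => '/mnt/backup', storage => 'local');
    # the order of keys() can differ between runs, so whichever
    # option happens to come last silently overrides the other
    print "$_\n" for keys %opts;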
Reported here: https://forum.proxmox.com/threads/vzdump-times-out-very-often-on-zfs-storage-pool.80035/post-354277
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Since pve-container commit
c48a25452dccca37b3915e49b7618f6880aeafb1
the code to get the cpuset controller path lives in pve-common's PVE::CGroup.
Use that and improve the logging in case some error happens in the future.
Such an error will only be logged once per pvestatd run,
so it does not spam the log.
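The 'log only once' part is essentially a guard flag; a minimal sketch of the
pattern (hypothetical, not the actual pvestatd code, with the failing
PVE::CGroup call replaced by a stand-in):

    use strict;
    use warnings;

    my $logged_cgroup_error = 0;

    sub get_cpuset_base {
        # stand-in for the PVE::CGroup call that may fail
        my $path = eval { die "cpuset controller not found\n" };
        if (my $err = $@) {
            warn "could not get cpuset controller path: $err"
                if !$logged_cgroup_error++;
            return undef;
        }
        return $path;
    }

    get_cpuset_base() for 1 .. 3;    # the warning is printed only once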
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of relying on the contentTypeField (which does not need to
exist, e.g. for iscsi), explicitly write it into the panel/icon
mapping and check that
it would be better to query the backend for the storage capabilities,
but such an api call does not exist yet, so this should be ok for now
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
To get a more complete picture, instead of mocking storage_config,
PVE::Cluster's get_config is mocked. This ensures that the prune-backups
validation and the maxfiles conversion are also called.
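Mocking at that level can be done with something like Test::MockModule; a
rough sketch (the returned storage.cfg content is just a placeholder, not the
actual test fixture):

    use strict;
    use warnings;
    use Test::MockModule;

    my $pve_cluster = Test::MockModule->new('PVE::Cluster');
    $pve_cluster->mock(
        get_config => sub {
            my ($file) = @_;
            # hand back a fixed storage.cfg, so the real parser and the
            # prune-backups/maxfiles handling still run on top of it
            return "dir: local\n\tpath /var/lib/vz\n\tcontent backup\n"
                if $file eq 'storage.cfg';
            return undef;
        },
    );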
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>