When creating a backup, the log part can make the mail too big to be
transferred. To ensure delivery, two measures are taken:
1. Always omit the status lines
2. Omit the whole log part if a mail becomes (too) big
Additionally, add a check for missing log files.
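For illustration, a minimal sketch of the second measure; the size
limit, variable names, and replacement text are assumptions, not the
actual implementation:

    # sketch: drop the whole log part once the mail would get too big
    my $max_mail_size = 1024 * 1024;    # assumed 1 MiB limit
    if (length($mail_text) + length($text_log) > $max_mail_size) {
        $text_log = "Log data was too big to be sent, omitted it.\n";
    }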
Co-developed-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
The data passed to this closure was never freed; depending on the
count of VMs/CTs one could get >1 MB of RSS (!) memory leaked per
statd status update cycle...
We could also use Scalar::Util's weaken to weaken a copy of this
variable, but as a simple undef works, let's do that with a comment.
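Roughly, the fix amounts to something like this sketch (variable and
helper names are illustrative, not the actual code):

    my $status_data = collect_status_data();    # can easily exceed 1 MB
    my $upload = sub {
        send_to_metric_server($status_data);
        # free the captured data explicitly; the closure otherwise keeps
        # it alive and leaks RSS on every status update cycle
        undef $status_data;
    };
    $upload->();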
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
it seems that we have a reference leak or the like somewhere in the
(graphite?) status plugin; while the recent transaction-based update
mechanism made it slightly better, it's still bad with a lot of VMs..
Until we can track that down, or abandon perl for good, avoid too
frequent restarts by allowing statd to grow 15 MB of memory usage
after initial calibration (its memory usage at the 10th cycle)
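A sketch of such a limit check; the helper and restart calls are
hypothetical:

    my $base_rss;                            # calibrated at the 10th cycle
    my $allowed_growth = 15 * 1024 * 1024;   # 15 MB of head room

    sub check_statd_memory {
        my ($cycle) = @_;
        my $rss = get_process_rss($$);       # hypothetical RSS helper
        $base_rss = $rss if !defined($base_rss) && $cycle >= 10;
        return if !defined($base_rss) || $rss <= $base_rss + $allowed_growth;
        warn "statd memory usage grew too much, restarting\n";
        restart_daemon();                    # hypothetical restart call
    }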
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
For now it only handles the plugin registration and the two recently
integrated helpers.
But, this is a preparation to move the external metric server
update mechanism from a stateless always-newly-connect-send-disconnect
approach to a stateful, transaction-based mechanism; see later patches
keep the PVE::Status::Plugin use in pvestatd, as we read the
cfs-hosted status.cfg there, and the parser is defined by the common
status plugin base module.
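The registration part of such a module boils down to the usual PVE
section-config plugin pattern; a sketch (the module name and plugin
list here are assumptions):

    package PVE::ExtMetric;

    use strict;
    use warnings;

    use PVE::Status::Plugin;
    use PVE::Status::Graphite;
    use PVE::Status::InfluxDB;

    PVE::Status::Graphite->register();
    PVE::Status::InfluxDB->register();
    PVE::Status::Plugin->init();

    1;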
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
include the version as a string and as parts, as we do the split
already. Also include the build commit, so if we re-release a ceph
version, we can tell the difference here too.
Use the node name as key to make the new entry a bit more general; it
could easily be expanded with other info, if required.
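A sketch of the assumed resulting entry layout (all values are
placeholders):

    my $ceph_versions = {
        mynode => {
            version => {
                str   => '14.2.1',
                parts => [14, 2, 1],
            },
            buildcommit => '<build-commit-hash>',
        },
    };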
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
add and change the return signature for the wantarray case, which can
safely be done as this is only used once (statd), and there only the
first element, the full version string, is used - so no breakage
potential there
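A sketch of the changed signature (helper name and exact list layout
are assumptions):

    sub get_local_version {
        my $version_str = query_ceph_version();    # hypothetical helper
        my ($major, $minor, $patch) = split(/\./, $version_str);
        return wantarray
            ? ($version_str, [$major, $minor, $patch])
            : $version_str;
    }

    # statd only uses the first element, the full version string:
    my ($version) = get_local_version();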
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
in preparation for doing real transactions, with one batched connect +
send + disconnect, instead of hundreds of those per update cycle..
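The intended per-cycle flow then looks roughly like this (the
transaction API names are hypothetical at this point):

    my $txn = $plugin->connect($cfg);        # one connect per cycle
    for my $entry (@$status_entries) {
        $plugin->add_metric_data($txn, $entry);
    }
    $plugin->flush($txn);                    # one send
    $plugin->disconnect($txn);               # one disconnect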
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Spice foldersharing needs the webdavd daemon installed inside the guest.
This patch adds a hint to remind the user to install it in the VM.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Instead of doing multiple sends, one for each status metric line,
assemble it all in a string and send it out in a single go.
Per VM/CT/node we had >10 lines to send, so this is quite the
reduction. But also note that, thanks to Nagle's algorithm, this may
not have had a big effect for TCP, as it buffered those small writes
anyhow.
For UDP it can reduce the packet count on the line dramatically,
though.
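A sketch of the batched variant (variable names are illustrative):

    # assemble all metric lines first, then do a single send
    my $payload = '';
    $payload .= "$_\n" for @metric_lines;
    $socket->send($payload);    # one write instead of one per line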
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
after rethinking this it felt weird; sockets can already do this
themselves, so I checked out the IO::Socket::Timeout module, and yeah,
it's just an OOP wrapper for this, hiding the "scary" struct pack.
So instead of adding that as a dependency, let's do it ourselves.
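The struct pack in question boils down to something like this
(assuming $socket and an integer $timeout in seconds already exist):

    use Socket qw(SOL_SOCKET SO_SNDTIMEO SO_RCVTIMEO);

    # struct timeval: two native longs, seconds and microseconds
    my $timeout_struct = pack('l!l!', $timeout, 0);
    $socket->setsockopt(SOL_SOCKET, SO_SNDTIMEO, $timeout_struct)
        or die "failed to set send timeout: $!\n";
    $socket->setsockopt(SOL_SOCKET, SO_RCVTIMEO, $timeout_struct)
        or die "failed to set receive timeout: $!\n";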
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this way, backend-only settings (like 'size' or 'shared') do not get
lost when editing in the GUI
this was most obvious with the new pending options: every time we
edited a mountpoint, we lost its size, and even when setting the
options to exactly the same values as the originals, we still had the
mountpoint as 'pending', but without the size
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this helper conditionally sets the given value (or optionally a
different one) on the given property of the given object
this is useful for our MP/HD edit panels, where we set the options
of the drive/mountpoint this way for every GUI option we have
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
ObjectGrid (an ancestor of PendingObjectGrid) already has a 'reload'
function which does exactly the same, so get rid of the local one
here
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This is for TCP only, and TCP needs roughly 1.5 times the Round Trip
Time for connection setup. So, with a 1 second timeout we're still
good for connections with 660 ms latency in between.
The assumption is that most of the time the status server is
relatively near (same datacenter, or region), and connections to it
are datacenter grade, not like a spotty GPRS modem.
So, reduce this timeout to ensure that we do not block too long.
If anybody needs higher timeouts they can just change the default
anyway.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This change allows sending statistics to graphite over TCP.
So far only UDP was possible, which is not usable in some
environments, e.g., behind a load balancer.
Configuration example:
~ $ cat /etc/pve/status.cfg
graphite:
    server 10.20.30.40
    port 2003
    path proxmox
    proto tcp
    timeout 3
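A sketch of how the proto option could be wired up when creating the
socket (the IO::Socket::IP usage and parameter handling here are
assumptions, not the exact patch):

    use IO::Socket::IP;

    my $proto = $cfg->{proto} || 'udp';
    my $carbon_socket = IO::Socket::IP->new(
        PeerAddr => $cfg->{server},
        PeerPort => $cfg->{port} // 2003,
        Proto    => $proto,
        Timeout  => $cfg->{timeout} // 3,
    ) or die "couldn't connect to graphite server: $!\n";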
Signed-off-by: Martin Verges <martin.verges@croit.io>
rather than reducing the total job count during execution (and that
only for some cases), do some checks first and pass only the
known-good nodes to the for-each-node POST-request loop, so we can
omit all checks there.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The previous behaviour was buggy and displayed "Node is offline" for
all non-selected nodes (only one can be selected at a time).
Also fix the progress window to show the correct number of nodes in
the backup job.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Removed in commit fcb8022169, as we wanted to re-use Debian Buster's
upstream version, but we re-uploaded our own again. And besides that,
this version would still be interesting even if it was not uploaded
by us.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>