this section needs general rework/expansion, but to be able to link to
it already now, add a reference and only do a minimal title update.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Added a section on ransomware. This includes a bullet point in the
main features section and a section in the backup storage section.
The latter lists mitigation resources in PBS as well as best
practices.
Updated capitalization to be consistent in main features. In my
opinion, since these are bullet points and not headings, they should
be in lowercase.
Signed-off-by: Noel Ullreich <n.ullreich@proxmox.com>
Reviewed-by: Stefan Hanreich <s.hanreich@proxmox.com>
Reviewed-by: Stefan Sterz <s.sterz@proxmox.com>
rationale is that it makes the backup much safer than 'none', but does
not incur as big of a performance hit as 'file'.
here is a benchmark:
data to be backed up:
~14 GiB of semi-random test images between 12 kiB and 4 GiB,
resulting in ~11 GiB of chunks (more than the RAM available on the target)
PBS setup:
virtualized (on an otherwise idle machine), PBS itself also idle,
8 cores (kvm64 on an Intel 12700k) and 8 GiB memory;
all virtual disks are on LVM with discard and iothread enabled.
the HDD is a 4 TB Seagate ST4000DM000 and the NVMe is a 2 TB
Crucial CT2000P5PSSD8.
I tested each disk with ext4/xfs/zfs (created with the GUI defaults),
with 5 runs each; in between runs, the caches were flushed and the
filesystem was synced.
I dropped the biggest and smallest result and averaged the remaining 3
(percentages are relative to the 'none' result);
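for example, for hdd - ext4 with 'filesystem': 140.39s / 125.67s ≈ 1.1171, i.e. +11.71%.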
result:
    test          none     filesystem          file
    hdd - ext4    125.67s  140.39s (+11.71%)   358.10s (+184.95%)
    hdd - xfs      92.18s  102.64s (+11.35%)   351.58s (+281.41%)
    hdd - zfs      94.82s  104.00s  (+9.68%)   309.13s (+226.02%)
    nvme - ext4    60.44s   60.26s  (-0.30%)    60.47s   (+0.05%)
    nvme - xfs     60.11s   60.47s  (+0.60%)    60.49s   (+0.63%)
    nvme - zfs     60.83s   60.85s  (+0.03%)    60.80s   (-0.05%)
So all in all, it does not seem to make a difference for NVMe drives;
for HDDs, 'filesystem' increases the backup time by ~10%, while 'file'
increases it by a factor of ~3 to ~4, depending on the filesystem.
Note that this does not take into account parallel actions, such as gc,
verify or other backups.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we can still do that, as notifications for prune jobs weren't released
yet.
We may want to evaluate whether to adapt (some) other notification types
too in the next major release.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
under some conditions, the smartctl exit code has bit 2 set even if the
smartctl call succeeded but e.g. produced some warnings derived from
the attributes.
we do the same in PVE, but this is only the first step in fixing #4353,
since we should probably parse the smartctl output better to include
such warnings.
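In sketch form (exit-code bits per smartctl(8); the surrounding error
handling here is simplified):

    use std::process::Command;

    fn run_smartctl(device: &str) -> std::io::Result<std::process::Output> {
        let output = Command::new("smartctl").args(["-a", device]).output()?;
        let code = output.status.code().unwrap_or(-1);
        // bit 0: command line did not parse, bit 1: device open failed;
        // only treat those as fatal -- bit 2 can be set for mere attribute
        // warnings even though the call itself succeeded
        if code & 0b11 != 0 {
            return Err(std::io::Error::new(
                std::io::ErrorKind::Other,
                format!("smartctl failed with exit code {code}"),
            ));
        }
        Ok(output)
    }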
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
is left in the datastore. Before, the GUI would report "Never" for the
estimated time full, because the value provided by the backend was in
the past. To get around this, the GUI now reports "Full" once the value
for 'available' reaches 0.
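The display rule amounts to the following (sketched in Rust for
brevity; the actual change lives in the GUI code):

    fn estimated_full_text(available: u64, estimated_epoch: Option<i64>) -> String {
        if available == 0 {
            // the datastore is already full, an estimate makes no sense
            return "Full".to_string();
        }
        match estimated_epoch {
            Some(epoch) => epoch.to_string(), // rendered as a date in the real UI
            None => "Never".to_string(),
        }
    }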
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
The API now exposes the field 'available' as well, with which the
unprivileged total is calculated in all corresponding views in the
frontend.
The rrd charts now also display the unprivileged total as the total
where available; otherwise, the absolute total is used.
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
The rrd data now also tracks the 'available' field of the disk usage.
The calculation of the estimated_time_full was adapted to use the
total for the unprivileged user, which is the sum of used + available.
The total for unprivileged users is preferable because datastores are
always written to by the backup user, which means that any storage
space reserved for root is unusable for our purposes.
To avoid resetting the estimate when switching to this new version,
the backend will try to use the available value to calculate the
unprivileged total. When that is not an option, it will fall back to
using the absolute total.
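As a sketch of that fallback logic:

    /// Prefer the unprivileged total (used + available, i.e. what the
    /// backup user can actually write), fall back to the absolute total.
    fn effective_total(used: u64, available: Option<u64>, total: u64) -> u64 {
        match available {
            Some(avail) => used + avail,
            None => total, // e.g. rrd data from before this change
        }
    }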
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
The read_tasklog API call now streams the whole log file if the query
parameter 'download' is set to true. If the 'limit' parameter is set
to 0, all lines of the task log are returned in JSON format.
To make a file stream and a JSON response work in the same API call, I
had to use one of the lower-level ApiMethod types from
proxmox-router. Therefore, the routing declarations and parameter
schemas have been changed accordingly.
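Schematically, the branching looks like this (not the actual
proxmox-router types, just the dispatch logic):

    enum Response {
        FileStream(std::fs::File),
        Json(serde_json::Value),
    }

    fn read_tasklog(path: &str, download: bool, limit: u64) -> std::io::Result<Response> {
        if download {
            // stream the whole file to the client
            return Ok(Response::FileStream(std::fs::File::open(path)?));
        }
        let content = std::fs::read_to_string(path)?;
        let lines: Vec<&str> = if limit == 0 {
            content.lines().collect() // limit == 0: return all lines
        } else {
            content.lines().take(limit as usize).collect()
        };
        Ok(Response::Json(serde_json::json!(lines)))
    }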
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
This new subcommand compares a pxar archive in two different
snapshots and prints a list of added/modified/deleted file
entries.
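The core comparison, sketched over two maps from entry path to some
metadata digest (a hypothetical simplification of the pxar entries):

    use std::collections::HashMap;

    // classify entries of snapshot b relative to snapshot a
    fn diff_archives<'a>(
        a: &'a HashMap<String, u64>, // path -> e.g. mtime/size digest
        b: &'a HashMap<String, u64>,
    ) -> (Vec<&'a str>, Vec<&'a str>, Vec<&'a str>) {
        let (mut added, mut modified, mut deleted) = (Vec::new(), Vec::new(), Vec::new());
        for (path, meta) in b {
            match a.get(path) {
                None => added.push(path.as_str()),
                Some(old) if old != meta => modified.push(path.as_str()),
                Some(_) => {} // unchanged
            }
        }
        for path in a.keys() {
            if !b.contains_key(path) {
                deleted.push(path.as_str());
            }
        }
        (added, modified, deleted)
    }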
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
The backend doesn't have an 'enable' option, but a 'disable' one.
Convert it to avoid checking a negated value as "enabled".
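I.e., roughly (in Rust for consistency, wherever the submitted values
get mapped to the backend parameters):

    fn convert_enable(params: &mut serde_json::Map<String, serde_json::Value>) {
        // the form exposes 'enable', the backend only knows 'disable'
        if let Some(enable) = params.remove("enable").and_then(|v| v.as_bool()) {
            params.insert("disable".into(), serde_json::Value::Bool(!enable));
        }
    }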
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Encapsulate it in a small QMPSock struct impl to make the usage nicer,
as the caller should not have to care about & keep track of the
initial socket state and details.
A send_raw and a send(Value) method should cover most needs.
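A rough sketch of the shape (the QMP greeting/capabilities handshake
is simplified, error handling minimal):

    use std::io::{BufRead, BufReader, Write};
    use std::os::unix::net::UnixStream;
    use std::path::PathBuf;

    pub struct QMPSock {
        path: PathBuf,
        stream: Option<BufReader<UnixStream>>,
    }

    impl QMPSock {
        pub fn new(path: impl Into<PathBuf>) -> Self {
            Self { path: path.into(), stream: None }
        }

        // lazily connect and negotiate capabilities, so callers need
        // not track the socket state themselves
        fn handle(&mut self) -> std::io::Result<&mut BufReader<UnixStream>> {
            if self.stream.is_none() {
                let mut reader = BufReader::new(UnixStream::connect(&self.path)?);
                let mut line = String::new();
                reader.read_line(&mut line)?; // QMP greeting banner
                reader.get_mut().write_all(b"{\"execute\":\"qmp_capabilities\"}\n")?;
                line.clear();
                reader.read_line(&mut line)?; // capabilities response
                self.stream = Some(reader);
            }
            Ok(self.stream.as_mut().unwrap())
        }

        pub fn send_raw(&mut self, data: &str) -> std::io::Result<String> {
            let reader = self.handle()?;
            reader.get_mut().write_all(data.as_bytes())?;
            let mut response = String::new();
            reader.read_line(&mut response)?;
            Ok(response)
        }

        pub fn send(&mut self, value: serde_json::Value) -> std::io::Result<String> {
            self.send_raw(&(value.to_string() + "\n"))
        }
    }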
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this is on top of the normal memory, and requiring over 1.3 GB is just
huge. Sadly, the commit adding this has zero details about which setups
fail and which work again with the change, so it is hard to tell; but
any setup that needs that much sounds like a bug in ZFS or in the
remaining code here.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
avoid the need to loop a parameter through a dozen functions which
don't care about it at all; if anything, this should be a global
OnceCell or a lock-guarded param.
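E.g. as a global (sketched with std's OnceLock as a stand-in for a
OnceCell; the flag name is taken from the 'dynamic-memory' parameter
below):

    use std::sync::OnceLock;

    // set once at startup instead of threading it through every call
    static DYNAMIC_MEMORY: OnceLock<bool> = OnceLock::new();

    fn dynamic_memory() -> bool {
        *DYNAMIC_MEMORY.get().unwrap_or(&false)
    }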
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
by adding a 'dynamic-memory' parameter that controls whether we
automatically increase the memory of the guest VM or not
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
which registers a binary in /root/.forward and handles mail forwarding
to the mail address configured for root@pam in PBS, similar to how it
is currently done in PVE.
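I.e., /root/.forward ends up with a single pipe entry along these
lines (the exact binary path is illustrative):

    |/usr/bin/proxmox-mail-forward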
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
when a backup contains a drive with ZFS on it, the default memory
size (up to 384 MiB) is often not enough to hold the ZFS metadata.
to improve that situation, add memory dynamically (1 GiB) when a path
is requested that is on ZFS. Note that the image must be started with
a kernel capable of memory hotplug.
to achieve that, we also have to add a QMP socket to the VM, so that
we can later connect and add the memory backend and DIMM.
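The QMP side of that amounts to two commands, sketched here with
serde_json (the ids are illustrative, and the QMPSock wrapper from the
commit above is assumed):

    use serde_json::json;

    fn hotplug_dimm(qmp: &mut QMPSock) -> std::io::Result<()> {
        // add a 1 GiB RAM backend, then plug it in as a DIMM
        qmp.send(json!({
            "execute": "object-add",
            "arguments": {
                "qom-type": "memory-backend-ram",
                "id": "mem0",
                "size": 1024u64 * 1024 * 1024,
            }
        }))?;
        qmp.send(json!({
            "execute": "device_add",
            "arguments": { "driver": "pc-dimm", "id": "dimm0", "memdev": "mem0" }
        }))?;
        Ok(())
    }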
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
fixups for DatastoreFSyncLevel:
* use derive for Default
* add some more derives (Clone, Copy)
chunk store:
* drop to_owned for chunk_dir_path
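I.e., after the fixup the type looks roughly like this (api/serde
attributes omitted):

    #[derive(Clone, Copy, Default)]
    pub enum DatastoreFSyncLevel {
        #[default]
        None,
        Filesystem,
        File,
    }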
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the dropped .into() is guarded by the bumped build-dependency on
proxmox-sys 0.4.1; the missing Eq is a new clippy lint.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
With the old code, the rate-limit parameters were passed in their own
dictionary under the 'limit' key, but the API expects the rate-limit
settings as top-level keys. This commit sets the rate-limit parameters
correctly, so the API actually uses them.
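Sketched with serde_json (key names as in the usual rate-limit schema,
e.g. 'rate-in'; the values are examples):

    use serde_json::json;

    fn rate_limit_params() -> serde_json::Value {
        // wrong (old): nested under "limit", so the API ignored the values
        let _nested = json!({ "limit": { "rate-in": "10MiB", "burst-in": "20MiB" } });
        // correct (new): top-level keys, as the API expects
        json!({ "rate-in": "10MiB", "burst-in": "20MiB" })
    }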
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
currently, we don't (f)sync on chunk insertion (or at any point after
that), which can lead to broken chunks in case of e.g. an unexpected
power loss. To fix that, offer a tuning option for datastores that
controls the level of syncs:
* None (default): same as the current state, no (f)syncs done at any point
* Filesystem: at the end of a backup, the datastore issues
  a syncfs(2) on the filesystem of the datastore
* File: issues an fsync on each chunk as it gets inserted
  (using our 'replace_file' helper) and an fsync on the directory
  handle (see the sketch below)
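As a sketch (using libc directly for syncfs(2); error handling
trimmed, function names illustrative):

    use std::fs::File;
    use std::os::unix::io::AsRawFd;

    enum DatastoreFSyncLevel { None, Filesystem, File }

    /// per chunk insert
    fn persist_chunk(level: &DatastoreFSyncLevel, chunk: &File, dir: &File) -> std::io::Result<()> {
        if matches!(level, DatastoreFSyncLevel::File) {
            chunk.sync_all()?; // fsync(2) on the chunk itself
            dir.sync_all()?;   // and on its directory handle
        }
        Ok(())
    }

    /// at the end of a backup
    fn finish_backup(level: &DatastoreFSyncLevel, datastore_dir: &File) -> std::io::Result<()> {
        if matches!(level, DatastoreFSyncLevel::Filesystem) {
            // syncfs(2) on the datastore's filesystem
            if unsafe { libc::syncfs(datastore_dir.as_raw_fd()) } != 0 {
                return Err(std::io::Error::last_os_error());
            }
        }
        Ok(())
    }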
a small benchmark showed the following (times in mm:ss):
setup: virtualized PBS, 4 cores, 8 GiB memory, ext4 on a spinner

    size                 none   filesystem  file
    2GiB (fits in RAM)   00:13  00:41       01:00
    33GiB                05:21  05:31       13:45
so if the backup fits in memory, there is a large difference between
all of the modes (expected), but as soon as it exceeds the memory
size, the difference between not syncing and syncing the filesystem at
the end becomes much smaller.
I also tested on an NVMe, but there the syncs made basically no
difference.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>