In case we run into a ready check timeout, query the drive, and
increase the timeout to 2 hours and 5 minutes if it's calibrating (5
minutes headroom). This is effectively a generalization of commit
0b1a30aa ("tape: adapt format_media for LTO9+"), which increased the
timeout for the format procedure, while this change also covers tapes
that were not explicitly formatted but get auto-formatted indirectly
on the first action changing a fresh tape, e.g. barcode labeling.
The actual reason for this is that since LTO-9, initial loading of
tapes into a drive can block up to 2 hours according to the spec. One
can find the IBM and HP LTO SCSI references rather easily [0][1].
As for the timeout, IBM only mentions it in their recommendations:
> Although most optimizations will complete within 60 minutes some
> optimizations may take up to 2 hours.
And HP states:
> Media initialization adds a variable amount of time to the
> initialization process that typically takes between 20 minutes and
> 2 hours.
So it seems there is no hard limit and it depends on the setup, but
most ordinary setups should be covered; in my tests it always took
around the 1 hour mark.
0: IBM LTO-9 https://www.ibm.com/support/pages/system/files/inline-files/LTO%20SCSI%20Reference_GA32-0928-05%20(EXTERNAL)_0.pdf
1: HP LTO-9 https://support.hpe.com/hpesc/public/docDisplay?docId=sd00001239en_us&page=GUID-D7147C7F-2016-0901-0921-000000000450.html
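A minimal sketch of the resulting timeout selection; the function and
parameter names are illustrative, not the actual proxmox-tape code:
```rust
use std::time::Duration;

// Illustrative sketch only; how "calibrating" is detected from the drive via
// SCSI is not shown here and this is not the actual proxmox-tape API.
fn ready_timeout(default: Duration, drive_is_calibrating: bool) -> Duration {
    if drive_is_calibrating {
        // LTO-9+ media optimization may take up to 2 hours according to the
        // references above, so wait 2 hours plus 5 minutes of headroom
        Duration::from_secs(2 * 3600 + 5 * 60)
    } else {
        default
    }
}
```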
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250415114043.2389789-1-d.csapak@proxmox.com
[TL: extend commit message with info that Dominik provided in a
reply]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
During phase 1 of garbage collection referenced chunks are marked as
in use by iterating over all index files and updating the atime on
the chunks referenced by these.
In an edge case for long running garbage collection jobs, a newly
added snapshot (created after the start of GC) may reuse known chunks
from a previous snapshot, while the previous snapshot's index
referencing them disappears before the marking phase could reach
that index (e.g. pruned because only 1 snapshot is to be kept by the
retention setting). In that case, the known chunks from that previous
index file might not get marked, given that none of the other index
files referenced them.
Since commit 74361da8 ("garbage collection: generate index file list
via datastore iterators") this is even less likely, as the
iteration now also reads index files added during phase 1, and
therefore either the new or the previous index file will account for
these chunks (the previous backup snapshot can only be pruned after
the new one has finished, since it is locked). There remains, however,
a small
race window between the reading of the snapshots in the backup group
and the reading of the actual index files for marking.
Fix this race (see the sketch below) by:
1. checking whether the last snapshot of a group disappeared, and if so
2. generating the list again, looking for new index files previously
   not accounted for,
3. to avoid possible endless looping, locking the group if the
   snapshot list still changed after the 10th retry (which will lead
   to concurrent operations on this group failing).
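A rough sketch of that retry logic; the group handle and helper
functions are placeholders, not the real datastore code:
```rust
// Illustrative sketch of the retry logic; `BackupGroup` stands in for the
// group handle and the helper functions are placeholders, not the real
// datastore API.
fn mark_group(group: &BackupGroup) -> Result<(), anyhow::Error> {
    for retry in 0.. {
        let index_files = list_index_files_of_group(group)?;
        mark_chunks_in_use(&index_files)?;

        // if the last snapshot seen while generating the list still exists,
        // no newer index file can have replaced a pruned one unnoticed
        if last_snapshot_still_present(group)? {
            break;
        }

        if retry >= 10 {
            // avoid endless looping: lock the group so concurrent operations
            // on it fail, then do one final pass under the lock
            let _guard = lock_group(group)?;
            mark_chunks_in_use(&list_index_files_of_group(group)?)?;
            break;
        }
    }
    Ok(())
}
```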
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Acked-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Link: https://lore.proxmox.com/20250416105000.270166-3-c.ebner@proxmox.com
Instead of returning None, fail if the open index reader is called
on a blob file. Blobs cannot be read as an index anyway, and this
allows to distinguish the case where an index file cannot be read
because it vanished.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250416105000.270166-2-c.ebner@proxmox.com
the below commit accidentally switched this lock to an exclusive lock
when it should just be a shared one as that is sufficient for a
reader:
e2c1866b: datastore/api/backup: prepare for fix of #3935 by adding
lock helpers
this has already caused failed backups for a user with a sync job that
runs while they are trying to create a new backup.
https://forum.proxmox.com/threads/165038
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Since commit 74361da8 ("garbage collection: generate index file list
via datastore iterators") not only snapshots present at the start of
the garbage collection run are considered for marking, but also newly
added ones. Take these into account by adapting the total index file
counter used for the progress output.
Further, also correctly take into account index files which have been
pruned during GC and are therefore present in the list of index files
still to be processed, but never encountered by the datastore
iterators. These would otherwise be incorrectly interpreted as strange
paths and logged accordingly, causing confusion as reported in the
community forum [0].
Fixes: https://forum.proxmox.com/threads/164968/
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
else parallel builds of the static binaries will not work correctly, just like
with the regular .do-cargo-build.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The Debian package providing the dynamically linked version of the
proxmox-backup-client is packaged together with the pxar executable.
To be in line with that and for user convenience, include a statically
linked version of pxar in the static package as well.
Rename the STATIC_BIN env variable to STATIC_BINS to reflect that it
now covers multiple binaries, and store the rustc flags in their own
variable so they can be reused, since `cargo rustc` does not allow
invocations with multiple `--package` arguments at once.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Extends the log messages written to the server's backup worker task
log to include the name of the snapshot used as the previous backup.
This information facilitates debugging efforts, as the previous
snapshot might have been pruned since.
For example, instead of
```
download 'index.json.blob' from previous backup.
register chunks in 'drive-scsi0.img.fidx' from previous backup.
download 'drive-scsi0.img.fidx' from previous backup.
```
this now logs
```
download 'index.json.blob' from previous backup 'vm/101/2025-04-15T09:02:10Z'.
register chunks in 'drive-scsi0.img.fidx' from previous backup 'vm/101/2025-04-15T09:02:10Z'.
download 'drive-scsi0.img.fidx' from previous backup 'vm/101/2025-04-15T09:02:10Z'.
```
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Discourage the use of the statically linked binary on systems where
the regular one is available.
Move the previous note into its own section and link to the
installation section.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250410093059.130504-1-c.ebner@proxmox.com
To have something in the docs.
In the long run we want a somewhat fancy and safe mechanism to host
these builds directly on the CDN and implement querying that for
updates, verified with a baked-in public key, but for starters this
very basic documentation has to suffice.
We could also describe how to extract the client from the .deb through
`ar` or `dpkg -x`, but that feels a bit too hacky for the docs, and is
maybe better explained on-demand in the forum or the like.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add a note mentioning that the statically linked binary does not use
the same mechanism for name resolution as the regular client, in
particular that this does not support NSS.
The statically linked binary cannot use the `getaddrinfo` based name
resolution because of possible ABI incompatibility. It therefore is
conditionally compiled and linked using the name resolution provided
by hickory-resolver, part of hickory-dns [0].
[0] https://github.com/hickory-dns/hickory-dns
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The dependency on the `getaddrinfo` based `GaiResolver` used by
default for the `HttpClient` is not suitable for the statically
linked binary of the `proxmox-backup-client`, because of the
dependency on glibc NSS libraries, as described in glibc's FAQs [0].
As a workaround, conditionally compile the binary using the `hickory-dns`
resolver.
[0] https://sourceware.org/glibc/wiki/FAQ#Even_statically_linked_programs_need_some_shared_libraries_which_is_not_acceptable_for_me.__What_can_I_do.3F
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
FG: bump proxmox-http dependency
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=4788
Build and package a statically linked binary version of the
proxmox-backup-client to facilitate updates and distribution.
This provides a mechanism to obtain and repackage the client for
external parties and Linux distributions.
The statically linked client is provided as dedicated package,
conflicting with the regular package.
Since the RUSTFLAGS env variable is not preserved when building
with dpkg-buildpackage, invoke via `cargo rustc` instead, which allows
setting the required arguments.
Credit goes also to Christoph Heiss, as this patch is loosely based
on his pre-existing work for the proxmox-auto-install-assistant [0],
which provided a good template.
Also, place the libsystemd stub into its own subdirectory for cleaner
separation from the compiled artifacts.
[0] https://lore.proxmox.com/pve-devel/20240816161942.2044889-1-c.heiss@proxmox.com/
Suggested-by: Christoph Heiss <c.heiss@proxmox.com>
Originally-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: fold in fixups
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
since it affects whether cargo puts build artifacts directly into
target/debug (or target/release) or into a target-specific
sub-directory.
the package build will always pass `--target $(DEB_HOST_RUST_TYPE)`,
since it invokes the cargo wrapper in /usr/share/cargo/bin/cargo, so
this change unifies the behaviour across plain `make` and `make
deb`.
direct calls to `cargo build/test/..` will still work as before.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This aligns the tooltips with how we have them in Proxmox VE. Using
"view" instead of "open" should make it clear that this is a safe,
read-only action.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Link: https://lore.proxmox.com/20241118104959.95159-1-a.lauterer@proxmox.com
This section is meant to give a basic overview on how to use
custom templates for notifications. It will be expanded in the
future, providing a more detailed view on how templates are resolved,
existing fallback mechanisms, available templates, template
variables and helpers.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Alexander Zeidler <a.zeidler@proxmox.com>
Link: https://lore.proxmox.com/20250409084628.125951-1-l.wagner@proxmox.com
to avoid interpreting HTML in the message when displaying the mask.
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Explicitly mention how to set the rate limit for sync jobs in push
direction to avoid possible confusion.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250318094756.204368-2-c.ebner@proxmox.com
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Commit 9aa213b8 ("ui: sync job: adapt edit window to be used for pull
and push") adapted the sync job edit window so that jobs in both push
and pull direction can be edited using the same window. This however
did not include switching the direction to which the http client rate
limit is applied.
Fix this by additionally adding an edit field for `rate-out` and
conditionally hiding the less useful rate limit direction (rate-out
for pull and rate-in for push). This allows preserving the values if
explicitly set via the sync job config.
Reported in the community forum:
https://forum.proxmox.com/threads/163414/
Fixes: 9aa213b8 ("ui: sync job: adapt edit window to be used for pull and push")
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250318094756.204368-1-c.ebner@proxmox.com
instead of just the top-most context/error, which often excludes
relevant information, such as when locking fails.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the widget type is the most important property as it defines how every
other property will be interpreted, so it should always come first.
Move name right after that, as it is almost always the key for how
the data will be sent to the backend and thus also quite relevant.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
That width is already used in a few places, we might even want to
change the edit window default in the future.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Ensure title-case is honored. While at it, drop the "snapshot" prefix
for the advanced options; we do not use that for non-advanced options
like "Remove Vanished" either. This avoids some field labels wrapping
over multiple lines, at least for English.
For now just in the general datacenter option view, not when editing
the tuning options. To also allow entering this, we should first
provide our backend implementation as WASM to avoid having to redo
this in JavaScript.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This was getting cramped. While it might actually be even nicer to go
to a more verbose style like we use for the advanced settings of
backup jobs in Proxmox VE, with actual sentences describing the
options' basic effects and rationale, this is way quicker to do and
already adds a bit more rationale. We can always do more later on
when there's less release time pressure.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds a bullet point to the listed datastore tuning parameters,
describing its functionality, implications and typical values.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/pbs-devel/20250404130713.376630-4-c.ebner@proxmox.com
[TL: address trivial merge conflict from context changes]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Displays and allows to edit the configured LRU cache capacity via the
datastore tuning parameters.
A step of 1024 is used in the number field for convenience when using
the buttons; more fine-grained values can be set by typing.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/pbs-devel/20250404130713.376630-3-c.ebner@proxmox.com
[TL: address trivial merge conflict from context changes]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allow to control the capacity of the cache used to track recently
touched chunks via the configured value in the datastore tuning
options. Log the configured value to the task log, if an explicit
value is set, allowing the user to confirm the setting and debug.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/pbs-devel/20250404130713.376630-2-c.ebner@proxmox.com
Skip over snapshots which have not been verified or encrypted if the
sync job has the corresponding flags set.
A snapshot is considered encrypted if all the archives in the
manifest have `CryptMode::Encrypt`. A snapshot is considered verified
if the manifest's verify state is set to `VerifyState::Ok`.
This allows to only synchronize a subset of the snapshots, which are
known to be fine (verified) or which are known to be encrypted. The
latter is of most interest for sync jobs in push direction to
untrusted or less trusted remotes, where it might be desired to not
expose unencrypted contents.
Link to the bugtracker issue:
https://bugzilla.proxmox.com/show_bug.cgi?id=6072
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/pbs-devel/20250404132106.388829-4-c.ebner@proxmox.com
Extend the sync job config API to accept the 'encrypted-only' and
'verified-only' flags, allowing to include only encrypted and/or
verified backup snapshots and exclude others from the sync.
Pass these flags to the sync job's push or pull parameters on job
invocation.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/pbs-devel/20250404132106.388829-3-c.ebner@proxmox.com
... and move token deletion into a new `do_delete_token` function,
since it'll be reused later on user deletion.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Mostly taken from pve-docs and adapted as needed.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
The comment & default property can be updated for the built-in PBS
realm, which the AuthSimplePanel from widget-toolkit implements.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
This uses the functionality previously introduced in widget-toolkit as
part of this series, which is gated behind this flag.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
The built-in PAM and PBS realms use slightly different API paths,
without the type in the URL, as that would be redundant anyway. Thus,
move the setting to per-realm.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
[TL: commit subject style fix]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
For the built-in PBS authentication realm, the comment and whether it
should be the default login realm can be updated. Add the required API
plumbing for it.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
For the built-in PAM authentication realm, the comment and whether it
should be the default login realm can be updated. Add the required API
plumbing for it.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Currently, the built-in PAM and PBS authentication realms are (hackily)
hardcoded. Replace that with the new, proper API types for these two
realms, thus treating them like any other authentication realm.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Whenever the `default` field is set to `true` for any realm, the
`default` field must be unset first from all realms to ensure that only
ever exactly one realm is the default.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Now that all the realms support this field, add the required API
plumbing for it.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Document the gc-atime-cutoff option and describe the behavior it
controls by adding it as an additional bullet point to the documented
datastore tuning options.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Allow setting the atime cutoff for phase 2 of garbage collection in
the datastore's tuning parameters. This value changes the time after
which a chunk is not considered in use anymore if it falls outside of
the cutoff window.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Use the user-configured atime cutoff instead of the default 24h 5m
margin if explicitly set, otherwise fall back to the default.
Move the minimum atime calculation based on the atime cutoff to the
sweep_unused_chunks() call site and pass in the calculated values, so
as to have the logic in the same place.
Add log output showing which cutoff and minimum access time are used
by the garbage collection.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Document the gc-atime-safety-check flag and describe the behavior it
controls by adding it as an additional bullet point to the documented
datastore tuning options.
This also fixes the indentation of the cli example showing how to set
the sync level, to make it clear that it still belongs to the previous
bullet point.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Allow to edit the atime safety check flag via the datastore tuning
options edit window.
Do not expose the flag for datastore creation as it is strongly
discouraged to create datastores on filesystems not correctly handling
atime updates as the garbage collection expects. It is nevertheless
still possible to create a datastore via the cli and pass in the
`--tuning gc-atime-safety-check=false` option.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Check if the filesystem backing the chunk store actually updates the
atime to avoid potential data loss in phase 2 of garbage collection,
in case the atime update is not honored.
Perform the check before phase 1 of garbage collection, as well as
on datastore creation. The latter is to detect early and disallow
datastore creation on filesystem configurations which otherwise would
most likely lead to data loss. To perform the check also when reusing
an existing datastore, open the chunk store on reuse as well.
Enable the atime update check by default, but allow to opt out by
setting a datastore tuning parameter flag for backwards compatibility.
This is honored by both garbage collection and datastore creation.
The check uses a 4 MiB fixed-size, unencrypted and compressed chunk
as test marker, inserted if not present. This all-zero chunk is very
likely present anyway for unencrypted backup contents with large
all-zero regions using fixed-size chunking (e.g. VMs).
To avoid cases where the timestamp will not be updated because of the
Linux kernel's timestamp granularity, sleep for 1 second in-between
the chunk insert (including an atime update if pre-existing) and the
subsequent stat + utimensat.
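A rough sketch of the check's flow, with the chunk-store specifics
abstracted into closures (not the actual implementation):
```rust
use std::io;
use std::time::{Duration, SystemTime};

// Illustrative sketch; inserting the test chunk and reading/updating its
// atime (stat/utimensat) are abstracted into closures here, they are not
// the actual chunk store API.
fn atime_update_honored(
    insert_test_chunk: impl Fn() -> io::Result<()>,
    read_atime: impl Fn() -> io::Result<SystemTime>,
    update_atime: impl Fn() -> io::Result<()>,
) -> io::Result<bool> {
    insert_test_chunk()?; // 4 MiB all-zero chunk, compressed, unencrypted
    // sleep so the explicit atime update below cannot be masked by the
    // kernel's timestamp granularity
    std::thread::sleep(Duration::from_secs(1));
    let before = read_atime()?;
    update_atime()?; // set the atime to "now", just like GC phase 1 does
    Ok(read_atime()? > before)
}
```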
Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=5982
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Inserting a new chunk into the chunk store as a process running with
root privileges currently does not set an explicit ownership on the
chunk file. As a consequence, this will lead to permission issues if
the chunk is operated on by a codepath executed in the less
privileged proxy task running as the `backup` user.
Therefore, explicitly set the ownership and permissions of the chunk
file upon insert, if the process is executed as `root` user.
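A rough sketch of what such an ownership fixup can look like; looking
the user up by name and the 0o644 mode are assumptions, not
necessarily what the chunk store actually does:
```rust
use std::os::unix::fs::PermissionsExt;
use std::path::Path;

// Illustrative sketch; looking up the `backup` user by name and the 0o644
// mode are assumptions, and the real code may act on the still-open
// temporary file instead of the final path.
fn fixup_chunk_ownership(chunk_path: &Path) -> Result<(), anyhow::Error> {
    if nix::unistd::Uid::effective().is_root() {
        let backup_user = nix::unistd::User::from_name("backup")?
            .ok_or_else(|| anyhow::format_err!("unable to look up 'backup' user"))?;
        std::os::unix::fs::chown(
            chunk_path,
            Some(backup_user.uid.as_raw()),
            Some(backup_user.gid.as_raw()),
        )?;
        std::fs::set_permissions(chunk_path, std::fs::Permissions::from_mode(0o644))?;
    }
    Ok(())
}
```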
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
We are going to add more credentials so it makes sense to have a common
helper to get the secrets.
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
otherwise tokens without comments can no longer be created as the api
will reject the additional `delete` parameter. this bug was introduced
by commit:
3fdf876: api: token: make comment deletable
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
group or namespace removal can fail if the old locking mechanism is
still in use, as it is unsafe to properly clean up in that scenario.
return an error message that explains how to rectify that situation.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
[TL: address simple merge conflict and fine tune message to admins]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this is only needed for removing the group if the last snapshot is
removed, ignore locking failures, as the user can't do anything to
rectify the situation anymore.
log the locking error for debugging purposes, though.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
[TL: line-wrap comment at 100cc and fix bullet-point indentation]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
See commit 27dd7377 ("fix #3935: datastore/api/backup: move datastore
locking to '/run'") for details, as I'll bump PBS now we can fixate
the version and drop the safety-net "reminder" from d/rules again.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
To reduce the number of atime updates, keep track of the recently
marked chunks in phase 1 of garbage collection to avoid multiple
atime updates via expensive utimensat() calls.
Recently touched chunks are tracked by storing the chunk digests in
an LRU cache of fixed capacity. By inserting a digest, the chunk will
be the most recently touched one, and if it was already present in
the cache before the insert, the atime update can be skipped. The
cache capacity of 1024 * 1024 was chosen as a compromise between
required memory usage and the size of an index file referencing a
4 TiB fixed-size chunked image (with 4 MiB chunk size).
The previous change to iterate over the datastore contents using the
datastore's iterator helps for increased cache hits, as subsequent
snapshots are most likely to share common chunks.
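A rough sketch of the skip logic in phase 1; a HashSet stands in for
the fixed-capacity LRU cache here:
```rust
use std::collections::HashSet;
use std::io;

// Illustrative sketch only: a HashSet stands in for the fixed-capacity LRU
// cache, and `update_atime` for the utimensat() call on the chunk file.
fn mark_chunk(
    recently_touched: &mut HashSet<[u8; 32]>,
    digest: [u8; 32],
    update_atime: impl Fn(&[u8; 32]) -> io::Result<()>,
) -> io::Result<()> {
    // `insert` returns true if the digest was not tracked yet
    if recently_touched.insert(digest) {
        // first time this chunk is seen in phase 1 -> touch it on disk
        update_atime(&digest)?;
    }
    // otherwise the chunk was marked recently, skip the expensive syscall
    Ok(())
}
```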
Basic benchmarking:
Number of utimensat calls shows a significant reduction:
unpatched: 31591944
patched: 1495136
Total GC runtime shows a significant reduction (average of 3 runs):
unpatched: 155.4 ± 3.5 s
patched: 22.8 ± 0.5 s
VmPeak measured via /proc/self/status before and after
`mark_used_chunks` (proxmox-backup-proxy was restarted in between
for normalization, average of 3 runs):
unpatched before: 1196028 ± 0 kB
unpatched after: 1196028 ± 0 kB
patched before: 1163337 ± 28317 kB
patched after: 1330906 ± 29280 kB
delta: 167569 kB
Dependence on the cache capacity:
capacity runtime[s] VmPeakDiff[kB]
1*1024 66.221 0
10*1024 36.164 0
100*1024 23.141 0
1024*1024 22.188 101060
10*1024*1024 23.178 689660
100*1024*1024 25.135 5507292
Description of the PBS host and datastore:
CPU: Intel Xeon E5-2620
Datastore backing storage: ZFS RAID 10 with 3 mirrors of 2x
ST16000NM001G, mirror of 2x SAMSUNG_MZ1LB1T9HALS as special
Namespaces: 45
Groups: 182
Snapshots: 3184
Index files: 6875
Deduplication factor: 44.54
Original data usage: 120.742 TiB
On-Disk usage: 2.711 TiB (2.25%)
On-Disk chunks: 1494727
Average chunk size: 1.902 MiB
Distribution of snapshots (binned by month):
2023-11 11
2023-12 16
2024-01 30
2024-02 38
2024-03 17
2024-04 37
2024-05 17
2024-06 59
2024-07 99
2024-08 96
2024-09 115
2024-10 35
2024-11 42
2024-12 37
2025-01 162
2025-02 489
2025-03 1884
Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=5331
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Instead of iterating over all index files found in the datastore in
an unstructured manner, use the datastore iterators to logically
iterate over them as other datastore operations will.
This allows to better distinguish index files in unexpected locations
from ones in their expected location, warning the user about
unexpected ones so they can act on possible misconfigurations.
Further, this will allow to more easily integrate marking snapshots
with missing chunks as incomplete/corrupt, and helps improve cache
hits when introducing LRU caching to avoid multiple atime updates in
phase 1 of garbage collection.
This now iterates twice over the index files, as indices in
unexpected locations are still considered by generating the list of
all index files to be found in the datastore and removing regular
index files from that list, leaving unexpected ones behind.
Further, align terminology by renaming the `list_images` method to
a more fitting `list_index_files` and the variable names accordingly.
This will reduce possible confusion, since throughout the codebase
and in the documentation, files referencing the data chunks are
referred to as index files. The term image, on the other hand, is
associated
with virtual machine images and other large binary data stored as
fixed-size chunks.
Basic benchmarking:
Total GC runtime shows no significant change (average of 3 runs):
unpatched: 155.4 ± 2.6 s
patched: 155.4 ± 3.5 s
VmPeak measured via /proc/self/status before and after
`mark_used_chunks` (proxmox-backup-proxy was restarted in between
for normalization, no changes for all 3 runs):
unpatched before: 1196032 kB
unpatched after: 1196032 kB
patched before: 1196028 kB
patched after: 1196028 kB
Image listing shows a slight increase due to the switch to a HashSet
(average of 3 runs):
unpatched: 64.2 ± 8.4 ms
patched: 72.8 ± 3.7 ms
Description of the PBS host and datastore:
CPU: Intel Xeon E5-2620
Datastore backing storage: ZFS RAID 10 with 3 mirrors of 2x
ST16000NM001G, mirror of 2x SAMSUNG_MZ1LB1T9HALS as special
Namespaces: 45
Groups: 182
Snapshots: 3184
Index files: 6875
Deduplication factor: 44.54
Original data usage: 120.742 TiB
On-Disk usage: 2.711 TiB (2.25%)
On-Disk chunks: 1494727
Average chunk size: 1.902 MiB
Distribution of snapshots (binned by month):
2023-11 11
2023-12 16
2024-01 30
2024-02 38
2024-03 17
2024-04 37
2024-05 17
2024-06 59
2024-07 99
2024-08 96
2024-09 115
2024-10 35
2024-11 42
2024-12 37
2025-01 162
2025-02 489
2025-03 1884
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Refactor the archive type and index file reader opening with its
error handling into a helper method for better reusability.
This allows to use the same logic for both expected and unexpected
image paths when iterating through the datastore in a hierarchical
manner.
Improve error handling by switching to anyhow's error context.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Until now, errors are shown ignoring the anyhow error context. In
order to allow the garbage collection to return additional error
context, format the error including the context as a single line.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add a boolean return type to LruCache::insert(), telling if the node
was already present in the cache or if it was newly inserted.
This will allow to use the LRU cache for garbage collection, where
it is required to skip atime updates for chunks already marked in
use.
This improves phase 1 garbage collection performance by avoiding
multiple atime updates.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Currently, the only way to delete a comment on a token is to set it to
just spaces. Since we trim it in the endpoint, it gets deleted as a
side effect. This allows the comment to be deleted properly.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Using a single thread for reading is not optimal in some cases, e.g.
when the underlying storage can handle reads from multiple threads in
parallel.
We use the ParallelHandler to handle the actual reads. Make the
sync_channel buffer size depend on the number of threads so we have
space for two chunks per thread (but keep the minimum at 3, like
before).
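A small sketch of that buffer sizing (names are illustrative, not the
actual tape backup code):
```rust
use std::sync::mpsc::{sync_channel, Receiver, SyncSender};

// Illustrative sketch: `read_threads` comes from the tape backup job
// configuration in the real code, and the channel carries read chunks
// from the reader threads to the tape writer.
fn make_chunk_channel(read_threads: usize) -> (SyncSender<Vec<u8>>, Receiver<Vec<u8>>) {
    // space for two chunks per reader thread, but keep the old minimum of 3
    let buffer_size = (read_threads * 2).max(3);
    sync_channel(buffer_size)
}
```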
How this impacts the backup speed largely depends on the underlying
storage and how the backup is laid out on it.
I benchmarked the following setups:
* Setup A: relatively spread out backup on a virtualized pbs on single HDDs
* Setup B: mostly sequential chunks on a virtualized pbs on single HDDs
* Setup C: backup on virtualized pbs on a fast NVME
* Setup D: backup on bare metal pbs with ZFS in a RAID10 with 6 HDDs
and 2 fast special devices in a mirror
(values are reported in MB/s as seen in the task log, caches were
cleared between runs, backups were bigger than the memory available)
setup 1 thread 2 threads 4 threads 8 threads
A 55 70 80 95
B 110 89 100 108
C 294 294 294 294
D 118 180 300 300
So there are cases where multiple read threads speed up the tape backup
(dramatically). On the other hand there are situations where reading
from a single thread is actually faster, probably because we can read
from the HDD sequentially.
I left the default value of '1' to not change the default behavior.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[TL: update comment about mpsc buffer size for clarity and drop
commented-out debug-code]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In preparation for skipping over snapshots when synchronizing with
encrypted/verified only flags set. In these cases, the manifest has
to be fetched from the remote and its status checked. If the
snapshot should be skipped, the snapshot directory including the
temporary manifest file has to be cleaned up, given the snapshot
directory has been newly created. By reorganizing the current
snapshot pull logic, this can be achieved more easily.
The `corrupt` flag will be set to `false` in the snapshot
prefiltering, so the previous explicit distinction for newly created
snapshot directories need not be preserved.
No functional changes intended.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The template files for this one have simply been copied from PVE,
including the HTML template.
In PBS we actually don't provide any HTML templates for any other type
of notification, so especially with the template override mechanism on
the horizon, it's probably better to remove this template until we
also provide an HTML version for the other types as well.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer, any data needed in the
template is mapped into the new dedicated template data type.
This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.
This commit also tries to unify the style and naming of template
variables.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer, any data needed in the
template is mapped into the new dedicated template data type.
This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.
This commit also tries to unify the style and naming of template
variables.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer, any data needed in the
template is mapped into the new dedicated template data type.
This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.
This commit also tries to unify the style and naming of template
variables.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer, any data needed in the
template is mapped into the new dedicated template data type.
This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.
This commit also tries to unify the style and naming of template
variables.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer, any data needed in the
template is mapped into the new dedicated template data type.
This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.
This commit also tries to unify the style and naming of template
variables.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer, any data needed in the
template is mapped into the new dedicated template data type.
This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.
This commit also tries to unify the style and naming of template
variables.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer, any data needed in the
template is mapped into the new dedicated template data type.
This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.
This commit also tries to unify the style and naming of template
variables.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer, any data needed in the
template is mapped into the new dedicated template data type.
This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.
This commit also tries to unify the style and naming of template
variables.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
The next commit is going to add a separate submodule for notification
template data types.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Empty backup groups are not visible in the API or GUI. This led to a
confusing issue where users were unable to create a group because it
already existed and was still owned by another user. Resolve this
issue by removing the group if its last snapshot is removed.
Also fixes an issue where removing a group used the non-atomic
`remove_dir_all()` function when destroying a group unconditionally.
This could lead to two different threads suddenly holding a lock to
the same group. Make sure that the new locking mechanism is used,
which prevents that, before removing the group. This is also a bit
more conservative now, as it specifically removes the owner file and
group directory separately to avoid accidentally removing snapshots in
case we made an oversight.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
when two clients change the owner of a backup store, a race condition
can arise. add locking to avoid this.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
adds double stat'ing and removes directory hierarchy to bring manifest
locking in-line with other locks used by the BackupDir trait.
if the old locking mechanism is still supposed to be used, this still
falls back to the previous lock file. however, we already add double
stat'ing since it is trivial to do here and should only provide better
safety when it comes to removing locks.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
to avoid issues when removing a group or snapshot directory where two
threads hold a lock to the same directory, move locking to the tmpfs
backed '/run' directory. also adds double stat'ing to make it possible
to remove locks without certain race condition issues.
this new mechanism is only employed when we can be sure that a reboot
has occurred, so that all processes are using the new locking
mechanism. otherwise, two separate processes could assume they have
exclusive rights to a group or snapshot.
bumps the rust version to 1.81 so we can use `std::fs::exists` without
issue.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
[TL: drop unused format_err import]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to avoid duplicate code, add helpers for locking groups and snapshots
to the BackupGroup and BackupDir traits respectively and refactor
existing code to use them.
this also adapts error handling by adding relevant context to each
locking helper call site. otherwise, we might lose valuable
information useful for debugging. note, however, that users that
relied on specific error messages will break.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Since we are now exposing functions to get the password and
encryption password, this should be private.
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Allows to load credentials passed down by systemd. A possible use-case
is safely storing the server's password in a file encrypted by the
system's TPM, e.g. via
```
systemd-ask-password -n | systemd-creds encrypt --name=proxmox-backup-client.password - my-api-token.cred
```
which then can be used via
```
systemd-run --pipe --wait --property=LoadCredentialEncrypted=proxmox-backup-client.password:my-api-token.cred \
proxmox-backup-client ...
```
or from inside a service.
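For illustration, a consumer could read such a credential roughly like
this; systemd exposes passed-down credentials via
$CREDENTIALS_DIRECTORY, and the concrete credential name is just an
example:
```rust
use std::path::Path;

// Illustrative sketch: systemd exposes passed-down credentials as files in
// $CREDENTIALS_DIRECTORY; the credential name used here is just an example.
fn load_systemd_credential(name: &str) -> Option<String> {
    let dir = std::env::var_os("CREDENTIALS_DIRECTORY")?;
    let data = std::fs::read_to_string(Path::new(&dir).join(name)).ok()?;
    Some(data.trim_end().to_string())
}

// e.g.: load_systemd_credential("proxmox-backup-client.password")
```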
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Adapt the description for the backup specification to use
`archive-name` and `type` over `label` and `ext`, to be in line with
the terminology used in the documentation.
Further, explicitly describe the `path` as `source-path` to be less
ambiguous.
In order to avoid formatting issues in the man pages because of line
breaks after a hyphen, show the backup specification description on
multiple lines.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Currently, if any `?`/`bail!` happens between mounting and completing
the creation process, unmounting will be skipped. Adding this guard
solves that problem and makes it easier to add things in the future
without having to worry about a disk not being unmounted in case of a
failed creation.
Reported-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Tested-by: Christian Ebner <c.ebner@proxmox.com>
fixes the clippy warning on types T implementing Copy:
```
warning: using `clone` on type `T` which implements the `Copy` trait
```
followed by formatting fixups via `cargo fmt`.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Drop the pub scope of `DataStore`'s `list_images` method.
This method is only used to generate a list of index files found in
the datastore for iteration during garbage collection. There are no
other call sites and this is intended to only be used within the
module itself. This allows to be more flexible for future method
signature adaptations.
No functional changes.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
by switching on deprecations and using some backported types already
available on 0.14:
- use body::HttpBody::collect() instead of to_bytes() directly on Body
- use server::conn::http2::Builder instead of server::conn::Http with
http2_only
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to avoid upgrading to hyper 1 / http 1 right now. this is a Debian/Proxmox
specific workaround.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Commit:
efc09f63c (docs: tech overview: avoid 'we' and other small style fixes/additions)
introduced the comparison with 13 lottery games, but sadly without any
mention of how to arrive at that number.
When calculating, I arrived at 8-9 games (8 is more probable, 9 is
less probable), so rewrite to 'chance is lower than 8 lottery games' and
give the calculation directly inline as a reference.
Fixes: efc09f63 ("docs: tech overview: avoid 'we' and other small style fixes/additions")
Suggested-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[TL: reference commit that introduced this]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The type `Box<dyn IndexFile + Send>>, usize, Vec<(usize, u64)>` is not
Sync so it makes more sense to use Rc. This is suggested by clippy.
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Use the UTIME_NOW and UTIME_OMIT constants defined in libc crate
instead of redefining them. This improves consistency, as utimensat
and its timespec parameter are also defined via the libc crate.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
When wiping a block device with a GUID partition table, the header
backup might get left behind at the end of the disk. This commit also
wipes the last 4096 bytes of the disk, making sure that a GPT header
backup is erased, even from disks with 4k sector sizes.
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Replace the external invocation of `dd` with direct file writes using
`std::os::unix::fs::FileExt::write_all_at` to zero out the start of the
disk.
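A rough sketch combining both wipe steps; the disk size has to be
determined separately (stat() reports 0 for block devices), and how
much of the start gets zeroed is an assumption here:
```rust
use std::fs::File;
use std::io;
use std::os::unix::fs::FileExt;

// Illustrative sketch; `disk_size` has to be determined separately for block
// devices, and the amount zeroed at the start of the disk is an assumption.
fn wipe_gpt_headers(file: &File, disk_size: u64) -> io::Result<()> {
    let zeroes = vec![0u8; 4096];
    // zero the start of the disk (protective MBR and primary GPT header)
    file.write_all_at(&zeroes, 0)?;
    // also zero the last 4096 bytes, so a GPT header backup is erased even
    // on disks with 4k sector sizes
    if disk_size >= 4096 {
        file.write_all_at(&zeroes, disk_size - 4096)?;
    }
    Ok(())
}
```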
Co-authored-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Mention in the docs and the api parameter description the limitations
for archive name labels. They may only contain alphanumerics, hyphens
and underscores to match the regex pattern.
By setting this in the api parameter description, it will be included
in the man page for proxmox-backup-client.
Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=6185
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add the new pbs-api-types crate to the cargo override section. Reorder
the overrides to be alphabetic.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Previously we just wrote to syslog directly. This doesn't work anymore
since the tracing update and we won't get any output in the tasklog.
Reported-by: https://forum.proxmox.com/threads/158764/
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Fixes a race condition where the backup upload stream can miss an
error returned by pxar::create_archive, because the error state is
only set after the backup stream was already polled.
On instantiation, `PxarBackupStream` spawns a future handling the
pxar archive creation, which sends the encoded pxar archive stream
(or streams in case of split archives) through a channel, received
by the pxar backup stream on polling.
In case this channel is closed as signaled by returning an error, the
poll logic will propagate an eventual error occurred during pxar
creation by taking it from the `PxarBackupStream`.
As this error might not have been set just yet, this can lead to
incorrectly terminating a backup snapshot with success, even though an
error occurred.
To fix this, introduce a dedicated notifier for each stream instance
and wait for the archiver to signal it has finished via this
notification channel. In addition, extend the `PxarBackupStream` by a
`finished` flag to allow early return on subsequent polls, which
would otherwise block, waiting for a new notification.
In case of premature termination of the pxar backup stream, no
additional measures have to be taken, as the abort handle already
terminates the archive creation.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The prune schedule simulator returned "X/Y is not an integer" error
for a schedule that uses a `start..end` hour range combined with a
`/`-separated time step-gap, while that works out fine for actual
prune jobs in PBS.
Previously, a schedule like `5..23/3` was mistakenly interpreted as
hour-start = `5`, hour-end = `23/3`, hour-step = `1`, resulting in
above parser error for hour-end. By splitting the right hand side on
`/` to extract the step and normalizing that, we correctly get
hour-start = `5`, hour-end = `23`, hour-step = `3`.
Short reminder: the hours and minutes parts are treated separately and
can both be declared as range, step or range-step, so `5..23/3:15` does
not mean the step size is 3:15 (i.e. 3.25 hours or 195 minutes) but
rather 3 hours step size and each resulting interval happens on the
15 minute of that hour.
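The simulator itself is JavaScript; the following Rust sketch only
illustrates the corrected split of an hour term like `5..23/3`:
```rust
// Illustrative sketch of parsing a "start..end/step" hour term like "5..23/3";
// the real prune schedule simulator implements this in JavaScript.
fn parse_hour_term(term: &str) -> Option<(u32, u32, u32)> {
    // first split off the step, so "23/3" is never treated as the range end
    let (range, step) = match term.split_once('/') {
        Some((range, step)) => (range, step.parse().ok()?),
        None => (term, 1),
    };
    let (start, end) = match range.split_once("..") {
        Some((start, end)) => (start.parse().ok()?, end.parse().ok()?),
        None => {
            let single = range.parse().ok()?;
            (single, single)
        }
    };
    Some((start, end, step)) // "5..23/3" -> (5, 23, 3)
}
```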
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[TL: add context to commit message partially copied from bug report
and add a short reminder how these intervals work, can be confusing]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add new markers so that we can refer to the chapters.
Signed-off-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Use an intermediate variable for the frequently used datastore name
and backup snapshot name. While that's not often worth it, the
diff(stat) makes a good argument that it is here.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since commit 8ea00f6e ("allow to abort verify jobs"), errors
propagated up to the verify job's worker call site are interpreted as
job aborts.
The manifest update did not honor this, leading to the verify job
being aborted with the misleading log entry:
`verification failed - job aborted`
Instead, handle a manifest update error as non-fatal just like any
other verification related error: log it, including the error message,
and continue verification with the next item.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Fixes: 5b7f4455 ("docs: add manual page for verification.cfg")
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
[TL: add references to commit that this fixes]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Have our docgen tool generate a synopsis for the prune.cfg schema,
use that output in a new prune.cfg manpage, and include it in the
appropriate appendix of our html/pdf rendered admin guide.
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
[TL: expand commit message and keep alphabetical order for configs in
the guide.]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds the missing log context for cases where a prune is not executed
as a dedicated tokio task.
Commit 432de66a ("api: make prune-group a real workertask") moved the
prune group logic into its own tokio task conditionally.
However, the log context was missing for cases where no dedicated
task/thread is started, leading to the worker task state being
unknown after finish, as no logs are written to the worker task log
file.
Reported in the community forum:
https://forum.proxmox.com/threads/161273/
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
* Improved errors when panics occur and the panic message is a
formatted (not static) string. This worked already for &str literals,
but not for Strings.
Downcasting to both &str and String is also done by the Rust Standard
Library in the default panic handler. See:
b605c65b6e/library/std/src/panicking.rs (L777)
* Switched from eprintln! to tracing::error when logging panics in the
task scheduler.
Signed-off-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Fixes the suspicious_open_options clippy lint, for example:
```
warning: file opened with `create`, but `truncate` behavior not defined
--> src/api2/tape/restore.rs:1713:18
|
1713 | .create(true)
| ^^^^^^^^^^^^- help: add: `.truncate(true)`
|
= help: if you intend to overwrite an existing file entirely, call `.truncate(true)`
= help: if you instead know that you may want to keep some parts of the old file, call `.truncate(false)`
= help: alternatively, use `.append(true)` to append to the file instead of overwriting it
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#suspicious_open_options
```
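The resulting change looks roughly like this (not the exact call
sites):
```rust
use std::fs::{File, OpenOptions};
use std::io;
use std::path::Path;

// Before, `.create(true)` was used without defining the truncation behavior;
// now it is stated explicitly that an existing file gets overwritten.
fn open_for_overwrite(path: &Path) -> io::Result<File> {
    OpenOptions::new()
        .write(true)
        .create(true)
        .truncate(true)
        .open(path)
}
```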
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
As per its documentation [1]:
> If .create_new(true) is set, .create() and .truncate() are ignored.
This gets rid of the "file opened with `create`, but `truncate`
behavior not defined " clippy warnings.
[1] https://doc.rust-lang.org/std/fs/struct.OpenOptions.html#method.create_new
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Instead of using and depending on the `http` crate directly, use and
depend on the re-exported `hyper::http`. Adapt namespace prefixes
accordingly.
This makes sure the `hyper::http` types are version compatible and
allows to possibly depend on incompatible versions of `http` in the
workspace in the future.
No functional changes intended.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Currently, the list of backups only shows removed backups and is
missing backups that are kept, though they are shown correctly in the
calendar view.
The reason is that a refactor (see Fixes tag) moved the definition of
a custom field renderer referencing `me` to a scope where `me` is not
defined. This causes the renderer to error out for "kept" backups,
which apparently causes the grid to skip the rows altogether (without
any messages in the console).
Fix this by replacing the broken `me` reference.
Fixes: bb044304 ("prune sim: move PruneList to more static declaration")
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
We moved the whole code from the pbs-api-types subdirectory into the proxmox
git repository and build a rust debian package for the crate.
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Fixes:
warning: needless call to `as_bytes()`
--> pbs-tape/src/sg_pt_changer.rs:913:45
|
913 | let rem = SCSI_VOLUME_TAG_LEN - voltag.as_bytes().len();
| ^^^^^^^^^^^^^^^^^^^^^^^ help: `len()` can be called directly on strings: `voltag.len()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_as_bytes
= note: `#[warn(clippy::needless_as_bytes)]` on by default
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Commit da11d226 ("fix #5710: api: backup: stat known chunks on backup
finish") introduced a seemingly cheap server side check to verify
existence of known chunks in the chunk store by stating. This check
however does not scale for large backup snapshots which might contain
millions of known chunks, as reported in the community forum [0].
Revert the changes for now instead of making this opt-in/opt-out; a
more general approach has to be thought out to mark backup snapshots
which fail verification.
Link to the report in the forum:
[0] https://forum.proxmox.com/threads/158812/
Fixes: da11d226 ("fix #5710: api: backup: stat known chunks on backup finish")
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
[ TL: add tag to subject and shorten it ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this moves the parsing of the concrete DataStoreConfig as well as the
check whether a store is mounted to after the authorization checks.
otherwise we always check for all datastores whether they are mounted,
even if the requesting user has no privileges to list the specified
datastore anyway.
this may improve performance for large setups, as we won't need to stat
mounted datastores regardless of the user's privileges. this was
suggested on the mailing list [1].
[1]: https://lore.proxmox.com/pbs-devel/embeb48874-d400-4e69-ae0f-2cc56a39d592@93f95f61.com/
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
QEMU CLI option parsing requires doubling the commas in values; this
seems to also apply when a combined option is used to pass down the
key=value pairs to the internal options, like for the combined -drive
option that was replaced by the slightly lower-level blockdev option
in commit 668b8383 ("file restore: qemu helper: switch to more modern
blockdev option for drives"). So there we could now drop the comma
duplication, as blockdev directly interprets these options and thus
there is no need to escape the comma.
We missed two instances because they were not part of the "main"
format string, which broke some use cases.
Fixes: 668b8383 ("file restore: qemu helper: switch to more modern blockdev option for drives")
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Mira Limbeck <m.limbeck@proxmox.com>
[ TL: add more context, but it's a bit guesstimation ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Gotify and webhook targets will use the HTTP proxy settings from
node.cfg, the documentation should mention this.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Change detection mode set to metadata compares regular file entries'
metadata to the reference metadata archive of the previous run. The
`pxar::format::Stat` as stored in `pxar::Metadata` however does not
include the actual file size, it only partially stores information
gathered from stating the file.
This means that the actual file size is never compared and therefore,
if the file size did change but the other metadata information did
not (including the mtime, which might have been restored), that file
will be incorrectly reused.
A subsequent restore will however fail, because the expected file size
as encoded in the metadata archive does not match the file size as
stored in the payload archive.
Fix this by adding the missing file size check, comparing the size
for the given file against the one stored in the metadata archive.
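A minimal sketch of the added check; the type and field names below
are simplified placeholders, since the actual metadata type does not
carry the size itself:
```
// Simplified placeholder type standing in for the metadata stored in
// the reference metadata archive (which lacks the file size).
#[derive(PartialEq)]
struct EntryMetadata {
    mode: u32,
    mtime: i64,
}

// A file entry may only be reused if both the metadata and the actual
// file size match the values recorded for the previous run.
fn is_reusable(
    reference_meta: &EntryMetadata,
    reference_size: u64,
    current_meta: &EntryMetadata,
    current_size: u64,
) -> bool {
    reference_meta == current_meta && reference_size == current_size
}
```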
Link to issue reported in community forum:
https://forum.proxmox.com/threads/158722/
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
From the QEMU man page:
> The most explicit way to describe disks is to use a combination of
> -device to specify the hardware device and -blockdev to describe the
> backend. The device defines what the guest sees and the backend
> describes how QEMU handles the data. It is the only guaranteed stable
> interface for describing block devices and as such is recommended for
> management tools and scripting.
> The -drive option combines the device and backend into a single
> command line option which is a more human friendly. There is however
> no interface stability guarantee although some older board models
> still need updating to work with the modern blockdev forms.
From the perspective of live restore, there should be no behavioral
change, except that the used driver is now explicitly specified. The
'-device' options are still the same, the fact that 'if=none' is gone
shouldn't matter, because the '-device' option was already used to
define the interface (i.e. virtio-blk) and the 'id' option needed to
be replaced with 'node-name'.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Remove the `log` dependency in pbs-client and change all the invocations
to tracing logs.
No functional change intended.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
This was introduced by commit fdea4e53 ("client: implement prepare
reference method") to read a reference metadata archive for detection
of unchanged, reusable files when using change detection mode set to
`metadata`.
Avoid unnecessary cloning of the atomic reference counted
`BackupReader` instance, as it is used exclusively for this codepath.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Fixes the cargo doc warning:
```
warning: unresolved link to `UserInformation`
--> src/auth.rs:418:53
|
418 | /// Check if a userid is enabled and return a [`UserInformation`] handle.
| ^^^^^^^^^^^^^^^ no item named `UserInformation` in scope
|
= help: to escape `[` and `]` characters, add '\' before them like `\[` or `\]`
= note: `#[warn(rustdoc::broken_intra_doc_links)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the cargo doc lint:
```
warning: unclosed HTML tag `uuid`
--> pbs-datastore/src/datastore.rs:60:41
|
60 | /// - could not stat /dev/disk/by-uuid/<uuid>
| ^^^^^^
|
= note: `#[warn(rustdoc::invalid_html_tags)]` on by default
warning: unclosed HTML tag `uuid`
--> pbs-datastore/src/datastore.rs:61:26
|
61 | /// - /dev/disk/by-uuid/<uuid> is not a block device
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Also fix `Entries` link.
Fixes the cargo doc lint:
```
warning: redundant explicit link target
--> pbs-client/src/pxar/extract.rs:212:27
|
212 | /// * The [`Entry`][E]'s filename is invalid (contains nul bytes or a slash)
| ------- ^ explicit target is redundant
| |
| because label contains path that resolves to same destination
|
note: referenced explicit link target defined here
--> pbs-client/src/pxar/extract.rs:221:14
|
221 | /// [E]: pxar::Entry
| ^^^^^^^^^^^
= note: when a link's destination is not specified,
the label is used to resolve intra-doc links
= note: `#[warn(rustdoc::redundant_explicit_links)]` on by default
help: remove explicit link target
|
212 | /// * The [`Entry`]'s filename is invalid (contains nul bytes or a slash)
| ~~~~~~~~~
warning: redundant explicit link target
--> pbs-client/src/pxar/extract.rs:215:37
|
215 | /// fetching the next [`Entry`][E]), the error may be handled by the
| ------- ^ explicit target is redundant
| |
| because label contains path that resolves to same destination
|
note: referenced explicit link target defined here
--> pbs-client/src/pxar/extract.rs:221:14
|
221 | /// [E]: pxar::Entry
| ^^^^^^^^^^^
= note: when a link's destination is not specified,
the label is used to resolve intra-doc links
help: remove explicit link target
|
215 | /// fetching the next [`Entry`]), the error may be handled by the
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the cargo doc lint:
```
warning: this URL is not a hyperlink
--> pbs-datastore/src/data_blob.rs:555:5
|
555 | /// https://github.com/facebook/zstd/blob/dev/lib/common/error_private.h
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: bare URLs are not automatically turned into clickable links
= note: `#[warn(rustdoc::bare_urls)]` on by default
help: use an automatic link instead
|
555 | /// <https://github.com/facebook/zstd/blob/dev/lib/common/error_private.h>
| + +
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
otherwise users will get a `b.store is null` error in the console and
a loading spinner is shown for a while.
the issue in question seems to stem from the event handler that gets
attached when the "Prune & GC Jobs" tab is opened for a specific
datastore. however, that event handler should *not* be attached for
the "Datastore" -> "Prune & GC Jobs" panel. it seems that the event
handler does still get attached, and will fire in the "Datastore"
view if it hasn't fired while opened in a specific datastore
(it should only trigger a single time).
that scenario seems to occur when a different tab was previously
selected in a specific datastore and navigation is triggered via the
side bar from the "Datastore" -> "Prune GC Jobs" to a specific
datastore. that leads to the "Prune & GC Jobs" view for that specific
datastore being opened very briefly in which the event handler gets
attached, navigation then automatically moves to the previously
selected tab. this will stop the store from updating ensuring that
the event is never triggered. when we then move to
the "Datastore" -> "Prune & GC Jobs" tab again the event handler will
be triggered but the store of the view is null leading to the error.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Tested-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Fiona Ebner <f.ebner@proxmox.com>
We don't want to have lingering file descriptors on any fork + exec,
like the one the reload code from the proxmox-daemon crate (which we
use for the rest-server(s)) performs, as that can have serious side
effects and even cause hangs.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
[ TL: Reword commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
otherwise:
```
warning: unclosed HTML tag `uid`
--> proxmox-file-restore/src/main.rs:686:63
|
686 | /// "www-data", so we use a custom one in /run/proxmox-backup/<uid> instead.
| ^^^^^
|
= note: `#[warn(rustdoc::invalid_html_tags)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Otherwise:
```
warning: unresolved link to `PRIVILEGES`
--> pbs-config/src/acl.rs:15:71
|
15 | /// Map of pre-defined [Roles](Role) to their associated [privileges](PRIVILEGES) combination
| ^^^^^^^^^^ no item named `PRIVILEGES` in scope
|
= help: to escape `[` and `]` characters, add '\' before them like `\[` or `\]`
= note: `#[warn(rustdoc::broken_intra_doc_links)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
```
warning: field assignment outside of initializer for an instance created with Default::default()
--> pbs-datastore/src/chunker.rs:431:5
|
431 | ctx.total = buffer.len() as u64;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: consider initializing the variable with `chunker::Context { total: buffer.len() as u64, ..Default::default() }` and removing relevant reassignments
--> pbs-datastore/src/chunker.rs:430:5
|
430 | let mut ctx = Context::default();
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#field_reassign_with_default
= note: `#[warn(clippy::field_reassign_with_default)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the clippy lint:
```
warning: empty line after doc comment
--> src/tape/pool_writer/mod.rs:441:5
|
441 | / /// updated.
442 | |
| |_
...
448 | / pub fn append_snapshot_archive(
449 | | &mut self,
450 | | snapshot_reader: &SnapshotReader,
451 | | ) -> Result<(bool, usize), Error> {
| |_____________________________________- the comment documents this method
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#empty_line_after_doc_comments
= help: if the empty line is unintentional remove it
help: if the documentation should include the empty line include it in the comment
|
442 | ///
|
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Use the trait implementations of `ApiVersion` to perform operator
based version comparisons. This makes the comparison more readable
and reduces the risk for errors.
No functional change intended.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Derive and implement the traits to allow comparison of two
`ApiVersion` instances for more direct and easy api version
comparisons. Further, add some basic test cases to reduce risk of
regressions.
This is useful for e.g. feature compatibility checks by comparing api
versions of remote instances.
Example comparison:
```
api_version >= ApiVersion::new(3, 3, 0)
```
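A minimal sketch of what such a comparable version type can look like;
field and constructor names are assumptions for illustration, not the
actual api type:
```
// The derived ordering compares the fields lexicographically in
// declaration order: major first, then minor, then release.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
struct ApiVersion {
    major: u64,
    minor: u64,
    release: u64,
}

impl ApiVersion {
    fn new(major: u64, minor: u64, release: u64) -> Self {
        Self { major, minor, release }
    }
}

fn main() {
    let api_version = ApiVersion::new(3, 3, 2);
    assert!(api_version >= ApiVersion::new(3, 3, 0));
    assert!(api_version < ApiVersion::new(3, 4, 0));
}
```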
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The `ApiVersion` type was introduced in commit a926803b
("api/api-types: refactor api endpoint version, add api types")
including the `repoid`, added for completeness when converting from
a pre-existing `ApiVersionInfo` instance, as returned by the
`version` api endpoint.
Drop the additional `repoid` field, since this is currently not used,
can be obtained from the `ApiVersionInfo` as well and only hinders the
implementation for easy api version comparison.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Fixes the question_mark clippy lint:
```
warning: this `let...else` may be rewritten with the `?` operator
--> pbs-datastore/src/datastore.rs:101:5
|
101 | / let Some(ref device_uuid) = config.backing_device else {
102 | | return None;
103 | | };
| |______^ help: replace it with: `let ref device_uuid = config.backing_device?;`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#question_mark
= note: `#[warn(clippy::question_mark)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the lines_filter_map_ok clippy lint:
```
warning: `filter_map()` will run forever if the iterator repeatedly produces an `Err`
--> proxmox-restore-daemon/src/proxmox_restore_daemon/disk.rs:195:14
|
195 | .filter_map(Result::ok)
| ^^^^^^^^^^^^^^^^^^^^^^ help: replace with: `map_while(Result::ok)`
|
note: this expression returning a `std::io::Lines` may produce an infinite number of `Err` in case of a read error
--> proxmox-restore-daemon/src/proxmox_restore_daemon/disk.rs:193:18
|
193 | for f in BufReader::new(File::open("/proc/filesystems")?)
| __________________^
194 | | .lines()
| |____________________^
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#lines_filter_map_ok
= note: `#[warn(clippy::lines_filter_map_ok)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the single_component_path_imports clippy lint:
```
warning: this import is redundant
--> proxmox-file-restore/src/block_driver_qemu.rs:15:1
|
15 | use proxmox_systemd;
| ^^^^^^^^^^^^^^^^^^^^ help: remove it entirely
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#single_component_path_imports
= note: `#[warn(clippy::single_component_path_imports)]` on by default
warning: this import is redundant
--> proxmox-backup-client/src/mount.rs:19:1
|
19 | use proxmox_systemd;
| ^^^^^^^^^^^^^^^^^^^^ help: remove it entirely
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#single_component_path_imports
= note: `#[warn(clippy::single_component_path_imports)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the suspicious_doc_comments clippy lints:
```
warning: this is an outer doc comment and does not apply to the parent module or crate
--> proxmox-restore-daemon/src/main.rs:1:1
|
1 | ///! Daemon binary to run inside a micro-VM for secure single file restore of disk images
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#suspicious_doc_comments
= note: `#[warn(clippy::suspicious_doc_comments)]` on by default
help: use an inner doc comment to document the parent module or crate
|
1 | //! Daemon binary to run inside a micro-VM for secure single file restore of disk images
|
warning: this is an outer doc comment and does not apply to the parent module or crate
--> proxmox-restore-daemon/src/proxmox_restore_daemon/mod.rs:1:1
|
1 | ///! File restore VM related functionality
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#suspicious_doc_comments
help: use an inner doc comment to document the parent module or crate
|
1 | //! File restore VM related functionality
|
warning: this is an outer doc comment and does not apply to the parent module or crate
--> proxmox-restore-daemon/src/proxmox_restore_daemon/api.rs:1:1
|
1 | ///! File-restore API running inside the restore VM
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#suspicious_doc_comments
help: use an inner doc comment to document the parent module or crate
|
1 | //! File-restore API running inside the restore VM
|
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
The mount types were probably here for compatibility with older proxmox-sys.
Fixes the useless_conversion clippy lints:
```
warning: useless conversion to the same type: `std::os::fd::OwnedFd`
--> proxmox-backup-client/src/mount.rs:172:23
|
172 | let pr: OwnedFd = pr.into(); // until next sys bump
| ^^^^^^^^^ help: consider removing `.into()`: `pr`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#useless_conversion
= note: `#[warn(clippy::useless_conversion)]` on by default
warning: useless conversion to the same type: `std::os::fd::OwnedFd`
--> proxmox-backup-client/src/mount.rs:173:23
|
173 | let pw: OwnedFd = pw.into();
| ^^^^^^^^^ help: consider removing `.into()`: `pw`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#useless_conversion
warning: useless conversion to the same type: `pbs_api_types::BackupArchiveName`
--> proxmox-file-restore/src/main.rs:484:18
|
484 | &archive_name.try_into()?,
| ^^^^^^^^^^^^^^^^^^^^^^^
|
= help: consider removing `.try_into()`
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#useless_conversion
= note: `#[warn(clippy::useless_conversion)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the needless_option_as_deref clippy lint:
```
warning: derefed type is same as origin
--> proxmox-backup-client/src/main.rs:1154:21
|
1154 | payload_target.as_ref().as_deref(),
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: try: `payload_target.as_ref()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_option_as_deref
= note: `#[warn(clippy::needless_option_as_deref)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the doc_lazy_continuation clippy lint, e.g.:
```
warning: doc list item without indentation
--> src/server/pull.rs:764:5
|
764 | /// -- attempt to pull each NS in turn
| ^
|
= help: if this is supposed to be its own paragraph, add a blank line
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#doc_lazy_continuation
help: indent this line
|
764 | /// -- attempt to pull each NS in turn
| ++
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the get_first clippy lint:
```
warning: accessing first element with `matching_stores.get(0)`
--> src/bin/proxmox_backup_manager/datastore.rs:284:26
|
284 | if let Some(store) = matching_stores.get(0) {
| ^^^^^^^^^^^^^^^^^^^^^^ help: try: `matching_stores.first()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#get_first
= note: `#[warn(clippy::get_first)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
The current version check does not cover cases where the minor
version is 3, but the release version is below 11. Fix this by
extending the check accordingly.
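As an illustration of the shape of such a fix (the threshold constants
below are assumptions, not the actual required version), the release
component has to be compared once major and minor already equal the
minimum:
```
// Hypothetical minimum version used only for illustration.
const MIN: (u64, u64, u64) = (3, 3, 11);

fn version_supported(major: u64, minor: u64, release: u64) -> bool {
    major > MIN.0
        || (major == MIN.0 && minor > MIN.1)
        // previously missing: minor equals the minimum, so compare the release
        || (major == MIN.0 && minor == MIN.1 && release >= MIN.2)
}

fn main() {
    assert!(!version_supported(3, 3, 10));
    assert!(version_supported(3, 3, 11));
    assert!(version_supported(3, 4, 0));
}
```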
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
[ TL: re-sort line to go from bigger to smaller ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Currently, showing the Datastore summary page leads to errors since
the status returned by the API does not contain any fields that are
checked by the component rendering the datastore summary. We solve
this by first checking whether the datastore is currently mounted and
masking the element if it is currently unmounted.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
The tooltip text shown for the remove vanished flag when hovering
is incorrect for push direction. By using `sync target` over `local`,
make the text agnostic to the actual sync direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Our package uses <x>.<y>.<z>-<rev> as version format, but here we get
version=<x>.<y> and release=<z>, so we rendered the version like
<x>.<y>-<z>, which is rather wrong.
While the return value of the API call might be a bit odd and should
probably change (or at least gain a full version property), for now
it's what it is, so at least render it correctly.
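A minimal sketch of the corrected rendering (language-agnostic, shown
here in Rust for illustration):
```
// The API returns version = "<x>.<y>" and release = "<z>", so the full
// package version is "<x>.<y>.<z>", not "<version>-<release>".
fn render_package_version(version: &str, release: &str) -> String {
    format!("{version}.{release}")
}

fn main() {
    assert_eq!(render_package_version("3.3", "2"), "3.3.2");
}
```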
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
I made a mistake and applied the v1, not the v2, of the series; solve
this by merging the actual v2. Albeit this should not be done too
frequently to avoid making the git history too messy – sorry!
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Without this check, if a mount unit is present, but the file system is
not mounted, it will just get overwritten. The unit might belong to an
existing datastore.
There already is a check against a duplicate datastore, but only after
the mount unit is already overwritten and having the add-datastore
flag present is not a precondition to trigger the issue.
The check is done even if the newly created directory datastore is
removable. While in that case, the mount unit is not overwritten, the
conflict for the mount point is still present, so it is nice to fail
early.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In preparation to check for a pre-existing mount unit.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
Without this check, if a mount unit is present, but the file system is
not mounted, it will just get overwritten. The unit might belong to an
existing datastore.
There already is a check against a duplicate datastore, but only after
the mount unit is already overwritten and having the add-datastore
flag present is not a precondition to trigger the issue.
The check is done even if the newly created directory datastore is
removable. While in that case, the mount unit is not overwritten, the
conflict for the mount point is still present, so it is nice to fail
early.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In preparation to check for a pre-existing mount unit.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[ TL: move format template variable directly into string ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Pass the `-E` option in order to, quoting its man-page, "don't use a
saved environment (the structure caching all cross-references), but
rebuild it completely".
As with reusing the environment one gets some empty results for
synopsis stuff depending on build order, for example the synopsis in
the command-syntax appendix HTML output is empty while the same
synopsis used for the dedicated HTML page is complete.
By making the build-log more verbose I noticed some emitted
'env-purge-doc' events from sphinx; while this itself might be
harmless (I didn't follow the rat tail to its end), it made me a bit
suspicious about caching and wrong/missing invalidation.
With ignoring the environment this is fixed; a diffoscope comparison
shows that not only the command-syntax page, but many others have the
various synopsis content added again. There are solely added lines,
none removed nor changed, so it seems fine to enable that option
without an in-depth sphinx review.
Note, I first suspected the use of a separate "doctree pickles" cache
directory (`-d` option), which is used for all output types besides
the man-pages one, which uses the default .doctree directory.
But changing the man-page target to also use the custom doctree cache
had no effect on the build-result whatsoever (compared with
diffoscope).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
these are particularly problematic since GC will walk the whole datastore tree
on the file system, and will thus pick up indices (but not chunks!) from nested
directories that are ignored in other code paths that use our regular
iterators..
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
and improve the variable naming while we are at it. this allows the check to be
re-used in other code paths, like when starting a garbage collection.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
there are two kinds of overlap we need to check here:
- two removable datastores backed by the same device must not have nested
relative paths on the device
- any two datastores must not have nested absolute paths
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
To be in line with the updated permission requirements, as
Datastore.Audit is now required to read and edit sync jobs in push
direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Sync jobs in push and pull direction require a different set of
privileges for the various api methods provided. Update the
descriptions to include the push direction and list them
accordingly.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Users require `Datastore.Audit` on the source datastore to read sync
jobs. Further restrict also the permissions to modify sync jobs in
push direction to include the `Datastore.Audit` permission on the
source, as otherwise a user is able to create or edit sync jobs in
push direction, but not able to see them.
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
* consistently use "medium" (singular), as only one is needed for
installation (installation-media.rst not renamed)
* add short introduction to recently added chapter "Installation Media"
* update minimum required flash drive storage space to 2 GB
* remove CD-ROM (too little storage space) but keep DVD
* mention explicitly that data get overwritten on installation media /
installation target disks
* mention that using `dd` will require root privileges
* add accidentally cut off text when copying from PVE docs
* add reference labels to currently needed section titles
* reword some paragraphs for completeness and readability
* mention all installation methods in the intro of "Server Installation"
* add the boot order as possible boot issue
* remove recently added redundant product website hyperlinks (as earlier
with commit 34407477e2)
* fix broken heading level of APT-based PBC repo
* slightly reorder sub-chapters of "Installation":
After adding the chapter "Installation Media" (d363818641), the chapter
order under "Installation" is:
1. System Requirements
2. Installation Media
3. Debian Package Repositories
4. Server Installation
5. Client Installation
But repos are more likely to be configured after installation, and for
other installation methods chapter links exist anyway. So to keep the
chapter order more logical, "Debian Package Repositories" is now moved
after "Client Installation".
Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
pbs-datastore::datastore::is_datastore_mounted_at() verifies that the
mounted file system has the expected UUID. Therefore we don't have to
error out if we try to mount an already mounted removable datastore.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Show the full error context when fetching the remote target
namespaces fails. As logging of the error is handled by the calling
sync job, reformat the error to include the error context before
returning.
Instead of the error
```
TASK ERROR: Fetching remote namespaces failed, remote returned error
```
the user is now presented with an error like
```
TASK ERROR: Fetching remote namespaces failed, remote returned error: datastore 'removable1' is not mounted
```
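A rough sketch of the pattern, assuming the usual anyhow context
handling; the function names are stand-ins, not the actual code:
```
use anyhow::{bail, Context, Result};

// Stand-in for the actual api call against the remote.
fn remote_api_call() -> Result<Vec<String>> {
    bail!("datastore 'removable1' is not mounted")
}

fn list_remote_namespaces() -> Result<Vec<String>> {
    // attach the context to the api result itself
    remote_api_call().context("Fetching remote namespaces failed, remote returned error")
}

fn main() {
    if let Err(err) = list_remote_namespaces() {
        // the alternate form renders the whole context chain in one line
        eprintln!("TASK ERROR: {err:#}");
    }
}
```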
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
To ensure we can bind to the emptyText of a display-edit field,
otherwise the empty text can be confusing.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We don't use the abbreviation anywhere else in our UI or docs.
To avoid any confusion about this (loaded) abbreviation, this
commits replaces it with the full word "Namespace".
There is more than enough space in the top bar for the larger button
size, even on low resolution screens (checked on 1280x700).
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
instead of resetting to the originalValue. This makes it behave like
other similar fields (e.g. the combogrid).
Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
In an earlier version of this series the datastore path was absolute
for removable datastores. This is simply a leftover that was missed
when changing that to relative paths.
Reported-by: Markus Frank <m.frank@proxmox.com>
Fixes: 94a068e31 ("api: node: allow creation of removable datastore through directory endpoint")
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
when loading the verification state for a local snapshot, it must
first be ensured that it actually exists, else the lack of manifest
will be interpreted as corrupt snapshot triggering a "resync" that is
actually a sync of all missing snapshots, not just the newer ones,
which is what's actually wanted here.
The diff is best seen by telling git to ignore the whitespace changes.
Fixes: 0974ddfa ("fix #3786: api: add resync-corrupt option to sync jobs")
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
[ TL: reword subject and add a bit to commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fixes a regression introduced when switching from plain strings used
for archive names to the BackupArchiveName api type in
commit addfae26 ("api types: introduce `BackupArchiveName` type").
The archive name is now always stored including the server archive
name extension. Adapt the check for which archive types to display
the progress log output to reflect this change.
Fixes: addfae26 ("api types: introduce `BackupArchiveName` type")
Reported-by: Max Carrara <m.carrara@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
the current phrase leads to clumsy log messages such as:
> datastore 'store' is in datastore is being unmounted
this commit re-phrases that to:
> datastore 'store' is unavailable: datastore is being unmounted
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
A lot of removable datastore actions can alter the system state
(mounting, unmounting), so require Sys.Modify for lack of a better
alternative.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
[ TL: improve commit subject and add access-description for create,
and delete, where we do a dynamic access check ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
since the SyncJobConfig struct now contains a 'sync-direction' property, we can
omit the 'direction' property of the SyncJobStatus struct. This makes a
few adaptations in the ui necessary:
* use the correct field
* handle 'pull' as default (since we don't necessarily get a
'sync-direction' in that case)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
In case we add another direction or another call site, doing it
without a wildcard match arm seems cleaner and more future-proof.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
[ TL: adapt subject/message slightly ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Jobs for both sync directions are now stored using the same `sync`
config section type, so drop the outdated helpers.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Add the sync direction for the sync job as optional config parameter
and refrain from using the config section type for conditional
direction check, as they are now the same (see previous commit).
Use the configured sync job parameter instead of passing it to the
various methods as function parameter and only filter based on sync
direction if an optional api parameter to distinguish/filter based on
direction is given.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Use `sync` as config section type string for both sync jobs in push
and pull direction, renaming the now combined config plugin to sync
plugin.
Commit bcd80bf9 ("api types/config: add `sync-push` config type for
push sync jobs") introduced the additional config type with the
intent to reduce possible misconfiguration. Partially revert this to
use the same config type string again, since the misconfiguration
can happen nevertheless (by editing the config type) and currently
sync job configs are only listed partially when fetched via the
config api endpoint. The filtering based on the additional api
parameter is however retained.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
filter by (remote) store, remote, id, owner, direction.
The local store is only included in the global view, not the
datastore-specific one.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of just the id, which makes the list in the global datastore
view a bit easier to digest (since it's now sorted by store first)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
but add a separate column for the direction so one still sees the
separate jobs.
change the 'local owner/user' to a single column, but add a tooltip in
the header to explain when it does what.
This makes the 'SyncJobsPullPushView' unnecessary, so delete it.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
so that one can list all sync jobs, both pull and push, at the same
time. To not confuse existing clients that only know of pull syncs, show
only them by default and make the 'all' parameter opt-in. (But add a
todo for 4.x to change that)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Commit addfae26 ("api types: introduce `BackupArchiveName` type")
introduced a dedicated archive name api type to add rust type
checking and bundle helpers to the api type. Since this, the backup
archive name to server archive name mapping is handled by its parser.
This however did not cover the `.conf` extension used for VM config
files. Add the missing `.conf` to `.conf.blob` to the match statement
and the test cases.
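A hedged sketch of the kind of mapping described above; heavily
simplified, the real parsing lives in the `BackupArchiveName` api type:
```
fn server_archive_name(client_name: &str) -> Result<String, String> {
    if client_name.ends_with(".pxar") {
        Ok(format!("{client_name}.didx"))
    } else if client_name.ends_with(".img") {
        Ok(format!("{client_name}.fidx"))
    } else if client_name.ends_with(".conf") {
        // the previously missing arm: VM config files are stored as blobs
        Ok(format!("{client_name}.blob"))
    } else {
        Err(format!("unexpected archive name extension for '{client_name}'"))
    }
}

fn main() {
    assert_eq!(server_archive_name("qemu-server.conf").unwrap(), "qemu-server.conf.blob");
}
```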
Fixes: addfae26 ("api types: introduce `BackupArchiveName` type")
Reported-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Specifically about jobs and how they behave when the datastore is not
mounted, how to create and use devices with multiple datastores on
multiple PBS instances, and options for how to handle failed unmounts.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
... and deserialize with default if field is missing in data.
Reported-by: Aaron Lauterer <a.lauterer@proxmox.com>
Fixes: 76609915d6 ("pbs-api-types: add mount_status field to DataStoreListItem")
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
So it is possible to reset it after a failed unmount, or abort an
unmount task by resetting it through the API.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
The same word occurring twice in succession can lead to the brain
skipping the second occurrence. Change the name of the archives in the
example from backup.pxar to archive-name.pxar to avoid that effect.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
[ TL: squash in Christian's suggestion to use 'archive-name.pxar' ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The `Add datastore` window labels the flag for creating a removable
datastore as `Removable datastore`, while creating the datastore via the
storage/disks interface will refer to it as `is removable`.
Use the same `Removable datastore` as label for both locations.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Explain that the change detection mode `data` makes sure that no files
are considered reusable, even if their metadata might match, and that
the use of ctime and inode number is not possible for the detection of
unchanged files if the filesystem was synced to a temporary location,
which is why the mtime and size are used for detection instead.
Also note the reduced deduplication when storing snapshots with
mixed archive formats on the same datastore.
Further, mention the backwards compatibility to older versions of the
Proxmox Backup Server.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The example commands in the Change Detection Mode / File Exclusion
section are missing the command in the client invocation. Add the
backup command to the examples, so they are actually valid.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Add onlineHelp link to the consent-banner docs section in the popup when
inserting the consent-banner text.
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
This allows users to add/edit new webhook targets.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-By: Stefan Hanreich <s.hanreich@proxmox.com>
If a datastore with a prune job is removed, the prune job is preserved
as it is stored in /etc/proxmox-backup/prune.cfg. We also create a
default prune job for every datastore – this means that when reusing a
datastore that previously existed, you end up with duplicate prune jobs.
To avoid this we check if a prune job already exists, and when it does,
we refrain from creating the default one. (We also check if specific
keep-options have been added, if yes, then we create the job
nevertheless.)
Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Switch over to using the controller of the info panel directly, avoiding
firing events, and add a single store load to cause the mask-logic
when the status update store goes from succeeding to failure.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
No point in querying RRD metrics if it will fail anyway, so stop them
like we stop the status store, and start them again once it can work.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Disabling basically was already done only on a transition edge from
"success" -> "failure" (= !success), as we stopped the periodic store
load in that case, thus we never trigger two "failures" after each
other without any user input.
But on success we always unconditionally fired an activate event, which
caused the status store to start its store updates, which in turn
immediately triggered a store load. So the verbose status call of the
info panel was now coupled to the 1s update period of the encompassing
summary panel, not the slower 5s period at which it actually wanted to
trigger an update.
So save the last state and check if it actually differs before causing
such action.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Without this, we immediately start the store updates even before the
browser created the (async) mount API request. So it's very likely
that the first store load will still get an error due to the backing
device of the datastore not being mounted yet. That in turn will
trigger our error detection behavior in the load event listener and
disable periodic store updates again.
Move the start of the update into the taskDone handler. We do not need
to check if the task succeeded, as either it did, and we will do
periodic updates, or it did not and we do at least one update to load
the current status and then stop again auto-loading the store anyway.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It's not nice if an existing, always visible button moves around
depending on the datastore type. Rather move the optional buttons to
the right and add a separator for visual grouping.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Some filesystems like FAT don't include a concept of UUIDs.
Instead, tools like blkid derive these
identifiers based on certain filesystem metadata, such as
volume serial numbers or other unique information. This does
however not follow the format specified in RFC 9562[1].
[1] https://datatracker.ietf.org/doc/html/rfc9562
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Data deletion is only possible if the datastore is mounted; we won't
attempt mounting it just for the purpose of deleting data.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
If a device houses multiple datastores, none of them will be mounted
automatically. If a device only contains a single datastore, it will be
mounted automatically. The reason for not mounting multiple datastores
automatically is that we don't know which one is actually wanted, and
since mounting all of them means all of them also have to be unmounted
manually, it made sense to have the user choose which to mount.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
We can't just directly delegate these commands to the API endpoints
since both mounting and unmounting are done in a worker, and that one
would be killed when the parent ends. In this case that would be the CLI
process, which basically ends right after spawning the worker.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Devices can contain multiple datastores.
If the specified path already contains a datastore, `reuse datastore`
has to be set so it'll be added without creating a chunk store.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Removable datastores can be mounted unless
- they are already mounted
- their device is not present
For unmounting, the maintenance mode is set to `unmount`,
which prohibits the starting of any new tasks involving any
IO; this mode is unset either
- on completion of the unmount
- on abort of the unmount task
If the unmounting itself should fail, the maintenance mode stays in
place and requires manual intervention by unsetting it in the config
file directly. This is intentional, as unmounting should not fail,
and if it does, the situation should be looked at.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
... at a specific location. Also adds two additional functions to
get the mount status, and ensuring a removable datastore is mounted.
Co-authored-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
... and add MaintenanceType::Delete to it. We also want to clear any
cache entries if we are deleting the datastore, not just if it is marked
as offline.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
To ensure the adapted handlebars escaper that keeps '=' as is gets
used, required for the consent banner.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Before showing the LoginView, check if we got a non-empty consent text
from the template. If there is a non-empty text, display it in a modal.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Add consent_text option to the node.cfg config. Embed the value into
index.html file using handlebars.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
we already have two different password schemas, `PBS_PASSWORD_SCHEMA`
being the stricter one, which ensures a minimum length of new
passwords. however, this wasn't used on the change password endpoint
before, so add it there too. this is also in-line with NIST's latest
recommendations [1].
[1]: https://pages.nist.gov/800-63-4/sp800-63b.html#passwordver
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
currently if a password is provided, we check whether the user that is
going to be updated can authenticate with it. later on, the password
is then set as the same password. this means that the password here
can only be changed if it is the exact same one that is already used.
so in essence, the password cannot be changed through this endpoint
already. remove all of this logic here in favor of the
`PUT /access/password` endpoint.
to keep the api stable for now, just ignore the parameter and add a
description that explains what to use instead.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Fix switching the source for group filters based on the sync jobs
sync direction.
The helper to set the local namespace for the group filters was
introduced in commit 43a92c8c ("ui: group filter: allow to set
namespace for local datastore"), but was never used because it got
lost during subsequent iterations of reworking the patch series.
The switching is corrected by:
- correctly initializing the local store and namespace for the group
filter of sync jobs in push direction in the controller init, if a
datastore is set.
- fixing an incorrect check for the sync direction in the remote
datastore selector change listener.
- conditionally switching namespace to be set for the group filter in
the remote and local namespace selector change listeners.
- conditionally switching datastore to be set for the group filter in
the local datastore selector change listener.
Reported-by: Lukas Wagner <l.wagner@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
These were put in place so that the initial release of the new
notification system for Proxmox Backup Server could already include
improved notification matchers, which at that time had not yet been
merged into proxmox-widget-toolkit.
In the meanwhile, the changes have been merged and released in
proxmox-widget-toolkit 4.2.4, hence we can remove the override.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
We need "notification: matcher: match-field: show known fields/values",
which was released in proxmox-widget-toolkit 4.2.4
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
PathPatterns is hard to distinguish from PathPattern, so would need to be
renamed anyway.. but there isn't really a reason to define a separate API type
just for this.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
When the user is only interested in a subset of the entries stored in
a file-level backup, it is convenient to be able to provide a list of
match patterns for the entries intended to be restored.
The required restore logic is already in place. Therefore, expose it
for the `proxmox-backup-client restore` command by adding the optional
array of patterns as command line argument and parse these before
passing them via the pxar restore options to the archive extractor.
Link to bugtracker issue:
https://bugzilla.proxmox.com/show_bug.cgi?id=2996
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Use the common api type with schema based input validation for all
match pattern parameters exposed via the api macro.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Instead of taking a plain string as input parameter, use the
corresponding api type performing additional input validation.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Introduces a dedicated api type `PathPattern` and the corresponding
format and input validation schema. Further, add a `PathPatterns`
type for collections of path patterns and implement required traits
to be able to replace currently defined api parameters.
In preparation for using this common api type for all api endpoints
exposing a match pattern parameter.
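A rough illustration of the newtype-with-validation idea only; the
actual api type is defined via the schema machinery and the validity
rule below is made up:
```
// Wrapper around a single match pattern, validated on construction.
struct PathPattern(String);

impl PathPattern {
    fn parse(pattern: &str) -> Result<Self, String> {
        // made-up validity rule for illustration purposes
        if pattern.is_empty() || pattern.contains('\0') {
            return Err(format!("invalid path pattern {pattern:?}"));
        }
        Ok(Self(pattern.to_string()))
    }
}

// Collection type so api parameters can take a list of patterns.
struct PathPatterns(Vec<PathPattern>);

fn main() {
    let patterns = PathPatterns(vec![PathPattern::parse("etc/**/*.conf").unwrap()]);
    assert_eq!(patterns.0.len(), 1);
}
```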
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Currently, common details regarding garbage collection are documented
in the backup client and the maintenance task. Deduplicate this
information by moving the details to the background section of the
maintenance task and reference that section in the backup client
part.
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Users should be made aware that the data stored in chunks outlives
the backup snapshots on pruning and that backups created using the
change-detection-mode set to metadata might reference chunks
containing files which have vanished since the previous backup, but
might still be accessible when access to the chunks' raw data is
possible (client or server side).
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add short section explaining the `resync-corrupt` option on the
sync-job.
Originally-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Add the `resync-corrupt` option to the ui and the
`proxmox-backup-manager` cli. It is listed in the `Advanced` section,
because it slows the sync-job down and is useless if no verification
job was run beforehand.
Originally-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
This option allows us to "fix" corrupt snapshots (and/or their chunks)
by pulling them from another remote. When traversing the remote
snapshots, we check if each exists locally, and if it does, whether its
last verification failed. If the local snapshot is broken and the
`resync-corrupt` option is turned on, we pull in the remote snapshot,
overwriting the local one.
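A sketch of the decision outlined above, with heavily simplified types;
not the actual pull logic:
```
enum VerifyState {
    Ok,
    Failed,
}

// Pull a remote snapshot if it is missing locally, or if it exists
// locally but its last verification failed and resync-corrupt is set.
fn should_pull(local_state: Option<VerifyState>, resync_corrupt: bool) -> bool {
    match local_state {
        None => true,
        Some(VerifyState::Failed) => resync_corrupt,
        Some(VerifyState::Ok) => false,
    }
}

fn main() {
    assert!(should_pull(None, false));
    assert!(should_pull(Some(VerifyState::Failed), true));
    assert!(!should_pull(Some(VerifyState::Failed), false));
}
```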
This is very useful and has been requested a lot, as there is currently
no way to "fix" corrupt chunks/snapshots even if the user has a healthy
version of it on their offsite instance.
Originally-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Add helper functions to retrieve the verify_state from the manifest of a
snapshot. Replace all the manual "verify_state" parsing with the helper
function.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Log also empty backup groups with no snapshots encountered during the
sync so the log output contains this additional information as well,
reducing possible confusion.
Nevertheless, continue with the regular logic, so that pruning of
vanished snapshots is honored.
Exemplary output in the sync jobs task log:
```
2024-11-22T18:32:40+01:00: Syncing datastore 'datastore', root namespace into datastore 'push-target-store', namespace 'test'
2024-11-22T18:32:40+01:00: Found 2 groups to sync (out of 2 total)
2024-11-22T18:32:40+01:00: skipped: 1 snapshot(s) (2024-11-22T13:40:18Z) - older than the newest snapshot present on sync target
2024-11-22T18:32:40+01:00: Group 'vm/200' contains no snapshots to sync to remote
2024-11-22T18:32:40+01:00: Finished syncing root namespace, current progress: 1 groups, 0 snapshots
```
Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add an anyhow context to errors and display the full error context
in the log output. Further, make it clear which errors stem from api
calls by explicitly mentioning this in the context message.
This also fixes incorrect error handling by placing the error context
on the api result instead of the serde deserialization error for
cases this was handled incorrectly.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: add missing format!
FG: run cargo fmt
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Parsing of the type based on the archive name extension is now
handled by `BackupArchiveName`.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: add removal of import
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Instead of using the plain String or slices of it for archive names,
use the dedicated api type and its methods to parse and check for
archive type based on archive filename extension.
Thereby the checks and mappings are kept in the api type, and
function parameters are restricted by the narrower wrapper type to
reduce potential misuse.
Further, instead of declaring and using the archive name constants
throughout the codebase, use the `BackupArchiveName` helpers to
generate the archive names for manifest, client logs and encryption
keys.
This allows for easy archive name comparisons using the same
`BackupArchiveName` type, at the cost of some extra allocations and
avoids the currently present double constant declaration of
`CATALOG_NAME`.
A positive ergonomic side effect of this is that commands now also
accept the archive type extension optionally, when passing the archive
name.
E.g.
```
proxmox-backup-client restore <snapshot> <name>.pxar.didx <target>
```
is equal to
```
proxmox-backup-client restore <snapshot> <name>.pxar <target>
```
The previously default mapping of any archive name extension to a blob
has been dropped in favor of consistent mapping by the api type
helpers.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: use LazyLock for constant archive names
FG: add missing import
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Introduces a dedicated wrapper type to be used for backup archive
names instead of plain strings and associated helper methods for
archive type checks and archive name mappings.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: use LazyLock for constant archive names reduces churn, and saves some
allocations
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Moving the `ArchiveType` to avoid crate dependencies on
`pbs-datastore`.
In preparation for introducing a dedicated `BackupArchiveName` api
type, allowing to set the corresponding archive type variant when
parsing the archive name based on its filename.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
these can occur in practice, and neither setting nor getting them throws an
error. if "invalid" ACLs are non-restorable, this means that creating a pxar
archive with such an ACL is possible, but restoring it isn't.
reported in our community forum:
https://forum.proxmox.com/threads/155477
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Acked-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
else, error messages using this path_info refer to the parent directory instead
of the actual file entry causing the problem. since this is just for
informational purposes, lossy conversion is acceptable.
Acked-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Known chunks are expected to be present on the datastore a-priori,
allowing clients to only re-index these chunks without uploading the
raw chunk data. The list of reusable known chunks is sent to the
client by the server, deduced from the indexed chunks of the previous
backup snapshot of the group.
If however such a known chunk disappeared (the previous backup
snapshot having been verified before that or not verified just yet),
the backup will finish just fine, leading to a seemingly successful
backup. Only a subsequent verification job will detect the backup
snapshot as being corrupt.
In order to reduce the impact, stat the list of previously known
chunks when finishing the backup. If a missing chunk is detected, the
backup run itself will fail and the previous backup snapshot's verify
state is set to failed.
This prevents the same snapshot from being reused by another,
subsequent backup job.
Note:
The current backup run might have been just fine, if the now missing
known chunk is not indexed. But since there is no straightforward
way to detect which known chunks have not been reused in the fast
incremental mode for fixed index backups, the backup run is
considered failed.
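A simplified sketch of the check (hypothetical helper, not the actual
server code): stat every previously known chunk when finishing the
backup and fail the run if one went missing:
```
use std::path::Path;

fn verify_known_chunks<'a, I>(chunk_paths: I) -> Result<(), String>
where
    I: IntoIterator<Item = &'a Path>,
{
    for path in chunk_paths {
        // a failing stat means the chunk vanished from the chunk store
        if std::fs::metadata(path).is_err() {
            return Err(format!("known chunk {path:?} no longer exists"));
        }
    }
    Ok(())
}
```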
link to issue in bugtracker:
https://bugzilla.proxmox.com/show_bug.cgi?id=5710
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Various smaller adaptations such as capitalization of the start of
sentences, expansion of abbreviations and shortening of too long
error messages.
To improve consistency with the rest of the error messages for the
sync job in push direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: use "skipping" for non-owner-groups - we haven't started uploading at that
point, there is nothing to "abort"
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
so that they are logged in the success case, since the error case already has
its own log messages.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Mixing of terms only makes the errors harder to understand.
In order to make error messages more intuitive, always refer to the
sync push target as remote, mention the remote explicitly and/or
improve messages where needed.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
When updating the datastore config using `proxmox-backup-manager` we
need to make an api-call, because the api-route starts a tokio task to
update the proxy-cache and the client will kill the task if we don't
wait. With an api-call the tokio task will be executed on the api
process and runs in the background while the endpoint handler has
already returned.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
When creating a datastore without the "reuse-datastore" option and the
datastore contains a `lost+found` directory (which is quite common), the
creation fails. Add `lost+found` to the ignore list.
Reported here: https://forum.proxmox.com/threads/bug-when-adding-new-storage-task-error-datastore-path-is-not-empty.157629/#post-721733
Fixes: 6e101ff757 ("fix #5439: allow to reuse existing datastore")
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
FG: slight code style change
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Permissions are stored in the lower 9 bits (rwxrwxrwx),
so we have to mask `st_mode` with 0o777.
The datastore root dir is created with 755, the `.chunks` dir and its
contents with 750 and the `.lock` file with 644; this changes the
expected permissions accordingly.
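A minimal sketch of the comparison with the mask applied; only the
lower 9 permission bits of `st_mode` are relevant here:
```
fn permissions_match(st_mode: u32, expected: u32) -> bool {
    (st_mode & 0o777) == expected
}

fn main() {
    // a directory typically stats as file-type bits | permissions,
    // e.g. 0o040750, which still has to match an expected 0o750
    assert!(permissions_match(0o040750, 0o750));
    assert!(!permissions_match(0o040750, 0o700));
}
```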
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Fixes: 6e101ff757 ("fix #5439: allow to reuse existing datastore")
Reviewed-By: Gabriel Goller <g.goller@proxmox.com>
With single ticks, the contained modes and archive formats are
displayed in italics; to be consistent with other sections of the
documentation, use inline code blocks instead.
Adapted line wrappings to the additional line length.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
Currently, the change detection modes are described in the client
usage section, not intended as an in-depth explanation of how these
client options work, but rather with focus on how to use them.
Therefore, add a reference to the more detailed technical section
regarding the change detection modes and reduce duplicate
explanations.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
Describe in more detail how the different change detection modes
operate and give insights into the inner workings, especially for the
more complex `metadata` mode, which involves lookahead caching and
padding calculation for reused payload chunks.
Suggested-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
this got reported via e-mail - seems this one occurrence was
forgotten. grepped through the docs (and the whole repo) for 'Mail'
and 'Gateway', and it seems this was the only one.
Fixes: cbd7db1d ("docs: certificates")
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Skip and warn the user for files which returned a stale file handle
error while reading the metadata associated to that file, or the
target in case of a symbolic link.
Instead of returning with a hard error, report the stale file handle
and skip over encoding this file entry in the pxar archive.
Link to issue in bugtracker:
https://bugzilla.proxmox.com/show_bug.cgi?id=5853
Link to thread in community forum:
https://forum.proxmox.com/threads/156822/
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Do not fail hard if a file open fails because of a stale file handle.
Warn the user and ignore the file, just like the client already does
in case of missing privileges to access the file.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Skip over the entries when a stale file handle is encountered during
generation of the entry list of a directory entry.
This will lead to the directory either not being backed up at all, if
the directory itself was invalidated (reading all child entries will
then fail as well), or being backed up without the entries which have
been invalidated.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Skip over the whole directory in case the file handle was invalidated
and therefore the filesystem type check returns with ESTALE.
Encode the directory start entry in the archive and the catalog only
after the filesystem type check, so the directory can be fully skipped.
At this point it is still possible to ignore the invalidated
directory. If the directory is invalidated afterwards, it will be
backed up only partially.
Introduce a helper method to report entries for which a stale file
handle was encountered, providing an optional path for cases where
the `Archiver`'s state does not store the correct path.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Switch from mutable reference to shared reference on `self` and drop
unused return value.
These helpers only write log messages, there is currently no need for
a mutable reference to `self`, nor to return a `Result`.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
else, combined with remove_vanished everything on the target side would be
removed.
Suggested-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the group filters need adaptations both for pushing and local pulling, so left
those out for now.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to avoid attempting to create them multiple times in case a whole hierarchy is
missing, and misleadingly logging that they were created multiple times as
well.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the error message for failure to sync the whole namespace was too long, so
split it into two lines and make it a warning.
the namespace creation one lacked context (that the error was caused by the
remote side or the connection) and had too much (the datastore, which is
already logged very often) at the same time.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
add a bit more detail for the pull side, and reword some comments on the push
side to make them easier to read.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
`try_exists` will return Ok(false) if the path is or contains a dangling
symlink; treat that as a hard error, just as if `try_exists` had returned
an Err(..).
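A minimal sketch of the intended handling (hypothetical helper name):
```
use std::path::Path;

use anyhow::{bail, Error};

// Treat Ok(false) from try_exists() - e.g. caused by a dangling symlink in or
// at the end of the path - the same as a hard error.
fn ensure_exists(path: &Path) -> Result<(), Error> {
    match path.try_exists() {
        Ok(true) => Ok(()),
        Ok(false) => bail!("path {path:?} does not exist or is a dangling symlink"),
        Err(err) => bail!("checking {path:?} failed: {err}"),
    }
}
```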
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
one million chunks are a bit much; considering that chunks represent
1-2MB (dynamic) to 4MB (fixed) of input data, that would mean 1-4TB of
re-used input data in a single snapshot.
64k chunks still represent 64-256GB of input data, which should be
plenty (and for such big snapshots with lots of re-used chunks, growing the
allocation of the HashSet should not be the bottleneck), and is also the
default capacity used for pulling.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
instead of calling this three times, call it once:
retrieving the highest backup timestamp doesn't need its own request, it can
re-use the "main" result, the corresponding helper can thus be dropped.
remove_vanished can re-use the earlier result - if anybody prunes the backup
group or adds new snapshots while the sync is running, the whole group sync is
racy and might cause spurious errors anyway.
since re-syncing the last already existing snapshot is not possible at the
moment, the code can also be simplified by treating such a snapshot as
already fully synced.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
a vanished namespace is one that
- exists on the target side, below the target prefix
- but within the specified max_depth
- and was not part of the synced namespaces
Co-developed-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
two parameters that only differ by a letter are not very nice for quickly
understanding semantics..
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
BackupGroup is serializable as its API parameter components, like BackupDir.
move the (always present) namespace closer to the group to improve readability.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to make it easier to distinguish from missing "Prune" privs when removing
vanished groups.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to make the complex logic code shorter and easier to parse. no semantic changes
intended.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Documents the caveats of sync jobs in push direction, explicitly
recommending setting up dedicated remotes for these sync jobs.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Expose the 'prune-delete-stats' as supported feature, in order for
the sync job in pull direction to pass the optional
`error-on-protected=false` flag to the api calls when pruning backup
snapshots, groups or namespaces.
Add and optionally expose the backup group delete statistics by adding the
return type to the corresponding REST API endpoints.
Clients can opt into the new behaviour by setting the new `error-on-protected`
flag to `false` when calling the api endpoints, which results in removal not
erroring out when encountering protected snapshots.
The default value for the flag remains `true` for now, to remain backwards
compatible with older clients expecting this behaviour.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: reworded commit message slightly
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
In order to load data using the same model from different sources,
set the proxy on the store instead of the model.
This allows to use the view to display sync jobs in either pull or
push direction, by setting the `sync-direction` on the view.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Switch the subject and labels to be shown based on the direction of
the sync job, and set the `sync-direction` parameter from the
submit values in case of push direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Show sync jobs in pull and in push direction in two separate grids,
visually separating them to limit possible misconfiguration.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Switch to the local datastore, used as sync source for jobs in push
direction, to get the available group filter options.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The namespace has to be set in order to get the correct groups to be
used as group filter options with a local datastore as source,
required for sync jobs in push direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
`list_sync_jobs` exists as api method in `api2::admin::sync` and
`api2::config::sync`.
Rename the admin api endpoint method to `list_config_sync_jobs` in
order to reduce possible confusion when searching/reviewing.
No functional change intended.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Expose and switch the config type for sync job operations based
on the `sync-direction` parameter, exposed on the required api endpoints.
If not set, the default config type is `sync` and the default sync
direction is `pull` for full backwards compatibility. Whenever
possible, determine the sync direction and config type from the sync
job config directly rather than requiring it as optional api
parameter.
Further, extend read and modify access checks by sync direction to
conditionally check for the required permissions in pull and push
direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Move the sync job owner check to its own helper function, for it to
be reused for the owner check for sync jobs in push direction.
No functional change intended.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Read access to sync jobs is only granted to users having at least
PRIV_DATASTORE_AUDIT permissions on the datastore. However, a user is
able to create or modify such jobs without having the audit
permission.
Therefore, further restrict the modify check by also including the
audit permissions.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Moves and refactors the sync_job_do function into the common server
sync module so that it can be reused for both sync directions, pull
and push.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Expose the sync job in push direction via a dedicated API endpoint,
analogous to the pull direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In order for sync jobs to be either pull or push jobs, allow to
configure the direction of the job.
Adds an additional config type `sync-push` to the sync job config, to
clearly distinguish sync jobs configured in pull and in push
direction and defines and implements the required `SyncDirection` api
type.
This approach was chosen in order to limit possible misconfiguration,
as unintentionally switching the sync direction could potentially
delete still required snapshots.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Adds the functionality required to push datastore contents from a
source to a remote target.
This includes syncing of the namespaces, backup groups and snapshots
based on the provided filters as well as removing vanished contents
from the target when requested.
While trying to mimic the pull direction of sync jobs, the
implementation differs, as access to the remote must be performed via
the REST API; this is not needed for the pull job, which can access
the local datastore directly via the filesystem.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add a dedicated api type for the `version` api endpoint and helper
methods for supported feature comparison.
This will be used to detect api incompatibility of older hosts, not
supporting some features.
Use the new api type to refactor the version endpoint and set it as
return type.
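A rough sketch of what such a type and feature check could look like (field
and method names are illustrative, not the actual API):
```
use serde::Deserialize;

// Illustrative sketch of a version api type with a feature comparison helper.
#[derive(Deserialize)]
struct VersionInfo {
    version: String,
    release: String,
    repoid: String,
    #[serde(default)]
    features: Vec<String>,
}

impl VersionInfo {
    // returns true if the remote advertises the given named feature
    fn supports_feature(&self, name: &str) -> bool {
        self.features.iter().any(|feature| feature == name)
    }
}
```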
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
To correctly account also for the number of deleted backup groups, in
preparation to correctly return the delete statistics when removing
contents via the REST API.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Make the `BackupGroupDeleteStats` exposable via the API by implementing
the ApiTypes trait via the api macro invocation and add an additional
field to account for the number of deleted groups.
Further, add a method to add up the statistics.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In preparation for the delete stats to be exposed as return type to
the backup group delete api endpoint.
Also, rename the private field `unremoved_protected` to a better
fitting `protected_snapshots` to be in line with the method names.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Adding the privileges to allow backup, namespace creation and prune
on remote targets, to be used for sync jobs in push direction.
Also adds dedicated roles setting the required privileges.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add `remote_acl_path` method which generates the acl path from the sync
job configuration. This helper allows to easily generate the acl path
from a given sync job config for privilege checks.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add a `remote_acl_path` helper method for creating acl paths for
remote namespaces, to be used by the priv checks on remote datastore
namespaces for e.g. the sync job in push direction.
Factor out the common path extension into a dedicated method.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Extend the component limit for ACL paths of `remote` to include
possible namespace components.
This allows to limit the permissions for sync jobs in push direction
to a namespace subset on the remote datastore.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Combine the two if statements checking the datastore's ACL path
components, which can be represented more concisely as one.
Further, extend the pre-existing comment to clarify that `datastore`
ACL paths are not limited to the datastore name, but might have
further sub-components specifying the namespace.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add a method `upload_index_chunk_info` to be used for uploading an
existing index and the corresponding chunk stream.
Instead of taking an input stream of raw bytes as the
`upload_stream`, this takes a stream of `MergedChunkInfo` objects
provided by the local chunk reader of the sync job's source.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In preparation for implementing push support for sync jobs.
Factor out the upload stream for merged chunks, which can be reused
to upload the local chunks to a remote target datastore during a
snapshot sync operation in push direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In preparation for push support in sync jobs.
Extend and move `BackupStats` into `backup_stats` submodule and add
method to create them from `UploadStats`.
Further, introduce `UploadCounters` struct to hold the Arc clones of
the chunk upload statistics counters, simplifying the housekeeping.
By bundling the counters into the struct, they can be passed as
single function parameter when factoring out the common stream future
in the subsequent implementation of the chunk upload for sync jobs
in push direction.
Co-developed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Allow to filter namespaces by a given callback function. This will be
used to pre-filter the list of namespaces to push to a remote target
for sync jobs in push direction, based on the privs of the sync job's
local user on the source datastore.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
`BackupGroup` implements `cmp::Ord`, so use that implementation for
comparing groups during sorting. Further, only sort the list of
backup groups after filtering, thereby possibly reducing the number
of required comparisons.
No functional changes.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
To ensure the fix for avoiding printing verbose log levels to
stderr/stdout is included, as that spams the log with the full worker
task log.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
I plugged in a USB pen drive and the whole disk list UI became
completely unusable because smartctl fails to handle that device due
to some `Unknown USB bridge [0x090c:0x1000 (0x1100)]` error.
That itself might be improvable, but most often I do not care at all
about smart data, and certainly not enough to let a failure to gather
it prevent me from viewing my disks (or the smart data from the disks
where it still could be gathered, for that matter!)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
removable datastores will have a PBS-managed mountpoint as path, direct
access to the field needs to be replaced with a helper that can account
for this.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
instead, require 'Tape.Write' or 'Tape.Modify' on '/tape' path.
This makes it possible for a TapeOperator to destroy tapes and for a
TapeAdmin to update the tape status, instead of just root@pam.
I opted for the path '/tape' since we don't have a dedicated acl
structure for single tapes, just '/tape/pool' (which does not apply
since not all tapes have to have a pool), '/tape/device' (which is
intended for drives/changers) and '/tape/jobs' (which is for jobs only).
Also we use that path for e.g. move_tape already.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
To ensure the recent fixes for the "infinite loop on early connection
abort when trying to detect the TLS handshake" problem are included.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Log the path of directory entries matched by an exclude pattern in
order to more conveniently debug possible issues.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
While traversing the filesystem tree, `generate_directory_file_list`
generates the list of entries to include for each directory level,
already matching the entry against the given list of match patterns.
Since this already excludes entries which should not be included in
the archive, the same check in the `add_entry` call is redundant,
as it is executed for each entry which is included in the list
generated by `generate_directory_file_list`.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Factors the kernel version compatibility check into its own method and
adds test cases for a set of expected and unexpected kernel versions.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
To improve the performance of the smartctl checks, especially when a lot
of disks are used, parallelize the checks using the `ParallelHandler`.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Avoid running `lsblk` twice when executing the `list_disk`
endpoint/command. This and the various other small nits improve the
performance of the endpoint.
Does not really fix, but is related to: #4961.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Avoid underflowing the catalog shell's position stack by navigating
below the archive's root directory into the catalog root. Otherwise
the shell will panic, as the root entry is always expected to be
present.
This treats the archive root directory as its own parent
directory, mimicking the behaviour of most common shells.
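Conceptually, the navigation then boils down to never popping past the root
entry (sketch with an illustrative stack type):
```
// Illustrative sketch: going up from the archive root stays at the root
// instead of popping the last (always expected) entry off the position stack.
fn go_up(position_stack: &mut Vec<String>) {
    if position_stack.len() > 1 {
        position_stack.pop();
    }
    // with only the root entry left, ".." is a no-op, mimicking `cd ..` in /
}
```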
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Disallows creating a datastore in root on the frontend side, by
filtering the '/' path. Add reuse-flag to permit us to open existing
datastores.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Disallow creating datastores in non-empty directories. Allow adding
existing datastores via a 'reuse-datastore' checkmark. This only checks
if all the necessary directories (.chunks + subdirectories and .lock)
exist and have the correct permissions. Note that the reuse-datastore
path does not open the datastore, so that we don't drop the
ProcessLocker of an existing datastore.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Seems this was forgotten while bumping it in Cargo.toml in dcd863e0.
Fixes: dcd863e0 ("bump proxmox-subscription to 0.5.0")
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
make it a bit easier to parse and include some examples of what the resync
might be able to pick up.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Decouple the actual filter logic from the skip reason output logic by
pulling the latter out of the filter closure.
This makes the filtering logic more intuitive.
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The last snapshot synced during the previous sync job might not have
been fully completed just yet (e.g. backup log still missing,
verification still ongoing, ...).
Explicitly mention the reason, and that the resync is therefore
intentional, in a comment in the filter logic.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
While checking which snapshots to sync, the filter logic incorrectly
included the first snapshot newer than the last synced one
unconditionally, bypassing the transfer last check for that one
snapshot. Following snapshots are correctly handled again.
E.g. of an incorrect sync by excerpt of a task log provided by a user
in the community forum [0], with transfer last set to 1:
```
skipped: 2 snapshot(s) (2024-09-29T18:00:28Z .. 2024-10-20T18:00:29Z) - older than the newest local snapshot
skipped: 5 snapshot(s) (2024-10-28T19:00:28Z .. 2024-11-01T19:00:32Z) - due to transfer-last
sync snapshot vm/110/2024-10-27T19:00:25Z
...
sync snapshot vm/110/2024-11-02T19:00:23Z
```
So not only the last snapshot, but also the first one newer than the
newest local snapshot was incorrectly synced.
By dropping the early return, leading to incorrect inclusion of the
snapshot, the transfer last condition is now correctly checked as
well.
Link to the issue reported in the community forum:
[0] https://forum.proxmox.com/threads/156873/
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Drop the payload offset output for the multi line formatting helper,
as the formatting was skewed anyways and the `stat` output is not
intended for debugging.
Commit 51e8fa96 ("client: pxar: include payload offset in entry
listing") introduced the payload offset output for pxar entries
in case of split archives, for both the single line and multi line
formatting helpers, for debugging purposes.
While the payload offset output is fine for the single line entry
formatting (generates the pxar dump output in debugging mode),
it should not be included in the multi line entry formatting helper,
used to generate the output for the `stat` command of the catalog
shell.
Fixes: 51e8fa96 ("client: pxar: include payload offset in entry listing")
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Make the catalog optional and use the pxar accessor for navigation if
the catalog is not provided.
This allows to use the metadata archive for navigation, as for split
pxar archives no dedicated catalog is encoded.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Adds helper functions to reimplement the catalog shell functionality
for snapshots being encoded as split pxar archives.
Just as the `CatalogReader`s find method, recursively iterate entries
and call the given callback on all entries matched by the match
patterns, starting from the given parent entry.
The helper has been split into 2 functions for the async recursion to
work.
Commit c0302805c "client: backup: conditionally write catalog for
file level backups" drops encoding of the dedicated catalog when
archives are encoded as split metadata/data archives with the
`change-detection-mode` set to `data` or `metadata`.
Since the catalog is not present anymore, fall back to using the pxar
metadata archives in the manifest (if present) for generating the
listing of contents in a compatible manner.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Implements the methods to dump the contents of a metadata pxar
archive using the same output format as used by the catalog dump.
The helper function has been split into 2 for async recursion to
work.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Perform the conversion from pxar file entries to catalog entry
attributes by implementing `TryFrom<&FileEntry<T>>` for
`DirEntryAttribute` and use that.
Allows the reuse for the catalog shell, when using the split pxar
archive instead of the catalog.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Move the logic to generate `FileEntry` paths with a given prefix to
its own helper function for it to be reusable for the catalog shell
implementation of split pxar archives.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Move the `get_remote_pxar_reader` helper function so it can be reused
also for getting the metadata archive reader instance for the catalog
dump.
No functional changes.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Move the `handle_root_with_optional_format_version_prelude` helper,
purely related to handling the root entry for pxar format version 2
archives, to the more fitting pxar tools submodule.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The lookup helper used to generate catalog entries via the metadata
archive for split archive backups is pxar specific, therefore move it
to the appropriate pxar tools submodule.
Change namespace visibility for the tools submodule to be accessible from
other crates, to be used for common pxar related helpers.
Switch helpers declared as `pub` to `pub(crate)` in order to keep module
encapsulation, and adapt the namespace for functions required to be `pub`.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add a short sentence describing the function of the remove vanished
flag since this has not been documented explicitly.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
As node.cfg is a rather general name that could clash with manual
pages from other packages, or at least be a bit confusing if there's
another tool providing a node.cfg.
In the long term we should rename all existing manual pages from
section 5 and 7, i.e. all those that are not directly named after an
executable. As those normally talk about product-specific configs and
topics where just the filename is not specific enough for a system
wide manual page.
Note that there was some off-list discussion with a proposal of using
"section suffixes" that man supports, which can be used to differentiate
between manual pages with the same name (and in the same section), for
example `man 3pm Git`. But to me this seems a bit more obscure and
potentially less discoverable, although it can be a great way to provide
a link alias for convenience.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add man page for the node.cfg config file.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
[ TL: pull out sorting of synopsis file list to separate commit ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It was fine as is, but IMO saving a few lines is nice, albeit it makes
the atomic fetch expressions look slightly more complex by wrapping them
directly with the HumanByte and TimeSpan from-constructors.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Spawn a new tokio task which about every minute displays the
cumulative progress of the backup for pxar, ppxar or img archive
streams. Catalog and metadata archive streams are excluded from the
output for better readability, and because the catalog upload lives
for the whole upload time, leading to possible temporal
misalignments in the output. The actual payload data is written via
the other streams anyway.
Add accounting for uploaded chunks, to distinguish from chunks queued
for upload, but not actually uploaded yet.
Example output in the backup task log:
```
...
INFO: processed 2.471 GiB in 1m, uploaded 2.439 GiB
INFO: processed 4.963 GiB in 2m, uploaded 4.929 GiB
INFO: processed 7.349 GiB in 3m, uploaded 7.284 GiB
...
```
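A rough sketch of such a periodic progress reporter (interval, counter
handling and output format are illustrative only):
```
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::time::{Duration, Instant};

// Illustrative sketch: log cumulative processed/uploaded bytes about once a
// minute until the backup finishes and the task handle gets aborted/dropped.
fn spawn_progress_task(
    processed: Arc<AtomicU64>,
    uploaded: Arc<AtomicU64>,
) -> tokio::task::JoinHandle<()> {
    tokio::spawn(async move {
        let start = Instant::now();
        let mut interval = tokio::time::interval(Duration::from_secs(60));
        interval.tick().await; // the first tick completes immediately
        loop {
            interval.tick().await;
            log::info!(
                "processed {} bytes in {}m, uploaded {} bytes",
                processed.load(Ordering::SeqCst),
                start.elapsed().as_secs() / 60,
                uploaded.load(Ordering::SeqCst),
            );
        }
    })
}
```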
This partially fixes issue 5560:
https://bugzilla.proxmox.com/show_bug.cgi?id=5560
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The chunk subdirectories only use the chunk's 2-byte checksum prefix
in hex notation.
Also, clarify that chunks are grouped into subdirectories.
Reported in the community forum:
https://forum.proxmox.com/threads/155751/
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reading from the metric cache is somewhat expensive, so validate as many
of the required permissions as possible. For host metrics, we can
do the full check in advance. For datastores, we check if we have
audit permissions for *any* datastore. If we do not have privs for
either of those, we return early and avoid reading from the
cache altogether.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Suggested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This one is modelled exactly as the one in PVE (there it
is available under /cluster/metrics/export).
The returned data format is quite simple, being an array of
metric records, including a value, a metric name, an id to identify
the object (e.g. datastore/foo, host), a timestamp and a type
('gauge', 'derive', ...). The latter property makes the format
self-describing and aids the metric collector in choosing a
representation for storing the metric data.
[
    ...
    {
        "metric": "cpu_avg1",
        "value": 0.12,
        "timestamp": 170053205,
        "id": "host",
        "type": "gauge"
    },
    ...
]
In terms of permissions, the new endpoint requires Sys.Audit
on /system/status for metrics of the 'host' object,
and Datastore.Audit on /datastore/{store} for 'datastore/{store}'
metric objects.
Via the 'history' and 'start-time' parameters one can query
the last 30mins of metric history. If these parameters
are not provided, only the most recent metric generation
is returned.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Any pull-metric API endpoint can later access the cache to
retrieve metric data for a limited time (30mins).
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
It is only needed there and could be considered an implementation detail
of how this module works.
No functional changes intended.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
With the upcoming pull-metric system/metric caching, these
things should go into a separate module.
No functional changes intended.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
By moving and refactoring the check for a sync job exceeding the
global maximum namespace limit, the same function can be reused for
sync jobs in push direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
By specifying that the snapshot is being skipped because of the
condition met on the sync target, instead of 'local', the same message
can be reused for the sync job in push direction without losing its
meaning.
Make `SkipReason` and `SkipInfo` accessible for sync operations of
both direction variants, push and pull.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Rename the `PullSource` trait to `SyncSource` and move the trait and
types implementing it to the common sync module, making them
reusable for both sync directions, push and pull.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Move the `PullReader` trait and the types implementing it to the
common sync module, so this can be reused for the push direction
variant for a sync job as well.
Adapt the naming to be direction-agnostic by renaming the `PullReader` trait to
`SyncSourceReader`, `LocalReader` to `LocalSourceReader` and
`RemoteReader` to `RemoteSourceReader`.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Move and rename the `PullStats` to `SyncStats` as well as moving the
`RemovedVanishedStats` to make them reusable for sync operations in
push direction as well as pull direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Commit f16c5de757 ("pxar: bin: test `pxar list` with payload-input")
introduced a regression test for listing of split pxar archives. This
test relies on a large pxar blob file, the large size (> 100M) being
overlooked when writing the test.
In order to not depend on this file any further in the future, drop
it and rewrite the test to dynamically generate the files needed, and
further extend the test to thereby also cover the archive creation and
extraction for split pxar archives.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
When using the `log` to `tracing` translation layer, the messages get
padded with whitespaces. This bug will get fixed upstream [0], but in
the meantime we switch to the `tracing` macros.
[0]: https://github.com/tokio-rs/tracing/pull/3070
Tested-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
It seems some changers are setting the PVolTag/AVolTag flags in the
ELEMENT STATUS page response, but don't include the actual fields then.
To make it work with such changers, downgrade the errors to warnings, so
we can continue to decode the remaining data.
This is OK since one volume tag is optional and the other is skipped
anyway.
Reported in the forum:
https://forum.proxmox.com/threads/hpe-storeonce-vtl.152547/
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Add tracing logger to all client binaries and remove env_logger.
The reason for this change is twofold: our migration to tracing, and the
behavior when the client calls an api handler directly. Currently the
proxmox-backup-manager calls the api handlers directly for some
commands. This results in no output (on console and task log), as no
tracing logger is instantiated.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
The rate and burst parameters are integers, so mapping the value
with `.as_str()` will always return `None`, effectively never
applying any rate limit at all.
Fix it by turning them into a HumanByte instead of an integer.
To not crowd the parameter section so much, create a
ClientRateLimitConfig struct that gets flattened into the parameter list
of the backup client.
To adapt the description of the parameters, add new schemas that copy
the `HumanByte` schema but change the description.
With this, the rate limit actually works, and there is no lower limit
any more.
The old TRAFFIC_CONTROL_RATE/BURST_SCHEMAs can be deleted since the
client was the only user of them.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we currently use the behavior of zstd that is not part of the public
api, so this is at risk to be changed without notice.
There is a public api that we could use, but it's only available
with zstd_sys >= 2.0.9, which at this time, is not yet packaged for/by
us.
Add a comment that we can use the public api for this when the
new version of the crate gets available.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Make the `network reload` command in proxmox-backup-manager wait on the
api handler's workertask. Otherwise the task would be killed when the
client exits.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Without this, the button is enabled if no entry at all is selected (e.g.
when switching to the 'User Management' tab), with the button then
(obviously) being a noop.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
rustc warns about creating references to them (although it does allow
using `.as_ref()` on them for some reason), and this will become a
hard error with edition 2024.
Previously we could not use Mutex there as its ::new() was not a
`const fn`, but now we can, so let's drop the `mut`.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Don't hardcode the debug flag but retrieve the currently enabled level
using tracing. This will change the default log-behavior and disable
some logs that have been printed previously. E.g.: the "protocol upgrade
done" message is not visible anymore by default because it is printed
with debug.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
In the following commit we will make use of std::sync::LazyLock which
was introduced in rust 1.80.
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Otherwise proxmox-daily-update panics if attempting to send a
notification for any available new updates:
"context for proxmox-notify has not been set yet"
Reported on our community forum:
https://forum.proxmox.com/threads/152429/
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Some systemd code got split out from proxmox-sys and left there
re-exported with a deprecation marker, use the newer crate, the
workspace already depends on proxmox-systemd anyway.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Some systemd code got split out from proxmox-sys and left there
re-exported with a deprecation marker, use the newer crate, the
workspace already depends on proxmox-systemd anyway.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Some systemd code got split out from proxmox-sys and left there
re-exported with a deprecation marker, use the newer crate, the
workspace already depends on proxmox-systemd anyway.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
by combining the compression call from both encrypted and unencrypted
paths and deciding on the header magic at one site.
No functional changes intended, besides reusing the same buffer for
compression.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Increase the zstd compression throughput by not using the
`zstd::stream::copy_encode` method, because it seems it uses an
internal buffer size of 32 KiB [0], copies at least once extra in the
target buffer and might have some additional (allocation and/or
syscall) overhead. Due to the amount of wrappers and indirections it's
a bit hard to tell for sure. In any case, a reduced throughput can be
observed if the target storage, the source storage and the network are
all so fast that the operations for creating chunks, like compression,
become the bottleneck.
Instead use the lower-level `zstd_safe::compress` which avoids (big)
allocations, since we provide the target buffer.
In case of a compression error just return the uncompressed data,
there's nothing we can do and saving uncompressed data is better than
having none. Additionally, log any such error besides the one for the
target buffer being too small.
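A sketch of the approach, assuming a `zstd_safe::compress`-like call taking a
pre-sized target buffer (compression level and error handling are
illustrative):
```
// Illustrative sketch: compress into a pre-allocated buffer and fall back to
// storing the data uncompressed if compression fails for any reason.
fn compress_or_passthrough(data: &[u8]) -> Vec<u8> {
    // reserve the maximum expected output size up front, so no re-allocation
    // is needed while compressing
    let mut target = Vec::with_capacity(data.len() + 64);
    match zstd_safe::compress(&mut target, data, 1) {
        Ok(_) => target,
        Err(_) => {
            log::warn!("zstd compression failed, storing data uncompressed");
            data.to_vec()
        }
    }
}
```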
Some benchmarks on my machine (Intel i7-12700K with DDR5-4800 memory
using a ASUS Prime Z690-A motherboard) from a tmpfs to a datastore on
tmpfs:
Type                 without patches (MiB/s)   with patches (MiB/s)
.img file            ~614                      ~767
pxar one big file    ~657                      ~807
pxar small files     ~576                      ~627
The new approach is faster by a factor of 1.19.
Note that the new approach should not have a measurable negative
impact, e.g. (peak) memory usage wise. That is because we always
reserved a vector with max-data-size (data length + header length) and
thus did not have to add a new buffer, rather we actually removed the
buffer that the high-level zstd wrapper crate used internally.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We want to check that the error code of zstd is not 'Destination buffer
too small' (dstSize_tooSmall), but currently there is no practical API
that is also public. So we introduce a helper that uses the internal
logic of zstd to determine the error.
Since this is not guaranteed to be a stable api, add a test for that
so we catch that error early on build. This should be fine, as long as
the zstd behavior only changes with e.g. major debian upgrades, which
is normally the only time where the zstd version is updated.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ TL: re-order fn, rename test and reword comments ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This is leftover code that is not currently used outside of its own
tests.
Should we need it again, we can just revert this commit.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Add External Metrics page to PBS's documentation. Most of it is copied
from the PVE documentation, minus the Graphite part.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
with the default 8k input buffer size, the client will spend most of the time
polling instead of reading/chunking/uploading.
tested with 16G random data file from tmpfs to fresh datastore backed by tmpfs,
without encryption.
stock:
Time (mean ± σ): 36.064 s ± 0.655 s [User: 21.079 s, System: 26.415 s]
Range (min … max): 35.663 s … 36.819 s 3 runs
patched:
Time (mean ± σ): 23.591 s ± 0.807 s [User: 16.532 s, System: 18.629 s]
Range (min … max): 22.663 s … 24.125 s 3 runs
Summary
patched ran
1.53 ± 0.06 times faster than stock
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
by dropping the print-per-chunk and making the input buffer size configurable
(8k is the default when using `new()`).
this allows benchmarking various input buffer sizes. basically the same code is
used for image-based backups in proxmox-backup-client, but just the
reading and chunking part. looking at the flame graphs the smaller input
buffer sizes clearly show most of time spent polling, instead of
reading+copying (or reading and scanning and copying).
for a fixed chunk size stream with a 16G input file on tmpfs:
fixed 1M ran
1.06 ± 0.17 times faster than fixed 4M
1.22 ± 0.11 times faster than fixed 16M
1.25 ± 0.09 times faster than fixed 512k
1.31 ± 0.10 times faster than fixed 256k
1.55 ± 0.13 times faster than fixed 128k
1.92 ± 0.15 times faster than fixed 64k
3.09 ± 0.31 times faster than fixed 32k
4.76 ± 0.32 times faster than fixed 16k
8.08 ± 0.59 times faster than fixed 8k
(from 15.275s down to 1.890s)
dynamic chunk stream, same input:
dynamic 4M ran
1.01 ± 0.03 times faster than dynamic 1M
1.03 ± 0.03 times faster than dynamic 16M
1.06 ± 0.04 times faster than dynamic 512k
1.07 ± 0.03 times faster than dynamic 128k
1.12 ± 0.03 times faster than dynamic 64k
1.15 ± 0.20 times faster than dynamic 256k
1.23 ± 0.03 times faster than dynamic 32k
1.47 ± 0.04 times faster than dynamic 16k
1.92 ± 0.05 times faster than dynamic 8k
(from 26.5s down to 13.772s)
same input file on ext4 on LVM on CT2000P5PSSD8 (with caches dropped for each run):
fixed 4M ran
1.06 ± 0.02 times faster than fixed 16M
1.10 ± 0.01 times faster than fixed 1M
1.12 ± 0.01 times faster than fixed 512k
1.15 ± 0.02 times faster than fixed 128k
1.17 ± 0.01 times faster than fixed 256k
1.22 ± 0.02 times faster than fixed 64k
1.55 ± 0.05 times faster than fixed 32k
2.00 ± 0.07 times faster than fixed 16k
3.01 ± 0.15 times faster than fixed 8k
(from 19.807s down to 6.574s)
dynamic 4M ran
1.04 ± 0.02 times faster than dynamic 512k
1.04 ± 0.02 times faster than dynamic 128k
1.04 ± 0.02 times faster than dynamic 16M
1.06 ± 0.02 times faster than dynamic 1M
1.06 ± 0.02 times faster than dynamic 256k
1.08 ± 0.02 times faster than dynamic 64k
1.16 ± 0.02 times faster than dynamic 32k
1.34 ± 0.03 times faster than dynamic 16k
1.70 ± 0.04 times faster than dynamic 8k
(from 31.184s down to 18.378s)
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
`cargo build` and `cargo install` pick up different config files, by symlinking
the wrapper config into a place with higher precedence than the one in the
top-level git repo dir, we ensure the package build actually picks up the
desired config instead of the one intended for quick dev builds.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The root namespace is displayed as an empty string when used in the
format string. Distinguish it and explicitly write out the root namespace
in the sync info message shown in the sync job's task log.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Describe the `pull` direction of the sync operation more precisely
before adding also a `push` direction as synchronization operation.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Adds a helper to create temporary files in XDG_CACHE_HOME. If we cannot
create a file there, we fall back to /tmp as before.
Note that the temporary files stored by the client might grow
arbitrarily in size, making XDG_RUNTIME_DIR a less desirable option.
Citing the Arch wiki [1]:
> Should not store large files as it may be mounted as a tmpfs.
While the cache directory is most often not backed by an ephemeral
FS, using the `O_TMPFILE` flag avoids the need for potential cleanup,
e.g. on interruption of a command. As with this flag set the data will
be discarded when the last file descriptor is closed.
[1] https://wiki.archlinux.org/title/XDG_Base_Directory
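A minimal sketch of the helper, assuming the cache directory is already
resolved (fallback behaviour, mode and flags as described above):
```
use std::fs::{File, OpenOptions};
use std::os::unix::fs::OpenOptionsExt;
use std::path::Path;

// Illustrative sketch: create an anonymous temporary file via O_TMPFILE in the
// preferred directory and fall back to /tmp; the file is discarded once the
// last file descriptor to it is closed.
fn create_tmp_file(preferred_dir: &Path) -> std::io::Result<File> {
    let open_in = |dir: &Path| {
        OpenOptions::new()
            .read(true)
            .write(true)
            .mode(0o600)
            .custom_flags(libc::O_TMPFILE)
            .open(dir)
    };
    open_in(preferred_dir).or_else(|_| open_in(Path::new("/tmp")))
}
```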
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
[ TL: mention TMPFILE flag for clarity ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to pull in the fix for restoring backwards compatibility due to the
digest from that crate using a u8 slice instead of our dedicated
ConfigDigest type, which would serialize to String.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Commit ea584a75 "move more api types for the client" deprecated
the `archive_type` function in favor of the associated function
`ArchiveType::from_path`.
Replace all remaining callers of the deprecated function with its
replacement.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
[WB: and remove the deprecated function]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
When getting the `full_path` of a snapshot we did not use the cached
time string. By using it we avoid a call to the super-slow libc strftime.
This gives a minor performance improvement of circa 7%, that is ~100ms
on my datastore with ~5000 snapshots.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
The protected status of the snapshot is retrieved twice. This is slow
because it stats the .protected file multiple times.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
Add local dependencies for new crates `proxmox-apt-api-types` and
`proxmox-config-digest`. Also fix order of deps.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Use `Confirmation` helper in the wipe-disk command prompt.
Improves: 887d83cb (cli: add interactive confirmation for block device wipe, 2023-11-29)
Cc: Markus Frank <m.frank@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
... if `.chunks/` is not available (deleted/moved), ChunkStore::open
fails, but that would happen after updating the active operations on the
datastore, so no reference that could be dropped is returned, leading to
the operations counter always increasing. This change only updates the
counter when a reference is returned, not before.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
The re-authentication request can also fail due to network instability,
and not necessarily only due to an invalid ticket. In that case it makes
sense to retry refreshing the ticket in 15 minutes. Also, the future does
not depend on a failed re-authentication to be cleaned up properly, as that
already happens somewhere else, therefore we don't rely on this return
anyway. If the ticket is actually invalid or timed out, the main job
will fail and also terminate the renewal future, same applies if the
network is not just unstable but straight up not working.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
The .pxarexclude-cli encodes the exclude patterns the client was
invoked with in the pxar archive as regular file entry. The current
behaviour of setting the uid and gid to the default 0 (root) however
causes issues when trying to back up and restore the backup as a
non-root user.
Opt for using the uid/gid of the user the executable was called as,
allowing the restore for this user to succeed. Root will succeed
to restore anyways.
Link to issue in bugtracker:
https://bugzilla.proxmox.com/show_bug.cgi?id=5304
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
When using the `proxmox-backup-client mount` command, the parent sometimes
exits before we can print any error message. Most notably this happens
when no PBS_REPOSITORY is passed, as this is the first option checked.
If the underlying file descriptor has been closed, wait for the client
to complete and return the error message.
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Suggested-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Commit 08fe5052 introduced functionality to mount split pxar archives
(sharing code with the map command), moving the manifest lookup
exclusive to fixed index archives.
However, the lookup now uses the incorrect archive name, not
containing the `.fidx` extension, which is however required for the
lookup in the manifest.
Fix the issue by calling the method with the correct server archive
name including the required extension.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Fixes: 08fe5052 ("client: mount: make split pxar archives mountable")
[FG: reworded, add proper "Fixes:" trailer.]
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
it builds about 1.5 times faster than regular `make deb` (shaving off a
whopping 100s on my machine). the resulting debs containing executables are of
course bigger (since the debug symbols are not split out into their own
package, and the ELF linkage stripping is also skipped), but other than the
associated file and memory mapping overhead there should be no difference in
behaviour or performance, and such debs are suitable for local testing (both of
the build process, and the built code).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
HashMap::remove() returns the value it removes as an Option<>, so
instead of first checking if the key exists before removing it, just
try to remove it and use the returned Option<> to test whether we
should bail!().
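The pattern in a nutshell (illustrative types):
```
use std::collections::HashMap;

use anyhow::{bail, Error};

// Illustrative sketch: remove() already tells us whether the key was present,
// so no separate contains_key() lookup is needed.
fn take_entry(map: &mut HashMap<String, u64>, key: &str) -> Result<u64, Error> {
    match map.remove(key) {
        Some(value) => Ok(value),
        None => bail!("no entry '{key}' found"),
    }
}
```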
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Fixes the clippy warning:
warning: this multiplication by -1 can be written more succinctly
--> pbs-client/src/tools/mod.rs:700:58
|
700 | SignedDuration::Negative(val) => -1 * i64::try_from(val.as_secs())?,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: consider using: `-i64::try_from(val.as_secs())?`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#neg_multiply
= note: `#[warn(clippy::neg_multiply)]` on by default
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the clippy warning:
warning: unnecessary `pub(self)`
--> src/api2/access/mod.rs:35:1
|
35 | pub(self) async fn user_update_auth<S: AsRef<str>>(
| ^^^^^^^^^ help: remove it
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_pub_self
= note: `#[warn(clippy::needless_pub_self)]` on by default
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the clippy warning:
warning: unnecessary use of `get(&user2).is_none()`
--> pbs-config/src/acl.rs:1067:36
|
1067 | assert!(node.users.get(&user2).is_none());
| -----------^^^^^^^^^^^^^^^^^^^^^
| |
| help: replace it with: `!node.users.contains_key(&user2)`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_get_then_check
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
pbs2to3 was missing from the list of to-be-compiled binaries, and thus was only
compiled as a side-effect of running `cargo test` (which is skipped when the
build is using the `nocheck` build profile).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Fixes the compile-time warning:
warning: Cargo.toml: `default_features` is deprecated in favor of `default-features` and will not work in the 2024 edition
(in the `proxmox-router` dependency)
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
else we don't pick up the options set by the wrapper, which include generation
of debug symbols. until rustc 1.77, this was not needed because compiled
binaries always included a non-stripped libstd. now, without this change, the
binaries built with `cargo build --release` have no debug symbols at all
trigger a warning. fix this and include debug symbols when building a package,
like was originally intended for release package builds.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to work with cargo 1.77, which changed from
pbs-api-types 0.1.0 (path+file:///home/fgruenbichler/Sources/proxmox-backup/pbs-api-types)
to
path+file:///home/fgruenbichler/Sources/proxmox-backup/pbs-api-types#0.1.0
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Add the command `proxmox-backup-client group forget <group>` so
that we can forget (delete) whole groups with all the contained
snapshots.
To avoid printing full datastore paths (which are in the error messages)
we filter out the most common one (group not found) and rephrase it.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
[WB: rebased & sorted import statements in client's main.rs]
[WB: replace extract_repository_from_value with
remove_repository_from_value since the parameter is rejected on
the remote side]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
'extract_repository_from_value' takes an immutable reference and
doesn't remove the parsed parameter (whereas in contrast in our PVE
codebase, the 'extract_param' method does remove it).
This adds a variant that explicitly removes it called
'remove_repository_from_value'.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Instead of storing the error as a string in the PxarBackupStream, we
store it as an anyhow::Error. As we can't clone an anyhow::Error, we take
it out from the mutex and return it. This won't change anything as
the consumption of the stream will stop if it gets a Some(Err(..)).
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
To create a pxar archive, we recursively traverse the target folder.
If there is an error further down and we add a context using anyhow,
the context will be duplicated and we get an output like:
> Error: error at "xattr/xattr.txt": error at "xattr/xattr.txt": E2BIG [skip]
This is obviously not optimal, so in recursive contexts we can use the
UniqueContext, which quickly checks the context from the last item in
the error chain and only adds it if it is unique.
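A rough sketch of the idea behind such a unique context (not the actual
trait, just the gist):
```
use anyhow::Error;

// Illustrative sketch: only attach the new context if the outermost error in
// the chain does not already carry the very same message, avoiding duplicated
// "error at ..." prefixes when unwinding through recursion.
fn context_unique(err: Error, context: String) -> Error {
    if err.to_string() == context {
        err
    } else {
        err.context(context)
    }
}
```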
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
The sole purpose of the ArchiveError was to add the file-path to the
error. Using anyhow::Error we can add this information using the context
and don't need this struct anymore.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
String concatenating a variable with some static text as gettext
parameter cannot really work, and it also does not make sense to do
most of the time, as even if we'd use some overly generic format
string like '{0} (disabled)', it would be not easy to translate
correctly in all languages in such a generic way.
So just use the actual full string, which is already contained in our
translation catalogue anyway…
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This is basically a semantic revert of e5c0d80c ("docs: add note for not
using remote storages") that, while well intended, has a few problems,
e.g.:
- This is the minimal/recommended requirements section, which should
list the rough basic specs a setup must/should have. Listing
everything that is not best to do would bloat this list
significantly and it's just the wrong place for it, i.e., it is not
meant to be a "recommended against" list.
- while it's true that a remote storage will basically always have
_some_ overhead over using the same HW with a (modern) local storage
(file) system, that does **not** mean that the remote storage has
insufficient performance characteristics. We know of lots of fast
Ceph setups, even release benchmarks for them, or storages like
BlockBridge, that provide high performance while being remote.
So avoid this X-Y-problem style argumentation and focus on what is
actually important. I naturally get that there are some users with
slow NFS-attached storages, but breaking style here won't cure them,
and I'm sure they are capable of setting up a local storage slow
enough that it won't make a real difference compared to the NFS one.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adapt to the decoder/accessor method changes in the pxar library,
which were introduced in order to move the consistency check for
metadata and payload data archives.
The new location of the checks allows to access the pxar archive via
a `Split` variant reader instance without penalty when only accessing
the metadata, not reading any payload data.
This greatly improves performance when accessing fuse mounted
archives.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
bumped dependency after pxar version bump
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
such as NFS or SMB. They will not provide the expected performance
and it's better to recommend against them.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since that leads to errors that we don't currently catch before we
reach the regular early warning on tape.
This can be read/set by the Device Configuration Extension Mode Page.
Ignore errors on reading or writing, since the mode page may not be
available on LTO-4.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
that would drop the final byte, and the corresponding code has been removed
from pxar now as well.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Currently, whether to encode the exclude patterns passed via cli as
prelude or via the `.pxar-exclude-cli` file is based on the presence
of a previous metadata accessor.
That however leads to the encoding of the file entry instead of the
prelude for split archives in `data` mode and for the first snapshot
in a backup, creating undesired padding in the first payload chunk.
Therefore, use the pxar writer variant to make the decision instead.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The current encoding is not extensible, so encode the cli exclude
patterns as json instead. This way the prelude is easily serialized
and deserialized, while remaining human readable.
Originally-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Do not attach the payload reader for split pxar archives, as only the
metadata has to be accessed for listing.
This avoids the decoder performing consistency checks against the
payload stream, which require chunk download and decoding and make
the listing unusably slow.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The current default variant is named `Default`, which is not future
proof since the default might change in the future. So rename it to
`Legacy` instead.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
by skipping the payload reader entirely, it's not needed for listing contents
and would make accessing larger archives too expensive.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
Only write the catalog when using the regular backup mode, do not write
it when using the split archive mode.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In case of pxar archives with split metadata and payload data, the
metadata archive has to be used to lookup entries for navigation
before performing a single file restore.
Decide based on the archive filename extension whether to use the
`catalog` or the `pxar-lookup` api endpoint.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The `proxmox-file-restore list` command uses the provided path to
look up and list directory entries via the catalog. Fall back to
using the metadata archive for fast lookups in a backup snapshot if
the catalog is not present.
This is in preparation for dropping encoding of the catalog for
snapshots using split archive encoding. Proxmox VE's storage plugin
uses this to allow single file restore for LXCs.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Payload data archives cannot be used to navigate the content, so
exclude them from the archive listing, as this is used by
Proxmox VE to list in the file browser.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Allow to pass the archive name as optional api call parameter instead
of having it as prefix to the path.
If this parameter is given, instead of splitting off the archive name
from the path, the parameter itself is used, leaving the path
untouched.
This allows to restore single files from the archive, without having
to artificially construct the path in case of file restores for split
pxar archives, where the response path of the listing does not
include the archive, as opposed to the response provided by lookup
via the catalog.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add an optional `archive-name` parameter, indicating the metadata
archive to be used for directory content lookups instead of the
catalog. If provided, instead of the catalog reader, a pxar Accessor
instance is created to perform the lookup.
This is in preparation for dropping catalog encoding for snapshots
with split pxar archive encoding.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In preparation to look up entries via the pxar metadata archive
instead of the catalog, in order to drop encoding the catalog
for snapshots using split pxar archives altogether.
This helper allows to look up the directory entries via the provided
accessor instance and formats them to be compatible with the output
as produced by lookups via the catalog.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Move code that can be reused when having to perform a lookup via the
pxar metadata archive instead of the catalog out of the thread.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The file path passed to the catalog is base64 encoded, with an exception
for the root.
Factor this check and decoding step out into a helper function to make
it reusable when doing the same for lookups via the metadata archive
instead of the catalog.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
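Roughly, the factored-out decoding could look like the following
sketch, assuming the base64 crate's Engine API and a hypothetical
helper name (the real helper may differ):
```
use anyhow::{format_err, Error};
use base64::Engine;

// The root may be passed unencoded as "/" or as an empty string,
// everything else is expected to be base64 encoded.
fn parse_path(path: &str) -> Result<Vec<u8>, Error> {
    if path.is_empty() || path == "/" {
        return Ok(b"/".to_vec());
    }
    base64::engine::general_purpose::STANDARD
        .decode(path)
        .map_err(|err| format_err!("base64 decoding of path failed - {err}"))
}
```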
The test will fail for all users not having euid/egid set to
1000/1000, as the reference test folder structure cannot be created
with the expected ownership.
Therefore, skip the test if either the euid or egid does not match
this condition.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Setting the uid/gid for the files and folders of the test directory
structure will not work when lacking the permissions.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Describe the motivation and basic principle of the clients change
detection mode and show an example invocation.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The lookahead cache size requires the resource limit for open file
handles to be high in order to allow for efficient reuse of unchanged
file payloads.
Increase the nofile soft limit to the hard limit and dynamically
adapt the cache size to the new soft limit minus half of the previous
soft limit.
The `PxarCreateOptions` and the `Archiver` are therefore extended by
an additional field to store the maximum cache size, with fallback to
a default size of 512 entries.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
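A rough sketch of the size adaptation described above (constant and
names assumed, not the actual implementation): derive the maximum
number of cached entries from the raised soft limit, with 512 as
fallback when the limits are unknown.
```
const DEFAULT_CACHE_SIZE: usize = 512;

fn max_cache_size(old_soft_limit: Option<u64>, new_soft_limit: Option<u64>) -> usize {
    match (old_soft_limit, new_soft_limit) {
        // new soft limit minus half of the previous soft limit
        (Some(old), Some(new)) => new.saturating_sub(old / 2) as usize,
        _ => DEFAULT_CACHE_SIZE,
    }
}
```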
The default soft limit for open file handles is rather low, as some
APIs (e.g. the POSIX `select(2)` syscall) do not work with higher
file descriptor values [0].
The lookahead cache used during the backup client's metadata
comparison to reuse unchanged files however requires much higher
limits to work effectively.
This helper function allows to raise the soft limit to the hard
limit, as provided by the `getrlimit(2)` syscall.
[0] https://0pointer.net/blog/file-descriptor-limits.html
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
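A minimal sketch of such a helper using the libc crate (the project
may use different wrappers; names assumed): raise the RLIMIT_NOFILE
soft limit to the hard limit and return the old and new soft limits.
```
use std::io;

fn raise_nofile_limit() -> io::Result<(u64, u64)> {
    let mut limit = libc::rlimit { rlim_cur: 0, rlim_max: 0 };
    // SAFETY: we pass a valid pointer to an initialized rlimit struct
    if unsafe { libc::getrlimit(libc::RLIMIT_NOFILE, &mut limit) } != 0 {
        return Err(io::Error::last_os_error());
    }
    let old_soft = limit.rlim_cur;
    if limit.rlim_cur < limit.rlim_max {
        // raise the soft limit up to the hard limit
        limit.rlim_cur = limit.rlim_max;
        if unsafe { libc::setrlimit(libc::RLIMIT_NOFILE, &limit) } != 0 {
            return Err(io::Error::last_os_error());
        }
    }
    Ok((old_soft, limit.rlim_cur))
}
```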
Use the dedicated chunker with boundary suggestions for the payload
stream, by attaching the channel sender to the archiver and the
channel receiver to the payload stream chunker.
The archiver sends the file boundaries for the chunker to consume.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Implement the Chunker trait for a dedicated payload stream chunker,
which extends the regular chunker by the option to suggest boundaries
to be used over the hash based boundaries whenever possible.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add the Chunker trait and move the current Chunker to ChunkerImpl to
implement the trait instead. This allows to use different chunker
implementations by dynamic dispatch and is in preparation for
implementing a dedicated payload chunker.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
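A rough sketch of the refactoring (trait and method shapes assumed,
the real trait may differ): boundary detection is hidden behind a
trait so callers can hold a `Box<dyn Chunker>` and swap in a
payload-aware implementation later.
```
pub trait Chunker {
    /// Scan `data` and return the length up to the next chunk boundary,
    /// or 0 if no boundary was found within this buffer.
    fn scan(&mut self, data: &[u8]) -> usize;
}

/// The previous chunker, renamed, keeps the rolling-hash based logic.
pub struct ChunkerImpl {
    chunk_size_avg: usize,
    // ... rolling hash state omitted
}

impl Chunker for ChunkerImpl {
    fn scan(&mut self, data: &[u8]) -> usize {
        // placeholder: the real implementation runs a rolling hash over
        // `data` and returns the offset where the hash hits a boundary
        let _ = (data, self.chunk_size_avg);
        0
    }
}

fn default_chunker(chunk_size_avg: usize) -> Box<dyn Chunker> {
    Box::new(ChunkerImpl { chunk_size_avg })
}
```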
Allow to pass an optional input path to mount a split pxar archive
with dedicated payload data file.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Do not list the pxar format version and the prelude entries in the
output of pxar list, these are not regular entries. Do include them
however when dumping with the debug environment variable set.
Since the prelude is arbitrary in size, only show the content size.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In addition to the entries, also show the padding encountered in-between
referenced payloads.
Example invocation: `PXAR_LOG=debug pxar list archive.mpxar`
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Pxar archives allow to store additional information in a prelude
entry since pxar format version 2.
Add an optional parameter to `pxar` and `proxmox-backup-client` to
specify the path to restore the prelude to and pass this to the
archive extraction by extending the `PxarExtractOptions` by a
corresponding field. If none is given, the prelude is simply skipped
during restore.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Instead of encoding the pxar cli exclude patterns as regular file
within the root directory of an archive, store this information
directly after the pxar format version entry in the entry of kind
Prelude.
This behavior is however currently exclusive to the archives written
with format version 2 in a split metadata and payload case.
This is a breaking change for the encoding of new cli exclude
parameters. Any new exclude parameter will not be added to an already
present .pxar-cliexclude file, and it will not be created if not
present.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Pxar archives with format version 2 allow to store optional
information: the file format version and prelude entries.
Cover the case for these entries, the file format version entry being
introduced to distinguish between different file formats used for
encoding, as well as the prelude entry used to store optional metadata
such as the pxar cli exclude parameters.
Add the logic to accept and decode these prelude entries when
accessing the archive via a decoder instance.
For now simply ignore them.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
With the additional output in case of split pxar archives, the upload
statistics logged by the backup writer following a backup are crowded
and hard to read.
Make the output more concise by merging the currently 2 lines per
upload stream, shown as e.g.:
```
data.ppxar: had to backup 4 MiB of 10.943 GiB (compressed 159 B) in 49.30s
data.ppxar: average backup speed: 83.09 KiB/s
```
into a single line, shown as e.g.:
```
data.ppxar: had to back up 4 MiB of 10.943 GiB (159 B compressed) in 49.30 s (average 83.09 KiB/s)
```
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
When walking the file system tree, check for each entry if it is
reusable, meaning that the metadata did not change and the payload
chunks can be reindexed instead of reencoding the whole data.
If the metadata matched, the range of the dynamic index entries for
that file is looked up in the previous payload data index.
Use the range and possible padding introduced by partial reuse of
chunks to decide whether to reuse the dynamic entries and encode
the file payloads as payload reference right away or cache the entry
for now and keep looking ahead.
If however a non-reusable (because changed) entry is encountered
before the padding threshold is reached, the entries on the cache are
flushed to the archive by reencoding them, resetting the cached state.
Reusable chunk digests and size as well as reference offsets to the
start of regular files payloads within the payload stream are injected
into the backup stream by sending them to the chunker via a dedicated
channel, forcing a chunk boundary and inserting the chunks.
If the threshold value for reuse is reached, the chunks are injected
in the payload stream and the references with the corresponding
offsets encoded in the metadata stream.
Since multiple files might be contained within a single chunk,
deduplication of chunks is assured by keeping back the last chunk, so
following files can reuse that same chunk without indexing it twice.
It is also assured that this chunk is injected into the stream in
case the following lookups lead to a cache clear and reencoding.
Directory boundaries are cached as well, and written as part of the
encoding when flushing.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
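A rough sketch of the reuse decision only (threshold value and names
assumed, the actual logic also tracks the lookahead cache state): an
unchanged file is reindexed only if the padding, i.e. the bytes of the
covering chunks that do not belong to reusable payload, stays below a
fraction of the total data that would be reused.
```
const CHUNK_PADDING_THRESHOLD: f64 = 0.1; // assumed value

fn reuse_or_reencode(payload_size: u64, start_padding: u64, end_padding: u64) -> bool {
    let padding = start_padding + end_padding;
    let total = payload_size + padding;
    // reuse only while the padding share stays below the threshold
    total > 0 && (padding as f64) < CHUNK_PADDING_THRESHOLD * total as f64
}
```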
Move the catalog directory start and end encoding from `add_entry`
to the `add_directory`, the latter being called by the previous.
By this, the `add_entry` method can be reused to walk the filesystem
tree in the context of an enabled lookahead cache without encoding
anything.
No functional change intended.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add a lookahead cache and the necessary types to store the required
data and keep track of directory boundaries while traversing the
filesystem tree, in order to postpone the decision whether to reuse
or reencode a given regular file with unchanged metadata.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add method to compare metadata of current file entry against metadata
of the entry looked up in the previous backup snapshot. If the
metadata matched, the start offset pointing to the file's payload
header in the payload stream is returned.
This is in preparation for reusing payload chunks for unchanged files.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Implement a method that prepares the decoder instance to access a
previous snapshot's metadata index and payload index in order to
pass it to the pxar archiver. The archiver can then utilize these
to compare the metadata for files to the previous state and gather
reusable chunks.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Adds the specification for switching the detection mode used to
identify regular files which changed since a reference backup run.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
To reuse dynamic entries of a previous backup run and index them for
the new snapshot. Adds a non-blocking channel between the pxar
archiver and the chunk stream, as well as the chunk stream and the
backup writer.
The archiver sends forced boundary positions and the dynamic
entries to inject into the chunk stream following this boundary.
The chunk stream, as receiver, consumes this channel's input whenever
a new chunk is requested by the upload stream, forcing a non-regular
chunk boundary in the pxar stream at the requested positions.
The dynamic entries to inject and the boundary are then sent via the
second asynchronous channel to the backup writer's upload stream,
which indexes them by inserting the dynamic entries as known chunks
into the upload stream.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
When forcing a boundary, the internal chunker state is not in sync
with the chunk stream anymore. The reset method therefore allows
to reset the internal state.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Adds a dedicated structure to hold the optional sender and receiver
instances and state for injection of reused dynamic entries in the
payload stream for split stream pxar archives.
The asynchronous channels must only be attached to the payload
archive, leaving the current behavior for the metadata archive and
current default encoding without reusing payload chunks of previous
snapshots.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In order to be included in the backups index file, reused payload
chunks have to be injected into the payload upload stream at a
forced boundary. The chunker forces a chunk boundary and sends the
list of reusable dynamic entries to be uploaded.
This implements the logic to receive these dynamic entries via the
corresponding communication channel from the chunker and inject the
entries into the backup upload stream by looking for the matching
chunk boundary, already forced by the chunker.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
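One possible shape of this hand-over, as a rough sketch (types and
names assumed, not the actual implementation): the chunker pushes one
entry per forced boundary, and the upload stream takes it once the
stream position reaches that boundary and indexes the contained chunks
as known chunks.
```
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

pub struct InjectChunks {
    /// offset in the payload stream at which the boundary was forced
    pub boundary: u64,
    /// (digest, size) pairs of the reused dynamic entries
    pub chunks: Vec<([u8; 32], u64)>,
}

pub type InjectQueue = Arc<Mutex<VecDeque<InjectChunks>>>;

/// Pop the next injection entry only once the upload stream has
/// reached the forced boundary it belongs to.
pub fn next_injection(queue: &InjectQueue, stream_pos: u64) -> Option<InjectChunks> {
    let mut queue = queue.lock().unwrap();
    match queue.front() {
        Some(entry) if entry.boundary <= stream_pos => queue.pop_front(),
        _ => None,
    }
}
```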
The helper method allows to lookup the entries of a dynamic index
which fully cover a given offset range. Further, the helper returns
the start padding from the start offset of the dynamic index entry
to the start offset of the given range and the end padding.
This will be used to look up size and digest for chunks covering the
payload range of a regular file in order to re-use found chunks by
indexing them in the archive's index file instead of re-encoding the
payload.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
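A rough sketch of such a lookup (names assumed), operating on the
cumulative end offsets of the dynamic index entries rather than the
real index reader: it returns the entry range covering the requested
offsets plus the padding before and after the range.
```
struct CoveredRange {
    first_entry: usize,
    last_entry: usize,
    start_padding: u64,
    end_padding: u64,
}

fn lookup_covering_entries(entry_ends: &[u64], start: u64, end: u64) -> Option<CoveredRange> {
    if start >= end || end > *entry_ends.last()? {
        return None;
    }
    // index of the first entry whose end offset lies past `start`
    let first_entry = entry_ends.partition_point(|&e| e <= start);
    // index of the first entry whose end offset reaches `end`
    let last_entry = entry_ends.partition_point(|&e| e < end);
    let first_start = if first_entry == 0 {
        0
    } else {
        entry_ends[first_entry - 1]
    };
    Some(CoveredRange {
        first_entry,
        last_entry,
        start_padding: start - first_start,
        end_padding: entry_ends[last_entry] - end,
    })
}
```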
Also display the payload offset as listing output when the regular file
entry had a payload reference rather than the payload encoded in the
archive. This allows for debugging by inspecting the raw payload data
file at given offset.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Allows to list entries of split pxar archives. As the decoder skips
over the file payloads, the corresponding payload file has to be
provided. Otherwise the decoder would skip inside the metadata
archive, leading to incorrect decoding.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Allows to pass the optional payload input to restore for cases where the
regular file payloads are stored in the split archive.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Attach the payload data archive as input stream to the decoder
and accessor instances for split archives.
Allows to restore contents from split archives via the
`proxmox-file-restore extract` command, by passing the metadata
archive name.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Factor out the logic to get the pxar reader into a dedicated function
so it can be reused to get the payload data archive reader instance.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Cover the additional `.mpxar` for metadata archive and `.ppxar` for
the payload data for pxar archives written as split archive.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Allows to access the pxar metadata archives for navigation and
download via the Proxmox Backup Server web ui.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Cover the cases where the pxar archive was uploaded as split payload
data and metadata streams. Instantiate the required reader and
decoder instances to access the metadata and payload data archives,
using the corresponding helper methods.
Allows to restore split metadata and payload stream pxar archives via
the catalog shell.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Attach the payload chunk reader for pxar archives which have been
uploaded using split streams for metadata and payload data.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Cover the cases where the pxar archive was uploaded as split payload
data and metadata streams. Instantiate the required reader and
decoder instances to access the metadata and payload data archives.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Cover the additional `.mpxar` for metadata archive and `.ppxar` for
the payload data file in the cli parameter completion callback.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Whenever a split pxar archive is encountered, instantiate and attach
the required dedicated reader instance to the decoder instance on
restore.
Piping the output to stdout is not possible for these, as this would
require a decoder instance which can decode the input stream, while
maintaining the pxar stream format as output.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
With the introduction of split pxar archives, the allowed extensions
are now `.pxar`, `.mpxar` and `.ppxar`. Add a helper function to
allow to check for all valid variants, including the optional
additional `.didx` in case of a server archive name.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
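A minimal sketch of such a check (helper name and exact rules
assumed): strip an optional trailing ".didx" for server archive names,
then accept the three client-side pxar extensions.
```
fn has_pxar_filename_extension(archive_name: &str, server_archive_name: bool) -> bool {
    // server-side archive names may carry an additional ".didx" suffix
    let name = if server_archive_name {
        archive_name.strip_suffix(".didx").unwrap_or(archive_name)
    } else {
        archive_name
    };
    name.ends_with(".pxar") || name.ends_with(".mpxar") || name.ends_with(".ppxar")
}
```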
Helper method that takes an archive name as input and checks if the
given archive is present in the manifest, by also taking possible
split archive extensions into account.
Returns the pxar archive name if found or the split archive names if
the split archive variant is present in the manifest.
If neither is matched, an error is returned signaling that nothing
matched entries in the manifest.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add module to place helper methods which need to be used in different
submodules of the client.
Add `get_pxar_fuse_reader`, `get_buffered_pxar_reader` and
`get_pxar_fuse_accessor` to create reader instances to access pxar
archives.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
... and attach the split payload writer variant to the pxar archive
creation. By this, metadata and payload data will create different
dynamic indexes, allowing to lookup and reuse payload chunks without
the additional overhead of the pxar archive's metadata.
For now this functionality remains disabled and will be enabled in a
later patch once the logic for reusing the payload chunks is in
place.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Introduce a `PxarWriters` struct to bundle all writer instances
required for the pxar archive creation into a single object to limit
the number of function call parameters.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
... and adapt to the new reader/writer variant for encoder or
decoder/accessor to attach a dedicated payload input/output for split
pxar archives.
In preparation for look-ahead caching, where passing around
per-directory level encoder instances with internal references is
not feasible.
Previously, a new encoder instance was generated for each directory
level, restricting possible implementation errors. These encoder
instances were internally linked by references to keep track of
the state changes in a parent-child relationship.
This is however not feasible when the encoder has to be passed by
mutable reference, as required by the look-ahead cache
implementation. The encoder has therefore been adapted to use a
single instance implementation with an internal stack keeping track
of the state.
Depends on the bumped pxar library version, including the patches to
attach the corresponding variant for the pxar reader/writer
instantiation.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In preparation for injecting reused payload chunks in payload streams
for regular files with unchanged metadata. Allows to get the digest
of a dynamic index entry to construct a reusable dynamic entry from
it.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Move the code to get the local chunk reader to a dedicated function
to make it reusable. The same code is required to get the local chunk
reader for the payload stream for split stream archives.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Instead of composing the backup target name and pushing it to the
backup list, push the archive name and extension separately, only
constructing it while iterating the list later.
By this it remains possible to additionally prefix the extension, as
required with the separate pxar metadata and payload indexes.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
currently we don't lock the shadow file when removing or storing a
password. by adding locking here we avoid a situation where storing
and/or removing a password concurrently could lead to a race
condition. in this scenario it is possible that a password isn't
persisted or a password isn't removed. we already do this for
the "token.shadow" file, so just use the same mechanism here.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
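A minimal sketch of the locking pattern using libc flock (paths and
names assumed; the real code reuses the same locking helper as for
"token.shadow"): hold an exclusive lock for the duration of the
read-modify-write cycle on the shadow file.
```
use std::fs::{File, OpenOptions};
use std::io;
use std::os::unix::io::AsRawFd;

fn lock_shadow(lock_path: &str) -> io::Result<File> {
    let file = OpenOptions::new()
        .read(true)
        .write(true)
        .create(true)
        .open(lock_path)?;
    // blocks until the exclusive lock is acquired; it is released
    // again when the returned file handle is dropped
    if unsafe { libc::flock(file.as_raw_fd(), libc::LOCK_EX) } != 0 {
        return Err(io::Error::last_os_error());
    }
    Ok(file)
}
```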
With proxmox-widget-toolkit < 4.1.4, loading the UI will fail with
a JavaScript error:
> Uncaught TypeError: Proxmox.Utils.overrideNotificationFieldName is not a function
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The read_interface endpoint uses the wrong path identifier. It has been
renamed to 'iface' some time ago but hasn't been changed here.
When a user had a permission on '/' with 'Admin', they weren't able
to show the config of a single interface, as the non-existent path
didn't match.
Reported-by: https://forum.proxmox.com/threads/permissons-not-working-for-network-settings.147899/
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
The product name is Proxmox Backup Server, not just Backup Server,
that makes no sense on its own and it really cannot be expected by
tools extracting any Medium Auxiliary Memory (MAM) info to render it
as `${app_vendor} ${app_name}`.
Drop the comment about ignoring errors, that's pretty clear with
the only-log-error construct.
Instead, add some comments about what the hex numbers refer to and
what their respective length (limit) is. The names were taken from
Table 315 "MAM Host type attributes" in the "IBM LTO SCSI Reference"
for LTO 9.
Slightly off-topic: The tape code really is a mess with sprinkling
those hex numbers hard coded all over the place, often with some
unchecked coupling in other places (like here, the list of set MAM
attrs and the one that get cleared can easily get out of sync..), but
that's for another time to clean-up (I need to cut a release).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
instead of blocking on input without telling the user what's going on.
Reported on the forum: https://forum.proxmox.com/threads/147058/
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
This new section describes how the notification-mode parameter works.
The section also contains parts of the old notification section
from the maintenance chapter, reusing the description of the
`notify` and `notify-user` parameters.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
When using the legacy notifications the sync mode would pick up the
settings from the prune-job, which default to Error. This completely
disables notifications for successful sync-jobs when using the legacy
system.
Reported in the forum: https://forum.proxmox.com/threads/147018/
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
this commit switches pbs over to generating ed25519 keys when
generating new auth api keys. this also removes the last direct
usages of openssl here and further unifies key handling in the auth
api.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
this commit moves away from using openssl's `PKey` and uses the
wrappers from proxmox-auth-api. this allows us to handle keys in a
more flexible way and enables us to move to ec based crypto for the
authkey in the future.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
if a users password is not hashed with the latest password hashing
function, re-hash the password with the newest hashing function. we
can only do this on login and after the password has been validated,
as this is the only point at which we have access to the plain text
password and also know that it matched the original password.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
previously we used a self-rolled implementation for csrf tokens. while
it's unlikely to cause issues in reality, as csrf tokens are only
valid for a given ticket's lifetime, there are still theoretical
attacks on our implementation. so move all of this code into the
proxmox-auth-api crate and use hmac instead.
this change should not impact existing installations for now, as this
falls back to the old implementation if a key is already present. hmac
keys will only be used for new installations and if users manually
remove the old key and
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
For one these different views have different columns shown, and more
importantly: with the state being shared one could change sorting in
the global view and then have that applied in the per-datastore view
too, even if one cannot sort that view explicitly otherwise as there's
just one row anyway. This small glitch might lead to a bit of
confusion in the worst case and looks unpolished in any case.
Note that I explicitly decided against encoding the datastore in the
state-id for the per-datastore views for now, as most users will want
to adapt layout (like column width) for all per-datastore views.
Having to re-do that for every datastore separately can be quite a
nuisance while the same user wanting different layout for each
datastore in their per-datastore view seems rather to be an edge case.
And we can always change this, so starting out with the slightly more
restricted design that has less browser local data to be saved seems
better w.r.t. maintainability.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Make columns sortable in the global 'Prune & GC Jobs' view. In the
per-datastore view the columns will not be sortable as there can only be
one job.
Fixes: db3fd213 ("fix #3217: ui: global prune and gc job view")
Co-authored-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
the disk serial given to virtio disks can only be 20 characters long,
so looking for a disk with a longer serial (like
'drive-tpmstate0-backup') will always fail. If the serial is longer,
also try with the truncated one. Leave the first try in place in case
the limit changes.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
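A rough sketch of the fallback (the 20 byte limit comes from the
commit message, the function shape is assumed): first look for the
full serial, then retry with the serial truncated to what virtio
actually exposes.
```
const VIRTIO_SERIAL_MAX_LEN: usize = 20;

fn find_disk_by_serial<'a>(disks: &'a [(String, String)], serial: &str) -> Option<&'a str> {
    fn lookup<'a>(disks: &'a [(String, String)], wanted: &str) -> Option<&'a str> {
        disks
            .iter()
            .find(|(disk_serial, _path)| disk_serial == wanted)
            .map(|(_serial, path)| path.as_str())
    }

    lookup(disks, serial).or_else(|| {
        if serial.len() > VIRTIO_SERIAL_MAX_LEN {
            // serials are plain ASCII, so byte based truncation is fine here
            lookup(disks, &serial[..VIRTIO_SERIAL_MAX_LEN])
        } else {
            None
        }
    })
}
```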
in case we cannot stat a file in the restore vm, log the path and reason
why. This should normally not happen, but when it does, the path and
error might help us find the issue.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since the change in our restore image to ntfs3, non iso8859-1 filenames
were broken. Fix that by adding the 'iocharset' option to ntfs3.
Leave the ntfs option in place, so that if the image gets booted
with an older kernel for some reason, this still works.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
namely:
Vendor: Proxmox
Name: Backup Server
Version: current running package version
User Label Text: the label text
Media Pool: the current media pool
write it on labeling and when writing a new media-set to a tape.
While we currently don't use this info for anything, this can help users
to identify tapes, even with different backup software.
If we need it in the future, we can e.g. make decisions based on these
fields (e.g. the version).
On format, delete them again.
Note that some VTLs don't correctly delete the attributes from the
virtual tapes.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Some MAM attributes are of type 'TEXT', which is not only ascii, but
controlled by an additional field that specifies various 8-bit text
formats.
For now, simply assume utf8 as the default is ascii, and we don't expect
any data that is not ASCII anyway.
This will be needed when we'll want to write those attributes.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Since we don't query each drive's status separately, but rely on a
single call to the drive listing for that, we now add the option
to query the activity there too. This makes that data available for
us to show in a separate (by default hidden) column.
Also we show the activity in the 'State' column when the drive is idle
from our perspective. This is useful when e.g. an LTO-9 tape is loaded
the first time and is calibrating, since that happens automatically.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when the tape drive has an activity (and the tape is in motion), certain
calls block until the operation is finished. Since we cannot predict how
long it's going to be and it can be quite long in certain cases,
skip those calls when the drive is doing anything.
If we cannot determine the activity, try to do the queries.
We have to extend the check for a loaded drive in the UI, since the
position is not available during any activity.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we use the VHF part from the DT Device Activity page for that.
This is intended to query the drive for its current state and activity.
Currently only the activity is parsed and used.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
and show them on the ui. This can help users see how much a tape
is used.
The value is updated on 'commit' and when the tape is changed during a
backup.
For drives not supporting the volume statistics, this is simply skipped.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
A small example that simply writes pseudo-random chunks to a drive.
This is useful to benchmark throughput on tape drives.
The output and behavior is similar to what the pool writer does, but
without writing multiple files, committing or loading data from disk.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
When writing data to tape, the idea was to sync/commit to tape and
write the catalog to disk every 128 GiB of data. For that the counter
'bytes_written' was introduced and checked after every chunk/snapshot
archive.
Sadly we forgot to reset the counter after doing so, which meant that
after 128GiB was written onto the tape, we synced/committed after every
archive on the tape for the remaining length of the tape.
Since syncing to tape and writing to disk takes a bit of time, the drive
had to slow down every time and reduced the available throughput. (In
our tests here from ~300MB/s to ~255MB/s).
By resetting the value to zero after syncing, we avoid that and increase
throughput performance when backups are bigger than 128GiB on tape.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
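A rough sketch of the fixed logic (names assumed, the real pool writer
also flushes the catalog and updates media state): commit every
128 GiB of written data and, crucially, reset the counter afterwards
so the next commit happens after another 128 GiB rather than after
every archive.
```
const COMMIT_AFTER_BYTES: u64 = 128 * 1024 * 1024 * 1024; // 128 GiB

struct PoolWriterState {
    bytes_written: u64,
}

impl PoolWriterState {
    /// Returns true if a commit (sync to tape + catalog flush) is due.
    fn register_write(&mut self, archive_size: u64) -> bool {
        self.bytes_written += archive_size;
        if self.bytes_written >= COMMIT_AFTER_BYTES {
            self.bytes_written = 0; // the missing reset that caused the slowdown
            return true;
        }
        false
    }
}
```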
to ensure that it can handle the recently lifted restrictions on the
organization and bucket parameters correctly by URL encoding them.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Remove the regex for influxdb organizations and buckets. Influxdb does
not place any constraints on these names and allows all characters. This
allows influxdb organization names with slashes.
Also remove a duplicate comment and add some missing ones.
This also aligns the behavior to PVE as there are no restrictions there
either.
The motivation for this patch is this forum post:
https://forum.proxmox.com/threads/influx-db-organization-doesnt-allow-slash.145402/
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
to ensure the next build contains the 78bf05a4 ("fix: use fragmented
block size for space calculation") improvement.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>