Compare commits

...

770 Commits

Thomas Lamprecht
58fb448be5 bump version to 3.4.1-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-16 14:45:45 +02:00
Dominik Csapak
07a21616c2 tape: wait for calibration of LTO-9 tapes
In case we run into a ready check timeout, query the drive, and
increase the timeout to 2 hours and 5 minutes if it's calibrating (5
minutes headroom). This is effectively a generalization of commit
0b1a30aa ("tape: adapt format_media for LTO9+"), which increased the
timeout for the format procedure, while this change also covers tapes
that were not explicitly formatted but get auto-formatted indirectly
on the first action changing a fresh tape, e.g. barcode labeling.

The actual reason for this is that since LTO-9, initial loading of
tapes into a drive can block for up to 2 hours according to the spec.
The IBM and HP LTO SCSI references can be found rather easily [0][1].

As for the timeout, IBM only mentions it in their recommendations:
> Although most optimizations will complete within 60 minutes some
> optimizations may take up to 2 hours.

And HP states:
> Media initialization adds a variable amount of time to the
> initialization process that typically takes between 20 minutes and
> 2 hours.

So it seems there is no hard limit and the duration depends on the
drive, but most ordinary setups should be covered, and in my tests it
always took around the 1 hour mark.

0: IBM LTO-9 https://www.ibm.com/support/pages/system/files/inline-files/LTO%20SCSI%20Reference_GA32-0928-05%20(EXTERNAL)_0.pdf
1: HP LTO-9 https://support.hpe.com/hpesc/public/docDisplay?docId=sd00001239en_us&page=GUID-D7147C7F-2016-0901-0921-000000000450.html
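
A rough sketch of the described behavior in Rust; the trait and helper
names (`TapeDrive`, `test_unit_ready`, `is_calibrating`) are
assumptions and do not match the actual implementation:

```
use std::time::{Duration, Instant};

use anyhow::{bail, Error};

/// Hypothetical drive interface; the real code issues SCSI commands.
trait TapeDrive {
    fn test_unit_ready(&mut self) -> Result<bool, Error>;
    fn is_calibrating(&mut self) -> Result<bool, Error>;
}

const DEFAULT_TIMEOUT: Duration = Duration::from_secs(5 * 60);
// 2 hours for the optimization itself plus 5 minutes headroom
const CALIBRATION_TIMEOUT: Duration = Duration::from_secs(2 * 3600 + 5 * 60);

fn wait_until_ready<D: TapeDrive>(drive: &mut D) -> Result<(), Error> {
    let start = Instant::now();
    let mut timeout = DEFAULT_TIMEOUT;
    loop {
        if drive.test_unit_ready()? {
            return Ok(());
        }
        if start.elapsed() >= timeout {
            // on timeout, query the drive: a fresh LTO-9+ tape may be
            // calibrating, so keep waiting with the extended deadline
            if timeout < CALIBRATION_TIMEOUT && drive.is_calibrating()? {
                timeout = CALIBRATION_TIMEOUT;
                continue;
            }
            bail!("timeout waiting for drive to become ready");
        }
        std::thread::sleep(Duration::from_secs(1));
    }
}
```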

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250415114043.2389789-1-d.csapak@proxmox.com
 [TL: extend commit message with info that Dominik provided in a
  reply]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-16 14:28:04 +02:00
Christian Ebner
cb9814e331 garbage collection: fix rare race in chunk marking phase
During phase 1 of garbage collection, referenced chunks are marked as
in use by iterating over all index files and updating the atime on
the chunks referenced by them.

In an edge case for long-running garbage collection jobs, where a
newly added snapshot (created after the start of GC) reused known
chunks from a previous snapshot, but the previous snapshot index
referencing them disappeared before the marking phase could reach
that index (e.g. pruned because the retention setting keeps only 1
snapshot), chunks known from that previous index file might not be
marked (given that no other index file references them).

Since commit 74361da8 ("garbage collection: generate index file list
via datastore iterators") this is even less likely, as the iteration
now also reads index files added during phase 1, and therefore either
the new or the previous index file will account for these chunks (the
previous backup snapshot can only be pruned after the new one has
finished, since it is locked). However, a small race window remains
between reading the snapshots in the backup group and reading the
actual index files for marking.

Fix this race (see the sketch below) by:
1. checking if the last snapshot of a group disappeared, and if so
2. generating the list again, looking for new index files previously
   not accounted for,
3. and, to avoid possible endless looping, locking the group if the
   snapshot list still changed after the 10th retry (which causes
   concurrent operations on this group to fail).
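
A rough sketch of the retry logic in Rust, with hypothetical stand-in
types (`Group`, `LockGuard`); for simplicity it compares whole
snapshot lists instead of only checking whether the last snapshot
disappeared:

```
use anyhow::Error;

// Hypothetical stand-ins for the real datastore types; this only
// illustrates the retry logic, not the actual implementation.
struct LockGuard;
trait Group {
    fn snapshot_list(&self) -> Result<Vec<String>, Error>;
    fn lock(&self) -> Result<LockGuard, Error>;
    fn mark_chunks_in_use(&self, snapshots: &[String]) -> Result<(), Error>;
}

fn mark_group<G: Group>(group: &G) -> Result<(), Error> {
    let mut snapshots = group.snapshot_list()?;
    for retry in 0..=10 {
        // lock only as a last resort, so concurrent backups keep
        // working during the (possibly long) marking phase
        let guard = if retry == 10 { Some(group.lock()?) } else { None };

        group.mark_chunks_in_use(&snapshots)?;

        // re-read the snapshot list: if it changed (e.g. the last known
        // snapshot was pruned), newly added index files may reference
        // chunks not accounted for yet, so mark again
        let current = group.snapshot_list()?;
        if current == snapshots || guard.is_some() {
            return Ok(());
        }
        snapshots = current;
    }
    unreachable!("loop returns on the locked iteration at the latest");
}
```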

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Acked-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Link: https://lore.proxmox.com/20250416105000.270166-3-c.ebner@proxmox.com
2025-04-16 14:17:24 +02:00
Christian Ebner
31dbaf69ab garbage collection: fail on ArchiveType::Blob in open index reader
Instead of returning None, fail if the open index reader is called
on a blob file. Blobs cannot be read as an index anyway, and this
allows distinguishing cases where the index file cannot be read
because it vanished.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250416105000.270166-2-c.ebner@proxmox.com
2025-04-16 14:17:24 +02:00
Shannon Sterz
af5ff86a26 sync: switch reader back to a shared lock
the below commit accidentally switched this lock to an exclusive lock
when it should just be a shared one as that is sufficient for a
reader:

e2c1866b: datastore/api/backup: prepare for fix of #3935 by adding
lock helpers

this has already caused failed backups for a user with a sync job that
runs while they are trying to create a new backup.

https://forum.proxmox.com/threads/165038

Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
2025-04-16 11:35:27 +02:00
Christian Ebner
5fc281cd89 garbage collection: fix: account for created/deleted index files
Since commit 74361da8 ("garbage collection: generate index file list
via datastore iterators") not only snapshots present at the start of
the garbage collection run are considered for marking, but also newly
added ones. Take these into account by adapting the total index file
counter used for the progress output.

Further, also correctly take into account index files which have been
pruned during GC and are therefore present in the list of index files
still to be processed, but never encountered by the datastore
iterators. These would otherwise be incorrectly interpreted as strange
paths and logged accordingly, causing confusion as reported in the
community forum [0].

Fixes: https://forum.proxmox.com/threads/164968/
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-15 12:17:21 +02:00
Fabian Grünbichler
6c6257b94e build: add .do-static-cargo-build target
else parallel builds of the static binaries will not work correctly, just like
with the regular .do-cargo-build.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2025-04-15 12:16:51 +02:00
Christian Ebner
c644f7bc85 build: include pxar in static binary compilation and package
The Debian package providing the dynamically linked version of the
proxmox-backup-client is packaged together with the pxar executable.

To be in line with that and for user convenience, include a statically
linked version of pxar in the static package as well.

Renames the STATIC_BIN env variable to STATIC_BINS to reflect that it
now covers multiple binaries, and stores the rustc flags in their own
variable so they can be reused, since `cargo rustc` does not allow
invocations with multiple `--package` arguments at once.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-15 12:16:51 +02:00
Christian Ebner
4a022e1a3f api: backup: include previous snapshot name in log message
Extends the log messages written to the server's backup worker task
log to include the name of the snapshot used as the previous snapshot.

This information facilitates debugging, as the previous snapshot
might have been pruned since then.

For example, instead of
```
download 'index.json.blob' from previous backup.
register chunks in 'drive-scsi0.img.fidx' from previous backup.
download 'drive-scsi0.img.fidx' from previous backup.
```

this now logs
```
download 'index.json.blob' from previous backup 'vm/101/2025-04-15T09:02:10Z'.
register chunks in 'drive-scsi0.img.fidx' from previous backup 'vm/101/2025-04-15T09:02:10Z'.
download 'drive-scsi0.img.fidx' from previous backup 'vm/101/2025-04-15T09:02:10Z'.
```

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-15 12:12:06 +02:00
Christian Ebner
9247d57fdf docs: describe the intent for the statically linked pbs client
Discourage the use of the statically linked binary for systems where
the regular one is available.

Moves the previous note into its own section and links to the
installation section.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250410093059.130504-1-c.ebner@proxmox.com
2025-04-10 21:01:24 +02:00
Gabriel Goller
427c687e35 restrict consent-banner text length
Add a maxLength of 64*1024 in the frontend and the API. We allow a
max body size of 512*1024 in the API (with patch [0]), so we should be
fine.

[0]: https://git.proxmox.com/?p=proxmox.git;a=commit;h=cf9e6c03a092acf8808ce83dad9249414fe4d588

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Link: https://lore.proxmox.com/20250410082052.53097-1-g.goller@proxmox.com
2025-04-10 11:40:51 +02:00
Lukas Wagner
f9532a3a84 ui: token view: rephrase token regenerate dialog message
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.proxmox.com/20250410085124.81931-2-l.wagner@proxmox.com
2025-04-10 11:38:51 +02:00
Lukas Wagner
d400673641 ui: token view: fix typo in 'lose'
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250410085124.81931-1-l.wagner@proxmox.com
2025-04-10 11:38:51 +02:00
Thomas Lamprecht
cdc710a736 d/control: normalize with wrap-and-sort -tkn
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-09 18:16:35 +02:00
Thomas Lamprecht
36ef1b01f7 bump version to 3.4.0-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-09 18:00:47 +02:00
Christian Ebner
f91d5912f1 docs: mention verify or encrypted only flags for sync jobs
Extends the sync job documentation to explicitly mention that sync
jobs can be constrained by these flags on the snapshot level.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250409155223.309771-1-c.ebner@proxmox.com
2025-04-09 18:00:06 +02:00
Thomas Lamprecht
c08c934c02 docs: add basic info for how to install the statically linked client
To have something in the docs.

In the long run we want a somewhat fancy and safe mechanism to host
these builds directly on the CDN and implement querying that for
updates, verified with a baked-in public key, but for starters this
very basic documentation has to suffice.

We could also describe how to extract the client from the .deb through
`ar` or `dpkg -x`, but that feels a bit too hacky for the docs, maybe
better explained on-demand in the forum or the like.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-09 17:28:13 +02:00
Thomas Lamprecht
9dfd0657eb docs: client usage: define anchor for chapter
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-09 17:27:49 +02:00
Christian Ebner
d39f1a4b57 docs: mention different name resolution for statically linked binary
Add a note mentioning that the statically linked binary does not use
the same mechanism for name resolution as the regular client, in
particular that this does not support NSS.

The statically linked binary cannot use the `getaddrinfo` based name
resolution because of possible ABI incompatibility. It is therefore
conditionally compiled and linked using the name resolution provided
by hickory-resolver, part of hickory-dns [0].

[0] https://github.com/hickory-dns/hickory-dns

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-09 15:23:14 +02:00
Christian Ebner
83e7b9de88 client: http: Use custom resolver for statically linked binary
The dependency on the `getaddrinfo` based `GaiResolver` used by
default for the `HttpClient` is not suitable for the statically
linked binary of the `proxmox-backup-client`, because of the
dependency on glibc NSS libraries, as described in glibc's FAQs [0].

As a workaround, conditionally compile the binary using the `hickory-dns`
resolver.

[0] https://sourceware.org/glibc/wiki/FAQ#Even_statically_linked_programs_need_some_shared_libraries_which_is_not_acceptable_for_me.__What_can_I_do.3F

Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
FG: bump proxmox-http dependency
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2025-04-09 15:23:14 +02:00
Christian Ebner
601a84ae74 fix #4788: build static version of client
Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=4788

Build and package a statically linked binary version of
proxmox-backup-client to facilitate updates and distribution.
This provides a mechanism to obtain and repackage the client for
external parties and Linux distributions.

The statically linked client is provided as a dedicated package,
conflicting with the regular package.

Since the RUSTFLAGS env variables are not preserved when building
with dpkg-buildpackage, invoke `cargo rustc` instead, which allows
setting the required arguments.

Credit goes also to Christoph Heiss, as this patch is loosely based
on his pre-existing work for the proxmox-auto-install-assistant [0],
which provided a good template.

Also, place the libsystemd stub into its own subdirectory for cleaner
separation from the compiled artifacts.

[0] https://lore.proxmox.com/pve-devel/20240816161942.2044889-1-c.heiss@proxmox.com/

Suggested-by: Christoph Heiss <c.heiss@proxmox.com>
Originally-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: fold in fixups
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2025-04-09 15:23:14 +02:00
Fabian Grünbichler
152dc37057 build: always set --target
since it affects whether cargo puts build artifacts directly into
target/debug (or target/release) or into a target-specific
sub-directory.

the package build will always pass `--target $(DEB_HOST_RUST_TYPE)`,
since it invokes the cargo wrapper in /usr/share/cargo/bin/cargo, so
this change unifies the behaviour across plain `make` and `make
deb`.

direct calls to `cargo build/test/..` will still work as before.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-09 13:34:40 +02:00
Aaron Lauterer
e98e962904 ui tasks: use view task instead of open task
This aligns the tooltips with how we have them in Proxmox VE. Using
"view" instead of "open" should make it clear that this is a safe,
read-only action.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Link: https://lore.proxmox.com/20241118104959.95159-1-a.lauterer@proxmox.com
2025-04-09 12:52:11 +02:00
Lukas Wagner
f117dabcf0 docs: notification: use unicode arrow instead of ->
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250409084628.125951-3-l.wagner@proxmox.com
2025-04-09 11:47:20 +02:00
Lukas Wagner
6d193b9a1e docs: notifications: reflow text to 80 characters
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250409084628.125951-2-l.wagner@proxmox.com
2025-04-09 11:47:20 +02:00
Lukas Wagner
d25ec96c21 docs: notifications: add section about how to use custom templates
This section is meant to give a basic overview on how to use
custom templates for notifications. It will be expanded in the
future, providing a more detailed view on how templates are resolved,
existing fallback mechanisms, available templates, template
variables and helpers.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Alexander Zeidler <a.zeidler@proxmox.com>
Link: https://lore.proxmox.com/20250409084628.125951-1-l.wagner@proxmox.com
2025-04-09 11:47:20 +02:00
Friedrich Weber
839b7d8c89 ui: set error mask: ensure that message is html-encoded
to avoid interpreting HTML in the message when displaying the mask.

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
2025-04-08 17:07:16 +02:00
Thomas Lamprecht
f7f61002ee cargo: require proxmox-rest-server 0.8.9
To ensure the accepted HTTP request body size is 512 KiB for the
consent banner stuff.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-08 17:07:13 +02:00
Christian Ebner
266becd156 docs: mention how to set the push sync jobs rate limit
Explicitly mention how to set the rate limit for sync jobs in push
direction to avoid possible confusion.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250318094756.204368-2-c.ebner@proxmox.com
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
2025-04-08 13:25:00 +02:00
Christian Ebner
37a85cf616 fix: ui: sync job: edit rate limit based on sync direction
Commit 9aa213b8 ("ui: sync job: adapt edit window to be used for pull
and push") adapted the sync job edit window so jobs in both push and
pull direction can be edited using the same window. This however did
not include switching the direction to which the HTTP client rate
limit is applied.

Fix this by additionally adding the edit field for `rate-out` and
conditionally hiding the less useful rate limit direction (rate-out
for pull and rate-in for push). This allows preserving the values if
explicitly set via the sync job config.

Reported in the community forum:
https://forum.proxmox.com/threads/163414/

Fixes: 9aa213b8 ("ui: sync job: adapt edit window to be used for pull and push")
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250318094756.204368-1-c.ebner@proxmox.com
2025-04-08 13:25:00 +02:00
Fabian Grünbichler
8a056670ea sync: print whole error chain per group
instead of just the top-most context/error, which often excludes
relevant information, such as when locking fails.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2025-04-08 13:14:36 +02:00
Gabriel Goller
a7a28c4d95 ui: remove unnecessary Ext.htmlEncode call
The Ext.htmlEncode call is unnecessary; it is already called in
Markdown.parse.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Link: https://lore.proxmox.com/20241210161358.385845-1-g.goller@proxmox.com
2025-04-08 13:04:02 +02:00
Thomas Lamprecht
254169f622 cargo: update proxmox-sys to 0.6.7
To ensure the updated memory usage calculation [0] gets used.

[0]: https://git.proxmox.com/?p=proxmox.git;a=commit;h=58d6e8d4925b342a0ab4cfa4bfde76f092e2465a

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-07 21:19:16 +02:00
Maximiliano Sandoval
33024ffd43 options-view: Fix typo in chache
Fixes: 5e778d98 ("ui: datastore tuning options: increase width and rework labels")
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.proxmox.com/20250407134039.383887-1-m.sandoval@proxmox.com
2025-04-07 17:21:59 +02:00
Thomas Lamprecht
dfc0278248 bump version to 3.3.7-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-05 19:00:38 +02:00
Thomas Lamprecht
8e50c75fca ui: access control: re-order and separate secret regeneration top-bar button
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-05 18:59:25 +02:00
Thomas Lamprecht
98abd76579 ui: sync job: code style fix: ensure xtype is declared first
the widget type is the most important property as it defines how every
other property will be interpreted, so it should always come first.
Move name afterwards, as that is almost always the key for how the
data will be sent to the backend and thus also quite relevant.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-05 18:59:25 +02:00
Thomas Lamprecht
bd95fd5797 ui: sync job: increase window width to 720px to make it less cramped
That width is already used in a few places, we might even want to
change the edit window default in the future.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-05 18:59:25 +02:00
Thomas Lamprecht
bccff939fa ui: sync job: small style & casing-consistency fixes
Ensure title-case is honored; while at it, drop the "snapshot" prefix
for the advanced options, as we do not use that for non-advanced
options like "Remove Vanished" either. This avoids some field labels
wrapping over multiple lines, at least for English.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-05 18:59:25 +02:00
Thomas Lamprecht
a3815aff82 cargo: require newer pbs-api-types crate
To ensure all the new fields for the datacenter tuning options and
realms are available.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-05 17:40:10 +02:00
Thomas Lamprecht
d1fd12d82d ui: datastore tuning options: render cut-off time as human readable
For now just in the general datacenter option view, not when editing
the tuning options. To also allow entering this value, we should
first provide our backend implementation as WASM to avoid having to
redo this in JavaScript.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-05 17:40:10 +02:00
Thomas Lamprecht
5e778d983a ui: datastore tuning options: increase width and rework labels
This was getting cramped, and it might actually be even nicer to go
to a more verbose style like we use for advanced settings of backup
jobs in Proxmox VE, with actual sentences describing the options'
basic effects and rationale.

But this is way quicker to do and already adds a bit more rationale,
and we can always do more later on when there's less release time
pressure.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-05 17:40:10 +02:00
Thomas Lamprecht
4c0583b14e ui: update online help info reference-map
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-05 17:40:10 +02:00
Thomas Lamprecht
dc914094c9 ui: token edit: fix missing trailing-comma
Fixes: d49a27ed ("ui: only add delete parameter on token edit, not when creating tokens")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-05 17:40:10 +02:00
Christian Ebner
6c774660a7 docs: add description for gc-cache-capacity tuning parameter
Adds a bullet point to the listed datastore tuning parameters,
describing its functionality, implications and typical values.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/pbs-devel/20250404130713.376630-4-c.ebner@proxmox.com
 [TL: address trivial merge conflict from context changes]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-05 17:40:10 +02:00
Christian Ebner
6df6d3094c ui: expose GC cache capacity in datastore tuning parameters.
Displays and allows editing the configured LRU cache capacity via the
datastore tuning parameters.

A step of 1024 is used in the number field for convenience when using
the buttons; more fine-grained values can be set by typing.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/pbs-devel/20250404130713.376630-3-c.ebner@proxmox.com
 [TL: address trivial merge conflict from context changes]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-05 17:40:10 +02:00
Christian Ebner
f1a711c830 garbage collection: set phase1 LRU cache capacity by tuning option
Allow controlling the capacity of the cache used to track recently
touched chunks via the configured value in the datastore tuning
options. If an explicit value is set, log it to the task log, allowing
the user to confirm the setting and aiding debugging.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/pbs-devel/20250404130713.376630-2-c.ebner@proxmox.com
2025-04-05 17:40:10 +02:00
Christian Ebner
3f1e103904 ui: sync job: expose new encrypted and verified only flags
Allows the user to set the encrypted/verified only flags in the
advanced settings of a sync job edit window.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/pbs-devel/20250404132106.388829-6-c.ebner@proxmox.com
2025-04-05 17:39:27 +02:00
Christian Ebner
f9270de9ef bin: manager: expose encrypted/verified only flags for cli
Allow performing a push/pull sync job including only encrypted and/or
verified backup snapshots via the command line.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/pbs-devel/20250404132106.388829-5-c.ebner@proxmox.com
2025-04-05 17:39:27 +02:00
Christian Ebner
40ccd1ac9e fix #6072: server: sync encrypted or verified snapshots only
Skip over snapshots which have not been verified or encrypted if the
sync job has the corresponding flags set.
A snapshot is considered encrypted if all the archives in the
manifest have `CryptMode::Encrypt`. A snapshot is considered
verified when the manifest's verify state is set to
`VerifyState::Ok` (see the sketch below).

This allows synchronizing only a subset of the snapshots which are
known to be fine (verified) or which are known to be encrypted. The
latter is of most interest for sync jobs in push direction to
untrusted or less-trusted remotes, where it might be desired to not
expose unencrypted contents.

Link to the bugtracker issue:
https://bugzilla.proxmox.com/show_bug.cgi?id=6072
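
A simplified sketch of the described filtering, with made-up manifest
types standing in for the real ones:

```
// Simplified, hypothetical manifest types; the real definitions live
// in the pbs-api-types / pbs-datastore crates.
#[derive(PartialEq)]
enum CryptMode { None, SignOnly, Encrypt }
#[derive(PartialEq)]
enum VerifyState { Ok, Failed }

struct FileInfo { crypt_mode: CryptMode }
struct Manifest { files: Vec<FileInfo>, verify_state: Option<VerifyState> }

fn should_sync(manifest: &Manifest, encrypted_only: bool, verified_only: bool) -> bool {
    // encrypted: every archive in the manifest must have CryptMode::Encrypt
    if encrypted_only
        && !manifest.files.iter().all(|f| f.crypt_mode == CryptMode::Encrypt)
    {
        return false;
    }
    // verified: the manifest's verify state must be VerifyState::Ok
    if verified_only && manifest.verify_state != Some(VerifyState::Ok) {
        return false;
    }
    true
}
```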

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/pbs-devel/20250404132106.388829-4-c.ebner@proxmox.com
2025-04-05 17:39:27 +02:00
Christian Ebner
ab5b64fadf api: sync: honor sync jobs encrypted/verified only flags
Extend the sync job config API to accept the 'encrypted-only' and
'verified-only' flags, allowing the sync to include only encrypted
and/or verified backup snapshots, excluding others.

Set these flags on the sync job's push or pull parameters on job
invocation.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/pbs-devel/20250404132106.388829-3-c.ebner@proxmox.com
2025-04-05 17:39:27 +02:00
Hannes Laimer
713fa6ee55 fix #3887: ui: add regenerate token button
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2025-04-05 17:39:27 +02:00
Hannes Laimer
f41a233a8e fix #3887: api: access: allow secret regeneration
... through the token PUT endpoint by adding a new `regenerate` bool
parameter.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2025-04-05 17:39:27 +02:00
Hannes Laimer
6f9c16d5d4 fix #4382: api: remove permissions and tokens of user on deletion
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2025-04-05 17:39:27 +02:00
Hannes Laimer
d93d7a8299 fix #4382: api: access: remove permissions of token on deletion
... and move token deletion into a new `do_delete_token` function,
since it'll be reused later on user deletion.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2025-04-05 17:39:27 +02:00
Hannes Laimer
17f183c40b pbs-config: move secret generation into token_shadow
so we have only one place where we generate secrets.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2025-04-05 17:39:27 +02:00
Christoph Heiss
d977da6411 docs: user-management: document pam and pbs authentication realm
Mostly taken from pve-docs and adapted as needed.

Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
2025-04-05 17:39:27 +02:00
Christoph Heiss
960149b51e ui: utils: make built-in PBS realm editable using new AuthSimplePanel
The comment & default property can be updated for the built-in PBS
realm, which the AuthSimplePanel from widget-toolkit implements.

Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
2025-04-05 17:38:30 +02:00
Christoph Heiss
074d957169 ui: access control: enable default realm checkbox for all realms
This uses the functionality previously introduced in widget-toolkit as
part of this series, which is gated behind this flag.

Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
2025-04-05 17:38:00 +02:00
Christoph Heiss
8529e79983 ui: access control: set useTypeInUrl property per specific realm
The built-in PAM and PBS realms use slightly different API paths,
without the type in the URL, as that would be redundant anyway. Thus
move the setting to per-realm.

Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
 [TL: commit subject style fix]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-05 17:35:49 +02:00
Christoph Heiss
5b0c6a80e5 api: access: add update support for built-in PBS realm
For the built-in PBS authentication realm, the comment and whether it
should be the default login realm can be updated. Add the required API
plumbing for it.

Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
2025-04-05 17:34:38 +02:00
Christoph Heiss
029654a61d api: access: add update support for built-in PAM realm
For the built-in PAM authentication realm, the comment and whether it
should be the default login realm can be updated. Add the required API
plumbing for it.

Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
2025-04-05 17:34:36 +02:00
Christoph Heiss
a738d2bcc9 config: use new dedicated PAM and PBS realm types
Currently, the built-in PAM and PBS authentication realms are (hackily)
hardcoded. Replace that with the new, proper API types for these two
realms, thus treating them like any other authentication realm.

Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
2025-04-05 17:34:36 +02:00
Christoph Heiss
234de23a50 fix #5379: api: access: set default realm accordingly on individual update
Whenever the `default` field is set to `true` for any realm, the
`default` field must be unset first from all realms to ensure that only
ever exactly one realm is the default.

Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
2025-04-05 17:34:33 +02:00
Christoph Heiss
bf708e8cd7 fix #5379: api: access: add default property for all realm types
Now that all the realms support this field, add the required API
plumbing for it.

Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
2025-04-05 17:34:29 +02:00
Christian Ebner
3ba907c888 docs: mention gc-atime-cutoff as datastore tuning option
Document the gc-atime-cutoff option and describe the behavior it
controls, adding it as an additional bullet point to the documented
datastore tuning options.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-05 13:18:22 +02:00
Christian Ebner
b5ba40095d ui: expose GC atime cutoff in datastore tuning option
Allows setting the atime cutoff for phase 2 of garbage collection in
the datastore's tuning parameters. This value changes the time after
which a chunk is no longer considered in use if it falls outside of
the cutoff window.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-05 13:18:22 +02:00
Christian Ebner
daa9d0a9d5 datastore: use custom GC atime cutoff if set
Use the user-configured atime cutoff over the default 24h 5m margin
if explicitly set, otherwise fall back to the default.

Move the minimum atime calculation based on the atime cutoff to the
sweep_unused_chunks() call site and pass in the calculated values, so
as to keep the logic in one place.

Add log output showing which cutoff and minimum access time are used
by the garbage collection.
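
A minimal sketch of the cutoff handling, assuming the tuning value is
given in minutes (names are illustrative, not the actual code):

```
use std::time::{Duration, SystemTime};

// default margin named above: 24 hours plus 5 minutes
const DEFAULT_CUTOFF: Duration = Duration::from_secs(24 * 3600 + 5 * 60);

/// `cutoff_minutes` stands in for the gc-atime-cutoff tuning value
/// (assumed here to be given in minutes).
fn min_atime(gc_start: SystemTime, cutoff_minutes: Option<u64>) -> SystemTime {
    let cutoff = cutoff_minutes
        .map(|m| Duration::from_secs(m * 60))
        .unwrap_or(DEFAULT_CUTOFF);
    // chunks with an atime older than this fall outside the cutoff
    // window and are considered unused by phase 2
    gc_start - cutoff
}
```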

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-05 13:18:22 +02:00
Christian Ebner
c6a87e340c docs: mention GC atime update check for tuning options
Document the gc-atime-safety-check flag and describe the behavior it
controls, adding it as an additional bullet point to the documented
datastore tuning options.

This also fixes the indentation of the CLI example showing how to set
the sync level, to make it clear that it still belongs to the
previous bullet point.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-05 13:18:22 +02:00
Christian Ebner
bb8e7e2b48 ui: expose GC atime safety check flag in datastore tuning options
Allow editing the atime safety check flag via the datastore tuning
options edit window.

Do not expose the flag for datastore creation, as it is strongly
discouraged to create datastores on filesystems that do not correctly
handle atime updates as the garbage collection expects. It is
nevertheless still possible to create a datastore via the CLI and
pass in the `--tuning gc-atime-safety-check=false` option.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-05 13:18:22 +02:00
Christian Ebner
b18eab64a9 fix #5982: garbage collection: check atime updates are honored
Check if the filesystem backing the chunk store actually updates the
atime, to avoid potential data loss in phase 2 of garbage collection
in case the atime update is not honored.

Perform the check before phase 1 of garbage collection, as well as on
datastore creation. The latter is to detect early and disallow
datastore creation on filesystem configurations which otherwise would
most likely lead to data loss. To perform the check also when reusing
an existing datastore, open the chunk store on reuse as well.

Enable the atime update check by default, but allow opting out via a
datastore tuning parameter flag for backwards compatibility. This is
honored by both garbage collection and datastore creation.

The check uses a fixed-size 4 MiB, unencrypted and compressed chunk as
test marker, inserted if not present. This all-zero chunk is very
likely to exist anyway for unencrypted backup contents with large
all-zero regions using fixed-size chunking (e.g. VMs).

To avoid cases where the timestamp will not be updated because of the
Linux kernel's timestamp granularity, sleep for 1 second between the
chunk insert (including an atime update if pre-existing) and the
subsequent stat + utimensat (see the sketch below).
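
A rough sketch of the check, using the `filetime` crate in place of
the raw utimensat() call; the marker name and overall structure are
assumptions, not the actual implementation:

```
use std::{fs, path::Path, thread::sleep, time::Duration};

use anyhow::{bail, Context, Error};
use filetime::{set_file_atime, FileTime};

fn check_atime_updates(chunk_store: &Path) -> Result<(), Error> {
    let marker = chunk_store.join("all-zero-test-chunk"); // name assumed
    if !marker.exists() {
        // insert the 4 MiB all-zero test marker (compression elided)
        fs::write(&marker, vec![0u8; 4 * 1024 * 1024])?;
    }
    // sleep past the kernel's timestamp granularity before updating
    sleep(Duration::from_secs(1));
    // the real code calls utimensat(); the filetime crate wraps the
    // same functionality
    set_file_atime(&marker, FileTime::now())?;
    let atime = fs::metadata(&marker)?
        .accessed()
        .context("filesystem reports no access time")?;
    if atime.elapsed()? > Duration::from_secs(1) {
        bail!("filesystem does not honor atime updates");
    }
    Ok(())
}
```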

Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=5982
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-05 13:18:22 +02:00
Christian Ebner
8f6874391f chunk store: set file ownership on chunk insert as root user
Inserting a new chunk into the chunk store as a process running with
root privileges currently does not set an explicit ownership on the
chunk file. As a consequence this will lead to permission issues if
the chunk is operated on by a codepath executed in the less
privileged proxy task running as the `backup` user.

Therefore, explicitly set the ownership and permissions of the chunk
file upon insert if the process is executed as the `root` user.
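
A minimal sketch of the described fixup, with assumed uid/gid
parameters and mode bits (the actual values and code location differ):

```
use std::os::unix::fs::{chown, PermissionsExt};
use std::{fs, io, path::Path};

/// `backup_uid`/`backup_gid` would be looked up for the `backup` user;
/// values and helper name are assumed for illustration.
fn fixup_chunk_ownership(chunk: &Path, backup_uid: u32, backup_gid: u32) -> io::Result<()> {
    // only needed when the inserting process runs as root; otherwise
    // the file is already owned by the unprivileged creating user
    if unsafe { libc::geteuid() } != 0 {
        return Ok(());
    }
    chown(chunk, Some(backup_uid), Some(backup_gid))?;
    fs::set_permissions(chunk, fs::Permissions::from_mode(0o644))?;
    Ok(())
}
```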

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-05 13:18:22 +02:00
Thomas Lamprecht
b48427720a bump version to 3.3.6-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-03 17:57:16 +02:00
Maximiliano Sandoval
2084fd39c4 docs: client: add section about system credentials
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-04-03 17:53:25 +02:00
Maximiliano Sandoval
d4a2730b1b pbs-client: allow reading fingerprint from system credential
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-04-03 17:53:25 +02:00
Maximiliano Sandoval
b0cd9e84f5 pbs-client: allow reading default repository from system credential
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-04-03 17:53:25 +02:00
Maximiliano Sandoval
912c8c4027 pbs-client: make get_encryption_password return a String
As per the note in the documentation [1], passwords are valid UTF-8.
This allows us to use the shared helper.

[1] https://pbs.proxmox.com/docs/backup-client.html#environment-variables

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-04-03 17:53:25 +02:00
Maximiliano Sandoval
263651912e pbs-client: use helper for getting UTF-8 password
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-04-03 17:53:25 +02:00
Maximiliano Sandoval
4b26fb2bd7 pbs-client: add helper for getting UTF-8 secrets
We are going to add more credentials so it makes sense to have a common
helper to get the secrets.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-04-03 17:53:25 +02:00
Maximiliano Sandoval
70e1ad0efb pbs-client: use a const for the PBS_REPOSITORY env variable
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-04-03 17:53:25 +02:00
Shannon Sterz
d49a27ede8 ui: only add delete parameter on token edit, not when creating tokens
otherwise tokens without comments can no longer be created as the api
will reject the additional `delete` parameter. this bug was introduced
by commit:

3fdf876: api: token: make comment deletable
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
2025-04-03 17:53:09 +02:00
Shannon Sterz
f09f2e0d9e datastore/api: add error message on failed removal due to old locking
group or namespace removal can fail if the old locking mechanism is
still in use, as it is unsafe to properly clean up in that scenario.
return an error message that explains how to rectify that situation.

Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
 [TL: address simple merge conflict and fine tune message to admins]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-03 16:10:16 +02:00
Shannon Sterz
d728c2e836 datastore: ignore group locking errors when removing snapshots
this is only needed for removing the group if the last snapshot is
removed, ignore locking failures, as the user can't do anything to
rectify the situation anymore.

log the locking error for debugging purposes, though.

Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
 [TL: line-wrap comment at 100cc and fix bullet-point indentation]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-03 13:06:24 +02:00
Thomas Lamprecht
7fbe029ceb bump version to 3.3.5-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-02 19:57:51 +02:00
Thomas Lamprecht
907ba4dd61 fix version for upgrade handling for datastore locking using /run now
See commit 27dd7377 ("fix #3935: datastore/api/backup: move datastore
locking to '/run'") for details. As I'll bump PBS now, we can fixate
the version and drop the safety-net "reminder" from d/rules again.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-02 19:57:51 +02:00
Thomas Lamprecht
7e15e6039b d/postinst: drop upgrade handling from PBS 1 as old-version
Safe to do in PBS 3, as one cannot skip a major version on upgrade;
that is a hard limitation.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-02 19:57:51 +02:00
Christian Ebner
03143eee0a fix #5331: garbage collection: avoid multiple chunk atime updates
To reduce the number of atime updates, keep track of the recently
marked chunks in phase 1 of garbage collection to avoid multiple
atime updates via expensive utimensat() calls.

Recently touched chunks are tracked by storing the chunk digests in
an LRU cache of fixed capacity. By inserting a digest, the chunk will
be the most recently touched one, and if it was already present in the
cache before the insert, the atime update can be skipped. The cache
capacity of 1024 * 1024 was chosen as a compromise between required
memory usage and the size of an index file referencing a 4 TiB
fixed-size chunked image (with 4 MiB chunk size).

The previous change to iterate over the datastore contents using the
datastore's iterator helps for increased cache hits, as subsequent
snapshots are most likely to share common chunks.

Basic benchmarking:

The number of utimensat calls shows a significant reduction:
unpatched: 31591944
patched:    1495136

Total GC runtime shows a significant reduction (average of 3 runs):
unpatched: 155.4 ± 3.5 s
patched:    22.8 ± 0.5 s

VmPeak measured via /proc/self/status before and after
`mark_used_chunks` (proxmox-backup-proxy was restarted in between
for normalization, average of 3 runs):
unpatched before: 1196028 ± 0 kB
unpatched after:  1196028 ± 0 kB

patched before:   1163337 ± 28317 kB
patched after:    1330906 ± 29280 kB
delta:             167569 kB

Dependence on the cache capacity:
     capacity runtime[s]  VmPeakDiff[kB]
       1*1024     66.221               0
      10*1024     36.164               0
     100*1024     23.141               0
    1024*1024     22.188          101060
 10*1024*1024     23.178          689660
100*1024*1024     25.135         5507292

Description of the PBS host and datastore:
CPU: Intel Xeon E5-2620
Datastore backing storage: ZFS RAID 10 with 3 mirrors of 2x
ST16000NM001G, mirror of 2x SAMSUNG_MZ1LB1T9HALS as special

Namespaces: 45
Groups: 182
Snapshots: 3184
Index files: 6875
Deduplication factor: 44.54

Original data usage: 120.742 TiB
On-Disk usage: 2.711 TiB (2.25%)
On-Disk chunks: 1494727
Average chunk size: 1.902 MiB

Distribution of snapshots (binned by month):
2023-11	11
2023-12	16
2024-01	30
2024-02	38
2024-03	17
2024-04	37
2024-05	17
2024-06	59
2024-07	99
2024-08	96
2024-09	115
2024-10	35
2024-11	42
2024-12	37
2025-01	162
2025-02	489
2025-03	1884

Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=5331
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-02 19:57:51 +02:00
Christian Ebner
74361da855 garbage collection: generate index file list via datastore iterators
Instead of iterating over all index files found in the datastore in
an unstructured manner, use the datastore iterators to logically
iterate over them as other datastore operations will.

This allows better distinguishing index files in unexpected locations
from ones in their expected location, warning the user about
unexpected ones so possible misconfigurations can be acted upon.
Further, this will allow integrating the marking of snapshots with
missing chunks as incomplete/corrupt more easily, and helps improve
cache hits when introducing LRU caching to avoid multiple atime
updates in phase 1 of garbage collection.

This now iterates twice over the index files, as indices in
unexpected locations are still considered by generating the list of
all index files to be found in the datastore and removing regular
index files from that list, leaving unexpected ones behind.

Further, align terminology by renaming the `list_images` method to
a more fitting `list_index_files`, and the variable names accordingly.

This will reduce possible confusion, since throughout the codebase and
in the documentation, files referencing the data chunks are referred
to as index files. The term image, on the other hand, is associated
with virtual machine images and other large binary data stored as
fixed-size chunks.

Basic benchmarking:

Total GC runtime shows no significant change (average of 3 runs):
unpatched: 155.4 ± 2.6 s
patched:   155.4 ± 3.5 s

VmPeak measured via /proc/self/status before and after
`mark_used_chunks` (proxmox-backup-proxy was restarted in between
for normalization, no changes for all 3 runs):
unpatched before: 1196032 kB
unpatched after:  1196032 kB

patched before: 1196028 kB
patched after:  1196028 kB

Listing images shows a slight increase due to the switch to a HashSet
(average of 3 runs):
unpatched: 64.2 ± 8.4 ms
patched:   72.8 ± 3.7 ms

Description of the PBS host and datastore:
CPU: Intel Xeon E5-2620
Datastore backing storage: ZFS RAID 10 with 3 mirrors of 2x
ST16000NM001G, mirror of 2x SAMSUNG_MZ1LB1T9HALS as special

Namespaces: 45
Groups: 182
Snapshots: 3184
Index files: 6875
Deduplication factor: 44.54

Original data usage: 120.742 TiB
On-Disk usage: 2.711 TiB (2.25%)
On-Disk chunks: 1494727
Average chunk size: 1.902 MiB

Distribution of snapshots (binned by month):
2023-11	11
2023-12	16
2024-01	30
2024-02	38
2024-03	17
2024-04	37
2024-05	17
2024-06	59
2024-07	99
2024-08	96
2024-09	115
2024-10	35
2024-11	42
2024-12	37
2025-01	162
2025-02	489
2025-03	1884

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-02 19:57:51 +02:00
Christian Ebner
c9bd214555 datastore: add helper method to open index reader from path
Refactor the archive type and index file reader opening, including its
error handling, into a helper method for better reusability.

This allows using the same logic for both expected and unexpected
image paths when iterating through the datastore in a hierarchical
manner.

Improve error handling by switching to anyhow's error context.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-02 19:57:51 +02:00
Christian Ebner
0b016e1efe garbage collection: format error including anyhow error context
Until now, errors are shown ignoring the anyhow error context. In
order to allow the garbage collection to return additional error
context, format the error including the context as a single line.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-02 19:57:51 +02:00
Christian Ebner
8d9dc69945 tools: lru cache: tell if node was already present or newly inserted
Add a boolean return type to LruCache::insert(), telling whether the
node was already present in the cache or newly inserted.

This will allow using the LRU cache for garbage collection, where
it is required to skip atime updates for chunks already marked as in
use.

That improves phase 1 garbage collection performance by avoiding
multiple atime updates.
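
A greatly simplified illustration of the changed contract; the real
LruCache performs actual LRU eviction, which is elided here:

```
use std::collections::HashSet;

// Greatly simplified: eviction and the actual LRU ordering are elided,
// only the changed insert() contract is shown.
struct LruCache { seen: HashSet<[u8; 32]> }

impl LruCache {
    /// returns true if the digest was newly inserted, false if it was
    /// already present (and, in the real cache, just moved to the front)
    fn insert(&mut self, digest: [u8; 32]) -> bool {
        self.seen.insert(digest)
    }
}

fn mark_chunk(cache: &mut LruCache, digest: [u8; 32]) {
    if cache.insert(digest) {
        // only chunks not seen recently need the expensive
        // utimensat() call to update their atime
        // touch_chunk_atime(&digest);
    }
}
```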

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-02 19:57:51 +02:00
Hannes Laimer
3fdf8769f4 api: token: make comment deletable
Currently, the only way to delete a comment on a token is to set it to
just spaces. Since we trim it in the endpoint, it gets deleted as a
side effect. This allows the comment to be deleted properly.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2025-04-02 18:38:52 +02:00
Dominik Csapak
320ea1cdb7 tape: introduce a tape backup job worker thread option
Using a single thread for reading is not optimal in some cases, e.g.
when the underlying storage can handle reads from multiple threads in
parallel.

We use the ParallelHandler to handle the actual reads. Make the
sync_channel buffer size depend on the number of threads so we have
space for two chunks per thread (but keep the minimum at 3 like
before).

How this impacts the backup speed largely depends on the underlying
storage and how the backup is laid out on it.

I benchmarked the following setups:

* Setup A: relatively spread out backup on a virtualized pbs on single HDDs
* Setup B: mostly sequential chunks on a virtualized pbs on single HDDs
* Setup C: backup on virtualized pbs on a fast NVME
* Setup D: backup on bare metal pbs with ZFS in a RAID10 with 6 HDDs
  and 2 fast special devices in a mirror

(values are reported in MB/s as seen in the task log, caches were
cleared between runs, backups were bigger than the memory available)

setup  1 thread  2 threads  4 threads  8 threads
A      55        70         80         95
B      110       89         100        108
C      294       294        294        294
D      118       180        300        300

So there are cases where multiple read threads speed up the tape backup
(dramatically). On the other hand there are situations where reading
from a single thread is actually faster, probably because we can read
from the HDD sequentially.

I left the default value of '1' to not change the default behavior.
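
A simplified sketch of the buffer sizing and reader/writer split using
plain threads and a sync_channel; the actual code uses the
ParallelHandler, and the helper functions here are made up:

```
use std::sync::mpsc::sync_channel;
use std::thread;

// hypothetical stand-ins for the real chunk reader and tape writer
fn read_chunk(digest: &[u8; 32]) -> Vec<u8> { digest.to_vec() }
fn write_to_tape(_data: &[u8]) {}

fn backup_chunks(threads: usize, chunks: Vec<[u8; 32]>) {
    // space for two chunks per reader thread, minimum of 3 like before
    let buffer = (threads * 2).max(3);
    let (tx, rx) = sync_channel::<Vec<u8>>(buffer);

    let per_thread = chunks.len().div_ceil(threads).max(1);
    for batch in chunks.chunks(per_thread) {
        let (tx, batch) = (tx.clone(), batch.to_vec());
        thread::spawn(move || {
            for digest in batch {
                tx.send(read_chunk(&digest)).unwrap();
            }
        });
    }
    drop(tx); // close the channel once all readers finish

    // the writer drains the channel sequentially, keeping the tape fed
    for data in rx {
        write_to_tape(&data);
    }
}
```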

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
 [TL: update comment about mpsc buffer size for clarity and drop
  commented-out debug-code]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-02 16:45:14 +02:00
Thomas Lamprecht
13b15bce11 cargo: require newer pbs-api-types crate
In preparation of some commits using new types/fields from there.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-02 16:43:47 +02:00
Christian Ebner
ed8205e535 server: pull: refactor snapshot pull logic
In preparation for skipping over snapshots when synchronizing with
the encrypted/verified-only flags set. In these cases, the manifest
has to be fetched from the remote and its status checked. If the
snapshot should be skipped, the snapshot directory including the
temporary manifest file has to be cleaned up, given the snapshot
directory has been newly created. By reorganizing the current
snapshot pull logic, this can be achieved more easily.

The `corrupt` flag will be set to `false` in the snapshot
prefiltering, so the previous explicit distinction for newly created
snapshot directories need not be preserved.

No functional changes intended.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-02 15:29:49 +02:00
Lukas Wagner
32b5716fa4 notifications: remove HTML template for test notification
The template files for this one have simply been copied from PVE,
including the HTML template.

In PBS we actually don't provide any HTML templates for any other type
of notification, so especially with the template override mechanism on
the horizon, it's probably better to remove this template until we
also provide an HTML version for the other types as well.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-04-02 14:42:42 +02:00
Lukas Wagner
d1c96f69ee notifications: add type for verify notification template data
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer, any data needed in the
template is mapped into the new dedicated template data type.

This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.

This commit also tries to unify the style and naming of template
variables.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-04-02 14:42:42 +02:00
Lukas Wagner
8210a32613 notifications: add type for tape load notification template data
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer, any data needed in the
template is mapped into the new dedicated template data type.

This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.

This commit also tries to unify the style and naming of template
variables.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-04-02 14:42:42 +02:00
Lukas Wagner
f2115b04c1 notifications: add type for tape backup notification template data
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer, any data needed in the
template is mapped into the new dedicated template data type.

This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.

This commit also tries to unify the style and naming of template
variables.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-04-02 14:42:42 +02:00
Lukas Wagner
1599b424cd notifications: add type for sync notification template data
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer, any data needed in the
template is mapped into the new dedicated template data type.

This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.

This commit also tries to unify the style and naming of template
variables.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-04-02 14:42:42 +02:00
Lukas Wagner
1b9e3cfd18 notifications: add type for prune notification template data
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer, any data needed in the
template is mapped into the new dedicated template data type.

This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.

This commit also tries to unify the style and naming of template
variables.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-04-02 14:42:42 +02:00
Lukas Wagner
940d34b42a notifications: add type for APT notification template data
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer, any data needed in the
template is mapped into the new dedicated template data type.

This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.

This commit also tries to unify the style and naming of template
variables.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-04-02 14:42:42 +02:00
Lukas Wagner
33d2444eca notifications: add type for ACME notification template data
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer, any data needed in the
template is mapped into the new dedicated template data type.

This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.

This commit also tries to unify the style and naming of template
variables.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-04-02 14:42:42 +02:00
Lukas Wagner
7a3cbd7230 notifications: add type for GC notification template data
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer, any data needed in the
template is mapped into the new dedicated template data type.

This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.

This commit also tries to unify the style and naming of template
variables.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-04-02 14:42:42 +02:00
Lukas Wagner
b60912c65d notifications: make notifications module a dir-style module
The next commit is going to add a separate submodule for notification
template data types.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-04-02 14:42:42 +02:00
Shannon Sterz
23be00a42c fix #3336: datastore: remove group if the last snapshot is removed
Empty backup groups are not visible in the API or GUI. This led to a
confusing issue where users were unable to create a group because it
already existed and was still owned by another user. Resolve this
issue by removing the group if its last snapshot is removed.

Also fixes an issue where removing a group used the non-atomic
`remove_dir_all()` function when destroying a group unconditionally.
This could lead to two different threads suddenly holding a lock to
the same group. Make sure that the new locking mechanism is used,
which prevents that, before removing the group. This is also a bit
more conservative now, as it specifically removes the owner file and
group directory separately to avoid accidentally removing snapshots in
case we made an oversight.

Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
2025-04-02 14:42:42 +02:00
Shannon Sterz
04e50855b3 fix: api: avoid race condition in set_backup_owner
when two clients change the owner of a backup store, a race condition
can arise. add locking to avoid this.

Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-03-26 16:21:47 +01:00
Shannon Sterz
52e5d52cbd fix #3935: datastore: move manifest locking to new locking method
adds double stat'ing and removes directory hierarchy to bring manifest
locking in-line with other locks used by the BackupDir trait.

if the old locking mechanism is still supposed to be used, this still
falls back to the previous lock file. however, we already add double
stat'ing since it is trivial to do here and should only provide better
safety when it comes to removing locks.

Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-03-26 16:21:43 +01:00
Shannon Sterz
27dd73777f fix #3935: datastore/api/backup: move datastore locking to '/run'
to avoid issues when removing a group or snapshot directory where two
threads hold a lock to the same directory, move locking to the tmpfs
backed '/run' directory. also adds double stat'ing to make it possible
to remove locks without certain race condition issues.

this new mechanism is only employed when we can be sure that a reboot
has occurred, so that all processes are using the new locking
mechanism. otherwise, two separate processes could assume they have
exclusive rights to a group or snapshot.

bumps the rust version to 1.81 so we can use `std::fs::exists` without
issue.
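
A minimal sketch of the double stat'ing idea, assuming a plain
flock-style lock file (simplified, not the actual implementation):

```
use std::fs::File;
use std::os::unix::fs::MetadataExt;

fn lock_with_double_stat(path: &str) -> std::io::Result<File> {
    loop {
        let file = File::options().write(true).create(true).open(path)?;
        // ... acquire an exclusive flock on `file` here ...
        let fd_ino = file.metadata()?.ino();
        match std::fs::metadata(path) {
            // the file behind our fd is still the one on disk: lock holds
            Ok(md) if md.ino() == fd_ino => return Ok(file),
            // the lock file was removed or replaced concurrently: retry
            _ => continue,
        }
    }
}
```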

Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
 [TL: drop unused format_err import]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-03-26 16:21:43 +01:00
Shannon Sterz
e2c1866b13 datastore/api/backup: prepare for fix of #3935 by adding lock helpers
to avoid duplicate code, add helpers for locking groups and snapshots
to the BackupGroup and BackupDir traits respectively and refactor
existing code to use them.

this also adapts error handling by adding relevant context to each
locking helper call site. otherwise, we might lose valuable
information useful for debugging. note, however, that anything relying
on the specific error messages will break.

Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-03-26 16:21:39 +01:00
Maximiliano Sandoval
27ba2c0318 pbs-client: make get_secret_from_env private
Since we now expose functions to get the password and the encryption
password, this should be private.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-03-26 12:46:56 +01:00
Maximiliano Sandoval
b510184e72 pbs-client: read credentials from $CREDENTIALS_DIRECTORY
Allows loading credentials passed down by systemd. A possible use-case
is safely storing the server's password in a file encrypted by the
system's TPM, e.g. via

```
systemd-ask-password -n | systemd-creds encrypt --name=proxmox-backup-client.password - my-api-token.cred
```

which then can be used via

```
systemd-run --pipe --wait --property=LoadCredentialEncrypted=proxmox-backup-client.password:my-api-token.cred \
proxmox-backup-client ...
```

or from inside a service.
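
On the consuming side, reading such a credential boils down to roughly
the following sketch (the helper name is illustrative, not the actual
pbs-client API); systemd exports $CREDENTIALS_DIRECTORY with one file
per credential:

```
use std::path::PathBuf;

fn load_systemd_credential(name: &str) -> Option<String> {
    let dir = std::env::var_os("CREDENTIALS_DIRECTORY")?;
    std::fs::read_to_string(PathBuf::from(dir).join(name)).ok()
}
```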

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-03-26 12:46:51 +01:00
Wolfgang Bumiller
79e9eddf4b api: minor formatting fixup (missing blank line)
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-03-21 09:05:13 +01:00
Christian Ebner
24a6d4fd82 client: align description for backup specification to docs
Adapt the description for the backup specification to use
`archive-name` and `type` over `label` and `ext`, to be in line with
the terminology used in the documentation.

Further, explicitly describe the `path` as `source-path` to be less
ambiguous.

In order to avoid formatting issues in the man pages because of line
breaks after a hyphen, show the backup specification description
across multiple lines.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-03-20 18:47:44 +01:00
Hannes Laimer
b693f5d471 api: config: use guard for unmounting on failed datastore creation
Currently, if any `?`/`bail!` happens between mounting and completing
the creation process, unmounting will be skipped. Adding this guard
solves that problem and makes it easier to add things in the future
without having to worry about a disk not being unmounted in case of a
failed creation.
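
A hedged sketch of the guard pattern (names illustrative; the real
code uses the proper unmount helper rather than spawning umount):

```
struct UnmountGuard<'a> {
    mount_point: &'a str,
    armed: bool,
}

impl Drop for UnmountGuard<'_> {
    fn drop(&mut self) {
        if self.armed {
            // best-effort cleanup; on the error path this is all we can do
            let _ = std::process::Command::new("umount")
                .arg(self.mount_point)
                .status();
        }
    }
}

fn create_datastore(mount_point: &str) -> std::io::Result<()> {
    let mut guard = UnmountGuard { mount_point, armed: true };
    // ... creation steps; any `?`/`bail!` now unmounts automatically ...
    guard.armed = false; // success: disarm, keep the datastore mounted
    Ok(())
}
```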

Reported-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Tested-by: Christian Ebner <c.ebner@proxmox.com>
2025-03-20 18:45:37 +01:00
Christian Ebner
3362a6e049 clippy/fmt: tree wide drop of clone for types implementing copy
fixes the clippy warning on types T implementing Copy:
```
warning: using `clone` on type `T` which implements the `Copy` trait
```

followed by formatting fixups via `cargo fmt`.
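
For illustration, the kind of change this amounts to (type made up):

```
#[derive(Clone, Copy)]
struct ChunkStat { used: u64, disk: u64 }

fn snapshot(stat: &ChunkStat) -> ChunkStat {
    // before: stat.clone() -- triggers the lint, since ChunkStat is Copy
    *stat
}
```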

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-03-20 14:48:31 +01:00
Wolfgang Bumiller
7c45cf8c7a dependency cleanup and d/control bump
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-03-19 12:27:07 +01:00
Gabriel Goller
d99c481596 log: use new builder initializer
Use new logger builder to initialize the logging in each component.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2025-03-19 12:02:48 +01:00
Maximiliano Sandoval
f74978572b client: allocate two fewer strings
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-03-17 16:04:41 +01:00
Maximiliano Sandoval
bb408fd151 pull_metrics: rename argument called gen to generation
`gen` is a reserved keyword in the Rust 2024 edition. See
https://doc.rust-lang.org/edition-guide/rust-2024/gen-keyword.html.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-03-17 16:02:41 +01:00
Christian Ebner
54763b39c7 datastore: restrict datastores list_images method scope to module
Drop the pub scope for `DataStore`s `list_images` method.

This method is only used to generate a list of index files found in
the datastore for iteration during garbage collection. There are no
other call sites, and it is intended to only be used within the module
itself. This allows more flexibility for future method signature
adaptations.

No functional changes.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-03-17 14:06:01 +01:00
Shannon Sterz
f1dd1e3557 pbs-config: fix unresolved link warnings by correcting the links
otherwise creating the docs for pbs-config throws a warning

Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
2025-03-17 13:51:54 +01:00
Maximiliano Sandoval
f314078a8d examples: h2s-server: port to http2::builder::new
Fixes the deprecation warning when building this example.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-03-17 13:20:16 +01:00
Maximiliano Sandoval
7085d270d4 examples: h2server: port to http2::Builder::new
Fixes the deprecation warning while building this example.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-03-17 13:20:16 +01:00
Fabian Grünbichler
6565199af4 hyper: start preparing upgrade to 1.x
by switching on deprecations and using some backported types already
available on 0.14:

- use body::HttpBody::collect() instead of to_bytes() directly on Body
- use server::conn::http2::Builder instead of server::conn::Http with
  http2_only

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2025-03-13 13:23:48 +01:00
Fabian Grünbichler
168ed37026 h2: switch to legacy feature
to avoid upgrading to hyper 1 / http 1 right now. this is a Debian/Proxmox
specific workaround.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2025-03-13 13:23:42 +01:00
Fabian Grünbichler
2c9f3a63d5 update env_logger to 0.11
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2025-03-13 13:23:22 +01:00
Fabian Grünbichler
eba172a492 run cargo fmt
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2025-03-13 13:23:17 +01:00
Fabian Grünbichler
cec8c75cd0 bump version to 3.3.4-1
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2025-03-13 13:06:35 +01:00
Dominik Csapak
ddf0489abb docs: fix hash collision probability comparison
Commit:
 efc09f63c (docs: tech overview: avoid 'we' and other small style fixes/additions)

introduced the comparison with 13 lottery games, but sadly without any
mention of how to arrive at that number.

When calculating it myself, I arrived at 8-9 games (8 is more
probable, 9 less so), so rewrite to 'chance is lower than 8 lottery
games' and give the calculation directly inline as a reference.
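
The conversion itself is a one-liner; a sketch with placeholder
numbers (the collision probability used here is an assumed value, not
the one from the docs):

```
// with collision probability p and per-game win probability q, the
// equivalent number of consecutive lottery wins is k = ln(p) / ln(q)
fn equivalent_lottery_games(p: f64, q: f64) -> f64 {
    p.ln() / q.ln()
}

fn main() {
    let p = 1e-66_f64; // assumed hash collision probability (placeholder)
    let q = 1.0 / 139_838_160.0; // EuroMillions-style jackpot odds (assumed)
    println!("{:.1}", equivalent_lottery_games(p, q)); // ~8.1 games
}
```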

Fixes: efc09f63 ("docs: tech overview: avoid 'we' and other small style fixes/additions")
Suggested-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
 [TL: reference commit that introduced this]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-03-07 11:26:16 +01:00
Maximiliano Sandoval
22285d0d01 add too_many_arguments clippy exception
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-03-06 14:57:23 +01:00
Maximiliano Sandoval
f37ce33164 zfs: remove unnecessary arc from dataset object map
The static was not really used anywhere else so it was made private.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-03-06 14:57:05 +01:00
Maximiliano Sandoval
2c89b88226 create a CachedSchema struct
Fix the type_complexity clippy lint.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-03-06 14:56:42 +01:00
Maximiliano Sandoval
cdc2b341b6 fix the type_complexity clippy lint
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-03-06 14:55:49 +01:00
Maximiliano Sandoval
5117a21ec9 snapshot_reader: replace Arc with Rc
The type `(Box<dyn IndexFile + Send>, usize, Vec<(usize, u64)>)` is
not Sync, so it makes more sense to use Rc. This is suggested by
clippy.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-03-06 14:55:04 +01:00
Maximiliano Sandoval
883e14ebcb api: remove redundant guard
Fixes the redundant_guards clippy lint.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-03-06 14:53:49 +01:00
Maximiliano Sandoval
858744bf3c run cargo clippy --fix
The actual incantation is:

cargo clippy --all-targets --workspace --all-features --fix

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-03-06 14:53:47 +01:00
Maximiliano Sandoval
582ba899b6 server: remove needless clone
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-03-06 14:53:44 +01:00
Christian Ebner
f098814876 datastore: use libc's timespec constants instead of redefinition
Use the UTIME_NOW and UTIME_OMIT constants defined in the libc crate
instead of redefining them. This improves consistency, as utimensat
and its timespec parameter are also defined via the libc crate.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-03-05 10:01:12 +01:00
Filip Schauer
62ff4f2472 fix #5946: disks: wipe: ensure GPT header backup is wiped
When wiping a block device with a GUID partition table, the header
backup might get left behind at the end of the disk. This commit also
wipes the last 4096 bytes of the disk, making sure that a GPT header
backup is erased, even from disks with 4k sector sizes.

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
2025-02-26 22:26:11 +01:00
Filip Schauer
7cae3e44f2 disks: wipe: replace dd with write_all_at for zeroing disk
Replace the external invocation of `dd` with direct file writes using
`std::os::unix::fs::FileExt::write_all_at` to zero out the start of the
disk.
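
Taken together with the GPT header backup fix above, the wipe logic
amounts to roughly this sketch (device path handling and sizes are
illustrative):

```
use std::fs::OpenOptions;
use std::os::unix::fs::FileExt;

fn wipe_disk(path: &str, disk_size: u64) -> std::io::Result<()> {
    let file = OpenOptions::new().write(true).open(path)?;
    let zeroes = vec![0u8; 1024 * 1024];
    // zero the start of the disk (partition table, boot sector, ...)
    file.write_all_at(&zeroes, 0)?;
    // zero the last 4096 bytes, where a GPT header backup may live
    file.write_all_at(&zeroes[..4096], disk_size - 4096)?;
    file.sync_all()
}
```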

Co-authored-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
2025-02-26 22:26:11 +01:00
Christoph Heiss
9d4d1216e3 using-the-installer: adapt to raised root password length requirement
The minimum length has been raised in the installer across the board,
so adapt it here too.

Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
2025-02-24 12:14:41 +01:00
Wolfgang Bumiller
d8881be658 client: reflow strings over the column limit
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-02-21 16:04:23 +01:00
Wolfgang Bumiller
c7a29011fa whitespace fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-02-21 16:03:20 +01:00
Christian Ebner
abad8e25c4 fix #6185: client/docs: explicitly mention archive name restrictions
Mention in the docs and the api parameter description the limitations
for archive name labels. They may only contain alphanumerics, hyphens
and underscores, to match the regex pattern.

By setting this in the api parameter description, it will be included
in the man page for proxmox-backup-client.
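
The restriction corresponds to a pattern along these lines
(illustrative; the authoritative regex lives in pbs-api-types):

```
use regex::Regex;

fn is_valid_archive_label(label: &str) -> bool {
    // alphanumerics, hyphens and underscores only
    Regex::new(r"^[A-Za-z0-9_-]+$").unwrap().is_match(label)
}
```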

Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=6185
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-02-21 16:00:17 +01:00
Gabriel Goller
6c2b039ef4 cargo: add pbs-api-types override and reorder overrides
Add the new pbs-api-types crate to the cargo override section. Reorder
the overrides to be alphabetic.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2025-02-21 15:04:40 +01:00
Christian Ebner
64cfb13193 client: style cleanup: inline variable names in format string
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-02-20 16:13:54 +01:00
Thomas Lamprecht
d986714201 bump version to 3.3.3-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-02-11 20:24:40 +01:00
Thomas Lamprecht
1b5436ccdd Revert "log: update to tracing in proxmox-daily-update"
This reverts commit c6600acf0b as its
dependency prerequisites are not yet fulfilled...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-02-11 20:13:01 +01:00
Gabriel Goller
c6600acf0b log: update to tracing in proxmox-daily-update
Previously we just wrote to syslog directly. This doesn't work anymore
since the tracing update, and we won't get any output in the task log.

Reported-by: https://forum.proxmox.com/threads/158764/
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2025-02-11 20:09:00 +01:00
Christian Ebner
c9cd520a1a client: pxar: fix race in pxar backup stream
Fixes a race condition where the backup upload stream can miss an
error returned by pxar::create_archive, because the error state is
only set after the backup stream was already polled.

On instantiation, `PxarBackupStream` spawns a future handling the
pxar archive creation, which sends the encoded pxar archive stream
(or streams in case of split archives) through a channel, received
by the pxar backup stream on polling.

When this channel is closed, as signaled by a receive error, the poll
logic propagates any error that occurred during pxar creation by
taking it from the `PxarBackupStream`.

As this error might not have been set just yet, this can lead to
incorrectly terminating a backup snapshot with success, even though an
error occurred.

To fix this, introduce a dedicated notifier for each stream instance
and wait for the archiver to signal it has finished via this
notification channel. In addition, extend the `PxarBackupStream` by a
`finished` flag to allow early return on subsequent polls, which
would otherwise block, waiting for a new notification.

In case of premature termination of the pxar backup stream, no
additional measures have to be taken, as the abort handle already
terminates the archive creation.
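
A much simplified sketch of the synchronization, using a tokio oneshot
as the notifier (the actual stream polling differs):

```
use std::sync::{Arc, Mutex};
use tokio::sync::oneshot;

struct PxarBackupStream {
    error: Arc<Mutex<Option<String>>>,
    finished_notifier: Option<oneshot::Receiver<()>>,
    finished: bool,
}

impl PxarBackupStream {
    async fn wait_finished(&mut self) -> Result<(), String> {
        if self.finished {
            return Ok(()); // early return on subsequent polls
        }
        if let Some(notifier) = self.finished_notifier.take() {
            // wait until the archiver signalled (or dropped the sender),
            // so the shared error slot is guaranteed to be final
            let _ = notifier.await;
        }
        self.finished = true;
        match self.error.lock().unwrap().take() {
            Some(err) => Err(err),
            None => Ok(()),
        }
    }
}
```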

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-02-11 11:18:29 +01:00
Fiona Ebner
e0e644f119 fix #6069: prune simulator: allow values specifying both range and step size
The prune schedule simulator returned "X/Y is not an integer" error
for a schedule that uses a `start..end` hour range combined with a
`/`-separated time step-gap, while that works out fine for actual
prune jobs in PBS.

Previously, a schedule like `5..23/3` was mistakenly interpreted as
hour-start = `5`, hour-end = `23/3`, hour-step = `1`, resulting in
above parser error for hour-end. By splitting the right hand side on
`/` to extract the step and normalizing that we correctly get
hour-start = `5`, hour-end = `23`, hour-step = `3`.

Short reminder: the hours and minutes parts are treated separately and
can both be declared as range, step or range-step, so `5..23/3:15`
does not mean the step size is 3:15 (i.e. 3.25 hours or 195 minutes),
but rather a 3 hour step size, with each resulting interval happening
on minute 15 of that hour.
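
The corrected parse order can be sketched as follows (the simulator is
JavaScript; this is a simplified Rust rendering without validation):

```
fn parse_hour_spec(spec: &str) -> Option<(u32, u32, u32)> {
    // split off the step first, so "23/3" is never parsed as an hour
    let (range, step) = match spec.split_once('/') {
        Some((range, step)) => (range, step.parse().ok()?),
        None => (spec, 1),
    };
    let (start, end) = match range.split_once("..") {
        Some((start, end)) => (start.parse().ok()?, end.parse().ok()?),
        None => {
            let value = range.parse().ok()?;
            (value, value)
        }
    };
    Some((start, end, step)) // "5..23/3" => (5, 23, 3)
}
```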

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
 [TL: add context to commit message partially copied from bug report
  and add a short reminder how these intervals work, can be confusing]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-02-10 15:34:39 +01:00
Laurențiu Leahu-Vlăducu
5863e5ff5d Fix #4408: add 'disaster recovery' section for tapes
Add new markers so that we can refer to the chapters.

Signed-off-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
2025-02-10 12:09:29 +01:00
Thomas Lamprecht
46d4ceef77 verify: code style: inline format string variables
Use an intermediate variable for the frequently used datastore name
and backup snapshot name; while that is not often worth it, the
diff(stat) makes a good argument that it is here.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-02-10 11:43:46 +01:00
Thomas Lamprecht
afd22455da api daemon: run rustfmt to fix code formatting style
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-01-30 16:51:57 +01:00
Christian Ebner
5ba351bac7 verify: handle manifest update errors as non-fatal
Since commit 8ea00f6e ("allow to abort verify jobs") errors
propagated up to the verify jobs worker call side are interpreted as
job aborts.

The manifest update did not honor this, leading to the verify job
being aborted with the misleading log entry:
`verification failed - job aborted`

Instead, handle a manifest update error as non-fatal, just like any
other verification related error: log it, including the error message,
and continue verification with the next item.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-01-30 13:36:03 +01:00
Maximiliano Sandoval
961c81bdeb man: verification: Fix config file name
Fixes: 5b7f4455 ("docs: add manual page for verification.cfg")
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
 [TL: add references to commit that this fixes]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-01-28 14:49:23 +01:00
Maximiliano Sandoval
af18706fcb docs: add synopsis and basic docs for prune job configuration
Have our docgen tool generate a synopsis for the prune.cfg schema, use
that output in a new prune.cfg manpage, and include it in the
appropriate appendix of our html/pdf rendered admin guide.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
 [TL: expand commit message and keep alphabetical order for configs in
      the guide.]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-01-28 14:34:47 +01:00
Christian Ebner
ce8d56a3b5 api: datastore: add missing log context for prune
Adds the missing log context for cases where a prune is not executed
as a dedicated tokio task.

Commit 432de66a ("api: make prune-group a real workertask") moved the
prune group logic into its own tokio task conditionally.

However, the log context was missing for cases where no dedicated
task/thread is started, leading to the worker task state being
unknown after finish, as no logs are written to the worker task log
file.

Reported in the community forum:
https://forum.proxmox.com/threads/161273/

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
2025-01-28 13:08:26 +01:00
Laurențiu Leahu-Vlăducu
1f24167b4d proxy/parallel_handler: Improved panic errors with formatted strings
* Improved errors when panics occur and the panic message is a
formatted (not static) string. This already worked for &str literals,
but not for Strings.

Downcasting to both &str and String is also done by the Rust Standard
Library in the default panic handler. See:
b605c65b6e/library/std/src/panicking.rs (L777)

* Switched from eprintln! to tracing::error when logging panics in the
task scheduler.

Signed-off-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
2025-01-27 14:17:20 +01:00
Maximiliano Sandoval
d4468ba6f8 pxar: extract: Follow overwrite_flags when opening file
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-01-27 13:22:34 +01:00
Maximiliano Sandoval
600ce36d57 use truncate whenever we create files
Fixes the suspicious_open_options clippy lint, for example:

```
warning: file opened with `create`, but `truncate` behavior not defined
    --> src/api2/tape/restore.rs:1713:18
     |
1713 |                 .create(true)
     |                  ^^^^^^^^^^^^- help: add: `.truncate(true)`
     |
     = help: if you intend to overwrite an existing file entirely, call `.truncate(true)`
     = help: if you instead know that you may want to keep some parts of the old file, call `.truncate(false)`
     = help: alternatively, use `.append(true)` to append to the file instead of overwriting it
     = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#suspicious_open_options
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-01-27 13:22:27 +01:00
Maximiliano Sandoval
1cf52c6bb3 remove create & truncate when create_new is used
As per its documentation [1]:

> If .create_new(true) is set, .create() and .truncate() are ignored.

This gets rid of the "file opened with `create`, but `truncate`
behavior not defined" clippy warnings.

[1] https://doc.rust-lang.org/std/fs/struct.OpenOptions.html#method.create_new

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-01-27 11:53:23 +01:00
Maximiliano Sandoval
95d8e70c84 docs: Improve GC's cutofftime description
As written it can be read as "24h5m after the garbage collection
started".

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-01-27 08:51:02 +01:00
Maximiliano Sandoval
b249e44a0e fix typos in docs and API descriptions
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-01-27 08:50:42 +01:00
Christian Ebner
d93d782d37 cargo: drop direct http crate dependency, tree-wide namespace fix
Instead of using and depending on the `http` crate directly, use and
depend on the re-exported `hyper::http`. Adapt namespace prefixes
accordingly.

This makes sure the `hyper::http` types are version compatible and
allows to possibly depend on incompatible versions of `http` in the
workspace in the future.

No functional changes intended.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-01-24 09:43:35 +01:00
Fabian Grünbichler
d910543d56 d/control: add pbs-api-types
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2025-01-24 09:24:14 +01:00
Friedrich Weber
41588772c9 fix: docs: prune sim: show "keep" entries in backup list
Currently, the list of backups only shows removed backups and is
missing backups that are kept, though they are shown correctly in the
calendar view.

The reason is that a refactor (see Fixes tag) moved the definition of
a custom field renderer referencing `me` to a scope where `me` is not
defined. This causes the renderer to error out for "kept" backups,
which apparently causes the grid to skip the rows altogether (without
any messages in the console).

Fix this by replacing the broken `me` reference.

Fixes: bb044304 ("prune sim: move PruneList to more static declaration")
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
2025-01-24 09:17:26 +01:00
Shannon Sterz
ed03985bd6 d/copyright; docs/conf.py: update copyright years
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
2025-01-24 09:16:24 +01:00
Dietmar Maurer
7769be2f17 use new librust-pbs-api-types-dev debian package
We moved the code from the pbs-api-types subdirectory into the proxmox
git repository and now build a Rust Debian package for the crate.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2025-01-22 12:40:18 +01:00
Wolfgang Bumiller
de875c0f0e update to proxmox-schema 4
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2025-01-15 13:03:42 +01:00
Maximiliano Sandoval
f1a5808e67 replace match statements with ? operator
When possible.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-01-14 08:57:24 +01:00
Maximiliano Sandoval
c4c050dc36 sg_pt_changer: remove needless call to as_bytes()
Fixes:

warning: needless call to `as_bytes()`
   --> pbs-tape/src/sg_pt_changer.rs:913:45
    |
913 |             let rem = SCSI_VOLUME_TAG_LEN - voltag.as_bytes().len();
    |                                             ^^^^^^^^^^^^^^^^^^^^^^^ help: `len()` can be called directly on strings: `voltag.len()`
    |
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_as_bytes
    = note: `#[warn(clippy::needless_as_bytes)]` on by default

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-01-14 08:57:08 +01:00
Maximiliano Sandoval
fd6cdeebea elide lifetimes when possible
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-01-14 08:56:42 +01:00
Maximiliano Sandoval
414f3656a8 metric_collection: remove redundant map_or
Fixes:

warning: this `map_or` is redundant
   --> src/server/metric_collection/mod.rs:172:20
    |
172 |                   if config
    |  ____________________^
173 | |                     .get_maintenance_mode()
174 | |                     .map_or(false, |mode| mode.check(Some(Operation::Read)).is_err())
    | |_____________________________________________________________________________________^
    |
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_map_or
    = note: `#[warn(clippy::unnecessary_map_or)]` on by default
help: use is_some_and instead
    |
172 ~                 if config
173 +                     .get_maintenance_mode().is_some_and(|mode| mode.check(Some(Operation::Read)).is_err())
    |

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-01-14 08:56:01 +01:00
Maximiliano Sandoval
0185228ad7 backup: remove unneded import
Fixes:

warning: unused import: `SnapshotVerifyState`
  --> src/api2/backup/mod.rs:23:66
   |
23 |     ArchiveType, Authid, BackupNamespace, BackupType, Operation, SnapshotVerifyState, VerifyState,
   |                                                                  ^^^^^^^^^^^^^^^^^^^
   |
   = note: `#[warn(unused_imports)]` on by default

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2025-01-14 08:55:40 +01:00
Christian Ebner
b72bdf4156 Revert "fix #5710: api: backup: stat known chunks on backup finish"
Commit da11d226 ("fix #5710: api: backup: stat known chunks on backup
finish") introduced a seemingly cheap server side check to verify the
existence of known chunks in the chunk store by stat'ing them. This
check however does not scale for large backup snapshots, which might
contain millions of known chunks, as reported in the community forum
[0]. Revert the changes for now; instead of making this
opt-in/opt-out, a more general approach has to be thought out to mark
backup snapshots which fail verification.

Link to the report in the forum:
[0] https://forum.proxmox.com/threads/158812/

Fixes: da11d226 ("fix #5710: api: backup: stat known chunks on backup finish")
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-01-13 11:03:35 +01:00
Laurențiu Leahu-Vlăducu
4773f6b721 readme: clarify when one needs to adjust the rustup config
Signed-off-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
 [ TL: add tag to subject and shorten it ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-12-16 13:54:59 +01:00
Shannon Sterz
40ef2afe01 api: move DataStoreConfig parsing and mount check after allowed check
this moves the parsing of the concrete DataStoreConfig as well as the
check whether a store is mounted after the authorization checks.
otherwise we always check for all datastores whether they are mounted,
even if the requesting user has no privileges to list the specified
datastore anyway.

this may improve performance for large setups, as we won't need to stat
mounted datastores regardless of the user's privileges. this was
suggested on the mailing list [1].

[1]: https://lore.proxmox.com/pbs-devel/embeb48874-d400-4e69-ae0f-2cc56a39d592@93f95f61.com/

Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
2024-12-16 13:08:30 +01:00
Thomas Lamprecht
c312d58488 file-restore: bump version to 3.3.2-2
only upload file-restore for a targeted fix of a recent regression.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-12-10 12:31:49 +01:00
Fabian Grünbichler
34fbf1a809 file-restore: fix -blockdev regression with namespaces or encryption
QEMU CLI option parsing requires doubling the commas in values. This
also applies when a combined option is used to pass down key=value
pairs to the internal options, like the combined -drive option that
was replaced by the slightly lower-level -blockdev option in commit
668b8383 ("file restore: qemu helper: switch to more modern blockdev
option for drives"). There we could now drop the comma duplication, as
-blockdev interprets these options directly, so there is no need for
escaping the commas.

We missed two instances because they were not part of the "main"
format string, which broke some use cases.

Fixes: 668b8383 ("file restore: qemu helper: switch to more modern blockdev option for drives")
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Mira Limbeck <m.limbeck@proxmox.com>
 [ TL: add more context, but it's a bit guesstimation ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-12-10 11:44:18 +01:00
Lukas Wagner
c676439a15 docs: notifications: document HTTP-based target's proxy behavior
Gotify and webhook targets will use the HTTP proxy settings from
node.cfg, the documentation should mention this.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-12-09 13:33:32 +01:00
Thomas Lamprecht
ed8bc69a50 bump version to 3.3.2-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-12-09 10:37:38 +01:00
Christian Ebner
c57ac02879 pxar: client: fix missing file size check for metadata comparison
With change detection mode set to metadata, regular file entries'
metadata is compared to the reference metadata archive of the previous
run. The `pxar::format::Stat` as stored in `pxar::Metadata` however
does not include the actual file size; it only partially stores
information gathered from stating the file.

This means that the actual file size is never compared, and therefore
that a file whose size changed, but whose other metadata did not
(including the mtime, which might have been restored), will be
incorrectly reused. A subsequent restore will however fail, because
the expected file size as encoded in the metadata archive does not
match the file size as stored in the payload archive.

Fix this by adding the missing file size check, comparing the size
for the given file against the one stored in the metadata archive.
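
In essence, the reuse condition gains one more comparison; a hedged
sketch with simplified stand-in types:

```
struct RefEntry { size: u64, mtime: i64, mode: u32 }

fn can_reuse(reference: &RefEntry, size: u64, mtime: i64, mode: u32) -> bool {
    reference.size == size // the check added by this commit
        && reference.mtime == mtime
        && reference.mode == mode
}
```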

Link to issue reported in community forum:
https://forum.proxmox.com/threads/158722/

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-12-09 09:43:49 +01:00
Fiona Ebner
668b8383a7 file restore: qemu helper: switch to more modern blockdev option for drives
From the QEMU man page:

> The most explicit way to describe disks is to use a combination of
> -device to specify the hardware device and -blockdev to describe the
> backend. The device defines what the guest sees and the backend
> describes how QEMU handles the data. It is the only guaranteed stable
> interface for describing block devices and as such is recommended for
> management tools and scripting.

> The -drive option combines the device and backend into a single
> command line option which is a more human friendly. There is however
> no interface stability guarantee although some older board models
> still need updating to work with the modern blockdev forms.

From the perspective of live restore, there should be no behavioral
change, except that the used driver is now explicitly specified. The
'-device' options are still the same, the fact that 'if=none' is gone
shouldn't matter, because the '-device' option was already used to
define the interface (i.e. virtio-blk) and the 'id' option needed to
be replaced with 'node-name'.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-12-09 09:43:42 +01:00
Gabriel Goller
c69d18626a pbs-client: remove log dependency and migrate to tracing
Remove the `log` dependency in pbs-client and change all the invocations
to tracing logs.
No functional change intended.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-12-05 08:31:52 +01:00
Christian Ebner
08d136e069 client: backup: remove unnecessary clone for backup reader
This was introduced by commit fdea4e53 ("client: implement prepare
reference method") to read a reference metadata archive for detection
of unchanged, reusable files when using change detection mode set to
`metadata`.

Avoid unnecessary cloning of the atomic reference counted
`BackupReader` instance, as it is used exclusively for this codepath.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-12-04 14:43:40 +01:00
Maximiliano Sandoval
bf063e4494 auth: doc: Explicitly set namespace for UserInformation
Fixes the cargo doc warning:

```
warning: unresolved link to `UserInformation`
   --> src/auth.rs:418:53
    |
418 |     /// Check if a userid is enabled and return a [`UserInformation`] handle.
    |                                                     ^^^^^^^^^^^^^^^ no item named `UserInformation` in scope
    |
    = help: to escape `[` and `]` characters, add '\' before them like `\[` or `\]`
    = note: `#[warn(rustdoc::broken_intra_doc_links)]` on by default
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-04 14:40:42 +01:00
Maximiliano Sandoval
d430b05ec3 datastore: docs: escape <uuid>
Fixes the cargo doc lint:

```
warning: unclosed HTML tag `uuid`
  --> pbs-datastore/src/datastore.rs:60:41
   |
60 | ///  - could not stat /dev/disk/by-uuid/<uuid>
   |                                         ^^^^^^
   |
   = note: `#[warn(rustdoc::invalid_html_tags)]` on by default

warning: unclosed HTML tag `uuid`
  --> pbs-datastore/src/datastore.rs:61:26
   |
61 | ///  - /dev/disk/by-uuid/<uuid> is not a block device
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-04 14:40:42 +01:00
Maximiliano Sandoval
f55a08891e pxar: extract: docs: remove redundant explicit link
Also fix `Entries` link.

Fixes the cargo doc lint:

```
warning: redundant explicit link target
   --> pbs-client/src/pxar/extract.rs:212:27
    |
212 |     ///   * The [`Entry`][E]'s filename is invalid (contains nul bytes or a slash)
    |                  -------  ^ explicit target is redundant
    |                  |
    |                  because label contains path that resolves to same destination
    |
note: referenced explicit link target defined here
   --> pbs-client/src/pxar/extract.rs:221:14
    |
221 |     /// [E]: pxar::Entry
    |              ^^^^^^^^^^^
    = note: when a link's destination is not specified,
            the label is used to resolve intra-doc links
    = note: `#[warn(rustdoc::redundant_explicit_links)]` on by default
help: remove explicit link target
    |
212 |     ///   * The [`Entry`]'s filename is invalid (contains nul bytes or a slash)
    |                 ~~~~~~~~~

warning: redundant explicit link target
   --> pbs-client/src/pxar/extract.rs:215:37
    |
215 |     /// fetching the next [`Entry`][E]), the error may be handled by the
    |                            -------  ^ explicit target is redundant
    |                            |
    |                            because label contains path that resolves to same destination
    |
note: referenced explicit link target defined here
   --> pbs-client/src/pxar/extract.rs:221:14
    |
221 |     /// [E]: pxar::Entry
    |              ^^^^^^^^^^^
    = note: when a link's destination is not specified,
            the label is used to resolve intra-doc links
help: remove explicit link target
    |
215 |     /// fetching the next [`Entry`]), the error may be handled by the
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-04 14:40:42 +01:00
Maximiliano Sandoval
77c81bcb31 datastore: docs: turn uri into hyperlink
Fixes the cargo doc lint:

```
warning: this URL is not a hyperlink
   --> pbs-datastore/src/data_blob.rs:555:5
    |
555 | /// https://github.com/facebook/zstd/blob/dev/lib/common/error_private.h
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
    = note: bare URLs are not automatically turned into clickable links
    = note: `#[warn(rustdoc::bare_urls)]` on by default
help: use an automatic link instead
    |
555 | /// <https://github.com/facebook/zstd/blob/dev/lib/common/error_private.h>
    |     +                                                                    +
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-04 14:40:42 +01:00
Thomas Lamprecht
cf0aaec985 bump version to 3.3.1-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-12-03 18:11:12 +01:00
Shannon Sterz
7c570bac70 ui: check that store is set before trying to select in GCJobView
otherwise users will get a `b.store is null` error in the console and
a loading spinner is shown for a while.

the issue in question seems to stem from the event handler that gets
attached when the "Prune & GC Jobs" tab is opened for a specific
datastore. however, that event handler should *not* be attached for
the "Datastore" -> "Prune & GC Jobs" panel. it seems that the event
handler does still get attached, and will fire in the "Datastore"
view if it hasn't fired while opened in a specific datastore
(it should only trigger a single time).

that scenario seems to occur when a different tab was previously
selected in a specific datastore and navigation is triggered via the
side bar from "Datastore" -> "Prune & GC Jobs" to a specific
datastore. that leads to the "Prune & GC Jobs" view for that specific
datastore being opened very briefly, during which the event handler
gets attached; navigation then automatically moves to the previously
selected tab. this stops the store from updating, ensuring that the
event is never triggered. when we then move to the
"Datastore" -> "Prune & GC Jobs" tab again, the event handler is
triggered, but the store of the view is null, leading to the error.

Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Tested-by: Fiona Ebner <f.ebner@proxmox.com>
2024-12-03 18:09:30 +01:00
Thomas Lamprecht
1874857dc2 cargo: update proxmox dependency of rest-server and sys
To ensure PBS gets built with the new fixes for CLOEXEC and active
worker refcount.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-12-03 18:03:57 +01:00
Dominik Csapak
8eaeedf31e tree-wide: add missing O_CLOEXEC flags to openat calls
We don't want lingering file descriptors on any fork + exec, like the
reload code from the proxmox-daemon crate we're using for the
rest-server(s) does, as that can have serious side effects and even
cause hangs.
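
Per call site the fixup is mechanical; sketched here with raw libc to
stay version-agnostic (the code itself goes through the nix crate):

```
use std::ffi::CString;

unsafe fn open_in_dir(dirfd: libc::c_int, name: &str) -> libc::c_int {
    let name = CString::new(name).unwrap();
    // O_CLOEXEC keeps the descriptor from leaking across fork + exec
    libc::openat(dirfd, name.as_ptr(), libc::O_RDONLY | libc::O_CLOEXEC)
}
```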

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
 [ TL: Reword commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-12-03 16:48:23 +01:00
Fabian Grünbichler
c17964e7fc docs: escape <foo> in doc comments
warning: unclosed HTML tag `nodename`
   --> pbs-api-types/src/metrics.rs:224:5
    |
224 | /     /// Unique identifier for this metric object, for instance 'node/<nodename>'
225 | |     /// or 'qemu/<vmid>'.
    | |_________________________^
    |
    = note: `#[warn(rustdoc::invalid_html_tags)]` on by default

warning: unclosed HTML tag `vmid`
   --> pbs-api-types/src/metrics.rs:224:5
    |
224 | /     /// Unique identifier for this metric object, for instance 'node/<nodename>'
225 | |     /// or 'qemu/<vmid>'.
    | |_________________________^

warning: `pbs-api-types` (lib doc) generated 2 warnings

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-12-03 11:52:50 +01:00
Maximiliano Sandoval
5d60f8692a restore: docs: escape <uid> with code block
otherwise:

```
warning: unclosed HTML tag `uid`
   --> proxmox-file-restore/src/main.rs:686:63
    |
686 | /// "www-data", so we use a custom one in /run/proxmox-backup/<uid> instead.
    |                                                               ^^^^^
    |
    = note: `#[warn(rustdoc::invalid_html_tags)]` on by default
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-03 11:50:47 +01:00
Maximiliano Sandoval
61d18bcf9c config: acl: docs: link to PRIVILEGES with namespace
Otherwise:

```
warning: unresolved link to `PRIVILEGES`
  --> pbs-config/src/acl.rs:15:71
   |
15 | /// Map of pre-defined [Roles](Role) to their associated [privileges](PRIVILEGES) combination
   |                                                                       ^^^^^^^^^^ no item named `PRIVILEGES` in scope
   |
   = help: to escape `[` and `]` characters, add '\' before them like `\[` or `\]`
   = note: `#[warn(rustdoc::broken_intra_doc_links)]` on by default
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-03 11:50:47 +01:00
Maximiliano Sandoval
2bacfa7029 client: clippy: allow too_many_arguments
These are API endpoints.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-03 11:24:37 +01:00
Maximiliano Sandoval
109e063a7e chunker: do not reassign context's total field
```
warning: field assignment outside of initializer for an instance created with Default::default()
   --> pbs-datastore/src/chunker.rs:431:5
    |
431 |     ctx.total = buffer.len() as u64;
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
note: consider initializing the variable with `chunker::Context { total: buffer.len() as u64, ..Default::default() }` and removing relevant reassignments
   --> pbs-datastore/src/chunker.rs:430:5
    |
430 |     let mut ctx = Context::default();
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#field_reassign_with_default
    = note: `#[warn(clippy::field_reassign_with_default)]` on by default
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-03 11:24:37 +01:00
Maximiliano Sandoval
47a29b1896 docs: remove empty lines in doc strings
Fixes the clippy lint:

```
warning: empty line after doc comment
   --> src/tape/pool_writer/mod.rs:441:5
    |
441 | /     /// updated.
442 | |
    | |_
...
448 | /     pub fn append_snapshot_archive(
449 | |         &mut self,
450 | |         snapshot_reader: &SnapshotReader,
451 | |     ) -> Result<(bool, usize), Error> {
    | |_____________________________________- the comment documents this method
    |
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#empty_line_after_doc_comments
    = help: if the empty line is unintentional remove it
help: if the documentation should include the empty line include it in the comment
    |
442 |     ///
    |
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-03 11:24:37 +01:00
Christian Ebner
0083e7ac05 sync: push: use direct api version comparison in compatibility checks
Use the trait implementations of `ApiVersion` to perform
operator-based version comparisons. This makes the comparisons more
readable and reduces the risk of errors.

No functional change intended.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-12-02 15:27:37 +01:00
Christian Ebner
00254d60e3 api types: version: implement traits to allow for version comparison
Derive and implement the traits to allow comparison of two
`ApiVersion` instances for more direct and easy api version
comparisons. Further, add some basic test cases to reduce risk of
regressions.

This is useful for e.g. feature compatibility checks by comparing api
versions of remote instances.

Example comparison:
```
api_version >= ApiVersion::new(3, 3, 0)
```
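
One plausible shape for this (not necessarily the exact definition):
with the fields declared from most to least significant, the derived
`Ord` compares them in declaration order, which yields exactly these
semantics.

```
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub struct ApiVersion {
    major: u64,
    minor: u64,
    release: u64,
}

impl ApiVersion {
    pub const fn new(major: u64, minor: u64, release: u64) -> Self {
        Self { major, minor, release }
    }
}
```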

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-12-02 15:27:37 +01:00
Christian Ebner
d11393c70e api types: version: drop unused repoid field
The `ApiVersion` type was introduced in commit a926803b
("api/api-types: refactor api endpoint version, add api types")
including the `repoid`, added for completeness when converting from
a pre-existing `ApiVersionInfo` instance, as returned by the
`version` api endpoint.

Drop the additional `repoid` field, since it is currently not used,
can be obtained from the `ApiVersionInfo` as well, and only hinders
the implementation of easy api version comparison.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-12-02 15:27:37 +01:00
Fabian Grünbichler
77fd1853b3 clippy: use div_ceil to calculate fixed index length
no semantic changes intended
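
For example (illustrative):

```
// chunks needed to cover `size` bytes; replaces the manual
// (size + chunk_size - 1) / chunk_size pattern
fn index_length(size: u64, chunk_size: u64) -> u64 {
    size.div_ceil(chunk_size)
}
```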

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-12-02 11:37:06 +01:00
Fabian Grünbichler
a50e0014df clippy: elide more lifetimes
these were detected with 1.83.0

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-12-02 11:34:05 +01:00
Maximiliano Sandoval
d61bac6841 api: config: run rustfmt
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-02 11:15:18 +01:00
Maximiliano Sandoval
cdeed5e440 datastore: simplify let-else block with ? operator
Fixes the question_mark clippy lint:

```
warning: this `let...else` may be rewritten with the `?` operator
   --> pbs-datastore/src/datastore.rs:101:5
    |
101 | /     let Some(ref device_uuid) = config.backing_device else {
102 | |         return None;
103 | |     };
    | |______^ help: replace it with: `let ref device_uuid = config.backing_device?;`
    |
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#question_mark
    = note: `#[warn(clippy::question_mark)]` on by default
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-02 11:15:18 +01:00
Maximiliano Sandoval
acddd3f09a restore_daemon: use map_while instead of filter_map(Result::ok)
Fixes the lines_filter_map_ok clippy lint:

```
warning: `filter_map()` will run forever if the iterator repeatedly produces an `Err`
   --> proxmox-restore-daemon/src/proxmox_restore_daemon/disk.rs:195:14
    |
195 |             .filter_map(Result::ok)
    |              ^^^^^^^^^^^^^^^^^^^^^^ help: replace with: `map_while(Result::ok)`
    |
note: this expression returning a `std::io::Lines` may produce an infinite number of `Err` in case of a read error
   --> proxmox-restore-daemon/src/proxmox_restore_daemon/disk.rs:193:18
    |
193 |           for f in BufReader::new(File::open("/proc/filesystems")?)
    |  __________________^
194 | |             .lines()
    | |____________________^
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#lines_filter_map_ok
    = note: `#[warn(clippy::lines_filter_map_ok)]` on by default
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-02 11:15:18 +01:00
Maximiliano Sandoval
414a5b3a3a remove redundant imports
Fixes the single_component_path_imports clippy lint:

```
warning: this import is redundant
  --> proxmox-file-restore/src/block_driver_qemu.rs:15:1
   |
15 | use proxmox_systemd;
   | ^^^^^^^^^^^^^^^^^^^^ help: remove it entirely
   |
   = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#single_component_path_imports
   = note: `#[warn(clippy::single_component_path_imports)]` on by default

warning: this import is redundant
  --> proxmox-backup-client/src/mount.rs:19:1
   |
19 | use proxmox_systemd;
   | ^^^^^^^^^^^^^^^^^^^^ help: remove it entirely
   |
   = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#single_component_path_imports
   = note: `#[warn(clippy::single_component_path_imports)]` on by default
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-02 11:15:18 +01:00
Maximiliano Sandoval
80264dbfaa docs: fix outer docs comments
Fixes the suspicious_doc_comments clippy lints:

```
warning: this is an outer doc comment and does not apply to the parent module or crate
 --> proxmox-restore-daemon/src/main.rs:1:1
  |
1 | ///! Daemon binary to run inside a micro-VM for secure single file restore of disk images
  | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  |
  = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#suspicious_doc_comments
  = note: `#[warn(clippy::suspicious_doc_comments)]` on by default
help: use an inner doc comment to document the parent module or crate
  |
1 | //! Daemon binary to run inside a micro-VM for secure single file restore of disk images
  |

warning: this is an outer doc comment and does not apply to the parent module or crate
 --> proxmox-restore-daemon/src/proxmox_restore_daemon/mod.rs:1:1
  |
1 | ///! File restore VM related functionality
  | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  |
  = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#suspicious_doc_comments
help: use an inner doc comment to document the parent module or crate
  |
1 | //! File restore VM related functionality
  |

warning: this is an outer doc comment and does not apply to the parent module or crate
 --> proxmox-restore-daemon/src/proxmox_restore_daemon/api.rs:1:1
  |
1 | ///! File-restore API running inside the restore VM
  | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  |
  = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#suspicious_doc_comments
help: use an inner doc comment to document the parent module or crate
  |
1 | //! File-restore API running inside the restore VM
  |
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-02 11:15:18 +01:00
Maximiliano Sandoval
ff9e36b431 client: catalog: remove unnecessary sort_unstable_by
Fixes the unnecessary_sort_by clippy lint:

```
warning: consider using `sort`
   --> proxmox-backup-client/src/catalog.rs:102:13
    |
102 |             metadata_archives.sort_unstable_by(|a, b| a.cmp(b));
    |             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: try: `metadata_archives.sort_unstable()`
    |
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_sort_by
    = note: `#[warn(clippy::unnecessary_sort_by)]` on by default
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-02 11:15:18 +01:00
Maximiliano Sandoval
7eee253d8c remove needless type conversion
The mount types were probably here for compatibility with older proxmox-sys.

Fixes the useless_conversion clippy lints:

```
warning: useless conversion to the same type: `std::os::fd::OwnedFd`
   --> proxmox-backup-client/src/mount.rs:172:23
    |
172 |     let pr: OwnedFd = pr.into(); // until next sys bump
    |                       ^^^^^^^^^ help: consider removing `.into()`: `pr`
    |
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#useless_conversion
    = note: `#[warn(clippy::useless_conversion)]` on by default

warning: useless conversion to the same type: `std::os::fd::OwnedFd`
   --> proxmox-backup-client/src/mount.rs:173:23
    |
173 |     let pw: OwnedFd = pw.into();
    |                       ^^^^^^^^^ help: consider removing `.into()`: `pw`
    |
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#useless_conversion

warning: useless conversion to the same type: `pbs_api_types::BackupArchiveName`
   --> proxmox-file-restore/src/main.rs:484:18
    |
484 |                 &archive_name.try_into()?,
    |                  ^^^^^^^^^^^^^^^^^^^^^^^
    |
    = help: consider removing `.try_into()`
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#useless_conversion
    = note: `#[warn(clippy::useless_conversion)]` on by default
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-02 11:15:18 +01:00
Maximiliano Sandoval
2dff4d2d6d client: remove unnecessary deref
Fixes the needless_option_as_deref clippy lint:

```
warning: derefed type is same as origin
    --> proxmox-backup-client/src/main.rs:1154:21
     |
1154 |                     payload_target.as_ref().as_deref(),
     |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: try: `payload_target.as_ref()`
     |
     = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_option_as_deref
     = note: `#[warn(clippy::needless_option_as_deref)]` on by default
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-02 11:15:18 +01:00
Maximiliano Sandoval
23e91fdd98 docs: use sublist indentation
Fixes the doc_lazy_continuation clippy lint, e.g.:

```
warning: doc list item without indentation
   --> src/server/pull.rs:764:5
    |
764 | /// -- attempt to pull each NS in turn
    |     ^
    |
    = help: if this is supposed to be its own paragraph, add a blank line
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#doc_lazy_continuation
help: indent this line
    |
764 | ///   -- attempt to pull each NS in turn
    |     ++
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-02 11:15:18 +01:00
Maximiliano Sandoval
c68117d0a1 push: use unwrap_or to prevent lazy evaluation
Fixes the unnecessary_lazy_evaluations clippy lint:

```
warning: unnecessary closure used to substitute value for `Option::None`
   --> src/server/push.rs:445:25
    |
445 |           let max_depth = params
    |  _________________________^
446 | |             .max_depth
447 | |             .unwrap_or_else(|| pbs_api_types::MAX_NAMESPACE_DEPTH);
    | |__________________________________________________________________^
    |
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_lazy_evaluations
    = note: `#[warn(clippy::unnecessary_lazy_evaluations)]` on by default
help: use `unwrap_or` instead
    |
447 |             .unwrap_or(pbs_api_types::MAX_NAMESPACE_DEPTH);
    |              ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-02 11:15:18 +01:00
Maximiliano Sandoval
de44cb7b47 backup_manager: use Vec::first instead of get(0)
Fixes the get_first clippy lint:

```
warning: accessing first element with `matching_stores.get(0)`
   --> src/bin/proxmox_backup_manager/datastore.rs:284:26
    |
284 |     if let Some(store) = matching_stores.get(0) {
    |                          ^^^^^^^^^^^^^^^^^^^^^^ help: try: `matching_stores.first()`
    |
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#get_first
    = note: `#[warn(clippy::get_first)]` on by default
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-02 11:15:18 +01:00
Maximiliano Sandoval
81635877e2 use inspect_err when possible
Fixes the manual_inspect clippy lint:

```
warning: using `map_err` over `inspect_err`
   --> src/bin/proxmox_backup_debug/diff.rs:125:18
    |
125 |                 .map_err(|err| {
    |                  ^^^^^^^
    |
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#manual_inspect
    = note: `#[warn(clippy::manual_inspect)]` on by default
help: try
    |
125 ~                 .inspect_err(|err| {
126 ~                     log::error!("{}", format_key_source(&key.source, "encryption"));
    |
```

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-02 11:15:18 +01:00
Maximiliano Sandoval
f36e8fea91 remove needless borrows
Fixes the needless_borrow lint.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-12-02 11:15:18 +01:00
Thomas Lamprecht
2fab9155b3 bump version to 3.3.0-2
minor bump, as just the server package will be uploaded, with minor,
mostly cosmetic, fixes.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-28 13:13:33 +01:00
Christian Ebner
b711ccf0ad server: push: fix supported api version check
The current version check does not cover cases where the minor
version is 3, but the release version is below 11. Fix this by
extending the check accordingly.
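
A hedged sketch of the corrected shape, using a tuple comparison so
major, minor and release are all covered (the minimum version shown is
a placeholder, not the actual cut-off):

```
fn remote_supports_push(major: u64, minor: u64, release: u64) -> bool {
    // tuples compare lexicographically, so release is no longer ignored
    (major, minor, release) >= (3, 2, 11) // placeholder minimum version
}
```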

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
 [ TL: re-sort line to go from bigger to smaller ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-28 12:58:25 +01:00
Stefan Hanreich
38d961f9e4 ui: mask unmounted datastores in datastore overview
Currently, showing the Datastore summary page leads to errors since
the status returned by the API does not contain any fields that are
checked by the component rendering the datastore summary. We solve
this by first checking if the datastore is currently mounted and
masking the element if it is unmounted.

Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
2024-11-28 12:07:25 +01:00
Christian Ebner
6ab04f14ae ui: fix remove vanished tooltip to be valid for both sync directions
The tooltip text shown for the remove vanished flag when hovering
is incorrect for the push direction. By using `sync target` instead of
`local`, make the text agnostic to the actual sync direction.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-28 11:29:15 +01:00
Thomas Lamprecht
269b7bffc7 tree-wide: fix various typos
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-27 20:52:48 +01:00
Thomas Lamprecht
f418479aaa bump version to 3.3.0-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-27 20:38:44 +01:00
Thomas Lamprecht
c7cf3b424a ui: version info: replace hyphen separator with dot
Our package uses <x>.<y>.<z>-<rev> as version format; here we get
version=<x>.<y> and release=<z>, so we rendered the version like
<x>.<y>-<z>, which is rather wrong.

While the return value of the API call might be a bit odd and should
probably change (or at least gain a full version property), for now
it's what it is, so at least render it correctly.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-27 20:31:53 +01:00
Thomas Lamprecht
eb21f639f2 ui: partition selector: clean-up indentation of model transform arrow-fn
not good yet but better...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-27 20:16:32 +01:00
Hannes Laimer
54308a12b3 ui: filter partitions without proper UUID in partition selector
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-27 20:08:03 +01:00
Thomas Lamprecht
af037dd25d Merge branch 're-apply-fionas-v2'
I made a mistake and applied the v1, not the v2, of the series; show
this by merging the actual v2. Albeit this should not be done too
frequently, to avoid making the git history too messy – sorry!

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-27 20:01:19 +01:00
Fiona Ebner
d3910e1334 api: disks: directory: fail if mount unit already exists
Without this check, if a mount unit is present, but the file system is
not mounted, it will just get overwritten. The unit might belong to an
existing datastore.

There already is a check against a duplicate datastore, but it only
runs after the mount unit has already been overwritten, and having the
add-datastore flag present is not a precondition for triggering the
issue.

The check is done even if the newly created directory datastore is
removable. While in that case the mount unit is not overwritten, the
conflict for the mount point is still present, so it is nice to fail
early.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-11-27 19:59:37 +01:00
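
Conceptually the fix boils down to an existence check before the unit file is written; a simplified sketch with hypothetical helper names (real systemd unit-name escaping is more involved than shown here):

```
use std::path::{Path, PathBuf};

/// Hypothetical helper: naive mount unit path for a mount point, e.g.
/// "/mnt/datastore/foo" -> "/etc/systemd/system/mnt-datastore-foo.mount".
fn mount_unit_path(mount_point: &str) -> PathBuf {
    let name = mount_point.trim_matches('/').replace('/', "-");
    Path::new("/etc/systemd/system").join(format!("{name}.mount"))
}

fn create_mount_unit(mount_point: &str) -> Result<(), String> {
    let unit = mount_unit_path(mount_point);
    if unit.exists() {
        // Fail early instead of silently overwriting a unit that may
        // belong to an existing datastore.
        return Err(format!("mount unit {unit:?} already exists"));
    }
    // ... write the unit file here ...
    Ok(())
}

fn main() {
    match create_mount_unit("/mnt/datastore/foo") {
        Ok(()) => println!("would create mount unit"),
        Err(err) => eprintln!("{err}"),
    }
}
```
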
Fiona Ebner
a7792e16c5 api: disks: directory: factor out helper for mount unit path
In preparation to check for a pre-existing mount unit.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
2024-11-27 19:59:37 +01:00
Fiona Ebner
353711199e api: disks: directory: fail if mount unit already exists
Without this check, if a mount unit is present, but the file system is
not mounted, it will just get overwritten. The unit might belong to an
existing datastore.

There already is a check against a duplicate datastore, but it only
runs after the mount unit has already been overwritten, and having the
add-datastore flag present is not a precondition for triggering the
issue.

The check is done even if the newly created directory datastore is
removable. While in that case the mount unit is not overwritten, the
conflict for the mount point is still present, so it is nice to fail
early.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-11-27 19:57:46 +01:00
Fiona Ebner
584068893a api: disks: directory: factor out helper for mount unit path
In preparation to check for a pre-existing mount unit.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
 [ TL: move format template variable directly into string ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-27 19:56:19 +01:00
Thomas Lamprecht
87c648018d docs: removable datastores: expand notes on supported file systems
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-27 19:50:35 +01:00
Hannes Laimer
d8777c0f9b docs: add note for why FAT is not supported for removable datastores
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-27 19:41:40 +01:00
Thomas Lamprecht
e5f2903981 docs: make sphinx ignore the environment cache to avoid missing synopsis
Pass the `-E` option to, quoting its man-page, "don't use a saved
environment (the structure caching all cross-references), but rebuild
it completely."

As when reusing the environment, one gets some empty results for
synopsis content depending on build order; for example, the synopsis in
the command-syntax appendix HTML output is empty while the same
synopsis used for the dedicated HTML page is complete.

By making the build-log more verbose, some emitted 'env-purge-doc'
events from sphinx caught my attention; while this itself might be
harmless (I did not follow the rat's tail to its end), it made me a bit
suspicious about caching and wrong/missing invalidation.

With the environment ignored this is fixed; a diffoscope comparison
shows that not only the command-syntax page but many others have their
various synopsis content added again. There are solely added lines,
none removed nor changed, so it seems fine to enable that option
without an in-depth sphinx review.

Note, I first suspected the separate "doctree pickles" cache
directory (`-d` option), which is used for all output types besides
the man-pages one; the latter uses the default .doctree directory.
But changing the man-page target to also use the custom doctree cache
had no effect on the build result whatsoever (compared with
diffoscope).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-27 19:37:46 +01:00
Fabian Grünbichler
1b4426feec GC: add check for nested datastore
these are particularly problematic since GC will walk the whole datastore tree
on the file system, and will thus pick up indices (but not chunks!) from nested
directories that are ignored in other code paths that use our regular
iterators..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-27 15:26:56 +01:00
Fabian Grünbichler
4d5e14c07e datastore: extract nesting check into helper
and improve the variable naming while we are at it. this allows the check to be
re-used in other code paths, like when starting a garbage collection.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-27 15:25:37 +01:00
Maximiliano Sandoval
93bdba1ac6 dashboard: make Subscription translatable
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-11-27 15:25:14 +01:00
Maximiliano Sandoval
5a6aff6ad5 ui: tree: make Tape Backup string translatable
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-11-27 15:25:11 +01:00
Thomas Lamprecht
614b5b6713 bump version to 3.2.14-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-27 14:43:06 +01:00
Fabian Grünbichler
4948864a07 api: create_datastore: fix nesting checks
there are two kinds of overlap we need to check here:
- two removable datastores backed by the same device must not have nested
  relative paths on the device
- any two datastores must not have nested absolute paths

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-27 14:17:29 +01:00
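
Both overlap kinds reduce to a component-wise prefix test; `Path::starts_with` compares whole path components, so `/mnt/store2` is correctly not treated as nested under `/mnt/store`. A standalone sketch:

```
use std::path::Path;

/// Two datastore paths conflict if either is nested inside the other
/// (or if they are identical).
fn paths_nested(a: &Path, b: &Path) -> bool {
    a.starts_with(b) || b.starts_with(a)
}

fn main() {
    assert!(paths_nested(Path::new("/mnt/store"), Path::new("/mnt/store/inner")));
    assert!(paths_nested(Path::new("/mnt/store"), Path::new("/mnt/store")));
    assert!(!paths_nested(Path::new("/mnt/store"), Path::new("/mnt/store2")));
}
```
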
Christian Ebner
13fe842041 docs: mention required source audit permission for push sync jobs
To be in line with the updated permission requirements, as
Datastore.Audit is now required to read and edit sync jobs in push
direction.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-27 13:20:22 +01:00
Christian Ebner
4eadbcc49f api: sync: include required permissions for push direction
Sync jobs in push and pull direction require a different set of
privileges for the various api methods provided. Update the
descriptions to include the push direction and list them
accordingly.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-27 13:20:22 +01:00
Christian Ebner
5eb51c75ca api: sync: restrict edit permissions for push sync jobs
Users require `Datastore.Audit` on the source datastore to read sync
jobs. Further, also restrict the permissions to modify sync jobs in
push direction to include the `Datastore.Audit` permission on the
source, as otherwise a user would be able to create or edit sync jobs
in push direction without being able to see them.

Reported-by: Friedrich Weber <f.weber@proxmox.com>
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-27 13:20:22 +01:00
Alexander Zeidler
0e21bb2482 docs: installation: several small fixes/improvements
* consistently use "medium" (singular), as only one is needed for
  installation (installation-media.rst not renamed)
* add short introduction to recently added chapter "Installation Media"
* update minimum required flash drive storage space to 2 GB
* remove CD-ROM (too little storage space) but keep DVD
* mention explicitly that data gets overwritten on installation media /
  installation target disks
* mention that using `dd` will require root privileges
* add accidentally cut off text when copying from PVE docs
* add reference labels to currently needed section titles
* reword some paragraphs for completeness and readability
* mention all installation methods in the intro of "Server Installation"
* add the boot order as possible boot issue
* remove recently added redundant product website hyperlinks (as earlier
  with commit 34407477e2)
* fix broken heading level of APT-based PBC repo

* slightly reorder sub-chapters of "Installation":

After adding the chapter "Installation Media" (d363818641), the chapter
order under "Installation" is:

1. System Requirements
2. Installation Media
3. Debian Package Repositories
4. Server Installation
5. Client Installation

But repos are more likely to be configured after installation, and for
other installation methods chapter links exist anyway. So to keep the
chapter order more logical, "Debian Package Repositories" is now moved
after "Client Installation".

Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
2024-11-27 12:56:00 +01:00
Aaron Lauterer
0fe9fd8dd0 api: removable datastore: downgrade device already mounted error to info
pbs-datastore::datastore::is_datastore_mounted_at() verifies that the
mounted file system has the expected UUID. Therefore we don't have to
error out if we try to mount an already mounted removable datastore.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2024-11-27 12:44:06 +01:00
Christoph Heiss
4869ec3bd3 docs: update copyright years
It's already 2024 for quite some time now.

Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
2024-11-27 12:43:23 +01:00
Christian Ebner
17b7ab8021 sync: push: pass full error context when returning error to job
Show the full error context when fetching the remote target
namespaces fails. As logging of the error is handled by the calling
sync job, reformat the error to include the error context before
returning.

Instead of the error
```
TASK ERROR: Fetching remote namespaces failed, remote returned error
```

the user is now presented with an error like
```
TASK ERROR: Fetching remote namespaces failed, remote returned error: datastore 'removable1' is not mounted
```

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-27 12:37:03 +01:00
Thomas Lamprecht
00ced7808b d/control: update versioned dependency for widget-toolkit
To ensure we can bind to the emptyText of a display-edit field,
otherwise the empty text can be confusing.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-27 12:32:21 +01:00
Fiona Ebner
87f2087789 ui: datastore edit: fix emptytext for path field
It is a relative path for removable datastores.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-11-27 12:05:53 +01:00
Lukas Wagner
0ca7833bc5 ui: datastore content: change button text "Add NS" to "Add Namespace"
We don't use the abbreviation anywhere else in our UI or docs.
To avoid any confusion about this (loaded) abbreviation, this
commit replaces it with the full word "Namespace".
There is more than enough space in the top bar for the larger button
size, even on low resolution screens (checked on 1280x700).

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-11-27 11:49:54 +01:00
Hannes Laimer
41b8bf2aff api: admin: add Datastore.Modify permission for mount
So the mount and unmount endpoints have matching permissions.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-27 11:49:26 +01:00
Dominik Csapak
0fe805b95f ui: prune keep input: actually clear value on clear trigger click
instead of resetting to the originalValue. This makes it behave like
other similar fields (e.g. the combogrid).

Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-27 10:46:06 +01:00
Hannes Laimer
363e32a805 api: directory: use relative path when creating removable datastore
In an earlier version of this series the datastore path was absolute
for removable datastores. This is simply a leftover that was missed
when changing that to relative paths.

Reported-by: Markus Frank <m.frank@proxmox.com>
Fixes: 94a068e31 ("api: node: allow creation of removable datastore through directory endpoint")
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-27 10:05:40 +01:00
Fabian Grünbichler
c5c7fd3482 pull-sync: do not interpret older missing snapshots as needs-resync
when loading the verification state for a local snapshot, it must
first be ensured that it actually exists, else the lack of a manifest
will be interpreted as a corrupt snapshot, triggering a "resync" that
is actually a sync of all missing snapshots instead of just the newer
ones, which is what's actually wanted here.

The diff is best seen by telling git to ignore the whitespace changes.

Fixes: 0974ddfa ("fix #3786: api: add resync-corrupt option to sync jobs")
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
 [ TL: reword subject and add a bit to commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-27 10:03:09 +01:00
Thomas Lamprecht
6bd63b0e71 bump version to 3.2.13-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-26 17:17:09 +01:00
Christian Ebner
63da9f8397 client: backup writer: fix regression in progress output
Fixes a regression introduced when switching from the plain string
to be used for archive names to the BackupArchiveName api type in
commit addfae26 ("api types: introduce `BackupArchiveName` type").

The archive name now always is stored including the server archive
name extension. Adapt the check for which archive types to display
the progress log output to reflect this change.

Fixes: addfae26 ("api types: introduce `BackupArchiveName` type")
Reported-by: Max Carrara <m.carrara@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-26 17:16:14 +01:00
Shannon Sterz
963401348a datastore: re-phrase error message when datastore is unavailable
the current phrase leads to clumsy log messages such as:

> datastore 'store' is in datastore is being unmounted

this commit re-phrases that to:

> datastore 'store' is unavailable: datastore is being unmounted

Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
2024-11-26 16:44:26 +01:00
Fiona Ebner
7e1aa4d283 ui: datastore edit: improve field label name
And use title case to be consistent with the other field labels.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-11-26 16:42:07 +01:00
Hannes Laimer
bd25fc40a6 ui: allow resetting unmounting maintenance
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-26 16:35:02 +01:00
Thomas Lamprecht
bb367c4d2e manager: run cargo fmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-26 16:24:01 +01:00
Hannes Laimer
a5f3d4a21c api: removable datastores: require Sys.Modify permission on /system/disks
A lot of removable datastore actions can alter the system state
(mounting, unmounting), so require Sys.Modify for lack of better
alternative.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
 [ TL: improve commit subject and add access-description for create,
   and delete, where we do a dynamic access check ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-26 16:22:41 +01:00
Dominik Csapak
af4d5607f1 sync jobs: remove superfluous direction property
since the SyncJobConfig struct now contains a 'sync-direction' property, we can
omit the 'direction' property of the SyncJobStatus struct. This makes a
few adaptations in the ui necessary:

* use the correct field
* handle 'pull' as default (since we don't necessarily get a
  'sync-direction' in that case)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-26 16:02:22 +01:00
Fabian Grünbichler
b3f16f6227 api: sync direction: extract match check into impl fn
In case we add another direction or another call site, doing it
without a wildcard match arm seems cleaner and more future-proof.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
 [ TL: adapt subject/message slightly ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-26 16:00:54 +01:00
Christian Ebner
e9dfb83131 api types: drop unused config type helpers for sync direction
Jobs for both sync directions are now stored using the same `sync`
config section type, so drop the outdated helpers.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-26 16:00:50 +01:00
Christian Ebner
e066bd7207 bin: show direction in sync job list output
As the WebUI also lists the sync direction, display the direction in
the cli output as well.

Example output:
```
┌─────────────────┬────────────────┬───────────┬────────┬───────────────────┬──────────┬──────────────┬─────────┬─────────┐
│ id              │ sync-direction │ store     │ remote │ remote-store      │ schedule │ group-filter │ rate-in │ comment │
╞═════════════════╪════════════════╪═══════════╪════════╪═══════════════════╪══════════╪══════════════╪═════════╪═════════╡
│ s-6c16fab2-9e85 │                │ datastore │        │ datastore         │ hourly   │ all          │         │         │
├─────────────────┼────────────────┼───────────┼────────┼───────────────────┼──────────┼──────────────┼─────────┼─────────┤
│ s-8764c440-3a6c │ push           │ datastore │ local  │ push-target-store │ hourly   │ all          │         │         │
└─────────────────┴────────────────┴───────────┴────────┴───────────────────┴──────────┴──────────────┴─────────┴─────────┘
```

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-26 16:00:45 +01:00
Christian Ebner
4b76b731cb api: admin/config: introduce sync direction as job config parameter
Add the sync direction for the sync job as optional config parameter
and refrain from using the config section type for conditional
direction check, as they are now the same (see previous commit).

Use the configured sync job parameter instead of passing it to the
various methods as function parameter and only filter based on sync
direction if an optional api parameter to distinguish/filter based on
direction is given.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-26 16:00:40 +01:00
Christian Ebner
b5814a4142 config: sync: use same config section type sync for push and pull
Use `sync` as config section type string for both, sync jobs in push
and pull direction, renaming the now combined config plugin to sync
plugin.

Commit bcd80bf9 ("api types/config: add `sync-push` config type for
push sync jobs") introduced the additional config type with the
intent to reduce possible misconfiguration. Partially revert this to
use the same config type string again, since the misconfiguration
can happen nevertheless (by editing the config type) and currently
sync job configs are only listed partially when fetched via the
config api endpoint. The filtering based on the additional api
parameter is however retained.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-26 16:00:28 +01:00
Dominik Csapak
3b3d63ccfd ui: sync jobs: add search box
filter by (remote) store, remote, id, owner, direction.
The local store is only included in the global view, not the
datastore-specific one.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-26 14:54:33 +01:00
Dominik Csapak
5b11e52b08 ui: sync jobs: change default sorting to 'store' -> 'direction' -> 'id'
instead of just the id, which makes the list in the global datastore
view a bit easier to digest (since it's now sorted by store first)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-26 14:54:33 +01:00
Dominik Csapak
e302382890 ui: sync jobs: revert to single list for pull/push jobs
but add a separate column for the direction so one still sees the
separate jobs.

change the 'local owner/user' to a single column, but add a tooltip in
the header to explain when it does what.

This makes the 'SyncJobsPullPushView' unnecessary, so delete it.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-26 14:54:33 +01:00
Dominik Csapak
28d6afc2d7 cli: manager: sync: add 'sync-direction' parameter to list
so one can list pull and push jobs

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-26 14:54:33 +01:00
Dominik Csapak
403ad1f6d1 api: admin: sync: add optional 'all' sync type for listing
so that one can list all sync jobs, both pull and push, at the same
time. To not confuse existing clients that only know of pull syncs, show
only them by default and make the 'all' parameter opt-in. (But add a
todo for 4.x to change that)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-26 14:54:33 +01:00
Dominik Csapak
23185135bb api: admin: sync: add direction to sync job status
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-26 14:54:33 +01:00
Thomas Lamprecht
d394c33a0c update proxmox-notify to 0.5.1
To ensure we got the 10s timeout for webhooks and gotify
notifications available.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-26 14:47:03 +01:00
Christian Ebner
0db4d9031b api types: add missing conf to blob archive name mapping
Commit addfae26 ("api types: introduce `BackupArchiveName` type")
introduced a dedicated archive name api type to add rust type
checking and bundle helpers to the api type. Since this, the backup
archive name to server archive name mapping is handled by its parser.

This however did not cover the `.conf` extension used for VM config
files. Add the missing `.conf` to `.conf.blob` to the match statement
and the test cases.

Fixes: addfae26 ("api types: introduce `BackupArchiveName` type")
Reported-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-26 13:28:17 +01:00
Fabian Grünbichler
d9f36232f1 docs: removable datastores: rephrasing and typos
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-26 13:09:16 +01:00
Hannes Laimer
3b9cf7b7a1 docs: add information for removable datastores
Specifically about jobs and how they behave when the datastore is not
mounted, how to create and use devices with multiple datastores on
multiple PBS instances, and options for handling failed unmounts.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-26 13:08:04 +01:00
Hannes Laimer
8c7492b99b api: types: add 'mount_status' to schema
... and deserialize with default if field is missing in data.

Reported-by: Aaron Lauterer <a.lauterer@proxmox.com>
Fixes: 76609915d6 ("pbs-api-types: add mount_status field to DataStoreListItem")
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-26 13:08:04 +01:00
Hannes Laimer
e90baeaaa8 api: maintenance: allow setting of maintenance mode if 'unmounting'
So it is possible to reset it after a failed unmount, or abort an
unmount task by resetting it through the API.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-26 13:07:53 +01:00
Stefan Hanreich
feeace2696 docs: client: change disk name from backup to disk
The same word occurring twice in succession can lead to the brain
skipping the second occurrence. Change the name of the archives in the
example from backup.pxar to archive-name.pxar to avoid that effect.

Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
 [ TL: squash in Christian's suggestion to use 'archive-name.pxar' ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-26 12:43:41 +01:00
Christian Ebner
caba859692 ui: use same label for removable datastore created from disk
The `Add datastore` window labels the flag for creating a removable
datastore as `Removable datastore`, while creating the datastore via the
storage/disks interface will refer to it as `is removable`.

Use the same `Removable datastore` label for both locations.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-26 12:25:09 +01:00
Christian Ebner
a910ee8c0d docs: explain some further caveats of the change detection modes
Explain that the change detection mode `data` makes sure that no
files are considered reusable, even if their metadata might match, and
that using the ctime and inode number to detect unchanged files is not
possible if the filesystem was synced to a temporary location, which
is why mtime and size are used for detection instead.

Also note the reduced deduplication when storing snapshots with
mixed archive formats on the same datastore.

Further, mention the backwards compatibility to older versions of the
Proxmox Backup Server.

Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-26 12:22:35 +01:00
Stefan Hanreich
67a7e3c3eb client: fix example commands for client usage
The example commands in the Change Detection Mode / File Exclusion
section are missing the command in the client invocation. Add the
backup command to the examples, so they are actually valid.

Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
2024-11-26 12:08:46 +01:00
Thomas Lamprecht
1a0eec9469 docs: update online-help-info reference map
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-26 12:07:38 +01:00
Gabriel Goller
80c9afae4e ui: add onlineHelp for consent-banner option
Add onlineHelp link to the consent-banner docs section in the popup when
inserting the consent-banner text.

Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-11-26 12:00:08 +01:00
Dominik Csapak
20e58c056f ui: utils: add task description for mounting/unmounting
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-26 11:59:31 +01:00
Lukas Wagner
8dccdeb942 docs: notification: add webhook endpoint documentation
Same information as in pve-docs but translated to restructured text.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-11-26 11:58:46 +01:00
Lukas Wagner
9c4a934c71 ui: utils: enable webhook edit window
This allows users to add/edit new webhook targets.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-By: Stefan Hanreich <s.hanreich@proxmox.com>
2024-11-26 11:58:46 +01:00
Lukas Wagner
6a9aa3b9f4 management cli: add CLI for webhook targets
The code was copied and adapted from the gotify target CLI.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-11-26 11:58:46 +01:00
Lukas Wagner
aa8f7f6208 api: notification: add API routes for webhook targets
Copied and adapted from the Gotify ones.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-By: Stefan Hanreich <s.hanreich@proxmox.com>
2024-11-26 11:58:46 +01:00
Gabriel Goller
5a52d1f06c reuse-datastore: avoid creating another prune job
If a datastore with a prune job is removed, the prune job is preserved
as it is stored in /etc/proxmox-backup/prune.cfg. We also create a
default prune job for every datastore – this means that when reusing a
datastore that previously existed, you end up with duplicate prune jobs.
To avoid this, we check if a prune job already exists and, when it does,
refrain from creating the default one. (We also check if specific
keep-options have been added; if yes, then we create the job
nevertheless.)

Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-11-26 11:24:46 +01:00
Fabian Grünbichler
f73bc28f03 bump pxar dependency
to ensure bug fix is picked up

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-26 11:24:20 +01:00
Thomas Lamprecht
4174dafd32 bump version to 3.2.12-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-25 22:52:48 +01:00
Thomas Lamprecht
779f82ebdf ui: datastore summary: also trigger navigation-store load on mount/unmount
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-25 21:34:22 +01:00
Thomas Lamprecht
668d8dfda4 ui: datastore summary: do single info-panel load on failure
Switch over using the controller of the info panel directly, avoiding
firing events, and add a single store load to cause the mask-logic
when the status update store goes from succeeding to failure.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-25 21:34:22 +01:00
Thomas Lamprecht
fc2d288434 ui: datastore summary: also stop/start rrd store on failure edges
No point in querying RRD metrics if it will fail anyway, so stop them
like we stop the status store, and start them again once it can work.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-25 21:34:22 +01:00
Thomas Lamprecht
2de3d5385c ui: datastore summary: only start/stop other stores on edges
Disabling basically was already done only on a transition edge from
"success" -> "failure" (= !success), as we stopped the periodic store
load in that case; thus we never trigger "failure" twice in a row
without any user input.

But on success we always unconditionally fired an activate event,
which caused the status store to start its store updates, which in
turn immediately triggered a store load. So the verbose status call of
the info panel was now coupled to the 1s update period of the
encompassing summary panel, not the slower 5s period at which it
actually wanted to trigger an update.

So save the last state and check if it actually differs before causing
such action.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-25 21:34:22 +01:00
Thomas Lamprecht
d11deccff1 ui: datastore summary: trigger status store load after unmount
Always trigger an explicit status store load.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-25 21:34:22 +01:00
Thomas Lamprecht
f784201c63 ui: datastore summary: start store updates after mount task is done
Without this, we immediately start the store updates even before the
browser created the (async) mount API request. So it's very likely
that the first store load will still get an error due to the backing
device of the datastore not being mounted yet. That in turn will
trigger our error detection behavior in the load event listener and
disable periodic store updates again.

Move the start of the update into the taskDone handler. We do not need
to check if the task succeeded, as either it did, and we will do
periodic updates, or it did not and we do at least one update to load
the current status and then stop again auto-loading the store anyway.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-25 21:34:22 +01:00
Thomas Lamprecht
127de88a95 ui: datastore summary: fixate connection-info button position & add separator
It's not nice if an existing, always visible button moves around
depending on the datastore type. Rather move the optional buttons to
the right and add a separator for visual grouping.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
09155de386 api: disks: only return UUID of partitions if it actually is one
Some filesystems like FAT don't include a concept of UUIDs.
Instead, tools like blkid derive these identifiers based on
certain filesystem metadata, such as volume serial numbers or
other unique information. These do, however, not follow the
format specified in RFC 9562[1].

[1] https://datatracker.ietf.org/doc/html/rfc9562

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
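
For illustration, a check along these lines can distinguish a canonical UUID from a FAT-style volume serial; this is a sketch, not the code from the commit:

```
/// True for the canonical 8-4-4-4-12 hex layout of RFC 9562 UUIDs.
fn is_canonical_uuid(s: &str) -> bool {
    let groups: Vec<&str> = s.split('-').collect();
    let expected = [8usize, 4, 4, 4, 12];
    groups.len() == 5
        && groups
            .iter()
            .zip(expected)
            .all(|(g, len)| g.len() == len && g.chars().all(|c| c.is_ascii_hexdigit()))
}

fn main() {
    assert!(is_canonical_uuid("123e4567-e89b-12d3-a456-426614174000"));
    assert!(!is_canonical_uuid("ABCD-1234")); // FAT-style volume serial, not a UUID
}
```
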
Hannes Laimer
18d4c4fc35 bin: debug: add inspect device command
... to get information about (removable) datastores a device contains

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
c8835f5882 ui: support create removable datastore through directory creation
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
42e3b2f12a node: disks: replace BASE_MOUNT_DIR with DATASTORE_MOUNT_DIR
... since they do have the same value.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
43466bf538 api: node: include removable datastores in directory list
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
94a068e316 api: node: allow creation of removable datastore through directory endpoint
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
f0e1cb86d6 ui: render 'unmount' maintenance mode correctly
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
703a822c97 ui: maintenance: fix disable msg field if no type is selected
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
aaac857282 ui: add datastore status mask for unmounted removable datastores
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
b1b6489233 ui: utils: make parseMaintenanceMode more robust
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
c74b289174 ui: tree: render unmounted datastores correctly
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
9cafa1775b ui: add (un)mount button to summary
And only try to load datastore information if the datastore is
available.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
51148a0b1e ui: add removable datastore creation support
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
62963e6452 ui: add partition selector form
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
e8eeee0b52 docs: add removable datastores section
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
b17ebd5c2c datastore: handle deletion of removable datastore properly
Data deletion is only possible if the datastore is mounted; we won't
attempt to mount it just for the purpose of deleting data.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
2f874935b5 add auto-mounting for removable datastores
If a device houses multiple datastores, none of them will be mounted
automatically; if a device only contains a single datastore, it will be
mounted automatically. The reason for not mounting multiple datastores
automatically is that we don't know which one is actually wanted, and
since mounting all of them means all of them have to be unmounted
manually, it made sense to have the user choose which to mount.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
919925519a bin: manager: add (un)mount command
We can't just directly delegate these commands to the API endpoints
since both mounting and unmounting are done in a worker, and that one
would be killed when the parent ends. In this case that would be the CLI
process, which basically ends right after spawning the worker.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
76609915d6 pbs-api-types: add mount_status field to DataStoreListItem
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
40a2b110bf api: add check for nested datastores on creation
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
91c67298f4 api: removable datastore creation
Devices can contain multiple datastores.
If the specified path already contains a datastore, `reuse datastore` has
to be set so it will be added without creating a chunkstore.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
2b31406a37 api: admin: add (un)mount endpoint for removable datastores
Removable datastores can be mounted unless
 - they already are, or
 - their device is not present.
For unmounting, the maintenance mode is set to `unmount`,
which prohibits the starting of any new tasks involving any
IO; this mode is unset either
 - on completion of the unmount, or
 - on abort of the unmount task.
If the unmounting itself should fail, the maintenance mode stays in
place and requires manual intervention by unsetting it in the config
file directly. This is intentional, as unmounting should not fail,
and if it does, the situation should be looked at.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
46d7e573a9 datastore: add helper for checking if a datastore is mounted
... at a specific location. Also adds two additional functions to
get the mount status, and to ensure a removable datastore is mounted.

Co-authored-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Dietmar Maurer
66389b2fd9 maintenance: add 'Unmount' maintenance type
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
652b774eb0 maintenance: make is_offline more generic
... and add MaintenanceType::Delete to it. We also want to clear any
cache entries if we are deleting the datastore, not just if it is marked
as offline.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Hannes Laimer
89c650b83e pbs-api-types: add backing-device to DataStoreConfig
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-25 21:34:22 +01:00
Thomas Lamprecht
ffc8265e1f ui: login view: add missing trailing comma
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-25 21:34:22 +01:00
Thomas Lamprecht
4ef241a63b ui: update online help info reference-map
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-25 20:45:49 +01:00
Thomas Lamprecht
37440cd93a d/control: update versioned dependency for widget-toolkit
To ensure newly used components for the consent banner are available.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-25 18:51:35 +01:00
Thomas Lamprecht
d399fe50da update proxmox-rest-server dependency to 0.8.4
To ensure the adapted handlebars escaper that keeps '=' as is gets
used, required for the consent banner.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-25 18:50:28 +01:00
Gabriel Goller
ccf08921ee docs: add section about consent banner
Add short section on how to enable consent banner.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-11-25 18:48:13 +01:00
Gabriel Goller
dea876fd5e ui: show consent banner before login
Before showing the LoginView, check if we got a non-empty consent text
from the template. If there is a non-empty text, display it in a modal.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-11-25 18:48:13 +01:00
Gabriel Goller
28028b15b7 api: add consent api handler and config option
Add consent_text option to the node.cfg config. Embed the value into
index.html file using handlebars.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-11-25 17:19:32 +01:00
Shannon Sterz
d3f2e69cad ui: set min length for new passwords to 8
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
2024-11-25 15:51:47 +01:00
Shannon Sterz
fb5b6f3eab api: enforce minimum character limit of 8 on new passwords
we already have two different password schemas, `PBS_PASSWORD_SCHEMA`
being the stricter one, which ensures a minimum length of new
passwords. however, this wasn't used on the change password endpoint
before, so add it there too. this is also in-line with NIST's latest
recommendations [1].

[1]: https://pages.nist.gov/800-63-4/sp800-63b.html#passwordver

Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
2024-11-25 15:51:47 +01:00
Shannon Sterz
9f3733c5ed api: ignore password parameter in the update_user endpoint
currently if a password is provided, we check whether the user that is
going to be updated can authenticate with it. later on, the password
is then set as the same password. this means that the password here
can only be changed if it is the exact same one that is already used.

so in essence, the password cannot be changed through this endpoint
anyway. remove all of this logic here in favor of the
`PUT /access/password` endpoint.

to keep the api stable for now, just ignore the parameter and add a
description that explains what to use instead.

Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
2024-11-25 15:51:47 +01:00
Christian Ebner
6a619b2488 ui: sync job: fix source group filters based on sync direction
Fix switching the source for group filters based on the sync jobs
sync direction.

The helper to set the local namespace for the group filers was
introduced in commit 43a92c8c ("ui: group filter: allow to set
namespace for local datastore"), but never used because lost during
subsequent iterations of reworking the patch series.

The switching is corrected by:
- correctly initializing the local store and namespace for the group
  filter of sync jobs in push direction in the controller init, if a
  datastore is set.
- fixing an incorrect check for the sync direction in the remote
  datastore selector change listener.
- conditionally switching namespace to be set for the group filter in
  the remote and local namespace selector change listeners.
- conditionally switching datastore to be set for the group filter in
  the local datastore selector change listener.

Reported-by: Lukas Wagner <l.wagner@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-25 15:49:04 +01:00
Lukas Wagner
674ae4947b web ui: notification: remove matcher overrides
These were put in place so that the initial release of the new
notification system for Proxmox Backup Server could already include
improved notification matchers, which at that time had not yet been
merged into proxmox-widget-toolkit.

In the meantime, the changes have been merged and released in
proxmox-widget-toolkit 4.2.4, hence we can remove the overrides.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-11-25 15:46:47 +01:00
Lukas Wagner
b0448d0ad1 d/control: bump proxmox-widget-toolkit dependency
We need "notification: matcher: match-field: show known fields/values",
which was released in proxmox-widget-toolkit 4.2.4

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-11-25 15:46:47 +01:00
Christoph Heiss
eb126116ca docs: images: add installer guide screenshots
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
2024-11-25 15:45:15 +01:00
Christoph Heiss
5cacfe02da docs: add installation wizard guide
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
2024-11-25 15:44:27 +01:00
Christoph Heiss
d363818641 docs: add installation media preparation guide
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
2024-11-25 15:44:23 +01:00
Fabian Grünbichler
391822f9ce run cargo fmt
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-25 13:18:11 +01:00
Fabian Grünbichler
2009d8de41 api types: replace PathPatterns with Vec<PathPattern>
PathPatterns is hard to distinguish from PathPattern, so would need to be
renamed anyway.. but there isn't really a reason to define a separate API type
just for this.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-25 12:28:40 +01:00
Christian Ebner
1bb680017b fix #2996: client: allow optional match patterns for restore
When the user is only interested in a subset of the entries stored in
a file-level backup, it is convenient to be able to provide a list of
match patterns for the entries intended to be restored.

The required restore logic is already in place. Therefore, expose it
for the `proxmox-backup-client restore` command by adding the optional
array of patterns as command line argument and parse these before
passing them via the pxar restore options to the archive extractor.

Link to bugtracker issue:
https://bugzilla.proxmox.com/show_bug.cgi?id=2996

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-25 11:58:43 +01:00
Christian Ebner
70545af183 client: catalog shell: use dedicated api type for patterns
Use the common api type with schema based input validation for all
match pattern parameters exposed via the api macro.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-25 11:57:07 +01:00
Christian Ebner
33031f9835 pxar: bin: use dedicated api type for restore pattern
Instead of taking a plain string as input parameter, use the
corresponding api type performing additional input validation.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-25 11:57:07 +01:00
Christian Ebner
45b5556765 api-types: implement dedicated api type for match patterns
Introduces a dedicated api type `PathPattern` and the corresponding
format and input validation schema. Further, add a `PathPatterns`
type for collections of path patterns and implement required traits
to be able to replace currently defined api parameters.

In preparation for using this common api type for all api endpoints
exposing a match pattern parameter.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-25 11:57:07 +01:00
Christian Ebner
a6c3192233 docs: deduplicate background details for garbage collection
Currently, common details regarding garbage collection are documented
in the backup client and the maintenance task. Deduplicate this
information by moving the details to the background section of the
maintenance task and reference that section in the backup client
part.

Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-25 11:52:26 +01:00
Christian Ebner
75c695bea4 docs: add security implications of prune and change detection mode
Users should be made aware that the data stored in chunks outlives
the backup snapshots on pruning and that backups created using the
change-detection-mode set to metadata might reference chunks
containing files which have vanished since the previous backup, but
might still be accessible when access to the chunks raw data is
possible (client or server side).

Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-25 11:52:26 +01:00
Fabian Grünbichler
8e057c3874 sync config: forbid setting resync_corrupt for push jobs
they don't support it (yet), so don't allow setting it in the backend either.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-25 11:22:46 +01:00
Gabriel Goller
19818d1449 fix #3786: docs: add resync-corrupt option to sync-job
Add short section explaining the `resync-corrupt` option on the
sync-job.

Originally-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-25 10:53:26 +01:00
Gabriel Goller
590187ff53 fix #3786: ui/cli: add resync-corrupt option on sync-jobs
Add the `resync-corrupt` option to the ui and the
`proxmox-backup-manager` cli. It is listed in the `Advanced` section,
because it slows the sync-job down and is useless if no verification
job was run beforehand.

Originally-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-25 10:53:26 +01:00
Gabriel Goller
0974ddfa17 fix #3786: api: add resync-corrupt option to sync jobs
This option allows us to "fix" corrupt snapshots (and/or their chunks)
by pulling them from another remote. When traversing the remote
snapshots, we check if it exists locally, and if it is, we check if the
last verification of it failed. If the local snapshot is broken and the
`resync-corrupt` option is turned on, we pull in the remote snapshot,
overwriting the local one.

This is very useful and has been requested a lot, as there is currently
no way to "fix" corrupt chunks/snapshots even if the user has a healthy
version of it on their offsite instance.

Originally-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-25 10:53:26 +01:00
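
The decision logic, reduced to a sketch with hypothetical types (the real code works on snapshot manifests and their stored verify states):

```
#[derive(PartialEq)]
enum VerifyState {
    Ok,
    Failed,
}

/// Hypothetical view of a local snapshot: `None` means it does not exist locally.
type LocalSnapshot = Option<VerifyState>;

/// Decide whether the remote snapshot should be pulled (again).
fn should_pull(local: &LocalSnapshot, resync_corrupt: bool) -> bool {
    match local {
        None => true, // missing locally: always sync
        Some(state) => resync_corrupt && *state == VerifyState::Failed,
    }
}

fn main() {
    assert!(should_pull(&None, false));
    assert!(!should_pull(&Some(VerifyState::Ok), true));
    assert!(should_pull(&Some(VerifyState::Failed), true));
    assert!(!should_pull(&Some(VerifyState::Failed), false)); // default behavior
}
```
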
Gabriel Goller
b5be65cf8a snapshot: add helper function to retrieve verify_state
Add helper functions to retrieve the verify_state from the manifest of a
snapshot. Replace all the manual "verify_state" parsing with the helper
function.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-11-25 10:52:40 +01:00
Christian Ebner
3dc9d2de69 server: push: log encountered empty backup groups during sync
Log also empty backup groups with no snapshots encountered during the
sync so the log output contains this additional information as well,
reducing possible confusion.

Nevertheless, continue with the regular logic, so that pruning of
vanished snapshot is honored.

Example output in the sync job's task log:
```
2024-11-22T18:32:40+01:00: Syncing datastore 'datastore', root namespace into datastore 'push-target-store', namespace 'test'
2024-11-22T18:32:40+01:00: Found 2 groups to sync (out of 2 total)
2024-11-22T18:32:40+01:00: skipped: 1 snapshot(s) (2024-11-22T13:40:18Z) - older than the newest snapshot present on sync target
2024-11-22T18:32:40+01:00: Group 'vm/200' contains no snapshots to sync to remote
2024-11-22T18:32:40+01:00: Finished syncing root namespace, current progress: 1 groups, 0 snapshots
```

Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-25 10:10:37 +01:00
Christian Ebner
62228d39f2 server: push: add error context to api calls and priv checks
Add an anyhow context to errors and display the full error context
in the log output. Further, make it clear which errors stem from api
calls by explicitly mentioning this in the context message.

This also fixes incorrect error handling by placing the error context
on the api result instead of the serde deserialization error for
cases this was handled incorrectly.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>

FG: add missing format!
FG: run cargo fmt
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-22 14:08:15 +01:00
Fabian Grünbichler
83810759ee api types: extend backup archive name parsing tests
and also test the error triggered by a directory path being passed in.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-22 13:47:25 +01:00
Christian Ebner
db5bf33cfe api types: add unit tests for backup archive name parsing
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-22 13:47:25 +01:00
Christian Ebner
7ad5ad82e5 client: drop unused parse_archive_type helper
Parsing of the type based on the archive name extension is now
handled by `BackupArchiveName`.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>

FG: add removal of import

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-22 13:47:25 +01:00
Christian Ebner
6771869cc1 client/server: use dedicated api type for all archive names
Instead of using the plain String or slices of it for archive names,
use the dedicated api type and its methods to parse and check for
archive type based on archive filename extension.

Thereby, keeping the checks and mappings in the api type and
restricting function parameters to the narrower wrapper type to reduce
potential misuse.

Further, instead of declaring and using the archive name constants
throughout the codebase, use the `BackupArchiveName` helpers to
generate the archive names for manifest, client logs and encryption
keys.

This allows for easy archive name comparisons using the same
`BackupArchiveName` type, at the cost of some extra allocations and
avoids the currently present double constant declaration of
`CATALOG_NAME`.

A positive ergonomic side effect of this is that commands now also
accept the archive type extension optionally, when passing the archive
name.

E.g.
```
proxmox-backup-client restore <snapshot> <name>.pxar.didx <target>
```
is equal to
```
proxmox-backup-client restore <snapshot> <name>.pxar <target>
```

The previously default mapping of any archive name extension to a blob
has been dropped in favor of consistent mapping by the api type
helpers.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>

FG: use LazyLock for constant archive names
FG: add missing import

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-22 13:47:05 +01:00
Christian Ebner
addfae26cf api types: introduce BackupArchiveName type
Introduces a dedicated wrapper type to be used for backup archive
names instead of plain strings and associated helper methods for
archive type checks and archive name mappings.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>

FG: use LazyLock for constant archive names; reduces churn and saves
some allocations

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-22 13:46:35 +01:00
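
A heavily simplified sketch of what such a wrapper type can look like; the real `BackupArchiveName` covers more variants and validation, so treat the mappings below as illustrative (the `.conf` one matches the follow-up fix 0db4d9031b):

```
#[derive(Debug, PartialEq)]
enum ArchiveType {
    Blob,
    DynamicIndex,
    FixedIndex,
}

/// Simplified stand-in for the api type: full server archive name plus
/// the type derived from the filename extension.
struct BackupArchiveName {
    server_name: String,
    ty: ArchiveType,
}

impl BackupArchiveName {
    fn parse(name: &str) -> Result<Self, String> {
        // Accept both client-side ("x.pxar") and server-side ("x.pxar.didx") names.
        let (server_name, ty) = if name.ends_with(".pxar.didx") {
            (name.to_string(), ArchiveType::DynamicIndex)
        } else if name.ends_with(".pxar") {
            (format!("{name}.didx"), ArchiveType::DynamicIndex)
        } else if name.ends_with(".img.fidx") {
            (name.to_string(), ArchiveType::FixedIndex)
        } else if name.ends_with(".img") {
            (format!("{name}.fidx"), ArchiveType::FixedIndex)
        } else if name.ends_with(".blob") {
            (name.to_string(), ArchiveType::Blob)
        } else if name.ends_with(".conf") {
            (format!("{name}.blob"), ArchiveType::Blob)
        } else {
            return Err(format!("unknown archive name extension: {name}"));
        };
        Ok(Self { server_name, ty })
    }
}

fn main() {
    let archive = BackupArchiveName::parse("root.pxar").unwrap();
    assert_eq!(archive.server_name, "root.pxar.didx");
    assert_eq!(archive.ty, ArchiveType::DynamicIndex);
}
```
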
Christian Ebner
e932ec101e datastore: move ArchiveType to api types
Moving the `ArchiveType` to avoid crate dependencies on
`pbs-datastore`.

In preparation for introducing a dedicated `BackupArchiveName` api
type, allowing to set the corresponding archive type variant when
parsing the archive name based on its filename.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-22 11:45:43 +01:00
Fabian Grünbichler
cbf7bbefb7 pxar: extract: make invalid ACLs non-fatal
these can occur in practice, and neither setting nor getting them throws an
error. if "invalid" ACLs are non-restorable, this means that creating a pxar
archive with such an ACL is possible, but restoring it isn't.

reported in our community forum:
https://forum.proxmox.com/threads/155477

Tested-by: Gabriel Goller <g.goller@proxmox.com>
Acked-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-22 10:38:49 +01:00
Fabian Grünbichler
4e37c678dc pxar: add file name to path_info when applying metadata
else, error messages using this path_info refer to the parent directory instead
of the actual file entry causing the problem. since this is just for
informational purposes, lossy conversion is acceptable.

Acked-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-22 10:38:42 +01:00
Christian Ebner
da11d22610 fix #5710: api: backup: stat known chunks on backup finish
Known chunks are expected to be present on the datastore a priori,
allowing clients to only re-index these chunks without uploading the
raw chunk data. The list of reusable known chunks is sent to the
client by the server, deduced from the indexed chunks of the previous
backup snapshot of the group.

If however such a known chunk disappeared (the previous backup
snapshot having been verified before that or not verified just yet),
the backup will finish just fine, leading to a seemingly successful
backup. Only a subsequent verification job will detect the backup
snapshot as being corrupt.

In order to reduce the impact, stat the list of previously known
chunks when finishing the backup. If a missing chunk is detected, the
backup run itself will fail and the previous backup snapshots verify
state is set to failed.
This prevents the same snapshot from being reused by another,
subsequent backup job.

Note:
The current backup run might have been just fine, if the now missing
known chunk is not indexed. But since there is no straightforward
way to detect which known chunks have not been reused in the fast
incremental mode for fixed index backups, the backup run is
considered failed.

link to issue in bugtracker:
https://bugzilla.proxmox.com/show_bug.cgi?id=5710

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
2024-11-22 10:26:25 +01:00
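A rough sketch of the stat check described above, with hypothetical path layout and helper name (not the actual server code):

```rust
use std::io;
use std::path::Path;

// Hypothetical helper: stat each previously known chunk when finishing the
// backup; an error means a chunk vanished and the backup run must fail.
fn stat_known_chunks(chunk_dir: &Path, known_digests: &[String]) -> io::Result<()> {
    for digest in known_digests {
        // chunks are grouped into subdirectories named after the first
        // 2 bytes (4 hex digits) of the digest
        let path = chunk_dir.join(&digest[..4]).join(digest);
        std::fs::metadata(&path)?; // stat; fails if the chunk is gone
    }
    Ok(())
}

fn main() {
    let res = stat_known_chunks(Path::new("/nonexistent/.chunks"), &["0000aa".to_string()]);
    println!("{res:?}");
}
```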
Christian Ebner
6265b1103a server: push: various smaller improvements to error messages
Various smaller adaptations, such as capitalizing the start of
sentences, expanding abbreviations and shortening overly long
error messages.

To improve consistency with the rest of the error messages for the
sync job in push direction.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>

FG: use "skipping" for non-owner-groups - we haven't started uploading at that
point, there is nothing to "abort"

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-22 10:03:02 +01:00
Fabian Grünbichler
2134a2af48 push: move log messages for removed snapshot/group
so that they are logged in the success case, since the error case already has
its own log messages.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-22 09:56:26 +01:00
Christian Ebner
935368a62f server: push: consistently use remote over target for error messages
Mixing of terms only makes the errors harder to understand.

In order to make error messages more intuitive, always refer to the
sync push target as remote, mention the remote explicitly and/or
improve messages where needed.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-22 09:51:17 +01:00
Christian Ebner
ffd52fbeeb server: push: fix needless borrow clippy warning
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-22 09:51:17 +01:00
Gabriel Goller
cc6fc6a540 fix #5801: backup_manager: make api call on datastore update
When updating the datastore config using `proxmox-backup-manager` we
need to make an api-call, because the api-route starts a tokio task to
update the proxy-cache and the client will kill the task if we don't
wait. With an api-call the tokio task will be executed on the api
process and runs in the background while the endpoint handler has
already returned.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-11-22 09:15:58 +01:00
Gabriel Goller
41b97b2454 fix: allow datastore creation in directory with lost+found directory
When creating a datastore without the "reuse-datastore" option and the
datastore contains a `lost+found` directory (which is quite common), the
creation fails. Add `lost+found` to the ignore list.

Reported here: https://forum.proxmox.com/threads/bug-when-adding-new-storage-task-error-datastore-path-is-not-empty.157629/#post-721733

Fixes: 6e101ff757 ("fix #5439: allow to reuse existing datastore")
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>

FG: slight code style change
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-22 09:06:53 +01:00
Hannes Laimer
cd933e9d69 chunk_store: fix problem with permission checking
Permissions are stored in the lower 9 bits (rwxrwxrwx),
so we have to mask `st_mode` with 0o777.
The datastore root dir is created with 755, the `.chunks` dir and its
contents with 750 and the `.lock` file with 644, this changes the
expected permissions accordingly.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Fixes: 6e101ff757 ("fix #5439: allow to reuse existing datastore")
Reviewed-By: Gabriel Goller <g.goller@proxmox.com>
2024-11-22 08:49:32 +01:00
Christian Ebner
ec4ffa924a docs: client: fix formatting by using double ticks
With single ticks, the contained modes and archive formats are
displayed in italics; to be consistent with other sections of the
documentation, use inline code blocks instead.

Adapted line wrapping to the increased line length.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
2024-11-21 17:02:28 +01:00
Christian Ebner
98ac310845 docs: reference technical change detection mode section for client
Currently, the change detection modes are described in the client
usage section, which is not intended as an in-depth explanation of how
these client options work, but rather focuses on how to use them.
Therefore, add a reference to the more detailed technical section
regarding the change detection modes and reduce duplicate
explanations.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
2024-11-21 17:02:28 +01:00
Christian Ebner
1964cbdaad docs: explain the working principle of the change detection modes
Describe in more details how the different change detection modes
operate and give insights into the inner workings, especially for the
more complex `metadata` mode, which involves lookahead caching and
padding calculation for reused payload chunks.

Suggested-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
2024-11-21 17:02:28 +01:00
Stoiko Ivanov
e70c389918 docs: fix wrong product name in certificate docs
this got reported via e-mail - seems this one occurrence was
forgotten. grepped through the docs (and the whole repo) for 'Mail'
and 'Gateway', and it seems this was the only one.

Fixes: cbd7db1d ("docs: certificates")
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2024-11-21 16:59:22 +01:00
Christian Ebner
e3f2756cbb fix #5853: client: pxar: exclude stale files on metadata/link read
Skip and warn the user for files which returned a stale file handle
error while reading the metadata associated to that file, or the
target in case of a symbolic link.

Instead of returning with a hard error, report the stale file handle
and skip over encoding this file entry in the pxar archive.

Link to issue in bugtracker:
https://bugzilla.proxmox.com/show_bug.cgi?id=5853

Link to thread in community forum:
https://forum.proxmox.com/threads/156822/

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 13:21:02 +01:00
Christian Ebner
efb49d8abe client: pxar: warn user and ignore stale file handles on file open
Do not fail hard if a file open fails because of a stale file handle.
Warn the user and ignore the file, just like the client already does
in case of missing privileges to access the file.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 13:21:02 +01:00
Christian Ebner
102ab18146 client: pxar: skip directory entries on stale file handle
Skip over the entries when a stale file handle is encountered during
generation of the entry list of a directory entry.

This leads to the directory either not being backed up at all, if the
directory itself was invalidated (reading all child entries will then
fail as well), or being backed up without the entries which have been
invalidated.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 13:21:02 +01:00
Christian Ebner
1b9df4ba4f client: pxar: skip directories on stale file handle
Skip over the whole directory in case the file handle was invalidated
and therefore the filesystem type check returns with ESTALE.

Encode the directory start entry in the archive and the catalog only
after the filesystem type check, so the directory can be fully skipped.
At this point it is still possible to ignore the invalidated
directory. If the directory is invalidated afterwards, it will be
backed up only partially.

Introduce a helper method to report entries for which a stale file
handle was encountered, providing an optional path for cases where
the `Archiver`'s state does not store the correct path.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 13:21:02 +01:00
Christian Ebner
1be78aad72 client: pxar: refactor report vanished/changed helpers
Switch from mutable reference to shared reference on `self` and drop
unused return value.

These helpers only write log messages, there is currently no need for
a mutable reference to `self`, nor to return a `Result`.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 13:21:02 +01:00
Fabian Grünbichler
adbf59dd17 bump version to 3.2.11-1
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-21 12:47:11 +01:00
Fabian Grünbichler
5990728ec9 push: check that source namespace exists
else, combined with remove_vanished everything on the target side would be
removed.

Suggested-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-21 12:47:11 +01:00
Fabian Grünbichler
0679c25ebb manager: push: add more completions
the group filters need adaptations both for pushing and local pulling, so left
those out for now.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-21 12:01:04 +01:00
Fabian Grünbichler
c02d3a8717 push: keep track of created namespaces
to avoid attempting to create them multiple times in case a whole hierarchy is
missing, and misleadingly logging that they were created multiple times as
well.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-21 11:53:49 +01:00
Fabian Grünbichler
e0ddb88cb7 push: improve error messages
the error message for failure to sync the whole namespace was too long, so
split it into two lines and make it a warning.

the namespace creation one lacked context (that the error was caused by the
remote side or the connection) and had too much (the datastore, which is
already logged very often) at the same time.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-21 11:53:49 +01:00
Fabian Grünbichler
a5350595fc version: remove named features
and use version comparison for the push code that previously used it.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-21 11:53:49 +01:00
Fabian Grünbichler
a3a0c7dbe7 sync: add/adapt access check comments
add a bit more detail for the pull side, and reword some comments on the push
side to make them easier to read.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-21 11:53:49 +01:00
Fabian Grünbichler
a304ed7c01 push: treat all missing referenced files as fatal
`try_exists` will return Ok(false) if the path is or contains a dangling
symlink; treat that as a hard error, just as if `try_exists` had returned an
Err(..).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-21 11:53:49 +01:00
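A sketch of the failure mode and the stricter handling, using only std (helper name hypothetical):

```rust
use std::io;
use std::path::Path;

// try_exists() yields Ok(false) for a dangling symlink, so treating only
// Err(..) as fatal would silently skip such referenced files.
fn require_present(path: &Path) -> io::Result<()> {
    match path.try_exists() {
        Ok(true) => Ok(()),
        // dangling symlink or truly missing: both are fatal here
        Ok(false) => Err(io::Error::new(
            io::ErrorKind::NotFound,
            format!("missing referenced file: {:?}", path),
        )),
        Err(err) => Err(err),
    }
}

fn main() {
    println!("{:?}", require_present(Path::new("/definitely/not/there")));
}
```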
Fabian Grünbichler
2492083e37 push: reduce initial capacity of known chunks
one million chunks is a bit much: considering that chunks represent
1-2MB (dynamic) to 4MB (fixed) of input data, that would mean 1-4TB of re-used
input data in a single snapshot.

64k chunks are still representing 64-256GB of input data, which should be
plenty (and for such big snapshots with lots of re-used chunks, growing the
allocation of the HashSet should not be the bottleneck), and is also the
default capacity used for pulling.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-21 11:53:49 +01:00
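For illustration, the capacity choice as a minimal snippet (type of the digest assumed to be a 32-byte array):

```rust
use std::collections::HashSet;

fn main() {
    // 64k pre-allocated digests instead of one million; the set still grows
    // on demand for snapshots that reuse even more chunks
    let known_chunks: HashSet<[u8; 32]> = HashSet::with_capacity(64 * 1024);
    assert!(known_chunks.capacity() >= 64 * 1024);
}
```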
Fabian Grünbichler
0bcbb1badd push: reduce calls to list_snapshots on target side
instead of calling this three times, call it once:

retrieving the highest backup timestamp doesn't need its own request, it can
re-use the "main" result, the corresponding helper can thus be dropped.

remove_vanished can re-use the earlier result - if anybody prunes the backup
group or adds new snapshots while the sync is running, the whole group sync is
racy and might cause spurious errors anyway.

since re-syncing the last already existing snapshot is not possible at the
moment, the code can also be simplified by treating such a snapshot as
already fully synced.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-21 11:53:49 +01:00
Fabian Grünbichler
56ab13f0e2 push: fix remove_vanished namespaces logic
a vanished namespace is one that
- exists on the target side, below the target prefix
- but within the specified max_depth
- and was not part of the synced namespaces

Co-developed-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-21 11:53:16 +01:00
Fabian Grünbichler
27b8321f2a push: rename namespace parameters/variables
two parameters that differ only by a letter are not very nice for quickly
understanding semantics.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-21 11:01:25 +01:00
Fabian Grünbichler
2031aa4bec push: code style cleanup
BackupGroup is serializable as its API parameter components, like BackupDir.
move the (always present) namespace closer to the group to improve readability.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-21 11:01:25 +01:00
Fabian Grünbichler
70acf0f1df push: remove namespace: improve missing Modify priv error
to make it easier to distinguish from missing "Prune" privs when removing
vanished groups.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-21 11:01:25 +01:00
Fabian Grünbichler
90900fd017 push: factor out remote api path helper
to make the complex logic code shorter and easier to parse. no semantic changes
intended.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-21 11:01:25 +01:00
Fabian Grünbichler
89ef8bf502 push: code style cleanup
no semantic changes intended

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-21 11:01:25 +01:00
Fabian Grünbichler
162ff15378 push: add comment for version guard
explaining why that particular version is used as lower bound.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-21 11:01:25 +01:00
Fabian Grünbichler
aec0ef6260 push: clippy fixes
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-21 11:01:25 +01:00
Christian Ebner
44999809b0 docs: add section for sync jobs in push direction
Documents the caveats of sync jobs in push direction, explicitly
recommending setting up dedicated remotes for these sync jobs.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 11:01:25 +01:00
Christian Ebner
00f441eb93 api: version: add 'prune-delete-stats' as supported feature
Expose the 'prune-delete-stats' as supported feature, in order for
the sync job in pull direction to pass the optional
`error-on-protected=false` flag to the api calls when pruning backup
snapshots, groups or namespaces.
2024-11-21 11:01:25 +01:00
Christian Ebner
4e50ef5193 api: datastore/namespace: return backup groups delete stats on remove
Add and optionally expose the backup group delete statistics by adding the
return type to the corresponding REST API endpoints.

Clients can opt-into the new behaviour by setting the new `error-on-protected`
flag to `false` when calling the api endpoints, which results in removal not
erroring out when encountering protected snapshots.

The default value for the flag remains `true` for now, to remain backwards
compatible with older clients expecting this behaviour.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: reworded commit message slightly
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-21 10:57:09 +01:00
Christian Ebner
5462d9d44d ui: sync view: set proxy on view instead of model
In order to load data using the same model from different sources,
set the proxy on the store instead of the model.
This allows using the view to display sync jobs in either pull or
push direction, by setting the `sync-direction` on the view.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
9aa213b88e ui: sync job: adapt edit window to be used for pull and push
Switch the subject and labels to be shown based on the direction of
the sync job, and set the `sync-direction` parameter from the
submit values in case of push direction.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
262395abaf ui: add view with separate grids for pull and push sync jobs
Show sync jobs in pull and in push direction in two separate grids,
visually separating them to limit possible misconfiguration.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
0b965ec115 ui: sync edit: source group filters based on sync direction
Switch to the local datastore, used as sync source for jobs in push
direction, to get the available group filter options.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
43a92c8c1b ui: group filter: allow to set namespace for local datastore
The namespace has to be set in order to get the correct groups to be
used as group filter options with a local datastore as source,
required for sync jobs in push direction.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
964162ce35 bin: manager: add datastore push cli command
Expose the push api endpoint to be callable via the command line
interface.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
4a1fa30a6f api: admin: avoid duplicate name for list sync jobs api method
`list_sync_jobs` exists as api method in `api2::admin::sync` and
`api2::config::sync`.

Rename the admin api endpoint method to `list_config_sync_jobs` in
order to reduce possible confusion when searching/reviewing.

No functional change intended.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
397e9c9991 api: sync jobs: expose optional sync-direction parameter
Expose and switch the config type for sync job operations based
on the `sync-direction` parameter, exposed on the required api endpoints.

If not set, the default config type is `sync` and the default sync
direction is `pull` for full backwards compatibility. Whenever
possible, determine the sync direction and config type from the sync
job config directly rather than requiring it as optional api
parameter.

Further, extend read and modify access checks by sync direction to
conditionally check for the required permissions in pull and push
direction.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
c9078b189c api: config: factor out sync job owner check
Move the sync job owner check to its own helper function, for it to
be reused for the owner check for sync jobs in push direction.

No functional change intended.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
5876a963b8 api: config: Require PRIV_DATASTORE_AUDIT to modify sync job
Read access to sync jobs is not granted to users not having at least
PRIV_DATASTORE_AUDIT permissions on the datastore. However a user is
able to create or modify such jobs, without having the audit
permission.

Therefore, further restrict the modify check by also including the
audit permissions.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
46951c103b api: sync: move sync job invocation to server sync module
Moves and refactors the sync_job_do function into the common server
sync module so that it can be reused for both sync directions, pull
and push.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
e898887f54 api: push: implement endpoint for sync in push direction
Expose the sync job in push direction via a dedicated API endpoint,
analogous to the pull direction.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
bcd80bf976 api types/config: add sync-push config type for push sync jobs
In order for sync jobs to be either pull or push jobs, allow to
configure the direction of the job.

Adds an additional config type `sync-push` to the sync job config, to
clearly distinguish sync jobs configured in pull and in push
direction and defines and implements the required `SyncDirection` api
type.

This approach was chosen in order to limit possible misconfiguration,
as unintentionally switching the sync direction could potentially
delete still required snapshots.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
33737196b1 fix #3044: server: implement push support for sync operations
Adds the functionality required to push datastore contents from a
source to a remote target.
This includes syncing of the namespaces, backup groups and snapshots
based on the provided filters as well as removing vanished contents
from the target when requested.

While trying to mimic the pull direction of sync jobs, the
implementation is different as access to the remote must be performed
via the REST API, not needed for the pull job which can access the
local datastore via the filesystem directly.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
a926803b92 api/api-types: refactor api endpoint version, add api types
Add a dedicated api type for the `version` api endpoint and helper
methods for supported feature comparison.
This will be used to detect api incompatibility of older hosts, not
supporting some features.

Use the new api type to refactor the version endpoint and set it as
return type.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
0be5b147d5 datastore: increment deleted group counter when removing group
To also correctly account for the number of deleted backup groups, in
preparation for correctly returning the delete statistics when removing
contents via the REST API.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
f982c915f5 api types: implement api type for BackupGroupDeleteStats
Make the `BackupGroupDeleteStats` exposable via the API by implementing
the ApiTypes trait via the api macro invocation and add an additional
field to account for the number of deleted groups.
Further, add a method to add up the statistics.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
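A stand-in sketch of such a stats type with an add-up method (field names partly hypothetical; `protected_snapshots` follows the rename mentioned in the related commit below):

```rust
// Simplified stand-in for the delete statistics api type.
#[derive(Default, Debug)]
struct BackupGroupDeleteStats {
    removed_groups: usize,
    removed_snapshots: usize,
    protected_snapshots: usize,
}

impl BackupGroupDeleteStats {
    // add up the statistics from another deletion run
    fn add(&mut self, other: &Self) {
        self.removed_groups += other.removed_groups;
        self.removed_snapshots += other.removed_snapshots;
        self.protected_snapshots += other.protected_snapshots;
    }
}

fn main() {
    let mut total = BackupGroupDeleteStats::default();
    let run = BackupGroupDeleteStats {
        removed_groups: 1,
        removed_snapshots: 5,
        protected_snapshots: 2,
    };
    total.add(&run);
    println!("{total:?}");
}
```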
Christian Ebner
d1e5e4533c datastore: move BackupGroupDeleteStats to api types
In preparation for the delete stats to be exposed as return type to
the backup group delete api endpoint.

Also, rename the private field `unremoved_protected` to a better
fitting `protected_snapshots` to be in line with the method names.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
db4f1f64b6 api types: define remote permissions and roles for push sync
Adding the privileges to allow backup, namespace creation and prune
on remote targets, to be used for sync jobs in push direction.

Also adds dedicated roles setting the required privileges.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
da0fd4267a api types: implement remote acl path method for sync job
Add `remote_acl_path` method which generates the acl path from the sync
job configuration. This helper allows easily generating the acl path
from a given sync job config for privilege checks.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
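A simplified sketch of such a helper (config fields and path format hypothetical; the real helper also appends namespace components, as described in the next commit):

```rust
// Stand-in for the sync job configuration.
struct SyncJobConfig {
    remote: String,
    remote_store: String,
}

impl SyncJobConfig {
    // derive the remote ACL path from the configured remote and store
    fn remote_acl_path(&self) -> String {
        format!("/remote/{}/{}", self.remote, self.remote_store)
    }
}

fn main() {
    let job = SyncJobConfig {
        remote: "pbs2".into(),
        remote_store: "store1".into(),
    };
    assert_eq!(job.remote_acl_path(), "/remote/pbs2/store1");
    println!("{}", job.remote_acl_path());
}
```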
Christian Ebner
ae56a50b9d api types: add remote acl path method for BackupNamespace
Add a `remote_acl_path` helper method for creating acl paths for
remote namespaces, to be used by the priv checks on remote datastore
namespaces for e.g. the sync job in push direction.

Factor out the common path extension into a dedicated method.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
aa273905d7 config: acl: allow namespace components for remote datastores
Extend the component limit for ACL paths of `remote` to include
possible namespace components.

This allows limiting the permissions for sync jobs in push direction
to a namespace subset on the remote datastore.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
732d9d7a5f config: acl: refactor acl path component check for datastore
Combine the two if statements checking the datastore's ACL path
components, which can be represented more concisely as one.

Further, extend the pre-existing comment to clarify that `datastore`
ACL paths are not limited to the datastore name, but might have
further sub-components specifying the namespace.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
a5e3032d36 client: backup writer: allow push uploading index and chunks
Add a method `upload_index_chunk_info` to be used for uploading an
existing index and the corresponding chunk stream.
Instead of taking an input stream of raw bytes as the
`upload_stream`, this takes a stream of `MergedChunkInfo` objects
provided by the local chunk reader of the sync job's source.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
008b38bfc7 client: backup writer: factor out merged chunk stream upload
In preparation for implementing push support for sync jobs.

Factor out the upload stream for merged chunks, which can be reused
to upload the local chunks to a remote target datastore during a
snapshot sync operation in push direction.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
9fbe870d1c client: backup writer: refactor backup and upload stats counters
In preparation for push support in sync jobs.

Extend and move `BackupStats` into `backup_stats` submodule and add
method to create them from `UploadStats`.

Further, introduce `UploadCounters` struct to hold the Arc clones of
the chunk upload statistics counters, simplifying the house keeping.

By bundling the counters into the struct, they can be passed as
single function parameter when factoring out the common stream future
in the subsequent implementation of the chunk upload for sync jobs
in push direction.

Co-developed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
c6648d59c6 sync: extend sync source's list namespaces method by filter callback
Allow filtering namespaces by a given callback function. This will be
used to pre-filter the list of namespaces to push to a remote target
for sync jobs in push direction, based on the privs of the sync jobs
local user on the source datastore.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:53 +01:00
Christian Ebner
19a621ab98 sync: pull: optimize backup group sorting
`BackupGroup` implements `cmp::Ord`, so use that implementation for
comparing groups during sorting. Further, only sort the list of
backup groups after filtering, thereby possibly reducing the number
of required comparisons.

No functional changes.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-21 10:14:10 +01:00
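A toy illustration of the filter-then-sort order with a stand-in type (not the actual `BackupGroup`):

```rust
// With Ord derived, sort() works directly, and filtering before sorting
// keeps the number of comparisons down.
#[derive(PartialEq, Eq, PartialOrd, Ord, Debug)]
struct Group(String);

fn main() {
    let mut groups = vec![
        Group("vm/110".into()),
        Group("ct/100".into()),
        Group("vm/100".into()),
    ];
    groups.retain(|g| g.0.starts_with("vm/")); // filter first...
    groups.sort(); // ...then sort the (possibly much smaller) remainder
    println!("{groups:?}");
}
```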
Thomas Lamprecht
72fe4cdb79 bump version to 3.2.10-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-19 22:36:44 +01:00
Thomas Lamprecht
73b18b279d cargo: require proxmox-log 0.2.6
To ensure the fix for avoiding printing verbose log levels to stderr,
stdout is included, as that spams the log with the full worker log
tasks.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-19 11:40:06 +01:00
Thomas Lamprecht
4983a3c0ba api: disk list: do not fail but just log error on gathering smart data
I plugged in a USB pen drive and the whole disk list UI became
completely unusable because smartctl fails to handle that device due
to some `Unknown USB bridge [0x090c:0x1000 (0x1100)]` error.

That itself might be improvable, but most often I do not care at all
about smart data, and certainly not enough to let a failure in
gathering it keep me from viewing my disks (or the smart data of the
disks where it could still be gathered, for that matter!)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-17 20:33:22 +01:00
Hannes Laimer
936ec6b69e disks: add UUID to partition info
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-17 20:28:09 +01:00
Dietmar Maurer
01bbaef7fa config: factor out method to get the absolute datastore path
removable datastores will have a PBS-managed mountpoint as path, direct
access to the field needs to be replaced with a helper that can account
for this.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-17 19:57:33 +01:00
Hannes Laimer
9ab2e4e710 tools: add disks utility functions
... for mounting and unmounting

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-17 19:57:33 +01:00
Thomas Lamprecht
79db26d316 bump version to 3.2.9-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-14 16:40:46 +01:00
Dominik Csapak
2febc83cc0 fix #5233: don't require root for some tape operations
instead, require 'Tape.Write' or 'Tape.Modify' on '/tape' path.
This makes it possible for a TapeOperator to destroy tapes and for a
TapeAdmin to update the tape status, instead of just root@pam.

I opted for the path '/tape' since we don't have a dedicated acl
structure for single tapes, just '/tape/pool' (which does not apply
since not all tapes have to have a pool), '/tape/device' (which is
intended for drives/changers) and '/tape/jobs' (which is for jobs only).

Also we use that path for e.g. move_tape already.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-14 15:17:31 +01:00
Thomas Lamprecht
b3675d867f fix #5868: workspace: require rest-server >= 0.8.2
To ensure the recent fixes for the "infinite loop on early connection
abort when trying to detect the TLS handshake" problem are included.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-14 15:15:38 +01:00
Christian Ebner
76504bfcac client: pxar: add debug output for exclude pattern matches
Log the path of directory entries matched by an exclude pattern in
order to more conveniently debug possible issues.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-12 21:23:42 +01:00
Christian Ebner
7465ccd097 client: pxar: perform match pattern check only once
While traversing the filesystem tree, `generate_directory_file_list`
generates the list of entries to include for each directory level,
already matching the entry against the given list of match patterns.

Since this already excludes entries which should not be included in
the archive, the same check in the `add_entry` call is redundant,
as it is executed for each entry which is included in the list
generated by `generate_directory_file_list`.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-12 21:23:39 +01:00
Daniel Kral
974b4527e2 fix #5600: pbs2to3: allow arbitrary newer '-pve' kernels after upgrade
Fixes a bug where `pbs2to3` shows an incorrect warning about an
unexpected running kernel version, where kernel versions newer than 6.5
were marked as unexpected (e.g. "6.8.12-1-pve").

This commit allows arbitrary newer kernel versions that are suffixed
with '-pve' from kernel version 6.2 onward. This is the same behavior as
in other upgrade helpers like `pve7to8` [1] and `pmg7to8` [2].

[1] https://git.proxmox.com/?p=pve-manager.git;a=commit;h=fb59038a8b110b0b0b438ec035fd41dd9d591232
[2] https://git.proxmox.com/?p=pmg-api.git;a=commit;h=9d67a9af218b73027822c9c4665b88e6662e7ef7

Signed-off-by: Daniel Kral <d.kral@proxmox.com>
2024-11-12 21:19:22 +01:00
Daniel Kral
65574209ad pbs2to3: add test for kernel version compatibility
Factors the kernel version compatibility check into its own method and
adds test cases for a set of expected and unexpected kernel versions.

Signed-off-by: Daniel Kral <d.kral@proxmox.com>
2024-11-12 21:19:22 +01:00
Gabriel Goller
1a0229b881 api: parallelize smartctl checks
To improve the performance of the smartctl checks, especially when a lot
of disks are used, parallelize the checks using the `ParallelHandler`.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-11-12 21:17:24 +01:00
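A generic sketch of the fan-out using `std::thread::scope`; the actual commit uses the PBS `ParallelHandler` instead, so this only illustrates the idea:

```rust
use std::thread;

fn main() {
    let disks = ["sda", "sdb", "sdc"];
    thread::scope(|scope| {
        for disk in disks {
            scope.spawn(move || {
                // placeholder for the actual per-disk `smartctl` invocation
                println!("querying smart data for /dev/{disk}");
            });
        }
    }); // scope joins all spawned threads before returning
}
```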
Gabriel Goller
b3d9b6d5f1 api: avoid retrieving lsblk result twice
Avoid running `lsblk` twice when executing the `list_disk`
endpoint/command. This and the various other small nits improve the
performance of the endpoint.

Does not really fix, but is related to: #4961.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-11-12 21:17:24 +01:00
Christian Ebner
da82aca849 client: catalog shell: avoid navigating below archive root
Avoid underflowing the catalog shell's position stack by navigating
below the archive's root directory into the catalog root. Otherwise
the shell will panic, as the root entry is always expected to be
present.

This treats the archive root directory as its own parent directory,
mimicking the behaviour of most common shells.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-12 21:08:27 +01:00
Hannes Laimer
7f193b88ed api: tape: add permission to move_tape endpoint
... so it is usable by non-root users, this came up in support.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-11-12 13:55:15 +01:00
Thomas Lamprecht
720bf707e8 update proxmox-notify crate to 0.5
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-11 23:55:25 +01:00
Gabriel Goller
f2ea424cc1 web: disallow datastore in root, add reuse-datastore flag
Disallow creating a datastore in root on the frontend side by
filtering the '/' path. Add the reuse-datastore flag to permit opening
existing datastores.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-11-11 23:51:06 +01:00
Gabriel Goller
6e101ff757 fix #5439: allow to reuse existing datastore
Disallow creating datastores in non-empty directories. Allow adding
existing datastores via a 'reuse-datastore' checkmark. This only checks
if all the necessary directories (.chunks + subdirectories and .lock)
exist and have the correct permissions. Note that the reuse-datastore
path does not open the datastore, so that we don't drop the
ProcessLocker of an existing datastore.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-11-11 23:51:06 +01:00
Gabriel Goller
27811f3f8f fix #5861: remove min username length in ChangeOwner modal
We allow usernames shorter than 4 characters since this patch [0] in
pbs.

[0]: https://lore.proxmox.com/pbs-devel/20240117142918.264978-1-g.goller@proxmox.com/

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-11-11 11:08:15 +01:00
Wolfgang Bumiller
6391a45b43 bump rest-server to 0.8.1
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-11-08 12:07:00 +01:00
Christoph Heiss
667797ce2e d/control: bump proxmox-subscription to 0.5
Seems this was forgotten while bumping it in Cargo.toml in dcd863e0.

Fixes: dcd863e0 ("bump proxmox-subscription to 0.5.0")
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
2024-11-08 10:58:48 +01:00
Dietmar Maurer
dcd863e0c9 bump proxmox-subscription to 0.5.0
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2024-11-07 14:09:15 +01:00
Fabian Grünbichler
faa08f6564 sync: pull: reword last_sync_time resync comment
make it a bit easier to parse and include some examples of what the resync
might be able to pick up.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-11-04 15:13:37 +01:00
Christian Ebner
868ca01a7a sync: pull: simplify logic for source snapshot filtering
Decouple the actual filter logic from the skip reason output logic by
pulling the latter out of the filter closure.

Makes the filtering logic more intuitive.

Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-04 14:45:43 +01:00
Christian Ebner
b752b8cb96 sync: pull: mention why last snapshot of previous sync is resynced
The last snapshot synced during the previous sync job might not have
been fully completed just yet (e.g. backup log still missing,
verification still ongoing, ...).
Explicitly mention the reason, and that the resync is therefore
intentional, in a comment in the filter logic.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-04 14:45:43 +01:00
Christian Ebner
1e36930e0b sync: fix premature return in snapshot skip filter logic
While checking which snapshots to sync, the filter logic incorrectly
included the first snapshot newer than the last synced one
unconditionally, bypassing the transfer last check for that one
snapshot. Following snapshots are correctly handled again.

E.g. of an incorrect sync by excerpt of a task log provided by a user
in the community forum [0], with transfer last set to 1:

```
skipped: 2 snapshot(s) (2024-09-29T18:00:28Z .. 2024-10-20T18:00:29Z) - older than the newest local snapshot
skipped: 5 snapshot(s) (2024-10-28T19:00:28Z .. 2024-11-01T19:00:32Z) - due to transfer-last
sync snapshot vm/110/2024-10-27T19:00:25Z
...
sync snapshot vm/110/2024-11-02T19:00:23Z
```

Not only the last snapshot, but also the first one newer than the
newest local snapshot was incorrectly synced.

By dropping the early return, leading to incorrect inclusion of the
snapshot, the transfer last condition is now correctly checked as
well.

Link to the issue reported in the community forum:
[0] https://forum.proxmox.com/threads/156873/

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-11-04 13:03:15 +01:00
Christian Ebner
59243d200e client: catalog shell: drop payload offset in stat output
Drop the payload offset output for the multi line formatting helper,
as the formatting was skewed anyways and the `stat` output is not
intended for debugging.

Commit 51e8fa96 ("client: pxar: include payload offset in entry
listing") introduced the payload offset output for pxar entries
in case of split archives for both the single line and multi line
formatting helpers, for debugging purposes.

While the payload offset output is fine for the single line entry
formatting (generates the pxar dump output in debugging mode),
it should not be included in the multi line entry formatting helper,
used to generate the output for the `stat` command of the catalog
shell.

Fixes: 51e8fa96 ("client: pxar: include payload offset in entry listing")

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-10-25 14:22:32 +02:00
Fabian Grünbichler
dd16eabe19 pxar: tools: inline async recursion
this works since rustc 1.77, and makes the code less verbose.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-10-23 16:10:41 +02:00
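A toy example of the boxed self-call pattern this refers to (the `futures` crate's `block_on` is assumed here just to drive the future):

```rust
// Since rustc 1.77 an async fn may recurse directly, as long as the
// recursive call is boxed at the call site; no async-recursion macro needed.
async fn visit(depth: u32) {
    if depth > 0 {
        println!("depth {depth}");
        Box::pin(visit(depth - 1)).await;
    }
}

fn main() {
    // minimal executor just for the example
    futures::executor::block_on(visit(3));
}
```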
Christian Ebner
5ddd59e167 client: catalog shell: fallback to accessor for navigation
Make the catalog optional and use the pxar accessor for navigation if
the catalog is not provided.
This allows using the metadata archive for navigation, as no
dedicated catalog is encoded for split pxar archives.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-10-23 16:10:41 +02:00
Christian Ebner
78e4098eae client: helper to mimic catalog find using metadata archive
Adds helper functions to reimplement the catalog shell functionality
for snapshots being encoded as split pxar archives.

Just as the `CatalogReader`'s find method, recursively iterate entries
and call the given callback on all entries matched by the match
patterns, starting from the given parent entry.

The helper has been split into 2 functions for the async recursion to
work.
2024-10-23 16:10:41 +02:00
Christian Ebner
8b9bae1ef1 client: catalog: fallback to metadata archives for catalog dump
Commit c0302805c "client: backup: conditionally write catalog for
file level backups" drops encoding of the dedicated catalog when
archives are encoded as split metadata/data archives with the
`change-detection-mode` set to `data` or `metadata`.

Since the catalog is not present anymore, fallback to use the pxar
metadata archives in the manifest (if present) for generating the
listing of contents in a compatible manner.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-10-23 16:10:41 +02:00
Christian Ebner
d530ba080d client: add helper to dump catalog from metadata archive
Implements the methods to dump the contents of a metadata pxar
archive using the same output format as used by the catalog dump.

The helper function has been split into 2 for async recursion to
work.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-10-23 16:10:41 +02:00
Christian Ebner
0a32544585 client: tools: factor out pxar entry to dir entry mapping
Perform the conversion from pxar file entries to catalog entry
attributes by implementing `TryFrom<&FileEntry<T>>` for
`DirEntryAttribute` and use that.

Allows the reuse for the catalog shell, when using the split pxar
archive instead of the catalog.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-10-23 16:10:41 +02:00
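A sketch of the `TryFrom`-based mapping with stand-in types (the real impl covers `FileEntry<T>` and the actual catalog attributes; variants here are illustrative only):

```rust
// Stand-in for the pxar entry kind.
enum EntryKind {
    File { size: u64 },
    Directory,
    Socket,
}

// Stand-in for the catalog entry attribute.
#[derive(Debug)]
enum DirEntryAttribute {
    File { size: u64 },
    Directory,
}

impl TryFrom<&EntryKind> for DirEntryAttribute {
    type Error = String;

    fn try_from(kind: &EntryKind) -> Result<Self, Self::Error> {
        match kind {
            EntryKind::File { size } => Ok(DirEntryAttribute::File { size: *size }),
            EntryKind::Directory => Ok(DirEntryAttribute::Directory),
            // hypothetical non-representable kind, to show the fallible path
            EntryKind::Socket => Err("entry kind not represented here".into()),
        }
    }
}

fn main() {
    println!("{:?}", DirEntryAttribute::try_from(&EntryKind::File { size: 42 }));
}
```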
Christian Ebner
50d20e9b64 client: tools: factor out entry path prefix helper
Move the logic to generate `FileEntry` paths with a given prefix to
its own helper function for it to be reusable for the catalog shell
implementation of split pxar archives.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-10-23 16:10:41 +02:00
Christian Ebner
3e6318a535 client: make helper to get remote pxar reader reusable
Move the `get_remote_pxar_reader` helper function so it can be reused
also for getting the metadata archive reader instance for the catalog
dump.

No functional changes.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-10-23 16:10:41 +02:00
Christian Ebner
3a6755363b client: tools: move pxar root entry helper to pxar submodule
Move the `handle_root_with_optional_format_version_prelude` helper,
purely related to handling the root entry for pxar format version 2
archives, to the more fitting pxar tools submodule.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-10-23 16:10:41 +02:00
Christian Ebner
113c04bc60 client: pxar: move catalog lookup helper to pxar tools
The lookup helper used to generate catalog entries via the metadata
archive for split archive backups is pxar specific, therefore move it
to the appropriate pxar tools submodule.
2024-10-23 16:10:41 +02:00
Christian Ebner
84c066297c client: tools: make tools module public
Change namespace visibility for tools submodule to be accessible from
other crates, to be used for common pxar related helpers.

Switch helpers declared as `pub` to `pub(crate)` in order to keep module
encapsulation, adapt namespace for functions required to be `pub`.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-10-23 16:10:41 +02:00
Thomas Lamprecht
2d4209d9ef api: add missing doc-comment description for api enums
this is used as description in the api schema

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-10-22 15:26:35 +02:00
Thomas Lamprecht
32dad63696 file-restore: add missing doc-comment description for api enums
this is used as description in the api schema

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-10-22 15:26:11 +02:00
Thomas Lamprecht
c606fdaa88 api-types: add missing doc-comment description for api enums
this is used as description in the api schema

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-10-22 15:25:52 +02:00
Thomas Lamprecht
6c44f3e584 bump version to 3.2.8-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-10-21 17:18:18 +02:00
Christian Ebner
d2b6c75fa1 docs: sync: explicitly mention remove-vanished flag
Add a short sentence describing the function of the remove-vanished
flag, since this has not been documented explicitly.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-10-21 16:38:24 +02:00
Thomas Lamprecht
1836c135bc docs: prefix node.cfg man page with proxmox-backup
As node.cfg is a rather general name that could clash with manual
pages from other packages, or at least be a bit confusing if there's
another tool providing a node.cfg.

In the long term we should rename all existing manual pages from
section 5 and 7, i.e. all those that are not directly named after an
executable. As those normally talk about product-specific configs and
topics where just the filename is not specific enough for a system
wide manual page.

Note that there was some off-list discussion with proposal of using
"section suffixes" that man supports and can be used to differ between
manual pages with the same name (and in the same section), for example
`man 3pm Git`, but to me this seems a bit more obscure and potentially
less discoverable, but can be a great way to provide an link alias for
convenience.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-10-21 14:19:51 +02:00
Thomas Lamprecht
082c37b5a6 buildsys: install node.cfg man page in server package
Fixes: 3c9fe358 ("docs: add node.cfg man page")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-10-21 09:07:25 +02:00
Gabriel Goller
521647436d docs: fix warnings in external-metric-server page
Rename external-metric-server page and fix code-block to remove some
warnings.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-10-18 17:56:46 +02:00
Gabriel Goller
3c9fe358cc docs: add node.cfg man page
Add man page for the node.cfg config file.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
 [ TL: pull out sorting of synopsis file list to separate commit ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-10-18 17:56:17 +02:00
Thomas Lamprecht
45c9383e94 buildsys: sort list of generated synopsis and man page files alphabetically
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-10-18 17:54:23 +02:00
Thomas Lamprecht
a3e87f5a03 client: progress log: small opinionated code clean-up
It was fine as is, but IMO saving a few lines is nice, albeit it makes
the atomic fetch expressions look slightly more complex by wrapping them
directly with the HumanByte and TimeSpan from-constructors.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-10-17 16:51:41 +02:00
Christian Ebner
1d746a2c02 partial fix #5560: client: periodically show backup progress
Spawn a new tokio task which, about every minute, displays the
cumulative progress of the backup for pxar, ppxar or img archive
streams. Catalog and metadata archive streams are excluded from the
output for better readability, and because the catalog upload lives
for the whole upload time, leading to possible temporal
misalignments in the output. The actual payload data is written via
the other streams anyway.

Add accounting for uploaded chunks, to distinguish from chunks queued
for upload, but not actually uploaded yet.

Example output in the backup task log:
```
...
INFO:  processed 2.471 GiB in 1m, uploaded 2.439 GiB
INFO:  processed 4.963 GiB in 2m, uploaded 4.929 GiB
INFO:  processed 7.349 GiB in 3m, uploaded 7.284 GiB
...
```

This partially fixes issue 5560:
https://bugzilla.proxmox.com/show_bug.cgi?id=5560

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-10-17 16:32:32 +02:00
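A sketch of the mechanism (counter names hypothetical, the interval shortened so the example terminates quickly; a `tokio` dependency with the rt/macros/time features is assumed):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::time::Duration;

#[tokio::main]
async fn main() {
    let processed = Arc::new(AtomicU64::new(0));

    // detached task that periodically reports the cumulative counter
    let counter = Arc::clone(&processed);
    tokio::spawn(async move {
        let mut interval = tokio::time::interval(Duration::from_millis(200));
        loop {
            interval.tick().await;
            println!("processed {} bytes", counter.load(Ordering::Relaxed));
        }
    });

    // the upload path keeps incrementing the shared counter
    for _ in 0..3 {
        processed.fetch_add(1024 * 1024, Ordering::Relaxed); // simulated upload
        tokio::time::sleep(Duration::from_millis(200)).await;
    }
}
```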
Christoph Heiss
fb378fe543 docs: installation: fix wrong product reference
This was probably copied verbatim from pve-docs and forgotten to be
appropriately changed.

Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
2024-10-17 05:43:25 +02:00
Christian Ebner
4ee9264b00 docs: Fix typo for chunk directory naming and rewording
The chunk subdirectories only use the chunk's 2-byte checksum
prefix, given in hex notation.

Also, clarify that chunks are grouped into subdirectories.

Reported in the community forum:
https://forum.proxmox.com/threads/155751/

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-10-17 05:43:25 +02:00
Thomas Lamprecht
a61272c7ff debian: run wrap-and-sort -tkn to normalize control files
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-10-16 14:34:11 +02:00
Lukas Wagner
8038f96a53 api: metrics: check permissions before reading any data from the cache
Reading from the metric cache is somewhat expensive, so validate as many
of the required permissions as possible. For host metrics, we can
do the full check in advance. For datastores, we check if we have
audit permissions for *any* datastore. If we do not have privs for
either of those, we return early and avoid reading from the
cache altogether.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Suggested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-10-15 16:03:37 +02:00
Wolfgang Bumiller
2feb4160f1 update d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-10-15 15:32:58 +02:00
Wolfgang Bumiller
da409e0a62 api: optimize metrics permission checks a bit
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-10-15 14:53:15 +02:00
Lukas Wagner
c804763bdf api: add /status/metrics API
This one is modelled exactly as the one in PVE (there it
is available under /cluster/metrics/export).

The returned data format is quite simple, being an array of
metric records, including a value, a metric name, an id to identify
the object (e.g. datastore/foo, host), a timestamp and a type
('gauge', 'derive', ...). The latter property makes the format
self-describing and aids the metric collector in choosing a
representation for storing the metric data.

[
    ...
    {
	"metric": "cpu_avg1",
	"value": 0.12,
	"timestamp": 170053205,
	"id": "host",
	"type": "gauge"
    },
    ...
]

In terms of permissions, the new endpoint requires Sys.Audit
on /system/status for metrics of the 'host' object,
and Datastore.Audit on /datastore/{store} for 'datastore/{store}'
metric objects.

Via the 'history' and 'start-time' parameters one can query
the last 30mins of metric history. If these parameters
are not provided, only the most recent metric generation
is returned.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-10-15 14:09:41 +02:00
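A hypothetical serde struct matching the record shape shown above (the actual api type lives in pbs-api-types; `serde` with derive and `serde_json` are assumed):

```rust
use serde::Serialize;

// One metric record as returned by the endpoint.
#[derive(Serialize)]
struct MetricRecord {
    metric: String,
    value: f64,
    timestamp: i64,
    id: String,
    #[serde(rename = "type")]
    ty: String,
}

fn main() {
    let record = MetricRecord {
        metric: "cpu_avg1".into(),
        value: 0.12,
        timestamp: 170053205,
        id: "host".into(),
        ty: "gauge".into(),
    };
    println!("{}", serde_json::to_string_pretty(&record).unwrap());
}
```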
Lukas Wagner
da12adb1f9 metric collection: put metrics in a cache
Any pull-metric API endpoint can later access the cache to
retrieve metric data for a limited time (30 mins).

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-10-15 14:09:41 +02:00
Lukas Wagner
20753e1b53 metric collection: initialize metric cache on startup
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-10-15 14:09:41 +02:00
Lukas Wagner
8ecbb5f152 pbs-api-types: add types for the new metrics endpoint
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-10-15 14:09:41 +02:00
Lukas Wagner
3547cfb63e metric collection: move impl block for DiskStats to metric_server module
It is only needed there and could be considered an implementation detail
of how this module works.

No functional changes intended.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-10-15 14:09:41 +02:00
Lukas Wagner
98cb8ff86b metric collection: drop std::path prefix where not needed
No functional changes intended.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-10-15 14:09:41 +02:00
Lukas Wagner
da3612fade metric collection: rrd: remove rrd prefix from some function names
We have proper namespaces, so these are a bit redundant.

No functional changes intended.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-10-15 14:09:41 +02:00
Lukas Wagner
e8c70ec252 metric collection: rrd: restrict function visibility
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-10-15 14:09:41 +02:00
Lukas Wagner
b7ac1fc8aa metric collection: rrd: move rrd update function to rrd module
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-10-15 14:09:41 +02:00
Lukas Wagner
0a852e1927 metric_collection: split out push metric part
No functional changes intended.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-10-15 14:09:41 +02:00
Lukas Wagner
045fc7750c metric collection: move rrd_cache to new metric_collection module
No functional changes intended.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-10-15 14:09:41 +02:00
Lukas Wagner
b862c872e0 metric collection: add doc comments for public functions
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-10-15 14:09:41 +02:00
Lukas Wagner
72e1181830 proxy: server: move rrd stat/metric server to separate module
With the upcoming pull-metric system/metric caching, these
things should go into a separate module.

No functional changes intended.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-10-15 14:09:41 +02:00
Christian Ebner
d87c9771e4 server: sync: factor out namespace depth check into sync module
By moving and refactoring the check for a sync job exceeding the
global maximum namespace limit, the same function can be reused for
sync jobs in push direction.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-10-14 12:31:51 +02:00
Christian Ebner
a4f08cbbbb server: sync: make skip reason message more generic
By specifying that the snapshot is being skipped because of the
condition met on the sync target instead of 'local', the same message
can be reused for the sync job in push direction without losing
its meaning.
2024-10-14 12:31:51 +02:00
Christian Ebner
2f05d211c4 server: sync: move skip info/reason to common sync module
Make `SkipReason` and `SkipInfo` accessible for sync operations of
both direction variants, push and pull.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-10-14 12:31:51 +02:00
Christian Ebner
610dd9b8f3 server: sync: move source to common sync module
Rename the `PullSource` trait to `SyncSource` and move the trait and
types implementing it to the common sync module, making them
reusable for both sync directions, push and pull.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-10-10 11:11:25 +02:00
Christian Ebner
3f52a94624 server: sync: move reader trait to common sync module
Move the `PullReader` trait and the types implementing it to the
common sync module, so this can be reused for the push direction
variant for a sync job as well.

Adapt the naming to be direction-agnostic by renaming the `PullReader` trait to
`SyncSourceReader`, `LocalReader` to `LocalSourceReader` and
`RemoteReader` to `RemoteSourceReader`.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-10-10 11:11:25 +02:00
Christian Ebner
ffe1dd4369 server: sync: move sync related stats to common module
Move and rename the `PullStats` to `SyncStats` as well as moving the
`RemovedVanishedStats` to make them reusable for sync operations in
push direction as well as pull direction.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-10-10 11:11:25 +02:00
Christian Ebner
0a916665ae api: datastore: add missing whitespace in description
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-10-10 11:11:25 +02:00
Wolfgang Bumiller
6950c7e4ef update proxmox-acme to 0.5.3
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-10-03 09:54:33 +02:00
Wolfgang Bumiller
93e9e8b6ef update to rrd-api-types 1.0.2
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-09-19 15:21:45 +02:00
Wolfgang Bumiller
79ed296f2d update to proxmox-rrd 0.4
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-09-18 16:04:20 +02:00
Christian Ebner
0415304ca4 pxar: bin: rework and dynamically generate list test data
Commit f16c5de757 ("pxar: bin: test `pxar list` with payload-input")
introduced a regression test for listing of split pxar archives. This
test relies on a large pxar blob file, the large size (> 100M) being
overlooked when writing the test.

In order to not depend on this file any further in the future, drop
it and rewrite the test to dynamically generate the files needed, and
further extend the test to thereby also cover the creation and
extraction of split pxar archives.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-09-11 16:04:46 +02:00
Wolfgang Bumiller
d1a5855e74 api: replace deprecated 'streaming' attribute with 'serializing'
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-09-05 14:01:45 +02:00
Wolfgang Bumiller
bd8c677eab update to proxmox-router 3 and proxmox-rest-server 0.8
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-09-05 14:01:36 +02:00
Wolfgang Bumiller
c3b7862be5 pxar: list stuff to stdout instead of stderr
Our tooling really needs to stop doing outputs wrong...

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-09-05 14:01:04 +02:00
Wolfgang Bumiller
ec1e78a4df bump proxmox-log dependency to 0.2.4 for stderr logging
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-09-05 14:00:59 +02:00
Gabriel Goller
1de4974eeb pxar-bin: remove log dependency, use tracing directly
When using the `log` to `tracing` translation layer, the messages get
padded with whitespaces. This bug will get fixed upstream [0], but in
the meantime we switch to the `tracing` macros.

[0]: https://github.com/tokio-rs/tracing/pull/3070

Tested-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-09-05 14:00:46 +02:00
Dominik Csapak
e97132bb64 tape: fix read element status for some changers
It seems some changers are setting the PVolTag/AVolTag flags in the
ELEMENT STATUS page response, but don't include the actual fields then.
To make it work with such changers, downgrade the errors to warnings, so
we can continue to decode the remaining data.

This is OK since one volume tag is optional and the other is skipped
anyway.

Reported in the forum:
https://forum.proxmox.com/threads/hpe-storeonce-vtl.152547/

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-09-03 09:22:08 +02:00
Gabriel Goller
dc0de0db10 move client binaries to tracing
Add tracing logger to all client binaries and remove env_logger.

The reason for this change is twofold: our migration to tracing, and the
behavior when the client calls an api handler directly. Currently the
proxmox-backup-manager calls the api handlers directly for some
commands. This results in no output (on console and task log), as no
tracing logger is instantiated.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-08-30 13:56:43 +02:00
Dominik Csapak
ff485aa320 fix #5622: backup client: properly handle rate/burst parameters
The rate and burst parameters are integers, so mapping the value
with `.as_str()` will always return `None`, effectively never
applying any rate limit at all.

Fix it by turning them into a HumanByte instead of an integer.

To not crowd the parameter section so much, create a
ClientRateLimitConfig struct that gets flattened into the parameter list
of the backup client.

To adapt the description of the parameters, add new schemas that copy
the `HumanByte` schema but change the description.

With this, the rate limit actually works, and there is no lower limit
any more.

The old TRAFFIC_CONTROL_RATE/BURST_SCHEMAs can be deleted since the
client was the only user of them.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-08-30 13:21:29 +02:00
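A minimal sketch of the fix described above, with a toy stand-in for the `HumanByte` parser (the real type and schema live in the proxmox crates); it only illustrates why string-based size parsing avoids the silently-dropped limit:

```rust
// Hypothetical stand-ins for the PBS types; illustration only.
#[derive(Debug, Default)]
struct ClientRateLimitConfig {
    rate: Option<u64>,  // bytes/s, parsed from e.g. "10MiB"
    burst: Option<u64>, // bytes, parsed from e.g. "64MiB"
}

// Toy parser standing in for the HumanByte schema.
fn parse_human_byte(s: &str) -> Option<u64> {
    let s = s.trim();
    for (suffix, factor) in [("KiB", 1u64 << 10), ("MiB", 1 << 20), ("GiB", 1 << 30)] {
        if let Some(num) = s.strip_suffix(suffix) {
            return num.trim().parse::<u64>().ok().map(|n| n * factor);
        }
    }
    s.parse().ok()
}

fn main() {
    // With integer-typed schemas, `value.as_str()` on a parsed integer
    // returned `None`, so the limit was silently dropped.
    let cfg = ClientRateLimitConfig {
        rate: parse_human_byte("10MiB"),
        burst: parse_human_byte("64MiB"),
    };
    assert_eq!(cfg.rate, Some(10 * 1024 * 1024));
    println!("{cfg:?}");
}
```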
Dominik Csapak
8f27262d42 data_blob: add TODO comment for zstd api
we currently rely on behavior of zstd that is not part of the public
api, so this is at risk of being changed without notice.

There is a public api that we could use, but it's only available
with zstd_sys >= 2.0.9, which, at this time, is not yet packaged
for/by us.

Add a comment that we can use the public api for this when the
new version of the crate gets available.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-08-30 09:57:32 +02:00
Gabriel Goller
2801fbf03c fix: proxmox-backup-manager network reload wait on worker
Make the `network reload` command in proxmox-backup-manager wait on the
api handler's workertask. Otherwise the task would be killed when the
client exits.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-08-29 14:13:10 +02:00
Christoph Heiss
4c0a8bc054 ui: user view: disable 'Unlock TFA' button by default
Without this, the button is enabled if no entry at all is selected (e.g.
when switching to the 'User Management' tab), with the button then
(obviously) being a noop.

Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
2024-08-29 14:11:46 +02:00
Wolfgang Bumiller
e9152ee951 tests: replace static mut with a mutex
rustc warns about creating references to them (although it does allow
using `.as_ref()` on them for some reason), and this will become a
hard error with edition 2024.

Previously we could not use Mutex there as its ::new() was not a
`const fn`, but now we can, so let's drop the `mut`.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-08-29 11:40:49 +02:00
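A minimal sketch of the pattern this commit applies (the test state here is illustrative): since `Mutex::new` became a `const fn` in Rust 1.63, the shared state no longer needs `static mut` at all:

```rust
use std::sync::Mutex;

// Before (warns today, hard error in edition 2024 when referenced):
// static mut TEST_STATE: Vec<u64> = Vec::new();

// After: both `Mutex::new` and `Vec::new` are `const fn`, so this
// works in a static without `mut`.
static TEST_STATE: Mutex<Vec<u64>> = Mutex::new(Vec::new());

fn main() {
    TEST_STATE.lock().unwrap().push(42);
    assert_eq!(TEST_STATE.lock().unwrap().len(), 1);
}
```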
Gabriel Goller
baacc3f2de log: retrieve ReaderEnvironment debug flag from tracing
Don't hardcode the debug flag but retrieve the currently enabled level
using tracing. This will change the default log behavior and disable
some logs that were previously printed. E.g. the "protocol upgrade
done" message is no longer visible by default because it is printed
at debug level.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-08-29 11:23:44 +02:00
Wolfgang Bumiller
631b09b2eb bump d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-08-14 12:15:40 +02:00
Maximiliano Sandoval
8f58b0bf60 cargo: remove unused dependencies
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-14 12:13:50 +02:00
Maximiliano Sandoval
bfca4f272d backup-client: remove unused dependencies
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-14 12:13:50 +02:00
Maximiliano Sandoval
59d9e62307 pxar-bin: remove unused dependencies
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-14 12:13:50 +02:00
Maximiliano Sandoval
ea3047b2c6 restore-daemon: remove unused dependencies
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-14 12:13:50 +02:00
Maximiliano Sandoval
0aab7980fc file-restore: remove unused deps
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-14 12:13:50 +02:00
Maximiliano Sandoval
2443b3f8d0 client: remove unused deps
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-14 12:13:50 +02:00
Maximiliano Sandoval
0a005b092c tools: remove unused dependencies
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-14 12:13:50 +02:00
Maximiliano Sandoval
17c82d4a73 backup: remove lazy_static dependency
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-14 12:08:01 +02:00
Maximiliano Sandoval
8dc1b5abf7 restore-daemon: remove lazy_static dependency
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-14 12:08:01 +02:00
Maximiliano Sandoval
a637e7f490 datastore: remove lazy_static dependency
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-14 12:08:01 +02:00
Maximiliano Sandoval
7549722640 fuse-loop: remove lazy_static dependency
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-14 12:08:01 +02:00
Maximiliano Sandoval
81b40e1421 tape: remove lazy_static dependency
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-14 12:08:01 +02:00
Maximiliano Sandoval
a480089bc9 config: remove lazy_static dependency
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-14 12:08:01 +02:00
Maximiliano Sandoval
e1220b02ad cargo: declare msrv
In the following commit we will make use of std::sync::LazyLock which
was introduced in rust 1.80.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-14 12:08:01 +02:00
Maximiliano Sandoval
849c2deeb8 tools: remove unused lazy_static dependency
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-14 12:08:01 +02:00
Maximiliano Sandoval
03412aaa5b client: remove unused lazy_static dependency
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-14 12:08:01 +02:00
Maximiliano Sandoval
a47b71a9ce api-types: remove unused lazy_static dependency
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-14 12:08:01 +02:00
Wolfgang Bumiller
12a141a727 bump d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-08-09 13:06:32 +02:00
Wolfgang Bumiller
4963c05f40 bump proxmox-rrd dep to 0.3
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-08-09 13:06:25 +02:00
Lukas Wagner
f9843eec16 api-types: rrd: use api-types from proxmox-rrd
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-08-09 13:04:57 +02:00
Lukas Wagner
4fa99a164d rrd_cache: use new callback for RRD creation
Some changes in `proxmox-rrd` now require a separate callback for
creating a new RRD.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-08-09 13:04:55 +02:00
Lukas Wagner
f629a56c47 daily-update: initialize context for notification system
Otherwise proxmox-daily-update panics if attempting to send a
notification for any available new updates:

  "context for proxmox-notify has not been set yet"

Reported on our community forum:
https://forum.proxmox.com/threads/152429/

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-08-09 10:47:48 +02:00
Thomas Lamprecht
a0ec3a9e14 backup: use proxmox-systemd crate
Some systemd code got split out from proxmox-sys and left there
re-exported with a deprecation marker, use the newer crate, the
workspace already depends on proxmox-systemd anyway.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-08-07 20:53:49 +02:00
Thomas Lamprecht
05d22be1cf file restore: use proxmox-systemd crate
Some systemd code got split out from proxmox-sys and left there
re-exported with a deprecation marker, use the newer crate, the
workspace already depends on proxmox-systemd anyway.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-08-07 20:53:21 +02:00
Thomas Lamprecht
1c395ad195 client: use proxmox_systemd crate
Some systemd code got split out from proxmox-sys and left there
re-exported with a deprecation marker, use the newer crate, the
workspace already depends on proxmox-systemd anyway.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-08-07 20:51:23 +02:00
Thomas Lamprecht
8caf3f9f57 cargo fmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-08-07 20:50:28 +02:00
Dominik Csapak
98c4056eaa datastore: data blob encode: simplify code
by combining the compression call from both encrypted and unencrypted
paths and deciding on the header magic at one site.

No functional changes intended, besides reusing the same buffer for
compression.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-08-07 18:58:11 +02:00
Dominik Csapak
aae596ee18 datastore: data blob: increase compression throughput
Increase the zstd compression throughput by not using the
`zstd::stream::copy_encode` method, because it seems it uses an
internal buffer size of 32 KiB [0], copies at least once extra in the
target buffer and might have some additional (allocation and/or
syscall) overhead. Due to the amount of wrappers and indirections it's
a bit hard to tell for sure. In any case, reduced throughput can be
observed if the target and source storage and the network are all so
fast that the operations from creating chunks, like compression, can
become the bottleneck.

Instead use the lower-level `zstd_safe::compress` which avoids (big)
allocations, since we provide the target buffer.

In case of a compression error just return the uncompressed data,
there's nothing we can do and saving uncompressed data is better than
having none. Additionally, log any such error besides the one for the
target buffer being too small.

Some benchmarks on my machine (Intel i7-12700K with DDR5-4800 memory
using a ASUS Prime Z690-A motherboard) from a tmpfs to a datastore on
tmpfs:

Type                without patches (MiB/s)  with patches (MiB/s)
.img file           ~614                     ~767
pxar one big file   ~657                     ~807
pxar small files    ~576                     ~627

The new approach is faster by a factor of about 1.19 on average.

Note that the new approach should not have a measurable negative
impact, e.g. (peak) memory usage wise. That is because we always
reserved a vector with max-data-size (data length + header length) and
thus did not have to add a new buffer, rather we actually removed the
buffer that the high-level zstd wrapper crate used internally.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-08-07 18:57:39 +02:00
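A sketch of the described fallback logic, assuming the `zstd` crate's one-shot `bulk::compress_to_buffer` into a caller-provided buffer (the commit uses the lower-level `zstd_safe::compress`, which behaves similarly):

```rust
// Sketch only: compress into a preallocated buffer, fall back to
// storing the data uncompressed on any compression error.
fn encode_payload(data: &[u8], level: i32) -> Vec<u8> {
    // Reserve the maximum size up front (payload plus header room), so
    // no intermediate 32 KiB stream buffer or extra copy is needed.
    let mut buf = vec![0u8; data.len() + 64];
    match zstd::bulk::compress_to_buffer(data, &mut buf, level) {
        Ok(written) => {
            buf.truncate(written);
            buf
        }
        // E.g. incompressible input overflowing the target buffer:
        // saving uncompressed data is better than having none.
        Err(err) => {
            eprintln!("zstd compression failed, storing uncompressed: {err}");
            data.to_vec()
        }
    }
}
```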
Dominik Csapak
e1d92bce57 datastore: data blob: allow checking for zstd internal buffer-too-small error
We want to check that the zstd error code is not 'Destination buffer
too small' (dstSize_tooSmall), but currently there is no practical API
that is also public. So we introduce a helper that uses the internal
logic of zstd to determine the error.

Since this is not guaranteed to be a stable api, add a test for that
so we catch that error early on build. This should be fine, as long as
the zstd behavior only changes with e.g. major debian upgrades, which
is normally the only time where the zstd version is updated.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
 [ TL: re-order fn, rename test and reword comments ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-08-07 18:54:04 +02:00
Dominik Csapak
69b8b4b02f datastore: test DataBlob encode/decode roundtrip
so that we can be sure we can decode an encoded blob again

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-08-07 18:52:06 +02:00
Dominik Csapak
b43845aa07 datastore: remove unused data blob writer
This is leftover code that is not currently used outside of its own
tests.

Should we need it again, we can just revert this commit.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-08-07 18:51:09 +02:00
Maximiliano Sandoval
42e5be0f87 fix typos in variables and function names
Variables, methods and functions in public API were not changed.

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-07 16:49:31 +02:00
Maximiliano Sandoval
19dfc86198 fix typos in strings
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-07 16:49:31 +02:00
Maximiliano Sandoval
72478171cf fix typos in docs and manual pages
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-07 16:49:31 +02:00
Maximiliano Sandoval
1198253b20 fix typos in rust documentation blocks
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-07 16:49:31 +02:00
Maximiliano Sandoval
a62a9f098d fix typos in comments
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-08-07 16:49:31 +02:00
Gabriel Goller
ab8e84498d docs: add external metrics server page
Add External Metrics page to PBS's documentation. Most of it is copied
from the PVE documentation, minus the Graphite part.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-08-07 16:31:17 +02:00
Wolfgang Bumiller
5985905eb8 replace proxmox_sys::systemd with proxmox_systemd calls
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-08-06 14:13:58 +02:00
Fabian Grünbichler
fa487e5352 bump h2 to 0.4
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-07-29 09:28:45 +02:00
Wolfgang Bumiller
3f325047dc remove use of proxmox_lang::error::io_err_other
by now its functionality is provided by std::io::Error::other

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-07-26 13:05:20 +02:00
Wolfgang Bumiller
96b7812b6a update to proxmox-log 0.2 and proxmox-rest-server 0.7
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-07-24 14:53:07 +02:00
Dietmar Maurer
eb44bdb842 client: avoid unnecessary allocation in AES benchmark
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2024-07-19 12:01:15 +02:00
Fabian Grünbichler
deb237a288 image backup: use 4M input buffer
with the default 8k input buffer size, the client will spend most of the time
polling instead of reading/chunking/uploading.

tested with 16G random data file from tmpfs to fresh datastore backed by tmpfs,
without encryption.

stock:

Time (mean ± σ):     36.064 s ±  0.655 s    [User: 21.079 s, System: 26.415 s]
  Range (min … max):   35.663 s … 36.819 s    3 runs

patched:

 Time (mean ± σ):     23.591 s ±  0.807 s    [User: 16.532 s, System: 18.629 s]
  Range (min … max):   22.663 s … 24.125 s    3 runs

Summary
  patched ran
    1.53 ± 0.06 times faster than stock

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-07-19 10:05:19 +02:00
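An illustrative reduction of the change (not the client code itself): reading the image with a 4 MiB buffer instead of the 8 KiB default, so each poll delivers a large block to the chunker:

```rust
use std::fs::File;
use std::io::Read;

fn read_image(path: &str) -> std::io::Result<u64> {
    let mut file = File::open(path)?;
    let mut buf = vec![0u8; 4 * 1024 * 1024]; // 4 MiB instead of 8 KiB
    let mut total = 0u64;
    loop {
        let n = file.read(&mut buf)?;
        if n == 0 {
            break;
        }
        total += n as u64; // the real client chunks and uploads here
    }
    Ok(total)
}
```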
Fabian Grünbichler
00ce0e38bd example: improve chunking speed example
by dropping the print-per-chunk and making the input buffer size configurable
(8k is the default when using `new()`).

this allows benchmarking various input buffer sizes. basically the same code is
used for image-based backups in proxmox-backup-client, but just the
reading and chunking part. looking at the flame graphs the smaller input
buffer sizes clearly show most of time spent polling, instead of
reading+copying (or reading and scanning and copying).

for a fixed chunk size stream with a 16G input file on tmpfs:

fixed 1M ran
    1.06 ± 0.17 times faster than fixed 4M
    1.22 ± 0.11 times faster than fixed 16M
    1.25 ± 0.09 times faster than fixed 512k
    1.31 ± 0.10 times faster than fixed 256k
    1.55 ± 0.13 times faster than fixed 128k
    1.92 ± 0.15 times faster than fixed 64k
    3.09 ± 0.31 times faster than fixed 32k
    4.76 ± 0.32 times faster than fixed 16k
    8.08 ± 0.59 times faster than fixed 8k

(from 15.275s down to 1.890s)

dynamic chunk stream, same input:

dynamic 4M ran
    1.01 ± 0.03 times faster than dynamic 1M
    1.03 ± 0.03 times faster than dynamic 16M
    1.06 ± 0.04 times faster than dynamic 512k
    1.07 ± 0.03 times faster than dynamic 128k
    1.12 ± 0.03 times faster than dynamic 64k
    1.15 ± 0.20 times faster than dynamic 256k
    1.23 ± 0.03 times faster than dynamic 32k
    1.47 ± 0.04 times faster than dynamic 16k
    1.92 ± 0.05 times faster than dynamic 8k

(from 26.5s down to 13.772s)

same input file on ext4 on LVM on CT2000P5PSSD8 (with caches dropped for each run):

fixed 4M ran
   1.06 ± 0.02 times faster than fixed 16M
   1.10 ± 0.01 times faster than fixed 1M
   1.12 ± 0.01 times faster than fixed 512k
   1.15 ± 0.02 times faster than fixed 128k
   1.17 ± 0.01 times faster than fixed 256k
   1.22 ± 0.02 times faster than fixed 64k
   1.55 ± 0.05 times faster than fixed 32k
   2.00 ± 0.07 times faster than fixed 16k
   3.01 ± 0.15 times faster than fixed 8k

(from 19.807s down to 6.574s)

dynamic 4M ran
    1.04 ± 0.02 times faster than dynamic 512k
    1.04 ± 0.02 times faster than dynamic 128k
    1.04 ± 0.02 times faster than dynamic 16M
    1.06 ± 0.02 times faster than dynamic 1M
    1.06 ± 0.02 times faster than dynamic 256k
    1.08 ± 0.02 times faster than dynamic 64k
    1.16 ± 0.02 times faster than dynamic 32k
    1.34 ± 0.03 times faster than dynamic 16k
    1.70 ± 0.04 times faster than dynamic 8k

(from 31.184s down to 18.378s)

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-07-19 10:05:14 +02:00
Fabian Grünbichler
396806b211 build: ensure wrapper config is picked up
`cargo build` and `cargo install` pick up different config files, by symlinking
the wrapper config into a place with higher precedence than the one in the
top-level git repo dir, we ensure the package build actually picks up the
desired config instead of the one intended for quick dev builds.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-07-18 12:30:47 +02:00
Christian Ebner
b5b0b87eef server: pull: fix sync info message for root namespace
The root namespace is displayed as empty string when used in the
format string. Distinguish and explicitly write out the root namespace
in the sync info message shown in the sync jobs task log.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-07-17 14:41:58 +02:00
Christian Ebner
c7275ede6d www: sync edit: indentation style fix
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-07-17 14:41:58 +02:00
Christian Ebner
ce9b933556 server: pull: silence clippy too-many-arguments warning
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-07-17 14:41:58 +02:00
Christian Ebner
1c81ffdefc server: pull: be more specific in module comment
Describe the `pull` direction of the sync operation more precisely
before adding also a `push` direction as synchronization operation.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-07-17 14:41:58 +02:00
Christian Ebner
077c1a9979 datastore: data blob: fix typos in comments
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-07-17 14:41:58 +02:00
Thomas Lamprecht
16170ef91d api certs: run cargo fmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-07-17 13:28:05 +02:00
Maximiliano Sandoval
fcccc3dfa5 fix #3699: client: prefer xdg cache directory for tmp files
Adds a helper to create temporary files in XDG_CACHE_HOME. If we cannot
create a file there, we fall back to /tmp as before.

Note that the temporary files stored by the client might grow
arbitrarily in size, making XDG_RUNTIME_DIR a less desirable option.
Citing the Arch wiki [1]:

> Should not store large files as it may be mounted as a tmpfs.

While the cache directory is most often not backed by an ephemeral
FS, using the `O_TMPFILE` flag avoids the need for potential cleanup,
e.g. on interruption of a command, since with this flag set the data
is discarded when the last file descriptor is closed.

[1] https://wiki.archlinux.org/title/XDG_Base_Directory

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
 [ TL: mention TMPFILE flag for clarity ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-07-17 13:27:09 +02:00
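A Linux-only sketch of the described approach (going through the `libc` crate directly; the actual client helper may differ): open an anonymous `O_TMPFILE` file in the cache directory and fall back to /tmp:

```rust
use std::fs::{File, OpenOptions};
use std::os::unix::fs::OpenOptionsExt;

fn create_tmp_file() -> std::io::Result<File> {
    // O_TMPFILE creates an unnamed file inside the given directory, so
    // nothing is left behind if the command is interrupted.
    let open_in = |dir: &str| {
        OpenOptions::new()
            .read(true)
            .write(true)
            .custom_flags(libc::O_TMPFILE)
            .mode(0o600)
            .open(dir)
    };
    match std::env::var("XDG_CACHE_HOME") {
        Ok(cache) => open_in(&cache).or_else(|_| open_in("/tmp")),
        Err(_) => open_in("/tmp"),
    }
}
```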
Thomas Lamprecht
2ecdbe9a96 bump apt-api-types dependency to 1.0.1
to pull in the fix for restoring backwards compatibility due to the
digest from that crate using a u8 slice instead of our dedicated
ConfigDigest type, which would serialize to String.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-07-17 11:44:41 +02:00
Christian Ebner
6e3f844f9a datastore: replace deprecated archive_type function
Commit ea584a75 "move more api types for the client" deprecated
the `archive_type` function in favor of the associated function
`ArchiveType::from_path`.

Replace all remaining callers of the deprecated function with its
replacement.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
[WB: and remove the deprecated function]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-07-12 13:57:59 +02:00
Gabriel Goller
c052040028 datastore: fix typo in comment
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-07-12 13:50:30 +02:00
Gabriel Goller
c1689192d9 datastore: use cached snapshot time string in path
When getting the `full_path` of a snapshot we did not use the cached
time string. By using it we avoid a call to the super-slow libc strftime.

This gives a minor performance improvement of circa 7%, i.e. ~100ms
on my datastore with ~5000 snapshots.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-07-12 13:41:04 +02:00
Gabriel Goller
0e9aa78bf4 datastore: avoid calculating protected attribute twice
The protected status of the snapshot is retrieved twice. This is slow
because it stats the .protected file multiple times.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
2024-07-12 13:40:23 +02:00
Wolfgang Bumiller
625e2fd95f don't directly depend on tracing-subscriber
This was only used for LevelFilter which is also exposed via the
tracing crate directly.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-07-12 11:20:20 +02:00
Wolfgang Bumiller
1134a14166 api: inherit LogContext in tasks hyper spawns in h2 handlers
so that tasks spawn()ed by hyper's h2 code log to the correct place

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-07-12 10:47:44 +02:00
Wolfgang Bumiller
c4f2bb70da bump sys and rest-server dependencies to 0.6
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-07-11 16:02:38 +02:00
Gabriel Goller
3b2ade778f api: switch from task_log! macro to tracing
Import `proxmox-log` and substitute all `task_log!`
(and task_warn!, task_error!) invocations with tracing calls (info!,
warn!, etc..). Remove worker references where it isn't necessary
anymore.

Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-07-11 15:56:06 +02:00
Gabriel Goller
7e2486e800 switch from task_log! macro to tracing
Import `proxmox-log` and substitute all `task_log!`
(and task_warn!, task_error!) invocations with tracing calls (info!,
warn!, etc..). Remove worker references where it isn't necessary
anymore.

Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-07-11 15:56:04 +02:00
Gabriel Goller
cd25c36d21 cargo: fix package name
s/proxmox-apt-api/proxmox-apt-api-types/

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-07-11 11:38:52 +02:00
Gabriel Goller
88f3ccb96c cargo: add local dependencies
Add local dependencies for new crates `proxmox-apt-api-types` and
`proxmox-config-digest`. Also fix order of deps.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-07-10 11:04:00 +02:00
Wolfgang Bumiller
68ec9356ec bump d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-07-09 08:06:17 +02:00
Dietmar Maurer
55e7bef4d2 use new apt/apt-api-types crate
2024-07-08 15:28:59 +02:00
Wolfgang Bumiller
da2002eadb bump proxmox-tfa dependency to 5.0.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-07-03 15:27:47 +02:00
Thomas Lamprecht
cb3d41e838 bump version to 3.2.7-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-07-03 13:34:21 +02:00
Thomas Lamprecht
d864ed1ca6 update online help reference info
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-07-03 11:59:39 +02:00
Wolfgang Bumiller
226e61361b manager: restore newline in wipe-disk confirmation query
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-07-03 11:20:40 +02:00
Gabriel Goller
888aede177 backup_manager: use confirmation helper in wipe-disk command
Use `Confirmation` helper in the wipe-disk command prompt.

Improves: 887d83cb (cli: add interactive confirmation for block device wipe, 2023-11-29)
Cc: Markus Frank <m.frank@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-07-03 10:57:00 +02:00
Hannes Laimer
0020601d52 datastore: fix problem with operations counting
If `.chunks/` is not available (deleted/moved), ChunkStore::open
fails, but only after updating the active operations on the datastore,
so no reference that could be dropped is returned, leading to the
operations counter always increasing. This change only updates the
counter when a reference is actually returned, not before.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2024-07-02 17:02:38 +02:00
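A schematic sketch of the ordering fix (all names here are hypothetical): the active-operations counter must only be bumped once a droppable reference is actually handed out:

```rust
struct ChunkStore;
struct OperationsGuard; // would decrement the counter again on Drop

fn update_active_operations(_delta: i64) { /* bookkeeping stub */ }

fn open_datastore(chunks_available: bool) -> Result<(ChunkStore, OperationsGuard), String> {
    // Before: update_active_operations(1) happened first; if the open
    // below failed, no guard existed to ever decrement the counter.
    if !chunks_available {
        return Err(".chunks/ missing".into()); // counter left untouched now
    }
    update_active_operations(1); // only counted once we can return a guard
    Ok((ChunkStore, OperationsGuard))
}
```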
Hannes Laimer
c3e6770104 http_client: keep renewal future running on failed re-auth
The re-authentication request can also fail due to network
instability, and not necessarily only due to an invalid ticket. In
that case it makes sense to retry refreshing the ticket in 15 minutes.
Also, the future does not depend on a failed re-authentication to be
cleaned up properly; that already happens somewhere else, so we don't
rely on this return anyway. If the ticket is actually invalid or timed
out, the main job will fail and also terminate the renewal future; the
same applies if the network is not just unstable but straight up not
working.

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-07-02 11:13:42 +02:00
Christian Ebner
de00745354 fix #5304: client: set process uid/gid for .pxarexclude-cli
The .pxarexclude-cli encodes the exclude patterns the client was
invoked with in the pxar archive as a regular file entry. The current
behaviour of setting the uid and gid to the default 0 (root) however
causes issues when trying to back up and restore the backup as a
non-root user.

Opt for using the uid/gid of the user the executable was called as,
allowing the restore for this user to succeed. Root will succeed in
restoring anyway.

Link to issue in bugtracker:
https://bugzilla.proxmox.com/show_bug.cgi?id=5304

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
2024-07-02 10:51:21 +02:00
Gabriel Goller
a981ddbc77 client: mount: wait for child to return before exiting
When using the `proxmox-backup-client mount` command, the parent sometimes
exits before we can print any error message. Most notably this happens
when no PBS_REPOSITORY is passed, as this is the first option checked.
If the underlying file descriptor has been closed, wait for the client
to complete and return the error message.

Reported-by: Friedrich Weber <f.weber@proxmox.com>
Suggested-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
2024-07-02 10:49:41 +02:00
Christian Ebner
a08698d32a close #5571: client: fix regression for map command
Commit 08fe5052 introduced functionality to mount split pxar archives
(sharing code with the map command), moving the manifest lookup to be
exclusive to fixed index archives.

However, the lookup now uses the incorrect archive name, without the
`.fidx` extension that is required for the lookup in the manifest.

Fix the issue by calling the method with the correct server archive
name including the required extension.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>

Fixes: 08fe5052 ("client: mount: make split pxar archives mountable")

[FG: reworded, add proper "Fixes:" trailer.]
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-07-02 10:41:38 +02:00
Fabian Grünbichler
8dab8f3301 make: add deb-nostrip target
it builds about 1.5 times faster than regular `make deb` (shaving off a
whopping 100s on my machine). the resulting debs containing executables are of
course bigger (since the debug symbols are not split out into their own
package, and the ELF linkage stripping is also skipped), but other than the
associated file and memory mapping overhead there should be no difference in
behaviour or performance, and such debs are suitable for local testing (both of
the build process, and the built code).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-28 13:52:52 +02:00
Wolfgang Bumiller
42c6224f92 don't call contains_key() before remove()
HashMap::remove() returns the value it removes as an Option<>, so
instead of first checking if the key exists before removing it, just
try to remove it and use the returned Option<> to test whether we
should bail!().

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-28 09:33:23 +02:00
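The pattern in miniature, as a sketch:

```rust
use std::collections::HashMap;

fn remove_entry(map: &mut HashMap<String, u64>, key: &str) -> Result<u64, String> {
    // One lookup instead of contains_key() followed by remove(); the
    // returned Option tells us whether to bail.
    map.remove(key)
        .ok_or_else(|| format!("no entry for key '{key}'"))
}
```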
Maximiliano Sandoval
f4130d531f tools: add missing cfg(test) macro
Fixes the rustc warning:

warning: struct `TestAsyncCacher` is never constructed
  --> pbs-tools/src/async_lru_cache.rs:86:12
   |
86 |     struct TestAsyncCacher {
   |            ^^^^^^^^^^^^^^^
   |
   = note: `#[warn(dead_code)]` on by default

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-06-28 09:21:07 +02:00
Maximiliano Sandoval
cef764ff85 chunk_store: do not explicitly write implied trait
Fixes the clippy warning:

warning: this bound is already specified as the supertrait of `std::iter::FusedIterator`
   --> pbs-datastore/src/chunk_store.rs:254:14
    |
254 |         impl Iterator<Item = (Result<proxmox_sys::fs::ReadDirEntry, Error>, usize, bool)>
    |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#implied_bounds_in_impls
    = note: `#[warn(clippy::implied_bounds_in_impls)]` on by default
help: try removing this bound
    |
254 -         impl Iterator<Item = (Result<proxmox_sys::fs::ReadDirEntry, Error>, usize, bool)>
255 -             + std::iter::FusedIterator,
254 +         impl std::iter::FusedIterator<Item = (Result<proxmox_sys::fs::ReadDirEntry, Error>, usize, bool)>,

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-06-28 09:21:06 +02:00
Maximiliano Sandoval
f619f3e0e7 tools: write multiplication by 01 succinctly
Fixes the clippy warning:

warning: this multiplication by -1 can be written more succinctly
   --> pbs-client/src/tools/mod.rs:700:58
    |
700 |                         SignedDuration::Negative(val) => -1 * i64::try_from(val.as_secs())?,
    |                                                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: consider using: `-i64::try_from(val.as_secs())?`
    |
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#neg_multiply
    = note: `#[warn(clippy::neg_multiply)]` on by default

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-06-28 09:21:04 +02:00
Maximiliano Sandoval
8bd4be15c9 api: remove use of unnecessary pub(self)
Fixes the clippy warning:

warning: unnecessary `pub(self)`
  --> src/api2/access/mod.rs:35:1
   |
35 | pub(self) async fn user_update_auth<S: AsRef<str>>(
   | ^^^^^^^^^ help: remove it
   |
   = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_pub_self
   = note: `#[warn(clippy::needless_pub_self)]` on by default

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-06-28 09:21:02 +02:00
Maximiliano Sandoval
1d836ed32a replace get(key).is_none() with !contains_key()
Fixes the clippy warning:

warning: unnecessary use of `get(&user2).is_none()`
    --> pbs-config/src/acl.rs:1067:36
     |
1067 |                 assert!(node.users.get(&user2).is_none());
     |                         -----------^^^^^^^^^^^^^^^^^^^^^
     |                         |
     |                         help: replace it with: `!node.users.contains_key(&user2)`
     |
     = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_get_then_check

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-06-28 09:20:59 +02:00
Maximiliano Sandoval
f3f3c67267 replace get(key).is_some() with contains_key()
Fixes the clippy warning:

warning: unnecessary use of `get(realm).is_some()`
  --> pbs-config/src/domains.rs:68:58
   |
68 |     realm == "pbs" || realm == "pam" || domains.sections.get(realm).is_some()
   |                                                          ^^^^^^^^^^^^^^^^^^^^ help: replace it with: `contains_key(realm)`
   |
   = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_get_then_check

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-06-28 09:20:56 +02:00
Fabian Grünbichler
0003743962 Makefile: drop outdated comment
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-27 09:43:01 +02:00
Fabian Grünbichler
f4a7bddd39 build: fix nocheck build
pbs2to3 was missing from the list of to-be-compiled binaries, and thus was only
compiled as a side-effect of running `cargo test` (which is skipped when the
build is using the `nocheck` build profile).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-27 09:23:25 +02:00
Fabian Grünbichler
d6f4fd4ec7 cargo config: add debug=true
else debug symbols are stripped with 1.79+.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-25 14:21:58 +02:00
Fabian Grünbichler
8c75fcd07c trivial clippy fixes
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-25 13:37:35 +02:00
Fabian Grünbichler
71bf1a3b12 run cargo fmt
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-24 10:02:07 +02:00
Fabian Grünbichler
8aa244641d trivial clippy fixes
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-24 09:59:27 +02:00
Wolfgang Bumiller
f699c72b20 bump proxmox-rrd to 0.2 and proxmox-time to 2.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-20 14:08:08 +02:00
Wolfgang Bumiller
d9e9ed845d bump bitflags to 2.4
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-20 13:38:34 +02:00
Wolfgang Bumiller
a912e551cd update README.rst to refer to .cargo/config.toml
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-20 12:31:18 +02:00
Maximiliano Sandoval
2170d58b0b fs: update comment to reflect usage of C-string literals
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-06-20 12:26:49 +02:00
Wolfgang Bumiller
603042d0a4 rename .cargo/config to .cargo/config.toml
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-20 12:24:27 +02:00
Maximiliano Sandoval
1a76efc616 cargo: use default-features
Fixes the compile-time warning:

warning: Cargo.toml: `default_features` is deprecated in favor of `default-features` and will not work in the 2024 edition
(in the `proxmox-router` dependency)

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-06-20 12:18:40 +02:00
Wolfgang Bumiller
da72994faf use XATTR_* constants instead of calling functions
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-20 11:08:58 +02:00
Wolfgang Bumiller
0c9f247cfd bump sys dependency to 0.5.7
for the new xattr constants

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-20 11:08:58 +02:00
Wolfgang Bumiller
6359e6d4d4 replace c_str! macro with c"literals"
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-20 11:07:11 +02:00
Fabian Grünbichler
fd3f72820e build: use cargo wrapper when building package
else we don't pick up the options set by the wrapper, which include generation
of debug symbols. until rustc 1.77, this was not needed because compiled
binaries always included a non-stripped libstd. now, without this change, the
binaries built with `cargo build --release` have no debug symbols at all and
trigger a warning. fix this and include debug symbols when building a package,
as was originally intended for release package builds.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-20 08:50:41 +02:00
Fabian Grünbichler
a92a745fdc build: fix SUBCRATES for arbitrary working dirs
else this only works if the git working tree is in a dir called
'proxmox-backup'

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-19 17:49:35 +02:00
Fabian Grünbichler
376cd5897e build: adapt workspace member command
to work with cargo 1.77, which changed from

 pbs-api-types 0.1.0 (path+file:///home/fgruenbichler/Sources/proxmox-backup/pbs-api-types)

to

 path+file:///home/fgruenbichler/Sources/proxmox-backup/pbs-api-types#0.1.0

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-19 16:00:39 +02:00
Gabriel Goller
734c4601a5 close #4763: client: add command to forget backup group
Add the command `proxmox-backup-client group forget <group>` so
that we can forget (delete) whole groups with all their contained
snapshots.
To avoid printing full datastore paths (which are in the error messages)
we filter out the most common one (group not found) and rephrase it.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
[WB: rebased & sorted import statements in client's main.rs]
[WB: replace extract_repository_from_value with
     remove_repository_from_value since the parameter is rejected on
     the remote side]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-19 11:32:28 +02:00
Wolfgang Bumiller
8e924a7bc0 client: add 'remove_repository_from_value' helper
'extract_repository_from_value' takes an immutable reference and
doesn't remove the parsed parameter (whereas in contrast in our PVE
codebase, the 'extract_param' method does remove it).

This adds a variant that explicitly removes it called
'remove_repository_from_value'.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-19 11:31:46 +02:00
Gabriel Goller
00c88a42a2 pxar: use anyhow::Error in PxarBackupStream
Instead of storing the error as a string in the PxarBackupStream, we
store it as an anyhow::Error. As we can't clone an anyhow::Error, we
take it out of the mutex and return it. This won't change anything, as
the consumption of the stream will stop if it gets a Some(Err(..)).

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-06-19 10:30:13 +02:00
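A sketch of the pattern (the struct is a stand-in, not the actual PxarBackupStream): since `anyhow::Error` is not `Clone`, it is kept in an `Option` behind the mutex and moved out exactly once:

```rust
use anyhow::{format_err, Error};
use std::sync::{Arc, Mutex};

struct BackupStream {
    error: Arc<Mutex<Option<Error>>>,
}

impl BackupStream {
    fn take_error(&self) -> Option<Error> {
        // `.take()` moves the error out, leaving `None`; the consumer
        // stops at the first `Some(Err(..))` anyway.
        self.error.lock().unwrap().take()
    }
}

fn main() {
    let s = BackupStream {
        error: Arc::new(Mutex::new(Some(format_err!("worker failed")))),
    };
    assert!(s.take_error().is_some());
    assert!(s.take_error().is_none()); // already consumed
}
```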
Gabriel Goller
230527a360 pxar: add UniqueContext helper
To create a pxar archive, we recursively traverse the target folder.
If there is an error further down and we add a context using anyhow,
the context will be duplicated and we get an output like:

> Error: error at "xattr/xattr.txt": error at "xattr/xattr.txt": E2BIG [skip]

This is obviously not optimal, so in recursive contexts we can use the
UniqueContext, which quickly checks the context from the last item in
the error chain and only adds it if it is unique.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-06-19 10:30:12 +02:00
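A sketch of the idea behind the helper (not the actual implementation): only attach a context string if the outermost entry of the error chain does not already carry the same text:

```rust
use anyhow::{anyhow, Error};

fn unique_context(err: Error, ctx: &str) -> Error {
    // `chain().next()` is the outermost error; skip duplicate contexts
    // produced by recursive directory traversal.
    match err.chain().next() {
        Some(outer) if outer.to_string() == ctx => err,
        _ => err.context(ctx.to_owned()),
    }
}

fn main() {
    let e = anyhow!("E2BIG").context(r#"error at "xattr/xattr.txt""#);
    let e = unique_context(e, r#"error at "xattr/xattr.txt""#);
    // Prints the context only once instead of twice.
    println!("{e:#}");
}
```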
Gabriel Goller
095ddcad48 pxar: remove ArchiveError
The sole purpose of the ArchiveError was to add the file-path to the
error. Using anyhow::Error we can add this information using the context
and don't need this struct anymore.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-06-19 10:30:10 +02:00
Thomas Lamprecht
74d735eeed ui: gc job edit: fix i18n gettext usage
String concatenating a variable with some static text as gettext
parameter cannot really work, and it also does not make sense to do
most of the time, as even if we'd use some overly generic format
string like '{0} (disabled)', it would not be easy to translate
correctly in all languages in such a generic way.

So just use the actual full string, which is already contained in our
translation catalogue anyway…

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-06-18 16:15:27 +02:00
Thomas Lamprecht
5c15fb97b4 docs: drop blanket statement recommending against remote storage
This is basically a semantic revert of e5c0d80c ("docs: add note for not
using remote storages") that, while well intended, has a few problems,
e.g.:
- This is the minimal/recommended requirements section, which should
  list the rough basic specs a setup must/should have. Listing
  everything that is not best to do would bloat this list
  significantly and it's just the wrong place for it, i.e., it isn't a
  list of things to recommend against.
- while it's true that a remote storage will basically always have
  _some_ overhead over using the same HW with a (modern) local storage
  (file) system, that does **not** mean that the remote storage has
  insufficient performance characteristics. We know of lots of fast
  Ceph setups, even release benchmarks for them, or storages like
  BlockBridge, that provide high performance while being remote.

So avoid this X-Y-problem style argumentation and focus on what is
actually important, even though I naturally get that there are some
users that use slow NFS attached storages, but breaking style here
won't cure them and I'm sure that they are capable of setting up such
a slow local storage that it won't make a real difference compared to
the NFS one.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-06-17 17:52:03 +02:00
Wolfgang Bumiller
0d47038a0c bump proxmox-sys dep to 0.5.6
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-06-17 14:06:25 +02:00
Fabian Grünbichler
1d36b502f5 Merge branch '3.2.6'
branched off to avoid a breaking change on master
2024-06-17 10:38:02 +02:00
Fabian Grünbichler
472b52f54c bump version to 3.2.6-1
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-17 10:18:53 +02:00
Christian Ebner
dbfba9db89 client: pxar: fix fuse mount performance for split archives
Adapt to the decoder/accessor method changes in the pxar library,
which were introduced in order to move the consistency check for
metadata and payload data archives.

The new location of the checks allows to access the pxar archive via
a `Split` variant reader instance, without penalty when just accessing
the metadata and not reading any payload data.

This greatly improves performance when accessing fuse mounted
archives.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>

bumped dependency after pxar version bump

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-17 10:17:42 +02:00
Dietmar Maurer
e1a506a0d0 config: acme: use latest proxmox_sys::fs::ensure_dir_exists
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2024-06-13 11:58:09 +02:00
Dominik Csapak
e5c0d80ca4 docs: add note for not using remote storages
such as NFS or SMB. They will not provide the expected performance
and it's better to recommend against them.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-06-11 11:41:58 +02:00
Dominik Csapak
a3e79113cf tape: handle PEWZ like regular early warning
as a safeguard, should the disabling not work for some reason.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-06-11 10:33:57 +02:00
Dominik Csapak
b7a6c5da06 tape: disable Programmable Early Warning Zone (PEWZ)
since that leads to errors that we don't currently catch before we
reach the regular early warning on tape.

This can be read/set via the Device Configuration Extension Mode Page.
Ignore errors on reading or writing, since the page may not be
available on LTO-4.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-06-11 10:33:39 +02:00
Dominik Csapak
192d70bbe2 tape: refactor setting the mode page
we'll reuse that code later for a different page/subpage

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-06-11 10:05:25 +02:00
Fabian Grünbichler
3a76fdc9e7 bump version to 3.2.5-1
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-10 13:45:49 +02:00
Fabian Grünbichler
4354cae7ba bump pxar to 0.11.1
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-10 13:39:33 +02:00
Fabian Grünbichler
481a9515b1 extract: don't interpret prelude as OsStr
that would drop the final byte, and the corresponding code has been removed
from pxar now as well.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-10 13:38:10 +02:00
Christian Ebner
afdc0f0e42 client: pxar: encode prelude based on writer variant
Currently, whether to encode the exclude patterns passed via cli as
prelude or via the `.pxarexclude-cli` file is based on the presence of
a previous metadata accessor.
That however leads to encoding the file entry instead of the prelude
for split archives in `data` mode and for the first snapshot in a
backup, creating undesired padding in the first payload chunk.

Therefore, use the pxar writer variant to make the decision instead.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-10 13:11:38 +02:00
Christian Ebner
903ab2e938 client: pxar: json encode cli exclude pattern in prelude
The current encoding is not extensible, so encode the cli exclude
patterns as json instead. This way, the prelude is easily serialized
and deserialized, while remaining human-readable.

Originally-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-10 13:11:29 +02:00
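A sketch of what a JSON-encoded prelude buys (the exact shape of the serialized structure here is hypothetical, `serde_json` assumed available):

```rust
fn main() -> Result<(), serde_json::Error> {
    let patterns = vec!["*.tmp".to_string(), "cache/**".to_string()];
    // Serialize: easy to extend with more fields later, still readable.
    let prelude = serde_json::to_vec(&serde_json::json!({ "exclude": patterns }))?;
    // Deserialize on the reading side:
    let decoded: serde_json::Value = serde_json::from_slice(&prelude)?;
    assert_eq!(decoded["exclude"][0], "*.tmp");
    Ok(())
}
```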
Christian Ebner
3c9a29e5cf file-restore: list: improve pxar v2 performance
Do not attach the payload reader for split pxar archives, as only the
metadata has to be accessed for listing.
This avoids that the decoder performs consistency checks with the
payload stream, which require chunk download and decoding, making the
listing unusable slow.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-10 10:58:31 +02:00
Christian Ebner
8834153187 docs: add table listing possible change detection modes
Quick and concise listing of the available change detection modes for
reference.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-10 10:30:10 +02:00
Christian Ebner
04bb256abd client: backup spec: rename change detection mode default
The current default variant is named `Default`, which is not future
proof since the default might change in the future. So rename it to
`Legacy` instead.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-10 10:30:05 +02:00
Fabian Grünbichler
fdcae4bd8a api: catalog: improve pxar v2 performance
by skipping the payload reader entirely, it's not needed for listing contents
and would make accessing larger archives too expensive.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-10 10:10:33 +02:00
Fabian Grünbichler
8f9330b582 run cargo fmt
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-07 14:00:33 +02:00
Fabian Grünbichler
e653ace318 api: catalog/file-restore: use archive-name schema
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-07 14:00:15 +02:00
Christian Ebner
c0302805c4 client: backup: conditionally write catalog for file level backups
Only write the catalog when using the regular backup mode, do not write
it when using the split archive mode.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-07 13:56:24 +02:00
Christian Ebner
4dd816b343 www: content: lookup via metadata archive instead of catalog
In case of pxar archives with split metadata and payload data, the
metadata archive has to be used to lookup entries for navigation
before performing a single file restore.

Decide based on the archive filename extension whether to use the
`catalog` or the `pxar-lookup` api endpoint.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-07 13:56:24 +02:00
Christian Ebner
84981486cb file-restore: fallback to mpxar if catalog not present
The `proxmox-file-restore list` command uses the provided path to
look up and list directory entries via the catalog. Fall back to using
the metadata archive if the catalog is not present, for fast lookups
in a backup snapshot.

This is in preparation for dropping encoding of the catalog for
snapshots using split archive encoding. Proxmox VE's storage plugin
uses this to allow single file restore for LXCs.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-07 13:56:24 +02:00
Christian Ebner
6cf75f2fe2 file-restore: never list ppxar as archive
Payload data archives cannot be used to navigate the content, so
exclude them from the archive listing, as this is used by
Proxmox VE to list in the file browser.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-07 13:56:24 +02:00
Christian Ebner
364e2d1133 api: datastore: add optional archive-name to file-restore
Allow to pass the archive name as optional api call parameter instead
of having it as prefix to the path.
If this parameter is given, instead of splitting off the archive name
from the path, the parameter itself is used, leaving the path
untouched.

This allows to restore single files from the archive, without having
to artificially construct the path in case of file restores for split
pxar archives, where the response path of the listing does not
include the archive, as opposed to the response provided by lookup
via the catalog.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-07 13:56:24 +02:00
Christian Ebner
cfb2632c9e api: datastore: conditional lookup for catalog endpoint
Add an optional `archive-name` parameter, indicating the metadata
archive to be used for directory content lookups instead of the
catalog. If provided, instead of the catalog reader, a pxar Accessor
instance is created to perform the lookup.

This is in preparation for dropping catalog encoding for snapshots
with split pxar archive encoding.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-07 13:56:24 +02:00
Christian Ebner
8d8142955f client: tools: add helper to lookup ArchiveEntrys via pxar
In preparation for looking up entries via the pxar metadata archive
instead of the catalog, in order to drop encoding the catalog for
snapshots using split pxar archives altogether.

This helper allows to look up the directory entries via the provided
accessor instance and formats them to be compatible with the output
as produced by lookups via the catalog.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-07 13:56:24 +02:00
Christian Ebner
3b95f09522 api: datastore: move reusable code out of thread
Move code that can be reused when having to perform a lookup via the
pxar metadata archive instead of the catalog out of the thread.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-07 13:56:24 +02:00
Christian Ebner
30ea695518 api: datastore: factor out path decoding for catalog
The file path passed to the catalog is base64 encoded, with an exception
for the root.
Factor this check and decoding step out into a helper function to make
it reusable when doing the same for lookups via the metadata archive
instead of the catalog.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-07 13:56:24 +02:00
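A sketch of such a helper (names and the root-path convention here are illustrative, using the `base64` crate's engine API):

```rust
use base64::Engine;

// Decode a catalog file path: the root is passed through literally,
// everything else arrives base64-encoded.
fn decode_catalog_path(path: &str) -> Result<Vec<u8>, base64::DecodeError> {
    if path == "root" || path == "/" {
        Ok(b"/".to_vec())
    } else {
        base64::engine::general_purpose::STANDARD.decode(path)
    }
}
```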
Christian Ebner
2baa9f8fb8 client: helper: fix minor formatting issue
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-07 12:09:21 +02:00
Christian Ebner
6b1110badf client: pxar: fix minor formatting issue
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-07 12:09:21 +02:00
Christian Ebner
1259234488 client: pxar: conditionally skip metadata reference test
The test will fail for all users not having euid/egid set to
1000/1000, as the reference test folder structure cannot be created
with the expected ownership.
Therefore, skip over the test if either euid or egid do not match
this condition.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-06 10:46:05 +02:00
Christian Ebner
bab0645cc6 client: pxar: do not attempt to set uid/gid in test
Setting the uid/gid for the files and folders of the test directory
structure will not work when lacking the permissions.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-06 10:46:05 +02:00
Fabian Grünbichler
766faeb04a bump pxar build-dep to 0.11
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-05 16:39:42 +02:00
Christian Ebner
c51f0d5e8d docs: add section describing change detection mode
Describe the motivation and basic principle of the clients change
detection mode and show an example invocation.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:42 +02:00
Christian Ebner
5cff9c6fe8 docs: file formats: describe split pxar archive file layout
Describes the pxar metadata archive and the corresponding pxar payload
file-format layout.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:42 +02:00
Christian Ebner
cf75bc0db5 client: pxar: set cache limit based on nofile rlimit
The lookahead cache size requires the resource limit for open file
handles to be high in order to allow for efficient reuse of unchanged
file payloads.

Increase the nofile soft limit to the hard limit and dynamically adapt
the cache size to the new soft limit minus half of the previous
soft limit.

The `PxarCreateOptions` and the `Archiver` are therefore extended by
an additional field to store the maximum cache size, with fallback to
a default size of 512 entries.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:42 +02:00
Christian Ebner
7b352cc0cf client: tools: add helper to raise nofile rlimit
The default soft limit for open file handles is rather low, as some
apis (e.g. the POSIX `select(2)` syscall) do not work with high file
descriptor numbers [0].

The lookahead cache used during the backup client's metadata comparison
to reuse unchanged files however requires much higher limits to work
effectively.

This helper function allows to raise the soft limit to the hard
limit, as provided by the `getrlimit(2)` syscall.

[0] https://0pointer.net/blog/file-descriptor-limits.html

Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:42 +02:00
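A sketch of such a helper on Linux, going through the `libc` crate directly (the actual client helper may differ in error handling):

```rust
/// Raise the nofile soft limit to the hard limit, returning the old limits.
fn raise_nofile_limit() -> std::io::Result<libc::rlimit> {
    let mut rlim = libc::rlimit { rlim_cur: 0, rlim_max: 0 };
    // SAFETY: we pass a valid pointer to a properly sized rlimit struct.
    if unsafe { libc::getrlimit(libc::RLIMIT_NOFILE, &mut rlim) } != 0 {
        return Err(std::io::Error::last_os_error());
    }
    let old = rlim;
    rlim.rlim_cur = rlim.rlim_max; // soft limit up to the hard limit
    if unsafe { libc::setrlimit(libc::RLIMIT_NOFILE, &rlim) } != 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(old)
}
```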
Christian Ebner
992487929a client: pxar: add archive creation with reference test
Add a basic regression test for archive creation with reference
metadata archive and index.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:42 +02:00
Christian Ebner
5a5d454083 client: chunk stream: switch payload stream chunker
Use the dedicated chunker with boundary suggestions for the payload
stream, by attaching the channel sender to the archiver and the
channel receiver to the payload stream chunker.

The archiver sends the file boundaries for the chunker to consume.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:42 +02:00
Christian Ebner
589f510e7d chunk stream: tests: add regression tests for payload chunker
Regression tests to cover suggested and forced boundaries as well as
chunk injection.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:42 +02:00
Christian Ebner
e11ee319ce chunker: tests: add regression tests for payload chunker
Test chunking of a payload stream with suggested chunk boundaries.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:42 +02:00
Christian Ebner
88ef759cc4 datastore: chunker: implement chunker for payload stream
Implement the Chunker trait for a dedicated payload stream chunker,
which extends the regular chunker by the option to suggest boundaries
to be used over the hash-based boundaries whenever possible.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:42 +02:00
Christian Ebner
e321815635 datastore: chunker: add Chunker trait
Add the Chunker trait and move the current Chunker to ChunkerImpl to
implement the trait instead. This allows to use different chunker
implementations by dynamic dispatch and is in preparation for
implementing a dedicated payload chunker.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:42 +02:00
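A structural sketch of the refactoring (the trait's real method signature in the datastore differs; this only shows the dynamic-dispatch shape):

```rust
trait Chunker {
    /// Return the offset of the next chunk boundary within `data`,
    /// or 0 if more input is needed.
    fn scan(&mut self, data: &[u8]) -> usize;
}

// The previous concrete chunker, now one implementation among others.
struct ChunkerImpl {
    avg_size: usize,
}

impl Chunker for ChunkerImpl {
    fn scan(&mut self, data: &[u8]) -> usize {
        // stand-in for the real rolling-hash boundary scan
        if data.len() >= self.avg_size { self.avg_size } else { 0 }
    }
}

fn make_chunker(payload: bool) -> Box<dyn Chunker> {
    // A dedicated payload chunker honoring suggested boundaries can be
    // returned here once implemented.
    let _ = payload;
    Box::new(ChunkerImpl { avg_size: 4 * 1024 * 1024 })
}
```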
Christian Ebner
a43399da06 pxar: add optional payload input to mount archive
Allow to pass an optional input path to mount a split pxar archive
with dedicated payload data file.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
108764f95b pxar: bin: support creation of split pxar archives via cli
Add support to create split pxar archives by redirecting the payload
output to a dedicated file.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
f16c5de757 pxar: bin: test pxar list with payload-input
Add a unit test to check for correct listing of pxar archives with
split payload input.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
1bec755b50 pxar: bin: ignore version and prelude entries in listing
Do not list the pxar format version and the prelude entries in the
output of pxar list, as these are not regular entries. Do include
them, however, when dumping with the debug environment variable set.
Since the prelude is arbitrary in size, only show the content size.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
58bfb6bb49 pxar: bin: show padding in debug output on archive list
In addition to the entries, also show the padding encountered in-between
referenced payloads.

Example invocation: `PXAR_LOG=debug pxar list archive.mpxar`

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
ee478ef1dc client: pxar: allow to restore prelude to optional path
Pxar archives allow storing additional information in a prelude
entry since pxar format version 2.

Add an optional parameter to `pxar` and `proxmox-backup-client` to
specify the path to restore the prelude to and pass this to the
archive extraction by extending the `PxarExtractOptions` by a
corresponding field. If none is given, the prelude is simply skipped
during restore.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
126fe1365d client: pxar: opt encode cli exclude patterns as Prelude
Instead of encoding the pxar cli exclude patterns as a regular file
within the root directory of an archive, store this information
directly after the pxar format version entry, in an entry of kind
Prelude.

This behavior is, however, currently exclusive to archives written
with format version 2 in the split metadata and payload case.

This is a breaking change for the encoding of new cli exclude
parameters. Any new exclude parameter will not be added to an already
present .pxar-cliexclude file, and the file will not be created if
not present.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
0cbfaf64b5 client: pxar: add helper to handle optional preludes
Pxar archives with format version 2 allow storing optional entries:
the file format version and the prelude.

Cover the case for these entries: the file format version entry was
introduced to distinguish between different file formats used for
encoding, while the prelude entry stores optional metadata such as
the pxar cli exclude parameters.

Add the logic to accept and decode these prelude entries when
accessing the archive via a decoder instance.

For now simply ignore them.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
7401be7e96 client: backup writer: make backup info output more concise
With the additional output in case of split pxar archives, the upload
statistics logged by the backup writer following a backup are crowded
and hard to read.

Make the output more concise by merging the current two lines per
upload stream, shown as e.g.:

```
data.ppxar: had to backup 4 MiB of 10.943 GiB (compressed 159 B) in 49.30s
data.ppxar: average backup speed: 83.09 KiB/s
```

into a single line, shown as e.g.:

```
data.ppxar: had to back up 4 MiB of 10.943 GiB (159 B compressed) in 49.30 s (average 83.09 KiB/s)
```

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
5b91d85150 pxar: create: show chunk injection stats info output
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
64eddfffbe pxar: create: keep track of reused chunks and files
Track and log reused or reencoded files as well as the reused chunks
and their paddings.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
51bb7bc6b0 client: backup writer: add injected chunk count to stats
Track the number of injected chunks and show them in the debug output

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
7dcbe69a87 fix #3174: client: pxar: enable caching and meta comparison
When walking the file system tree, check for each entry whether it is
reusable, meaning that the metadata did not change and the payload
chunks can be reindexed instead of reencoding the whole data.

If the metadata matches, the range of the dynamic index entries for
that file is looked up in the previous payload data index.
Use the range and possible padding introduced by partial reuse of
chunks to decide whether to reuse the dynamic entries and encode
the file payloads as payload reference right away or cache the entry
for now and keep looking ahead.

If, however, a non-reusable (because changed) entry is encountered
before the padding threshold is reached, the entries in the cache are
flushed to the archive by reencoding them, resetting the cached state.

Reusable chunk digests and sizes, as well as reference offsets to the
start of regular file payloads within the payload stream, are injected
into the backup stream by sending them to the chunker via a dedicated
channel, forcing a chunk boundary and inserting the chunks.

If the threshold value for reuse is reached, the chunks are injected
in the payload stream and the references with the corresponding
offsets encoded in the metadata stream.

Since multiple files might be contained within a single chunk, chunk
deduplication is ensured by holding back the last chunk, so that
following files can reuse that same chunk without indexing it twice.
This chunk is also guaranteed to be injected into the stream in case
the following lookups lead to a cache clear and reencoding.

Directory boundaries are cached as well, and written as part of the
encoding when flushing.
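
The per-entry decision can be condensed to the following sketch; all
names are illustrative, the real archiver state is considerably
larger:

```
enum Decision {
    Reuse,            // reference the previous payload chunks directly
    Reencode,         // flush the cache and encode the payload anew
    KeepLookingAhead, // cache the entry, later files may amortize padding
}

fn decide(metadata_matches: bool, padding: u64, threshold: u64) -> Decision {
    if !metadata_matches {
        // a changed entry flushes all cached entries by reencoding them
        Decision::Reencode
    } else if padding <= threshold {
        // the chunk range is cheap enough to reference right away
        Decision::Reuse
    } else {
        // postpone the decision and keep looking ahead
        Decision::KeepLookingAhead
    }
}
```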

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
3149454511 client: pxar: refactor catalog encoding for directories
Move the catalog directory start and end encoding from `add_entry`
to `add_directory`, the latter being called by the former.

By this, the `add_entry` method can be reused to walk the filesystem
tree in the context of an enabled lookahead cache without encoding
anything.

No functional change intended.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
6f23976247 pxar: caching: add look-ahead cache
Add a lookahead cache and the necessary types to store the required
data and keep track of directory boundaries while traversing the
filesystem tree, in order to postpone the decision whether to reuse
or reencode a given regular file with unchanged metadata.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
d0f7d86c9e client: pxar: add method for metadata comparison
Add a method to compare the metadata of the current file entry
against the metadata of the entry looked up in the previous backup
snapshot. If the metadata matches, the start offset pointing to the
file's payload header in the payload stream is returned.

This is in preparation for reusing payload chunks for unchanged files.
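
In essence (with a deliberately simplified, illustrative `Metadata`
type, not the actual pxar one):

```
#[derive(PartialEq)]
struct Metadata {
    mode: u32,
    mtime: i64,
    uid: u32,
    gid: u32,
    size: u64,
}

/// Return the payload header offset in the previous payload stream
/// if the metadata is unchanged, `None` otherwise.
fn reusable_payload_offset(
    current: &Metadata,
    previous: &Metadata,
    offset: u64,
) -> Option<u64> {
    (current == previous).then_some(offset)
}
```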

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
fdea4e5327 client: implement prepare reference method
Implement a method that prepares the decoder instance to access a
previous snapshot's metadata index and payload index in order to
pass them to the pxar archiver. The archiver can then utilize these
to compare the metadata of files to the previous state and gather
reusable chunks.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
7c00ec904d specs: add backup detection mode specification
Adds the specification for switching the detection mode used to
identify regular files which changed since a reference backup run.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
7de35dc243 client: streams: add channels for dynamic entry injection
To reuse dynamic entries of a previous backup run and index them for
the new snapshot, add a non-blocking channel between the pxar
archiver and the chunk stream, as well as between the chunk stream
and the backup writer.

The archiver sends forced boundary positions and the dynamic
entries to inject into the chunk stream following this boundary.

The chunk stream, as the receiver, consumes these channel inputs
whenever a new chunk is requested by the upload stream, forcing a
non-regular chunk boundary in the pxar stream at the requested
positions.

The dynamic entries to inject and the boundary are then sent via the
second asynchronous channel to the backup writer's upload stream,
which indexes them by inserting the dynamic entries as known chunks
into the upload stream.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
77fdae28cf chunker: add method to reset chunker state
When forcing a boundary, the internal chunker state is no longer in
sync with the chunk stream. The reset method therefore allows
resetting the internal state.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
717b9b4c88 client: chunk stream: add struct to hold injection state
Adds a dedicated structure to hold the optional sender and receiver
instances and state for injection of reused dynamic entries in the
payload stream for split stream pxar archives.

The asynchronous channels must only be attached to the payload
archive, leaving the current behavior for the metadata archive and
current default encoding without reusing payload chunks of previous
snapshots.
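
The rough shape of that state (names illustrative, the actual structs
carry some additional bookkeeping):

```
use std::sync::mpsc::{Receiver, Sender};

/// Reused dynamic entries to insert at a forced chunk boundary.
struct InjectChunks {
    boundary: u64,                // byte offset of the forced boundary
    chunks: Vec<([u8; 32], u64)>, // (digest, size) of reused dynamic entries
}

/// Optional injection state, attached only to the payload stream.
struct InjectionData {
    boundaries: Receiver<InjectChunks>, // filled by the archiver
    injections: Sender<InjectChunks>,   // drained by the upload stream
}
```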

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
e8f3abb88f upload stream: implement reused chunk injector
In order to be included in the backups index file, reused payload
chunks have to be injected into the payload upload stream at a
forced boundary. The chunker forces a chunk boundary and sends the
list of reusable dynamic entries to be uploaded.

This implements the logic to receive these dynamic entries via the
corresponding communication channel from the chunker and inject the
entries into the backup upload stream by looking for the matching
chunk boundary, already forced by the chunker.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
c2fc7f5390 client: pxar: helper for lookup of reusable dynamic entries
The helper method allows looking up the entries of a dynamic index
which fully cover a given offset range. Further, the helper returns
the start padding from the start offset of the dynamic index entry
to the start offset of the given range, as well as the end padding.

This will be used to look up size and digest for chunks covering the
payload range of a regular file, in order to reuse found chunks by
indexing them in the archive's index file instead of reencoding the
payload.
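
A sketch of that lookup under assumed types, where dynamic index
entries are (end offset, digest) pairs sorted by end offset:

```
/// Find the entries fully covering [start, end) and return their index
/// range plus the padding before and after the requested range.
fn lookup_range(
    entries: &[(u64, [u8; 32])],
    start: u64,
    end: u64,
) -> Option<(usize, usize, u64, u64)> {
    // first entry whose end offset lies past the range start
    let first = entries.partition_point(|(e, _)| *e <= start);
    let mut last = first;
    while last < entries.len() && entries[last].0 < end {
        last += 1;
    }
    if last >= entries.len() {
        return None; // range not fully covered by the index
    }
    let chunk_start = if first == 0 { 0 } else { entries[first - 1].0 };
    Some((first, last, start - chunk_start, entries[last].0 - end))
}
```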

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
51e8fa9648 client: pxar: include payload offset in entry listing
Also display the payload offset as listing output when the regular file
entry had a payload reference rather than the payload encoded in the
archive. This allows for debugging by inspecting the raw payload data
file at the given offset.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
d83839ddf3 pxar: bin: add more context to extraction error
Show more of the extraction error context provided by the pxar decoder.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
cf5d30c53f pxar: bin: cover listing for split archives
Allows listing entries of split pxar archives. As the decoder skips
over the file payloads, the corresponding payload file has to be
provided. Otherwise the decoder would skip within the metadata
archive, leading to incorrect decoding.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
0b789a96dd pxar: bin: add optional payload input for archive restore
Allows passing the optional payload input to restore for cases where
the regular file payloads are stored in the split archive.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
082c801ebb file restore: show more error context when extraction fails
Otherwise the context swallows the actual, underlying error message.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
d4a22d05df file restore: cover split metadata and payload archives
Attach the payload data archive as input stream to the decoder
and accessor instances for split archives.
Allows restoring contents from split archives via the
`proxmox-file-restore extract` command, by passing the metadata
archive name.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
00b0fbc4b6 file restore: factor out getting pxar reader
Factor out the logic to get the pxar reader into a dedicated function
so it can be reused to get the payload data archive reader instance.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
8fb247b030 file restore: cover extension for split pxar archives
Cover the additional `.mpxar` extension for the metadata archive and
`.ppxar` for the payload data of pxar archives written as split
archives.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
4dcc60e3d3 www: cover metadata extension for pxar archives
Allows accessing the pxar metadata archives for navigation and
download via the Proxmox Backup Server web ui.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
82f4d32544 catalog: shell: make split pxar archives accessible
Cover the cases where the pxar archive was uploaded as split payload
data and metadata streams. Instantiate the required reader and
decoder instances to access the metadata and payload data archives,
using the corresponding helper methods.
Allows restoring split metadata and payload stream pxar archives via
the catalog shell.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
0e44d9d30c api: datastore: attach split archive payload chunk reader
Attach the payload chunk reader for pxar archives which have been
uploaded using split streams for metadata and payload data.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
08fe50520a client: mount: make split pxar archives mountable
Cover the cases where the pxar archive was uploaded as split payload
data and metadata streams. Instantiate the required reader and
decoder instances to access the metadata and payload data archives.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
99dea0b678 client: tools: cover extension for split pxar archives
Cover the additional `.mpxar` extension for the metadata archive and
`.ppxar` for the payload data file in the cli parameter completion
callback.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
a701d015dd client: restore: read payload from dedicated index
Whenever a split pxar archive is encountered, instantiate and attach
the required dedicated reader instance to the decoder instance on
restore.

Piping the output to stdout is not possible for these, as this would
require a decoder instance which can decode the input stream, while
maintaining the pxar stream format as output.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
65dee618cc client: tools: helper to check pxar filename extensions
With the introduction of split pxar archives, the allowed extensions
are now `.pxar`, `.mpxar` and `.ppxar`. Add a helper function to
check for all valid variants, including the optional additional
`.didx` in case of a server archive name.
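
A possible shape of such a helper (illustrative, not necessarily the
exact client code):

```
fn has_pxar_filename_extension(name: &str, server_archive: bool) -> bool {
    // server archive names may carry an additional `.didx` suffix
    let name = if server_archive {
        name.strip_suffix(".didx").unwrap_or(name)
    } else {
        name
    };
    name.ends_with(".pxar") || name.ends_with(".mpxar") || name.ends_with(".ppxar")
}
```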

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
4d1831ef56 client: helper: add method for split archive name mapping
Helper method that takes an archive name as input and checks if the
given archive is present in the manifest, by also taking possible
split archive extensions into account.
Returns the pxar archive name if found or the split archive names if
the split archive variant is present in the manifest.

If neither is matched, an error is returned signaling that nothing
in the manifest matched.
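
A sketch of the mapping logic under assumed manifest entry names
(illustrative only):

```
enum ArchiveVariant {
    Plain(String),         // e.g. "root.pxar.didx"
    Split(String, String), // e.g. ("root.mpxar.didx", "root.ppxar.didx")
}

fn mapped_archive_name(manifest: &[String], name: &str) -> Option<ArchiveVariant> {
    let base = name.strip_suffix(".pxar").unwrap_or(name);
    let plain = format!("{base}.pxar.didx");
    let meta = format!("{base}.mpxar.didx");
    if manifest.iter().any(|a| *a == plain) {
        Some(ArchiveVariant::Plain(plain))
    } else if manifest.iter().any(|a| *a == meta) {
        Some(ArchiveVariant::Split(meta, format!("{base}.ppxar.didx")))
    } else {
        None // caller turns this into an error
    }
}
```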

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
488872e461 client: helper: add helpers for creating reader instances
Add a module for helper methods which are needed in different
submodules of the client.

Add `get_pxar_fuse_reader`, `get_buffered_pxar_reader` and
`get_pxar_fuse_accessor` to create reader instances to access pxar
archives.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
07cb3e7f77 client: pxar: optionally split metadata and payload streams
... and attach the split payload writer variant to the pxar archive
creation. By this, metadata and payload data will create separate
dynamic indexes, allowing lookup and reuse of payload chunks without
the additional overhead of the pxar archive's metadata.

For now this functionality remains disabled and will be enabled in a
later patch once the logic for reusing the payload chunks is in
place.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
0fd3bcebe7 client: pxar: combine writers into struct
Introduce a `PxarWriters` struct to bundle all writer instances
required for the pxar archive creation into a single object to limit
the number of function call parameters.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Christian Ebner
e2784a594e client: pxar: switch to stack based encoder state
... and adapt to the new reader/writer variant for encoder or
decoder/accessor to attach a dedicated payload input/output for split
pxar archives.

In preparation for look-ahead caching, where passing around
per-directory-level encoder instances with internal references is
not feasible.

Previously, for each directory level a new encoder instance was
generated, restricting possible implementation errors. These encoder
instances were internally linked by references to keep track of the
state changes in a parent-child relationship.

This is however not feasible when the encoder has to be passed by
mutable reference, as required by the look-ahead cache
implementation. The encoder has therefore been adapted to use a
single instance implementation with an internal stack keeping track
of the state.

Depends on the bumped pxar library version, including the patches to
attach the corresponding variant for the pxar reader/writer
instantiation.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 16:39:41 +02:00
Fabian Grünbichler
4940514b0f bump version to 3.2.4-1
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-06-05 16:24:36 +02:00
Christian Ebner
9978f6934b datastore: dynamic index: add method to get digest
In preparation for injecting reused payload chunks in payload streams
for regular files with unchanged metadata. Allows getting the digest
of a dynamic index entry to construct a reusable dynamic entry from
it.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 10:47:36 +02:00
Christian Ebner
846e10cdb4 api: datastore: refactor getting local chunk reader
Move the code to get the local chunk reader to a dedicated function
to make it reusable. The same code is required to get the local chunk
reader for the payload stream for split stream archives.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 09:59:13 +02:00
Christian Ebner
3e57f3dc91 client: backup: factor out extension from backup target
Instead of composing the backup target name and pushing it to the
backup list, push the archive name and extension separately, only
constructing it while iterating the list later.

By this it remains possible to additionally prefix the extension, as
required with the separate pxar metadata and payload indexes.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2024-06-05 09:59:13 +02:00
Shannon Sterz
94d6a65dd6 auth: add locking to PbsAuthenticator to avoid race conditions
currently we don't lock the shadow file when removing or storing a
password. by adding locking here we avoid a situation where storing
and/or removing a password concurrently could lead to a race
condition. in this scenario it is possible that a password isn't
persisted or a password isn't removed. we already do this for
the "token.shadow" file, so just use the same mechanism here.

Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
2024-06-03 10:55:02 +02:00
Fiona Ebner
843211b050 fix #5503: d/control: bump dependency for proxmox-widget-toolkit
With proxmox-widget-toolkit < 4.1.4, loading the UI will fail with
a JavaScript error:

> Uncaught TypeError: Proxmox.Utils.overrideNotificationFieldName is not a function

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-06-03 09:50:19 +02:00
Gabriel Goller
92c0b1866b fix: api: permission using wrong pathname
The read_interface endpoint uses the wrong path identifier. It was
renamed to 'iface' some time ago but hadn't been changed here.

When a user had the 'Admin' permission on '/', they weren't able to
show the config of a single interface, as the non-existent path
didn't match.

Reported-by: https://forum.proxmox.com/threads/permissons-not-working-for-network-settings.147899/

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-05-31 11:03:28 +02:00
Wolfgang Bumiller
83e748baf5 fixup build with new acme crate
We missed an API break in the acme crate versioning...

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-05-27 10:54:03 +02:00
Fabian Grünbichler
8c0bbc0d97 trivial clippy fixes
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-05-24 12:49:59 +02:00
Fabian Grünbichler
b096c590eb run cargo fmt
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-05-24 12:49:21 +02:00
Thomas Lamprecht
1d4afdccea bump version to 3.2.3-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-05-22 19:32:15 +02:00
Thomas Lamprecht
e50448e4ec tape: rework setting MAM Host type attributes
The product name is Proxmox Backup Server, not just Backup Server;
the latter makes no sense on its own, and tools extracting any Medium
Auxiliary Memory (MAM) info really cannot be expected to render it as
`${app_vendor} ${app_name}`.

Drop the comment about ignoring errors, that's pretty clear with
the only-log-error construct.

Instead, add some comments about what the hex numbers refer to and
what their respective length (limit) is. The names were taken from
Table 315 "MAM Host type attributes" in the "IBM LTO SCSI Reference"
for LTO 9.

Slightly off-topic: the tape code really is a mess with those hex
numbers sprinkled hard-coded all over the place, often with some
unchecked coupling in other places (like here, the list of set MAM
attrs and the ones that get cleared can easily get out of sync..),
but that's for another time to clean up (I need to cut a release).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-05-22 19:15:16 +02:00
Thomas Lamprecht
23a9d70d57 build config: add constant for full cargo crate version
and a todo comment to document some cleanup potential

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-05-22 19:02:28 +02:00
Fabian Grünbichler
a55c6efbf7 acme: explicitly ask for custom directory URI
instead of blocking on input without telling the user what's going on.

Reported on the forum: https://forum.proxmox.com/threads/147058/

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-05-22 17:53:40 +02:00
Lukas Wagner
1665eb2e48 ui: datastore options: link to 'notification-mode' section
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-05-22 17:50:03 +02:00
Lukas Wagner
c730196684 docs: notifications: rewrite overview for more clarity
Also link to the following subsections where applicable.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
2024-05-22 17:50:03 +02:00
Lukas Wagner
4ce1962124 docs: document notification-mode and merge old notification section
This new section describes how the notification-mode parameter works.
The section also contains parts of the old notification section
from the maintenance chapter, reusing the description of the
`notify` and `notify-user` parameters.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
2024-05-22 17:50:03 +02:00
Gabriel Goller
1d0bcd2359 notifications: fix legacy sync notifications
When using the legacy notifications, the sync mode would pick up the
settings from the prune job, which default to Error. This completely
disables notifications for successful sync jobs when using the legacy
system.

Reported in the forum: https://forum.proxmox.com/threads/147018/

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
2024-05-22 17:31:51 +02:00
Wolfgang Bumiller
71c65d2282 bump d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-05-22 16:05:53 +02:00
Wolfgang Bumiller
61f55ceee1 bump proxmox-auth-api to 0.4
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-05-22 16:05:49 +02:00
Stefan Sterz
9ce3d0c88c auth: use auth-api when generating keys and generate ec keys
this commit switches pbs over to generating ed25519 keys when
generating new auth api keys. this also removes the last direct
usages of openssl here and further unifies key handling in the auth
api.

Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
2024-05-22 16:04:21 +02:00
Stefan Sterz
048a81cc55 auth: move to auth-api's private and public keys when loading keys
this commit moves away from using openssl's `PKey` and uses the
wrappers from proxmox-auth-api. this allows us to handle keys in a
more flexible way and enables us to move to ec based crypto for the
authkey in the future.

Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
2024-05-22 16:04:19 +02:00
Stefan Sterz
8e77260256 auth: upgrade hashes on user log in
if a user's password is not hashed with the latest password hashing
function, re-hash the password with the newest hashing function. we
can only do this on login and after the password has been validated,
as this is the only point at which we have access to the plain text
password and also know that it matched the original password.

Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
2024-05-22 16:04:18 +02:00
Stefan Sterz
cf71dc2428 auth: move to hmac keys for csrf tokens
previously we used a self-rolled implementation for csrf tokens. while
it's unlikely to cause issues in reality, as csrf tokens are only
valid for a given ticket's lifetime, there are still theoretical
attacks on our implementation. so move all of this code into the
proxmox-auth-api crate and use hmac instead.

this change should not impact existing installations for now, as this
falls back to the old implementation if a key is already present. hmac
keys will only be used for new installations and if users manually
remove the old key so a new hmac key gets generated.

Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
2024-05-22 16:04:16 +02:00
Thomas Lamprecht
3c23c4c250 ui: garbage-collection: use different state-id for global and per-datastore view
For one, these different views show different columns, and more
importantly: with the state being shared one could change sorting in
the global view and then have that applied in the per-datastore view
too, even if one cannot sort that view explicitly otherwise as there's
just one row anyway. This small glitch might lead to a bit of
confusion in the worst case and looks unpolished in any case.

Note that I explicitly decided against encoding the datastore in the
state-id for the per-datastore views for now, as most users will want
to adapt layout (like column width) for all per-datastores views.

Having to re-do that for every datastore separately can be quite a
nuisance while the same user wanting different layout for each
datastore in their per-datastore view seems rather to be an edge case.
And we can always change this, so starting out with the slightly more
restricted design that has less browser local data to be saved seems
better w.r.t. maintainability.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-05-21 11:34:21 +02:00
Gabriel Goller
0385762859 fix #5422: ui: garbage-collection: make columns in global view sortable
Make columns sortable in the global 'Prune & GC Jobs' view. In the
per-datastore view the columns will not be sortable as there can only be
one job.

Fixes: db3fd213 ("fix #3217: ui: global prune and gc job view")

Co-authored-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
2024-05-21 11:29:31 +02:00
Dominik Csapak
5901050e7a restore daemon: search disk also with truncated serial
the disk serial given to virtio disks can only be 20 characters, so
looking for a disk with a longer serial (like
'drive-tpmstate0-backup') will always fail. If the serial is longer,
also try with the truncated one. Leave the first try in place in case
the limit changes.
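
The fallback in a nutshell (names illustrative; the real lookup scans
the restore VM's block devices):

```
fn find_disk(serial: &str) -> Option<String> {
    lookup_by_serial(serial).or_else(|| {
        // virtio serials are limited to 20 bytes; retry truncated
        let truncated = serial.get(..20)?;
        lookup_by_serial(truncated)
    })
}

fn lookup_by_serial(serial: &str) -> Option<String> {
    // placeholder for the actual device scan in the restore daemon
    let _ = serial;
    None
}
```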

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-05-16 11:50:45 +02:00
Dominik Csapak
7bc7601f65 restore daemon: log some errors for dir traversal
in case we cannot stat a file in the restore vm, log the path and reason
why. This should normally not happen, but when it does, the path and
error might help us find the issue.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-05-16 11:50:45 +02:00
Dominik Csapak
31edde560a fix #5465: restore daemon: mount ntfs with utf8 charset
since the change in our restore image to ntfs3, non-ISO8859-1
filenames were broken. Fix that by adding the 'iocharset' option to
ntfs3.

Leave the ntfs option in place, so that if the image gets booted
with an older kernel for some reason, this still works.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-05-16 11:50:45 +02:00
Thomas Lamprecht
98e2c16a04 ui: update online help info
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-05-15 19:05:47 +02:00
Dietmar Maurer
00ef50146c api: syslog: fix api macro to return array instead of object.
The implementation already returns Vec, so this change is to generate
correct api documentation.

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2024-05-15 12:17:03 +02:00
Dominik Csapak
6d4b380c3d tape: write informational MAM attributes on tapes
namely:

Vendor: Proxmox
Name: Backup Server
Version: current running package version
User Label Text: the label text
Media Pool: the current media pool

write it on labeling and when writing a new media-set to a tape.

While we currently don't use this info for anything, it can help
users identify tapes, even with different backup software.

If we need it in the future, we can e.g. make decisions based on these
fields (e.g. the version).

On format, delete them again.

Note that some VTLs don't correctly delete the attributes from the
virtual tapes.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-05-15 09:35:56 +02:00
Dominik Csapak
9d2fc6565f tape: correct mam format for some attributes
Some MAM attributes are of type 'TEXT', which is not limited to
ASCII, but controlled by an additional field that specifies various
8-bit text formats.

For now, simply assume utf8, as the default is ascii and we don't
expect any data that is not ASCII anyway.

This will be needed when we'll want to write those attributes.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-05-15 09:32:10 +02:00
Dominik Csapak
b5af9333f8 tape: include drive activity in status
Since we don't query each drive's status separately, but rely on a
single call to the drive listing for that, we now add the option to
query the activity there too. This makes that data available for us
to show in a separate (by default hidden) column.

Also we show the activity in the 'State' column when the drive is idle
from our perspective. This is useful when e.g. an LTO-9 tape is loaded
the first time and is calibrating, since that happens automatically.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-05-14 10:31:33 +02:00
Dominik Csapak
4ebb08a5f0 tape: drive status: make some depend on the activity
when the tape drive has an activity (and the tape is in motion), certain
calls block until the operation is finished. Since we cannot predict how
long it's going to take and it can be quite long in certain cases,
skip those calls when the drive is doing anything.

If we cannot determine the activity, try to do the queries.

We have to extend the check for a loaded drive in the UI, since the
position is not available during any activity.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-05-14 10:27:23 +02:00
Dominik Csapak
1d6b1e0258 tape: add drive activity to drive status api
and show it in the gui for single drives. Adds the known values for the
activity to the UI.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-05-14 10:25:42 +02:00
Dominik Csapak
3f1a084b90 tape: add functions to parse drive device activity
we use the VHF part from the DT Device Activity page for that.
This is intended to query the drive for its current state and activity.

Currently only the activity is parsed and used.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-05-14 10:11:06 +02:00
Dominik Csapak
4b21a00744 tape: save 'bytes used' in tape inventory
and show them on the ui. This can help users see how much a tape
is used.

The value is updated on 'commit' and when the tape is changed during a
backup.

For drives not supporting the volume statistics, this is simply skipped.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-05-14 10:07:57 +02:00
Dietmar Maurer
aea66a8128 tape: cleanup: rename bytes_written to bytes_written_after_sync
2024-05-08 09:16:57 +02:00
Dominik Csapak
372709326e examples: add tape write benchmark
A small example that simply writes pseudo-random chunks to a drive.
This is useful to benchmark throughput on tape drives.

The output and behavior is similar to what the pool writer does, but
without writing multiple files, committing or loading data from disk.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-05-08 09:04:52 +02:00
Dominik Csapak
c343c3f7f6 tape: improve throughput by not unnecessarily syncing/committing
When writing data to tape, the idea was to sync/commit to tape and
write the catalog to disk every 128 GiB of data. For that, the
counter 'bytes_written' was introduced and checked after every
chunk/snapshot archive.

Sadly we forgot to reset the counter after doing so, which meant that
after 128GiB was written onto the tape, we synced/committed after every
archive on the tape for the remaining length of the tape.

Since syncing to tape and writing to disk takes a bit of time, the drive
had to slow down every time and reduced the available throughput. (In
our tests here from ~300MB/s to ~255MB/s).

By resetting the value to zero after syncing, we avoid that and increase
throughput performance when backups are bigger than 128GiB on tape.
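
The fix boils down to resetting the counter once the threshold
triggers a commit (illustrative sketch, names assumed):

```
const COMMIT_THRESHOLD: u64 = 128 * 1024 * 1024 * 1024; // 128 GiB

/// Returns true when the caller should sync the tape and commit the
/// catalog; resets the counter so the threshold applies per interval.
fn should_commit(bytes_written: &mut u64, just_written: u64) -> bool {
    *bytes_written += just_written;
    if *bytes_written >= COMMIT_THRESHOLD {
        *bytes_written = 0; // previously missing, causing a commit per archive
        true
    } else {
        false
    }
}
```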

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-05-08 09:04:42 +02:00
Dietmar Maurer
de2cd9a688 api: delay datastore lookup after permission check
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2024-04-29 11:20:09 +02:00
Thomas Lamprecht
09da69cc10 update proxmox-metrics dependency to 0.3.1
to ensure that it can handle the recently lifted restrictions on the
organization and bucket parameters correctly by URL encoding them.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-04-26 17:55:47 +02:00
Gabriel Goller
3e69aba2d8 api-types: remove influxdb bucket name restrictions
Remove the regex for influxdb organizations and buckets. Influxdb does
not place any constraints on these names and allows all characters. This
allows influxdb organization names with slashes.

Also remove a duplicate comment and add some missing ones.

This also aligns the behavior to PVE as there are no restrictions there
either.

The motivation for this patch is this forum post:
https://forum.proxmox.com/threads/influx-db-organization-doesnt-allow-slash.145402/

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
2024-04-26 17:54:51 +02:00
Thomas Lamprecht
fea9358b72 update proxmox-sys dependency to 0.5.4
to ensure the next build contains the 78bf05a4 ("fix: use fragmented
block size for space calculation") improvement.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-04-26 17:54:08 +02:00
414 changed files with 18144 additions and 14243 deletions

View File

@ -3,3 +3,6 @@
directory = "/usr/share/cargo/registry"
[source.crates-io]
replace-with = "debian-packages"
[profile.release]
debug=true

View File

@ -1,5 +1,5 @@
[workspace.package]
version = "3.2.2"
version = "3.4.1"
authors = [
"Dietmar Maurer <dietmar@proxmox.com>",
"Dominik Csapak <d.csapak@proxmox.com>",
@ -13,6 +13,7 @@ authors = [
edition = "2021"
license = "AGPL-3"
repository = "https://git.proxmox.com/?p=proxmox-backup.git"
rust-version = "1.81"
[package]
name = "proxmox-backup"
@ -28,7 +29,6 @@ exclude = [ "build", "debian", "tests/catar_data/test_symlink/symlink1"]
[workspace]
members = [
"pbs-api-types",
"pbs-buildcfg",
"pbs-client",
"pbs-config",
@ -53,43 +53,51 @@ path = "src/lib.rs"
[workspace.dependencies]
# proxmox workspace
proxmox-apt = "0.10.5"
proxmox-apt = { version = "0.11", features = [ "cache" ] }
proxmox-apt-api-types = "1.0.1"
proxmox-async = "0.4"
proxmox-auth-api = "0.3"
proxmox-auth-api = "0.4"
proxmox-borrow = "1"
proxmox-compression = "0.2"
proxmox-config-digest = "0.1.0"
proxmox-daemon = "0.1.0"
proxmox-fuse = "0.1.3"
proxmox-http = { version = "0.9.0", features = [ "client", "http-helpers", "websocket" ] } # see below
proxmox-http = { version = "0.9.5", features = [ "client", "http-helpers", "websocket" ] } # see below
proxmox-human-byte = "0.1"
proxmox-io = "1.0.1" # tools and client use "tokio" feature
proxmox-lang = "1.1"
proxmox-log = "0.2.6"
proxmox-ldap = "0.2.1"
proxmox-metrics = "0.3"
proxmox-notify = "0.4"
proxmox-metrics = "0.3.1"
proxmox-notify = "0.5.1"
proxmox-openid = "0.10.0"
proxmox-rest-server = { version = "0.5.1", features = [ "templates" ] }
proxmox-rest-server = { version = "0.8.9", features = [ "templates" ] }
# some use "cli", some use "cli" and "server", pbs-config uses nothing
proxmox-router = { version = "2.0.0", default_features = false }
proxmox-rrd = { version = "0.1" }
proxmox-router = { version = "3.0.0", default-features = false }
proxmox-rrd = "0.4"
proxmox-rrd-api-types = "1.0.2"
# everything but pbs-config and pbs-client use "api-macro"
proxmox-schema = "3"
proxmox-schema = "4"
proxmox-section-config = "2"
proxmox-serde = "0.1.1"
proxmox-shared-cache = "0.1"
proxmox-shared-memory = "0.3.0"
proxmox-sortable-macro = "0.1.2"
proxmox-subscription = { version = "0.4.2", features = [ "api-types" ] }
proxmox-sys = "0.5.3"
proxmox-tfa = { version = "4.0.4", features = [ "api", "api-types" ] }
proxmox-time = "1.1.6"
proxmox-uuid = "1"
proxmox-subscription = { version = "0.5.0", features = [ "api-types" ] }
proxmox-sys = "0.6.7"
proxmox-systemd = "0.1"
proxmox-tfa = { version = "5", features = [ "api", "api-types" ] }
proxmox-time = "2"
proxmox-uuid = { version = "1", features = [ "serde" ] }
proxmox-worker-task = "0.1"
pbs-api-types = "0.2.2"
# other proxmox crates
pathpatterns = "0.3"
proxmox-acme = "0.5"
pxar = "0.10.2"
proxmox-acme = "0.5.3"
pxar = "0.12.1"
# PBS workspace
pbs-api-types = { path = "pbs-api-types" }
pbs-buildcfg = { path = "pbs-buildcfg" }
pbs-client = { path = "pbs-client" }
pbs-config = { path = "pbs-config" }
@ -105,23 +113,22 @@ anyhow = "1.0"
async-trait = "0.1.56"
apt-pkg-native = "0.3.2"
base64 = "0.13"
bitflags = "1.2.1"
bitflags = "2.4"
bytes = "1.0"
cidr = "0.2.1"
crc32fast = "1"
const_format = "0.2"
crossbeam-channel = "0.5"
endian_trait = { version = "0.6", features = ["arrays"] }
env_logger = "0.10"
env_logger = "0.11"
flate2 = "1.0"
foreign-types = "0.3"
futures = "0.3"
h2 = { version = "0.3", features = [ "stream" ] }
h2 = { version = "0.4", features = [ "legacy", "stream" ] }
handlebars = "3.0"
hex = "0.4.3"
http = "0.2"
hyper = { version = "0.14", features = [ "full" ] }
lazy_static = "1.4"
hickory-resolver = { version = "0.24.1", default-features = false, features = [ "system-config", "tokio-runtime" ] }
hyper = { version = "0.14", features = [ "backports", "deprecated", "full" ] }
libc = "0.2"
log = "0.4.17"
nix = "0.26.1"
@ -135,7 +142,6 @@ regex = "1.5.5"
rustyline = "9"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
serde_plain = "1"
siphasher = "0.3"
syslog = "6"
tar = "0.4"
@ -145,33 +151,29 @@ tokio = "1.6"
tokio-openssl = "0.6.1"
tokio-stream = "0.1.0"
tokio-util = { version = "0.7", features = [ "io" ] }
tracing = "0.1"
tower-service = "0.3.0"
udev = "0.4"
url = "2.1"
walkdir = "2"
xdg = "2.2"
zstd = { version = "0.12", features = [ "bindgen" ] }
zstd-safe = "6.0"
[dependencies]
anyhow.workspace = true
async-trait.workspace = true
apt-pkg-native.workspace = true
base64.workspace = true
bitflags.workspace = true
bytes.workspace = true
cidr.workspace = true
const_format.workspace = true
crc32fast.workspace = true
crossbeam-channel.workspace = true
endian_trait.workspace = true
flate2.workspace = true
futures.workspace = true
h2.workspace = true
handlebars.workspace = true
hex.workspace = true
http.workspace = true
hyper.workspace = true
lazy_static.workspace = true
libc.workspace = true
log.workspace = true
nix.workspace = true
@ -184,7 +186,6 @@ regex.workspace = true
rustyline.workspace = true
serde.workspace = true
serde_json.workspace = true
siphasher.workspace = true
syslog.workspace = true
termcolor.workspace = true
thiserror.workspace = true
@ -192,24 +193,27 @@ tokio = { workspace = true, features = [ "fs", "io-util", "io-std", "macros", "n
tokio-openssl.workspace = true
tokio-stream.workspace = true
tokio-util = { workspace = true, features = [ "codec" ] }
tower-service.workspace = true
tracing.workspace = true
udev.workspace = true
url.workspace = true
walkdir.workspace = true
xdg.workspace = true
zstd.workspace = true
#valgrind_request = { git = "https://github.com/edef1c/libvalgrind_request", version = "1.1.0", optional = true }
# proxmox workspace
proxmox-apt.workspace = true
proxmox-apt-api-types.workspace = true
proxmox-async.workspace = true
proxmox-auth-api = { workspace = true, features = [ "api", "pam-authenticator" ] }
proxmox-compression.workspace = true
proxmox-config-digest.workspace = true
proxmox-daemon.workspace = true
proxmox-http = { workspace = true, features = [ "client-trait", "proxmox-async", "rate-limited-stream" ] } # pbs-client doesn't use these
proxmox-human-byte.workspace = true
proxmox-io.workspace = true
proxmox-lang.workspace = true
proxmox-log.workspace = true
proxmox-ldap.workspace = true
proxmox-metrics.workspace = true
proxmox-notify = { workspace = true, features = [ "pbs-context" ] }
@ -219,21 +223,23 @@ proxmox-router = { workspace = true, features = [ "cli", "server"] }
proxmox-schema = { workspace = true, features = [ "api-macro" ] }
proxmox-section-config.workspace = true
proxmox-serde = { workspace = true, features = [ "serde_json" ] }
proxmox-shared-cache.workspace = true
proxmox-shared-memory.workspace = true
proxmox-sortable-macro.workspace = true
proxmox-subscription.workspace = true
proxmox-sys = { workspace = true, features = [ "timer" ] }
proxmox-systemd.workspace = true
proxmox-tfa.workspace = true
proxmox-time.workspace = true
proxmox-uuid.workspace = true
proxmox-worker-task.workspace = true
pbs-api-types.workspace = true
# in their respective repo
pathpatterns.workspace = true
proxmox-acme.workspace = true
pxar.workspace = true
# proxmox-backup workspace/internal crates
pbs-api-types.workspace = true
pbs-buildcfg.workspace = true
pbs-client.workspace = true
pbs-config.workspace = true
@ -242,21 +248,27 @@ pbs-key-config.workspace = true
pbs-tape.workspace = true
pbs-tools.workspace = true
proxmox-rrd.workspace = true
proxmox-rrd-api-types.workspace = true
# Local path overrides
# NOTE: You must run `cargo update` after changing this for it to take effect!
[patch.crates-io]
#pbs-api-types = { path = "../proxmox/pbs-api-types" }
#proxmox-acme = { path = "../proxmox/proxmox-acme" }
#proxmox-apt = { path = "../proxmox/proxmox-apt" }
#proxmox-apt-api-types = { path = "../proxmox/proxmox-apt-api-types" }
#proxmox-async = { path = "../proxmox/proxmox-async" }
#proxmox-auth-api = { path = "../proxmox/proxmox-auth-api" }
#proxmox-borrow = { path = "../proxmox/proxmox-borrow" }
#proxmox-compression = { path = "../proxmox/proxmox-compression" }
#proxmox-config-digest = { path = "../proxmox/proxmox-config-digest" }
#proxmox-daemon = { path = "../proxmox/proxmox-daemon" }
#proxmox-fuse = { path = "../proxmox-fuse" }
#proxmox-http = { path = "../proxmox/proxmox-http" }
#proxmox-human-byte = { path = "../proxmox/proxmox-human-byte" }
#proxmox-io = { path = "../proxmox/proxmox-io" }
#proxmox-lang = { path = "../proxmox/proxmox-lang" }
#proxmox-log = { path = "../proxmox/proxmox-log" }
#proxmox-ldap = { path = "../proxmox/proxmox-ldap" }
#proxmox-metrics = { path = "../proxmox/proxmox-metrics" }
#proxmox-notify = { path = "../proxmox/proxmox-notify" }
@ -264,6 +276,7 @@ proxmox-rrd.workspace = true
#proxmox-rest-server = { path = "../proxmox/proxmox-rest-server" }
#proxmox-router = { path = "../proxmox/proxmox-router" }
#proxmox-rrd = { path = "../proxmox/proxmox-rrd" }
#proxmox-rrd-api-types = { path = "../proxmox/proxmox-rrd-api-types" }
#proxmox-schema = { path = "../proxmox/proxmox-schema" }
#proxmox-section-config = { path = "../proxmox/proxmox-section-config" }
#proxmox-serde = { path = "../proxmox/proxmox-serde" }
@ -271,11 +284,12 @@ proxmox-rrd.workspace = true
#proxmox-sortable-macro = { path = "../proxmox/proxmox-sortable-macro" }
#proxmox-subscription = { path = "../proxmox/proxmox-subscription" }
#proxmox-sys = { path = "../proxmox/proxmox-sys" }
#proxmox-systemd = { path = "../proxmox/proxmox-systemd" }
#proxmox-tfa = { path = "../proxmox/proxmox-tfa" }
#proxmox-time = { path = "../proxmox/proxmox-time" }
#proxmox-uuid = { path = "../proxmox/proxmox-uuid" }
#proxmox-worker-task = { path = "../proxmox/proxmox-worker-task" }
#proxmox-acme = { path = "../proxmox/proxmox-acme" }
#pathpatterns = {path = "../pathpatterns" }
#pxar = { path = "../pxar" }

View File

@ -1,8 +1,10 @@
include /usr/share/dpkg/default.mk
include /usr/share/rustc/architecture.mk
include defines.mk
PACKAGE := proxmox-backup
ARCH := $(DEB_BUILD_ARCH)
export DEB_HOST_RUST_TYPE
SUBDIRS := etc www docs templates
@ -33,15 +35,23 @@ RESTORE_BIN := \
SUBCRATES != cargo metadata --no-deps --format-version=1 \
| jq -r .workspace_members'[]' \
| awk '!/^proxmox-backup[[:space:]]/ { printf "%s ", $$1 }'
| grep "$$PWD/" \
| sed -e "s!.*$$PWD/!!g" -e 's/\#.*$$//g' -e 's/)$$//g'
STATIC_TARGET_DIR := target/static-build
ifeq ($(BUILD_MODE), release)
CARGO_BUILD_ARGS += --release
COMPILEDIR := target/release
CARGO_BUILD_ARGS += --release --target $(DEB_HOST_RUST_TYPE)
COMPILEDIR := target/$(DEB_HOST_RUST_TYPE)/release
STATIC_COMPILEDIR := $(STATIC_TARGET_DIR)/$(DEB_HOST_RUST_TYPE)/release
else
COMPILEDIR := target/debug
CARGO_BUILD_ARGS += --target $(DEB_HOST_RUST_TYPE)
COMPILEDIR := target/$(DEB_HOST_RUST_TYPE)/debug
STATIC_COMPILEDIR := $(STATIC_TARGET_DIR)/$(DEB_HOST_RUST_TYPE)/debug
endif
STATIC_RUSTC_FLAGS := -C target-feature=+crt-static -L $(STATIC_COMPILEDIR)/deps-stubs/
ifeq ($(valgrind), yes)
CARGO_BUILD_ARGS += --features valgrind
endif
@ -51,6 +61,9 @@ CARGO ?= cargo
COMPILED_BINS := \
$(addprefix $(COMPILEDIR)/,$(USR_BIN) $(USR_SBIN) $(SERVICE_BIN) $(RESTORE_BIN))
STATIC_BINS := \
$(addprefix $(STATIC_COMPILEDIR)/,proxmox-backup-client-static pxar-static)
export DEB_VERSION DEB_VERSION_UPSTREAM
SERVER_DEB=$(PACKAGE)-server_$(DEB_VERSION)_$(ARCH).deb
@ -59,10 +72,12 @@ CLIENT_DEB=$(PACKAGE)-client_$(DEB_VERSION)_$(ARCH).deb
CLIENT_DBG_DEB=$(PACKAGE)-client-dbgsym_$(DEB_VERSION)_$(ARCH).deb
RESTORE_DEB=proxmox-backup-file-restore_$(DEB_VERSION)_$(ARCH).deb
RESTORE_DBG_DEB=proxmox-backup-file-restore-dbgsym_$(DEB_VERSION)_$(ARCH).deb
STATIC_CLIENT_DEB=$(PACKAGE)-client-static_$(DEB_VERSION)_$(ARCH).deb
STATIC_CLIENT_DBG_DEB=$(PACKAGE)-client-static-dbgsym_$(DEB_VERSION)_$(ARCH).deb
DOC_DEB=$(PACKAGE)-docs_$(DEB_VERSION)_all.deb
DEBS=$(SERVER_DEB) $(SERVER_DBG_DEB) $(CLIENT_DEB) $(CLIENT_DBG_DEB) \
$(RESTORE_DEB) $(RESTORE_DBG_DEB)
$(RESTORE_DEB) $(RESTORE_DBG_DEB) $(STATIC_CLIENT_DEB) $(STATIC_CLIENT_DBG_DEB)
DSC = rust-$(PACKAGE)_$(DEB_VERSION).dsc
@ -70,7 +85,7 @@ DESTDIR=
tests ?= --workspace
all: $(SUBDIRS)
all: proxmox-backup-client-static $(SUBDIRS)
.PHONY: $(SUBDIRS)
$(SUBDIRS):
@ -108,12 +123,15 @@ proxmox-backup-docs: build
cd build; dpkg-buildpackage -b -us -uc --no-pre-clean
lintian $(DOC_DEB)
# copy the local target/ dir as a build-cache
.PHONY: deb dsc deb-nodoc
.PHONY: deb dsc deb-nodoc deb-nostrip
deb-nodoc: build
cd build; dpkg-buildpackage -b -us -uc --no-pre-clean --build-profiles=nodoc
lintian $(DEBS)
deb-nostrip: build
cd build; DEB_BUILD_OPTIONS=nostrip dpkg-buildpackage -b -us -uc
lintian $(DEBS) $(DOC_DEB)
$(DEBS): deb
deb: build
cd build; dpkg-buildpackage -b -us -uc
@ -137,7 +155,7 @@ clean: clean-deb
$(foreach i,$(SUBDIRS), \
$(MAKE) -C $(i) clean ;)
$(CARGO) clean
rm -f .do-cargo-build
rm -f .do-cargo-build .do-static-cargo-build
# allows one to avoid running cargo clean when one just wants to tidy up after a package build
clean-deb:
@ -176,6 +194,7 @@ $(COMPILED_BINS) $(COMPILEDIR)/dump-catalog-shell-cli $(COMPILEDIR)/docgen: .do-
--bin proxmox-restore-daemon \
--package proxmox-backup \
--bin docgen \
--bin pbs2to3 \
--bin proxmox-backup-api \
--bin proxmox-backup-manager \
--bin proxmox-backup-proxy \
@ -185,12 +204,25 @@ $(COMPILED_BINS) $(COMPILEDIR)/dump-catalog-shell-cli $(COMPILEDIR)/docgen: .do-
--bin sg-tape-cmd
touch "$@"
.PHONY: proxmox-backup-client-static
proxmox-backup-client-static:
rm -f .do-static-cargo-build
$(MAKE) $(STATIC_BINS)
$(STATIC_BINS): .do-static-cargo-build
.do-static-cargo-build:
mkdir -p $(STATIC_COMPILEDIR)/deps-stubs/ && \
echo '!<arch>' > $(STATIC_COMPILEDIR)/deps-stubs/libsystemd.a # workaround for to greedy linkage and proxmox-systemd
$(CARGO) rustc $(CARGO_BUILD_ARGS) --package pxar-bin --bin pxar \
--target-dir $(STATIC_TARGET_DIR) -- $(STATIC_RUSTC_FLAGS)
$(CARGO) rustc $(CARGO_BUILD_ARGS) --package proxmox-backup-client --bin proxmox-backup-client \
--target-dir $(STATIC_TARGET_DIR) -- $(STATIC_RUSTC_FLAGS)
.PHONY: lint
lint:
cargo clippy -- -A clippy::all -D clippy::correctness
install: $(COMPILED_BINS)
install: $(COMPILED_BINS) $(STATIC_BINS)
install -dm755 $(DESTDIR)$(BINDIR)
install -dm755 $(DESTDIR)$(ZSH_COMPL_DEST)
$(foreach i,$(USR_BIN), \
@ -209,16 +241,19 @@ install: $(COMPILED_BINS)
install -m4755 -o root -g root $(COMPILEDIR)/sg-tape-cmd $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/sg-tape-cmd
$(foreach i,$(SERVICE_BIN), \
install -m755 $(COMPILEDIR)/$(i) $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/ ;)
install -m755 $(STATIC_COMPILEDIR)/proxmox-backup-client $(DESTDIR)$(BINDIR)/proxmox-backup-client-static
install -m755 $(STATIC_COMPILEDIR)/pxar $(DESTDIR)$(BINDIR)/pxar-static
$(MAKE) -C www install
$(MAKE) -C docs install
$(MAKE) -C templates install
.PHONY: upload
upload: UPLOAD_DIST ?= $(DEB_DISTRIBUTION)
upload: $(SERVER_DEB) $(CLIENT_DEB) $(RESTORE_DEB) $(DOC_DEB)
upload: $(SERVER_DEB) $(CLIENT_DEB) $(RESTORE_DEB) $(DOC_DEB) $(STATIC_CLIENT_DEB)
# check if working directory is clean
git diff --exit-code --stat && git diff --exit-code --stat --staged
tar cf - $(SERVER_DEB) $(SERVER_DBG_DEB) $(DOC_DEB) $(CLIENT_DEB) $(CLIENT_DBG_DEB) \
| ssh -X repoman@repo.proxmox.com upload --product pbs --dist $(UPLOAD_DIST)
tar cf - $(CLIENT_DEB) $(CLIENT_DBG_DEB) | ssh -X repoman@repo.proxmox.com upload --product "pve,pmg,pbs-client" --dist $(UPLOAD_DIST)
tar cf - $(STATIC_CLIENT_DEB) $(STATIC_CLIENT_DBG_DEB) | ssh -X repoman@repo.proxmox.com upload --product "pbs-client" --dist $(UPLOAD_DIST)
tar cf - $(RESTORE_DEB) $(RESTORE_DBG_DEB) | ssh -X repoman@repo.proxmox.com upload --product "pve" --dist $(UPLOAD_DIST)

View File

@ -5,8 +5,11 @@ Build & Release Notes
``rustup`` Toolchain
====================
We normally want to build with the ``rustc`` Debian package. To do that
you can set the following ``rustup`` configuration:
We normally want to build with the ``rustc`` Debian package (see below). If you
still want to use ``rustup`` for other reasons (e.g. to easily switch between
the official stable, beta, and nightly compilers), you should set the following
``rustup`` configuration to use the Debian-provided ``rustc`` compiler
by default:
# rustup toolchain link system /usr
# rustup default system
@ -30,7 +33,7 @@ pre-release version number (e.g., "0.1.1-dev.1" instead of "0.1.0").
Local cargo config
==================
This repository ships with a ``.cargo/config`` that replaces the crates.io
This repository ships with a ``.cargo/config.toml`` that replaces the crates.io
registry with packaged crates located in ``/usr/share/cargo/registry``.
A similar config is also applied building with dh_cargo. Cargo.lock needs to be

debian/changelog
View File

@ -1,3 +1,705 @@
rust-proxmox-backup (3.4.1-1) bookworm; urgency=medium
* ui: token view: fix typo in 'lose' and rephrase token regenerate dialog
message for more clarity.
* restrict consent-banner text length to 64 KiB.
* docs: describe the intent of the statically linked pbs client.
* api: backup: include previous snapshot name in log message.
* garbage collection: account for created/deleted index files concurrently
to GC to avoid potentially confusing log messages.
* garbage collection: fix rare race in chunk marking phase for setups doing
high frequent backups in quick succession while immediately pruning to a
single backup snapshot being left over after each such backup.
* tape: wait for calibration of LTO-9 tapes in general, not just in the
initial tape format procedure.
-- Proxmox Support Team <support@proxmox.com> Wed, 16 Apr 2025 14:45:37 +0200
rust-proxmox-backup (3.4.0-1) bookworm; urgency=medium
* fix #4788: build statically linked version of the proxmox-backup-client
package.
* ui: sync job: change the rate limit direction based on sync direction.
* docs: mention how to set the push sync jobs rate limit
* ui: set error mask: ensure that message is html-encoded to avoid visual
glitches.
* api server: increase maximal request body size from 64 KiB to 512 KiB,
similar to a recent change for our perl based projects.
* notifications: include Content-Length header for broader compatibility in
the webhook and gotify targets.
* notifications: allow overriding notification templates.
* docs: notifications: add section about how to use custom templates
* sync: print whole error chain per group on failure for more context.
* ui: options-view: fix typo in empty-text for GC tuning option.
* memory info: use the "MemAvailable" field from '/proc/meminfo' to compute
used memory to fix overestimation of that metric, to better align with
what modern versions of tools like `free` do, and to future-proof against
changes in how the kernel accounts for memory usage.
* add "MemAvailable" field to ProcFsMemInfo to promote its usage over the
existing "MemFree" field, which is almost never the right choice. This new
field will be provided for external metric server.
* docs: mention different name resolution for statically linked binary.
* docs: add basic info for how to install the statically linked client.
* docs: mention new verify-only and encrypted-only flags for sync jobs.
-- Proxmox Support Team <support@proxmox.com> Wed, 09 Apr 2025 17:41:38 +0200
rust-proxmox-backup (3.3.7-1) bookworm; urgency=medium
* fix #5982: garbage collection: add a check to ensure that the underlying
file system supports and honors file access time (atime) updates.
The check is performed once on datastore creation and on start of every
garbage collection (GC) task, just to be sure. It can be disabled in the
datastore tuning options.
* garbage collection: support setting a custom access time cutoff,
overriding the default of one day and five minutes.
* ui: expose flag for GC access time support safety check and the access
time cutoff override in datastore tuning options.
* docs: describe rationale for new GC access time update check setting and
the access time cutoff check in tuning options.
* access control: add support to mark a specific authentication realm as
default selected realm for the login user interface.
* fix #4382: api: access control: remove permissions of token on deletion.
* fix #3887: api: access control: allow users to regenerate the secret of an
API token without changing any existing ACLs.
* fix #6072: sync jobs: support flags to limit sync to only encrypted and/or
verified snapshots.
* ui: datastore tuning options: expose overriding GC cache capacity so that
admins can either restrict the peak memory usage during GC or allow GC to
use more memory to reduce file system IO even for huge (multiple TiB)
referenced data in backup groups.
* ui: datastore tuning options: increase width and rework labels to provide
a tiny bit more context about what these options are.
* ui: sync job: increase edit window width to 720px to make it less cramped.
* ui: sync job: small field label casing consistency fixes.
-- Proxmox Support Team <support@proxmox.com> Sat, 05 Apr 2025 17:54:31 +0200
rust-proxmox-backup (3.3.6-1) bookworm; urgency=medium
* datastore: ignore group locking errors when removing snapshots; they
normally happen only due to old-locking, and the underlying snapshot is
deleted in any case at this point, so there is no point in confusing the user.
* api: datastore: add error message on failed removal due to old locking and
tell any admin what they can do to switch to the new locking.
* ui: only add delete parameter on token edit, not when creating tokens.
* pbs-client: allow reading fingerprint from system credential.
* docs: client: add section about system credentials integration.
-- Proxmox Support Team <support@proxmox.com> Thu, 03 Apr 2025 17:57:02 +0200
rust-proxmox-backup (3.3.5-1) bookworm; urgency=medium
* api: config: use guard for unmounting on failed datastore creation
* client: align description for backup specification to docs, using
`archive-name` and `type` over `label` and `ext`.
* client: read credentials from CREDENTIALS_DIRECTORY environment variable
following the "System and Service Credentials" specification. This allows
users to use native systemd capabilities for credential management if the
proxmox-backup-client is used in systemd units or, e.g., through a wrapper
like systemd-run.
* fix #3935: datastore/api/backup: move datastore locking to '/run' to avoid
that lock-files can block deleting backup groups or snapshots on the
datastore and to decouple locking from the underlying datastore
file-system.
* api: fix race when changing the owner of a backup-group.
* fix #3336: datastore: remove group if the last snapshot is removed to
avoid confusing situations where the group directory still exists and
blocks re-creating a group with another owner even though the empty group
was not visible in the web UI.
* notifications: clean-up and add dedicated types for all templates so as to
allow declaring that interface stable in preparation for allowing
overriding them in the future (not included in this release).
* tape: introduce a tape backup job worker-thread option for restores.
Depending on the underlying storage using more threads can dramatically
improve the restore speed. Especially fast storage with low penalty for
random access, like flash-storage (SSDs) can profit from using more
worker threads. But on file systems backed by spinning disks (HDDs) the
performance can even degrade with more threads. This is why for now the
default is left at a single thread and the admin needs to tune this for
their storage.
* garbage collection: generate index file list via datastore iterators in a
structured manner.
* fix #5331: garbage collection: keep track of the recently marked chunks in
phase 1 of garbage collection to avoid multiple atime updates via
relatively expensive utimensat (touch) calls.
Use an LRU cache with a size of 32 MiB for tracking already processed chunks,
this fully covers backup groups referencing up to 4 TiB of actual chunks
and even bigger ones can still benefit from the cache. On some real-world
benchmarks of a datastore with 1.5 million chunks, and original data
usage of 120 TiB and a referenced data usage of 2.7 TiB (high
deduplication count due to long-term history) we measured 21.1 times fewer
file updates (31.6 million) and a 6.1 times reduction in total GC runtime
(155.4 s to 22.8 s) on a ZFS RAID 10 system consisting of spinning HDDs
and a special device mirror backed by datacenter SSDs.
* logging helper: use new builder initializer, no functional change
intended.
-- Proxmox Support Team <support@proxmox.com> Wed, 02 Apr 2025 19:42:38 +0200
rust-proxmox-backup (3.3.4-1) bookworm; urgency=medium
* fix #6185: client/docs: explicitly mention archive name restrictions
* docs: using-the-installer: adapt to raised root password length requirement
* disks: wipe: replace dd with write_all_at for zeroing disk
* fix #5946: disks: wipe: ensure GPT header backup is wiped
* docs: fix hash collision probability comparison
-- Proxmox Support Team <support@proxmox.com> Thu, 13 Mar 2025 13:04:05 +0100
rust-proxmox-backup (3.3.3-1) bookworm; urgency=medium
* api: datastore list: move checking if a datastore is mounted after we
ensured that the user may actually access it. While this had no effect
security wise, it could significantly increase the cost of this API
endpoint in big setups with many datastores and many tenants that each
have only access to one, or a small set, of datastores.
* Revert "fix #5710: api: backup: stat known chunks on backup finish" due to
a big performance impact relative to what this is protecting against. We
will work out a more efficient fix for this issue in the future.
* prune simulator: show backup entries that are kept also in the flat list
of backups, not just in the calendar view.
* docs: improve the description for the garbage collection's cut-off time
* pxar extract: correctly honor the overwrite flag
* api: datastore: add missing log context for prune to avoid a case where
the worker state is unknown after it finished.
* docs: add synopsis and basic docs for prune job configuration
* backup verification: handle manifest update errors as non-fatal to avoid
that the job fails, as we want to continue with verifying the rest to
ensure we uncover as many potential problems as possible.
* fix #4408: docs: add 'disaster recovery' section for tapes
* fix #6069: prune simulator: correctly handle schedules that mix both a
range and a step size at once.
* client: pxar: fix a race condition where the backup upload stream can miss
an error from the create archive function, because the error state is only
set after the backup stream was already polled. This avoids an edge case
where a file-based backup was incorrectly marked as having succeeded while
there was an error.
-- Proxmox Support Team <support@proxmox.com> Tue, 11 Feb 2025 20:24:27 +0100
rust-proxmox-backup (3.3.2-2) bookworm; urgency=medium
* file-restore: fix regression with the new blockdev method used to pass
disks of a backup to the isolated virtual machine.
-- Proxmox Support Team <support@proxmox.com> Tue, 10 Dec 2024 12:14:47 +0100
rust-proxmox-backup (3.3.2-1) bookworm; urgency=medium
* pbs-client: remove `log` dependency and migrate to our common,
`tracing`-based, logging infrastructure. No semantic change intended.
* file restore: switch to more modern blockdev option for drives in QEMU
wrapper for the restore VM.
* pxar: client: fix missing file size check for metadata comparison
-- Proxmox Support Team <support@proxmox.com> Mon, 09 Dec 2024 10:37:32 +0100
rust-proxmox-backup (3.3.1-1) bookworm; urgency=medium
* tree-wide: add missing O_CLOEXEC flags to `openat` calls to avoid passing
any open FD to new child processes which can have undesired side-effects
like keeping a lock open longer than it should.
* cargo: update proxmox dependency of rest-server and sys crates to include
some fixes for open FDs and a fix for the active task worker tracking, as
on failing to update the index file the daemon did not finish the
worker, causing a reference count issue where an old daemon could keep
running forever.
* ui: check that store is set before trying to select anything in the garbage
collection (GC) job view.
-- Proxmox Support Team <support@proxmox.com> Tue, 03 Dec 2024 18:11:04 +0100
rust-proxmox-backup (3.3.0-2) bookworm; urgency=medium
* tree-wide: fix various typos.
* ui: fix remove vanished tooltip to be valid for both sync directions.
* ui: mask unmounted datastores in datastore overview.
* server: push: fix supported api version check for minor version bump.
-- Proxmox Support Team <support@proxmox.com> Thu, 28 Nov 2024 13:03:03 +0100
rust-proxmox-backup (3.3.0-1) bookworm; urgency=medium
* GC: add safety-check for nested datastore
* ui: make some more strings translatable
* docs: make sphinx ignore the environment cache to avoid missing synopsis
in some HTML output, like for example the "Command Syntax" appendix.
* docs: add note for why FAT is not supported as a backing file system for
datastores
* api: disks: directory: fail if mount unit already exists for a new file
system
* filter partitions without proper UUID in partition selector
* ui: version info: replace wrong hyphen separator with dot
-- Proxmox Support Team <support@proxmox.com> Wed, 27 Nov 2024 20:38:41 +0100
rust-proxmox-backup (3.2.14-1) bookworm; urgency=medium
* pull-sync: do not interpret older missing snapshots as needs-resync
* api: directory: use relative path when creating removable datastore
* ui: prune keep input: actually clear value on clear trigger click
* ui: datastore edit: fix empty-text for path field
* sync: push: pass full error context when returning error to job
* api: mount removable datastore: only log an informational message if the
correct device is already mounted.
* api: sync: restrict edit permissions for the new push sync jobs to avoid
that a user is able to create or edit sync jobs in push direction, but not
able to see them.
* api: create datastore: fix checks to avoid that any datastore can contain
another one to better handle the case for the new removable datastores.
-- Proxmox Support Team <support@proxmox.com> Wed, 27 Nov 2024 14:42:56 +0100
rust-proxmox-backup (3.2.13-1) bookworm; urgency=medium
* update pxar dependency to fix selective extraction with the newly
supported match patterns.
* reuse-datastore: avoid creating another prune job
* api: notification: add API routes for webhook targets
* management cli: add CLI for webhook targets
* ui: utils: enable webhook edit window
* ui: utils: add task description for mounting/unmounting
* ui: add onlineHelp for consent-banner option
* docs: client: fix example commands for client usage
* docs: explain some further caveats of the change detection modes
* ui: use same label for removable datastore created from disk
* api: maintenance: allow setting of maintenance mode if 'unmounting'
* docs: add more information for removable datastores
* ui: sync jobs: revert to single list for pull/push jobs, improve
distinction between push and pull jobs through other means.
* ui: sync jobs: change default sorting to 'store' -> 'direction' -> 'id'
* ui: sync jobs: add search filter-box
* config: sync: use same config section type `sync` for push and pull; note
that this breaks existing configurations and needs manual clean-up. As the
package versions never made it beyond testing this is accepted: while it's
not really ideal, we never give guarantees for testing package versions,
and the maintenance burden with the old style would not be ideal either.
* api: removable datastores: require Sys.Modify permission on /system/disks
* ui: allow resetting unmounting maintenance
* datastore: re-phrase error message when datastore is unavailable
* client: backup writer: fix regression in progress output
-- Proxmox Support Team <support@proxmox.com> Tue, 26 Nov 2024 17:05:23 +0100
rust-proxmox-backup (3.2.12-1) bookworm; urgency=medium
* fix #5853: client: pxar: exclude stale files on metadata/link read
* docs: fix wrong product name in certificate docs
* docs: explain the working principle of the change detection modes
* allow datastore creation in directory with lost+found directory
* fix #5801: manager: switch datastore update command to real API call to
avoid early cancellation of the task.
* server: push: consistently use remote over target for error messages and
various smaller improvements to related log messages.
* push: move log messages for removed snapshot/group
* fix #5710: api: backup: stat known chunks on backup finish to ensure any
problem/corruption is caught earlier.
* pxar: extract: make invalid ACLs non-fatal, but only log them; there's
nothing to gain by failing the restore completely.
* server: push: log encountered empty backup groups during sync
* fix #3786: ui, api, cli: add resync-corrupt option to sync jobs
* docs: add security implications of prune and change detection mode
* fix #2996: client: backup restore: add support to pass match patterns for
a selective restore
* docs: add installation media preparation and installation wizard guides
* api: enforce minimum character limit of 8 on new passwords to follow
recent NIST recommendations.
* ui, api: support configuring a consent banner that is shown before login
to allow complying with some (government) policy frameworks.
* ui, api: add initial support for removable datastores, providing better
integration for datastores located on a non-permanently attached medium.
-- Proxmox Support Team <support@proxmox.com> Mon, 25 Nov 2024 22:52:11 +0100
rust-proxmox-backup (3.2.11-1) bookworm; urgency=medium
* fix #3044: server: implement push support for sync operations
* push sync related refactors
-- Proxmox Support Team <support@proxmox.com> Thu, 21 Nov 2024 12:03:50 +0100
rust-proxmox-backup (3.2.10-1) bookworm; urgency=medium
* api: disk list: do not fail but just log error on gathering smart data
* cargo: require proxmox-log 0.2.6 to reduce spamming the logs with the
whole worker task contents
-- Proxmox Support Team <support@proxmox.com> Tue, 19 Nov 2024 22:36:14 +0100
rust-proxmox-backup (3.2.9-1) bookworm; urgency=medium
* client: catalog: fallback to metadata archives for dumping the catalog
* client: catalog shell: make the catalog optional and use the pxar accessor
for navigation if the catalog is not provided, as is the case, for
example, for split pxar archives.
* client: catalog shell: drop payload offset in `stat` output, as this is an
internal value that only helps when debugging specific development issues.
* sync: fix premature return in snapshot-skip filter logic to avoid that the
first snapshot newer than the last synced one gets unconditionally
included.
* fix #5861: ui: remove minimum required username length in dialog for
changing the owner of a backup group, as PBS has supported usernames
shorter than 4 characters for a while now.
* fix #5439: allow one to reuse an existing datastore on datastore creation
* ui: disallow datastore in the file system root, this is almost never what
users want and they can still use the CLI for such an edge case.
* fix #5233: api: tape: add explicit required permissions for the move tape,
update tape and destroy tape endpoints, requiring Tape.Modify and
Tape.Write on the `/tape` ACL object path, respectively. This avoids
requiring the use of the root account for basic tape management.
* client: catalog shell: make the root element its own parent to avoid
navigating below archive root, which makes no sense and just causes odd
glitches.
* api: disk management: avoid retrieving lsblk result twice when listing
disks; while it's not overly expensive, it certainly does not help
performance either.
* api: disk management: parallelize retrieving the output from smartctl
checks.
* fix #5600: pbs2to3: make check more flexible to allow one to run arbitrary
newer '-pve' kernels after upgrade
* client: pxar: perform match pattern check for exclusion only once
* client: pxar: add debug output for exclude pattern matches to more
conveniently debug possible issues.
* fix #5868: rest-server: handshake detection: avoid infinite loop on
connection abort
-- Proxmox Support Team <support@proxmox.com> Thu, 14 Nov 2024 16:10:10 +0100
rust-proxmox-backup (3.2.8-1) bookworm; urgency=medium
* switch various log statements in worker tasks to the newer, more flexible
proxmox log crate. With this change, errors from task logs are now also
logged to the system log, increasing their visibility.
* datastore api: list snapshots: avoid calculating protected attribute
twice per snapshot; this reduces the amount of file metadata requests.
* avoid re-calculating the backup snapshot path's date time component when
getting the full path, reducing calls to the relatively slow strftime
function from libc.
* fix #3699: client: prefer the XDG cache directory for temporary files with
a fallback to using /tmp, as before.
* sync job: improve log message for when syncing the root namespace.
* client: increase read buffer from 8 KiB to 4 MiB for raw image based
backups. This reduces the time spent polling between the reader, chunker
and uploader async tasks and thus can improve backup speed significantly,
especially on setups with fast network and storage.
* client benchmark: avoid unnecessary allocation in the AES benchmark,
which caused artificial overhead. The benchmark AES results should now be more
in line with the hardware capability and what the PBS client could already
do. On our test system we saw an increase by a factor of 2.3 on this
specific benchmark.
* docs: add external metrics server page
* tfa: webauthn: serialize OriginUrl following RFC6454
* factor out apt and apt-repository handling into a new library crate for
re-use in other projects. There should be no functional change.
* fix various typos all over the place found using the rust based `typos`
tool.
* datastore: data blob compression: increase compression throughput by
switching away from a higher level zstd method to a lower level one, which
allows us to control the target buffer size directly and thus avoid some
allocation and syscall overhead. We saw the compression bandwidth increase
by a factor of 1.19 in our tests where both the source data and the target
datastore were located in a memory-backed tmpfs.
* daily-update: ensure notification system context is initialized.
* backup reader: derive if debug messages should be printed from the global
log level. This avoids printing some debug messages by default, e.g., the
"protocol upgrade done" message from sync jobs.
* ui: user view: disable 'Unlock TFA' button by default to improve UX if no
user is selected.
* manager cli: ensure the worker task finishes when triggering a reload of
the system network.
* fix #5622: backup client: properly handle rate and burst parameters.
Previously, passing any non-integer value, like `1mb`, was ignored.
* tape: read element status: ignore responses where the library specifies
that it will return a volume tag but then does not include that field in
the actual response. As both the primary and the alternative volume tag
are not required by PBS, this specific error can simply be downgraded to a
warning.
* pxar: dump archive: print entries to stdout instead of stderr
* sync jobs: various clean-ups and refactoring that should not result in any
semantic change.
* metric collection: put metrics in a cache with a 30 minutes lifetime.
* api: add /status/metrics API to allow pull-based metric server to gather
data directly.
* partial fix #5560: client: periodically show backup progress
* docs: add proxmox-backup.node.cfg man page
* docs: sync: explicitly mention `removed-vanish` flag
-- Proxmox Support Team <support@proxmox.com> Fri, 18 Oct 2024 19:05:41 +0200
rust-proxmox-backup (3.2.7-1) bookworm; urgency=medium
* docs: drop blanket statement recommending against remote storage
* ui: gc job edit: fix i18n gettext usage
* pxar: improve error handling, e.g., avoiding duplicate information
* close #4763: client: add command to forget (delete) whole backup group
with all its snapshots
* close #5571: client: fix regression for `map` command
* client: mount: wait for child to return before exiting to provide better
UX for some edge paths
* fix #5304: client: set process uid/gid for `.pxarexclude-cli` to avoid
issues when trying to backup and restore the backup as non-root user.
* http client: keep renewal future running on failed re-auth to make it more
resilient against some transient errors, like the request just failing due
to network instability.
* datastore: fix problem with operations counting for the case where the
`.chunks/` directory is not available (deleted/moved)
* manager: use confirmation helper in wipe-disk command
-- Proxmox Support Team <support@proxmox.com> Wed, 03 Jul 2024 13:33:51 +0200
rust-proxmox-backup (3.2.6-1) bookworm; urgency=medium
* tape: disable Programmable Early Warning Zone (PEWZ)
* tape: handle PEWZ like regular early warning
* docs: add note for not using remote storages
* client: pxar: fix fuse mount performance for split archives
-- Proxmox Support Team <support@proxmox.com> Mon, 17 Jun 2024 10:18:13 +0200
rust-proxmox-backup (3.2.5-1) bookworm; urgency=medium
* pxar: add support for split archives
* fix #3174: pxar: enable caching and meta comparison
* docs: file formats: describe split pxar archive file layout
* docs: add section describing change detection mode
* api: datastore: add optional archive-name to file-restore
* client: backup: conditionally write catalog for file level backups
* docs: add table listing possible change detection modes
-- Proxmox Support Team <support@proxmox.com> Mon, 10 Jun 2024 13:39:54 +0200
rust-proxmox-backup (3.2.4-1) bookworm; urgency=medium
* fix: network api: permission using wrong pathname
* fix #5503: d/control: bump dependency for proxmox-widget-toolkit
* auth: add locking to `PbsAuthenticator` to avoid race conditions
-- Proxmox Support Team <support@proxmox.com> Wed, 05 Jun 2024 16:23:38 +0200
rust-proxmox-backup (3.2.3-1) bookworm; urgency=medium
* api-types: remove influxdb bucket name restrictions
* api: datastore status: delay lookup after permission check to improve
consistency of tracked read operations
* tape: improve throughput by not unnecessarily syncing/committing after
every archive written beyond the first 128 GiB
* tape: save 'bytes used' in the tape inventory and show them on the web UI
to allow users to more easily see the usage of a tape
* tape drive status: return drive activity (like cleaning, loading,
unloading, writing, ...) in the API and show them in the UI
* ui: tape drive status: avoid checking some specific status if the current
drive activity would block doing so anyway
* tape: write out basic MAM host-type attributes to media to make them more
easily identifiable as Proxmox Backup Server tape by common LTO tooling.
* api: syslog: fix the documented type of the return value
* fix #5465: restore daemon: mount NTFS with UTF-8 charset
* restore daemon: log some more errors on directory traversal
* fix #5422: ui: garbage-collection: make columns in global view sortable
* auth: move to hmac keys for csrf tokens as future-proofing
* auth: upgrade hashes on user login if a user's password is not hashed with
the latest password hashing function, for hardening purposes
* auth: use ed25519 keys when generating new auth api keys
* notifications: fix legacy sync notifications
* docs: document notification-mode and merge old notification section
* docs: notifications: rewrite overview for more clarity
* ui: datastore options: link to 'notification-mode' section
* acme: explicitly print a query when prompting for the custom directory URI
-- Proxmox Support Team <support@proxmox.com> Wed, 22 May 2024 19:31:35 +0200
rust-proxmox-backup (3.2.2-1) bookworm; urgency=medium
* ui: notifications: fix empty text format for the default mail author

debian/control

@ -15,29 +15,28 @@ Build-Depends: bash-completion,
libacl1-dev,
libfuse3-dev,
librust-anyhow-1+default-dev,
librust-apt-pkg-native-0.3+default-dev (>= 0.3.2-~~),
librust-async-trait-0.1+default-dev (>= 0.1.56-~~),
librust-base64-0.13+default-dev,
librust-bitflags-1+default-dev (>= 1.2.1-~~),
librust-bitflags-2+default-dev (>= 2.4-~~),
librust-bytes-1+default-dev,
librust-cidr-0.2+default-dev (>= 0.2.1-~~),
librust-const-format-0.2+default-dev,
librust-crc32fast-1+default-dev,
librust-crossbeam-channel-0.5+default-dev,
librust-endian-trait-0.6+arrays-dev,
librust-endian-trait-0.6+default-dev,
librust-env-logger-0.10+default-dev,
librust-flate2-1+default-dev,
librust-env-logger-0.11+default-dev,
librust-foreign-types-0.3+default-dev,
librust-futures-0.3+default-dev,
librust-h2-0.3+default-dev,
librust-h2-0.3+stream-dev,
librust-handlebars-3+default-dev,
librust-h2-0.4+default-dev,
librust-h2-0.4+legacy-dev,
librust-h2-0.4+stream-dev,
librust-hex-0.4+default-dev (>= 0.4.3-~~),
librust-hex-0.4+serde-dev (>= 0.4.3-~~),
librust-http-0.2+default-dev,
librust-hyper-0.14+backports-dev,
librust-hyper-0.14+default-dev,
librust-hyper-0.14+deprecated-dev,
librust-hyper-0.14+full-dev,
librust-lazy-static-1+default-dev (>= 1.4-~~),
librust-libc-0.2+default-dev,
librust-log-0.4+default-dev (>= 0.4.17-~~),
librust-nix-0.26+default-dev (>= 0.26.1-~~),
@ -46,70 +45,76 @@ Build-Depends: bash-completion,
librust-once-cell-1+default-dev (>= 1.3.1-~~),
librust-openssl-0.10+default-dev (>= 0.10.40-~~),
librust-pathpatterns-0.3+default-dev,
librust-pbs-api-types-0.2+default-dev (>= 0.2.2),
librust-percent-encoding-2+default-dev (>= 2.1-~~),
librust-pin-project-lite-0.2+default-dev,
librust-proxmox-acme-0.5+default-dev,
librust-proxmox-apt-0.10+default-dev (>= 0.10.5-~~),
librust-proxmox-acme-0.5+default-dev (>= 0.5.3-~~),
librust-proxmox-apt-0.11+cache-dev,
librust-proxmox-apt-0.11+default-dev,
librust-proxmox-apt-api-types-1+default-dev (>= 1.0.1-~~),
librust-proxmox-async-0.4+default-dev,
librust-proxmox-auth-api-0.3+api-dev,
librust-proxmox-auth-api-0.3+api-types-dev,
librust-proxmox-auth-api-0.3+default-dev,
librust-proxmox-auth-api-0.3+pam-authenticator-dev,
librust-proxmox-auth-api-0.4+api-dev,
librust-proxmox-auth-api-0.4+default-dev,
librust-proxmox-auth-api-0.4+pam-authenticator-dev,
librust-proxmox-borrow-1+default-dev,
librust-proxmox-compression-0.2+default-dev,
librust-proxmox-config-digest-0.1+default-dev,
librust-proxmox-daemon-0.1+default-dev,
librust-proxmox-fuse-0.1+default-dev (>= 0.1.3-~~),
librust-proxmox-http-0.9+client-dev,
librust-proxmox-http-0.9+client-trait-dev,
librust-proxmox-http-0.9+default-dev,
librust-proxmox-http-0.9+http-helpers-dev,
librust-proxmox-http-0.9+proxmox-async-dev,
librust-proxmox-http-0.9+rate-limited-stream-dev,
librust-proxmox-http-0.9+rate-limiter-dev,
librust-proxmox-http-0.9+websocket-dev,
librust-proxmox-http-0.9+client-dev (>= 0.9.5-~~),
librust-proxmox-http-0.9+client-trait-dev (>= 0.9.5-~~),
librust-proxmox-http-0.9+default-dev (>= 0.9.5-~~),
librust-proxmox-http-0.9+http-helpers-dev (>= 0.9.5-~~),
librust-proxmox-http-0.9+proxmox-async-dev (>= 0.9.5-~~),
librust-proxmox-http-0.9+rate-limited-stream-dev (>= 0.9.5-~~),
librust-proxmox-http-0.9+rate-limiter-dev (>= 0.9.5-~~),
librust-proxmox-http-0.9+websocket-dev (>= 0.9.5-~~),
librust-proxmox-human-byte-0.1+default-dev,
librust-proxmox-io-1+default-dev (>= 1.0.1-~~),
librust-proxmox-io-1+tokio-dev (>= 1.0.1-~~),
librust-proxmox-lang-1+default-dev (>= 1.1-~~),
librust-proxmox-ldap-0.2+default-dev (>= 0.2.1-~~),
librust-proxmox-metrics-0.3+default-dev,
librust-proxmox-notify+default-dev (>= 0.4),
librust-proxmox-notify+pbs-context-dev (>= 0.4),
librust-proxmox-log-0.2+default-dev (>= 0.2.6-~~),
librust-proxmox-metrics-0.3+default-dev (>= 0.3.1-~~),
librust-proxmox-notify-0.5+default-dev (>= 0.5.1-~~),
librust-proxmox-notify-0.5+pbs-context-dev (>= 0.5.1-~~),
librust-proxmox-openid-0.10+default-dev,
librust-proxmox-rest-server-0.5+default-dev (>= 0.5.1-~~),
librust-proxmox-rest-server-0.5+rate-limited-stream-dev (>= 0.5.1-~~),
librust-proxmox-rest-server-0.5+templates-dev (>= 0.5.1-~~),
librust-proxmox-router-2+cli-dev,
librust-proxmox-router-2+default-dev,
librust-proxmox-router-2+server-dev,
librust-proxmox-rrd-0.1+default-dev,
librust-proxmox-schema-3+api-macro-dev,
librust-proxmox-schema-3+default-dev,
librust-proxmox-rest-server-0.8+default-dev (>= 0.8.9-~~),
librust-proxmox-rest-server-0.8+rate-limited-stream-dev (>= 0.8.9-~~),
librust-proxmox-rest-server-0.8+templates-dev (>= 0.8.9-~~),
librust-proxmox-router-3+cli-dev,
librust-proxmox-router-3+server-dev,
librust-proxmox-rrd-0.4+default-dev,
librust-proxmox-rrd-api-types-1+default-dev (>= 1.0.2-~~),
librust-proxmox-schema-4+api-macro-dev,
librust-proxmox-schema-4+default-dev,
librust-proxmox-section-config-2+default-dev,
librust-proxmox-serde-0.1+default-dev (>= 0.1.1-~~),
librust-proxmox-serde-0.1+serde-json-dev (>= 0.1.1-~~),
librust-proxmox-shared-cache-0.1+default-dev,
librust-proxmox-shared-memory-0.3+default-dev,
librust-proxmox-sortable-macro-0.1+default-dev (>= 0.1.2-~~),
librust-proxmox-subscription-0.4+api-types-dev (>= 0.4.2-~~),
librust-proxmox-subscription-0.4+default-dev (>= 0.4.2-~~),
librust-proxmox-sys-0.5+acl-dev (>= 0.5.3-~~),
librust-proxmox-sys-0.5+crypt-dev (>= 0.5.3-~~),
librust-proxmox-sys-0.5+default-dev (>= 0.5.3-~~),
librust-proxmox-sys-0.5+logrotate-dev (>= 0.5.3-~~),
librust-proxmox-sys-0.5+timer-dev (>= 0.5.3-~~),
librust-proxmox-tfa-4+api-dev (>= 4.0.4-~~),
librust-proxmox-tfa-4+api-types-dev (>= 4.0.4-~~),
librust-proxmox-tfa-4+default-dev (>= 4.0.4-~~),
librust-proxmox-time-1+default-dev (>= 1.1.6-~~),
librust-proxmox-subscription-0.5+api-types-dev,
librust-proxmox-subscription-0.5+default-dev,
librust-proxmox-sys-0.6+acl-dev (>= 0.6.5-~~),
librust-proxmox-sys-0.6+crypt-dev (>= 0.6.5-~~),
librust-proxmox-sys-0.6+default-dev (>= 0.6.7-~~),
librust-proxmox-sys-0.6+logrotate-dev (>= 0.6.5-~~),
librust-proxmox-sys-0.6+timer-dev (>= 0.6.5-~~),
librust-proxmox-systemd-0.1+default-dev,
librust-proxmox-tfa-5+api-dev,
librust-proxmox-tfa-5+api-types-dev,
librust-proxmox-tfa-5+default-dev,
librust-proxmox-time-2+default-dev,
librust-proxmox-uuid-1+default-dev,
librust-proxmox-uuid-1+serde-dev,
librust-pxar-0.10+default-dev (>= 0.10.2-~~),
librust-proxmox-worker-task-0.1+default-dev,
librust-pxar-0.12+default-dev (>= 0.12.1-~~),
librust-regex-1+default-dev (>= 1.5.5-~~),
librust-rustyline-9+default-dev,
librust-serde-1+default-dev,
librust-serde-1+derive-dev,
librust-serde-json-1+default-dev,
librust-serde-plain-1+default-dev,
librust-siphasher-0.3+default-dev,
librust-syslog-6+default-dev,
librust-tar-0.4+default-dev,
librust-termcolor-1+default-dev (>= 1.1.2-~~),
@ -133,12 +138,14 @@ Build-Depends: bash-completion,
librust-tokio-util-0.7+default-dev,
librust-tokio-util-0.7+io-dev,
librust-tower-service-0.3+default-dev,
librust-tracing-0.1+default-dev,
librust-udev-0.4+default-dev,
librust-url-2+default-dev (>= 2.1-~~),
librust-walkdir-2+default-dev,
librust-xdg-2+default-dev (>= 2.2-~~),
librust-zstd-0.12+bindgen-dev,
librust-zstd-0.12+default-dev,
librust-zstd-safe-6+default-dev,
libsgutils2-dev,
libstd-rust-dev,
libsystemd-dev (>= 246-~~),
@ -177,7 +184,7 @@ Depends: fonts-font-awesome,
postfix | mail-transport-agent,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 3.5.2),
proxmox-widget-toolkit (>= 4.3.3),
pve-xtermjs (>= 4.7.0-1),
sg3-utils,
smartmontools,
@ -198,6 +205,14 @@ Description: Proxmox Backup Client tools
This package contains the Proxmox Backup client, which provides a
simple command line tool to create and restore backups.
Package: proxmox-backup-client-static
Architecture: any
Depends: qrencode, ${misc:Depends},
Conflicts: proxmox-backup-client,
Description: Proxmox Backup Client tools (statically linked)
This package contains the Proxmox Backup client, which provides a
simple command line tool to create and restore backups.
Package: proxmox-backup-docs
Build-Profiles: <!nodoc>
Section: doc

debian/copyright

@ -1,4 +1,4 @@
Copyright (C) 2019 - 2024 Proxmox Server Solutions GmbH
Copyright (C) 2019 - 2025 Proxmox Server Solutions GmbH
This software is written by Proxmox Server Solutions GmbH <support@proxmox.com>

debian/postinst

@ -20,15 +20,7 @@ case "$1" in
# modeled after dh_systemd_start output
systemctl --system daemon-reload >/dev/null || true
if [ -n "$2" ]; then
if dpkg --compare-versions "$2" 'lt' '1.0.7-1'; then
# there was an issue with reloading and systemd being confused in older daemon versions
# so restart instead of reload if upgrading from there, see commit 0ec79339f7aebf9
# FIXME: remove with PBS 2.1
echo "Upgrading from older proxmox-backup-server: restart (not reload) daemons"
_dh_action=try-restart
else
_dh_action=try-reload-or-restart
fi
_dh_action=try-reload-or-restart
else
_dh_action=start
fi
@ -80,6 +72,11 @@ EOF
update_sync_job "$prev_job"
fi
fi
if dpkg --compare-versions "$2" 'lt' '3.3.5~'; then
# ensure old locking is used by the daemon until a reboot happened
touch "/run/proxmox-backup/old-locking"
fi
fi
;;


@ -0,0 +1,2 @@
debian/proxmox-backup-client.bc proxmox-backup-client
debian/pxar.bc pxar


@ -0,0 +1,4 @@
usr/share/man/man1/proxmox-backup-client.1
usr/share/man/man1/pxar.1
usr/share/zsh/vendor-completions/_proxmox-backup-client
usr/share/zsh/vendor-completions/_pxar


@ -9,7 +9,7 @@ update_initramfs() {
CACHE_PATH_DBG="/var/cache/proxmox-backup/file-restore-initramfs-debug.img"
# cleanup first, in case proxmox-file-restore was uninstalled since we do
# not want an unuseable image lying around
# not want an unusable image lying around
rm -f "$CACHE_PATH"
if [ ! -f "$INST_PATH/initramfs.img" ]; then


@ -4,6 +4,7 @@ etc/proxmox-backup-daily-update.service /lib/systemd/system/
etc/proxmox-backup-daily-update.timer /lib/systemd/system/
etc/proxmox-backup-proxy.service /lib/systemd/system/
etc/proxmox-backup.service /lib/systemd/system/
etc/removable-device-attach@.service /lib/systemd/system/
usr/bin/pmt
usr/bin/pmtx
usr/bin/proxmox-tape
@ -30,34 +31,31 @@ usr/share/man/man5/acl.cfg.5
usr/share/man/man5/datastore.cfg.5
usr/share/man/man5/domains.cfg.5
usr/share/man/man5/media-pool.cfg.5
usr/share/man/man5/notifications.cfg.5
usr/share/man/man5/notifications-priv.cfg.5
usr/share/man/man5/notifications.cfg.5
usr/share/man/man5/proxmox-backup.node.cfg.5
usr/share/man/man5/prune.cfg.5
usr/share/man/man5/remote.cfg.5
usr/share/man/man5/sync.cfg.5
usr/share/man/man5/tape-job.cfg.5
usr/share/man/man5/tape.cfg.5
usr/share/man/man5/user.cfg.5
usr/share/man/man5/verification.cfg.5
usr/share/zsh/vendor-completions/_pmt
usr/share/zsh/vendor-completions/_pmtx
usr/share/zsh/vendor-completions/_proxmox-backup-debug
usr/share/zsh/vendor-completions/_proxmox-backup-manager
usr/share/zsh/vendor-completions/_proxmox-tape
usr/share/proxmox-backup/templates/default/acme-err-body.txt.hbs
usr/share/proxmox-backup/templates/default/acme-err-subject.txt.hbs
usr/share/proxmox-backup/templates/default/gc-err-body.txt.hbs
usr/share/proxmox-backup/templates/default/gc-ok-body.txt.hbs
usr/share/proxmox-backup/templates/default/gc-err-subject.txt.hbs
usr/share/proxmox-backup/templates/default/gc-ok-body.txt.hbs
usr/share/proxmox-backup/templates/default/gc-ok-subject.txt.hbs
usr/share/proxmox-backup/templates/default/package-updates-body.txt.hbs
usr/share/proxmox-backup/templates/default/package-updates-subject.txt.hbs
usr/share/proxmox-backup/templates/default/prune-err-body.txt.hbs
usr/share/proxmox-backup/templates/default/prune-ok-body.txt.hbs
usr/share/proxmox-backup/templates/default/prune-err-subject.txt.hbs
usr/share/proxmox-backup/templates/default/prune-ok-body.txt.hbs
usr/share/proxmox-backup/templates/default/prune-ok-subject.txt.hbs
usr/share/proxmox-backup/templates/default/sync-err-body.txt.hbs
usr/share/proxmox-backup/templates/default/sync-ok-body.txt.hbs
usr/share/proxmox-backup/templates/default/sync-err-subject.txt.hbs
usr/share/proxmox-backup/templates/default/sync-ok-body.txt.hbs
usr/share/proxmox-backup/templates/default/sync-ok-subject.txt.hbs
usr/share/proxmox-backup/templates/default/tape-backup-err-body.txt.hbs
usr/share/proxmox-backup/templates/default/tape-backup-err-subject.txt.hbs
@ -66,9 +64,13 @@ usr/share/proxmox-backup/templates/default/tape-backup-ok-subject.txt.hbs
usr/share/proxmox-backup/templates/default/tape-load-body.txt.hbs
usr/share/proxmox-backup/templates/default/tape-load-subject.txt.hbs
usr/share/proxmox-backup/templates/default/test-body.txt.hbs
usr/share/proxmox-backup/templates/default/test-body.html.hbs
usr/share/proxmox-backup/templates/default/test-subject.txt.hbs
usr/share/proxmox-backup/templates/default/verify-err-body.txt.hbs
usr/share/proxmox-backup/templates/default/verify-ok-body.txt.hbs
usr/share/proxmox-backup/templates/default/verify-err-subject.txt.hbs
usr/share/proxmox-backup/templates/default/verify-ok-body.txt.hbs
usr/share/proxmox-backup/templates/default/verify-ok-subject.txt.hbs
usr/share/zsh/vendor-completions/_pmt
usr/share/zsh/vendor-completions/_pmtx
usr/share/zsh/vendor-completions/_proxmox-backup-debug
usr/share/zsh/vendor-completions/_proxmox-backup-manager
usr/share/zsh/vendor-completions/_proxmox-tape


@ -16,3 +16,6 @@ SUBSYSTEM=="scsi_generic", SUBSYSTEMS=="scsi", ATTRS{type}=="1", ENV{ID_SCSI_SER
SYMLINK+="tape/by-id/scsi-$env{ID_SCSI_SERIAL}-sg"
LABEL="persistent_storage_tape_end"
# triggers the mounting of a removable device
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}!="", TAG+="systemd", ENV{SYSTEMD_WANTS}="removable-device-attach@$env{ID_FS_UUID}"

debian/rules

@ -8,7 +8,7 @@ include /usr/share/rustc/architecture.mk
export BUILD_MODE=release
CARGO=/usr/share/cargo/bin/cargo
export CARGO=/usr/share/cargo/bin/cargo
export CFLAGS CXXFLAGS CPPFLAGS LDFLAGS
export DEB_HOST_RUST_TYPE DEB_HOST_GNU_TYPE
@ -28,6 +28,11 @@ override_dh_auto_configure:
@perl -ne 'if (/^version\s*=\s*"(\d+(?:\.\d+)+)"/) { my $$v_cargo = $$1; my $$v_deb = "$(DEB_VERSION_UPSTREAM)"; \
die "ERROR: d/changelog <-> Cargo.toml version mismatch: $$v_cargo != $$v_deb\n" if $$v_cargo ne $$v_deb; exit(0); }' Cargo.toml
$(CARGO) prepare-debian $(CURDIR)/debian/cargo_registry --link-from-system
# `cargo build` and `cargo install` have different config precedence, symlink
# the wrapper config into a place where `build` picks it up as well.
# https://doc.rust-lang.org/cargo/commands/cargo-install.html#configuration-discovery
mkdir -p .cargo
ln -s $(CARGO_HOME)/config.toml $(CURDIR)/.cargo/config.toml
dh_auto_configure
override_dh_auto_build:
@ -42,6 +47,9 @@ override_dh_auto_install:
dh_auto_install -- \
PROXY_USER=backup \
LIBDIR=/usr/lib/$(DEB_HOST_MULTIARCH)
mkdir -p debian/proxmox-backup-client-static/usr/bin
mv debian/tmp/usr/bin/proxmox-backup-client-static debian/proxmox-backup-client-static/usr/bin/proxmox-backup-client
mv debian/tmp/usr/bin/pxar-static debian/proxmox-backup-client-static/usr/bin/pxar
override_dh_installsystemd:
dh_installsystemd -pproxmox-backup-server proxmox-backup-daily-update.timer


@ -1,59 +1,65 @@
include ../defines.mk
GENERATED_SYNOPSIS := \
proxmox-tape/synopsis.rst \
proxmox-backup-client/synopsis.rst \
proxmox-backup-client/catalog-shell-synopsis.rst \
proxmox-backup-manager/synopsis.rst \
proxmox-backup-debug/synopsis.rst \
proxmox-file-restore/synopsis.rst \
pxar/synopsis.rst \
pmtx/synopsis.rst \
pmt/synopsis.rst \
config/media-pool/config.rst \
config/notifications/config.rst \
config/notifications-priv/config.rst \
config/tape/config.rst \
config/tape-job/config.rst \
config/user/config.rst \
config/remote/config.rst \
config/sync/config.rst \
config/verification/config.rst \
config/acl/roles.rst \
config/datastore/config.rst \
config/domains/config.rst
config/domains/config.rst \
config/media-pool/config.rst \
config/notifications-priv/config.rst \
config/notifications/config.rst \
config/remote/config.rst \
config/sync/config.rst \
config/tape-job/config.rst \
config/tape/config.rst \
config/user/config.rst \
config/verification/config.rst \
config/prune/config.rst \
pmt/synopsis.rst \
pmtx/synopsis.rst \
proxmox-backup-client/catalog-shell-synopsis.rst \
proxmox-backup-client/synopsis.rst \
proxmox-backup-debug/synopsis.rst \
proxmox-backup-manager/synopsis.rst \
proxmox-file-restore/synopsis.rst \
proxmox-tape/synopsis.rst \
pxar/synopsis.rst \
MAN1_PAGES := \
pxar.1 \
pmtx.1 \
pmt.1 \
proxmox-tape.1 \
proxmox-backup-proxy.1 \
proxmox-backup-client.1 \
proxmox-backup-manager.1 \
proxmox-file-restore.1 \
proxmox-backup-debug.1 \
pbs2to3.1 \
pmt.1 \
pmtx.1 \
proxmox-backup-client.1 \
proxmox-backup-debug.1 \
proxmox-backup-manager.1 \
proxmox-backup-proxy.1 \
proxmox-file-restore.1 \
proxmox-tape.1 \
pxar.1 \
# FIXME: prefix all man pages that are not directly relating to an existing executable with
# `proxmox-backup.`, like the newer added proxmox-backup.node.cfg but add backwards compatible
# symlinks, e.g. with a "5pbs" man page "suffix section".
MAN5_PAGES := \
media-pool.cfg.5 \
tape.cfg.5 \
tape-job.cfg.5 \
acl.cfg.5 \
user.cfg.5 \
remote.cfg.5 \
sync.cfg.5 \
verification.cfg.5 \
datastore.cfg.5 \
domains.cfg.5 \
notifications.cfg.5 \
media-pool.cfg.5 \
proxmox-backup.node.cfg.5 \
notifications-priv.cfg.5 \
notifications.cfg.5 \
remote.cfg.5 \
sync.cfg.5 \
tape-job.cfg.5 \
tape.cfg.5 \
user.cfg.5 \
verification.cfg.5 \
prune.cfg.5 \
PRUNE_SIMULATOR_FILES := \
prune-simulator/index.html \
prune-simulator/documentation.html \
prune-simulator/clear-trigger.png \
prune-simulator/prune-simulator.js
prune-simulator/documentation.html \
prune-simulator/prune-simulator.js \
PRUNE_SIMULATOR_JS_SOURCE := \
/usr/share/javascript/proxmox-widget-toolkit-dev/Toolkit.js \
@ -85,15 +91,15 @@ API_VIEWER_FILES := \
/usr/share/javascript/proxmox-widget-toolkit-dev/APIViewer.js \
# Sphinx documentation setup
SPHINXOPTS =
SPHINXOPTS = -E
SPHINXBUILD = sphinx-build
BUILDDIR = output
ifeq ($(BUILD_MODE), release)
COMPILEDIR := ../target/release
COMPILEDIR := ../target/$(DEB_HOST_RUST_TYPE)/release
SPHINXOPTS += -t release
else
COMPILEDIR := ../target/debug
COMPILEDIR := ../target/$(DEB_HOST_RUST_TYPE)/debug
SPHINXOPTS += -t devbuild
endif


@ -1,3 +1,5 @@
.. _client_usage:
Backup Client Usage
===================
@ -44,6 +46,24 @@ user\@pbs!token@host:store ``user@pbs!token`` host:8007 store
[ff80::51]:1234:mydatastore ``root@pam`` [ff80::51]:1234 mydatastore
================================ ================== ================== ===========
.. _statically_linked_client:
Statically Linked Backup Client
-------------------------------
A statically linked version of the Proxmox Backup client is available for Linux
based systems where the regular client is not available. Please note that it is
recommended to use the regular client when possible, as the statically linked
client is not a full replacement. For example, name resolution will not be
performed via the mechanisms provided by libc, but by a resolver written
purely in the Rust programming language. Therefore, features and modules
provided by Name Service Switch cannot be used.
The statically linked client is available via the ``pbs-client`` repository as
described in the :ref:`installation <install_pbc>` section.
.. _environment-variables:
Environment Variables
---------------------
@ -89,6 +109,43 @@ Environment Variables
you can add arbitrary comments after the first newline.
System and Service Credentials
------------------------------
Some of the :ref:`environment variables <environment-variables>` above can be
set using `system and service credentials <https://systemd.io/CREDENTIALS/>`_
instead.
============================ ==============================================
Environment Variable Credential Name Equivalent
============================ ==============================================
``PBS_REPOSITORY`` ``proxmox-backup-client.repository``
``PBS_PASSWORD`` ``proxmox-backup-client.password``
``PBS_ENCRYPTION_PASSWORD`` ``proxmox-backup-client.encryption-password``
``PBS_FINGERPRINT`` ``proxmox-backup-client.fingerprint``
============================ ==============================================
For example, the repository password can be stored in an encrypted file as
follows:
.. code-block:: console
# systemd-ask-password -n | systemd-creds encrypt --name=proxmox-backup-client.password - my-api-token.cred
The credential can then be reused inside of unit files or in a transient scope
unit as follows:
.. code-block:: console
# systemd-run --pipe --wait \
--property=LoadCredentialEncrypted=proxmox-backup-client.password:/full/path/to/my-api-token.cred \
--property=SetCredential=proxmox-backup-client.repository:'my_default_repository' \
proxmox-backup-client ...
Additionally, system credentials (e.g. passed down from the hypervisor to a
virtual machine via SMBIOS type 11) can be loaded on a service via
`LoadCredential=` as described in the manual page ``systemd.exec(5)``.
Output Format
-------------
@ -169,6 +226,7 @@ the client. The format is:
<archive-name>.<type>:<source-path>
The ``archive-name`` must contain alphanumerics, hyphens and underscores only.
Common types are ``.pxar`` for file archives and ``.img`` for block
device images. To create a backup of a block device, run the following command:
@ -272,13 +330,64 @@ parameter. For example:
.. code-block:: console
# proxmox-backup-client backup.pxar:./linux --exclude /usr
# proxmox-backup-client backup archive-name.pxar:./linux --exclude /usr
Multiple paths can be excluded like this:
.. code-block:: console
# proxmox-backup-client backup.pxar:./linux --exclude=/usr --exclude=/rust
# proxmox-backup-client backup archive-name.pxar:./linux --exclude=/usr --exclude=/rust
.. _client_change_detection_mode:
Change Detection Mode
~~~~~~~~~~~~~~~~~~~~~
File-based backups containing a lot of data can take a long time, as the default
behavior for the Proxmox backup client is to read all data and encode it into a
pxar archive.
The encoded stream is split into variable-sized chunks. For each chunk, a digest
is calculated and used to decide whether the chunk needs to be uploaded or can
be indexed without upload, as it is already available on the server (and
therefore deduplicated). If the backed-up files are largely unchanged,
re-reading them only to detect that the corresponding chunks do not need to be
uploaded after all is time-consuming and undesired.
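The per-chunk decision can be illustrated with a minimal Rust sketch; the
32-byte digest type and the ``upload`` stub are hypothetical stand-ins for the
real hashing and client/server protocol:
.. code-block:: rust

    use std::collections::HashSet;

    // Hypothetical stand-in for the real 32-byte chunk digest.
    type Digest = [u8; 32];

    fn upload(digest: &Digest, data: &[u8]) {
        println!("upload {} bytes for chunk {:02x?}", data.len(), &digest[..4]);
    }

    /// Upload only digests not yet known to the server; either way the
    /// index references the chunk by digest only (deduplication).
    fn process_chunk(known: &mut HashSet<Digest>, digest: Digest, data: &[u8]) {
        if known.insert(digest) {
            upload(&digest, data); // new chunk: transfer its data
        } // else: chunk already on the server, indexed without upload
    }

    fn main() {
        let mut known = HashSet::new();
        let data = b"example chunk payload";
        let digest = [0u8; 32]; // placeholder; really the hash of `data`
        process_chunk(&mut known, digest, data); // uploads
        process_chunk(&mut known, digest, data); // deduplicated, no upload
    }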
The backup client's ``change-detection-mode`` can be switched from the default
to ``metadata``-based detection to avoid the limitations described above,
instructing the client to avoid re-reading files with unchanged metadata
whenever possible.
When using this mode, instead of the regular pxar archive, the backup snapshot
is stored into two separate files: the ``mpxar`` containing the archive's
metadata and the ``ppxar`` containing a concatenation of the file contents. This
splitting allows for efficient metadata lookups. When creating the backup
archives, the current file metadata is compared to the one looked up in the
previous ``mpxar`` archive. The operational details are explained in more depth
in the :ref:`technical documentation <change-detection-mode-metadata>`.
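Conceptually, that comparison can be sketched as follows; the exact set of
fields the client compares is not spelled out here, so the struct below is
illustrative only:
.. code-block:: rust

    /// Hypothetical subset of the metadata compared between the current
    /// file and the entry looked up in the previous ``.mpxar`` archive.
    #[derive(PartialEq)]
    struct Metadata {
        mtime_secs: i64,
        size: u64,
        mode: u32,
    }

    /// If all compared fields match, the client may skip re-reading the
    /// file and reuse the referenced payload chunks from the last run.
    fn can_reuse(previous: &Metadata, current: &Metadata) -> bool {
        previous == current
    }

    fn main() {
        let prev = Metadata { mtime_secs: 1_700_000_000, size: 4096, mode: 0o644 };
        let cur = Metadata { mtime_secs: 1_700_000_000, size: 4096, mode: 0o644 };
        assert!(can_reuse(&prev, &cur));
    }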
Setting the ``change-detection-mode`` to ``data`` allows one to create the same
split archive as when using the ``metadata`` mode, but without using a previous
reference, therefore re-encoding all file payloads. For details of this mode
please see the :ref:`technical documentation <change-detection-mode-data>`.
.. _client_change_detection_mode_table:
============ ===================================================================
Mode Description
============ ===================================================================
``legacy`` (current default): Encode all files into a self-contained pxar
archive.
``data`` Encode all files into a split data and metadata pxar archive.
``metadata`` Encode changed files, reuse unchanged from previous snapshot,
creating a split archive.
============ ===================================================================
The following shows an example of the client invocation with the `metadata`
mode:
.. code-block:: console
# proxmox-backup-client backup archive-name.pxar:./linux --change-detection-mode=metadata
.. _client_encryption:
@ -419,6 +528,8 @@ version of your master key. The following command sends the output of the
proxmox-backup-client key paperkey --output-format text > qrkey.txt
.. _client_restoring_data:
Restoring Data
--------------
@ -730,29 +841,25 @@ Garbage Collection
------------------
The ``prune`` command removes only the backup index files, not the data
from the datastore. This task is left to the garbage collection
command. It is recommended to carry out garbage collection on a regular basis.
from the datastore. Deletion of unused backup data from the datastore is done by
:ref:`garbage collection <maintenance_gc>`. It is therefore recommended to
schedule garbage collection tasks on a regular basis. The working principle of
garbage collection is described in more detail in the related :ref:`background
section <gc_background>`.
The garbage collection works in two phases. In the first phase, all
data blocks that are still in use are marked. In the second phase,
unused data blocks are removed.
To start garbage collection from the client side, run the following command:
.. code-block:: console
# proxmox-backup-client garbage-collect
.. note:: This command needs to read all existing backup index files
and touch the complete chunk-store. This can take a long time
depending on the number of chunks and the speed of the underlying
disks.
.. note:: The garbage collection will only remove chunks that haven't been used
for at least one day (exactly 24h 5m). This grace period is necessary because
chunks in use are marked by touching the chunk, which updates the ``atime``
(access time) property. Filesystems are mounted with the ``relatime`` option
by default. This results in better performance by only updating the
``atime`` property if the last access was at least 24 hours ago. The
downside is that touching a chunk within these 24 hours will not always
update its ``atime`` property.
Chunks in the grace period will be logged at the end of the garbage
collection task as *Pending removals*.
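As an illustration of this cutoff logic, here is a simplified sketch in Rust;
the enum and the comparison against the GC start time are modeling
assumptions, not the server's actual implementation:
.. code-block:: rust

    use std::time::{Duration, SystemTime};

    // Grace period from the note above: exactly 24 hours and 5 minutes.
    const GRACE: Duration = Duration::from_secs(24 * 3600 + 5 * 60);

    #[derive(Debug, PartialEq)]
    enum Sweep {
        InUse,          // touched by the marking phase, keep
        PendingRemoval, // unused, but still within the grace period
        Remove,         // unused and older than the cutoff
    }

    /// Simplified phase-2 decision for one chunk based on its atime.
    fn sweep(gc_start: SystemTime, atime: SystemTime) -> Sweep {
        if atime >= gc_start {
            Sweep::InUse
        } else if atime >= gc_start - GRACE {
            Sweep::PendingRemoval
        } else {
            Sweep::Remove
        }
    }

    fn main() {
        let now = SystemTime::now();
        assert_eq!(sweep(now, now), Sweep::InUse);
        assert_eq!(sweep(now, now - Duration::from_secs(3600)), Sweep::PendingRemoval);
        assert_eq!(sweep(now, now - GRACE - Duration::from_secs(1)), Sweep::Remove);
    }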
The progress of the garbage collection will be displayed as shown in the example
below:
.. code-block:: console


@ -44,10 +44,8 @@ web-interface/API or using the ``proxmox-backup-manager`` CLI tool.
Upload Custom Certificate
~~~~~~~~~~~~~~~~~~~~~~~~~
If you already have a certificate which you want to use for a Proxmox
Mail Gateway host, you can simply upload that certificate over the web
interface.
If you already have a certificate which you want to use for a `Proxmox Backup`_
host, you can simply upload that certificate over the web interface.
.. image:: images/screenshots/pbs-gui-certs-upload-custom.png
:target: _images/pbs-gui-certs-upload-custom.png


@ -71,7 +71,7 @@ master_doc = 'index'
# General information about the project.
project = 'Proxmox Backup'
copyright = '2019-2023, Proxmox Server Solutions GmbH'
copyright = '2019-2025, Proxmox Server Solutions GmbH'
author = 'Proxmox Support Team'
# The version info for the project you're documenting acts as a replacement for
@ -108,12 +108,14 @@ man_pages = [
('config/datastore/man5', 'datastore.cfg', 'Datastore Configuration', [author], 5),
('config/domains/man5', 'domains.cfg', 'Realm Configuration', [author], 5),
('config/media-pool/man5', 'media-pool.cfg', 'Media Pool Configuration', [author], 5),
('config/node/man5', 'proxmox-backup.node.cfg', 'Proxmox Backup Server - Node Configuration', [author], 5),
('config/remote/man5', 'remote.cfg', 'Remote Server Configuration', [author], 5),
('config/sync/man5', 'sync.cfg', 'Synchronization Job Configuration', [author], 5),
('config/tape-job/man5', 'tape-job.cfg', 'Tape Job Configuration', [author], 5),
('config/tape/man5', 'tape.cfg', 'Tape Drive and Changer Configuration', [author], 5),
('config/user/man5', 'user.cfg', 'User Configuration', [author], 5),
('config/verification/man5', 'verification.cfg', 'Verification Job Configuration', [author], 5),
('config/prune/man5', 'prune.cfg', 'Prune Job Configuration', [author], 5),
('config/notifications/man5', 'notifications.cfg', 'Notification target/matcher configuration', [author], 5),
('config/notifications-priv/man5', 'notifications-priv.cfg', 'Notification target secrets', [author], 5),
]


@ -0,0 +1,49 @@
The file contains these options:
:acme: The ACME account to use on this node.
:acmedomain0: ACME domain.
:acmedomain1: ACME domain.
:acmedomain2: ACME domain.
:acmedomain3: ACME domain.
:acmedomain4: ACME domain.
:http-proxy: Set proxy for apt and subscription checks.
:email-from: Fallback email from which notifications will be sent.
:ciphers-tls-1.3: List of TLS ciphers for TLS 1.3 that will be used by the proxy. Colon-separated and in descending priority (https://docs.openssl.org/master/man1/openssl-ciphers/). (Proxy has to be restarted for changes to take effect.)
:ciphers-tls-1.2: List of TLS ciphers for TLS <= 1.2 that will be used by the proxy. Colon-separated and in descending priority (https://docs.openssl.org/master/man1/openssl-ciphers/). (Proxy has to be restarted for changes to take effect.)
:default-lang: Default language used in the GUI.
:description: Node description.
:task-log-max-days: Maximum days to keep task logs.
For example:
::
acme: local
acmedomain0: first.domain.com
acmedomain1: second.domain.com
acmedomain2: third.domain.com
acmedomain3: fourth.domain.com
acmedomain4: fifth.domain.com
http-proxy: internal.proxy.com
email-from: proxmox@mail.com
ciphers-tls-1.3: TLS_AES_128_GCM_SHA256:TLS_AES_128_CCM_8_SHA256:TLS_CHACHA20_POLY1305_SHA256
ciphers-tls-1.2: RSA_WITH_AES_128_CCM:DHE_RSA_WITH_AES_128_CCM
default-lang: en
description: Primary PBS instance
task-log-max-days: 30
You can use the ``proxmox-backup-manager node`` command to manipulate
this file.

docs/config/node/man5.rst

@ -0,0 +1,18 @@
:orphan:
========
node.cfg
========
Description
===========
The file /etc/proxmox-backup/node.cfg is a configuration file for Proxmox
Backup Server. It contains the general configuration regarding this node.
Options
=======
.. include:: format.rst
.. include:: ../../pbs-copyright.rst


@ -8,7 +8,7 @@ Description
===========
The file /etc/proxmox-backup/notifications-priv.cfg is a configuration file
for Proxmox Backup Server. It contains the configration for the
for Proxmox Backup Server. It contains the configuration for the
notification system configuration.
File Format


@ -8,7 +8,7 @@ Description
===========
The file /etc/proxmox-backup/notifications.cfg is a configuration file
for Proxmox Backup Server. It contains the configration for the
for Proxmox Backup Server. It contains the configuration for the
notification system configuration.
File Format


@ -0,0 +1,14 @@
Each entry starts with the header ``prune: <name>``, followed by the job
configuration options.
::
prune: prune-store2
schedule mon..fri 10:30
store my-datastore
prune: ...
You can use the ``proxmox-backup-manager prune-job`` command to manipulate this
file.


@ -0,0 +1,23 @@
:orphan:
=========
prune.cfg
=========
Description
===========
The file /etc/proxmox-backup/prune.cfg is a configuration file for Proxmox
Backup Server. It contains the prune job configuration.
File Format
===========
.. include:: format.rst
Options
=======
.. include:: config.rst
.. include:: ../../pbs-copyright.rst


@ -7,8 +7,8 @@ verification.cfg
Description
===========
The file /etc/proxmox-backup/sync.cfg is a configuration file for Proxmox
Backup Server. It contains the verification job configuration.
The file /etc/proxmox-backup/verification.cfg is a configuration file for
Proxmox Backup Server. It contains the verification job configuration.
File Format
===========


@ -67,6 +67,14 @@ Options
.. include:: config/media-pool/config.rst
``node.cfg``
~~~~~~~~~~~~~~~~~~
Options
^^^^^^^
.. include:: config/node/format.rst
.. _notifications.cfg:
``notifications.cfg``
@ -83,6 +91,8 @@ Options
.. include:: config/notifications/config.rst
.. _notifications_priv.cfg:
``notifications-priv.cfg``
~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -98,6 +108,21 @@ Options
.. include:: config/notifications-priv/config.rst
``prune.cfg``
~~~~~~~~~~~~~
File Format
^^^^^^^^^^^
.. include:: config/prune/format.rst
Options
^^^^^^^
.. include:: config/prune/config.rst
``tape.cfg``
~~~~~~~~~~~~

View File

@ -0,0 +1,55 @@
External Metric Server
----------------------
Proxmox Backup Server periodically sends various metrics about your host's memory,
network and disk activity to configured external metric servers.
Currently supported are:
* InfluxDB (HTTP) (see https://docs.influxdata.com/influxdb/v2/ )
* InfluxDB (UDP) (see https://docs.influxdata.com/influxdb/v1/ )
The external metric server definitions are saved in
'/etc/proxmox-backup/metricserver.cfg', and can be edited through the web
interface.
.. note::
Using HTTP is recommended as UDP support has been dropped in InfluxDB v2.
InfluxDB (HTTP) plugin configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The plugin can be configured to use the HTTP(s) API of InfluxDB 2.x.
InfluxDB 1.8.x contains a forward-compatible API endpoint for this v2 API.
Since InfluxDB's v2 API is only available with authentication, you have
to generate a token that can write into the correct bucket and set it.
In the v2 compatible API of 1.8.x, you can use 'user:password' as token
(if required), and can omit the 'organization' since that has no meaning in InfluxDB 1.x.
You can also set the maximum batch size (default 25000000 bytes) with the
'max-body-size' setting (this corresponds to the InfluxDB setting with the
same name).
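An entry in '/etc/proxmox-backup/metricserver.cfg' might look like the following (an illustrative sketch; the exact key names are assumptions, check the configuration file reference of your version):
::
influxdb-http: daily-metrics
url https://influxdb.example.com:8086
organization proxmox
bucket proxmox
token <write-token>
max-body-size 25000000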
InfluxDB (UDP) plugin configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Proxmox Backup Server can also send data via UDP. This requires the InfluxDB
server to be configured correctly. The MTU can also be configured here if
necessary.
Here is an example configuration for InfluxDB (on your InfluxDB server):
.. code-block:: console
[[udp]]
enabled = true
bind-address = "0.0.0.0:8089"
database = "proxmox"
batch-size = 1000
batch-timeout = "1s"
With this configuration, the InfluxDB server listens on all IP addresses on
port 8089, and writes the data in the *proxmox* database.

View File

@ -8,7 +8,53 @@ Proxmox File Archive Format (``.pxar``)
.. graphviz:: pxar-format-overview.dot
.. _pxar-meta-format:
Proxmox File Archive Format - Meta (``.mpxar``)
-----------------------------------------------
Pxar metadata archive with the same structure as a regular pxar archive, except
that regular file payloads are not contained within the archive itself, but are
stored as payload references to the corresponding pxar payload (``.ppxar``)
file.
It can be used to look up all archive entries and metadata without the size
overhead introduced by the file payloads.
.. graphviz:: meta-format-overview.dot
.. _ppxar-format:
Proxmox File Archive Format - Payload (``.ppxar``)
--------------------------------------------------
Pxar payload file storing regular file payloads to be referenced and accessed
by the corresponding pxar metadata (``.mpxar``) archive. It contains a
concatenation of regular file payloads, each prefixed by a `PAYLOAD` header.
Further, the referenced payload entries might be separated by padding
(full/partial payloads that are not referenced), introduced when chunks of a
previous backup run are reused but their chunk boundaries did not align to
payload entry offsets.
All headers are stored as little-endian.
.. list-table::
:widths: auto
* - ``PAYLOAD_START_MARKER``
- header of ``[u8; 16]`` consisting of type hash and size;
marks start
* - ``PAYLOAD``
- header of ``[u8; 16]`` consisting of type hash and size;
referenced by metadata archive
* - Payload
- raw regular file payload
* - Padding
- partial/full unreferenced payloads, caused by unaligned chunk boundary
* - ...
- further concatenation of payload header, payload and padding
* - ``PAYLOAD_TAIL_MARKER``
- header of ``[u8; 16]`` consisting of type hash and size;
marks end
.. _data-blob-format:
Data Blob Format (``.blob``)

View File

@ -40,6 +40,16 @@ Proxmox Backup Server supports various languages and authentication back ends
.. note:: For convenience, you can save the username on the client side, by
selecting the "Save User name" checkbox at the bottom of the window.
.. _consent_banner:
Consent Banner
^^^^^^^^^^^^^^
A custom consent banner that has to be accepted before login can be configured
in **Configuration -> Other -> General -> Consent Text**. If there is no
content, the consent banner will not be displayed. The text will be stored as a
base64 string in the ``/etc/proxmox-backup/node.cfg`` config file.
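The value can be produced with any base64 encoder, for example (a sketch using standard coreutils; the banner text is a placeholder):
.. code-block:: console
# echo -n "By logging in you accept the usage policy." | base64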
GUI Overview
------------

(8 binary image files added; not shown.)
157
docs/installation-media.rst Normal file
View File

@ -0,0 +1,157 @@
.. _installation_medium:
Installation Medium
-------------------
Proxmox Backup Server can be installed via
:ref:`different methods <install_pbs>`. The recommended method is to use an
installation medium and simply boot the interactive installer.
Prepare Installation Medium
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Download the installer ISO image from |DOWNLOADS|.
The Proxmox Backup Server installation medium is a hybrid ISO image.
It works in two ways:
- An ISO image file ready to burn to a DVD.
- A raw sector (IMG) image file ready to copy to a USB flash drive (USB stick).
Using a USB flash drive to install Proxmox Backup Server is the
recommended way, since it is faster and more readily available these days.
Prepare a USB Flash Drive as Installation Medium
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The flash drive needs to have at least 2 GB of storage space.
.. note::
Do not use *UNetbootin*. It does not work with the Proxmox Backup
Server installation image.
.. important::
Existing data on the USB flash drive will be overwritten. Therefore,
make sure that it does not contain any data you still need, and unmount
it again before proceeding.
Instructions for GNU/Linux
~~~~~~~~~~~~~~~~~~~~~~~~~~
On Unix-like operating systems use the ``dd`` command to copy the ISO
image to the USB flash drive. First find the correct device name of the
USB flash drive (see below). Then run the ``dd`` command. Depending on
your environment, you will need to have root privileges to execute
``dd``.
.. code-block:: console
# dd bs=1M conv=fdatasync if=./proxmox-backup-server_*.iso of=/dev/XYZ
.. note::
Be sure to replace ``/dev/XYZ`` with the correct device name and adapt
the input filename (*if*) path.
.. caution::
Be very careful, and do not overwrite the wrong disk!
Find the Correct USB Device Name
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There are two ways to find out the name of the USB flash drive. The
first one is to compare the last lines of the ``dmesg`` command output
before and after plugging in the flash drive. The second way is to
compare the output of the ``lsblk`` command. Open a terminal and run:
.. code-block:: console
# lsblk
Then plug in your USB flash drive and run the command again:
.. code-block:: console
# lsblk
A new device will appear. This is the one you want to use. To be extra
safe, check that the reported size matches your USB flash drive.
Instructions for macOS
~~~~~~~~~~~~~~~~~~~~~~
Open the terminal (query *Terminal* in Spotlight).
Convert the ``.iso`` file to ``.dmg`` format using the convert option of
``hdiutil``, for example:
.. code-block:: console
# hdiutil convert proxmox-backup-server_*.iso -format UDRW -o proxmox-backup-server_*.dmg
.. note::
macOS tends to automatically add ``.dmg`` to the output file name.
To get the current list of devices run the command:
.. code-block:: console
# diskutil list
Now insert the USB flash drive and run this command again to determine
which device node has been assigned to it (e.g., ``/dev/diskX``).
.. code-block:: console
# diskutil list
# diskutil unmountDisk /dev/diskX
.. note::
Replace *X* with the disk number from the last command.
.. code-block:: console
# sudo dd if=proxmox-backup-server_*.dmg bs=1M of=/dev/rdiskX
.. note::
*rdiskX*, instead of *diskX*, in the last command is intended. It
will increase the write speed.
Instructions for Windows
~~~~~~~~~~~~~~~~~~~~~~~~
Using Etcher
^^^^^^^^^^^^
Etcher works out of the box. Download Etcher from https://etcher.io. It
will guide you through the process of selecting the ISO and your USB
flash drive.
Using Rufus
^^^^^^^^^^^
Rufus is a more lightweight alternative, but you need to use the **DD
mode** to make it work. Download Rufus from https://rufus.ie/. Either
install it or use the portable version. Select the destination drive
and the downloaded Proxmox ISO file.
.. important::
Once you click *Start*, you have to click *No* on the dialog asking to
download a different version of Grub. In the next dialog select **DD mode**.
Use the Installation Medium
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Insert the created USB flash drive (or DVD) into your server. Continue
by reading the :ref:`installer <using_the_installer>` chapter, which
also describes possible boot issues.

View File

@ -7,7 +7,9 @@ Debian_ from the provided package repository.
.. include:: system-requirements.rst
.. include:: package-repositories.rst
.. include:: installation-media.rst
.. _install_pbs:
Server Installation
-------------------
@ -18,44 +20,37 @@ for various management tasks such as disk management.
.. note:: You always need a backup server. It is not possible to use
Proxmox Backup without the server part.
Using our provided disk image (ISO file) is the recommended
installation method, as it includes a convenient installer, a complete
Debian system as well as all necessary packages for the Proxmox Backup
Server.
Once you have created an :ref:`installation_medium`, the booted
:ref:`installer <using_the_installer>` will guide you through the
setup process. It will help you to partition your disks, apply basic
settings such as the language, time zone and network configuration,
and finally install all required packages within minutes.
As an alternative to the interactive installer, advanced users may
wish to install Proxmox Backup Server
:ref:`unattended <install_pbs_unattended>`.
With sufficient Debian knowledge, you can also install Proxmox Backup
Server :ref:`on top of Debian <install_pbs_on_debian>` yourself.
While not recommended, Proxmox Backup Server could also be installed
:ref:`on Proxmox VE <install_pbs_on_pve>`.
.. include:: using-the-installer.rst
.. _install_pbs_unattended:
Install `Proxmox Backup`_ Server Unattended
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is possible to install Proxmox Backup Server automatically in an
unattended manner. This enables you to fully automate the setup process on
bare-metal. Once the installation is complete and the host has booted up,
automation tools like Ansible can be used to further configure the installation.
The necessary options for the installer must be provided in an answer file.
This file allows the use of filter rules to determine which disks and network
@ -66,6 +61,7 @@ installation ISO. For more details and information on the unattended
installation see `our wiki
<https://pve.proxmox.com/wiki/Automated_Installation>`_.
.. _install_pbs_on_debian:
Install `Proxmox Backup`_ Server on Debian
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -103,6 +99,8 @@ support, and a set of common and useful packages.
your web browser, using HTTPS on port 8007. For example at
``https://<ip-or-dns-name>:8007``
.. _install_pbs_on_pve:
Install Proxmox Backup Server on `Proxmox VE`_
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -123,6 +121,8 @@ After configuring the
your web browser, using HTTPS on port 8007. For example at
``https://<ip-or-dns-name>:8007``
.. _install_pbc:
Client Installation
-------------------
@ -138,7 +138,26 @@ you need to run:
# apt update
# apt install proxmox-backup-client
.. note:: The client-only repository should be usable by most recent Debian and
Ubuntu derivatives.
Install Statically Linked Proxmox Backup Client
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Proxmox provides a statically linked build of the Proxmox backup client that
should run on any modern x86-64 Linux system.
It is currently available as a Debian package. After configuring the
:ref:`package_repositories_client_only_apt`, you need to run:
.. code-block:: console
# apt update
# apt install proxmox-backup-client-static
This package conflicts with the `proxmox-backup-client` package, as both
provide the client as an executable in the `/usr/bin/proxmox-backup-client`
path.
You can copy this executable to other Linux systems, e.g. ones not based on
Debian.
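For example, the binary could be copied over SSH like this (a sketch; host name and target path are placeholders):
.. code-block:: console
# scp /usr/bin/proxmox-backup-client root@other-host:/usr/local/bin/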
For details on using the Proxmox Backup Client, see :ref:`client_usage`.
.. include:: package-repositories.rst

View File

@ -264,6 +264,7 @@ systems with more than 256 GiB of total memory, where simply setting
# update-initramfs -u
.. _zfs_swap:
Swap on ZFS
^^^^^^^^^^^

View File

@ -108,7 +108,7 @@ Ext.define('PageCalibration', {
xtype: 'numberfield',
value: 'a4',
name: 's_x',
fieldLabel: 'Measured Start Offset Sx (mm)',
allowBlank: false,
labelWidth: 200,
},
@ -116,7 +116,7 @@ Ext.define('PageCalibration', {
xtype: 'numberfield',
value: 'a4',
name: 'd_x',
fieldLabel: 'Measured Length Dx (mm)',
allowBlank: false,
labelWidth: 200,
},
@ -124,7 +124,7 @@ Ext.define('PageCalibration', {
xtype: 'numberfield',
value: 'a4',
name: 's_y',
fieldLabel: 'Measured Start Offset Sy (mm)',
allowBlank: false,
labelWidth: 200,
},
@ -132,7 +132,7 @@ Ext.define('PageCalibration', {
xtype: 'numberfield',
value: 'a4',
name: 'd_y',
fieldLabel: 'Measured Length Dy (mm)',
allowBlank: false,
labelWidth: 200,
},

View File

@ -6,8 +6,34 @@ Maintenance Tasks
Pruning
-------
Prune lets you specify which backup snapshots you want to keep, removing others.
When pruning a snapshot, only the snapshot metadata (manifest, indices, blobs,
log and notes) is removed. The chunks containing the actual backup data,
previously referenced by the pruned snapshot, have to be removed by a garbage
collection run.
.. Caution:: Take into consideration that sensitive information stored in a
given data chunk will outlive pruned snapshots and remain present in the
datastore as long as referenced by at least one backup snapshot. Further,
*even* if no snapshot references a given chunk, it will remain present until
removed by the garbage collection.
Moreover, file-level backups created using the change detection mode
``metadata`` can reference backup chunks containing files which have vanished
since the previous backup. These files might still be accessible by reading the
chunks' raw data (client or server side).
To remove chunks containing sensitive data, prune any snapshot made while the
data was part of the backup input and run a garbage collection. Further, if
using file-based backups with change detection mode ``metadata``, additionally
prune all snapshots made since the sensitive data was no longer part of the
backup input, then run a garbage collection.
The no longer referenced chunks will then be marked for deletion on the next
garbage collection run and removed by a subsequent run after the grace
period.
The following retention options are available for pruning:
``keep-last <N>``
Keep the last ``<N>`` backup snapshots.
@ -171,6 +197,8 @@ It's recommended to setup a schedule to ensure that unused space is cleaned up
periodically. For most setups a weekly schedule provides a good interval to
start.
.. _gc_background:
GC Background
^^^^^^^^^^^^^
@ -196,17 +224,31 @@ datastore or interfering with other backups.
The garbage collection (GC) process is performed per datastore and is split
into two phases:
- Phase one (Mark):
All index files are read, and the access time (``atime``) of the referenced
chunk files is updated.
- Phase two (Sweep):
The task iterates over all chunks and checks their file access time against a
cutoff time. The cutoff time is given by either the oldest backup writer
instance, if present, or 24 hours and 5 minutes before the start of the
garbage collection.
Garbage collection considers chunk files with access time older than the
cutoff time to be neither referenced by any backup snapshot's index, nor part
of any currently running backup job. Therefore, these chunks can safely be
deleted.
Chunks within the grace period will not be deleted; they are logged at the end
of the garbage collection task as *Pending removals*.
.. note:: The grace period for backup chunk removal is not arbitrary, but stems
from the fact that filesystems are typically mounted with the ``relatime``
option by default. This results in better performance by only updating the
``atime`` property if a file has been modified since the last access or the
last access has been at least 24 hours ago.
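Whether the file system backing a datastore is mounted with ``relatime`` can be checked, for example, with ``findmnt`` (a sketch; the mount point is a placeholder):
.. code-block:: console
# findmnt --noheadings --output OPTIONS /mnt/datastore/store1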
Manually Starting GC
^^^^^^^^^^^^^^^^^^^^
@ -277,26 +319,10 @@ the **Actions** column in the table.
Notifications
-------------
Proxmox Backup Server can send you notifications about automatically
scheduled verification, garbage-collection and synchronization task results.
Refer to the :ref:`notifications` chapter for more details.
.. _maintenance_mode:

View File

@ -69,6 +69,13 @@ sync-job`` command. The configuration information for sync jobs is stored at
in the GUI, or use the ``create`` subcommand. After creating a sync job, you can
either start it manually from the GUI or provide it with a schedule (see
:ref:`calendar-event-scheduling`) to run regularly.
Backup snapshots, groups and namespaces which are no longer available on the
**Remote** datastore can be removed from the local datastore as well by setting
the ``remove-vanished`` option for the sync job.
Setting the ``verified-only`` or ``encrypted-only`` flags allows you to limit
sync jobs to backup snapshots which have been verified or encrypted,
respectively. This is particularly useful when sending backups to a less
trusted remote backup server.
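For example, both could be enabled on an existing job like this (a sketch following the CLI pattern used elsewhere in this chapter; ID is a placeholder):
.. code-block:: console
# proxmox-backup-manager sync-job update ID --remove-vanished true --verified-only true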
.. code-block:: console
@ -132,6 +139,12 @@ For mixing include and exclude filter, following rules apply:
.. note:: The ``protected`` flag of remote backup snapshots will not be synced.
Enabling the advanced option 'resync-corrupt' will re-sync all snapshots that
have failed to verify during the last :ref:`maintenance_verification`. Hence, a
verification job needs to be run before a sync job with 'resync-corrupt' can be
carried out. Be aware that a 'resync-corrupt' job needs to check the manifests
of all snapshots in a datastore and might take much longer than regular sync
jobs.
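For example (a sketch; ID is a placeholder):
.. code-block:: console
# proxmox-backup-manager sync-job update ID --resync-corrupt true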
Namespace Support
^^^^^^^^^^^^^^^^^
@ -218,9 +231,52 @@ Bandwidth Limit
Syncing a datastore to an archive can produce a lot of traffic and impact other
users of the network. In order to avoid network or storage congestion, you can
limit the bandwidth of a sync job in pull direction by setting the ``rate-in``
option either in the web interface or using the ``proxmox-backup-manager``
command-line tool:
.. code-block:: console
# proxmox-backup-manager sync-job update ID --rate-in 20MiB
For sync jobs in push direction use the ``rate-out`` option instead.
Sync Direction Push
^^^^^^^^^^^^^^^^^^^
Sync jobs can be configured for pull or push direction. Sync jobs in push
direction are not identical in behaviour because of the limited access to the
target datastore via the remote server's API. Most notably, pushed content will
always be owned by the user configured in the remote configuration,
independent of the local user configured in the sync job. The latter is used
exclusively for permission and scope checks on the pushing side.
.. note:: It is strongly advised to create a dedicated remote configuration for
each individual sync job in push direction, using a dedicated user on the
remote. Otherwise, sync jobs pushing to the same target might remove each
other's snapshots and/or groups if the remove-vanished flag is set, or skip
snapshots if the backup time is not incremental.
This is because the backup groups on the target are owned by the user
given in the remote configuration.
The following permissions are required for a sync job in push direction:
#. ``Remote.Audit`` on ``/remote/{remote}`` and ``Remote.DatastoreBackup`` on
``/remote/{remote}/{remote-store}/{remote-ns}`` path or subnamespace.
#. At least ``Datastore.Read`` and ``Datastore.Audit`` on the local source
datastore namespace (``/datastore/{store}/{ns}``) or ``Datastore.Backup`` if
owner of the sync job.
#. ``Remote.DatastorePrune`` on ``/remote/{remote}/{remote-store}/{remote-ns}``
path to remove vanished snapshots and groups. Make sure to use a dedicated
remote for each sync job in push direction as noted above.
#. ``Remote.DatastoreModify`` on ``/remote/{remote}/{remote-store}/{remote-ns}``
path to remove vanished namespaces. A remote user with limited access should
be used on the remote backup server instance. Consider the implications as
noted below.
.. note:: ``Remote.DatastoreModify`` will allow removing whole namespaces on the
remote target datastore, independent of ownership. Make sure the user
configured in remote.cfg has limited permissions on the remote side.
.. note:: Sync jobs in push direction require namespace support on the remote
Proxmox Backup Server instance (minimum version 2.2).
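For example, the source-side permissions could be granted with ``proxmox-backup-manager acl update`` (a sketch; remote, store and user names are placeholders):
.. code-block:: console
# proxmox-backup-manager acl update /remote/my-remote/my-store Remote.DatastoreBackup --auth-id sync-user@pbs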

View File

@ -0,0 +1,50 @@
digraph g {
graph [
rankdir = "LR"
fontname="Helvetica"
];
node [
fontsize = "16"
shape = "record"
];
edge [
];
"archive" [
label = "archive.mpxar"
shape = "record"
];
"rootdir" [
label = "<fv>FORMAT_VERSION\l|PRELUDE\l|<f0>ENTRY\l|\{XATTR\}\* extended attribute list\l|\{ACL_USER\}\* USER ACL entries\l|\{ACL_GROUP\}\* GROUP ACL entries\l|\[ACL_GROUP_OBJ\] the ACL_GROUP_OBJ \l|\[ACL_DEFAULT\] the various default ACL fields\l|\{ACL_DEFAULT_USER\}\* USER ACL entries\l|\{ACL_DEFAULT_GROUP\}\* GROUP ACL entries\l|\[FCAPS\] file capability in Linux disk format\l|\[QUOTA_PROJECT_ID\] the ext4/xfs quota project ID\l|{<pl> PAYLOAD_REF|SYMLINK|DEVICE|{<de> \{DirectoryEntries\}\*|GOODBYE}}"
shape = "record"
];
"entry" [
label = "<f0> size: u64 = 64\l|type: u64 = ENTRY\l|feature_flags: u64\l|mode: u64\l|flags: u64\l|uid: u64\l|gid: u64\l|mtime: u64\l"
labeljust = "l"
shape = "record"
];
"direntry" [
label = "<f0> FILENAME\l|{ENTRY\l|HARDLINK\l}"
shape = "record"
];
"payloadrefentry" [
label = "<f0> offset: u64\l|size: u64\l"
shape = "record"
];
"archive" -> "rootdir":fv
"rootdir":f0 -> "entry":f0
"rootdir":de -> "direntry":f0
"rootdir":pl -> "payloadrefentry":f0
}

View File

@ -1,38 +1,34 @@
.. _notifications:
Notifications
=============
Overview
--------
* Proxmox Backup Server emits :ref:`notification_events` in case of noteworthy
events in the system. These events are handled by the notification system. A
notification event has metadata, for example a timestamp, a severity level, a
type and other metadata fields.
* :ref:`notification_matchers` route a notification event to one or more
notification targets. A matcher can have match rules to selectively route
based on the metadata of a notification event.
* :ref:`notification_targets` are destinations to which a notification event
is routed by a matcher. There are multiple types of targets, mail-based
(Sendmail and SMTP) and Gotify.
Datastores and tape backup jobs have a configurable :ref:`notification_mode`.
It allows you to choose between the notification system and a legacy mode for
sending notification emails. The legacy mode is equivalent to the way
notifications were handled before Proxmox Backup Server 3.2.
The notification system can be configured in the GUI under *Configuration →
Notifications*. The configuration is stored in :ref:`notifications.cfg` and
:ref:`notifications_priv.cfg` - the latter contains sensitive configuration
options such as passwords or authentication tokens for notification targets and
can only be read by ``root``.
.. _notification_targets:
Notification Targets
--------------------
@ -44,22 +40,23 @@ Proxmox Backup Server offers multiple types of notification targets.
Sendmail
^^^^^^^^
The sendmail binary is a program commonly found on Unix-like operating systems
that handles the sending of email messages. It is a command-line utility that
allows users and applications to send emails directly from the command line or
from within scripts.
The sendmail notification target uses the ``sendmail`` binary to send emails to
a list of configured users or email addresses. If a user is selected as a
recipient, the email address configured in the user's settings will be used. For
the ``root@pam`` user, this is the email address entered during installation. A
user's email address can be configured in ``Configuration → Access Control →
User Management``. If a user has no associated email address, no email will be
sent.
.. NOTE:: In standard Proxmox Backup Server installations, the ``sendmail``
binary is provided by Postfix. It may be necessary to configure Postfix so
that it can deliver mails correctly - for example by setting an external
mail relay (smart host). In case of failed delivery, check the system logs
for messages logged by the Postfix daemon.
See :ref:`notifications.cfg` for all configuration options.
@ -67,13 +64,13 @@ See :ref:`notifications.cfg` for all configuration options.
SMTP
^^^^
SMTP notification targets can send emails directly to an SMTP mail relay. This
target does not use the system's MTA to deliver emails. Similar to sendmail
targets, if a user is selected as a recipient, the user's configured email
address will be used.
.. NOTE:: Unlike sendmail targets, SMTP targets do not have any queuing/retry
mechanism in case of a failed mail delivery.
See :ref:`notifications.cfg` for all configuration options.
@ -81,32 +78,139 @@ See :ref:`notifications.cfg` for all configuration options.
Gotify
^^^^^^
`Gotify <http://gotify.net>`_ is an open-source self-hosted notification server
that allows you to send push notifications to various devices and applications.
It provides a simple API and web interface, making it easy to integrate with
different platforms and services.
.. NOTE:: Gotify targets will respect the HTTP proxy settings from
Configuration → Other → HTTP proxy
See :ref:`notifications.cfg` for all configuration options.
.. _notification_targets_webhook:
Webhook
^^^^^^^
Webhook notification targets perform HTTP requests to a configurable URL.
The following configuration options are available:
* ``url``: The URL to which to perform the HTTP requests. Supports templating
to inject message contents, metadata and secrets.
* ``method``: HTTP Method to use (POST/PUT/GET)
* ``header``: Array of HTTP headers that should be set for the request.
Supports templating to inject message contents, metadata and secrets.
* ``body``: HTTP body that should be sent. Supports templating to inject
message contents, metadata and secrets.
* ``secret``: Array of secret key-value pairs. These will be stored in a
protected configuration file only readable by root. Secrets can be
accessed in body/header/URL templates via the ``secrets`` namespace.
* ``comment``: Comment for this target.
For configuration options that support templating, the `Handlebars
<https://handlebarsjs.com>`_ syntax can be used to access the following
properties:
* ``{{ title }}``: The rendered notification title
* ``{{ message }}``: The rendered notification body
* ``{{ severity }}``: The severity of the notification (``info``, ``notice``,
``warning``, ``error``, ``unknown``)
* ``{{ timestamp }}``: The notification's timestamp as a UNIX epoch (in
seconds).
* ``{{ fields.<name> }}``: Sub-namespace for any metadata fields of the
notification. For instance, ``fields.type`` contains the notification
type - for all available fields refer to :ref:`notification_events`.
* ``{{ secrets.<name> }}``: Sub-namespace for secrets. For instance, a secret
named ``token`` is accessible via ``secrets.token``.
For convenience, the following helpers are available:
* ``{{ url-encode <value/property> }}``: URL-encode a property/literal.
* ``{{ escape <value/property> }}``: Escape any control characters that cannot
be safely represented as a JSON string.
* ``{{ json <value/property> }}``: Render a value as JSON. This can be useful
to pass a whole sub-namespace (e.g. ``fields``) as a part of a JSON payload
(e.g. ``{{ json fields }}``).
.. NOTE:: Webhook targets will respect the HTTP proxy settings from
Configuration → Other → HTTP proxy
Example - ntfy.sh
"""""""""""""""""
* Method: ``POST``
* URL: ``https://ntfy.sh/{{ secrets.channel }}``
* Headers:
* ``Markdown``: ``Yes``
* Body::
```
{{ message }}
```
* Secrets:
* ``channel``: ``<your ntfy.sh channel>``
Example - Discord
"""""""""""""""""
* Method: ``POST``
* URL: ``https://discord.com/api/webhooks/{{ secrets.token }}``
* Headers:
* ``Content-Type``: ``application/json``
* Body::
{
"content": "``` {{ escape message }}```"
}
* Secrets:
* ``token``: ``<token>``
Example - Slack
"""""""""""""""
* Method: ``POST``
* URL: ``https://hooks.slack.com/services/{{ secrets.token }}``
* Headers:
* ``Content-Type``: ``application/json``
* Body::
{
"text": "``` {{escape message}}```",
"type": "mrkdwn"
}
* Secrets:
* ``token``: ``<token>``
.. _notification_matchers:
Notification Matchers
---------------------
Notification matchers route notifications to notification targets based on
their matching rules. These rules can match certain properties of a
notification, such as the timestamp (``match-calendar``), the severity of the
notification (``match-severity``) or metadata fields (``match-field``). If a
notification is matched by a matcher, all targets configured for the matcher
will receive the notification.
An arbitrary number of matchers can be created, each with their own
matching rules and targets to notify. Every target is notified at most once for
every notification, even if the target is used in multiple matchers.
A matcher without rules matches any notification; the configured targets will
always be notified.
See :ref:`notifications.cfg` for all configuration options.
@ -123,20 +227,24 @@ Examples:
Field Matching Rules
^^^^^^^^^^^^^^^^^^^^
Notifications have a selection of metadata fields that can be matched. When
using ``exact`` as a matching mode, a ``,`` can be used as a separator. The
matching rule then matches if the metadata field has **any** of the specified
values.
Examples:
* ``match-field exact:type=gc`` Only match notifications for garbage collection
jobs
* ``match-field exact:type=prune,verify`` Match prune job and verification job
notifications.
* ``match-field regex:datastore=^backup-.*$`` Match any datastore starting with
``backup``.
If a notification does not have the matched field, the rule will **not** match.
For instance, a ``match-field regex:datastore=.*`` directive will match any
notification that has a ``datastore`` metadata field, but will not match if the
field does not exist.
Severity Matching Rules
^^^^^^^^^^^^^^^^^^^^^^^
@ -150,14 +258,14 @@ Examples:
The following severities are in use:
``info``, ``notice``, ``warning``, ``error``, ``unknown``.
.. _notification_events:
Notification Events
-------------------
The following table contains a list of all notification events in Proxmox
Backup server, their type, severity and additional metadata fields. ``type`` as
well as any other metadata field may be used in ``match-field`` match rules.
================================ ==================== ========== ==============================================================
Event ``type`` Severity Metadata fields (in addition to ``type``)
@ -177,8 +285,8 @@ Verification job failure ``verification`` ``error`` ``datastore``,
Verification job success ``verification`` ``info`` ``datastore``, ``hostname``, ``job-id``
================================ ==================== ========== ==============================================================
The following table contains a description of all used metadata fields. All of
these can be used in ``match-field`` match rules.
==================== ===================================
Metadata field Description
@ -195,19 +303,86 @@ Metadata field Description
System Mail Forwarding
----------------------
Certain local system daemons, such as ``smartd``, send notification emails to
the local ``root`` user. Proxmox Backup Server will feed these mails into the
notification system as a notification of type ``system-mail`` and with severity
``unknown``.
When the email is forwarded to a sendmail target, the mail's content and
headers are forwarded as-is. For all other targets, the system tries to extract
both a subject line and the main text body from the email content. In instances
where emails solely consist of HTML content, they will be transformed into
plain text format during this process.
Permissions
-----------
In order to modify/view the configuration for notification targets, the
``Sys.Modify/Sys.Audit`` permissions are required for the
``/system/notifications`` ACL node.
.. _notification_mode:
Notification Mode
-----------------
Datastores and tape backup/restore job configuration have a
``notification-mode`` option which can have one of two values:
* ``legacy-sendmail``: Send notification emails via the system's ``sendmail``
command. The notification system will be bypassed and any configured
targets/matchers will be ignored. This mode is equivalent to the notification
behavior for versions before Proxmox Backup Server 3.2.
* ``notification-system``: Use the new, flexible notification system.
If the ``notification-mode`` option is not set, Proxmox Backup Server will
default to ``legacy-sendmail``.
Starting with Proxmox Backup Server 3.2, a datastore created in the UI will
automatically opt in to the new notification system. If the datastore is
created via the API or the ``proxmox-backup-manager`` CLI, the
``notification-mode`` option has to be set explicitly to
``notification-system`` if the notification system shall be used.
The ``legacy-sendmail`` mode might be removed in a later release of
Proxmox Backup Server.
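For example, an existing datastore could be switched over like this (a sketch; the datastore name is a placeholder):
.. code-block:: console
# proxmox-backup-manager datastore update store1 --notification-mode notification-system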
Settings for ``legacy-sendmail`` notification mode
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If ``notification-mode`` is set to ``legacy-sendmail``, Proxmox Backup Server
will send notification emails via the system's ``sendmail`` command to the
email address configured for the user set in the ``notify-user`` option
(falling back to ``root@pam`` if not set).
For datastores, you can also change the level of notifications received per
task type via the ``notify`` option.
* Always: send a notification for any scheduled task, independent of the
outcome
* Errors: send a notification for any scheduled task that results in an error
* Never: do not send any notification at all
The ``notify-user`` and ``notify`` options are ignored if ``notification-mode``
is set to ``notification-system``.
Overriding Notification Templates
---------------------------------
Proxmox Backup Server uses Handlebars templates to render notifications. The
original templates provided by Proxmox Backup Server are stored in
``/usr/share/proxmox-backup/templates/default/``.
Notification templates can be overridden by providing a custom template file in
the override directory at
``/etc/proxmox-backup/notification-templates/default/``. When rendering a
notification of a given type, Proxmox Backup Server will first attempt to load
a template from the override directory. If this one does not exist or fails to
render, the original template will be used.
The template files follow the naming convention of
``<type>-<body|subject>.txt.hbs``. For instance, the file
``gc-err-body.txt.hbs`` contains the template for rendering notifications for
garbage collection errors, while ``package-updates-subject.txt.hbs`` is used to
render the subject line of notifications for available package updates.
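For example, an override for the garbage collection error notification body could be seeded from the shipped default and then edited (a sketch based on the paths given above):
.. code-block:: console
# mkdir -p /etc/proxmox-backup/notification-templates/default
# cp /usr/share/proxmox-backup/templates/default/gc-err-body.txt.hbs \
/etc/proxmox-backup/notification-templates/default/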

View File

@ -149,7 +149,7 @@ Currently there's only a client-repository for APT based systems.
.. _package_repositories_client_only_apt:
APT-based Proxmox Backup Client Repository
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For modern Linux distributions using `apt` as package manager, as all Debian
and Ubuntu derivatives do, you may be able to use the APT-based repository.

View File

@ -126,7 +126,8 @@ Ext.onReady(function() {
if (data.mark !== 'keep') {
return `<div style="text-decoration: line-through;">${text}</div>`;
}
let pruneList = this.up('prunesimulatorPruneList');
if (pruneList.useColors) {
let bgColor = COLORS[data.keepName];
let textColor = TEXT_COLORS[data.keepName];
return `<div style="background-color: ${bgColor};color: ${textColor};">${text}</div>`;
@ -353,12 +354,17 @@ Ext.onReady(function() {
specValues.forEach(function(value) {
if (value.includes('..')) {
let [start, end] = value.split('..');
let step = 1;
if (end.includes('/')) {
[end, step] = end.split('/');
step = assertValid(step);
}
start = assertValid(start);
end = assertValid(end);
if (start > end) {
throw "interval start is bigger then interval end '" + start + " > " + end + "'";
}
for (let i = start; i <= end; i += step) {
matches[i] = 1;
}
} else if (value.includes('/')) {

View File

@ -165,6 +165,74 @@ following command creates a new datastore called ``store1`` on
# proxmox-backup-manager datastore create store1 /backup/disk1/store1
Removable Datastores
^^^^^^^^^^^^^^^^^^^^
Removable datastores have a ``backing-device`` associated with them; they can
be mounted and unmounted. Apart from that, they behave the same way a normal
datastore would.
They can be created on already correctly formatted partitions, which should be
either ``ext4`` or ``xfs`` as with normal datastores, but most modern file
systems supported by the Proxmox Linux kernel should work.
.. note:: FAT-based file systems do not support the POSIX file ownership
concept and have relatively low limits on the number of files per directory.
Therefore, creating a datastore is not supported on FAT file systems.
Because some external drives are preformatted with such a FAT-based file
system, you may need to reformat the drive before you can use it as a
backing-device for a removable datastore.
It is also possible to create them on completely unused disks through
"Administration" > "Disks / Storage" > "Directory". Using this method, the disk
will be partitioned and formatted automatically for the datastore.
Devices with only one datastore on them will be mounted automatically. Unmounting has
to be done through the UI by clicking "Unmount" on the summary page or using the CLI.
If unmounting fails, the reason is logged in the unmount task log, and the
datastore will stay in maintenance mode ``unmounting``, which prevents any IO
operations. In such cases, the maintenance mode has to be reset manually using:
.. code-block:: console
# proxmox-backup-manager datastore update --maintenance-mode offline
to prevent any IO, or to clear it use:
.. code-block:: console
# proxmox-backup-manager datastore update --delete maintenance-mode
A single device can house multiple datastores; the only limitation is that they
are not allowed to be nested.
Removable datastores are created on the device with the relative path that is
specified on creation. In order to use a datastore on multiple PBS instances,
it has to be created on one, and added with ``Reuse existing datastore`` checked
on the others. The path you set on creation is how multiple datastores on a
single device are identified. So when adding it on a new PBS instance, the path
has to match what was set on creation.
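For example, creating a removable datastore might look like this (a sketch; the ``--backing-device`` option name and its UUID argument are assumptions, check the reference documentation of your version):
.. code-block:: console
# proxmox-backup-manager datastore create store1 my-datastore-path --backing-device <partition-UUID>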
.. code-block:: console
# proxmox-backup-manager datastore unmount store1
Both the UI and the CLI will wait for any running tasks to finish and then
unmount the device.
All removable datastores are mounted under /mnt/datastore/<name>, and the specified path
refers to the path on the device.
All datastores present on a device can be listed using ``proxmox-backup-debug``.
.. code-block:: console
# proxmox-backup-debug inspect device /dev/...
Verify, Prune and Garbage Collection jobs are skipped if the removable
datastore is not mounted when they are scheduled. Sync jobs start, but fail
with an error saying the datastore was not mounted. The reason is that syncs
not happening as scheduled should at least be noticeable.
Managing Datastores
^^^^^^^^^^^^^^^^^^^
@ -314,7 +382,7 @@ Options
There are a few per-datastore options:
* :ref:`Notifications <maintenance_notification>`
* :ref:`Notification mode and legacy notification settings <notification_mode>`
* :ref:`Maintenance Mode <maintenance_mode>`
* Verification of incoming backups
@ -367,9 +435,28 @@ There are some tuning related options for the datastore that are more advanced:
This can be set with:
.. code-block:: console
# proxmox-backup-manager datastore update <storename> --tuning 'sync-level=filesystem'
* ``gc-atime-safety-check``: Datastore GC atime update safety check:
You can explicitly `enable` or `disable` the atime update safety check
performed on datastore creation and garbage collection. This checks if atime
updates are handled as expected by garbage collection and therefore avoids the
risk of data loss by unexpected filesystem behavior. It is recommended to set
this to enabled, which is also the default value.
* ``gc-atime-cutoff``: Datastore GC atime cutoff for chunk cleanup:
This allows to set the cutoff for which a chunk is still considered in-use
during phase 2 of garbage collection (given no older writers). If the
``atime`` of the chunk is outside the range, it will be removed.
* ``gc-cache-capacity``: Datastore GC least recently used cache capacity:
Allows to control the cache capacity used to keep track of chunks for which
the access time has already been updated during phase 1 of garbage collection.
This avoids multiple updates and increases GC runtime performance. Higher
values can reduce GC runtime at the cost of increased memory usage; setting
the value to 0 disables caching.
If you want to set multiple tuning options simultaneously, you can separate them
with a comma, like this:
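.. code-block:: console
# proxmox-backup-manager datastore update <storename> --tuning 'sync-level=filesystem,gc-cache-capacity=0'
The combination shown here is only an illustration; any of the tuning options described above can be listed, separated by commas.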
@ -419,7 +506,7 @@ remote-source to avoid that an attacker that took over the source can cause
deletions of backups on the target hosts.
If the source-host became victim of a ransomware attack, there is a good chance
that sync jobs will fail, triggering an :ref:`error notification
<Notification Events>`.
It is also possible to create :ref:`tape backups <tape_backup>` as a second
storage medium. This way, you get an additional copy of your data on a

View File

@ -30,6 +30,8 @@ please refer to the standard Debian documentation.
.. include:: certificate-management.rst
.. include:: external-metric-server.rst
.. include:: services.rst
.. include:: command-line-tools.rst

View File

@ -6,6 +6,8 @@ production. To further decrease the impact of a failed host, you can set up
periodic, efficient, incremental :ref:`datastore synchronization <syncjobs>`
from other Proxmox Backup Server instances.
.. _minimum_system_requirements:
Minimum Server Requirements, for Evaluation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -38,7 +40,8 @@ Recommended Server System Requirements
* Backup storage:
* Prefer fast storage that delivers high IOPS for random IO workloads; use
only enterprise SSDs for best results.
* If HDDs are used: Using a metadata cache is highly recommended, for example,
add a ZFS :ref:`special device mirror <local_zfs_special_device>`.

View File

@ -61,6 +61,7 @@ In general, LTO tapes offer the following advantages:
Note that `Proxmox Backup Server` already stores compressed data, so using the
tape compression feature has no advantage.
.. _tape-supported-hardware:
Supported Hardware
------------------
@ -969,6 +970,8 @@ You can restore from a tape even without an existing catalog, but only the
whole media set. If you do this, the catalog will be automatically created.
.. _tape_key_management:
Encryption Key Management
~~~~~~~~~~~~~~~~~~~~~~~~~
@ -1180,3 +1183,159 @@ In combination with fitting prune settings and tape backup schedules, this
achieves long-term storage of some backups, while keeping the recent
backups on smaller media sets that expire roughly every 4 weeks (that is, three
plus the current week).
Disaster Recovery
-----------------
.. _Command-line Tools: command-line-tools.html
In case of major disasters, important data, or even whole servers might be
destroyed or at least damaged up to the point where everything - sometimes
including the backup server - has to be restored from a backup. For such cases,
the following step-by-step guide will help you to set up the Proxmox Backup
Server and restore everything from tape backups.
The following guide will explain the necessary steps using both the web GUI and
the command line tools. For an overview of the command line tools, see
`Command-line Tools`_.
Setting Up a Datastore
~~~~~~~~~~~~~~~~~~~~~~
.. _proxmox-backup-manager: proxmox-backup-manager/man1.html
.. _Installation: installation.html
After you set up a new Proxmox Backup Server, as outlined in the `Installation`_
chapter, first set up a datastore so a tape can be restored to it:
#. Go to **Administration -> Storage / Disks** and make sure that the disk that
will be used as a datastore shows up.
#. Under the **Directory** or **ZFS** tabs, you can either choose to create a
directory or create a ZFS ``zpool``, respectively. Here you can also directly
add the newly created directory or ZFS ``zpool`` as a datastore.
Alternatively, the `proxmox-backup-manager`_ can be used to perform the same
tasks. For more information, check the :ref:`datastore_intro` documentation.
Setting Up the Tape Drive
~~~~~~~~~~~~~~~~~~~~~~~~~
#. Make sure you have a properly working tape drive and/or changer matching the
medium you want to restore from.
#. Connect the tape changer(s) and the tape drive(s) to the backup server. These
should be detected automatically by Linux. You can get a list of available
drives using:
.. code-block:: console
# proxmox-tape drive scan
┌────────────────────────────────┬────────┬─────────────┬────────┐
│ path │ vendor │ model │ serial │
╞════════════════════════════════╪════════╪═════════════╪════════╡
│ /dev/tape/by-id/scsi-12345-sg │ IBM │ ULT3580-TD4 │ 12345 │
└────────────────────────────────┴────────┴─────────────┴────────┘
You can get a list of available changers with:
.. code-block:: console
# proxmox-tape changer scan
┌─────────────────────────────┬─────────┬──────────────┬────────┐
│ path │ vendor │ model │ serial │
╞═════════════════════════════╪═════════╪══════════════╪════════╡
│ /dev/tape/by-id/scsi-CC2C52 │ Quantum │ Superloader3 │ CC2C52 │
└─────────────────────────────┴─────────┴──────────────┴────────┘
For more information, please read the chapters
on :ref:`tape_changer_config` and :ref:`tape_drive_config`.
#. If you have a tape changer, go to the web interface of the Proxmox Backup
Server, go to **Tape Backup -> Changers** and add it. For examples using the
command line, read the chapter on :ref:`tape_changer_config`. If the changer
has been detected correctly by Linux, the changer should show up in the list.
#. In the web interface, go to **Tape Backup -> Drives** and add the tape drive
that will be used to read the tapes. For examples using the command line,
read the chapter on :ref:`tape_drive_config`. If the tape drive has been
detected correctly by Linux, the drive should show up in the list. If the
drive also has a tape changer, make sure to select the changer as well and
assign it the correct drive number.
Restoring Data From the Tape
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. _proxmox-tape: proxmox-tape/man1.html
.. _proxmox-backup-client: proxmox-backup-client/man1.html
.. _Restore: https://pve.proxmox.com/pve-docs/chapter-vzdump.html#vzdump_restore
The following guide will explain the steps necessary to restore data from a
tape, which can be done over either the web GUI or the command line. For details
on the command line, read the documentation on the `proxmox-tape`_ tool.
To restore data from tapes, do the following:
#. Insert the first tape (as displayed on the label) into the tape drive or, if
a tape changer is available, use the tape changer to insert the tape into the
right drive. The web GUI can also be used to load or transfer tapes between
tape drives by selecting the changer.
#. If the backup has been encrypted, the encryption keys need to be restored as
well. In the **Encryption Keys** tab, press **Restore Key**. For more
details or examples that use the command line, read the
:ref:`tape_key_management` chapter.
#. The procedure for restoring data is slightly different depending on whether
you are using a standalone tape drive or a changer:
* For changers, the procedure is simple:
#. Insert all tapes from the media set you want to restore from.
#. Click on the changer in the web GUI, click **Inventory**, make sure
**Restore Catalog** is selected and press OK.
* For standalone drives, the procedure is:
#. Insert the first tape of the media set.
#. Click **Catalog**.
#. Eject the tape, then repeat the steps for the remaining tapes of the
media set.
#. Go back to **Tape Backup**. In the **Content** tab, press **Restore** and
select the desired media set. Choose the snapshot you want to restore, press
**Next**, select the drive and target datastore and press **Restore**. A
command-line equivalent is shown after this list.
#. By going to the datastore where the data has been restored, under the
**Content** tab you should be able to see the restored snapshots. In order to
access the backups from another machine, you will need to configure access to
the backup server. Go to **Configuration -> Access Control** and either create
a new user or a new API token (API tokens allow easy revocation if the token
is compromised). Under **Permissions**, add the desired permissions, e.g.
**DatastoreBackup**.
#. You can now perform virtual machine, container or file restores, with the
following options:
* If you want to restore files on Linux distributions that are not based on
Proxmox products or you prefer using a command line tool, you can use the
`proxmox-backup-client`_, as explained in the
:ref:`client_restoring_data` chapter. Use the newly created API token to
be able to access the data. You can then restore individual files or
mount an archive to your system.
* If you want to restore virtual machines or containers on a Proxmox VE
server, add the datastore of the backup server as storage and go to
**Backups**. Here you can restore VMs and containers, including their
configuration. For more information on restoring backups in Proxmox VE,
visit the `Restore`_ chapter of the Proxmox VE documentation.
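As mentioned in the restore step above, the restore can also be started from
the command line. The following is a sketch; the media-set UUID, drive and
datastore names are placeholders:
.. code-block:: console
  # proxmox-tape restore --drive mydrive 9da37a55-aac7-4deb-91c6-482b3b675f30 store1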

View File

@ -28,6 +28,9 @@ which are not chunked, e.g. the client log), or one or more indexes
When uploading an index, the client first has to read the source data, chunk it
and send the data as chunks with their identifying checksum to the server.
When using the :ref:`change detection mode <change_detection_mode>`, payload
chunks of unchanged files are reused from the previous snapshot, avoiding
reading the source data again.
If there is a previous snapshot in the backup group, the client can first
download the chunk list of that previous snapshot. If it detects a chunk that
@ -53,8 +56,9 @@ The chunks of a datastore are found in
<datastore-root>/.chunks/
This chunk directory is further subdivided by the first four bytes of the
chunk's checksum, so a chunk with the checksum
This chunk directory is further subdivided into directories grouping chunks by
their checksum's 2-byte prefix (the first 4 hexadecimal digits), so a chunk
with the checksum
a342e8151cbf439ce65f3df696b54c67a114982cc0aa751f2852c2f7acc19a8b
@ -130,6 +134,141 @@ This is done to speed up the client part of the backup, since it only needs to
encrypt chunks that are actually getting uploaded. Chunks that exist already in
the previous backup, do not need to be encrypted and uploaded.
Change Detection Mode for File-Based Backups
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The change detection mode controls how files that did not change between
subsequent backup runs are detected and handled, as well as the archive file
format used to encode the directory entries.
There are 3 modes available: the current default ``legacy`` mode, as well as
the ``data`` and ``metadata`` modes. While the ``legacy`` mode encodes all
contents in a single ``pxar`` archive, the latter two modes split data and
metadata into ``ppxar`` and ``mpxar`` archives. This is done to allow for fast
comparison of metadata with the previous snapshot, which the ``metadata`` mode
uses to detect reusable files. The ``data`` mode refrains from reusing
unchanged files by rechunking all files unconditionally. This mode therefore
ensures that no file changes are missed, even if the metadata is unchanged.
.. NOTE:: ``pxar`` and ``mpxar``/``ppxar`` file formats are different and cannot
be deduplicated as efficiently if a datastore stores archive snapshots of
both types.
As the change detection modes are client-side changes, they are backwards
compatible with older versions of Proxmox Backup Server. However, exploring
the backup contents of the new archive format via the web interface requires
a Proxmox Backup Server with version 3.2.5 or higher. Upgrading to the latest
version is recommended for full feature compatibility.
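The mode can be selected per backup run via the Proxmox backup client, for
example (archive name and path are placeholders):
.. code-block:: console
  # proxmox-backup-client backup root.pxar:/ --change-detection-mode=metadata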
.. _change-detection-mode-legacy:
Legacy Mode
+++++++++++
Backup snapshots of filesystems are created by recursively scanning the
directory entries. All entries to be included in the snapshot are read and
serialized by encoding them using the ``pxar``
:ref:`archive format <pxar-format>`. The resulting stream is chunked into
:ref:`dynamically sized chunks <dynamically-sized-chunks>` and uploaded to the
Proxmox Backup Server, deduplicating chunks based on their content digest for
space efficient storage.
File contents are read and chunked unconditionally; no check is performed to
detect unchanged files.
.. _change-detection-mode-data:
Data Mode
+++++++++
As in ``legacy`` mode, file contents are read and chunked unconditionally; no
check is performed to detect unchanged files.
However, in contrast to ``legacy`` mode, which stores entry metadata and data
in a single self-contained ``pxar`` archive, the ``data`` mode encodes
metadata and file contents into two separate streams. The resulting backup
snapshots therefore contain split archives: an archive in the ``mpxar``
:ref:`format <pxar-meta-format>` containing the entry metadata, and an archive
in the ``ppxar`` :ref:`format <ppxar-format>`, containing the actual file
contents, separated by payload headers for consistency checks. The metadata
archive stores a reference offset to the corresponding payload archive entry
so the file contents can be accessed. Both of these archives are chunked and
uploaded by the Proxmox backup client, resulting in separate indices and
independent chunks.
The ``mpxar`` archive can be used to efficiently fetch the associated metadata
for archive entries without the overhead of the payload data stored within the
same chunks. This is used, for example, for entry lookups to list the archive
contents, or to navigate the mounted filesystem via the FUSE implementation.
Therefore, no dedicated catalog is created for archives encoded using this
mode.
By not comparing metadata to the previous backup snapshot, no files will be
considered reusable by this mode, in contrast to the ``metadata`` mode. The
latter may reuse files whose contents changed but whose file size and mtime
did not, e.g. because the mtime was restored after modifying the file's
contents.
.. _change-detection-mode-metadata:
Metadata Mode
+++++++++++++
The ``metadata`` mode detects files whose file metadata did not change
between subsequent backup runs. The metadata comparison includes file size,
file type, ownership and permission information, as well as ACLs and
attributes, and, most importantly, the file's mtime; for details see the
:ref:`pxar metadata archive format <pxar-meta-format>`. A file's ctime and
inode number are not stored and therefore not used for comparison, since some
tools (e.g. ``vzdump``) might sync the contents of the filesystem to a
temporary location before actually performing the backup via the Proxmox
backup client. In these cases, ctime and inode number would always change.
This mode will avoid reading and rechunking the file contents whenever possible
by reusing the file content chunks of unchanged files from the previous backup
snapshot.
To compare the metadata, the previous snapshot's ``mpxar`` metadata archive
is downloaded at the start of the backup run and used as a reference. Further,
the index of the payload archive ``ppxar`` is fetched and used to look up the
digests of the file content chunks, which are used to reindex pre-existing
chunks without the need to reread and rechunk the file contents.
During backup, the metadata and payload archives are encoded in the same
manner as for the ``data`` mode, but in ``metadata`` mode each entry is
additionally looked up in the metadata reference archive for comparison first.
If the file did not change compared to the reference, it is considered
unchanged and the Proxmox backup client enters a look-ahead caching mode. In
this mode, the client keeps reading and comparing the following entries in the
filesystem as long as they are reusable. Further, it keeps track of the
payload archive offset range these file contents are stored in. The additional
look-ahead caching is needed because file boundaries are not required to be
aligned with chunk boundaries; chunks reused unconditionally could therefore
contain wasted content (also called padding).
The look-ahead cache greedily caches all unchanged entries up to the point
where either the cache size limit is reached, a file entry with changed
metadata is encountered, or the range of payload chunks considered for reuse
is not contiguous. An example of the latter is a file which disappeared
between subsequent backup runs, leaving a hole in the range. At this point,
the caching mode is disabled and the client calculates the wasted padding
which would be introduced by reusing the payload chunks for all the unchanged
files cached up to this point. If the padding is acceptable (below a preset
limit of 10% of the actually reused chunk content), the files are reused by
encoding them in the metadata archive with updated offset references to the
contents and by reindexing the pre-existing chunks in the new ``ppxar``
archive. If the padding exceeds the limit, all cached entries are re-encoded
instead, not reusing any of the pre-existing data. The cached metadata is
encoded in the metadata archive either way, no matter whether the cached file
contents are reused or re-encoded.
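The following minimal Rust sketch illustrates the reuse decision described
above; the function name and the byte accounting are illustrative, not the
actual implementation:
.. code-block:: rust
  /// Illustrative only: decide whether the cached, unchanged entries should
  /// reuse the pre-existing payload chunks or be re-encoded from scratch.
  fn reuse_cached_entries(reused_bytes: u64, padding_bytes: u64) -> bool {
      // Reuse only if the padding stays below 10% of the actually reused
      // chunk content.
      padding_bytes * 10 < reused_bytes
  }

  fn main() {
      // 4 MiB of reusable content with 256 KiB of padding: reuse is acceptable.
      assert!(reuse_cached_entries(4 * 1024 * 1024, 256 * 1024));
  }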
This combination of look-ahead caching and reuse of pre-existing payload archive
chunks for files with unchanged contents therefore speeds up the backup
process by avoiding rereading and rechunking file contents whenever possible.
To reduce padding and increase chunk reusability, the pxar encoder signals
encountered file boundaries as suggested chunk boundaries to the sliding
window chunker during creation of archives in ``data`` and ``metadata`` mode.
The chunker then decides, based on its internal state, whether the suggested
boundary is accepted or disregarded.
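As a rough sketch (types and names are illustrative, not the actual chunker
implementation), accepting a suggested boundary could look like this:
.. code-block:: rust
  /// Illustrative only: a suggested boundary is accepted if cutting at the
  /// current position keeps the chunk within the configured size bounds.
  struct SlidingWindowChunker {
      min_chunk_size: usize,
      max_chunk_size: usize,
  }

  impl SlidingWindowChunker {
      fn accept_suggested_boundary(&self, bytes_in_chunk: usize) -> bool {
          bytes_in_chunk >= self.min_chunk_size && bytes_in_chunk <= self.max_chunk_size
      }
  }

  fn main() {
      let chunker = SlidingWindowChunker {
          min_chunk_size: 64 * 1024,
          max_chunk_size: 4 * 1024 * 1024,
      };
      assert!(chunker.accept_suggested_boundary(1024 * 1024));
  }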
Caveats and Limitations
-----------------------
@ -159,8 +298,8 @@ will see that the probability of a collision in that scenario is:
For context, in a lottery game of guessing 6 numbers out of 45, the chance to
correctly guess all 6 numbers is only :math:`1.2277 * 10^{-7}`. This means the
chance of a collision is about the same as winning 13 such lottery games *in a
row*.
chance of a collision is lower than winning 8 such lottery games *in a row*:
:math:`(1.2277 * 10^{-7})^{8} = 5.1623 * 10^{-56}`.
In conclusion, it is extremely unlikely that such a collision would occur by
accident in a normal datastore.
@ -180,6 +319,9 @@ read all files again for every backup, otherwise it would not be possible to
generate a consistent, independent pxar archive where the original chunks can be
reused. Note that in spite of this, only new or changed chunks will be uploaded.
In order to avoid these limitations, the Change Detection Mode ``metadata`` was
introduced.
Verification of Encrypted Chunks
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

View File

@ -16,8 +16,8 @@ User Configuration
choose the realm when you add a new user. Possible realms are:
:pam: Linux PAM standard authentication. Use this if you want to
authenticate as a Linux system user (users need to exist on the
system).
authenticate as a Linux system user. The user needs to already exist on
the host system.
:pbs: Proxmox Backup Server realm. This type stores hashed passwords in
``/etc/proxmox-backup/shadow.json``.
@ -599,6 +599,32 @@ list view in the web UI, or using the command line:
Authentication Realms
---------------------
.. _user_realms_pam:
Linux PAM
~~~~~~~~~
Linux PAM is a framework for system-wide user authentication. These users are
created on the host system with commands such as ``adduser``.
If PAM users exist on the host system, corresponding entries can be added to
Proxmox Backup Server, to allow these users to log in via their system username
and password.
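For example, assuming the system user ``backupuser`` already exists on the
host, a matching entry could be added like this:
.. code-block:: console
  # proxmox-backup-manager user create backupuser@pam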
.. _user_realms_pbs:
Proxmox Backup authentication server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is a Unix-like password store, which stores hashed passwords in
``/etc/proxmox-backup/shadow.json``. Passwords are hashed using the SHA-256
hashing algorithm.
This is the most convenient realm for small-scale (or even mid-scale)
installations, where users do not need access to anything outside of Proxmox
Backup Server. In this case, users are fully managed by Proxmox Backup Server
and are able to change their own passwords via the GUI.
.. _user_realms_ldap:
LDAP
@ -663,7 +689,7 @@ address must be specified. Most options from :ref:`user_realms_ldap` apply to
Active Directory as well, most importantly the bind credentials ``bind-dn``
and ``password``. This is typically required by default for Microsoft Active
Directory. The ``bind-dn`` can be specified either in AD-specific
``user@company.net`` syntax or the commen LDAP-DN syntax.
``user@company.net`` syntax or the common LDAP-DN syntax.
The authentication domain name must only be specified if anonymous bind is
requested. If bind credentials are given, the domain name is automatically

View File

@ -0,0 +1,346 @@
.. _using_the_installer:
Install `Proxmox Backup`_ Server using the Installer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Download the ISO from |DOWNLOADS|.
It includes the following:
* The Proxmox Backup Server installer, which partitions the local
disk(s) with ext4, xfs or ZFS, and installs the operating system
* Complete operating system (Debian Linux, 64-bit)
* Proxmox Linux kernel with ZFS support
* Complete toolset to administer backups and all necessary resources
* Web-based management interface
.. note:: Any existing data on the selected drives will be overwritten
during the installation process. The installer does not add boot
menu entries for other operating systems.
Please insert the :ref:`installation_medium` (for example, USB flash
drive or DVD) and boot from it.
.. note:: You may need to go into your server's firmware settings to enable
booting from your installation medium (for example, USB) and set the
desired boot order. When booting an installer prior to `Proxmox Backup`_
Server version 3.1, Secure Boot needs to be disabled.
.. image:: images/screenshots/pbs-installer-grub-menu.png
:target: _images/pbs-installer-grub-menu.png
:align: right
:alt: Proxmox Backup Server Installer GRUB Menu
After choosing the correct entry (for example, *Boot from USB*) the
Proxmox Backup Server menu will be displayed, and one of the following
options can be selected:
**Install Proxmox Backup Server (Graphical)**
Starts the normal installation.
TIP: It's possible to use the installation wizard with a keyboard only. Buttons
can be clicked by pressing the ``ALT`` key combined with the underlined character
from the respective button. For example, ``ALT + N`` to press a ``Next`` button.
**Install Proxmox Backup Server (Console)**
Starts the terminal-mode installation wizard. It provides the same overall
installation experience as the graphical installer, but has generally better
compatibility with very old and very new hardware.
**Install Proxmox Backup Server (Terminal UI, Serial Console)**
Starts the terminal-mode installation wizard, additionally setting up the Linux
kernel to use the (first) serial port of the machine for in- and output. This
can be used if the machine is completely headless and only has a serial console
available.
.. image:: images/screenshots/pbs-tui-installer.png
:target: _images/pbs-tui-installer.png
:align: right
:alt: Proxmox Backup Server Terminal UI Installer
Both modes use the same code base for the actual installation process to
benefit from more than a decade of bug fixes and ensure feature parity.
TIP: The *Console* or *Terminal UI* option can be used in case the graphical
installer does not work correctly, due to e.g. driver issues. See also
:ref:`nomodeset_kernel_param`.
**Advanced Options: Install Proxmox Backup Server (Debug Mode)**
Starts the installation in debug mode. A console will be opened at several
installation steps. This helps to debug the situation if something goes wrong.
To exit a debug console, press ``CTRL-D``. This option can be used to boot a
live system with all basic tools available. You can use it, for example, to
repair a degraded ZFS *rpool* or fix the :ref:`chapter-systembooting` for an
existing Proxmox Backup Server setup.
**Advanced Options: Install Proxmox Backup Server (Terminal UI, Debug Mode)**
Same as the graphical debug mode, but preparing the system to run the
terminal-based installer instead.
**Advanced Options: Install Proxmox Backup Server (Serial Console Debug Mode)**
Same as the terminal-based debug mode, but additionally sets up the Linux kernel to
use the (first) serial port of the machine for in- and output.
**Advanced Options: Rescue Boot**
With this option you can boot an existing installation. It searches all attached
hard disks. If it finds an existing installation, it boots directly into that
disk using the Linux kernel from the ISO. This can be useful if there are
problems with the bootloader (GRUB/``systemd-boot``) or the BIOS/UEFI is unable
to read the boot block from the disk.
**Advanced Options: Test Memory (memtest86+)**
Runs *memtest86+*. This is useful to check if the memory is functional and free
of errors. Secure Boot must be turned off in the UEFI firmware setup utility to
run this option.
You normally select *Install Proxmox Backup Server (Graphical)* to start the
installation.
The first step is to read our EULA (End User License Agreement). Following this,
you can select the target hard disk(s) for the installation.
.. caution:: By default, the whole server is used and all existing data is
removed. Make sure there is no important data on the server before proceeding
with the installation.
The *Options* button lets you select the target file system, which defaults to
``ext4``. The installer uses LVM if you select ``ext4`` or ``xfs`` as a file
system, and offers additional options to restrict LVM space (see :ref:`below
<advanced_lvm_options>`).
.. image:: images/screenshots/pbs-installer-select-disk.png
:target: _images/pbs-installer-select-disk.png
:align: right
:alt: Proxmox Backup Server Installer - Harddisk Selection Dialog
Proxmox Backup Server can also be installed on ZFS. As ZFS offers several
software RAID levels, this is an option for systems that don't have a hardware
RAID controller. The target disks must be selected in the *Options* dialog. More
ZFS specific settings can be changed under :ref:`Advanced Options
<advanced_zfs_options>`.
.. warning:: ZFS on top of any hardware RAID is not supported and can result in
data loss.
.. image:: images/screenshots/pbs-installer-location.png
:target: _images/pbs-installer-location.png
:align: right
:alt: Proxmox Backup Server Installer - Location and timezone configuration
The next page asks for basic configuration options like your location, time
zone, and keyboard layout. The location is used to select a nearby download
server, in order to increase the speed of updates. The installer is usually able
to auto-detect these settings, so you only need to change them in rare
situations when auto-detection fails, or when you want to use a keyboard layout
not commonly used in your country.
.. image:: images/screenshots/pbs-installer-password.png
:target: _images/pbs-installer-password.png
:align: left
:alt: Proxmox Backup Server Installer - Password and email configuration
Next, the password of the superuser (``root``) and an email address need to be
specified. The password must consist of at least 8 characters, but it is
highly recommended to use a stronger password. Some guidelines are:
|
|
- Use a minimum password length of at least 12 characters.
- Include lowercase and uppercase alphabetic characters, numbers, and symbols.
- Avoid character repetition, keyboard patterns, common dictionary words,
letter or number sequences, usernames, relative or pet names, romantic links
(current or past), and biographical information (for example ID numbers,
ancestors' names or dates).
The email address is used to send notifications to the system administrator.
For example:
- Information about available package updates.
- Error messages from periodic *cron* jobs.
.. image:: images/screenshots/pbs-installer-network.png
:target: _images/pbs-installer-network.png
:align: right
:alt: Proxmox Backup Server Installer - Network configuration
All those notification mails will be sent to the specified email address.
The last step is the network configuration. Network interfaces that are *UP*
show a filled circle in front of their name in the drop-down menu. Please note
that during installation you can either specify an IPv4 or IPv6 address, but not
both. To configure a dual stack node, add additional IP addresses after the
installation.
.. image:: images/screenshots/pbs-installer-progress.png
:target: _images/pbs-installer-progress.png
:align: left
:alt: Proxmox Backup Server Installer - Installation progress
The next step shows a summary of the previously selected options. Please
re-check every setting and use the *Previous* button if a setting needs to be
changed.
After clicking *Install*, the installer will begin to format the disks and copy
packages to the target disk(s). Please wait until this step has finished; then
remove the installation medium and restart your system.
.. image:: images/screenshots/pbs-installer-summary.png
:target: _images/pbs-installer-summary.png
:align: right
:alt: Proxmox Backup Server Installer - Installation summary
Copying the packages usually takes several minutes, mostly depending on the
speed of the installation medium and the target disk performance.
When copying and setting up the packages has finished, you can reboot the
server. This will be done automatically after a few seconds by default.
Installation Failure
^^^^^^^^^^^^^^^^^^^^
If the installation fails, check for specific errors on the second TTY
(``CTRL + ALT + F2``) and ensure that the system meets the
:ref:`minimum requirements <minimum_system_requirements>`.
If the installation is still not working, look at the :ref:`how to get help
chapter <get_help>`.
Accessing the Management Interface Post-Installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. image:: images/screenshots/pbs-gui-login-window.png
:target: _images/pbs-gui-login-window.png
:align: right
:alt: Proxmox Backup Server - Management interface login dialog
After a successful installation and reboot of the system you can use the Proxmox
Backup Server web interface for further configuration.
- Point your browser to the IP address given during the installation and port
8007, for example: https://pbs.yourdomain.tld:8007
- Log in using the ``root`` (realm *Linux PAM standard authentication*) username
and the password chosen during installation.
- Upload your subscription key to gain access to the Enterprise repository.
Otherwise, you will need to set up one of the public, less tested package
repositories to get updates for security fixes, bug fixes, and new features.
- Check the IP configuration and hostname.
- Check the timezone.
.. _advanced_lvm_options:
Advanced LVM Configuration Options
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The installer creates a Volume Group (VG) called ``pbs``, and additional
Logical Volumes (LVs) called ``root`` and ``swap``, if ``ext4`` or ``xfs`` is
used as the filesystem. To control the size of these volumes, use:
- *hdsize*
Defines the total hard disk size to be used. This way you can reserve free
space on the hard disk for further partitioning.
- *swapsize*
Defines the size of the ``swap`` volume. The default is the size of the
installed memory, minimum 4 GB and maximum 8 GB. The resulting value cannot
be greater than ``hdsize/8``.
If set to ``0``, no ``swap`` volume will be created.
- *minfree*
Defines the amount of free space that should be left in the LVM volume group
``pbs``. With more than 128GB storage available, the default is 16GB,
otherwise ``hdsize/8`` will be used.
.. _advanced_zfs_options:
Advanced ZFS Configuration Options
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The installer creates the ZFS pool ``rpool``, if ZFS is used. No swap space is
created but you can reserve some unpartitioned space on the install disks for
swap. You can also create a swap zvol after the installation, although this can
lead to problems (see :ref:`ZFS swap notes <zfs_swap>`).
- *ashift*
Defines the *ashift* value for the created pool. The *ashift* needs to be
set at least to the sector-size of the underlying disks (the sector-size is
2 to the power of *ashift*), or of any disk which might later be added to
the pool (for example, as the replacement of a defective disk).
- *compress*
Defines whether compression is enabled for ``rpool``.
- *checksum*
Defines which checksumming algorithm should be used for ``rpool``.
- *copies*
Defines the *copies* parameter for ``rpool``. Check the ``zfs(8)`` manpage
for the semantics, and why this does not replace redundancy on disk-level.
- *hdsize*
Defines the total hard disk size to be used. This is useful to save free
space on the hard disk(s) for further partitioning (for example, to create a
swap partition). *hdsize* is only honored for bootable disks, that is only
the first disk or mirror for RAID0, RAID1 or RAID10, and all disks in
RAID-Z[123].
ZFS Performance Tips
^^^^^^^^^^^^^^^^^^^^
ZFS works best with a lot of memory. If you intend to use ZFS, make sure to
have enough RAM available for it. A good calculation is 4GB plus 1GB RAM for
each TB of raw disk space; for example, a system with 40TB of raw disk space
should have around 44GB of RAM available for ZFS.
ZFS can use a dedicated drive as write cache, called the ZFS Intent Log (ZIL).
Use a fast drive (SSD) for it. It can be added after installation with the
following command:
.. code-block:: console
# zpool add <pool-name> log </dev/path_to_fast_ssd>
.. _nomodeset_kernel_param:
Adding the ``nomodeset`` Kernel Parameter
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Problems may arise on very old or very new hardware due to graphics drivers. If
the installation hangs during boot, you can try adding the ``nomodeset``
parameter. This prevents the Linux kernel from loading any graphics drivers and
forces it to continue using the BIOS/UEFI-provided framebuffer.
On the Proxmox Backup Server bootloader menu, navigate to *Install Proxmox
Backup Server (Console)* and press ``e`` to edit the entry. Using the arrow
keys, navigate to the line starting with ``linux``, move the cursor to the end
of that line and add the parameter ``nomodeset``, separated by a space from the
pre-existing last parameter.
Then press ``Ctrl-X`` or ``F10`` to boot the configuration.

View File

@ -2,6 +2,7 @@ include ../defines.mk
UNITS := \
proxmox-backup-daily-update.timer \
removable-device-attach@.service
DYNAMIC_UNITS := \
proxmox-backup-banner.service \

View File

@ -0,0 +1,8 @@
[Unit]
Description=Try to mount the removable device of a datastore with uuid '%i'.
After=proxmox-backup-proxy.service
Requires=proxmox-backup-proxy.service
[Service]
Type=simple
ExecStart=/usr/sbin/proxmox-backup-manager datastore uuid-mount %i
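Such a template unit is instantiated with the UUID as instance name. For
example, a manual mount attempt could be triggered like this, where ``<UUID>``
is a placeholder for the datastore device's UUID:
.. code-block:: console
  # systemctl start 'removable-device-attach@<UUID>.service'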

View File

@ -10,7 +10,7 @@ use tokio::net::TcpStream;
// Simple H2 client to test H2 download speed using h2server.rs
struct Process {
body: h2::RecvStream,
body: h2::legacy::RecvStream,
trailers: bool,
bytes: usize,
}
@ -50,11 +50,11 @@ impl Future for Process {
}
fn send_request(
mut client: h2::client::SendRequest<bytes::Bytes>,
mut client: h2::legacy::client::SendRequest<bytes::Bytes>,
) -> impl Future<Output = Result<usize, Error>> {
println!("sending request");
let request = http::Request::builder()
let request = hyper::http::Request::builder()
.uri("http://localhost/")
.body(())
.unwrap();
@ -78,7 +78,7 @@ async fn run() -> Result<(), Error> {
let conn = TcpStream::connect(std::net::SocketAddr::from(([127, 0, 0, 1], 8008))).await?;
conn.set_nodelay(true).unwrap();
let (client, h2) = h2::client::Builder::new()
let (client, h2) = h2::legacy::client::Builder::new()
.initial_connection_window_size(1024 * 1024 * 1024)
.initial_window_size(1024 * 1024 * 1024)
.max_frame_size(4 * 1024 * 1024)

View File

@ -10,7 +10,7 @@ use tokio::net::TcpStream;
// Simple H2 client to test H2 download speed using h2s-server.rs
struct Process {
body: h2::RecvStream,
body: h2::legacy::RecvStream,
trailers: bool,
bytes: usize,
}
@ -50,11 +50,11 @@ impl Future for Process {
}
fn send_request(
mut client: h2::client::SendRequest<bytes::Bytes>,
mut client: h2::legacy::client::SendRequest<bytes::Bytes>,
) -> impl Future<Output = Result<usize, Error>> {
println!("sending request");
let request = http::Request::builder()
let request = hyper::http::Request::builder()
.uri("http://localhost/")
.body(())
.unwrap();
@ -94,7 +94,7 @@ async fn run() -> Result<(), Error> {
.await
.map_err(|err| format_err!("connect failed - {}", err))?;
let (client, h2) = h2::client::Builder::new()
let (client, h2) = h2::legacy::client::Builder::new()
.initial_connection_window_size(1024 * 1024 * 1024)
.initial_window_size(1024 * 1024 * 1024)
.max_frame_size(4 * 1024 * 1024)

View File

@ -8,6 +8,19 @@ use tokio::net::{TcpListener, TcpStream};
use pbs_buildcfg::configdir;
#[derive(Clone, Copy)]
struct H2SExecutor;
impl<Fut> hyper::rt::Executor<Fut> for H2SExecutor
where
Fut: Future + Send + 'static,
Fut::Output: Send,
{
fn execute(&self, fut: Fut) {
tokio::spawn(fut);
}
}
fn main() -> Result<(), Error> {
proxmox_async::runtime::main(run())
}
@ -50,12 +63,11 @@ async fn handle_connection(socket: TcpStream, acceptor: Arc<SslAcceptor>) -> Res
stream.as_mut().accept().await?;
let mut http = hyper::server::conn::Http::new();
http.http2_only(true);
let mut http = hyper::server::conn::http2::Builder::new(H2SExecutor);
// increase window size: todo - find optimal size
let max_window_size = (1 << 31) - 2;
http.http2_initial_stream_window_size(max_window_size);
http.http2_initial_connection_window_size(max_window_size);
http.initial_stream_window_size(max_window_size);
http.initial_connection_window_size(max_window_size);
let service = hyper::service::service_fn(|_req: Request<Body>| {
println!("Got request");
@ -63,8 +75,11 @@ async fn handle_connection(socket: TcpStream, acceptor: Arc<SslAcceptor>) -> Res
let body = Body::from(buffer);
let response = Response::builder()
.status(http::StatusCode::OK)
.header(http::header::CONTENT_TYPE, "application/octet-stream")
.status(hyper::http::StatusCode::OK)
.header(
hyper::http::header::CONTENT_TYPE,
"application/octet-stream",
)
.body(body)
.unwrap();
future::ok::<_, Error>(response)

View File

@ -1,9 +1,24 @@
use std::future::Future;
use anyhow::Error;
use futures::*;
use hyper::{Body, Request, Response};
use tokio::net::{TcpListener, TcpStream};
#[derive(Clone, Copy)]
struct H2Executor;
impl<Fut> hyper::rt::Executor<Fut> for H2Executor
where
Fut: Future + Send + 'static,
Fut::Output: Send,
{
fn execute(&self, fut: Fut) {
tokio::spawn(fut);
}
}
fn main() -> Result<(), Error> {
proxmox_async::runtime::main(run())
}
@ -26,12 +41,11 @@ async fn run() -> Result<(), Error> {
async fn handle_connection(socket: TcpStream) -> Result<(), Error> {
socket.set_nodelay(true).unwrap();
let mut http = hyper::server::conn::Http::new();
http.http2_only(true);
let mut http = hyper::server::conn::http2::Builder::new(H2Executor);
// increase window size: todo - find optimal size
let max_window_size = (1 << 31) - 2;
http.http2_initial_stream_window_size(max_window_size);
http.http2_initial_connection_window_size(max_window_size);
http.initial_stream_window_size(max_window_size);
http.initial_connection_window_size(max_window_size);
let service = hyper::service::service_fn(|_req: Request<Body>| {
println!("Got request");
@ -39,8 +53,11 @@ async fn handle_connection(socket: TcpStream) -> Result<(), Error> {
let body = Body::from(buffer);
let response = Response::builder()
.status(http::StatusCode::OK)
.header(http::header::CONTENT_TYPE, "application/octet-stream")
.status(hyper::http::StatusCode::OK)
.header(
hyper::http::header::CONTENT_TYPE,
"application/octet-stream",
)
.body(body)
.unwrap();
future::ok::<_, Error>(response)

View File

@ -0,0 +1,91 @@
use std::{
fs::File,
io::Read,
time::{Duration, SystemTime},
};
use anyhow::{format_err, Error};
use pbs_tape::TapeWrite;
use proxmox_backup::tape::drive::{LtoTapeHandle, TapeDriver};
const URANDOM_PATH: &str = "/dev/urandom";
const CHUNK_SIZE: usize = 4 * 1024 * 1024; // 4 MiB
const LOG_LIMIT: usize = 4 * 1024 * 1024 * 1024; // 4 GiB
fn write_chunks<'a>(
mut writer: Box<dyn 'a + TapeWrite>,
blob_size: usize,
max_size: usize,
max_time: Duration,
) -> Result<(), Error> {
// prepare chunks in memory
let mut blob: Vec<u8> = vec![0u8; blob_size];
let mut file = File::open(URANDOM_PATH)?;
file.read_exact(&mut blob[..])?;
let start_time = SystemTime::now();
loop {
let iteration_time = SystemTime::now();
let mut count = 0;
let mut bytes_written = 0;
let mut idx = 0;
let mut incr_count = 0;
loop {
if writer.write_all(&blob)? {
eprintln!("LEOM reached");
break;
}
// modifying chunks a bit to mitigate compression/deduplication
blob[idx] = blob[idx].wrapping_add(1);
incr_count += 1;
if incr_count >= 256 {
incr_count = 0;
idx += 1;
}
count += 1;
bytes_written += blob_size;
if bytes_written > max_size {
break;
}
}
let elapsed = iteration_time.elapsed()?.as_secs_f64();
let elapsed_total = start_time.elapsed()?;
eprintln!(
"{:.2}s: wrote {} chunks ({:.2} MB at {:.2} MB/s, average: {:.2} MB/s)",
elapsed_total.as_secs_f64(),
count,
bytes_written as f64 / 1_000_000.0,
(bytes_written as f64) / (1_000_000.0 * elapsed),
(writer.bytes_written() as f64) / (1_000_000.0 * elapsed_total.as_secs_f64()),
);
if elapsed_total > max_time {
break;
}
}
Ok(())
}
fn main() -> Result<(), Error> {
let mut args = std::env::args_os();
args.next(); // binary name
let path = args.next().expect("no path to tape device given");
let file = File::open(path).map_err(|err| format_err!("could not open tape device: {err}"))?;
let mut drive = LtoTapeHandle::new(file)
.map_err(|err| format_err!("error creating drive handle: {err}"))?;
write_chunks(
drive
.write_file()
.map_err(|err| format_err!("error starting file write: {err}"))?,
CHUNK_SIZE,
LOG_LIMIT,
Duration::new(60 * 20, 0),
)
.map_err(|err| format_err!("error writing data to tape: {err}"))?;
Ok(())
}

View File

@ -5,10 +5,10 @@ extern crate proxmox_backup;
use anyhow::Error;
use std::io::{Read, Write};
use pbs_datastore::Chunker;
use pbs_datastore::{Chunker, ChunkerImpl};
struct ChunkWriter {
chunker: Chunker,
chunker: ChunkerImpl,
last_chunk: usize,
chunk_offset: usize,
@ -23,7 +23,7 @@ struct ChunkWriter {
impl ChunkWriter {
fn new(chunk_size: usize) -> Self {
ChunkWriter {
chunker: Chunker::new(chunk_size),
chunker: ChunkerImpl::new(chunk_size),
last_chunk: 0,
chunk_offset: 0,
chunk_count: 0,
@ -69,7 +69,8 @@ impl Write for ChunkWriter {
fn write(&mut self, data: &[u8]) -> std::result::Result<usize, std::io::Error> {
let chunker = &mut self.chunker;
let pos = chunker.scan(data);
let ctx = pbs_datastore::chunker::Context::default();
let pos = chunker.scan(data, &ctx);
if pos > 0 {
self.chunk_offset += pos;

View File

@ -1,6 +1,6 @@
extern crate proxmox_backup;
use pbs_datastore::Chunker;
use pbs_datastore::{Chunker, ChunkerImpl};
fn main() {
let mut buffer = Vec::new();
@ -12,7 +12,7 @@ fn main() {
buffer.push(byte);
}
}
let mut chunker = Chunker::new(64 * 1024);
let mut chunker = ChunkerImpl::new(64 * 1024);
let count = 5;
@ -23,8 +23,9 @@ fn main() {
for _i in 0..count {
let mut pos = 0;
let mut _last = 0;
let ctx = pbs_datastore::chunker::Context::default();
while pos < buffer.len() {
let k = chunker.scan(&buffer[pos..]);
let k = chunker.scan(&buffer[pos..], &ctx);
if k == 0 {
//println!("LAST {}", pos);
break;

View File

@ -1,9 +1,10 @@
use std::str::FromStr;
use anyhow::Error;
use futures::*;
extern crate proxmox_backup;
use pbs_client::ChunkStream;
use proxmox_human_byte::HumanByte;
// Test Chunker with real data read from a file.
//
@ -21,12 +22,22 @@ fn main() {
async fn run() -> Result<(), Error> {
let file = tokio::fs::File::open("random-test.dat").await?;
let stream = tokio_util::codec::FramedRead::new(file, tokio_util::codec::BytesCodec::new())
.map_ok(|bytes| bytes.to_vec())
.map_err(Error::from);
let mut args = std::env::args();
args.next();
let buffer_size = args.next().unwrap_or("8k".to_string());
let buffer_size = HumanByte::from_str(&buffer_size)?;
println!("Using buffer size {buffer_size}");
let stream = tokio_util::codec::FramedRead::with_capacity(
file,
tokio_util::codec::BytesCodec::new(),
buffer_size.as_u64() as usize,
)
.map_err(Error::from);
//let chunk_stream = FixedChunkStream::new(stream, 4*1024*1024);
let mut chunk_stream = ChunkStream::new(stream, None);
let mut chunk_stream = ChunkStream::new(stream, None, None, None);
let start_time = std::time::Instant::now();
@ -40,7 +51,7 @@ async fn run() -> Result<(), Error> {
repeat += 1;
stream_len += chunk.len();
println!("Got chunk {}", chunk.len());
//println!("Got chunk {}", chunk.len());
}
let speed =

View File

@ -1,24 +0,0 @@
[package]
name = "pbs-api-types"
version = "0.1.0"
authors.workspace = true
edition.workspace = true
description = "general API type helpers for PBS"
[dependencies]
anyhow.workspace = true
const_format.workspace = true
hex.workspace = true
lazy_static.workspace = true
percent-encoding.workspace = true
regex.workspace = true
serde.workspace = true
serde_plain.workspace = true
proxmox-auth-api = { workspace = true, features = [ "api-types" ] }
proxmox-human-byte.workspace = true
proxmox-lang.workspace=true
proxmox-schema = { workspace = true, features = [ "api-macro" ] }
proxmox-serde.workspace = true
proxmox-time.workspace = true
proxmox-uuid = { workspace = true, features = [ "serde" ] }

View File

@ -1,294 +0,0 @@
use std::str::FromStr;
use const_format::concatcp;
use serde::de::{value, IntoDeserializer};
use serde::{Deserialize, Serialize};
use proxmox_lang::constnamedbitmap;
use proxmox_schema::{
api, const_regex, ApiStringFormat, BooleanSchema, EnumEntry, Schema, StringSchema,
};
use crate::PROXMOX_SAFE_ID_REGEX_STR;
const_regex! {
pub ACL_PATH_REGEX = concatcp!(r"^(?:/|", r"(?:/", PROXMOX_SAFE_ID_REGEX_STR, ")+", r")$");
}
// define Privilege bitfield
constnamedbitmap! {
/// Contains a list of privilege name to privilege value mappings.
///
/// The names are used when displaying/persisting privileges anywhere, the values are used to
/// allow easy matching of privileges as bitflags.
PRIVILEGES: u64 => {
/// Sys.Audit allows knowing about the system and its status
PRIV_SYS_AUDIT("Sys.Audit");
/// Sys.Modify allows modifying system-level configuration
PRIV_SYS_MODIFY("Sys.Modify");
/// Sys.Modify allows to poweroff/reboot/.. the system
PRIV_SYS_POWER_MANAGEMENT("Sys.PowerManagement");
/// Datastore.Audit allows knowing about a datastore,
/// including reading the configuration entry and listing its contents
PRIV_DATASTORE_AUDIT("Datastore.Audit");
/// Datastore.Allocate allows creating or deleting datastores
PRIV_DATASTORE_ALLOCATE("Datastore.Allocate");
/// Datastore.Modify allows modifying a datastore and its contents
PRIV_DATASTORE_MODIFY("Datastore.Modify");
/// Datastore.Read allows reading arbitrary backup contents
PRIV_DATASTORE_READ("Datastore.Read");
/// Allows verifying a datastore
PRIV_DATASTORE_VERIFY("Datastore.Verify");
/// Datastore.Backup allows Datastore.Read|Verify and creating new snapshots,
/// but also requires backup ownership
PRIV_DATASTORE_BACKUP("Datastore.Backup");
/// Datastore.Prune allows deleting snapshots,
/// but also requires backup ownership
PRIV_DATASTORE_PRUNE("Datastore.Prune");
/// Permissions.Modify allows modifying ACLs
PRIV_PERMISSIONS_MODIFY("Permissions.Modify");
/// Remote.Audit allows reading remote.cfg and sync.cfg entries
PRIV_REMOTE_AUDIT("Remote.Audit");
/// Remote.Modify allows modifying remote.cfg
PRIV_REMOTE_MODIFY("Remote.Modify");
/// Remote.Read allows reading data from a configured `Remote`
PRIV_REMOTE_READ("Remote.Read");
/// Sys.Console allows access to the system's console
PRIV_SYS_CONSOLE("Sys.Console");
/// Tape.Audit allows reading tape backup configuration and status
PRIV_TAPE_AUDIT("Tape.Audit");
/// Tape.Modify allows modifying tape backup configuration
PRIV_TAPE_MODIFY("Tape.Modify");
/// Tape.Write allows writing tape media
PRIV_TAPE_WRITE("Tape.Write");
/// Tape.Read allows reading tape backup configuration and media contents
PRIV_TAPE_READ("Tape.Read");
/// Realm.Allocate allows viewing, creating, modifying and deleting realms
PRIV_REALM_ALLOCATE("Realm.Allocate");
}
}
pub fn privs_to_priv_names(privs: u64) -> Vec<&'static str> {
PRIVILEGES
.iter()
.fold(Vec::new(), |mut priv_names, (name, value)| {
if value & privs != 0 {
priv_names.push(name);
}
priv_names
})
}
/// Admin always has all privileges. It can do everything except a few actions
/// which are limited to the 'root@pam` superuser
pub const ROLE_ADMIN: u64 = u64::MAX;
/// NoAccess can be used to remove privileges from specific (sub-)paths
pub const ROLE_NO_ACCESS: u64 = 0;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Audit can view configuration and status information, but not modify it.
pub const ROLE_AUDIT: u64 = 0
| PRIV_SYS_AUDIT
| PRIV_DATASTORE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Admin can do anything on the datastore.
pub const ROLE_DATASTORE_ADMIN: u64 = 0
| PRIV_DATASTORE_AUDIT
| PRIV_DATASTORE_MODIFY
| PRIV_DATASTORE_READ
| PRIV_DATASTORE_VERIFY
| PRIV_DATASTORE_BACKUP
| PRIV_DATASTORE_PRUNE;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Reader can read/verify datastore content and do restore
pub const ROLE_DATASTORE_READER: u64 = 0
| PRIV_DATASTORE_AUDIT
| PRIV_DATASTORE_VERIFY
| PRIV_DATASTORE_READ;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Backup can do backup and restore, but no prune.
pub const ROLE_DATASTORE_BACKUP: u64 = 0
| PRIV_DATASTORE_BACKUP;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.PowerUser can do backup, restore, and prune.
pub const ROLE_DATASTORE_POWERUSER: u64 = 0
| PRIV_DATASTORE_PRUNE
| PRIV_DATASTORE_BACKUP;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Datastore.Audit can audit the datastore.
pub const ROLE_DATASTORE_AUDIT: u64 = 0
| PRIV_DATASTORE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Remote.Audit can audit the remote
pub const ROLE_REMOTE_AUDIT: u64 = 0
| PRIV_REMOTE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Remote.Admin can do anything on the remote.
pub const ROLE_REMOTE_ADMIN: u64 = 0
| PRIV_REMOTE_AUDIT
| PRIV_REMOTE_MODIFY
| PRIV_REMOTE_READ;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Remote.SyncOperator can do read and prune on the remote.
pub const ROLE_REMOTE_SYNC_OPERATOR: u64 = 0
| PRIV_REMOTE_AUDIT
| PRIV_REMOTE_READ;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Audit can audit the tape backup configuration and media content
pub const ROLE_TAPE_AUDIT: u64 = 0
| PRIV_TAPE_AUDIT;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Admin can do anything on the tape backup
pub const ROLE_TAPE_ADMIN: u64 = 0
| PRIV_TAPE_AUDIT
| PRIV_TAPE_MODIFY
| PRIV_TAPE_READ
| PRIV_TAPE_WRITE;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Operator can do tape backup and restore (but no configuration changes)
pub const ROLE_TAPE_OPERATOR: u64 = 0
| PRIV_TAPE_AUDIT
| PRIV_TAPE_READ
| PRIV_TAPE_WRITE;
#[rustfmt::skip]
#[allow(clippy::identity_op)]
/// Tape.Reader can do read and inspect tape content
pub const ROLE_TAPE_READER: u64 = 0
| PRIV_TAPE_AUDIT
| PRIV_TAPE_READ;
/// NoAccess can be used to remove privileges from specific (sub-)paths
pub const ROLE_NAME_NO_ACCESS: &str = "NoAccess";
#[api(
type_text: "<role>",
)]
#[repr(u64)]
#[derive(Serialize, Deserialize)]
/// Enum representing roles via their [PRIVILEGES] combination.
///
/// Since privileges are implemented as bitflags, each unique combination of privileges maps to a
/// single, unique `u64` value that is used in this enum definition.
pub enum Role {
/// Administrator
Admin = ROLE_ADMIN,
/// Auditor
Audit = ROLE_AUDIT,
/// Disable Access
NoAccess = ROLE_NO_ACCESS,
/// Datastore Administrator
DatastoreAdmin = ROLE_DATASTORE_ADMIN,
/// Datastore Reader (inspect datastore content and do restores)
DatastoreReader = ROLE_DATASTORE_READER,
/// Datastore Backup (backup and restore owned backups)
DatastoreBackup = ROLE_DATASTORE_BACKUP,
/// Datastore PowerUser (backup, restore and prune owned backup)
DatastorePowerUser = ROLE_DATASTORE_POWERUSER,
/// Datastore Auditor
DatastoreAudit = ROLE_DATASTORE_AUDIT,
/// Remote Auditor
RemoteAudit = ROLE_REMOTE_AUDIT,
/// Remote Administrator
RemoteAdmin = ROLE_REMOTE_ADMIN,
/// Synchronisation Operator
RemoteSyncOperator = ROLE_REMOTE_SYNC_OPERATOR,
/// Tape Auditor
TapeAudit = ROLE_TAPE_AUDIT,
/// Tape Administrator
TapeAdmin = ROLE_TAPE_ADMIN,
/// Tape Operator
TapeOperator = ROLE_TAPE_OPERATOR,
/// Tape Reader
TapeReader = ROLE_TAPE_READER,
}
impl FromStr for Role {
type Err = value::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Self::deserialize(s.into_deserializer())
}
}
pub const ACL_PATH_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&ACL_PATH_REGEX);
pub const ACL_PATH_SCHEMA: Schema = StringSchema::new("Access control path.")
.format(&ACL_PATH_FORMAT)
.min_length(1)
.max_length(128)
.schema();
pub const ACL_PROPAGATE_SCHEMA: Schema =
BooleanSchema::new("Allow to propagate (inherit) permissions.")
.default(true)
.schema();
pub const ACL_UGID_TYPE_SCHEMA: Schema = StringSchema::new("Type of 'ugid' property.")
.format(&ApiStringFormat::Enum(&[
EnumEntry::new("user", "User"),
EnumEntry::new("group", "Group"),
]))
.schema();
#[api(
properties: {
propagate: {
schema: ACL_PROPAGATE_SCHEMA,
},
path: {
schema: ACL_PATH_SCHEMA,
},
ugid_type: {
schema: ACL_UGID_TYPE_SCHEMA,
},
ugid: {
type: String,
description: "User or Group ID.",
},
roleid: {
type: Role,
}
}
)]
#[derive(Serialize, Deserialize, Clone, PartialEq)]
/// ACL list entry.
pub struct AclListItem {
pub path: String,
pub ugid: String,
pub ugid_type: String,
pub propagate: bool,
pub roleid: String,
}

View File

@ -1,98 +0,0 @@
use serde::{Deserialize, Serialize};
use proxmox_schema::{api, Updater};
use super::{
LdapMode, LDAP_DOMAIN_SCHEMA, REALM_ID_SCHEMA, SINGLE_LINE_COMMENT_SCHEMA,
SYNC_ATTRIBUTES_SCHEMA, SYNC_DEFAULTS_STRING_SCHEMA, USER_CLASSES_SCHEMA,
};
#[api(
properties: {
"realm": {
schema: REALM_ID_SCHEMA,
},
"comment": {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
"verify": {
optional: true,
default: false,
},
"sync-defaults-options": {
schema: SYNC_DEFAULTS_STRING_SCHEMA,
optional: true,
},
"sync-attributes": {
schema: SYNC_ATTRIBUTES_SCHEMA,
optional: true,
},
"user-classes" : {
optional: true,
schema: USER_CLASSES_SCHEMA,
},
"base-dn" : {
schema: LDAP_DOMAIN_SCHEMA,
optional: true,
},
"bind-dn" : {
schema: LDAP_DOMAIN_SCHEMA,
optional: true,
}
},
)]
#[derive(Serialize, Deserialize, Updater, Clone)]
#[serde(rename_all = "kebab-case")]
/// AD realm configuration properties.
pub struct AdRealmConfig {
#[updater(skip)]
pub realm: String,
/// AD server address
pub server1: String,
/// Fallback AD server address
#[serde(skip_serializing_if = "Option::is_none")]
pub server2: Option<String>,
/// AD server Port
#[serde(skip_serializing_if = "Option::is_none")]
pub port: Option<u16>,
/// Base domain name. Users are searched under this domain using a `subtree search`.
/// Expected to be set only internally to `defaultNamingContext` of the AD server, but can be
/// overridden if the need arises.
#[serde(skip_serializing_if = "Option::is_none")]
pub base_dn: Option<String>,
/// Comment
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
/// Connection security
#[serde(skip_serializing_if = "Option::is_none")]
pub mode: Option<LdapMode>,
/// Verify server certificate
#[serde(skip_serializing_if = "Option::is_none")]
pub verify: Option<bool>,
/// CA certificate to use for the server. The path can point to
/// either a file, or a directory. If it points to a file,
/// the PEM-formatted X.509 certificate stored at the path
/// will be added as a trusted certificate.
/// If the path points to a directory,
/// the directory replaces the system's default certificate
/// store at `/etc/ssl/certs` - Every file in the directory
/// will be loaded as a trusted certificate.
#[serde(skip_serializing_if = "Option::is_none")]
pub capath: Option<String>,
/// Bind domain to use for looking up users
#[serde(skip_serializing_if = "Option::is_none")]
pub bind_dn: Option<String>,
/// Custom LDAP search filter for user sync
#[serde(skip_serializing_if = "Option::is_none")]
pub filter: Option<String>,
/// Default options for AD sync
#[serde(skip_serializing_if = "Option::is_none")]
pub sync_defaults_options: Option<String>,
/// List of LDAP attributes to sync from AD to user config
#[serde(skip_serializing_if = "Option::is_none")]
pub sync_attributes: Option<String>,
/// User ``objectClass`` classes to sync
#[serde(skip_serializing_if = "Option::is_none")]
pub user_classes: Option<String>,
}

View File

@ -1,95 +0,0 @@
use std::fmt::{self, Display};
use anyhow::Error;
use serde::{Deserialize, Serialize};
use proxmox_schema::api;
#[api(default: "encrypt")]
#[derive(Copy, Clone, Debug, Eq, PartialEq, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
/// Defines whether data is encrypted (using an AEAD cipher), only signed, or neither.
pub enum CryptMode {
/// Don't encrypt.
None,
/// Encrypt.
Encrypt,
/// Only sign.
SignOnly,
}
#[derive(Debug, Eq, PartialEq, Hash, Clone, Deserialize, Serialize)]
#[serde(transparent)]
/// 32-byte fingerprint, usually calculated with SHA256.
pub struct Fingerprint {
#[serde(with = "bytes_as_fingerprint")]
bytes: [u8; 32],
}
impl Fingerprint {
pub fn new(bytes: [u8; 32]) -> Self {
Self { bytes }
}
pub fn bytes(&self) -> &[u8; 32] {
&self.bytes
}
pub fn signature(&self) -> String {
as_fingerprint(&self.bytes)
}
}
/// Display as short key ID
impl Display for Fingerprint {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", as_fingerprint(&self.bytes[0..8]))
}
}
impl std::str::FromStr for Fingerprint {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Error> {
let mut tmp = s.to_string();
tmp.retain(|c| c != ':');
let mut bytes = [0u8; 32];
hex::decode_to_slice(&tmp, &mut bytes)?;
Ok(Fingerprint::new(bytes))
}
}
fn as_fingerprint(bytes: &[u8]) -> String {
hex::encode(bytes)
.as_bytes()
.chunks(2)
.map(|v| unsafe { std::str::from_utf8_unchecked(v) }) // it's a hex string
.collect::<Vec<&str>>()
.join(":")
}
pub mod bytes_as_fingerprint {
use std::mem::MaybeUninit;
use serde::{Deserialize, Deserializer, Serializer};
pub fn serialize<S>(bytes: &[u8; 32], serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let s = super::as_fingerprint(bytes);
serializer.serialize_str(&s)
}
pub fn deserialize<'de, D>(deserializer: D) -> Result<[u8; 32], D::Error>
where
D: Deserializer<'de>,
{
// TODO: more efficiently implement with a Visitor implementing visit_str using split() and
// hex::decode by-byte
let mut s = String::deserialize(deserializer)?;
s.retain(|c| c != ':');
let mut out = MaybeUninit::<[u8; 32]>::uninit();
hex::decode_to_slice(s.as_bytes(), unsafe { &mut (*out.as_mut_ptr())[..] })
.map_err(serde::de::Error::custom)?;
Ok(unsafe { out.assume_init() })
}
}

File diff suppressed because it is too large

View File

@ -1,30 +0,0 @@
use serde::{Deserialize, Serialize};
use proxmox_schema::api;
#[api]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// General status information about a running VM file-restore daemon
pub struct RestoreDaemonStatus {
/// VM uptime in seconds
pub uptime: i64,
/// time left until auto-shutdown, keep in mind that this is useless when 'keep-timeout' is
/// not set, as then the status call will have reset the timer before returning the value
pub timeout: i64,
}
#[api]
#[derive(Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "kebab-case")]
/// The desired format of the result.
pub enum FileRestoreFormat {
/// Plain file (only works for single files)
Plain,
/// PXAR archive
Pxar,
/// ZIP archive
Zip,
/// TAR archive
Tar,
}

View File

@ -1,799 +0,0 @@
use std::str::FromStr;
use anyhow::bail;
use const_format::concatcp;
use regex::Regex;
use serde::{Deserialize, Serialize};
use proxmox_schema::*;
use crate::{
Authid, BackupNamespace, BackupType, NotificationMode, RateLimitConfig, Userid,
BACKUP_GROUP_SCHEMA, BACKUP_NAMESPACE_SCHEMA, BACKUP_NS_RE, DATASTORE_SCHEMA,
DRIVE_NAME_SCHEMA, MEDIA_POOL_NAME_SCHEMA, NS_MAX_DEPTH_REDUCED_SCHEMA, PROXMOX_SAFE_ID_FORMAT,
PROXMOX_SAFE_ID_REGEX_STR, REMOTE_ID_SCHEMA, SINGLE_LINE_COMMENT_SCHEMA,
};
const_regex! {
/// Regex for verification jobs 'DATASTORE:ACTUAL_JOB_ID'
pub VERIFICATION_JOB_WORKER_ID_REGEX = concatcp!(r"^(", PROXMOX_SAFE_ID_REGEX_STR, r"):");
/// Regex for sync jobs '(REMOTE|\-):REMOTE_DATASTORE:LOCAL_DATASTORE:(?:LOCAL_NS_ANCHOR:)ACTUAL_JOB_ID'
pub SYNC_JOB_WORKER_ID_REGEX = concatcp!(r"^(", PROXMOX_SAFE_ID_REGEX_STR, r"|\-):(", PROXMOX_SAFE_ID_REGEX_STR, r"):(", PROXMOX_SAFE_ID_REGEX_STR, r")(?::(", BACKUP_NS_RE, r"))?:");
}
pub const JOB_ID_SCHEMA: Schema = StringSchema::new("Job ID.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const SYNC_SCHEDULE_SCHEMA: Schema = StringSchema::new("Run sync job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(
proxmox_time::verify_calendar_event,
))
.type_text("<calendar-event>")
.schema();
pub const GC_SCHEDULE_SCHEMA: Schema =
StringSchema::new("Run garbage collection job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(
proxmox_time::verify_calendar_event,
))
.type_text("<calendar-event>")
.schema();
pub const PRUNE_SCHEDULE_SCHEMA: Schema = StringSchema::new("Run prune job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(
proxmox_time::verify_calendar_event,
))
.type_text("<calendar-event>")
.schema();
pub const VERIFICATION_SCHEDULE_SCHEMA: Schema =
StringSchema::new("Run verify job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(
proxmox_time::verify_calendar_event,
))
.type_text("<calendar-event>")
.schema();
pub const REMOVE_VANISHED_BACKUPS_SCHEMA: Schema = BooleanSchema::new(
"Delete vanished backups. This remove the local copy if the remote backup was deleted.",
)
.default(false)
.schema();
#[api(
properties: {
"next-run": {
description: "Estimated time of the next run (UNIX epoch).",
optional: true,
type: Integer,
},
"last-run-state": {
description: "Result of the last run.",
optional: true,
type: String,
},
"last-run-upid": {
description: "Task UPID of the last run.",
optional: true,
type: String,
},
"last-run-endtime": {
description: "Endtime of the last run.",
optional: true,
type: Integer,
},
}
)]
#[derive(Serialize, Deserialize, Default, Clone, PartialEq)]
#[serde(rename_all = "kebab-case")]
/// Job Scheduling Status
pub struct JobScheduleStatus {
#[serde(skip_serializing_if = "Option::is_none")]
pub next_run: Option<i64>,
#[serde(skip_serializing_if = "Option::is_none")]
pub last_run_state: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub last_run_upid: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub last_run_endtime: Option<i64>,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// When do we send notifications
pub enum Notify {
/// Never send notification
Never,
/// Send notifications for failed and successful jobs
Always,
/// Send notifications for failed jobs only
Error,
}
#[api(
properties: {
gc: {
type: Notify,
optional: true,
},
verify: {
type: Notify,
optional: true,
},
sync: {
type: Notify,
optional: true,
},
prune: {
type: Notify,
optional: true,
},
},
)]
#[derive(Debug, Serialize, Deserialize)]
/// Datastore notify settings
pub struct DatastoreNotify {
/// Garbage collection settings
#[serde(skip_serializing_if = "Option::is_none")]
pub gc: Option<Notify>,
/// Verify job setting
#[serde(skip_serializing_if = "Option::is_none")]
pub verify: Option<Notify>,
/// Sync job setting
#[serde(skip_serializing_if = "Option::is_none")]
pub sync: Option<Notify>,
/// Prune job setting
#[serde(skip_serializing_if = "Option::is_none")]
pub prune: Option<Notify>,
}
pub const DATASTORE_NOTIFY_STRING_SCHEMA: Schema = StringSchema::new(
"Datastore notification setting, enum can be one of 'always', 'never', or 'error'.",
)
.format(&ApiStringFormat::PropertyString(
&DatastoreNotify::API_SCHEMA,
))
.schema();
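// Usage sketch: this schema accepts a property string over `DatastoreNotify`,
// i.e. comma-separated key=value pairs (hypothetical value):
//
//     "gc=error,verify=never,sync=always,prune=error"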
pub const IGNORE_VERIFIED_BACKUPS_SCHEMA: Schema = BooleanSchema::new(
"Do not verify backups that are already verified if their verification is not outdated.",
)
.default(true)
.schema();
pub const VERIFICATION_OUTDATED_AFTER_SCHEMA: Schema =
IntegerSchema::new("Days after that a verification becomes outdated. (0 is deprecated)'")
.minimum(0)
.schema();
#[api(
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
store: {
schema: DATASTORE_SCHEMA,
},
"ignore-verified": {
optional: true,
schema: IGNORE_VERIFIED_BACKUPS_SCHEMA,
},
"outdated-after": {
optional: true,
schema: VERIFICATION_OUTDATED_AFTER_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
schedule: {
optional: true,
schema: VERIFICATION_SCHEDULE_SCHEMA,
},
ns: {
optional: true,
schema: BACKUP_NAMESPACE_SCHEMA,
},
"max-depth": {
optional: true,
schema: crate::NS_MAX_DEPTH_SCHEMA,
},
}
)]
#[derive(Serialize, Deserialize, Updater, Clone, PartialEq)]
#[serde(rename_all = "kebab-case")]
/// Verification Job
pub struct VerificationJobConfig {
/// unique ID to address this job
#[updater(skip)]
pub id: String,
/// the datastore ID this verification job affects
pub store: String,
#[serde(skip_serializing_if = "Option::is_none")]
/// If not set to false, check the age of the last snapshot verification to filter
/// out recently verified ones, depending on the 'outdated_after' configuration.
pub ignore_verified: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
/// Reverify snapshots after X days, never if 0. Ignored if 'ignore_verified' is false.
pub outdated_after: Option<i64>,
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
/// when to schedule this job in calendar event notation
pub schedule: Option<String>,
#[serde(skip_serializing_if = "Option::is_none", default)]
/// on which backup namespace to run the verification recursively
pub ns: Option<BackupNamespace>,
#[serde(skip_serializing_if = "Option::is_none", default)]
/// how deep the verify should go from the `ns` level downwards. Passing 0 verifies only the
/// snapshots on the same level as the passed `ns`, or the datastore root if none.
pub max_depth: Option<usize>,
}
impl VerificationJobConfig {
pub fn acl_path(&self) -> Vec<&str> {
match self.ns.as_ref() {
Some(ns) => ns.acl_path(&self.store),
None => vec!["datastore", &self.store],
}
}
}
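// Sketch of the resulting ACL paths (hypothetical values): for store "tank"
// without a namespace this yields ["datastore", "tank"]; with a namespace set,
// the namespace's own `acl_path` extends the datastore path with the
// namespace components.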
#[api(
properties: {
config: {
type: VerificationJobConfig,
},
status: {
type: JobScheduleStatus,
},
},
)]
#[derive(Serialize, Deserialize, Clone, PartialEq)]
#[serde(rename_all = "kebab-case")]
/// Status of Verification Job
pub struct VerificationJobStatus {
#[serde(flatten)]
pub config: VerificationJobConfig,
#[serde(flatten)]
pub status: JobScheduleStatus,
}
#[api(
properties: {
store: {
schema: DATASTORE_SCHEMA,
},
pool: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
drive: {
schema: DRIVE_NAME_SCHEMA,
},
"eject-media": {
description: "Eject media upon job completion.",
type: bool,
optional: true,
},
"export-media-set": {
description: "Export media set upon job completion.",
type: bool,
optional: true,
},
"latest-only": {
description: "Backup latest snapshots only.",
type: bool,
optional: true,
},
"notify-user": {
optional: true,
type: Userid,
},
"group-filter": {
schema: GROUP_FILTER_LIST_SCHEMA,
optional: true,
},
ns: {
type: BackupNamespace,
optional: true,
},
"max-depth": {
schema: crate::NS_MAX_DEPTH_SCHEMA,
optional: true,
},
}
)]
#[derive(Serialize, Deserialize, Clone, Updater, PartialEq)]
#[serde(rename_all = "kebab-case")]
/// Tape Backup Job Setup
pub struct TapeBackupJobSetup {
pub store: String,
pub pool: String,
pub drive: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub eject_media: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub export_media_set: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub latest_only: Option<bool>,
/// Send job email notification to this user
#[serde(skip_serializing_if = "Option::is_none")]
pub notify_user: Option<Userid>,
/// How to send notifications for this job.
#[serde(skip_serializing_if = "Option::is_none")]
pub notification_mode: Option<NotificationMode>,
#[serde(skip_serializing_if = "Option::is_none")]
pub group_filter: Option<Vec<GroupFilter>>,
#[serde(skip_serializing_if = "Option::is_none", default)]
pub ns: Option<BackupNamespace>,
#[serde(skip_serializing_if = "Option::is_none", default)]
pub max_depth: Option<usize>,
}
#[api(
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
setup: {
type: TapeBackupJobSetup,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
schedule: {
optional: true,
schema: SYNC_SCHEDULE_SCHEMA,
},
}
)]
#[derive(Serialize, Deserialize, Clone, Updater, PartialEq)]
#[serde(rename_all = "kebab-case")]
/// Tape Backup Job
pub struct TapeBackupJobConfig {
#[updater(skip)]
pub id: String,
#[serde(flatten)]
pub setup: TapeBackupJobSetup,
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub schedule: Option<String>,
}
#[api(
properties: {
config: {
type: TapeBackupJobConfig,
},
status: {
type: JobScheduleStatus,
},
},
)]
#[derive(Serialize, Deserialize, Clone, PartialEq)]
#[serde(rename_all = "kebab-case")]
/// Status of Tape Backup Job
pub struct TapeBackupJobStatus {
#[serde(flatten)]
pub config: TapeBackupJobConfig,
#[serde(flatten)]
pub status: JobScheduleStatus,
/// Next tape used (best guess)
#[serde(skip_serializing_if = "Option::is_none")]
pub next_media_label: Option<String>,
}
#[derive(Clone, Debug)]
/// Filter for matching `BackupGroup`s, for use with `BackupGroup::filter`.
pub enum FilterType {
/// BackupGroup type - either `vm`, `ct`, or `host`.
BackupType(BackupType),
/// Full identifier of BackupGroup, including type
Group(String),
/// A regular expression matched against the full identifier of the BackupGroup
Regex(Regex),
}
impl PartialEq for FilterType {
fn eq(&self, other: &Self) -> bool {
match (self, other) {
(Self::BackupType(a), Self::BackupType(b)) => a == b,
(Self::Group(a), Self::Group(b)) => a == b,
(Self::Regex(a), Self::Regex(b)) => a.as_str() == b.as_str(),
_ => false,
}
}
}
impl std::str::FromStr for FilterType {
type Err = anyhow::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Ok(match s.split_once(':') {
Some(("group", value)) => BACKUP_GROUP_SCHEMA.parse_simple_value(value).map(|_| FilterType::Group(value.to_string()))?,
Some(("type", value)) => FilterType::BackupType(value.parse()?),
Some(("regex", value)) => FilterType::Regex(Regex::new(value)?),
Some((ty, _value)) => bail!("expected 'group', 'type' or 'regex' prefix, got '{}'", ty),
None => bail!("input doesn't match expected format '<group:GROUP||type:<vm|ct|host>|regex:REGEX>'"),
})
}
}
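// Illustrative parse results (hypothetical values, assuming this module's
// imports):
//
//     use std::str::FromStr;
//     FilterType::from_str("type:vm")?;            // FilterType::BackupType(BackupType::Vm)
//     FilterType::from_str("group:vm/100")?;       // FilterType::Group, validated against
//                                                  // BACKUP_GROUP_SCHEMA before acceptance
//     FilterType::from_str("regex:^ct/1\\d\\d$")?; // FilterType::Regex
//     assert!(FilterType::from_str("foo:bar").is_err()); // unknown prefix
//     assert!(FilterType::from_str("vm").is_err());      // missing prefix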
// used for serializing below, caution!
impl std::fmt::Display for FilterType {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
FilterType::BackupType(backup_type) => write!(f, "type:{}", backup_type),
FilterType::Group(backup_group) => write!(f, "group:{}", backup_group),
FilterType::Regex(regex) => write!(f, "regex:{}", regex.as_str()),
}
}
}
#[derive(Clone, Debug)]
pub struct GroupFilter {
pub is_exclude: bool,
pub filter_type: FilterType,
}
impl PartialEq for GroupFilter {
fn eq(&self, other: &Self) -> bool {
self.filter_type == other.filter_type && self.is_exclude == other.is_exclude
}
}
impl Eq for GroupFilter {}
impl std::str::FromStr for GroupFilter {
type Err = anyhow::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let (is_exclude, type_str) = match s.split_once(':') {
Some(("include", value)) => (false, value),
Some(("exclude", value)) => (true, value),
_ => (false, s),
};
Ok(GroupFilter {
is_exclude,
filter_type: type_str.parse()?,
})
}
}
// used for serializing below, caution!
impl std::fmt::Display for GroupFilter {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
if self.is_exclude {
f.write_str("exclude:")?;
}
std::fmt::Display::fmt(&self.filter_type, f)
}
}
proxmox_serde::forward_deserialize_to_from_str!(GroupFilter);
proxmox_serde::forward_serialize_to_display!(GroupFilter);
fn verify_group_filter(input: &str) -> Result<(), anyhow::Error> {
GroupFilter::from_str(input).map(|_| ())
}
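// A short sketch of the include/exclude wrapper (hypothetical values):
//
//     let f: GroupFilter = "exclude:type:ct".parse()?;
//     assert!(f.is_exclude);
//     assert_eq!(f.to_string(), "exclude:type:ct"); // Display round-trips for serializing
//
//     let f: GroupFilter = "type:vm".parse()?; // without a prefix, include is the default
//     assert!(!f.is_exclude);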
pub const GROUP_FILTER_SCHEMA: Schema = StringSchema::new(
"Group filter based on group identifier ('group:GROUP'), group type ('type:<vm|ct|host>'), or regex ('regex:RE'). Can be inverted by prepending 'exclude:'.")
.format(&ApiStringFormat::VerifyFn(verify_group_filter))
.type_text("[<exclude:|include:>]<type:<vm|ct|host>|group:GROUP|regex:RE>")
.schema();
pub const GROUP_FILTER_LIST_SCHEMA: Schema =
ArraySchema::new("List of group filters.", &GROUP_FILTER_SCHEMA).schema();
pub const TRANSFER_LAST_SCHEMA: Schema =
IntegerSchema::new("Limit transfer to last N snapshots (per group), skipping others")
.minimum(1)
.schema();
#[api(
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
store: {
schema: DATASTORE_SCHEMA,
},
ns: {
type: BackupNamespace,
optional: true,
},
"owner": {
type: Authid,
optional: true,
},
remote: {
schema: REMOTE_ID_SCHEMA,
optional: true,
},
"remote-store": {
schema: DATASTORE_SCHEMA,
},
"remote-ns": {
type: BackupNamespace,
optional: true,
},
"remove-vanished": {
schema: REMOVE_VANISHED_BACKUPS_SCHEMA,
optional: true,
},
"max-depth": {
schema: NS_MAX_DEPTH_REDUCED_SCHEMA,
optional: true,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
limit: {
type: RateLimitConfig,
},
schedule: {
optional: true,
schema: SYNC_SCHEDULE_SCHEMA,
},
"group-filter": {
schema: GROUP_FILTER_LIST_SCHEMA,
optional: true,
},
"transfer-last": {
schema: TRANSFER_LAST_SCHEMA,
optional: true,
},
}
)]
#[derive(Serialize, Deserialize, Clone, Updater, PartialEq)]
#[serde(rename_all = "kebab-case")]
/// Sync Job
pub struct SyncJobConfig {
#[updater(skip)]
pub id: String,
pub store: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub ns: Option<BackupNamespace>,
#[serde(skip_serializing_if = "Option::is_none")]
pub owner: Option<Authid>,
#[serde(skip_serializing_if = "Option::is_none")]
/// None implies local sync.
pub remote: Option<String>,
pub remote_store: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub remote_ns: Option<BackupNamespace>,
#[serde(skip_serializing_if = "Option::is_none")]
pub remove_vanished: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub max_depth: Option<usize>,
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub schedule: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub group_filter: Option<Vec<GroupFilter>>,
#[serde(flatten)]
pub limit: RateLimitConfig,
#[serde(skip_serializing_if = "Option::is_none")]
pub transfer_last: Option<usize>,
}
impl SyncJobConfig {
pub fn acl_path(&self) -> Vec<&str> {
match self.ns.as_ref() {
Some(ns) => ns.acl_path(&self.store),
None => vec!["datastore", &self.store],
}
}
}
#[api(
properties: {
config: {
type: SyncJobConfig,
},
status: {
type: JobScheduleStatus,
},
},
)]
#[derive(Serialize, Deserialize, Clone, PartialEq)]
#[serde(rename_all = "kebab-case")]
/// Status of Sync Job
pub struct SyncJobStatus {
#[serde(flatten)]
pub config: SyncJobConfig,
#[serde(flatten)]
pub status: JobScheduleStatus,
}
/// These are sometimes used in the API without `ns`/`max-depth`, specifically in the API
/// call to prune a specific group, where `max-depth` makes no sense.
#[api(
properties: {
"keep-last": {
schema: crate::PRUNE_SCHEMA_KEEP_LAST,
optional: true,
},
"keep-hourly": {
schema: crate::PRUNE_SCHEMA_KEEP_HOURLY,
optional: true,
},
"keep-daily": {
schema: crate::PRUNE_SCHEMA_KEEP_DAILY,
optional: true,
},
"keep-weekly": {
schema: crate::PRUNE_SCHEMA_KEEP_WEEKLY,
optional: true,
},
"keep-monthly": {
schema: crate::PRUNE_SCHEMA_KEEP_MONTHLY,
optional: true,
},
"keep-yearly": {
schema: crate::PRUNE_SCHEMA_KEEP_YEARLY,
optional: true,
},
}
)]
#[derive(Serialize, Deserialize, Default, Updater, Clone, PartialEq)]
#[serde(rename_all = "kebab-case")]
/// Common pruning options
pub struct KeepOptions {
#[serde(skip_serializing_if = "Option::is_none")]
pub keep_last: Option<u64>,
#[serde(skip_serializing_if = "Option::is_none")]
pub keep_hourly: Option<u64>,
#[serde(skip_serializing_if = "Option::is_none")]
pub keep_daily: Option<u64>,
#[serde(skip_serializing_if = "Option::is_none")]
pub keep_weekly: Option<u64>,
#[serde(skip_serializing_if = "Option::is_none")]
pub keep_monthly: Option<u64>,
#[serde(skip_serializing_if = "Option::is_none")]
pub keep_yearly: Option<u64>,
}
impl KeepOptions {
pub fn keeps_something(&self) -> bool {
self.keep_last.unwrap_or(0)
+ self.keep_hourly.unwrap_or(0)
+ self.keep_daily.unwrap_or(0)
+ self.keep_weekly.unwrap_or(0)
+ self.keep_monthly.unwrap_or(0)
+ self.keep_yearly.unwrap_or(0)
> 0
}
}
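// Semantics sketch (hypothetical values): a configuration "keeps something" as
// soon as any keep-* option is set to a value greater than zero.
//
//     let mut opts = KeepOptions::default();
//     assert!(!opts.keeps_something()); // nothing set
//     opts.keep_daily = Some(7);
//     assert!(opts.keeps_something());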
#[api(
properties: {
keep: {
type: KeepOptions,
},
ns: {
type: BackupNamespace,
optional: true,
},
"max-depth": {
schema: NS_MAX_DEPTH_REDUCED_SCHEMA,
optional: true,
},
}
)]
#[derive(Serialize, Deserialize, Default, Updater, Clone, PartialEq)]
#[serde(rename_all = "kebab-case")]
/// Prune job options (keep settings plus optional namespace and depth)
pub struct PruneJobOptions {
#[serde(flatten)]
pub keep: KeepOptions,
/// The (optional) recursion depth
#[serde(skip_serializing_if = "Option::is_none")]
pub max_depth: Option<usize>,
#[serde(skip_serializing_if = "Option::is_none")]
pub ns: Option<BackupNamespace>,
}
impl PruneJobOptions {
pub fn keeps_something(&self) -> bool {
self.keep.keeps_something()
}
pub fn acl_path<'a>(&'a self, store: &'a str) -> Vec<&'a str> {
match &self.ns {
Some(ns) => ns.acl_path(store),
None => vec!["datastore", store],
}
}
}
#[api(
properties: {
disable: {
type: Boolean,
optional: true,
default: false,
},
id: {
schema: JOB_ID_SCHEMA,
},
store: {
schema: DATASTORE_SCHEMA,
},
schedule: {
schema: PRUNE_SCHEDULE_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
options: {
type: PruneJobOptions,
},
},
)]
#[derive(Deserialize, Serialize, Updater, Clone, PartialEq)]
#[serde(rename_all = "kebab-case")]
/// Prune configuration.
pub struct PruneJobConfig {
/// unique ID to address this job
#[updater(skip)]
pub id: String,
pub store: String,
/// Disable this job.
#[serde(default, skip_serializing_if = "is_false")]
#[updater(serde(skip_serializing_if = "Option::is_none"))]
pub disable: bool,
pub schedule: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
#[serde(flatten)]
pub options: PruneJobOptions,
}
impl PruneJobConfig {
pub fn acl_path(&self) -> Vec<&str> {
self.options.acl_path(&self.store)
}
}
fn is_false(b: &bool) -> bool {
!b
}
#[api(
properties: {
config: {
type: PruneJobConfig,
},
status: {
type: JobScheduleStatus,
},
},
)]
#[derive(Serialize, Deserialize, Clone, PartialEq)]
#[serde(rename_all = "kebab-case")]
/// Status of prune job
pub struct PruneJobStatus {
#[serde(flatten)]
pub config: PruneJobConfig,
#[serde(flatten)]
pub status: JobScheduleStatus,
}

View File

@ -1,55 +0,0 @@
use serde::{Deserialize, Serialize};
use proxmox_schema::api;
use crate::CERT_FINGERPRINT_SHA256_SCHEMA;
#[api(default: "scrypt")]
#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
#[serde(rename_all = "lowercase")]
/// Key derivation function for password protected encryption keys.
pub enum Kdf {
/// Do not encrypt the key.
None,
/// Encrypt the key with a password using SCrypt.
Scrypt,
/// Encrypt the key with a password using PBKDF2.
PBKDF2,
}
impl Default for Kdf {
#[inline]
fn default() -> Self {
Kdf::Scrypt
}
}
#[api(
properties: {
kdf: {
type: Kdf,
},
fingerprint: {
schema: CERT_FINGERPRINT_SHA256_SCHEMA,
optional: true,
},
},
)]
#[derive(Deserialize, Serialize)]
/// Encryption Key Information
pub struct KeyInfo {
/// Path to key (if stored in a file)
#[serde(skip_serializing_if = "Option::is_none")]
pub path: Option<String>,
pub kdf: Kdf,
/// Key creation time
pub created: i64,
/// Key modification time
pub modified: i64,
/// Key fingerprint
#[serde(skip_serializing_if = "Option::is_none")]
pub fingerprint: Option<String>,
/// Password hint
#[serde(skip_serializing_if = "Option::is_none")]
pub hint: Option<String>,
}

View File

@ -1,208 +0,0 @@
use serde::{Deserialize, Serialize};
use proxmox_schema::{api, ApiStringFormat, ApiType, ArraySchema, Schema, StringSchema, Updater};
use super::{REALM_ID_SCHEMA, SINGLE_LINE_COMMENT_SCHEMA};
#[api()]
#[derive(Copy, Clone, PartialEq, Eq, Serialize, Deserialize, Default)]
/// LDAP connection type
pub enum LdapMode {
/// Plaintext LDAP connection
#[serde(rename = "ldap")]
#[default]
Ldap,
/// Secure STARTTLS connection
#[serde(rename = "ldap+starttls")]
StartTls,
/// Secure LDAPS connection
#[serde(rename = "ldaps")]
Ldaps,
}
#[api(
properties: {
"realm": {
schema: REALM_ID_SCHEMA,
},
"comment": {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
"verify": {
optional: true,
default: false,
},
"sync-defaults-options": {
schema: SYNC_DEFAULTS_STRING_SCHEMA,
optional: true,
},
"sync-attributes": {
schema: SYNC_ATTRIBUTES_SCHEMA,
optional: true,
},
"user-classes" : {
optional: true,
schema: USER_CLASSES_SCHEMA,
},
"base-dn" : {
schema: LDAP_DOMAIN_SCHEMA,
},
"bind-dn" : {
schema: LDAP_DOMAIN_SCHEMA,
optional: true,
}
},
)]
#[derive(Serialize, Deserialize, Updater, Clone)]
#[serde(rename_all = "kebab-case")]
/// LDAP configuration properties.
pub struct LdapRealmConfig {
#[updater(skip)]
pub realm: String,
/// LDAP server address
pub server1: String,
/// Fallback LDAP server address
#[serde(skip_serializing_if = "Option::is_none")]
pub server2: Option<String>,
/// Port
#[serde(skip_serializing_if = "Option::is_none")]
pub port: Option<u16>,
/// Base domain name. Users are searched under this domain using a `subtree search`.
pub base_dn: String,
/// Username attribute. Used to map a ``userid`` to an LDAP ``dn``.
pub user_attr: String,
/// Comment
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
/// Connection security
#[serde(skip_serializing_if = "Option::is_none")]
pub mode: Option<LdapMode>,
/// Verify server certificate
#[serde(skip_serializing_if = "Option::is_none")]
pub verify: Option<bool>,
/// CA certificate to use for the server. The path can point to
/// either a file, or a directory. If it points to a file,
/// the PEM-formatted X.509 certificate stored at the path
/// will be added as a trusted certificate.
/// If the path points to a directory,
/// the directory replaces the system's default certificate
/// store at `/etc/ssl/certs`: every file in the directory
/// will be loaded as a trusted certificate.
#[serde(skip_serializing_if = "Option::is_none")]
pub capath: Option<String>,
/// Bind domain to use for looking up users
#[serde(skip_serializing_if = "Option::is_none")]
pub bind_dn: Option<String>,
/// Custom LDAP search filter for user sync
#[serde(skip_serializing_if = "Option::is_none")]
pub filter: Option<String>,
/// Default options for LDAP sync
#[serde(skip_serializing_if = "Option::is_none")]
pub sync_defaults_options: Option<String>,
/// List of attributes to sync from LDAP to user config
#[serde(skip_serializing_if = "Option::is_none")]
pub sync_attributes: Option<String>,
/// User ``objectClass`` classes to sync
#[serde(skip_serializing_if = "Option::is_none")]
pub user_classes: Option<String>,
}
#[api(
properties: {
"remove-vanished": {
optional: true,
schema: REMOVE_VANISHED_SCHEMA,
},
},
)]
#[derive(Serialize, Deserialize, Updater, Default, Debug)]
#[serde(rename_all = "kebab-case")]
/// Default options for LDAP synchronization runs
pub struct SyncDefaultsOptions {
/// How to handle vanished properties/users
pub remove_vanished: Option<String>,
/// Enable new users after sync
pub enable_new: Option<bool>,
}
#[api()]
#[derive(Serialize, Deserialize, Debug, PartialEq, Eq)]
#[serde(rename_all = "kebab-case")]
/// remove-vanished options
pub enum RemoveVanished {
/// Delete ACLs for vanished users
Acl,
/// Remove vanished users
Entry,
/// Remove vanished properties from users (e.g. email)
Properties,
}
pub const LDAP_DOMAIN_SCHEMA: Schema = StringSchema::new("LDAP Domain").schema();
pub const SYNC_DEFAULTS_STRING_SCHEMA: Schema = StringSchema::new("sync defaults options")
.format(&ApiStringFormat::PropertyString(
&SyncDefaultsOptions::API_SCHEMA,
))
.schema();
const REMOVE_VANISHED_DESCRIPTION: &str =
"A semicolon-seperated list of things to remove when they or the user \
vanishes during user synchronization. The following values are possible: ``entry`` removes the \
user when not returned from the sync; ``properties`` removes any \
properties on existing user that do not appear in the source. \
``acl`` removes ACLs when the user is not returned from the sync.";
pub const REMOVE_VANISHED_SCHEMA: Schema = StringSchema::new(REMOVE_VANISHED_DESCRIPTION)
.format(&ApiStringFormat::PropertyString(&REMOVE_VANISHED_ARRAY))
.schema();
pub const REMOVE_VANISHED_ARRAY: Schema = ArraySchema::new(
"Array of remove-vanished options",
&RemoveVanished::API_SCHEMA,
)
.min_length(1)
.schema();
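// Usage sketch: as a property string this is a semicolon-separated list of
// the `RemoveVanished` variants above, e.g. (hypothetical value):
//
//     "acl;properties;entry"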
#[api()]
#[derive(Serialize, Deserialize, Updater, Default, Debug)]
#[serde(rename_all = "kebab-case")]
/// Determine which LDAP attributes should be synced to which user attributes
pub struct SyncAttributes {
/// Name of the LDAP attribute containing the user's email address
pub email: Option<String>,
/// Name of the LDAP attribute containing the user's first name
pub firstname: Option<String>,
/// Name of the LDAP attribute containing the user's last name
pub lastname: Option<String>,
}
const SYNC_ATTRIBUTES_TEXT: &str = "Comma-separated list of key=value pairs for specifying \
which LDAP attributes map to which PBS user field. For example, \
to map the LDAP attribute ``mail`` to PBS's ``email``, write \
``email=mail``.";
pub const SYNC_ATTRIBUTES_SCHEMA: Schema = StringSchema::new(SYNC_ATTRIBUTES_TEXT)
.format(&ApiStringFormat::PropertyString(
&SyncAttributes::API_SCHEMA,
))
.schema();
pub const USER_CLASSES_ARRAY: Schema = ArraySchema::new(
"Array of user classes",
&StringSchema::new("user class").schema(),
)
.min_length(1)
.schema();
const USER_CLASSES_TEXT: &str = "Comma-separated list of allowed objectClass values for \
user synchronization. For instance, if ``user-classes`` is set to ``person,user``, \
then user synchronization will consider all LDAP entities \
where ``objectClass: person`` `or` ``objectClass: user``.";
pub const USER_CLASSES_SCHEMA: Schema = StringSchema::new(USER_CLASSES_TEXT)
.format(&ApiStringFormat::PropertyString(&USER_CLASSES_ARRAY))
.default("inetorgperson,posixaccount,person,user")
.schema();

View File

@ -1,417 +0,0 @@
//! Basic API types used by most of the PBS code.
use const_format::concatcp;
use serde::{Deserialize, Serialize};
pub mod percent_encoding;
use proxmox_schema::{
api, const_regex, ApiStringFormat, ApiType, ArraySchema, ReturnType, Schema, StringSchema,
};
use proxmox_time::parse_daily_duration;
use proxmox_auth_api::types::{APITOKEN_ID_REGEX_STR, USER_ID_REGEX_STR};
pub use proxmox_schema::api_types::SAFE_ID_FORMAT as PROXMOX_SAFE_ID_FORMAT;
pub use proxmox_schema::api_types::SAFE_ID_REGEX as PROXMOX_SAFE_ID_REGEX;
pub use proxmox_schema::api_types::SAFE_ID_REGEX_STR as PROXMOX_SAFE_ID_REGEX_STR;
pub use proxmox_schema::api_types::{
BLOCKDEVICE_DISK_AND_PARTITION_NAME_REGEX, BLOCKDEVICE_NAME_REGEX,
};
pub use proxmox_schema::api_types::{DNS_ALIAS_REGEX, DNS_NAME_OR_IP_REGEX, DNS_NAME_REGEX};
pub use proxmox_schema::api_types::{FINGERPRINT_SHA256_REGEX, SHA256_HEX_REGEX};
pub use proxmox_schema::api_types::{
GENERIC_URI_REGEX, HOSTNAME_REGEX, HOST_PORT_REGEX, HTTP_URL_REGEX,
};
pub use proxmox_schema::api_types::{MULTI_LINE_COMMENT_REGEX, SINGLE_LINE_COMMENT_REGEX};
pub use proxmox_schema::api_types::{PASSWORD_REGEX, SYSTEMD_DATETIME_REGEX, UUID_REGEX};
pub use proxmox_schema::api_types::{CIDR_FORMAT, CIDR_REGEX};
pub use proxmox_schema::api_types::{CIDR_V4_FORMAT, CIDR_V4_REGEX};
pub use proxmox_schema::api_types::{CIDR_V6_FORMAT, CIDR_V6_REGEX};
pub use proxmox_schema::api_types::{IPRE_STR, IP_FORMAT, IP_REGEX};
pub use proxmox_schema::api_types::{IPV4RE_STR, IP_V4_FORMAT, IP_V4_REGEX};
pub use proxmox_schema::api_types::{IPV6RE_STR, IP_V6_FORMAT, IP_V6_REGEX};
pub use proxmox_schema::api_types::COMMENT_SCHEMA as SINGLE_LINE_COMMENT_SCHEMA;
pub use proxmox_schema::api_types::HOSTNAME_SCHEMA;
pub use proxmox_schema::api_types::HOST_PORT_SCHEMA;
pub use proxmox_schema::api_types::HTTP_URL_SCHEMA;
pub use proxmox_schema::api_types::MULTI_LINE_COMMENT_SCHEMA;
pub use proxmox_schema::api_types::NODE_SCHEMA;
pub use proxmox_schema::api_types::SINGLE_LINE_COMMENT_FORMAT;
pub use proxmox_schema::api_types::{
BLOCKDEVICE_DISK_AND_PARTITION_NAME_SCHEMA, BLOCKDEVICE_NAME_SCHEMA,
};
pub use proxmox_schema::api_types::{CERT_FINGERPRINT_SHA256_SCHEMA, FINGERPRINT_SHA256_FORMAT};
pub use proxmox_schema::api_types::{DISK_ARRAY_SCHEMA, DISK_LIST_SCHEMA};
pub use proxmox_schema::api_types::{DNS_ALIAS_FORMAT, DNS_NAME_FORMAT, DNS_NAME_OR_IP_SCHEMA};
pub use proxmox_schema::api_types::{PASSWORD_FORMAT, PASSWORD_SCHEMA};
pub use proxmox_schema::api_types::{SERVICE_ID_SCHEMA, UUID_FORMAT};
pub use proxmox_schema::api_types::{SYSTEMD_DATETIME_FORMAT, TIME_ZONE_SCHEMA};
use proxmox_schema::api_types::{DNS_NAME_STR, IPRE_BRACKET_STR};
#[rustfmt::skip]
pub const BACKUP_ID_RE: &str = r"[A-Za-z0-9_][A-Za-z0-9._\-]*";
#[rustfmt::skip]
pub const BACKUP_TYPE_RE: &str = r"(?:host|vm|ct)";
#[rustfmt::skip]
pub const BACKUP_TIME_RE: &str = r"[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z";
#[rustfmt::skip]
pub const BACKUP_NS_RE: &str =
concatcp!("(?:",
"(?:", PROXMOX_SAFE_ID_REGEX_STR, r"/){0,7}", PROXMOX_SAFE_ID_REGEX_STR,
")?");
#[rustfmt::skip]
pub const BACKUP_NS_PATH_RE: &str =
concatcp!(r"(?:ns/", PROXMOX_SAFE_ID_REGEX_STR, r"/){0,7}ns/", PROXMOX_SAFE_ID_REGEX_STR, r"/");
#[rustfmt::skip]
pub const SNAPSHOT_PATH_REGEX_STR: &str =
concatcp!(
r"(", BACKUP_TYPE_RE, ")/(", BACKUP_ID_RE, ")/(", BACKUP_TIME_RE, r")",
);
#[rustfmt::skip]
pub const GROUP_OR_SNAPSHOT_PATH_REGEX_STR: &str =
concatcp!(
r"(", BACKUP_TYPE_RE, ")/(", BACKUP_ID_RE, ")(?:/(", BACKUP_TIME_RE, r"))?",
);
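// Illustrative strings the patterns above are built to match (hypothetical
// values):
//
//   type + backup ID:  "vm/100", "ct/105", "host/backup-srv"
//   snapshot path:     "vm/100/2023-11-05T12:30:00Z"
//   namespace path:    "ns/dev/ns/team-a/" (BACKUP_NS_PATH_RE, 1 to 8 levels)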
mod acl;
pub use acl::*;
mod datastore;
pub use datastore::*;
mod jobs;
pub use jobs::*;
mod key_derivation;
pub use key_derivation::{Kdf, KeyInfo};
mod maintenance;
pub use maintenance::*;
mod network;
pub use network::*;
mod node;
pub use node::*;
pub use proxmox_auth_api::types as userid;
pub use proxmox_auth_api::types::{Authid, Userid};
pub use proxmox_auth_api::types::{Realm, RealmRef};
pub use proxmox_auth_api::types::{Tokenname, TokennameRef};
pub use proxmox_auth_api::types::{Username, UsernameRef};
pub use proxmox_auth_api::types::{
PROXMOX_GROUP_ID_SCHEMA, PROXMOX_TOKEN_ID_SCHEMA, PROXMOX_TOKEN_NAME_SCHEMA,
};
#[macro_use]
mod user;
pub use user::*;
pub use proxmox_schema::upid::*;
mod crypto;
pub use crypto::{bytes_as_fingerprint, CryptMode, Fingerprint};
pub mod file_restore;
mod openid;
pub use openid::*;
mod ldap;
pub use ldap::*;
mod ad;
pub use ad::*;
mod remote;
pub use remote::*;
mod tape;
pub use tape::*;
mod traffic_control;
pub use traffic_control::*;
mod zfs;
pub use zfs::*;
mod metrics;
pub use metrics::*;
const_regex! {
// just a rough check - dummy acceptor is used before persisting
pub OPENSSL_CIPHERS_REGEX = r"^[0-9A-Za-z_:, +!\-@=.]+$";
pub BACKUP_REPO_URL_REGEX = concatcp!(
r"^^(?:(?:(",
USER_ID_REGEX_STR, "|", APITOKEN_ID_REGEX_STR,
")@)?(",
DNS_NAME_STR, "|", IPRE_BRACKET_STR,
"):)?(?:([0-9]{1,5}):)?(", PROXMOX_SAFE_ID_REGEX_STR, r")$"
);
pub SUBSCRIPTION_KEY_REGEX = r"^pbs(?:[cbsp])-[0-9a-f]{10}$";
}
pub const PVE_CONFIG_DIGEST_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&SHA256_HEX_REGEX);
pub const SUBSCRIPTION_KEY_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SUBSCRIPTION_KEY_REGEX);
pub const OPENSSL_CIPHERS_TLS_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&OPENSSL_CIPHERS_REGEX);
pub const DAILY_DURATION_FORMAT: ApiStringFormat =
ApiStringFormat::VerifyFn(|s| parse_daily_duration(s).map(drop));
pub const SEARCH_DOMAIN_SCHEMA: Schema =
StringSchema::new("Search domain for host-name lookup.").schema();
pub const FIRST_DNS_SERVER_SCHEMA: Schema = StringSchema::new("First name server IP address.")
.format(&IP_FORMAT)
.schema();
pub const SECOND_DNS_SERVER_SCHEMA: Schema = StringSchema::new("Second name server IP address.")
.format(&IP_FORMAT)
.schema();
pub const THIRD_DNS_SERVER_SCHEMA: Schema = StringSchema::new("Third name server IP address.")
.format(&IP_FORMAT)
.schema();
pub const OPENSSL_CIPHERS_TLS_1_2_SCHEMA: Schema =
StringSchema::new("OpenSSL cipher list used by the proxy for TLS <= 1.2")
.format(&OPENSSL_CIPHERS_TLS_FORMAT)
.schema();
pub const OPENSSL_CIPHERS_TLS_1_3_SCHEMA: Schema =
StringSchema::new("OpenSSL ciphersuites list used by the proxy for TLS 1.3")
.format(&OPENSSL_CIPHERS_TLS_FORMAT)
.schema();
pub const PBS_PASSWORD_SCHEMA: Schema = StringSchema::new("User Password.")
.format(&PASSWORD_FORMAT)
.min_length(5)
.max_length(64)
.schema();
pub const REALM_ID_SCHEMA: Schema = StringSchema::new("Realm name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(2)
.max_length(32)
.schema();
pub const SUBSCRIPTION_KEY_SCHEMA: Schema =
StringSchema::new("Proxmox Backup Server subscription key.")
.format(&SUBSCRIPTION_KEY_FORMAT)
.min_length(15)
.max_length(16)
.schema();
pub const PROXMOX_CONFIG_DIGEST_SCHEMA: Schema = StringSchema::new(
"Prevent changes if current configuration file has different \
SHA256 digest. This can be used to prevent concurrent \
modifications.",
)
.format(&PVE_CONFIG_DIGEST_FORMAT)
.schema();
/// API schema format definition for repository URLs
pub const BACKUP_REPO_URL: ApiStringFormat = ApiStringFormat::Pattern(&BACKUP_REPO_URL_REGEX);
// Complex type definitions
#[api()]
#[derive(Default, Serialize, Deserialize)]
/// Storage space usage information.
pub struct StorageStatus {
/// Total space (bytes).
pub total: u64,
/// Used space (bytes).
pub used: u64,
/// Available space (bytes).
pub avail: u64,
}
pub const PASSWORD_HINT_SCHEMA: Schema = StringSchema::new("Password hint.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(1)
.max_length(64)
.schema();
#[api()]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "PascalCase")]
/// Describes a package for which an update is available.
pub struct APTUpdateInfo {
/// Package name
pub package: String,
/// Package title
pub title: String,
/// Package architecture
pub arch: String,
/// Human readable package description
pub description: String,
/// New version to be updated to
pub version: String,
/// Old version currently installed
pub old_version: String,
/// Package origin
pub origin: String,
/// Package priority in human-readable form
pub priority: String,
/// Package section
pub section: String,
/// Custom extra field for additional package information
#[serde(skip_serializing_if = "Option::is_none")]
pub extra_info: Option<String>,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Node Power command type.
pub enum NodePowerCommand {
/// Restart the server
Reboot,
/// Shutdown the server
Shutdown,
}
#[api()]
#[derive(Eq, PartialEq, Debug, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Task state type.
pub enum TaskStateType {
/// Ok
OK,
/// Warning
Warning,
/// Error
Error,
/// Unknown
Unknown,
}
#[api(
properties: {
upid: { schema: UPID::API_SCHEMA },
},
)]
#[derive(Serialize, Deserialize, Clone, PartialEq)]
/// Task properties.
pub struct TaskListItem {
pub upid: String,
/// The node name the task is running on.
pub node: String,
/// The Unix PID
pub pid: i64,
/// The process start time of the task (in system clock ticks, disambiguates reused PIDs)
pub pstart: u64,
/// The task start time (Epoch)
pub starttime: i64,
/// Worker type (arbitrary ASCII string)
pub worker_type: String,
/// Worker ID (arbitrary ASCII string)
pub worker_id: Option<String>,
/// The authenticated entity who started the task
pub user: String,
/// The task end time (Epoch)
#[serde(skip_serializing_if = "Option::is_none")]
pub endtime: Option<i64>,
/// Task end status
#[serde(skip_serializing_if = "Option::is_none")]
pub status: Option<String>,
}
pub const NODE_TASKS_LIST_TASKS_RETURN_TYPE: ReturnType = ReturnType {
optional: false,
schema: &ArraySchema::new("A list of tasks.", &TaskListItem::API_SCHEMA).schema(),
};
#[api()]
#[derive(Copy, Clone, Serialize, Deserialize)]
#[serde(rename_all = "UPPERCASE")]
/// RRD consolidation mode
pub enum RRDMode {
/// Maximum
Max,
/// Average
Average,
}
#[api()]
#[derive(Copy, Clone, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// RRD time frame
pub enum RRDTimeFrame {
/// Hour
Hour,
/// Day
Day,
/// Week
Week,
/// Month
Month,
/// Year
Year,
/// Decade (10 years)
Decade,
}
#[api]
#[derive(Deserialize, Serialize, Copy, Clone, PartialEq, Eq)]
#[serde(rename_all = "lowercase")]
/// type of the realm
pub enum RealmType {
/// The PAM realm
Pam,
/// The PBS realm
Pbs,
/// An OpenID Connect realm
OpenId,
/// An LDAP realm
Ldap,
/// An Active Directory (AD) realm
Ad,
}
serde_plain::derive_display_from_serialize!(RealmType);
serde_plain::derive_fromstr_from_deserialize!(RealmType);
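// serde_plain derives the string conversions from the serde renames above,
// e.g. (hypothetical use):
//
//     assert_eq!(RealmType::OpenId.to_string(), "openid");
//     assert_eq!("ldap".parse::<RealmType>().unwrap(), RealmType::Ldap);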
#[api(
properties: {
realm: {
schema: REALM_ID_SCHEMA,
},
"type": {
type: RealmType,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
},
)]
#[derive(Deserialize, Serialize, Clone, PartialEq)]
#[serde(rename_all = "kebab-case")]
/// Basic Information about a realm
pub struct BasicRealmInfo {
pub realm: String,
#[serde(rename = "type")]
pub ty: RealmType,
/// True if it is the default realm
#[serde(skip_serializing_if = "Option::is_none")]
pub default: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
}

View File

@ -1,106 +0,0 @@
use anyhow::{bail, Error};
use serde::{Deserialize, Serialize};
use std::borrow::Cow;
use proxmox_schema::{api, const_regex, ApiStringFormat, Schema, StringSchema};
const_regex! {
pub MAINTENANCE_MESSAGE_REGEX = r"^[[:^cntrl:]]*$";
}
pub const MAINTENANCE_MESSAGE_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&MAINTENANCE_MESSAGE_REGEX);
pub const MAINTENANCE_MESSAGE_SCHEMA: Schema =
StringSchema::new("Message describing the reason for the maintenance.")
.format(&MAINTENANCE_MESSAGE_FORMAT)
.max_length(64)
.schema();
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
/// Operation requirements, used when checking for maintenance mode.
pub enum Operation {
/// for any read operation like backup restore or RRD metric collection
Read,
/// for any write/delete operation, like backup create or GC
Write,
/// for any purely logical operation on the in-memory state of the datastore, e.g., to check if
/// some mutex could be locked (e.g., GC already running?)
///
/// NOTE: one must *not* do any IO operations while only holding this Op state
Lookup,
// GarbageCollect or Delete?
}
#[api]
#[derive(Copy, Clone, Deserialize, Serialize, PartialEq, Eq)]
#[serde(rename_all = "kebab-case")]
/// Maintenance type.
pub enum MaintenanceType {
// TODO:
// - Add "unmounting" once we got pluggable datastores
// - Add "GarbageCollection" or "DeleteOnly" as type and track GC (or all deletes) as separate
// operation, so that one can enable a mode where nothing new can be added but stuff can be
// cleaned
/// Only read operations are allowed on the datastore.
ReadOnly,
/// Neither read nor write operations are allowed on the datastore.
Offline,
/// The datastore is being deleted.
Delete,
}
serde_plain::derive_display_from_serialize!(MaintenanceType);
serde_plain::derive_fromstr_from_deserialize!(MaintenanceType);
#[api(
properties: {
type: {
type: MaintenanceType,
},
message: {
optional: true,
schema: MAINTENANCE_MESSAGE_SCHEMA,
}
},
default_key: "type",
)]
#[derive(Deserialize, Serialize)]
/// Maintenance mode
pub struct MaintenanceMode {
/// Type of maintenance ("read-only", "offline" or "delete").
#[serde(rename = "type")]
pub ty: MaintenanceType,
/// Reason for maintenance.
#[serde(skip_serializing_if = "Option::is_none")]
pub message: Option<String>,
}
impl MaintenanceMode {
/// Used for deciding whether the datastore is cleared from the internal cache after the last
/// task finishes, so all open files are closed.
pub fn is_offline(&self) -> bool {
self.ty == MaintenanceType::Offline
}
pub fn check(&self, operation: Option<Operation>) -> Result<(), Error> {
if self.ty == MaintenanceType::Delete {
bail!("datastore is being deleted");
}
let message = percent_encoding::percent_decode_str(self.message.as_deref().unwrap_or(""))
.decode_utf8()
.unwrap_or(Cow::Borrowed(""));
if let Some(Operation::Lookup) = operation {
return Ok(());
} else if self.ty == MaintenanceType::Offline {
bail!("offline maintenance mode: {}", message);
} else if self.ty == MaintenanceType::ReadOnly {
if let Some(Operation::Write) = operation {
bail!("read-only maintenance mode: {}", message);
}
}
Ok(())
}
}
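// Behavior sketch for `check` (summarizing the branches above):
//
//   Delete    -> always an error, even for Operation::Lookup
//   Lookup    -> otherwise always Ok, since lookups touch no on-disk state
//   Offline   -> error for Read, Write, and no operation given
//   ReadOnly  -> error only for Operation::Write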

View File

@ -1,190 +0,0 @@
use serde::{Deserialize, Serialize};
use crate::{
HOST_PORT_SCHEMA, HTTP_URL_SCHEMA, PROXMOX_SAFE_ID_FORMAT, SINGLE_LINE_COMMENT_SCHEMA,
};
use proxmox_schema::{api, Schema, StringSchema, Updater};
pub const METRIC_SERVER_ID_SCHEMA: Schema = StringSchema::new("Metrics Server ID.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const INFLUXDB_BUCKET_SCHEMA: Schema = StringSchema::new("InfluxDB Bucket.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.default("proxmox")
.schema();
pub const INFLUXDB_ORGANIZATION_SCHEMA: Schema = StringSchema::new("InfluxDB Organization.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.default("proxmox")
.schema();
fn return_true() -> bool {
true
}
fn is_true(b: &bool) -> bool {
*b
}
#[api(
properties: {
name: {
schema: METRIC_SERVER_ID_SCHEMA,
},
enable: {
type: bool,
optional: true,
default: true,
},
host: {
schema: HOST_PORT_SCHEMA,
},
mtu: {
type: u16,
optional: true,
default: 1500,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
},
)]
#[derive(Serialize, Deserialize, Updater)]
#[serde(rename_all = "kebab-case")]
/// InfluxDB Server (UDP)
pub struct InfluxDbUdp {
#[updater(skip)]
pub name: String,
#[serde(default = "return_true", skip_serializing_if = "is_true")]
#[updater(serde(skip_serializing_if = "Option::is_none"))]
/// Enables or disables the metrics server
pub enable: bool,
/// The host and port
pub host: String,
#[serde(skip_serializing_if = "Option::is_none")]
/// The MTU
pub mtu: Option<u16>,
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
}
#[api(
properties: {
name: {
schema: METRIC_SERVER_ID_SCHEMA,
},
enable: {
type: bool,
optional: true,
default: true,
},
url: {
schema: HTTP_URL_SCHEMA,
},
token: {
type: String,
optional: true,
},
bucket: {
schema: INFLUXDB_BUCKET_SCHEMA,
optional: true,
},
organization: {
schema: INFLUXDB_ORGANIZATION_SCHEMA,
optional: true,
},
"max-body-size": {
type: usize,
optional: true,
default: 25_000_000,
},
"verify-tls": {
type: bool,
optional: true,
default: true,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
},
)]
#[derive(Serialize, Deserialize, Updater)]
#[serde(rename_all = "kebab-case")]
/// InfluxDB Server (HTTP(s))
pub struct InfluxDbHttp {
#[updater(skip)]
pub name: String,
#[serde(default = "return_true", skip_serializing_if = "is_true")]
#[updater(serde(skip_serializing_if = "Option::is_none"))]
/// Enables or disables the metrics server
pub enable: bool,
/// The base URL of the InfluxDB server
pub url: String,
#[serde(skip_serializing_if = "Option::is_none")]
/// The (optional) API token
pub token: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub bucket: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub organization: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
/// The (optional) maximum body size
pub max_body_size: Option<usize>,
#[serde(skip_serializing_if = "Option::is_none")]
/// If true, the certificate will be validated.
pub verify_tls: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
}
#[api]
#[derive(Copy, Clone, Deserialize, Serialize, PartialEq, Eq, PartialOrd, Ord)]
/// Type of the metric server
pub enum MetricServerType {
/// InfluxDB HTTP
#[serde(rename = "influxdb-http")]
InfluxDbHttp,
/// InfluxDB UDP
#[serde(rename = "influxdb-udp")]
InfluxDbUdp,
}
#[api(
properties: {
name: {
schema: METRIC_SERVER_ID_SCHEMA,
},
"type": {
type: MetricServerType,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
},
)]
#[derive(Clone, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
#[serde(rename_all = "kebab-case")]
/// Basic information about a metric server that's available for all types
pub struct MetricServerInfo {
pub name: String,
#[serde(rename = "type")]
pub ty: MetricServerType,
/// Enables or disables the metrics server
#[serde(skip_serializing_if = "Option::is_none")]
pub enable: Option<bool>,
/// The target server
pub server: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
}

View File

@ -1,345 +0,0 @@
use std::fmt;
use serde::{Deserialize, Serialize};
use proxmox_schema::*;
use crate::{
CIDR_FORMAT, CIDR_V4_FORMAT, CIDR_V6_FORMAT, IP_FORMAT, IP_V4_FORMAT, IP_V6_FORMAT,
PROXMOX_SAFE_ID_REGEX,
};
pub const NETWORK_INTERFACE_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_SAFE_ID_REGEX);
pub const IP_V4_SCHEMA: Schema = StringSchema::new("IPv4 address.")
.format(&IP_V4_FORMAT)
.max_length(15)
.schema();
pub const IP_V6_SCHEMA: Schema = StringSchema::new("IPv6 address.")
.format(&IP_V6_FORMAT)
.max_length(39)
.schema();
pub const IP_SCHEMA: Schema = StringSchema::new("IP (IPv4 or IPv6) address.")
.format(&IP_FORMAT)
.max_length(39)
.schema();
pub const CIDR_V4_SCHEMA: Schema = StringSchema::new("IPv4 address with netmask (CIDR notation).")
.format(&CIDR_V4_FORMAT)
.max_length(18)
.schema();
pub const CIDR_V6_SCHEMA: Schema = StringSchema::new("IPv6 address with netmask (CIDR notation).")
.format(&CIDR_V6_FORMAT)
.max_length(43)
.schema();
pub const CIDR_SCHEMA: Schema =
StringSchema::new("IP address (IPv4 or IPv6) with netmask (CIDR notation).")
.format(&CIDR_FORMAT)
.max_length(43)
.schema();
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Interface configuration method
pub enum NetworkConfigMethod {
/// Configuration is done manually using other tools
Manual,
/// Define interfaces with statically allocated addresses.
Static,
/// Obtain an address via DHCP
DHCP,
/// Define the loopback interface.
Loopback,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
#[repr(u8)]
/// Linux Bond Mode
pub enum LinuxBondMode {
/// Round-robin policy
BalanceRr = 0,
/// Active-backup policy
ActiveBackup = 1,
/// XOR policy
BalanceXor = 2,
/// Broadcast policy
Broadcast = 3,
/// IEEE 802.3ad Dynamic link aggregation
#[serde(rename = "802.3ad")]
Ieee802_3ad = 4,
/// Adaptive transmit load balancing
BalanceTlb = 5,
/// Adaptive load balancing
BalanceAlb = 6,
}
impl fmt::Display for LinuxBondMode {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.write_str(match self {
LinuxBondMode::BalanceRr => "balance-rr",
LinuxBondMode::ActiveBackup => "active-backup",
LinuxBondMode::BalanceXor => "balance-xor",
LinuxBondMode::Broadcast => "broadcast",
LinuxBondMode::Ieee802_3ad => "802.3ad",
LinuxBondMode::BalanceTlb => "balance-tlb",
LinuxBondMode::BalanceAlb => "balance-alb",
})
}
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
#[repr(u8)]
/// Bond Transmit Hash Policy for LACP (802.3ad)
pub enum BondXmitHashPolicy {
/// Layer 2
Layer2 = 0,
/// Layer 2+3
#[serde(rename = "layer2+3")]
Layer2_3 = 1,
/// Layer 3+4
#[serde(rename = "layer3+4")]
Layer3_4 = 2,
}
impl fmt::Display for BondXmitHashPolicy {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.write_str(match self {
BondXmitHashPolicy::Layer2 => "layer2",
BondXmitHashPolicy::Layer2_3 => "layer2+3",
BondXmitHashPolicy::Layer3_4 => "layer3+4",
})
}
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Network interface type
pub enum NetworkInterfaceType {
/// Loopback
Loopback,
/// Physical Ethernet device
Eth,
/// Linux Bridge
Bridge,
/// Linux Bond
Bond,
/// Linux VLAN (eth.10)
Vlan,
/// Interface Alias (eth:1)
Alias,
/// Unknown interface type
Unknown,
}
pub const NETWORK_INTERFACE_NAME_SCHEMA: Schema = StringSchema::new("Network interface name.")
.format(&NETWORK_INTERFACE_FORMAT)
.min_length(1)
.max_length(15) // libc::IFNAMSIZ-1
.schema();
pub const NETWORK_INTERFACE_ARRAY_SCHEMA: Schema =
ArraySchema::new("Network interface list.", &NETWORK_INTERFACE_NAME_SCHEMA).schema();
pub const NETWORK_INTERFACE_LIST_SCHEMA: Schema =
StringSchema::new("A list of network devices, comma separated.")
.format(&ApiStringFormat::PropertyString(
&NETWORK_INTERFACE_ARRAY_SCHEMA,
))
.schema();
#[api(
properties: {
name: {
schema: NETWORK_INTERFACE_NAME_SCHEMA,
},
"type": {
type: NetworkInterfaceType,
},
method: {
type: NetworkConfigMethod,
optional: true,
},
method6: {
type: NetworkConfigMethod,
optional: true,
},
cidr: {
schema: CIDR_V4_SCHEMA,
optional: true,
},
cidr6: {
schema: CIDR_V6_SCHEMA,
optional: true,
},
gateway: {
schema: IP_V4_SCHEMA,
optional: true,
},
gateway6: {
schema: IP_V6_SCHEMA,
optional: true,
},
options: {
description: "Option list (inet)",
type: Array,
items: {
description: "Optional attribute line.",
type: String,
},
},
options6: {
description: "Option list (inet6)",
type: Array,
items: {
description: "Optional attribute line.",
type: String,
},
},
comments: {
description: "Comments (inet, may span multiple lines)",
type: String,
optional: true,
},
comments6: {
description: "Comments (inet6, may span multiple lines)",
type: String,
optional: true,
},
bridge_ports: {
schema: NETWORK_INTERFACE_ARRAY_SCHEMA,
optional: true,
},
slaves: {
schema: NETWORK_INTERFACE_ARRAY_SCHEMA,
optional: true,
},
"vlan-id": {
description: "VLAN ID.",
type: u16,
optional: true,
},
"vlan-raw-device": {
schema: NETWORK_INTERFACE_NAME_SCHEMA,
optional: true,
},
bond_mode: {
type: LinuxBondMode,
optional: true,
},
"bond-primary": {
schema: NETWORK_INTERFACE_NAME_SCHEMA,
optional: true,
},
bond_xmit_hash_policy: {
type: BondXmitHashPolicy,
optional: true,
},
}
)]
#[derive(Clone, Debug, Serialize, Deserialize, PartialEq)]
/// Network Interface configuration
pub struct Interface {
/// Autostart interface
#[serde(rename = "autostart")]
pub autostart: bool,
/// Interface is active (UP)
pub active: bool,
/// Interface name
pub name: String,
/// Interface type
#[serde(rename = "type")]
pub interface_type: NetworkInterfaceType,
#[serde(skip_serializing_if = "Option::is_none")]
pub method: Option<NetworkConfigMethod>,
#[serde(skip_serializing_if = "Option::is_none")]
pub method6: Option<NetworkConfigMethod>,
#[serde(skip_serializing_if = "Option::is_none")]
/// IPv4 address with netmask
pub cidr: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
/// IPv4 gateway
pub gateway: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
/// IPv6 address with netmask
pub cidr6: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
/// IPv6 gateway
pub gateway6: Option<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub options: Vec<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub options6: Vec<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub comments: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub comments6: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
/// Maximum Transmission Unit
pub mtu: Option<u64>,
#[serde(skip_serializing_if = "Option::is_none")]
pub bridge_ports: Option<Vec<String>>,
/// Enable bridge vlan support.
#[serde(skip_serializing_if = "Option::is_none")]
pub bridge_vlan_aware: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
#[serde(rename = "vlan-id")]
pub vlan_id: Option<u16>,
#[serde(skip_serializing_if = "Option::is_none")]
#[serde(rename = "vlan-raw-device")]
pub vlan_raw_device: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub slaves: Option<Vec<String>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub bond_mode: Option<LinuxBondMode>,
#[serde(skip_serializing_if = "Option::is_none")]
#[serde(rename = "bond-primary")]
pub bond_primary: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub bond_xmit_hash_policy: Option<BondXmitHashPolicy>,
}
impl Interface {
pub fn new(name: String) -> Self {
Self {
name,
interface_type: NetworkInterfaceType::Unknown,
autostart: false,
active: false,
method: None,
method6: None,
cidr: None,
gateway: None,
cidr6: None,
gateway6: None,
options: Vec::new(),
options6: Vec::new(),
comments: None,
comments6: None,
mtu: None,
bridge_ports: None,
bridge_vlan_aware: None,
vlan_id: None,
vlan_raw_device: None,
slaves: None,
bond_mode: None,
bond_primary: None,
bond_xmit_hash_policy: None,
}
}
}

View File

@ -1,162 +0,0 @@
use std::ffi::OsStr;
use proxmox_schema::*;
use serde::{Deserialize, Serialize};
use crate::StorageStatus;
#[api]
#[derive(Serialize, Deserialize, Default)]
#[serde(rename_all = "kebab-case")]
/// Node memory usage counters
pub struct NodeMemoryCounters {
/// Total memory
pub total: u64,
/// Used memory
pub used: u64,
/// Free memory
pub free: u64,
}
#[api]
#[derive(Serialize, Deserialize, Default)]
#[serde(rename_all = "kebab-case")]
/// Node swap usage counters
pub struct NodeSwapCounters {
/// Total swap
pub total: u64,
/// Used swap
pub used: u64,
/// Free swap
pub free: u64,
}
#[api]
#[derive(Serialize, Deserialize, Default)]
#[serde(rename_all = "kebab-case")]
/// Contains general node information such as the fingerprint.
pub struct NodeInformation {
/// The SSL Fingerprint
pub fingerprint: String,
}
#[api]
#[derive(Serialize, Deserialize, Default)]
#[serde(rename_all = "lowercase")]
/// The current kernel version (output of `uname`)
pub struct KernelVersionInformation {
/// The system name ('sysname' from uname, e.g. "Linux")
pub sysname: String,
/// The kernel release number
pub release: String,
/// The kernel version
pub version: String,
/// The machine architecture
pub machine: String,
}
impl KernelVersionInformation {
pub fn from_uname_parts(
sysname: &OsStr,
release: &OsStr,
version: &OsStr,
machine: &OsStr,
) -> Self {
KernelVersionInformation {
sysname: sysname.to_str().map(String::from).unwrap_or_default(),
release: release.to_str().map(String::from).unwrap_or_default(),
version: version.to_str().map(String::from).unwrap_or_default(),
machine: machine.to_str().map(String::from).unwrap_or_default(),
}
}
pub fn get_legacy(&self) -> String {
format!("{} {} {}", self.sysname, self.release, self.version)
}
}
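// Usage sketch (hypothetical uname values):
//
//     use std::ffi::OsStr;
//     let info = KernelVersionInformation::from_uname_parts(
//         OsStr::new("Linux"),
//         OsStr::new("6.8.12-1-pve"),
//         OsStr::new("#1 SMP PREEMPT_DYNAMIC"),
//         OsStr::new("x86_64"),
//     );
//     assert_eq!(info.get_legacy(), "Linux 6.8.12-1-pve #1 SMP PREEMPT_DYNAMIC");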
#[api]
#[derive(Serialize, Deserialize, Copy, Clone)]
#[serde(rename_all = "kebab-case")]
/// The possible BootModes
pub enum BootMode {
/// The BootMode is EFI/UEFI
Efi,
/// The BootMode is Legacy BIOS
LegacyBios,
}
#[api]
#[derive(Serialize, Deserialize, Clone)]
#[serde(rename_all = "lowercase")]
/// Holds the Bootmodes
pub struct BootModeInformation {
/// The boot mode, either Efi or LegacyBios
pub mode: BootMode,
/// SecureBoot status
pub secureboot: bool,
}
#[api]
#[derive(Serialize, Deserialize, Default)]
#[serde(rename_all = "kebab-case")]
/// Information about the CPU
pub struct NodeCpuInformation {
/// The CPU model
pub model: String,
/// The number of CPU sockets
pub sockets: usize,
/// The number of CPU cores (incl. threads)
pub cpus: usize,
}
#[api(
properties: {
memory: {
type: NodeMemoryCounters,
},
root: {
type: StorageStatus,
},
swap: {
type: NodeSwapCounters,
},
loadavg: {
type: Array,
items: {
type: Number,
description: "the load",
}
},
cpuinfo: {
type: NodeCpuInformation,
},
info: {
type: NodeInformation,
}
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// The Node status
pub struct NodeStatus {
pub memory: NodeMemoryCounters,
pub root: StorageStatus,
pub swap: NodeSwapCounters,
/// The current uptime of the server.
pub uptime: u64,
/// Load for 1, 5 and 15 minutes.
pub loadavg: [f64; 3],
/// The current kernel version (NEW struct type).
pub current_kernel: KernelVersionInformation,
/// The current kernel version (LEGACY string type).
pub kversion: String,
/// Total CPU usage since last query.
pub cpu: f64,
/// Total IO wait since last query.
pub wait: f64,
pub cpuinfo: NodeCpuInformation,
pub info: NodeInformation,
/// Current boot mode
pub boot_info: BootModeInformation,
}

View File

@ -1,120 +0,0 @@
use serde::{Deserialize, Serialize};
use proxmox_schema::{api, ApiStringFormat, ArraySchema, Schema, StringSchema, Updater};
use super::{
GENERIC_URI_REGEX, PROXMOX_SAFE_ID_FORMAT, PROXMOX_SAFE_ID_REGEX, REALM_ID_SCHEMA,
SINGLE_LINE_COMMENT_SCHEMA,
};
pub const OPENID_SCOPE_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&PROXMOX_SAFE_ID_REGEX);
pub const OPENID_SCOPE_SCHEMA: Schema = StringSchema::new("OpenID Scope Name.")
.format(&OPENID_SCOPE_FORMAT)
.schema();
pub const OPENID_SCOPE_ARRAY_SCHEMA: Schema =
ArraySchema::new("Array of OpenId Scopes.", &OPENID_SCOPE_SCHEMA).schema();
pub const OPENID_SCOPE_LIST_FORMAT: ApiStringFormat =
ApiStringFormat::PropertyString(&OPENID_SCOPE_ARRAY_SCHEMA);
pub const OPENID_DEFAILT_SCOPE_LIST: &str = "email profile";
pub const OPENID_SCOPE_LIST_SCHEMA: Schema = StringSchema::new("OpenID Scope List")
.format(&OPENID_SCOPE_LIST_FORMAT)
.default(OPENID_DEFAILT_SCOPE_LIST)
.schema();
pub const OPENID_ACR_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&GENERIC_URI_REGEX);
pub const OPENID_ACR_SCHEMA: Schema =
StringSchema::new("OpenID Authentication Context Class Reference.")
.format(&OPENID_ACR_FORMAT)
.schema();
pub const OPENID_ACR_ARRAY_SCHEMA: Schema =
ArraySchema::new("Array of OpenId ACRs.", &OPENID_ACR_SCHEMA).schema();
pub const OPENID_ACR_LIST_FORMAT: ApiStringFormat =
ApiStringFormat::PropertyString(&OPENID_ACR_ARRAY_SCHEMA);
pub const OPENID_ACR_LIST_SCHEMA: Schema = StringSchema::new("OpenID ACR List")
.format(&OPENID_ACR_LIST_FORMAT)
.schema();
pub const OPENID_USERNAME_CLAIM_SCHEMA: Schema = StringSchema::new(
"Use the value of this attribute/claim as unique user name. It \
is up to the identity provider to guarantee the uniqueness. The \
OpenID specification only guarantees that Subject ('sub') is \
unique. Also make sure that the user is not allowed to change that \
attribute by himself!",
)
.max_length(64)
.min_length(1)
.format(&PROXMOX_SAFE_ID_FORMAT)
.schema();
#[api(
properties: {
realm: {
schema: REALM_ID_SCHEMA,
},
"client-key": {
optional: true,
},
"scopes": {
schema: OPENID_SCOPE_LIST_SCHEMA,
optional: true,
},
"acr-values": {
schema: OPENID_ACR_LIST_SCHEMA,
optional: true,
},
prompt: {
description: "OpenID Prompt",
type: String,
format: &PROXMOX_SAFE_ID_FORMAT,
optional: true,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
autocreate: {
optional: true,
default: false,
},
"username-claim": {
schema: OPENID_USERNAME_CLAIM_SCHEMA,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize, Updater)]
#[serde(rename_all = "kebab-case")]
/// OpenID configuration properties.
pub struct OpenIdRealmConfig {
#[updater(skip)]
pub realm: String,
/// OpenID Issuer Url
pub issuer_url: String,
/// OpenID Client ID
pub client_id: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub scopes: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub acr_values: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub prompt: Option<String>,
/// OpenID Client Key
#[serde(skip_serializing_if = "Option::is_none")]
pub client_key: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
/// Automatically create users if they do not exist.
#[serde(skip_serializing_if = "Option::is_none")]
pub autocreate: Option<bool>,
#[updater(skip)]
#[serde(skip_serializing_if = "Option::is_none")]
pub username_claim: Option<String>,
}

View File

@ -1,22 +0,0 @@
use percent_encoding::{utf8_percent_encode, AsciiSet};
/// This used to be: `SIMPLE_ENCODE_SET` plus space, `"`, `#`, `<`, `>`, backtick, `?`, `{`, `}`
pub const DEFAULT_ENCODE_SET: &AsciiSet = &percent_encoding::CONTROLS // 0x00..=0x1f and 0x7f
// The SIMPLE_ENCODE_SET adds space and anything >= 0x7f (0x7f itself is already included above)
.add(0x20)
.add(0x7f)
// the DEFAULT_ENCODE_SET added:
.add(b' ')
.add(b'"')
.add(b'#')
.add(b'<')
.add(b'>')
.add(b'`')
.add(b'?')
.add(b'{')
.add(b'}');
/// Percent-encode a URL component
pub fn percent_encode_component(comp: &str) -> String {
utf8_percent_encode(comp, percent_encoding::NON_ALPHANUMERIC).to_string()
}
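// Added illustration (not part of the original file): a minimal sketch of what
// the encoding does. `NON_ALPHANUMERIC` escapes everything except ASCII letters
// and digits, so separators and spaces become percent sequences.
#[cfg(test)]
mod tests {
    use super::percent_encode_component;

    #[test]
    fn encodes_non_alphanumerics() {
        assert_eq!(percent_encode_component("vm/101 test"), "vm%2F101%20test");
    }
}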

View File

@ -1,106 +0,0 @@
use serde::{Deserialize, Serialize};
use super::*;
use proxmox_schema::*;
pub const REMOTE_PASSWORD_SCHEMA: Schema =
StringSchema::new("Password or auth token for remote host.")
.format(&PASSWORD_FORMAT)
.min_length(1)
.max_length(1024)
.schema();
pub const REMOTE_PASSWORD_BASE64_SCHEMA: Schema =
StringSchema::new("Password or auth token for remote host (stored as base64 string).")
.format(&PASSWORD_FORMAT)
.min_length(1)
.max_length(1024)
.schema();
pub const REMOTE_ID_SCHEMA: Schema = StringSchema::new("Remote ID.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
#[api(
properties: {
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
host: {
schema: DNS_NAME_OR_IP_SCHEMA,
},
port: {
optional: true,
description: "The (optional) port",
type: u16,
},
"auth-id": {
type: Authid,
},
fingerprint: {
optional: true,
schema: CERT_FINGERPRINT_SHA256_SCHEMA,
},
},
)]
#[derive(Serialize, Deserialize, Updater, Clone, PartialEq)]
#[serde(rename_all = "kebab-case")]
/// Remote configuration properties.
pub struct RemoteConfig {
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
pub host: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub port: Option<u16>,
pub auth_id: Authid,
#[serde(skip_serializing_if = "Option::is_none")]
pub fingerprint: Option<String>,
}
#[api(
properties: {
name: {
schema: REMOTE_ID_SCHEMA,
},
config: {
type: RemoteConfig,
},
password: {
schema: REMOTE_PASSWORD_BASE64_SCHEMA,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Remote properties.
pub struct Remote {
pub name: String,
// Note: The stored password is base64 encoded
#[serde(default, skip_serializing_if = "String::is_empty")]
#[serde(with = "proxmox_serde::string_as_base64")]
pub password: String,
#[serde(flatten)]
pub config: RemoteConfig,
}
#[api(
properties: {
name: {
schema: REMOTE_ID_SCHEMA,
},
config: {
type: RemoteConfig,
},
},
)]
#[derive(Serialize, Deserialize, Clone, PartialEq)]
#[serde(rename_all = "kebab-case")]
/// Remote properties.
pub struct RemoteWithoutPassword {
pub name: String,
#[serde(flatten)]
pub config: RemoteConfig,
}

View File

@ -1,134 +0,0 @@
//! Types for tape changer API
use serde::{Deserialize, Serialize};
use proxmox_schema::{
api, ApiStringFormat, ArraySchema, IntegerSchema, Schema, StringSchema, Updater,
};
use crate::{OptionalDeviceIdentification, PROXMOX_SAFE_ID_FORMAT};
pub const CHANGER_NAME_SCHEMA: Schema = StringSchema::new("Tape Changer Identifier.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const SCSI_CHANGER_PATH_SCHEMA: Schema =
StringSchema::new("Path to Linux generic SCSI device (e.g. '/dev/sg4')").schema();
pub const MEDIA_LABEL_SCHEMA: Schema = StringSchema::new("Media Label/Barcode.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(2)
.max_length(32)
.schema();
pub const SLOT_ARRAY_SCHEMA: Schema = ArraySchema::new(
"Slot list.",
&IntegerSchema::new("Slot number").minimum(1).schema(),
)
.schema();
pub const EXPORT_SLOT_LIST_SCHEMA: Schema = StringSchema::new(
"\
A list of slot numbers, comma separated. Those slots are reserved for
Import/Export, i.e. any media in those slots is considered to be
'offline'.
",
)
.format(&ApiStringFormat::PropertyString(&SLOT_ARRAY_SCHEMA))
.schema();
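// Hedged usage sketch (added for illustration): a property string such as
// "14,15,16" reserves slots 14-16 for import/export. `parse_simple_value` is
// assumed to verify the PropertyString format here, as it does elsewhere in
// this crate.
#[cfg(test)]
mod export_slot_tests {
    use super::EXPORT_SLOT_LIST_SCHEMA;

    #[test]
    fn accepts_comma_separated_slot_numbers() {
        EXPORT_SLOT_LIST_SCHEMA
            .parse_simple_value("14,15,16")
            .expect("valid export slot list");
    }
}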
#[api(
properties: {
name: {
schema: CHANGER_NAME_SCHEMA,
},
path: {
schema: SCSI_CHANGER_PATH_SCHEMA,
},
"export-slots": {
schema: EXPORT_SLOT_LIST_SCHEMA,
optional: true,
},
"eject-before-unload": {
optional: true,
default: false,
}
},
)]
#[derive(Serialize, Deserialize, Updater)]
#[serde(rename_all = "kebab-case")]
/// SCSI tape changer
pub struct ScsiTapeChanger {
#[updater(skip)]
pub name: String,
pub path: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub export_slots: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
/// if set to true, tapes are explicitly ejected before unloading
pub eject_before_unload: Option<bool>,
}
#[api(
properties: {
config: {
type: ScsiTapeChanger,
},
info: {
type: OptionalDeviceIdentification,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Changer config with optional device identification attributes
pub struct ChangerListEntry {
#[serde(flatten)]
pub config: ScsiTapeChanger,
#[serde(flatten)]
pub info: OptionalDeviceIdentification,
}
#[api()]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Mtx Entry Kind
pub enum MtxEntryKind {
/// Drive
Drive,
/// Slot
Slot,
/// Import/Export Slot
ImportExport,
}
#[api(
properties: {
"entry-kind": {
type: MtxEntryKind,
},
"label-text": {
schema: MEDIA_LABEL_SCHEMA,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Mtx Status Entry
pub struct MtxStatusEntry {
pub entry_kind: MtxEntryKind,
/// The ID of the slot or drive
pub entry_id: u64,
/// The media label (volume tag) if the slot/drive is full
#[serde(skip_serializing_if = "Option::is_none")]
pub label_text: Option<String>,
/// The slot the drive was loaded from
#[serde(skip_serializing_if = "Option::is_none")]
pub loaded_slot: Option<u64>,
/// The current state of the drive
#[serde(skip_serializing_if = "Option::is_none")]
pub state: Option<String>,
}

View File

@ -1,55 +0,0 @@
use ::serde::{Deserialize, Serialize};
use proxmox_schema::api;
#[api()]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Optional Device Identification Attributes
pub struct OptionalDeviceIdentification {
/// Vendor (autodetected)
#[serde(skip_serializing_if = "Option::is_none")]
pub vendor: Option<String>,
/// Model (autodetected)
#[serde(skip_serializing_if = "Option::is_none")]
pub model: Option<String>,
/// Serial number (autodetected)
#[serde(skip_serializing_if = "Option::is_none")]
pub serial: Option<String>,
}
#[api()]
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Kind of device
pub enum DeviceKind {
/// Tape changer (Autoloader, Robot)
Changer,
/// Normal SCSI tape device
Tape,
}
#[api(
properties: {
kind: {
type: DeviceKind,
},
},
)]
#[derive(Debug, Serialize, Deserialize)]
/// Tape device information
pub struct TapeDeviceInfo {
pub kind: DeviceKind,
/// Path to the linux device node
pub path: String,
/// Serial number (autodetected)
pub serial: String,
/// Vendor (autodetected)
pub vendor: String,
/// Model (autodetected)
pub model: String,
/// Device major number
pub major: u32,
/// Device minor number
pub minor: u32,
}

View File

@ -1,278 +0,0 @@
//! Types for tape drive API
use anyhow::{bail, Error};
use serde::{Deserialize, Serialize};
use proxmox_schema::{api, IntegerSchema, Schema, StringSchema, Updater};
use crate::{OptionalDeviceIdentification, CHANGER_NAME_SCHEMA, PROXMOX_SAFE_ID_FORMAT};
pub const DRIVE_NAME_SCHEMA: Schema = StringSchema::new("Drive Identifier.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const LTO_DRIVE_PATH_SCHEMA: Schema =
StringSchema::new("The path to a LTO SCSI-generic tape device (i.e. '/dev/sg0')").schema();
pub const CHANGER_DRIVENUM_SCHEMA: Schema =
IntegerSchema::new("Associated changer drive number (requires option changer)")
.minimum(0)
.maximum(255)
.default(0)
.schema();
#[api(
properties: {
name: {
schema: DRIVE_NAME_SCHEMA,
}
}
)]
#[derive(Serialize, Deserialize)]
/// Simulated tape drive (only for testing and debugging)
#[serde(rename_all = "kebab-case")]
pub struct VirtualTapeDrive {
pub name: String,
/// Path to directory
pub path: String,
/// Virtual tape size
#[serde(skip_serializing_if = "Option::is_none")]
pub max_size: Option<usize>,
}
#[api(
properties: {
name: {
schema: DRIVE_NAME_SCHEMA,
},
path: {
schema: LTO_DRIVE_PATH_SCHEMA,
},
changer: {
schema: CHANGER_NAME_SCHEMA,
optional: true,
},
"changer-drivenum": {
schema: CHANGER_DRIVENUM_SCHEMA,
optional: true,
},
}
)]
#[derive(Serialize, Deserialize, Updater, Clone)]
#[serde(rename_all = "kebab-case")]
/// LTO SCSI tape drive
pub struct LtoTapeDrive {
#[updater(skip)]
pub name: String,
pub path: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub changer: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub changer_drivenum: Option<u64>,
}
#[api(
properties: {
config: {
type: LtoTapeDrive,
},
info: {
type: OptionalDeviceIdentification,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Drive list entry
pub struct DriveListEntry {
#[serde(flatten)]
pub config: LtoTapeDrive,
#[serde(flatten)]
pub info: OptionalDeviceIdentification,
/// the state of the drive if locked
#[serde(skip_serializing_if = "Option::is_none")]
pub state: Option<String>,
}
#[api()]
#[derive(Serialize, Deserialize)]
/// Medium auxiliary memory attributes (MAM)
pub struct MamAttribute {
/// Attribute id
pub id: u16,
/// Attribute name
pub name: String,
/// Attribute value
pub value: String,
}
#[api()]
#[derive(Serialize, Deserialize, Copy, Clone, Debug, PartialOrd, PartialEq)]
pub enum TapeDensity {
/// Unknown (no media loaded)
Unknown,
/// LTO1
LTO1,
/// LTO2
LTO2,
/// LTO3
LTO3,
/// LTO4
LTO4,
/// LTO5
LTO5,
/// LTO6
LTO6,
/// LTO7
LTO7,
/// LTO7M8
LTO7M8,
/// LTO8
LTO8,
/// LTO9
LTO9,
}
impl TryFrom<u8> for TapeDensity {
type Error = Error;
fn try_from(value: u8) -> Result<Self, Self::Error> {
let density = match value {
0x00 => TapeDensity::Unknown,
0x40 => TapeDensity::LTO1,
0x42 => TapeDensity::LTO2,
0x44 => TapeDensity::LTO3,
0x46 => TapeDensity::LTO4,
0x58 => TapeDensity::LTO5,
0x5a => TapeDensity::LTO6,
0x5c => TapeDensity::LTO7,
0x5d => TapeDensity::LTO7M8,
0x5e => TapeDensity::LTO8,
0x60 => TapeDensity::LTO9,
_ => bail!("unknown tape density code 0x{:02x}", value),
};
Ok(density)
}
}
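// Illustration (added): the SCSI density code reported by the drive maps
// directly to a variant; codes missing from the table above are rejected.
#[cfg(test)]
mod density_tests {
    use super::TapeDensity;

    #[test]
    fn maps_known_density_codes() {
        assert_eq!(TapeDensity::try_from(0x60).unwrap(), TapeDensity::LTO9);
        assert!(TapeDensity::try_from(0x99).is_err());
    }
}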
#[api(
properties: {
density: {
type: TapeDensity,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Drive/Media status for LTO SCSI drives.
///
/// Media related data is optional - only set if there is a medium
/// loaded.
pub struct LtoDriveAndMediaStatus {
/// Vendor
pub vendor: String,
/// Product
pub product: String,
/// Revision
pub revision: String,
/// Block size (0 is variable size)
pub blocksize: u32,
/// Compression enabled
pub compression: bool,
/// Drive buffer mode
pub buffer_mode: u8,
/// Tape density
pub density: TapeDensity,
/// Media is write protected
#[serde(skip_serializing_if = "Option::is_none")]
pub write_protect: Option<bool>,
/// Tape Alert Flags
#[serde(skip_serializing_if = "Option::is_none")]
pub alert_flags: Option<String>,
/// Current file number
#[serde(skip_serializing_if = "Option::is_none")]
pub file_number: Option<u64>,
/// Current block number
#[serde(skip_serializing_if = "Option::is_none")]
pub block_number: Option<u64>,
/// Medium Manufacture Date (epoch)
#[serde(skip_serializing_if = "Option::is_none")]
pub manufactured: Option<i64>,
/// Total Bytes Read in Medium Life
#[serde(skip_serializing_if = "Option::is_none")]
pub bytes_read: Option<u64>,
/// Total Bytes Written in Medium Life
#[serde(skip_serializing_if = "Option::is_none")]
pub bytes_written: Option<u64>,
/// Number of mounts for the current volume (i.e., Thread Count)
#[serde(skip_serializing_if = "Option::is_none")]
pub volume_mounts: Option<u64>,
/// Count of the total number of times the medium has passed over
/// the head.
#[serde(skip_serializing_if = "Option::is_none")]
pub medium_passes: Option<u64>,
/// Estimated tape wearout factor (assuming max. 16000 end-to-end passes)
#[serde(skip_serializing_if = "Option::is_none")]
pub medium_wearout: Option<f64>,
}
#[api()]
/// Volume statistics from SCSI log page 17h
#[derive(Default, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct Lp17VolumeStatistics {
/// Volume mounts (thread count)
pub volume_mounts: u64,
/// Total data sets written
pub volume_datasets_written: u64,
/// Write retries
pub volume_recovered_write_data_errors: u64,
/// Total unrecovered write errors
pub volume_unrecovered_write_data_errors: u64,
/// Total suspended writes
pub volume_write_servo_errors: u64,
/// Total fatal suspended writes
pub volume_unrecovered_write_servo_errors: u64,
/// Total datasets read
pub volume_datasets_read: u64,
/// Total read retries
pub volume_recovered_read_errors: u64,
/// Total unrecovered read errors
pub volume_unrecovered_read_errors: u64,
/// Last mount unrecovered write errors
pub last_mount_unrecovered_write_errors: u64,
/// Last mount unrecovered read errors
pub last_mount_unrecovered_read_errors: u64,
/// Last mount bytes written
pub last_mount_bytes_written: u64,
/// Last mount bytes read
pub last_mount_bytes_read: u64,
/// Lifetime bytes written
pub lifetime_bytes_written: u64,
/// Lifetime bytes read
pub lifetime_bytes_read: u64,
/// Last load write compression ratio
pub last_load_write_compression_ratio: u64,
/// Last load read compression ratio
pub last_load_read_compression_ratio: u64,
/// Medium mount time
pub medium_mount_time: u64,
/// Medium ready time
pub medium_ready_time: u64,
/// Total native capacity
pub total_native_capacity: u64,
/// Total used native capacity
pub total_used_native_capacity: u64,
/// Write protect
pub write_protect: bool,
/// Volume is WORM
pub worm: bool,
/// Beginning of medium passes
pub beginning_of_medium_passes: u64,
/// Middle of medium passes
pub middle_of_tape_passes: u64,
/// Volume serial number
pub serial: String,
}

View File

@ -1,176 +0,0 @@
use ::serde::{Deserialize, Serialize};
use proxmox_schema::*;
use proxmox_uuid::Uuid;
use crate::{MediaLocation, MediaStatus, UUID_FORMAT};
pub const MEDIA_SET_UUID_SCHEMA: Schema = StringSchema::new(
"MediaSet Uuid (We use the all-zero Uuid to reseve an empty media for a specific pool).",
)
.format(&UUID_FORMAT)
.schema();
pub const MEDIA_UUID_SCHEMA: Schema = StringSchema::new("Media Uuid.")
.format(&UUID_FORMAT)
.schema();
#[api(
properties: {
"media-set-uuid": {
schema: MEDIA_SET_UUID_SCHEMA,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Media Set list entry
pub struct MediaSetListEntry {
/// Media set name
pub media_set_name: String,
pub media_set_uuid: Uuid,
/// MediaSet creation time stamp
pub media_set_ctime: i64,
/// Media Pool
pub pool: String,
}
#[api(
properties: {
location: {
type: MediaLocation,
},
status: {
type: MediaStatus,
},
uuid: {
schema: MEDIA_UUID_SCHEMA,
},
"media-set-uuid": {
schema: MEDIA_SET_UUID_SCHEMA,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Media list entry
pub struct MediaListEntry {
/// Media label text (or Barcode)
pub label_text: String,
pub uuid: Uuid,
/// Creation time stamp
pub ctime: i64,
pub location: MediaLocation,
pub status: MediaStatus,
/// Expired flag
pub expired: bool,
/// Catalog status OK
pub catalog: bool,
/// Media set name
#[serde(skip_serializing_if = "Option::is_none")]
pub media_set_name: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub media_set_uuid: Option<Uuid>,
/// Media set seq_nr
#[serde(skip_serializing_if = "Option::is_none")]
pub seq_nr: Option<u64>,
/// MediaSet creation time stamp
#[serde(skip_serializing_if = "Option::is_none")]
pub media_set_ctime: Option<i64>,
/// Media Pool
#[serde(skip_serializing_if = "Option::is_none")]
pub pool: Option<String>,
}
#[api(
properties: {
uuid: {
schema: MEDIA_UUID_SCHEMA,
},
"media-set-uuid": {
schema: MEDIA_SET_UUID_SCHEMA,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Media label info
pub struct MediaIdFlat {
/// Unique ID
pub uuid: Uuid,
/// Media label text (or Barcode)
pub label_text: String,
/// Creation time stamp
pub ctime: i64,
// All MediaSet properties are optional here
/// MediaSet Pool
#[serde(skip_serializing_if = "Option::is_none")]
pub pool: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub media_set_uuid: Option<Uuid>,
/// MediaSet media sequence number
#[serde(skip_serializing_if = "Option::is_none")]
pub seq_nr: Option<u64>,
/// MediaSet Creation time stamp
#[serde(skip_serializing_if = "Option::is_none")]
pub media_set_ctime: Option<i64>,
/// Encryption key fingerprint
#[serde(skip_serializing_if = "Option::is_none")]
pub encryption_key_fingerprint: Option<String>,
}
#[api(
properties: {
uuid: {
schema: MEDIA_UUID_SCHEMA,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Label with optional Uuid
pub struct LabelUuidMap {
/// Changer label text (or Barcode)
pub label_text: String,
/// Associated Uuid (if any)
pub uuid: Option<Uuid>,
}
#[api(
properties: {
uuid: {
schema: MEDIA_UUID_SCHEMA,
},
"media-set-uuid": {
schema: MEDIA_SET_UUID_SCHEMA,
},
},
)]
#[derive(Serialize, Deserialize, Clone, PartialEq)]
#[serde(rename_all = "kebab-case")]
/// Media content list entry
pub struct MediaContentEntry {
/// Media label text (or Barcode)
pub label_text: String,
/// Media Uuid
pub uuid: Uuid,
/// Media set name
pub media_set_name: String,
/// Media set uuid
pub media_set_uuid: Uuid,
/// MediaSet Creation time stamp
pub media_set_ctime: i64,
/// Media set seq_nr
pub seq_nr: u64,
/// Media Pool
pub pool: String,
/// Datastore Name
pub store: String,
/// Backup snapshot
pub snapshot: String,
/// Snapshot creation time (epoch)
pub backup_time: i64,
}

View File

@ -1,80 +0,0 @@
use anyhow::{bail, Error};
use proxmox_schema::{ApiStringFormat, Schema, StringSchema};
use crate::{CHANGER_NAME_SCHEMA, PROXMOX_SAFE_ID_FORMAT};
pub const VAULT_NAME_SCHEMA: Schema = StringSchema::new("Vault name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
#[derive(Debug, PartialEq, Eq, Clone)]
/// Media location
pub enum MediaLocation {
/// Ready for use (inside tape library)
Online(String),
/// Locally available, but needs to be mounted (insert into tape
/// drive)
Offline,
/// Media is inside a Vault
Vault(String),
}
proxmox_serde::forward_deserialize_to_from_str!(MediaLocation);
proxmox_serde::forward_serialize_to_display!(MediaLocation);
impl proxmox_schema::ApiType for MediaLocation {
const API_SCHEMA: Schema = StringSchema::new(
"Media location (e.g. 'offline', 'online-<changer_name>', 'vault-<vault_name>')",
)
.format(&ApiStringFormat::VerifyFn(|text| {
let location: MediaLocation = text.parse()?;
match location {
MediaLocation::Online(ref changer) => {
CHANGER_NAME_SCHEMA.parse_simple_value(changer)?;
}
MediaLocation::Vault(ref vault) => {
VAULT_NAME_SCHEMA.parse_simple_value(vault)?;
}
MediaLocation::Offline => { /* OK */ }
}
Ok(())
}))
.schema();
}
impl std::fmt::Display for MediaLocation {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
MediaLocation::Offline => {
write!(f, "offline")
}
MediaLocation::Online(changer) => {
write!(f, "online-{}", changer)
}
MediaLocation::Vault(vault) => {
write!(f, "vault-{}", vault)
}
}
}
}
impl std::str::FromStr for MediaLocation {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
if s == "offline" {
return Ok(MediaLocation::Offline);
}
if let Some(changer) = s.strip_prefix("online-") {
return Ok(MediaLocation::Online(changer.to_string()));
}
if let Some(vault) = s.strip_prefix("vault-") {
return Ok(MediaLocation::Vault(vault.to_string()));
}
bail!("MediaLocation parse error");
}
}
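// Minimal round-trip sketch (added for illustration): FromStr and Display are
// inverse for the three prefixed forms accepted above.
#[cfg(test)]
mod location_tests {
    use super::MediaLocation;

    #[test]
    fn round_trips_prefixed_forms() {
        let loc: MediaLocation = "online-sl3".parse().unwrap();
        assert_eq!(loc, MediaLocation::Online("sl3".to_string()));
        assert_eq!(loc.to_string(), "online-sl3");
        assert_eq!(
            "offline".parse::<MediaLocation>().unwrap(),
            MediaLocation::Offline
        );
    }
}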

View File

@ -1,161 +0,0 @@
//! Types for tape media pool API
//!
//! Note: Both MediaSetPolicy and RetentionPolicy are complex enums,
//! so we cannot use them directly for the API. Instead, we represent
//! them as String.
use std::str::FromStr;
use anyhow::Error;
use serde::{Deserialize, Serialize};
use proxmox_schema::{api, ApiStringFormat, Schema, StringSchema, Updater};
use proxmox_time::{CalendarEvent, TimeSpan};
use crate::{
PROXMOX_SAFE_ID_FORMAT, SINGLE_LINE_COMMENT_FORMAT, SINGLE_LINE_COMMENT_SCHEMA,
TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA,
};
pub const MEDIA_POOL_NAME_SCHEMA: Schema = StringSchema::new("Media pool name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(2)
.max_length(32)
.schema();
pub const MEDIA_SET_NAMING_TEMPLATE_SCHEMA: Schema = StringSchema::new(
"Media set naming template (may contain strftime() time format specifications).",
)
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(2)
.max_length(64)
.schema();
pub const MEDIA_SET_ALLOCATION_POLICY_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|s| {
MediaSetPolicy::from_str(s)?;
Ok(())
});
pub const MEDIA_SET_ALLOCATION_POLICY_SCHEMA: Schema =
StringSchema::new("Media set allocation policy ('continue', 'always', or a calendar event).")
.format(&MEDIA_SET_ALLOCATION_POLICY_FORMAT)
.schema();
/// Media set allocation policy
pub enum MediaSetPolicy {
/// Try to use the current media set
ContinueCurrent,
/// Each backup job creates a new media set
AlwaysCreate,
/// Create a new set when the specified CalendarEvent triggers
CreateAt(CalendarEvent),
}
impl std::str::FromStr for MediaSetPolicy {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
if s == "continue" {
return Ok(MediaSetPolicy::ContinueCurrent);
}
if s == "always" {
return Ok(MediaSetPolicy::AlwaysCreate);
}
let event = s.parse()?;
Ok(MediaSetPolicy::CreateAt(event))
}
}
pub const MEDIA_RETENTION_POLICY_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|s| {
RetentionPolicy::from_str(s)?;
Ok(())
});
pub const MEDIA_RETENTION_POLICY_SCHEMA: Schema =
StringSchema::new("Media retention policy ('overwrite', 'keep', or time span).")
.format(&MEDIA_RETENTION_POLICY_FORMAT)
.schema();
/// Media retention Policy
pub enum RetentionPolicy {
/// Always overwrite media
OverwriteAlways,
/// Protect data for the timespan specified
ProtectFor(TimeSpan),
/// Never overwrite data
KeepForever,
}
impl std::str::FromStr for RetentionPolicy {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
if s == "overwrite" {
return Ok(RetentionPolicy::OverwriteAlways);
}
if s == "keep" {
return Ok(RetentionPolicy::KeepForever);
}
let time_span = s.parse()?;
Ok(RetentionPolicy::ProtectFor(time_span))
}
}
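// Hedged sketch (added): besides the fixed keywords, the allocation policy
// accepts a calendar event and the retention policy a time span; "14 days" is
// assumed to be a valid proxmox_time::TimeSpan.
#[cfg(test)]
mod policy_tests {
    use super::{MediaSetPolicy, RetentionPolicy};

    #[test]
    fn parses_policy_strings() {
        assert!(matches!(
            "continue".parse::<MediaSetPolicy>(),
            Ok(MediaSetPolicy::ContinueCurrent)
        ));
        assert!(matches!(
            "keep".parse::<RetentionPolicy>(),
            Ok(RetentionPolicy::KeepForever)
        ));
        assert!(matches!(
            "14 days".parse::<RetentionPolicy>(),
            Ok(RetentionPolicy::ProtectFor(_))
        ));
    }
}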
#[api(
properties: {
name: {
schema: MEDIA_POOL_NAME_SCHEMA,
},
allocation: {
schema: MEDIA_SET_ALLOCATION_POLICY_SCHEMA,
optional: true,
},
retention: {
schema: MEDIA_RETENTION_POLICY_SCHEMA,
optional: true,
},
template: {
schema: MEDIA_SET_NAMING_TEMPLATE_SCHEMA,
optional: true,
},
encrypt: {
schema: TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA,
optional: true,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
},
)]
#[derive(Serialize, Deserialize, Updater)]
/// Media pool configuration
pub struct MediaPoolConfig {
/// The pool name
#[updater(skip)]
pub name: String,
/// Media Set allocation policy
#[serde(skip_serializing_if = "Option::is_none")]
pub allocation: Option<String>,
/// Media retention policy
#[serde(skip_serializing_if = "Option::is_none")]
pub retention: Option<String>,
/// Media set naming template (default "%c")
///
/// The template is UTF8 text, and can include strftime time
/// format specifications.
#[serde(skip_serializing_if = "Option::is_none")]
pub template: Option<String>,
/// Encryption key fingerprint
///
/// If set, encrypt all data using the specified key.
#[serde(skip_serializing_if = "Option::is_none")]
pub encrypt: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
}

View File

@ -1,21 +0,0 @@
use serde::{Deserialize, Serialize};
use proxmox_schema::api;
#[api()]
#[derive(Debug, PartialEq, Eq, Copy, Clone, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Media Status
pub enum MediaStatus {
/// Media is ready to be written
Writable,
/// Media is full (contains data)
Full,
/// Media is marked as unknown, needs rescan
Unknown,
/// Media is marked as damaged
Damaged,
/// Media is marked as retired
Retired,
}

View File

@ -1,92 +0,0 @@
//! Types for tape backup API
mod device;
pub use device::*;
mod changer;
pub use changer::*;
mod drive;
pub use drive::*;
mod media_pool;
pub use media_pool::*;
mod media_status;
pub use media_status::*;
mod media_location;
pub use media_location::*;
mod media;
pub use media::*;
use const_format::concatcp;
use serde::{Deserialize, Serialize};
use proxmox_schema::{api, const_regex, ApiStringFormat, Schema, StringSchema};
use proxmox_uuid::Uuid;
use crate::{
BackupType, BACKUP_ID_SCHEMA, BACKUP_NS_PATH_RE, FINGERPRINT_SHA256_FORMAT,
PROXMOX_SAFE_ID_REGEX_STR, SNAPSHOT_PATH_REGEX_STR,
};
const_regex! {
pub TAPE_RESTORE_SNAPSHOT_REGEX = concatcp!(r"^", PROXMOX_SAFE_ID_REGEX_STR, r":(?:", BACKUP_NS_PATH_RE,")?", SNAPSHOT_PATH_REGEX_STR, r"$");
}
pub const TAPE_RESTORE_SNAPSHOT_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&TAPE_RESTORE_SNAPSHOT_REGEX);
pub const TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA: Schema =
StringSchema::new("Tape encryption key fingerprint (sha256).")
.format(&FINGERPRINT_SHA256_FORMAT)
.schema();
pub const TAPE_RESTORE_SNAPSHOT_SCHEMA: Schema =
StringSchema::new("A snapshot in the format: 'store:[ns/namespace/...]type/id/time")
.format(&TAPE_RESTORE_SNAPSHOT_FORMAT)
.type_text("store:[ns/namespace/...]type/id/time")
.schema();
#[api(
properties: {
pool: {
schema: MEDIA_POOL_NAME_SCHEMA,
optional: true,
},
"label-text": {
schema: MEDIA_LABEL_SCHEMA,
optional: true,
},
"media": {
schema: MEDIA_UUID_SCHEMA,
optional: true,
},
"media-set": {
schema: MEDIA_SET_UUID_SCHEMA,
optional: true,
},
"backup-type": {
type: BackupType,
optional: true,
},
"backup-id": {
schema: BACKUP_ID_SCHEMA,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Content list filter parameters
pub struct MediaContentListFilter {
pub pool: Option<String>,
pub label_text: Option<String>,
pub media: Option<Uuid>,
pub media_set: Option<Uuid>,
pub backup_type: Option<BackupType>,
pub backup_id: Option<String>,
}

View File

@ -1,141 +0,0 @@
use serde::{Deserialize, Serialize};
use proxmox_human_byte::HumanByte;
use proxmox_schema::{api, IntegerSchema, Schema, StringSchema, Updater};
use crate::{
CIDR_SCHEMA, DAILY_DURATION_FORMAT, PROXMOX_SAFE_ID_FORMAT, SINGLE_LINE_COMMENT_SCHEMA,
};
pub const TRAFFIC_CONTROL_TIMEFRAME_SCHEMA: Schema =
StringSchema::new("Timeframe to specify when the rule is active.")
.format(&DAILY_DURATION_FORMAT)
.schema();
pub const TRAFFIC_CONTROL_ID_SCHEMA: Schema = StringSchema::new("Rule ID.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
.max_length(32)
.schema();
pub const TRAFFIC_CONTROL_RATE_SCHEMA: Schema =
IntegerSchema::new("Rate limit (for Token bucket filter) in bytes/second.")
.minimum(100_000)
.schema();
pub const TRAFFIC_CONTROL_BURST_SCHEMA: Schema =
IntegerSchema::new("Size of the token bucket (for Token bucket filter) in bytes.")
.minimum(1000)
.schema();
#[api(
properties: {
"rate-in": {
type: HumanByte,
optional: true,
},
"burst-in": {
type: HumanByte,
optional: true,
},
"rate-out": {
type: HumanByte,
optional: true,
},
"burst-out": {
type: HumanByte,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize, Default, Clone, Updater, PartialEq)]
#[serde(rename_all = "kebab-case")]
/// Rate Limit Configuration
pub struct RateLimitConfig {
#[serde(skip_serializing_if = "Option::is_none")]
pub rate_in: Option<HumanByte>,
#[serde(skip_serializing_if = "Option::is_none")]
pub burst_in: Option<HumanByte>,
#[serde(skip_serializing_if = "Option::is_none")]
pub rate_out: Option<HumanByte>,
#[serde(skip_serializing_if = "Option::is_none")]
pub burst_out: Option<HumanByte>,
}
impl RateLimitConfig {
pub fn with_same_inout(rate: Option<HumanByte>, burst: Option<HumanByte>) -> Self {
Self {
rate_in: rate,
burst_in: burst,
rate_out: rate,
burst_out: burst,
}
}
}
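// Usage sketch (added; assumes HumanByte implements FromStr for values like
// "100 MiB"): apply the same limit to ingress and egress.
#[cfg(test)]
mod rate_limit_tests {
    use super::{HumanByte, RateLimitConfig};

    #[test]
    fn symmetric_limits() {
        let rate: HumanByte = "100 MiB".parse().expect("valid byte size");
        let limit = RateLimitConfig::with_same_inout(Some(rate), None);
        assert!(limit.rate_in == limit.rate_out);
        assert!(limit.burst_in.is_none() && limit.burst_out.is_none());
    }
}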
#[api(
properties: {
name: {
schema: TRAFFIC_CONTROL_ID_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
limit: {
type: RateLimitConfig,
},
network: {
type: Array,
items: {
schema: CIDR_SCHEMA,
},
},
timeframe: {
type: Array,
items: {
schema: TRAFFIC_CONTROL_TIMEFRAME_SCHEMA,
},
optional: true,
},
},
)]
#[derive(Clone, Serialize, Deserialize, PartialEq, Updater)]
#[serde(rename_all = "kebab-case")]
/// Traffic control rule
pub struct TrafficControlRule {
#[updater(skip)]
pub name: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
/// Rule applies to source IPs within these networks
pub network: Vec<String>,
#[serde(flatten)]
pub limit: RateLimitConfig,
// fixme: expose this?
// /// Bandwidth is shared across all connections
// #[serde(skip_serializing_if="Option::is_none")]
// pub shared: Option<bool>,
/// Enable the rule at specific times
#[serde(skip_serializing_if = "Option::is_none")]
pub timeframe: Option<Vec<String>>,
}
#[api(
properties: {
config: {
type: TrafficControlRule,
},
},
)]
#[derive(Clone, Serialize, Deserialize, PartialEq)]
#[serde(rename_all = "kebab-case")]
/// Traffic control rule config with current rates
pub struct TrafficControlCurrentRate {
#[serde(flatten)]
pub config: TrafficControlRule,
/// Current ingress rate in bytes/second
pub cur_rate_in: u64,
/// Current egress rate in bytes/second
pub cur_rate_out: u64,
}

View File

@ -1,226 +0,0 @@
use serde::{Deserialize, Serialize};
use proxmox_schema::{api, BooleanSchema, IntegerSchema, Schema, StringSchema, Updater};
use super::userid::{Authid, Userid, PROXMOX_TOKEN_ID_SCHEMA};
use super::{SINGLE_LINE_COMMENT_FORMAT, SINGLE_LINE_COMMENT_SCHEMA};
pub const ENABLE_USER_SCHEMA: Schema = BooleanSchema::new(
"Enable the account (default). You can set this to '0' to disable the account.",
)
.default(true)
.schema();
pub const EXPIRE_USER_SCHEMA: Schema = IntegerSchema::new(
"Account expiration date (seconds since epoch). '0' means no expiration date.",
)
.default(0)
.minimum(0)
.schema();
pub const FIRST_NAME_SCHEMA: Schema = StringSchema::new("First name.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(2)
.max_length(64)
.schema();
pub const LAST_NAME_SCHEMA: Schema = StringSchema::new("Last name.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(2)
.max_length(64)
.schema();
pub const EMAIL_SCHEMA: Schema = StringSchema::new("E-Mail Address.")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.min_length(2)
.max_length(64)
.schema();
#[api(
properties: {
userid: {
type: Userid,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
optional: true,
schema: ENABLE_USER_SCHEMA,
},
expire: {
optional: true,
schema: EXPIRE_USER_SCHEMA,
},
firstname: {
optional: true,
schema: FIRST_NAME_SCHEMA,
},
lastname: {
schema: LAST_NAME_SCHEMA,
optional: true,
},
email: {
schema: EMAIL_SCHEMA,
optional: true,
},
tokens: {
type: Array,
optional: true,
description: "List of user's API tokens.",
items: {
type: ApiToken
},
},
"totp-locked": {
type: bool,
optional: true,
default: false,
description: "True if the user is currently locked out of TOTP factors",
},
"tfa-locked-until": {
optional: true,
description: "Contains a timestamp until when a user is locked out of 2nd factors",
},
}
)]
#[derive(Serialize, Deserialize, Clone, PartialEq)]
#[serde(rename_all = "kebab-case")]
/// User properties with added list of ApiTokens
pub struct UserWithTokens {
pub userid: Userid,
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub enable: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub expire: Option<i64>,
#[serde(skip_serializing_if = "Option::is_none")]
pub firstname: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub lastname: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub email: Option<String>,
#[serde(skip_serializing_if = "Vec::is_empty", default)]
pub tokens: Vec<ApiToken>,
#[serde(skip_serializing_if = "bool_is_false", default)]
pub totp_locked: bool,
#[serde(skip_serializing_if = "Option::is_none")]
pub tfa_locked_until: Option<i64>,
}
fn bool_is_false(b: &bool) -> bool {
!b
}
#[api(
properties: {
tokenid: {
schema: PROXMOX_TOKEN_ID_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
optional: true,
schema: ENABLE_USER_SCHEMA,
},
expire: {
optional: true,
schema: EXPIRE_USER_SCHEMA,
},
}
)]
#[derive(Serialize, Deserialize, Clone, PartialEq)]
/// ApiToken properties.
pub struct ApiToken {
pub tokenid: Authid,
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub enable: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub expire: Option<i64>,
}
impl ApiToken {
pub fn is_active(&self) -> bool {
if !self.enable.unwrap_or(true) {
return false;
}
if let Some(expire) = self.expire {
let now = proxmox_time::epoch_i64();
if expire > 0 && expire <= now {
return false;
}
}
true
}
}
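// Illustration (added): a token is active unless explicitly disabled or past a
// non-zero expiration timestamp; the token id below is hypothetical.
#[cfg(test)]
mod token_tests {
    use super::ApiToken;

    #[test]
    fn token_activity() {
        let mut token = ApiToken {
            tokenid: "user@pbs!demo".parse().unwrap(),
            comment: None,
            enable: None,    // unset defaults to enabled
            expire: Some(0), // '0' means no expiration
        };
        assert!(token.is_active());
        token.enable = Some(false);
        assert!(!token.is_active());
    }
}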
#[api(
properties: {
userid: {
type: Userid,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
optional: true,
schema: ENABLE_USER_SCHEMA,
},
expire: {
optional: true,
schema: EXPIRE_USER_SCHEMA,
},
firstname: {
optional: true,
schema: FIRST_NAME_SCHEMA,
},
lastname: {
schema: LAST_NAME_SCHEMA,
optional: true,
},
email: {
schema: EMAIL_SCHEMA,
optional: true,
},
}
)]
#[derive(Serialize, Deserialize, Updater, PartialEq, Eq)]
/// User properties.
pub struct User {
#[updater(skip)]
pub userid: Userid,
#[serde(skip_serializing_if = "Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub enable: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub expire: Option<i64>,
#[serde(skip_serializing_if = "Option::is_none")]
pub firstname: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub lastname: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub email: Option<String>,
}
impl User {
pub fn is_active(&self) -> bool {
if !self.enable.unwrap_or(true) {
return false;
}
if let Some(expire) = self.expire {
let now = proxmox_time::epoch_i64();
if expire > 0 && expire <= now {
return false;
}
}
true
}
}

View File

@ -1,78 +0,0 @@
use serde::{Deserialize, Serialize};
use proxmox_schema::*;
const_regex! {
pub ZPOOL_NAME_REGEX = r"^[a-zA-Z][a-z0-9A-Z\-_.:]+$";
}
pub const ZFS_ASHIFT_SCHEMA: Schema = IntegerSchema::new("Pool sector size exponent.")
.minimum(9)
.maximum(16)
.default(12)
.schema();
pub const ZPOOL_NAME_SCHEMA: Schema = StringSchema::new("ZFS Pool Name")
.format(&ApiStringFormat::Pattern(&ZPOOL_NAME_REGEX))
.schema();
#[api(default: "On")]
#[derive(Debug, Copy, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// The ZFS compression algorithm to use.
pub enum ZfsCompressionType {
/// Gnu Zip
Gzip,
/// LZ4
Lz4,
/// LZJB
Lzjb,
/// ZLE
Zle,
/// ZStd
ZStd,
/// Enable compression using the default algorithm.
On,
/// Disable compression.
Off,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// The ZFS RAID level to use.
pub enum ZfsRaidLevel {
/// Single Disk
Single,
/// Mirror
Mirror,
/// Raid10
Raid10,
/// RaidZ
RaidZ,
/// RaidZ2
RaidZ2,
/// RaidZ3
RaidZ3,
}
#[api()]
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
/// zpool list item
pub struct ZpoolListItem {
/// zpool name
pub name: String,
/// Health
pub health: String,
/// Total size
pub size: u64,
/// Used size
pub alloc: u64,
/// Free space
pub free: u64,
/// ZFS fragmentation level
pub frag: u64,
/// ZFS deduplication ratio
pub dedup: f64,
}

View File

@ -1,76 +0,0 @@
use pbs_api_types::{BackupGroup, BackupType, GroupFilter};
use std::str::FromStr;
#[test]
fn test_no_filters() {
let group_filters = vec![];
let do_backup = [
"vm/101", "vm/102", "vm/103", "vm/104", "vm/105", "vm/106", "vm/107", "vm/108", "vm/109",
];
for id in do_backup {
assert!(BackupGroup::new(BackupType::Vm, id).apply_filters(&group_filters));
}
}
#[test]
fn test_include_filters() {
let group_filters = vec![GroupFilter::from_str("regex:.*10[2-8]").unwrap()];
let do_backup = [
"vm/102", "vm/103", "vm/104", "vm/105", "vm/106", "vm/107", "vm/108",
];
let dont_backup = ["vm/101", "vm/109"];
for id in do_backup {
assert!(BackupGroup::new(BackupType::Vm, id).apply_filters(&group_filters));
}
for id in dont_backup {
assert!(!BackupGroup::new(BackupType::Vm, id).apply_filters(&group_filters));
}
}
#[test]
fn test_exclude_filters() {
let group_filters = [
GroupFilter::from_str("exclude:regex:.*10[1-3]").unwrap(),
GroupFilter::from_str("exclude:regex:.*10[5-7]").unwrap(),
];
let do_backup = ["vm/104", "vm/108", "vm/109"];
let dont_backup = ["vm/101", "vm/102", "vm/103", "vm/105", "vm/106", "vm/107"];
for id in do_backup {
assert!(BackupGroup::new(BackupType::Vm, id).apply_filters(&group_filters));
}
for id in dont_backup {
assert!(!BackupGroup::new(BackupType::Vm, id).apply_filters(&group_filters));
}
}
#[test]
fn test_include_and_exclude_filters() {
let group_filters = [
GroupFilter::from_str("exclude:regex:.*10[1-3]").unwrap(),
GroupFilter::from_str("regex:.*10[2-8]").unwrap(),
GroupFilter::from_str("exclude:regex:.*10[5-7]").unwrap(),
];
let do_backup = ["vm/104", "vm/108"];
let dont_backup = [
"vm/101", "vm/102", "vm/103", "vm/105", "vm/106", "vm/107", "vm/109",
];
for id in do_backup {
assert!(BackupGroup::new(BackupType::Vm, id).apply_filters(&group_filters));
}
for id in dont_backup {
assert!(!BackupGroup::new(BackupType::Vm, id).apply_filters(&group_filters));
}
}

View File

@ -1,5 +1,11 @@
//! Exports configuration data from the build system
pub const PROXMOX_BACKUP_CRATE_VERSION: &str = env!("CARGO_PKG_VERSION");
// TODO: clean-up, drop the RELEASE one, should not be required on its own and if it would be just
// the X.Y part, also add the Debian package revision (extracted through build.rs) in an existing
// or new constant.
pub const PROXMOX_PKG_VERSION: &str = concat!(
env!("CARGO_PKG_VERSION_MAJOR"),
".",
@ -92,6 +98,8 @@ pub const PROXMOX_BACKUP_KERNEL_FN: &str =
pub const PROXMOX_BACKUP_SUBSCRIPTION_FN: &str = configdir!("/subscription");
pub const APT_PKG_STATE_FN: &str = concat!(PROXMOX_BACKUP_STATE_DIR_M!(), "/pkg-state.json");
/// Prepend configuration directory to a file name
///
/// This is a simple way to get the full path for configuration files.

View File

@ -12,11 +12,8 @@ bytes.workspace = true
futures.workspace = true
h2.workspace = true
hex.workspace = true
http.workspace = true
hyper.workspace = true
lazy_static.workspace = true
libc.workspace = true
log.workspace = true
nix.workspace = true
openssl.workspace = true
percent-encoding.workspace = true
@ -30,6 +27,7 @@ tokio = { workspace = true, features = [ "fs", "signal" ] }
tokio-stream.workspace = true
tower-service.workspace = true
xdg.workspace = true
hickory-resolver.workspace = true
pathpatterns.workspace = true
@ -39,7 +37,7 @@ proxmox-compression.workspace = true
proxmox-http = { workspace = true, features = [ "rate-limiter" ] }
proxmox-human-byte.workspace = true
proxmox-io = { workspace = true, features = [ "tokio" ] }
proxmox-lang.workspace = true
proxmox-log = { workspace = true }
proxmox-router = { workspace = true, features = [ "cli", "server" ] }
proxmox-schema.workspace = true
proxmox-sys.workspace = true
@ -48,6 +46,5 @@ proxmox-time.workspace = true
pxar.workspace = true
pbs-api-types.workspace = true
pbs-buildcfg.workspace = true
pbs-datastore.workspace = true
pbs-tools.workspace = true

View File

@ -1,19 +1,17 @@
use anyhow::{format_err, Error};
use std::fs::File;
use std::io::{Seek, SeekFrom, Write};
use std::os::unix::fs::OpenOptionsExt;
use std::sync::Arc;
use futures::future::AbortHandle;
use serde_json::{json, Value};
use pbs_api_types::{BackupDir, BackupNamespace};
use pbs_api_types::{BackupArchiveName, BackupDir, BackupNamespace, MANIFEST_BLOB_NAME};
use pbs_datastore::data_blob::DataBlob;
use pbs_datastore::data_blob_reader::DataBlobReader;
use pbs_datastore::dynamic_index::DynamicIndexReader;
use pbs_datastore::fixed_index::FixedIndexReader;
use pbs_datastore::index::IndexFile;
use pbs_datastore::manifest::MANIFEST_BLOB_NAME;
use pbs_datastore::{BackupManifest, PROXMOX_BACKUP_READER_PROTOCOL_ID_V1};
use pbs_tools::crypt_config::CryptConfig;
use pbs_tools::sha::sha256;
@ -128,7 +126,8 @@ impl BackupReader {
/// The manifest signature is verified if we have a crypt_config.
pub async fn download_manifest(&self) -> Result<(BackupManifest, Vec<u8>), Error> {
let mut raw_data = Vec::with_capacity(64 * 1024);
self.download(MANIFEST_BLOB_NAME, &mut raw_data).await?;
self.download(MANIFEST_BLOB_NAME.as_ref(), &mut raw_data)
.await?;
let blob = DataBlob::load_from_reader(&mut &raw_data[..])?;
// no expected digest available
let data = blob.decode(None, None)?;
@ -141,20 +140,16 @@ impl BackupReader {
/// Download a .blob file
///
/// This creates a temporary file in /tmp (using O_TMPFILE). The data is verified using
/// the provided manifest.
/// This creates a temporary file (See [`crate::tools::create_tmp_file`] for
/// details). The data is verified using the provided manifest.
pub async fn download_blob(
&self,
manifest: &BackupManifest,
name: &str,
name: &BackupArchiveName,
) -> Result<DataBlobReader<'_, File>, Error> {
let mut tmpfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
.custom_flags(libc::O_TMPFILE)
.open("/tmp")?;
let mut tmpfile = crate::tools::create_tmp_file()?;
self.download(name, &mut tmpfile).await?;
self.download(name.as_ref(), &mut tmpfile).await?;
tmpfile.seek(SeekFrom::Start(0))?;
let (csum, size) = sha256(&mut tmpfile)?;
@ -167,20 +162,16 @@ impl BackupReader {
/// Download dynamic index file
///
/// This creates a temporary file in /tmp (using O_TMPFILE). The index is verified using
/// the provided manifest.
/// This creates a temporary file (See [`crate::tools::create_tmp_file`] for
/// details). The index is verified using the provided manifest.
pub async fn download_dynamic_index(
&self,
manifest: &BackupManifest,
name: &str,
name: &BackupArchiveName,
) -> Result<DynamicIndexReader, Error> {
let mut tmpfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
.custom_flags(libc::O_TMPFILE)
.open("/tmp")?;
let mut tmpfile = crate::tools::create_tmp_file()?;
self.download(name, &mut tmpfile).await?;
self.download(name.as_ref(), &mut tmpfile).await?;
let index = DynamicIndexReader::new(tmpfile)
.map_err(|err| format_err!("unable to read dynamic index '{}' - {}", name, err))?;
@ -194,20 +185,16 @@ impl BackupReader {
/// Download fixed index file
///
/// This creates a temporary file in /tmp (using O_TMPFILE). The index is verified using
/// the provided manifest.
/// This creates a temporary file (See [`crate::tools::create_tmp_file`] for
/// details). The index is verified using the provided manifest.
pub async fn download_fixed_index(
&self,
manifest: &BackupManifest,
name: &str,
name: &BackupArchiveName,
) -> Result<FixedIndexReader, Error> {
let mut tmpfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
.custom_flags(libc::O_TMPFILE)
.open("/tmp")?;
let mut tmpfile = crate::tools::create_tmp_file()?;
self.download(name, &mut tmpfile).await?;
self.download(name.as_ref(), &mut tmpfile).await?;
let index = FixedIndexReader::new(tmpfile)
.map_err(|err| format_err!("unable to read fixed index '{}' - {}", name, err))?;

View File

@ -1,4 +1,5 @@
use anyhow::{bail, Error};
use serde::{Deserialize, Serialize};
use proxmox_schema::*;
@ -6,10 +7,13 @@ const_regex! {
BACKUPSPEC_REGEX = r"^([a-zA-Z0-9_-]+\.(pxar|img|conf|log)):(.+)$";
}
pub const BACKUP_SOURCE_SCHEMA: Schema =
StringSchema::new("Backup source specification ([<label>:<path>]).")
.format(&ApiStringFormat::Pattern(&BACKUPSPEC_REGEX))
.schema();
pub const BACKUP_SOURCE_SCHEMA: Schema = StringSchema::new(
"Backup source specification ([<archive-name>.<type>:<source-path>]), the \
'archive-name' must contain alphanumerics, hyphens and underscores only. \
The 'type' must be either 'pxar', 'img', 'conf' or 'log'.",
)
.format(&ApiStringFormat::Pattern(&BACKUPSPEC_REGEX))
.schema();
pub enum BackupSpecificationType {
PXAR,
@ -34,7 +38,7 @@ pub fn parse_backup_specification(value: &str) -> Result<BackupSpecification, Er
"img" => BackupSpecificationType::IMAGE,
"conf" => BackupSpecificationType::CONFIG,
"log" => BackupSpecificationType::LOGFILE,
_ => bail!("unknown backup source type '{}'", extension),
_ => bail!("unknown backup source type '{extension}'"),
};
return Ok(BackupSpecification {
archive_name,
@ -43,5 +47,30 @@ pub fn parse_backup_specification(value: &str) -> Result<BackupSpecification, Er
});
}
bail!("unable to parse backup source specification '{}'", value);
bail!("unable to parse backup source specification '{value}'");
}
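// Parsing sketch (added for illustration; assumes the BackupSpecification
// fields are public as used here): label and source path split at the first
// colon, and the extension selects the type.
#[cfg(test)]
mod spec_tests {
    use super::parse_backup_specification;

    #[test]
    fn parses_pxar_spec() {
        let spec = parse_backup_specification("root.pxar:/").unwrap();
        assert_eq!(spec.archive_name, "root.pxar");
        // extensions outside pxar/img/conf/log are rejected
        assert!(parse_backup_specification("root.tar:/").is_err());
    }
}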
#[api]
#[derive(Default, Deserialize, Serialize)]
#[serde(rename_all = "lowercase")]
/// Mode to detect file changes since last backup run
pub enum BackupDetectionMode {
/// Encode backup as a self-contained pxar archive
#[default]
Legacy,
/// Split backup mode, re-encode payload data
Data,
/// Compare metadata, reuse payload chunks if metadata unchanged
Metadata,
}
impl BackupDetectionMode {
/// Selected mode is data-based file change detection with split meta/payload streams
pub fn is_data(&self) -> bool {
matches!(self, Self::Data)
}
/// Selected mode is metadata-based file change detection
pub fn is_metadata(&self) -> bool {
matches!(self, Self::Metadata)
}
}
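// Illustration (added): callers query the mode via the helpers above; the
// default remains the self-contained 'legacy' encoding.
#[cfg(test)]
mod detection_mode_tests {
    use super::BackupDetectionMode;

    #[test]
    fn default_is_legacy() {
        let mode = BackupDetectionMode::default();
        assert!(!mode.is_data() && !mode.is_metadata());
        assert!(BackupDetectionMode::Metadata.is_metadata());
    }
}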

View File

@ -0,0 +1,119 @@
//! Implements counters to generate statistics for log output during uploads with the backup writer
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
use std::sync::Arc;
use std::time::Duration;
use crate::pxar::create::ReusableDynamicEntry;
/// Basic backup run statistics and archive checksum
pub struct BackupStats {
pub size: u64,
pub csum: [u8; 32],
pub duration: Duration,
pub chunk_count: u64,
}
/// Extended backup run statistics and archive checksum
pub(crate) struct UploadStats {
pub(crate) chunk_count: usize,
pub(crate) chunk_reused: usize,
pub(crate) chunk_injected: usize,
pub(crate) size: usize,
pub(crate) size_reused: usize,
pub(crate) size_injected: usize,
pub(crate) size_compressed: usize,
pub(crate) duration: Duration,
pub(crate) csum: [u8; 32],
}
impl UploadStats {
/// Convert the upload stats to the more concise [`BackupStats`]
#[inline(always)]
pub(crate) fn to_backup_stats(&self) -> BackupStats {
BackupStats {
chunk_count: self.chunk_count as u64,
size: self.size as u64,
duration: self.duration,
csum: self.csum,
}
}
}
/// Atomic counters for accounting upload stream progress information
#[derive(Clone)]
pub(crate) struct UploadCounters {
injected_chunk_count: Arc<AtomicUsize>,
known_chunk_count: Arc<AtomicUsize>,
total_chunk_count: Arc<AtomicUsize>,
compressed_stream_len: Arc<AtomicU64>,
injected_stream_len: Arc<AtomicUsize>,
reused_stream_len: Arc<AtomicUsize>,
total_stream_len: Arc<AtomicUsize>,
}
impl UploadCounters {
/// Create new, zero-initialized upload counters
pub(crate) fn new() -> Self {
Self {
total_chunk_count: Arc::new(AtomicUsize::new(0)),
injected_chunk_count: Arc::new(AtomicUsize::new(0)),
known_chunk_count: Arc::new(AtomicUsize::new(0)),
compressed_stream_len: Arc::new(AtomicU64::new(0)),
injected_stream_len: Arc::new(AtomicUsize::new(0)),
reused_stream_len: Arc::new(AtomicUsize::new(0)),
total_stream_len: Arc::new(AtomicUsize::new(0)),
}
}
#[inline(always)]
pub(crate) fn add_known_chunk(&mut self, chunk_len: usize) -> usize {
self.known_chunk_count.fetch_add(1, Ordering::SeqCst);
self.total_chunk_count.fetch_add(1, Ordering::SeqCst);
self.reused_stream_len
.fetch_add(chunk_len, Ordering::SeqCst);
self.total_stream_len.fetch_add(chunk_len, Ordering::SeqCst)
}
#[inline(always)]
pub(crate) fn add_new_chunk(&mut self, chunk_len: usize, chunk_raw_size: u64) -> usize {
self.total_chunk_count.fetch_add(1, Ordering::SeqCst);
self.compressed_stream_len
.fetch_add(chunk_raw_size, Ordering::SeqCst);
self.total_stream_len.fetch_add(chunk_len, Ordering::SeqCst)
}
#[inline(always)]
pub(crate) fn add_injected_chunk(&mut self, chunk: &ReusableDynamicEntry) -> usize {
self.total_chunk_count.fetch_add(1, Ordering::SeqCst);
self.injected_chunk_count.fetch_add(1, Ordering::SeqCst);
self.reused_stream_len
.fetch_add(chunk.size() as usize, Ordering::SeqCst);
self.injected_stream_len
.fetch_add(chunk.size() as usize, Ordering::SeqCst);
self.total_stream_len
.fetch_add(chunk.size() as usize, Ordering::SeqCst)
}
#[inline(always)]
pub(crate) fn total_stream_len(&self) -> usize {
self.total_stream_len.load(Ordering::SeqCst)
}
/// Convert the counters to [`UploadStats`], including the given archive checksum and runtime.
#[inline(always)]
pub(crate) fn to_upload_stats(&self, csum: [u8; 32], duration: Duration) -> UploadStats {
UploadStats {
chunk_count: self.total_chunk_count.load(Ordering::SeqCst),
chunk_reused: self.known_chunk_count.load(Ordering::SeqCst),
chunk_injected: self.injected_chunk_count.load(Ordering::SeqCst),
size: self.total_stream_len.load(Ordering::SeqCst),
size_reused: self.reused_stream_len.load(Ordering::SeqCst),
size_injected: self.injected_stream_len.load(Ordering::SeqCst),
size_compressed: self.compressed_stream_len.load(Ordering::SeqCst) as usize,
duration,
csum,
}
}
}
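// Hedged sketch (added): clones share the underlying atomics via Arc, so a
// read-only clone observes progress made through the writable handle.
#[cfg(test)]
mod counter_tests {
    use super::UploadCounters;

    #[test]
    fn clones_share_counters() {
        let mut counters = UploadCounters::new();
        let readonly = counters.clone();
        counters.add_known_chunk(4096);
        assert_eq!(readonly.total_stream_len(), 4096);
    }
}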

View File

@ -1,28 +1,35 @@
use std::collections::HashSet;
use std::future::Future;
use std::os::unix::fs::OpenOptionsExt;
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
use std::time::Instant;
use anyhow::{bail, format_err, Error};
use futures::future::{self, AbortHandle, Either, FutureExt, TryFutureExt};
use futures::stream::{Stream, StreamExt, TryStreamExt};
use openssl::sha::Sha256;
use serde_json::{json, Value};
use tokio::io::AsyncReadExt;
use tokio::sync::{mpsc, oneshot};
use tokio_stream::wrappers::ReceiverStream;
use pbs_api_types::{BackupDir, BackupNamespace};
use pbs_api_types::{
ArchiveType, BackupArchiveName, BackupDir, BackupNamespace, CATALOG_NAME, MANIFEST_BLOB_NAME,
};
use pbs_datastore::data_blob::{ChunkInfo, DataBlob, DataChunkBuilder};
use pbs_datastore::dynamic_index::DynamicIndexReader;
use pbs_datastore::fixed_index::FixedIndexReader;
use pbs_datastore::index::IndexFile;
use pbs_datastore::manifest::{ArchiveType, BackupManifest, MANIFEST_BLOB_NAME};
use pbs_datastore::{CATALOG_NAME, PROXMOX_BACKUP_PROTOCOL_ID_V1};
use pbs_datastore::manifest::BackupManifest;
use pbs_datastore::PROXMOX_BACKUP_PROTOCOL_ID_V1;
use pbs_tools::crypt_config::CryptConfig;
use proxmox_human_byte::HumanByte;
use proxmox_log::{debug, enabled, info, trace, warn, Level};
use proxmox_time::TimeSpan;
use super::backup_stats::{BackupStats, UploadCounters, UploadStats};
use super::inject_reused_chunks::{InjectChunks, InjectReusedChunks, InjectedChunksInfo};
use super::merge_known_chunks::{MergeKnownChunks, MergedChunkInfo};
use super::{H2Client, HttpClient};
@ -39,11 +46,6 @@ impl Drop for BackupWriter {
}
}
pub struct BackupStats {
pub size: u64,
pub csum: [u8; 32],
}
/// Options for uploading blobs/streams to the server
#[derive(Default, Clone)]
pub struct UploadOptions {
@ -53,17 +55,12 @@ pub struct UploadOptions {
pub fixed_size: Option<u64>,
}
struct UploadStats {
chunk_count: usize,
chunk_reused: usize,
struct ChunkUploadResponse {
future: h2::legacy::client::ResponseFuture,
size: usize,
size_reused: usize,
size_compressed: usize,
duration: std::time::Duration,
csum: [u8; 32],
}
type UploadQueueSender = mpsc::Sender<(MergedChunkInfo, Option<h2::client::ResponseFuture>)>;
type UploadQueueSender = mpsc::Sender<(MergedChunkInfo, Option<ChunkUploadResponse>)>;
type UploadResultReceiver = oneshot::Receiver<Result<(), Error>>;
impl BackupWriter {
@ -146,7 +143,7 @@ impl BackupWriter {
param: Option<Value>,
content_type: &str,
data: Vec<u8>,
) -> Result<h2::client::ResponseFuture, Error> {
) -> Result<h2::legacy::client::ResponseFuture, Error> {
let request =
H2Client::request_builder("localhost", method, path, param, Some(content_type))
.unwrap();
@ -186,6 +183,7 @@ impl BackupWriter {
mut reader: R,
file_name: &str,
) -> Result<BackupStats, Error> {
let start_time = Instant::now();
let mut raw_data = Vec::new();
// fixme: avoid loading into memory
reader.read_to_end(&mut raw_data)?;
@ -203,7 +201,12 @@ impl BackupWriter {
raw_data,
)
.await?;
Ok(BackupStats { size, csum })
Ok(BackupStats {
size,
csum,
duration: start_time.elapsed(),
chunk_count: 0,
})
}
pub async fn upload_blob_from_data(
@ -212,6 +215,7 @@ impl BackupWriter {
file_name: &str,
options: UploadOptions,
) -> Result<BackupStats, Error> {
let start_time = Instant::now();
let blob = match (options.encrypt, &self.crypt_config) {
(false, _) => DataBlob::encode(&data, None, options.compress)?,
(true, None) => bail!("requested encryption without a crypt config"),
@ -235,7 +239,12 @@ impl BackupWriter {
raw_data,
)
.await?;
Ok(BackupStats { size, csum })
Ok(BackupStats {
size,
csum,
duration: start_time.elapsed(),
chunk_count: 0,
})
}
pub async fn upload_blob_from_file<P: AsRef<std::path::Path>>(
@ -260,11 +269,105 @@ impl BackupWriter {
.await
}
/// Upload chunks and index
pub async fn upload_index_chunk_info(
&self,
archive_name: &BackupArchiveName,
stream: impl Stream<Item = Result<MergedChunkInfo, Error>>,
options: UploadOptions,
) -> Result<BackupStats, Error> {
let mut param = json!({ "archive-name": archive_name });
let prefix = if let Some(size) = options.fixed_size {
param["size"] = size.into();
"fixed"
} else {
"dynamic"
};
if options.encrypt && self.crypt_config.is_none() {
bail!("requested encryption without a crypt config");
}
let wid = self
.h2
.post(&format!("{prefix}_index"), Some(param))
.await?
.as_u64()
.unwrap();
let mut counters = UploadCounters::new();
let counters_readonly = counters.clone();
let is_fixed_chunk_size = prefix == "fixed";
let index_csum = Arc::new(Mutex::new(Some(Sha256::new())));
let index_csum_2 = index_csum.clone();
let stream = stream
.and_then(move |mut merged_chunk_info| {
match merged_chunk_info {
MergedChunkInfo::New(ref chunk_info) => {
let chunk_len = chunk_info.chunk_len;
let offset =
counters.add_new_chunk(chunk_len as usize, chunk_info.chunk.raw_size());
let end_offset = offset as u64 + chunk_len;
let mut guard = index_csum.lock().unwrap();
let csum = guard.as_mut().unwrap();
if !is_fixed_chunk_size {
csum.update(&end_offset.to_le_bytes());
}
csum.update(&chunk_info.digest);
}
MergedChunkInfo::Known(ref mut known_chunk_list) => {
for (chunk_len, digest) in known_chunk_list {
let offset = counters.add_known_chunk(*chunk_len as usize);
let end_offset = offset as u64 + *chunk_len;
let mut guard = index_csum.lock().unwrap();
let csum = guard.as_mut().unwrap();
if !is_fixed_chunk_size {
csum.update(&end_offset.to_le_bytes());
}
csum.update(digest);
// Replace size with offset, expected by further stream
*chunk_len = offset as u64;
}
}
}
future::ok(merged_chunk_info)
})
.merge_known_chunks();
let upload_stats = Self::upload_merged_chunk_stream(
self.h2.clone(),
wid,
archive_name,
prefix,
stream,
index_csum_2,
counters_readonly,
)
.await?;
let param = json!({
"wid": wid ,
"chunk-count": upload_stats.chunk_count,
"size": upload_stats.size,
"csum": hex::encode(upload_stats.csum),
});
let _value = self
.h2
.post(&format!("{prefix}_close"), Some(param))
.await?;
Ok(upload_stats.to_backup_stats())
}
pub async fn upload_stream(
&self,
archive_name: &str,
archive_name: &BackupArchiveName,
stream: impl Stream<Item = Result<bytes::BytesMut, Error>>,
options: UploadOptions,
injections: Option<std::sync::mpsc::Receiver<InjectChunks>>,
) -> Result<BackupStats, Error> {
let known_chunks = Arc::new(Mutex::new(HashSet::new()));
@ -287,13 +390,13 @@ impl BackupWriter {
if !manifest
.files()
.iter()
.any(|file| file.filename == archive_name)
.any(|file| file.filename == archive_name.as_ref())
{
log::info!("Previous manifest does not contain an archive called '{archive_name}', skipping download..");
info!("Previous manifest does not contain an archive called '{archive_name}', skipping download..");
} else {
// try, but ignore errors
match ArchiveType::from_path(archive_name) {
Ok(ArchiveType::FixedIndex) => {
match archive_name.archive_type() {
ArchiveType::FixedIndex => {
if let Err(err) = self
.download_previous_fixed_index(
archive_name,
@ -302,10 +405,10 @@ impl BackupWriter {
)
.await
{
log::warn!("Error downloading .fidx from previous manifest: {}", err);
warn!("Error downloading .fidx from previous manifest: {}", err);
}
}
Ok(ArchiveType::DynamicIndex) => {
ArchiveType::DynamicIndex => {
if let Err(err) = self
.download_previous_dynamic_index(
archive_name,
@ -314,7 +417,7 @@ impl BackupWriter {
)
.await
{
log::warn!("Error downloading .didx from previous manifest: {}", err);
warn!("Error downloading .didx from previous manifest: {}", err);
}
}
_ => { /* do nothing */ }
@ -341,58 +444,59 @@ impl BackupWriter {
None
},
options.compress,
injections,
archive_name,
)
.await?;
let size_dirty = upload_stats.size - upload_stats.size_reused;
let size: HumanByte = upload_stats.size.into();
let archive = if log::log_enabled!(log::Level::Debug) {
archive_name
let archive = if enabled!(Level::DEBUG) {
archive_name.to_string()
} else {
pbs_tools::format::strip_server_file_extension(archive_name)
archive_name.without_type_extension()
};
if archive_name != CATALOG_NAME {
if upload_stats.chunk_injected > 0 {
info!(
"{archive}: reused {} from previous snapshot for unchanged files ({} chunks)",
HumanByte::from(upload_stats.size_injected),
upload_stats.chunk_injected,
);
}
if *archive_name != *CATALOG_NAME {
let speed: HumanByte =
((size_dirty * 1_000_000) / (upload_stats.duration.as_micros() as usize)).into();
let size_dirty: HumanByte = size_dirty.into();
let size_compressed: HumanByte = upload_stats.size_compressed.into();
log::info!(
"{}: had to backup {} of {} (compressed {}) in {:.2}s",
archive,
size_dirty,
size,
size_compressed,
info!(
"{archive}: had to backup {size_dirty} of {size} (compressed {size_compressed}) in {:.2} s (average {speed}/s)",
upload_stats.duration.as_secs_f64()
);
log::info!("{}: average backup speed: {}/s", archive, speed);
} else {
log::info!("Uploaded backup catalog ({})", size);
info!("Uploaded backup catalog ({})", size);
}
if upload_stats.size_reused > 0 && upload_stats.size > 1024 * 1024 {
let reused_percent = upload_stats.size_reused as f64 * 100. / upload_stats.size as f64;
let reused: HumanByte = upload_stats.size_reused.into();
log::info!(
info!(
"{}: backup was done incrementally, reused {} ({:.1}%)",
archive,
reused,
reused_percent
archive, reused, reused_percent
);
}
if log::log_enabled!(log::Level::Debug) && upload_stats.chunk_count > 0 {
log::debug!(
if enabled!(Level::DEBUG) && upload_stats.chunk_count > 0 {
debug!(
"{}: Reused {} from {} chunks.",
archive,
upload_stats.chunk_reused,
upload_stats.chunk_count
archive, upload_stats.chunk_reused, upload_stats.chunk_count
);
log::debug!(
debug!(
"{}: Average chunk size was {}.",
archive,
HumanByte::from(upload_stats.size / upload_stats.chunk_count)
);
log::debug!(
debug!(
"{}: Average time per request: {} microseconds.",
archive,
(upload_stats.duration.as_micros()) / (upload_stats.chunk_count as u128)
@@ -406,14 +510,11 @@ impl BackupWriter {
"csum": hex::encode(upload_stats.csum),
});
let _value = self.h2.post(&close_path, Some(param)).await?;
Ok(BackupStats {
size: upload_stats.size as u64,
csum: upload_stats.csum,
})
Ok(upload_stats.to_backup_stats())
}
fn response_queue() -> (
mpsc::Sender<h2::client::ResponseFuture>,
mpsc::Sender<h2::legacy::client::ResponseFuture>,
oneshot::Receiver<Result<(), Error>>,
) {
let (verify_queue_tx, verify_queue_rx) = mpsc::channel(100);
@@ -436,11 +537,11 @@ impl BackupWriter {
tokio::spawn(
ReceiverStream::new(verify_queue_rx)
.map(Ok::<_, Error>)
.try_for_each(move |response: h2::client::ResponseFuture| {
.try_for_each(move |response: h2::legacy::client::ResponseFuture| {
response
.map_err(Error::from)
.and_then(H2Client::h2api_response)
.map_ok(move |result| log::debug!("RESPONSE: {:?}", result))
.map_ok(move |result| debug!("RESPONSE: {:?}", result))
.map_err(|err| format_err!("pipelined request failed: {}", err))
})
.map(|result| {
@@ -455,6 +556,7 @@ impl BackupWriter {
h2: H2Client,
wid: u64,
path: String,
uploaded: Arc<AtomicUsize>,
) -> (UploadQueueSender, UploadResultReceiver) {
let (verify_queue_tx, verify_queue_rx) = mpsc::channel(64);
let (verify_result_tx, verify_result_rx) = oneshot::channel();
@@ -463,15 +565,21 @@ impl BackupWriter {
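// Verify pipelined append responses in a background task; confirmed upload sizes feed the shared "uploaded" counter used for progress reporting.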
tokio::spawn(
ReceiverStream::new(verify_queue_rx)
.map(Ok::<_, Error>)
.and_then(move |(merged_chunk_info, response): (MergedChunkInfo, Option<h2::client::ResponseFuture>)| {
.and_then(move |(merged_chunk_info, response): (MergedChunkInfo, Option<ChunkUploadResponse>)| {
match (response, merged_chunk_info) {
(Some(response), MergedChunkInfo::Known(list)) => {
Either::Left(
response
.future
.map_err(Error::from)
.and_then(H2Client::h2api_response)
.and_then(move |_result| {
future::ok(MergedChunkInfo::Known(list))
.and_then({
let uploaded = uploaded.clone();
move |_result| {
// account for uploaded bytes for progress output
uploaded.fetch_add(response.size, Ordering::SeqCst);
future::ok(MergedChunkInfo::Known(list))
}
})
)
}
@@ -491,7 +599,7 @@ impl BackupWriter {
digest_list.push(hex::encode(digest));
offset_list.push(offset);
}
log::debug!("append chunks list len ({})", digest_list.len());
debug!("append chunks list len ({})", digest_list.len());
let param = json!({ "wid": wid, "digest-list": digest_list, "offset-list": offset_list });
let request = H2Client::request_builder("localhost", "PUT", &path, None, Some("application/json")).unwrap();
let param_data = bytes::Bytes::from(param.to_string().into_bytes());
@@ -519,15 +627,11 @@ impl BackupWriter {
pub async fn download_previous_fixed_index(
&self,
archive_name: &str,
archive_name: &BackupArchiveName,
manifest: &BackupManifest,
known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
) -> Result<FixedIndexReader, Error> {
let mut tmpfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
.custom_flags(libc::O_TMPFILE)
.open("/tmp")?;
let mut tmpfile = crate::tools::create_tmp_file()?;
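// Download the previous index into an anonymous temporary file; it is only used to seed the known-chunk set and back the returned reader.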
let param = json!({ "archive-name": archive_name });
self.h2
@@ -547,7 +651,7 @@ impl BackupWriter {
known_chunks.insert(*index.index_digest(i).unwrap());
}
log::debug!(
debug!(
"{}: known chunks list length is {}",
archive_name,
index.index_count()
@@ -558,15 +662,11 @@ impl BackupWriter {
pub async fn download_previous_dynamic_index(
&self,
archive_name: &str,
archive_name: &BackupArchiveName,
manifest: &BackupManifest,
known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
) -> Result<DynamicIndexReader, Error> {
let mut tmpfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
.custom_flags(libc::O_TMPFILE)
.open("/tmp")?;
let mut tmpfile = crate::tools::create_tmp_file()?;
let param = json!({ "archive-name": archive_name });
self.h2
@@ -585,7 +685,7 @@ impl BackupWriter {
known_chunks.insert(*index.index_digest(i).unwrap());
}
log::debug!(
debug!(
"{}: known chunks list length is {}",
archive_name,
index.index_count()
@@ -609,7 +709,7 @@ impl BackupWriter {
pub async fn download_previous_manifest(&self) -> Result<BackupManifest, Error> {
let mut raw_data = Vec::with_capacity(64 * 1024);
let param = json!({ "archive-name": MANIFEST_BLOB_NAME });
let param = json!({ "archive-name": MANIFEST_BLOB_NAME.to_string() });
self.h2
.download("previous", Some(param), &mut raw_data)
.await?;
@@ -636,77 +736,135 @@ impl BackupWriter {
known_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
crypt_config: Option<Arc<CryptConfig>>,
compress: bool,
injections: Option<std::sync::mpsc::Receiver<InjectChunks>>,
archive: &BackupArchiveName,
) -> impl Future<Output = Result<UploadStats, Error>> {
let total_chunks = Arc::new(AtomicUsize::new(0));
let total_chunks2 = total_chunks.clone();
let known_chunk_count = Arc::new(AtomicUsize::new(0));
let known_chunk_count2 = known_chunk_count.clone();
let mut counters = UploadCounters::new();
let counters_readonly = counters.clone();
let stream_len = Arc::new(AtomicUsize::new(0));
let stream_len2 = stream_len.clone();
let compressed_stream_len = Arc::new(AtomicU64::new(0));
let compressed_stream_len2 = compressed_stream_len.clone();
let reused_len = Arc::new(AtomicUsize::new(0));
let reused_len2 = reused_len.clone();
let append_chunk_path = format!("{}_index", prefix);
let upload_chunk_path = format!("{}_chunk", prefix);
let is_fixed_chunk_size = prefix == "fixed";
let (upload_queue, upload_result) =
Self::append_chunk_queue(h2.clone(), wid, append_chunk_path);
let start_time = std::time::Instant::now();
let index_csum = Arc::new(Mutex::new(Some(openssl::sha::Sha256::new())));
let index_csum_2 = index_csum.clone();
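// Chunks arrive either as injected infos (known chunks reused for unchanged files, accounted for but not re-uploaded) or as raw data that still has to be hashed, deduplicated and, if new, built into an (optionally encrypted) data chunk.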
stream
.and_then(move |data| {
let chunk_len = data.len();
total_chunks.fetch_add(1, Ordering::SeqCst);
let offset = stream_len.fetch_add(chunk_len, Ordering::SeqCst) as u64;
let mut chunk_builder = DataChunkBuilder::new(data.as_ref()).compress(compress);
if let Some(ref crypt_config) = crypt_config {
chunk_builder = chunk_builder.crypt_config(crypt_config);
let stream = stream
.inject_reused_chunks(injections, counters.clone())
.and_then(move |chunk_info| match chunk_info {
InjectedChunksInfo::Known(chunks) => {
// account for injected chunks
let mut known = Vec::new();
let mut guard = index_csum.lock().unwrap();
let csum = guard.as_mut().unwrap();
for chunk in chunks {
let offset = counters.add_injected_chunk(&chunk) as u64;
let digest = chunk.digest();
known.push((offset, digest));
let end_offset = offset + chunk.size();
csum.update(&end_offset.to_le_bytes());
csum.update(&digest);
}
future::ok(MergedChunkInfo::Known(known))
}
InjectedChunksInfo::Raw(data) => {
// account for not injected chunks (new and known)
let chunk_len = data.len();
let mut known_chunks = known_chunks.lock().unwrap();
let digest = chunk_builder.digest();
let mut chunk_builder = DataChunkBuilder::new(data.as_ref()).compress(compress);
let mut guard = index_csum.lock().unwrap();
let csum = guard.as_mut().unwrap();
if let Some(ref crypt_config) = crypt_config {
chunk_builder = chunk_builder.crypt_config(crypt_config);
}
let chunk_end = offset + chunk_len as u64;
let mut known_chunks = known_chunks.lock().unwrap();
let digest = *chunk_builder.digest();
let (offset, res) = if known_chunks.contains(&digest) {
let offset = counters.add_known_chunk(chunk_len) as u64;
(offset, MergedChunkInfo::Known(vec![(offset, digest)]))
} else {
match chunk_builder.build() {
Ok((chunk, digest)) => {
let offset =
counters.add_new_chunk(chunk_len, chunk.raw_size()) as u64;
known_chunks.insert(digest);
(
offset,
MergedChunkInfo::New(ChunkInfo {
chunk,
digest,
chunk_len: chunk_len as u64,
offset,
}),
)
}
Err(err) => return future::err(err),
}
};
if !is_fixed_chunk_size {
csum.update(&chunk_end.to_le_bytes());
}
csum.update(digest);
let mut guard = index_csum.lock().unwrap();
let csum = guard.as_mut().unwrap();
let chunk_is_known = known_chunks.contains(digest);
if chunk_is_known {
known_chunk_count.fetch_add(1, Ordering::SeqCst);
reused_len.fetch_add(chunk_len, Ordering::SeqCst);
future::ok(MergedChunkInfo::Known(vec![(offset, *digest)]))
} else {
let compressed_stream_len2 = compressed_stream_len.clone();
known_chunks.insert(*digest);
future::ready(chunk_builder.build().map(move |(chunk, digest)| {
compressed_stream_len2.fetch_add(chunk.raw_size(), Ordering::SeqCst);
MergedChunkInfo::New(ChunkInfo {
chunk,
digest,
chunk_len: chunk_len as u64,
offset,
})
}))
let chunk_end = offset + chunk_len as u64;
if !is_fixed_chunk_size {
csum.update(&chunk_end.to_le_bytes());
}
csum.update(&digest);
future::ok(res)
}
})
.merge_known_chunks()
.merge_known_chunks();
Self::upload_merged_chunk_stream(
h2,
wid,
archive,
prefix,
stream,
index_csum_2,
counters_readonly,
)
}
fn upload_merged_chunk_stream(
h2: H2Client,
wid: u64,
archive: &BackupArchiveName,
prefix: &str,
stream: impl Stream<Item = Result<MergedChunkInfo, Error>>,
index_csum: Arc<Mutex<Option<Sha256>>>,
counters: UploadCounters,
) -> impl Future<Output = Result<UploadStats, Error>> {
let append_chunk_path = format!("{prefix}_index");
let upload_chunk_path = format!("{prefix}_chunk");
let start_time = std::time::Instant::now();
let uploaded_len = Arc::new(AtomicUsize::new(0));
let (upload_queue, upload_result) =
Self::append_chunk_queue(h2.clone(), wid, append_chunk_path, uploaded_len.clone());
let progress_handle = if archive.ends_with(".img.fidx")
|| archive.ends_with(".pxar.didx")
|| archive.ends_with(".ppxar.didx")
{
let counters = counters.clone();
Some(tokio::spawn(async move {
loop {
tokio::time::sleep(tokio::time::Duration::from_secs(60)).await;
let size = HumanByte::from(counters.total_stream_len());
let size_uploaded = HumanByte::from(uploaded_len.load(Ordering::SeqCst));
let elapsed = TimeSpan::from(start_time.elapsed());
info!("processed {size} in {elapsed}, uploaded {size_uploaded}");
}
}))
} else {
None
};
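// Drive the merged chunk stream: new chunks are uploaded over h2, known chunks are only queued for the index append call.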
stream
.try_for_each(move |merged_chunk_info| {
let upload_queue = upload_queue.clone();
@@ -715,7 +873,7 @@ impl BackupWriter {
let digest = chunk_info.digest;
let digest_str = hex::encode(digest);
log::trace!(
trace!(
"upload new chunk {} ({} bytes, offset {})",
digest_str,
chunk_info.chunk_len,
@@ -746,7 +904,13 @@ impl BackupWriter {
Either::Left(h2.send_request(request, upload_data).and_then(
move |response| async move {
upload_queue
.send((new_info, Some(response)))
.send((
new_info,
Some(ChunkUploadResponse {
future: response,
size: chunk_info.chunk_len as usize,
}),
))
.await
.map_err(|err| {
format_err!("failed to send to upload queue: {}", err)
@@ -764,25 +928,14 @@ impl BackupWriter {
})
.then(move |result| async move { upload_result.await?.and(result) }.boxed())
.and_then(move |_| {
let duration = start_time.elapsed();
let chunk_count = total_chunks2.load(Ordering::SeqCst);
let chunk_reused = known_chunk_count2.load(Ordering::SeqCst);
let size = stream_len2.load(Ordering::SeqCst);
let size_reused = reused_len2.load(Ordering::SeqCst);
let size_compressed = compressed_stream_len2.load(Ordering::SeqCst) as usize;
let mut guard = index_csum_2.lock().unwrap();
let mut guard = index_csum.lock().unwrap();
let csum = guard.take().unwrap().finish();
futures::future::ok(UploadStats {
chunk_count,
chunk_reused,
size,
size_reused,
size_compressed,
duration,
csum,
})
if let Some(handle) = progress_handle {
handle.abort();
}
futures::future::ok(counters.to_upload_stats(csum, start_time.elapsed()))
})
}
@@ -811,7 +964,7 @@ impl BackupWriter {
break;
}
log::debug!("send test data ({} bytes)", data.len());
debug!("send test data ({} bytes)", data.len());
let request =
H2Client::request_builder("localhost", "POST", "speedtest", None, None).unwrap();
let request_future = self
@@ -826,13 +979,13 @@ impl BackupWriter {
let _ = upload_result.await?;
log::info!(
info!(
"Uploaded {} chunks in {} seconds.",
repeat,
start_time.elapsed().as_secs()
);
let speed = ((item_len * (repeat as usize)) as f64) / start_time.elapsed().as_secs_f64();
log::info!(
info!(
"Time per request: {} microseconds.",
(start_time.elapsed().as_micros()) / (repeat as u128)
);


@@ -14,6 +14,7 @@ use nix::fcntl::OFlag;
use nix::sys::stat::Mode;
use pathpatterns::{MatchEntry, MatchList, MatchPattern, MatchType, PatternFlag};
use pbs_api_types::PathPattern;
use proxmox_router::cli::{self, CliCommand, CliCommandMap, CliHelper, CommandLineInterface};
use proxmox_schema::api;
use proxmox_sys::fs::{create_path, CreateOptions};
@@ -21,7 +22,8 @@ use pxar::accessor::ReadAt;
use pxar::{EntryKind, Metadata};
use pbs_datastore::catalog::{self, DirEntryAttribute};
use proxmox_async::runtime::block_in_place;
use proxmox_async::runtime::{block_in_place, block_on};
use proxmox_log::error;
use crate::pxar::Flags;
@@ -105,7 +107,7 @@ fn complete_path(complete_me: &str, _map: &HashMap<String, String>) -> Vec<Strin
match shell.complete_path(complete_me) {
Ok(list) => list,
Err(err) => {
log::error!("error during completion: {}", err);
error!("error during completion: {}", err);
Vec::new()
}
}
@@ -240,8 +242,7 @@ async fn list_selected_command(patterns: bool) -> Result<(), Error> {
input: {
properties: {
pattern: {
type: String,
description: "Match pattern for matching files in the catalog."
type: PathPattern,
},
select: {
type: bool,
@@ -282,9 +283,8 @@ async fn restore_selected_command(target: String) -> Result<(), Error> {
description: "target path for restore on local filesystem."
},
pattern: {
type: String,
type: PathPattern,
optional: true,
description: "match pattern to limit files for restore."
}
}
}
@@ -304,7 +304,6 @@ async fn restore_command(target: String, pattern: Option<String>) -> Result<(),
/// The `Path` type's component iterator does not tell us anything about trailing slashes or
/// trailing `Component::CurDir` entries. Since we only support regular paths we'll roll our own
/// here:
pub struct Shell {
/// Readline instance handling input and callbacks
rl: rustyline::Editor<CliHelper>,
@@ -312,8 +311,9 @@ pub struct Shell {
/// Interactive prompt.
prompt: String,
/// Catalog reader instance to navigate
catalog: CatalogReader,
/// Optional catalog reader instance to navigate, if not present the Accessor is used for
/// navigation
catalog: Option<CatalogReader>,
/// List of selected paths for restore
selected: HashMap<OsString, MatchEntry>,
@@ -347,7 +347,7 @@ impl PathStackEntry {
impl Shell {
/// Create a new shell for the given catalog and pxar archive.
pub async fn new(
mut catalog: CatalogReader,
mut catalog: Option<CatalogReader>,
archive_name: &str,
archive: Accessor,
) -> Result<Self, Error> {
@@ -355,11 +355,31 @@ impl Shell {
let mut rl = rustyline::Editor::<CliHelper>::new();
rl.set_helper(Some(cli_helper));
let catalog_root = catalog.root()?;
let archive_root = catalog
.lookup(&catalog_root, archive_name.as_bytes())?
.ok_or_else(|| format_err!("archive not found in catalog"))?;
let position = vec![PathStackEntry::new(archive_root)];
let mut position = Vec::new();
if let Some(catalog) = catalog.as_mut() {
let catalog_root = catalog.root()?;
let archive_root = catalog
.lookup(&catalog_root, archive_name.as_bytes())?
.ok_or_else(|| format_err!("archive not found in catalog"))?;
position.push(PathStackEntry::new(archive_root));
} else {
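// No catalog available: synthesize the root directory entry directly from the pxar archive so navigation can start at the archive root.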
let root = archive.open_root().await?;
let root_entry = root.lookup_self().await?;
if let EntryKind::Directory = root_entry.kind() {
let entry_attr = DirEntryAttribute::Directory {
start: root_entry.entry_range_info().entry_range.start,
};
position.push(PathStackEntry {
catalog: catalog::DirEntry {
name: archive_name.into(),
attr: entry_attr,
},
pxar: Some(root_entry),
});
} else {
bail!("unexpected root entry type");
}
}
let mut this = Self {
rl,
@@ -398,7 +418,7 @@ impl Shell {
let args = match cli::shellword_split(&line) {
Ok(args) => args,
Err(err) => {
log::error!("Error: {}", err);
error!("Error: {}", err);
continue;
}
};
@@ -450,7 +470,7 @@ impl Shell {
async fn resolve_symlink(
stack: &mut Vec<PathStackEntry>,
catalog: &mut CatalogReader,
catalog: &mut Option<CatalogReader>,
accessor: &Accessor,
follow_symlinks: &mut Option<usize>,
) -> Result<(), Error> {
@@ -468,7 +488,7 @@ impl Shell {
};
let new_stack =
Self::lookup(stack, &mut *catalog, accessor, Some(path), follow_symlinks).await?;
Self::lookup(stack, catalog, accessor, Some(path), follow_symlinks).await?;
*stack = new_stack;
@@ -484,7 +504,7 @@ impl Shell {
/// out.
async fn step(
stack: &mut Vec<PathStackEntry>,
catalog: &mut CatalogReader,
catalog: &mut Option<CatalogReader>,
accessor: &Accessor,
component: std::path::Component<'_>,
follow_symlinks: &mut Option<usize>,
@@ -503,9 +523,27 @@ impl Shell {
if stack.last().unwrap().catalog.is_symlink() {
Self::resolve_symlink(stack, catalog, accessor, follow_symlinks).await?;
}
match catalog.lookup(&stack.last().unwrap().catalog, entry.as_bytes())? {
Some(dir) => stack.push(PathStackEntry::new(dir)),
None => bail!("no such file or directory: {:?}", entry),
if let Some(catalog) = catalog {
match catalog.lookup(&stack.last().unwrap().catalog, entry.as_bytes())? {
Some(dir) => stack.push(PathStackEntry::new(dir)),
None => bail!("no such file or directory: {entry:?}"),
}
} else {
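// Catalog-less mode: resolve the path component against the pxar archive metadata instead.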
let pxar_entry = parent_pxar_entry(stack)?;
let parent_dir = pxar_entry.enter_directory().await?;
match parent_dir.lookup(entry).await? {
Some(entry) => {
let entry_attr = DirEntryAttribute::try_from(&entry)?;
stack.push(PathStackEntry {
catalog: catalog::DirEntry {
name: entry.entry().file_name().as_bytes().into(),
attr: entry_attr,
},
pxar: Some(entry),
})
}
None => bail!("no such file or directory: {entry:?}"),
}
}
}
}
@@ -515,7 +553,7 @@ impl Shell {
fn step_nofollow(
stack: &mut Vec<PathStackEntry>,
catalog: &mut CatalogReader,
catalog: &mut Option<CatalogReader>,
component: std::path::Component<'_>,
) -> Result<(), Error> {
use std::path::Component;
@@ -531,11 +569,27 @@ impl Shell {
Component::Normal(entry) => {
if stack.last().unwrap().catalog.is_symlink() {
bail!("target is a symlink");
} else {
} else if let Some(catalog) = catalog.as_mut() {
match catalog.lookup(&stack.last().unwrap().catalog, entry.as_bytes())? {
Some(dir) => stack.push(PathStackEntry::new(dir)),
None => bail!("no such file or directory: {:?}", entry),
}
} else {
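// Same pxar-based lookup as in `step`, but synchronous, hence the block_on calls.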
let pxar_entry = parent_pxar_entry(stack)?;
let parent_dir = block_on(pxar_entry.enter_directory())?;
match block_on(parent_dir.lookup(entry))? {
Some(entry) => {
let entry_attr = DirEntryAttribute::try_from(&entry)?;
stack.push(PathStackEntry {
catalog: catalog::DirEntry {
name: entry.entry().file_name().as_bytes().into(),
attr: entry_attr,
},
pxar: Some(entry),
})
}
None => bail!("no such file or directory: {entry:?}"),
}
}
}
}
@@ -545,7 +599,7 @@ impl Shell {
/// The pxar accessor is required to resolve symbolic links
async fn walk_catalog(
stack: &mut Vec<PathStackEntry>,
catalog: &mut CatalogReader,
catalog: &mut Option<CatalogReader>,
accessor: &Accessor,
path: &Path,
follow_symlinks: &mut Option<usize>,
@@ -559,7 +613,7 @@ impl Shell {
/// Non-async version cannot follow symlinks.
fn walk_catalog_nofollow(
stack: &mut Vec<PathStackEntry>,
catalog: &mut CatalogReader,
catalog: &mut Option<CatalogReader>,
path: &Path,
) -> Result<(), Error> {
for c in path.components() {
@@ -612,12 +666,34 @@ impl Shell {
tmp_stack = self.position.clone();
}
Self::walk_catalog_nofollow(&mut tmp_stack, &mut self.catalog, &path)?;
(&tmp_stack.last().unwrap().catalog, base, part)
(&tmp_stack.last().unwrap(), base, part)
}
None => (&self.position.last().unwrap().catalog, "", input),
None => (&self.position.last().unwrap(), "", input),
};
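// For completion, list the parent directory either from the catalog or, without one, from the pxar archive metadata.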
let entries = self.catalog.read_dir(parent)?;
let entries = if let Some(catalog) = self.catalog.as_mut() {
catalog.read_dir(&parent.catalog)?
} else {
let dir = if let Some(entry) = parent.pxar.as_ref() {
block_on(entry.enter_directory())?
} else {
bail!("missing pxar entry for parent");
};
let mut out = Vec::new();
let entries = block_on(crate::pxar::tools::pxar_metadata_read_dir(dir))?;
for entry in entries {
let mut name = base.to_string();
let file_name = entry.file_name().as_bytes();
if file_name.starts_with(part.as_bytes()) {
name.push_str(std::str::from_utf8(file_name)?);
if entry.is_dir() {
name.push('/');
}
out.push(name);
}
}
return Ok(out);
};
let mut out = Vec::new();
for entry in entries {
@@ -637,7 +713,7 @@ impl Shell {
// Break async recursion here: lookup -> walk_catalog -> step -> lookup
fn lookup<'future, 's, 'c, 'a, 'p, 'y>(
stack: &'s [PathStackEntry],
catalog: &'c mut CatalogReader,
catalog: &'c mut Option<CatalogReader>,
accessor: &'a Accessor,
path: Option<&'p Path>,
follow_symlinks: &'y mut Option<usize>,
@@ -678,7 +754,23 @@ impl Shell {
let last = stack.last().unwrap();
if last.catalog.is_directory() {
let items = self.catalog.read_dir(&stack.last().unwrap().catalog)?;
let items = if let Some(catalog) = self.catalog.as_mut() {
catalog.read_dir(&stack.last().unwrap().catalog)?
} else {
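// Without a catalog, read the directory listing straight from the pxar archive and print it directly.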
let dir = if let Some(entry) = last.pxar.as_ref() {
entry.enter_directory().await?
} else {
bail!("missing pxar entry for parent");
};
let mut out = std::io::stdout();
let items = crate::pxar::tools::pxar_metadata_read_dir(dir).await?;
for item in items {
out.write_all(item.file_name().as_bytes())?;
out.write_all(b"\n")?;
}
return Ok(());
};
let mut out = std::io::stdout();
// FIXME: columnize
for item in items {
@ -705,7 +797,7 @@ impl Shell {
let file = Self::walk_pxar_archive(&self.accessor, &mut stack).await?;
std::io::stdout()
.write_all(crate::pxar::format_multi_line_entry(file.entry()).as_bytes())?;
.write_all(crate::pxar::tools::format_multi_line_entry(file.entry()).as_bytes())?;
Ok(())
}
@ -720,6 +812,14 @@ impl Shell {
&mut None,
)
.await?;
if new_position.is_empty() {
// Avoid moving below archive root into catalog root, thereby treating
// the archive root as its own parent directory.
self.position.truncate(1);
return Ok(());
}
if !new_position.last().unwrap().catalog.is_directory() {
bail!("not a directory");
}
@@ -820,17 +920,36 @@ impl Shell {
async fn list_matching_files(&mut self) -> Result<(), Error> {
let matches = self.build_match_list();
self.catalog.find(
&self.position[0].catalog,
&mut Vec::new(),
&matches,
&mut |path: &[u8]| -> Result<(), Error> {
let mut out = std::io::stdout();
out.write_all(path)?;
out.write_all(b"\n")?;
Ok(())
},
)?;
if let Some(catalog) = self.catalog.as_mut() {
catalog.find(
&self.position[0].catalog,
&mut Vec::new(),
&matches,
&mut |path: &[u8]| -> Result<(), Error> {
let mut out = std::io::stdout();
out.write_all(path)?;
out.write_all(b"\n")?;
Ok(())
},
)?;
} else {
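// Fall back to matching against the pxar archive metadata when no catalog is present.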
let parent_dir = if let Some(pxar_entry) = self.position[0].pxar.as_ref() {
pxar_entry.enter_directory().await?
} else {
bail!("missing pxar entry for archive root");
};
crate::pxar::tools::pxar_metadata_catalog_find(
parent_dir,
&matches,
&|path: &[u8]| -> Result<(), Error> {
let mut out = std::io::stdout();
out.write_all(path)?;
out.write_all(b"\n")?;
Ok(())
},
)
.await?;
}
Ok(())
}
@@ -841,18 +960,37 @@ impl Shell {
MatchEntry::parse_pattern(pattern, PatternFlag::PATH_NAME, MatchType::Include)?;
let mut found_some = false;
self.catalog.find(
&self.position[0].catalog,
&mut Vec::new(),
&[&pattern_entry],
&mut |path: &[u8]| -> Result<(), Error> {
found_some = true;
let mut out = std::io::stdout();
out.write_all(path)?;
out.write_all(b"\n")?;
Ok(())
},
)?;
if let Some(catalog) = self.catalog.as_mut() {
catalog.find(
&self.position[0].catalog,
&mut Vec::new(),
&[&pattern_entry],
&mut |path: &[u8]| -> Result<(), Error> {
found_some = true;
let mut out = std::io::stdout();
out.write_all(path)?;
out.write_all(b"\n")?;
Ok(())
},
)?;
} else {
let parent_dir = if let Some(pxar_entry) = self.position[0].pxar.as_ref() {
pxar_entry.enter_directory().await?
} else {
bail!("missing pxar entry for archive root");
};
crate::pxar::tools::pxar_metadata_catalog_find(
parent_dir,
&[&pattern_entry],
&|path: &[u8]| -> Result<(), Error> {
let mut out = std::io::stdout();
out.write_all(path)?;
out.write_all(b"\n")?;
Ok(())
},
)
.await?;
}
if found_some && select {
self.selected.insert(pattern_os, pattern_entry);
@@ -945,6 +1083,18 @@ impl Shell {
}
}
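/// Return the pxar entry backing the directory on top of the stack, bailing out if the stack is empty or the entry has no pxar backing.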
fn parent_pxar_entry(dir_stack: &[PathStackEntry]) -> Result<&FileEntry, Error> {
if let Some(parent) = dir_stack.last().as_ref() {
if let Some(entry) = parent.pxar.as_ref() {
Ok(entry)
} else {
bail!("missing pxar entry for parent");
}
} else {
bail!("missing parent entry on stack");
}
}
struct ExtractorState<'a> {
path: Vec<u8>,
path_len: usize,
@@ -960,22 +1110,38 @@ struct ExtractorState<'a> {
extractor: crate::pxar::extract::Extractor,
catalog: &'a mut CatalogReader,
catalog: &'a mut Option<CatalogReader>,
match_list: &'a [MatchEntry],
accessor: &'a Accessor,
}
impl<'a> ExtractorState<'a> {
pub fn new(
catalog: &'a mut CatalogReader,
catalog: &'a mut Option<CatalogReader>,
dir_stack: Vec<PathStackEntry>,
extractor: crate::pxar::extract::Extractor,
match_list: &'a [MatchEntry],
accessor: &'a Accessor,
) -> Result<Self, Error> {
let read_dir = catalog
.read_dir(&dir_stack.last().unwrap().catalog)?
.into_iter();
let read_dir = if let Some(catalog) = catalog.as_mut() {
catalog
.read_dir(&dir_stack.last().unwrap().catalog)?
.into_iter()
} else {
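// Build catalog-style DirEntry records from the pxar directory listing so the extractor logic can keep operating on catalog entries.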
let pxar_entry = parent_pxar_entry(&dir_stack)?;
let dir = block_on(pxar_entry.enter_directory())?;
let entries = block_on(crate::pxar::tools::pxar_metadata_read_dir(dir))?;
let mut catalog_entries = Vec::with_capacity(entries.len());
for entry in entries {
let entry_attr = DirEntryAttribute::try_from(&entry).unwrap();
catalog_entries.push(catalog::DirEntry {
name: entry.entry().file_name().as_bytes().into(),
attr: entry_attr,
});
}
catalog_entries.into_iter()
};
Ok(Self {
path: Vec::new(),
path_len: 0,
@@ -1053,11 +1219,29 @@ impl<'a> ExtractorState<'a> {
entry: catalog::DirEntry,
match_result: Option<MatchType>,
) -> Result<(), Error> {
let entry_iter = if let Some(catalog) = self.catalog.as_mut() {
catalog.read_dir(&entry)?.into_iter()
} else {
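// Walk the archive to the new directory and convert its pxar entries into catalog DirEntry records for the read-dir stack.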
self.dir_stack.push(PathStackEntry::new(entry.clone()));
let dir = Shell::walk_pxar_archive(self.accessor, &mut self.dir_stack).await?;
self.dir_stack.pop();
let dir = dir.enter_directory().await?;
let entries = block_on(crate::pxar::tools::pxar_metadata_read_dir(dir))?;
entries
.into_iter()
.map(|entry| {
let entry_attr = DirEntryAttribute::try_from(&entry).unwrap();
catalog::DirEntry {
name: entry.entry().file_name().as_bytes().into(),
attr: entry_attr,
}
})
.collect::<Vec<catalog::DirEntry>>()
.into_iter()
};
// enter a new directory:
self.read_dir_stack.push(mem::replace(
&mut self.read_dir,
self.catalog.read_dir(&entry)?.into_iter(),
));
self.read_dir_stack
.push(mem::replace(&mut self.read_dir, entry_iter));
self.matches_stack.push(self.matches);
self.dir_stack.push(PathStackEntry::new(entry));
self.path_len_stack.push(self.path_len);
