Commit Graph

8437 Commits

Christian Ebner
fbdbda907b ui: expose the s3 client view in the navigation tree
Add an `S3 Clients` item to the navigation tree to allow accessing the
S3 client configuration view and edit windows.

Adds the required source files to the Makefile.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
c8cd77865b ui: add s3 client view for configuration
Adds the view to configure S3 clients in the Configuration section of
the UI.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
f0a9b12078 ui: add s3 client edit window for configuration create/edit
Adds an edit window for creating or editing S3 client configurations.
Loosely based on the same edit window for the remote configuration.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
cd5b188d71 ui: add datastore type selector and reorganize component layout
In preparation for adding the S3 backed datastore variant to the edit
window. Introduce a datastore type selector in order to distinguish
between creation of regular and removable datastores, instead of
using the checkbox as is currently the case.

This makes it easier to add further datastore type variants while
keeping the datastore edit window compact.

Since selecting the type is one of the first steps during datastore
creation, position the component right below the datastore name field
and re-organize the components related to the removable datastore
creation, while keeping additional required components for the S3
backed datastore creation in mind.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
6a880e8a44 datastore: implement garbage collection for s3 backend
Implements the garbage collection for datastores backed by an s3
object store.
Take advantage of the local datastore by placing marker files in the
chunk store during phase 1 of the garbage collection, updating their
atime if already present.
This allows us to avoid making expensive API calls to update object
metadata, which would only be possible via a copy object operation.

Phase 2 is implemented by fetching a list of all the chunks via the
ListObjectsV2 API call, filtered by the chunk folder prefix.
This operation has to be performed in batches of 1000 objects, given
the API's response limit.
For each object key, look up the marker file and decide, based on the
marker's existence and its atime, whether the chunk object needs to
be removed. Deletion happens via the DeleteObjects operation, which
allows deleting multiple chunks with a single request.

This allows efficiently looking up chunks which are no longer in use,
while remaining performant and cost effective.
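
The marker-based liveness decision and the 1000-object request
batching described above can be sketched roughly as follows (a
simplified model, not the actual implementation: a `u64` stands in for
the marker atime, plain strings for object keys):

```rust
use std::collections::HashMap;

/// Phase 2 decision sketch: `markers` maps a chunk object key to the
/// atime of its local marker file. Chunks without a marker, or whose
/// marker was not touched since `cutoff`, are scheduled for deletion.
fn chunks_to_delete(
    listed: &[String],
    markers: &HashMap<String, u64>,
    cutoff: u64,
) -> Vec<String> {
    listed
        .iter()
        .filter(|key| match markers.get(*key) {
            Some(&atime) => atime < cutoff, // marker present but stale
            None => true,                   // no marker: chunk not in use
        })
        .cloned()
        .collect()
}

/// DeleteObjects accepts at most 1000 keys per request, so split the
/// deletion candidates into batches of that size.
fn delete_batches(keys: Vec<String>) -> Vec<Vec<String>> {
    keys.chunks(1000).map(|batch| batch.to_vec()).collect()
}

fn main() {
    let markers = HashMap::from([("a".to_string(), 10), ("b".to_string(), 100)]);
    let listed = vec!["a".to_string(), "b".to_string(), "c".to_string()];
    // "a" is stale and "c" has no marker; "b" was touched during phase 1
    println!("{:?}", chunks_to_delete(&listed, &markers, 50));
}
```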

Baseline runtime performance tests:
-----------------------------------

3 garbage collection runs were performed with hot filesystem caches
(primed by an additional GC run before the test runs). The PBS
instance was virtualized, with the same virtualized disk using ZFS
for all the local cache stores:

All datastores contained the same encrypted data, with the following
content statistics:
Original data usage: 269.685 GiB
On-Disk usage: 9.018 GiB (3.34%)
On-Disk chunks: 6477
Deduplication factor: 29.90
Average chunk size: 1.426 MiB

The results demonstrate the overhead caused by the additional
ListObjectsV2 API calls and their processing, which depends on the
object store backend.

Average garbage collection runtime:
Local datastore:             (2.04 ± 0.01) s
Local RADOS gateway (Squid): (3.05 ± 0.01) s
AWS S3:                      (3.05 ± 0.01) s
Cloudflare R2:               (6.71 ± 0.58) s

After pruning of all datastore contents (therefore including
DeleteObjects requests):
Local datastore:              3.04 s
Local RADOS gateway (Squid): 14.08 s
AWS S3:                      13.06 s
Cloudflare R2:               78.21 s

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
57b47366f7 datastore: get and set owner for s3 store backend
Read or write the ownership information from/to the corresponding
object in the S3 object store. Keep that information available if
the bucket is reused as datastore.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
6ff078a5a0 datastore: prune groups/snapshots from s3 object store backend
When pruning a backup group or a backup snapshot for a datastore with
S3 object store backend, remove the associated objects based on their
prefix.

In order to exclude protected contents, add filtering based on the
presence of the protected marker.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
b9a2fa4994 datastore: create/delete protected marker file on s3 storage backend
Commit 8292d3d2 ("api2/admin/datastore: add get/set_protection")
introduced the protected flag for backup snapshots, considering
snapshots as protected based on the presence/absence of the
`.protected` marker file in the corresponding snapshot directory.

To allow independent recovery of a datastore backed by an S3 bucket,
also create/delete the marker file on the object store backend. For
actual checks, still rely on the marker as encountered in the local
cache store.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
5ea28683bb datastore: create namespace marker in s3 backend
The S3 object store only allows storing objects, referenced by their
key. Datastores, however, use directories for backup namespaces, so
these cannot be represented as a one-to-one mapping.

Instead, create an empty marker file for each namespace and operate
based on that.
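
A minimal sketch of that mapping; the `.ns-marker` name below is a
hypothetical placeholder, not the marker file name the implementation
actually uses:

```rust
/// Map a namespace directory path to the key of an empty marker
/// object, since S3 has no notion of directories. `.ns-marker` is a
/// placeholder name for illustration only.
fn namespace_marker_key(ns_path: &str) -> String {
    if ns_path.is_empty() {
        // root namespace
        ".ns-marker".to_string()
    } else {
        format!("{ns_path}/.ns-marker")
    }
}

fn main() {
    println!("{}", namespace_marker_key("ns/dev/ns/team-a"));
}
```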

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
64031f24af verify: implement chunk verification for stores with s3 backend
For datastores backed by an S3 compatible object store, rather than
reading the chunks to be verified from the local filesystem, fetch
them via the s3 client from the configured bucket.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
adf21cddd3 verify worker: add datastore backend to verify worker
In order to fetch chunks from an S3 compatible object store,
instantiate and store the s3 client in the verify worker by storing
the datastore's backend. This allows reusing the same instance for
the whole verification task.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
e3ca69adb0 datastore: local chunk reader: read chunks based on backend
Get and store the datastore's backend on local chunk reader
instantiation and fetch chunks based on the variant from either the
filesystem or the s3 object store.

By storing the backend variant, the s3 client is instantiated only
once and reused until the local chunk reader instance is dropped.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
4124b6a8be api: reader: fetch chunks based on datastore backend
Read the chunk based on the datastore's backend, reading from the
local filesystem or fetching from the S3 object store.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
daf5d46c7c sync: pull: conditionally upload content to s3 backend
If the datastore is backed by an S3 object store, not only insert the
pulled contents into the local cache store, but also upload them to
the S3 backend.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
a97b237828 api: datastore: conditionally upload client log to s3 backend
If the datastore is backed by an s3 compatible object store, upload
the client log content to the s3 backend before persisting it to the
local cache store.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
352a206578 api: backup: conditionally upload manifest to s3 object store backend
Reupload the manifest to the S3 object store backend on manifest
updates, if s3 is configured as backend.
This also triggers the initial manifest upload when finishing a
backup snapshot in the backup api call handler.
Also update the locally cached version for fast and efficient
listing of contents without the need to perform expensive (as in
monetary cost and IO latency) requests.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
9d66f486a4 api: backup: conditionally upload indices to s3 object store backend
If the datastore is backed by an S3 compatible object store, upload
the dynamic or fixed index files to the object store after closing
them. The local index files are kept in the local caching datastore
to allow for fast and efficient content lookups, avoiding expensive
(as in monetary cost and IO latency) requests.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
c9bd69a158 api: backup: conditionally upload blobs to s3 object store backend
Upload blobs to both the local datastore cache and the S3 object
store if s3 is configured as backend.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
b84aad3660 api: backup: conditionally upload chunks to s3 object store backend
Upload fixed and dynamic sized chunks to either the filesystem or
the S3 object store, depending on the configured backend.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
62b932a874 api: backup: store datastore backend in runtime environment
Get and store the datastore's backend during creation of the backup
runtime environment and upload the chunks to the local filesystem or
s3 object store based on the backend variant.

By storing the backend variant in the environment the s3 client is
instantiated only once and reused for all api calls in the same
backup http/2 connection.

Refactor the upgrade method by moving all logic into the async block,
such that the now possible error on backup environment creation gets
propagated to the thread spawn call side.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
098ab91bd9 datastore: allow to get the backend for a datastore
Implements an enum with variants Filesystem and S3 to distinguish
between available backends. Filesystem is used as default if no
backend is configured in the datastore's configuration. If the
datastore has an s3 backend configured, the backend method will
instantiate an s3 client and return it with the S3 variant.

This allows instantiating the client once, keeping and reusing the
same open connection to the api for the lifetime of a task or job,
e.g. in the backup writer/reader's runtime environment.
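
A rough sketch of such an enum and backend method, using stand-in
types for the client and the datastore configuration (not the actual
PBS types):

```rust
/// Stand-in for the real s3 client type.
struct S3Client {
    bucket: String,
}

/// Backend variants: local filesystem (the default) or an S3 object
/// store with an instantiated client.
enum DatastoreBackend {
    Filesystem,
    S3(S3Client),
}

/// Stand-in for the datastore configuration; `backend` holds the
/// configured bucket, if any.
struct DatastoreConfig {
    backend: Option<String>,
}

impl DatastoreConfig {
    /// Instantiate the backend once; callers keep the returned value
    /// for the lifetime of their task or job and reuse the client.
    fn backend(&self) -> DatastoreBackend {
        match &self.backend {
            Some(bucket) => DatastoreBackend::S3(S3Client { bucket: bucket.clone() }),
            None => DatastoreBackend::Filesystem, // default when nothing is configured
        }
    }
}

fn main() {
    let plain = DatastoreConfig { backend: None };
    assert!(matches!(plain.backend(), DatastoreBackend::Filesystem));
    let s3 = DatastoreConfig { backend: Some("my-bucket".to_string()) };
    assert!(matches!(s3.backend(), DatastoreBackend::S3(_)));
}
```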

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
41e1cbd2b8 api/cli: add endpoint and command to check s3 client connection
Adds a dedicated api endpoint and a proxmox-backup-manager command to
check if the configured S3 client can reach the bucket.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
d07ccde395 api: datastore: check s3 backend bucket access on datastore create
Check if the configured S3 object store backend can be reached and
the provided secrets have the permissions to access the bucket.

Perform the check before creating the chunk store, so it is not left
behind if the bucket cannot be reached.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
e8a1971647 api: config: implement endpoints to manipulate and list s3 configs
Allows creating, listing, modifying and deleting configurations for
s3 clients via the api.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
690c6441da config: introduce s3 object store client configuration
Adds the client configuration for s3 object store as dedicated
configuration files.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
aeb4ff4992 datastore: add helpers for path/digest to s3 object key conversion
Adds helper methods to generate the s3 object keys, given a relative
path and filename for datastore contents, or a digest in the case of
chunk files.

Regular datastore contents are stored by grouping them with a content
prefix in the object key. In order to keep the object key length
small, given the max limit of 1024 bytes [0], `.cnt` is used as
content prefix. Chunks on the other hand are prefixed by `.chunks`,
same as on regular datastores.

The prefix allows for selective listing of either contents or chunks
by providing the prefix to the respective api calls.

[0] https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-keys.html
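
A sketch of what such helpers could look like; the `.cnt`/`.chunks`
prefixes are taken from the commit message, while the 4-hex-digit
grouping of chunk digests is an assumption mirroring the local chunk
store layout:

```rust
/// Regular datastore contents are grouped under the short `.cnt`
/// prefix to keep keys well below the 1024-byte S3 limit.
fn content_object_key(rel_path: &str, filename: &str) -> String {
    format!(".cnt/{rel_path}/{filename}")
}

/// Chunks are prefixed by `.chunks`, grouped by the first two digest
/// bytes (assumed here, matching regular datastores).
fn chunk_object_key(digest: &[u8; 32]) -> String {
    let hex: String = digest.iter().map(|b| format!("{b:02x}")).collect();
    format!(".chunks/{}/{}", &hex[0..4], hex)
}

fn main() {
    let key = chunk_object_key(&[0u8; 32]);
    // the prefix allows selective listing of chunks vs. contents
    assert!(key.starts_with(".chunks/0000/"));
    assert!(key.len() <= 1024);
    println!("{}", content_object_key("vm/100/2025-07-22T21:43:43Z", "index.json.blob"));
}
```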

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Dominik Csapak
096505eaf7 tape: skip setting encryption if we can't and don't want to
Some settings on changers prevent changing the encryption parameters
via the application, e.g. some libraries have an 'encryption disabled'
or 'encryption is library managed' option. While the former situation
can be fixed by setting the library to 'application managed', the
latter is sometimes necessary for FIPS compliance (to ensure the tape
data is encrypted).

When libraries are configured this way, the code currently fails with
'drive does not support AES-GCM encryption'. Instead of failing, check
on first call to set_encryption if we could set it, and save that
result.

Fail only when encryption is to be enabled but not allowed, and
ignore the error when the backup should be done unencrypted.

`assert_encryption_mode` must also check if it's possible, and skip any
error if it's not possible and we wanted no encryption.

With these changes, it should be possible to use such configured libraries
when there is no encryption configured on the PBS side. (We currently
don't have a library with such capabilities to test.)

Note that in contrast to normal operation, the tape label will also be
encrypted then and will not be readable in case the encryption key is
lost or changed.

Additionally, return an error for 'drive_set_encryption' in case the
drive reports that it does not support hardware encryption, because this
is now already caught one level above in 'set_encryption'.

Also, slightly change the error message to make it clear that the drive
does not support *setting* encryption, not that it does not support
it at all.
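
The resulting decision rule can be sketched as follows (assumed
names, not the actual code):

```rust
/// Sketch of the described behavior: setting encryption parameters
/// may be impossible on this drive/library; only fail when
/// encryption was actually requested on the PBS side.
fn check_encryption(can_set_encryption: bool, encryption_wanted: bool) -> Result<(), String> {
    if can_set_encryption || !encryption_wanted {
        // unencrypted backups still work on libraries that manage
        // (or disable) encryption themselves
        Ok(())
    } else {
        Err("drive does not support setting encryption".to_string())
    }
}

fn main() {
    // library-managed encryption, PBS side unencrypted: proceed
    assert!(check_encryption(false, false).is_ok());
    // encryption requested but cannot be set: fail
    assert!(check_encryption(false, true).is_err());
}
```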

This was reported in the community forum:

https://forum.proxmox.com/threads/107383/
https://forum.proxmox.com/threads/164941/

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250416070703.493585-1-d.csapak@proxmox.com
2025-07-22 19:16:44 +02:00
Thomas Lamprecht
04c6015676 api: node system services: postfix is again a non-templated systemd unit
Since postfix (3.9.1-7) the templated `postfix@-` instance unit is
gone again and the non-templated postfix.service is back, so cope
with that here.

This mirrors commit 21a6ed782 from pve-manager

Closes: #6537
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 10:45:11 +02:00
Thomas Lamprecht
f176a0774d ui: update online help info map
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 07:56:25 +02:00
Thomas Lamprecht
ead4ec5d7e ui: do not show consent banner twice for OIDC login
We unconditionally showed the consent banner when constructing the
login view, but for an OIDC based authentication flow the user might
visit that view twice: once when first loading the UI, and a second
time when getting redirected back by their OIDC provider.

Checking if there was such an OIDC redirect and skipping the banner
in that case avoids the issue.

Fix is similar in principle to what we do for pve-manager when closing
issue #6311 but replaces the if guard with a reverse early-return.

Report: https://bugzilla.proxmox.com/show_bug.cgi?id=6311
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 07:55:49 +02:00
Thomas Lamprecht
ac11a77580 buildsys: cleanup old target files before generating new ones
Avoids a rather annoying confirmation prompt from `mv` asking if it
is OK to move over the file when one calls these targets repeatedly,
like during development edit+install+test cycles.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 07:55:38 +02:00
Thomas Lamprecht
a740264063 bump version to 4.0.3-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-19 20:05:31 +02:00
Shannon Sterz
00ac201db7 docs: update apt key installation guide to use the new release key rings
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250718091452.69458-1-s.sterz@proxmox.com
2025-07-19 20:01:23 +02:00
Thomas Lamprecht
7a5f194f00 pxar extract: linkat has no AT_SYMLINK_NOFOLLOW flag
This fixes extracting any pxar directory with a hardlink.

linkat defaults to not following symlinks for the olddir (source)
path, and only understands the `AT_SYMLINK_FOLLOW` (notice, there is
no "NO") and `AT_EMPTY_PATH` flags, as can be read in the linkat
man page.

The nix::unistd::LinkatFlags::NoSymlinkFollow flag was used here
previously with nix 0.26, where it was just a wrapper around the
AtFlags, with NoSymlinkFollow resolving to AtFlags::empty() [0].
The nix 0.29 migration did a 1:1 translation from the now deprecated
LinkatFlags to AtFlags, i.e. NoSymlinkFollow to AT_SYMLINK_FOLLOW,
which just cannot work for linkat; one must migrate to the empty
flags instead. That nix drops a safer type here seems a bit odd
though.

[0]: https://docs.rs/nix/0.26.1/src/nix/unistd.rs.html#1262-1263

Report: https://forum.proxmox.com/168633/
Fixes: 2a7012f96 ("update pbs-client to nix 0.29 and rustyline 0.14")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-19 20:00:14 +02:00
Shannon Sterz
cd23b5372f docs: update repository chapter to reflect new deb822 format
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250717075330.53355-1-s.sterz@proxmox.com
2025-07-17 17:58:30 +02:00
Thomas Lamprecht
b19d2c393f bump version to 4.0.2-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-16 01:52:15 +02:00
Thomas Lamprecht
57d82fde86 d/postinst: fix version used for notification-mode update guard
This wasn't known at development time, as it needs to be less than
the version this was first shipped with.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-16 01:52:15 +02:00
Thomas Lamprecht
8382d66513 docs: actually add pbs3to4 man page
Fixes: 11b1bd5bc ("build: Adapt from pbs2to3 to pbs3to4")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-16 01:52:15 +02:00
Christian Ebner
ae3994e003 garbage collection: track chunk cache stats and show in task log
Count the chunk cache hits and misses and display the resulting
values and the hit ratio in the garbage collection task log summary.

This allows investigating possible issues and tuning the cache
capacity, for example by comparing against other values in the
summary such as the on-disk chunk count.

Exemplary output
```
2025-05-16T22:31:53+02:00: Chunk cache: hits 15817, misses 873 (hit ratio 94.77%)
2025-05-16T22:31:53+02:00: Removed garbage: 0 B
2025-05-16T22:31:53+02:00: Removed chunks: 0
2025-05-16T22:31:53+02:00: Original data usage: 64.961 GiB
2025-05-16T22:31:53+02:00: On-Disk usage: 1.037 GiB (1.60%)
2025-05-16T22:31:53+02:00: On-Disk chunks: 874
2025-05-16T22:31:53+02:00: Deduplication factor: 62.66
2025-05-16T22:31:53+02:00: Average chunk size: 1.215 MiB
```

Sidenote: the discrepancy between the cache miss counter and the
on-disk chunk count in the output shown above can be attributed to
the all-zero chunk, which is inserted during the atime update check
at the start of garbage collection but not referenced by any index
file in this exemplary case.
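
The hit ratio shown in the summary is the plain hits / (hits +
misses) percentage; for the log line above:

```rust
/// Minimal sketch of the hit-ratio computation from the task log
/// summary: hits as a percentage of all cache lookups.
fn hit_ratio_percent(hits: u64, misses: u64) -> f64 {
    hits as f64 / (hits + misses) as f64 * 100.0
}

fn main() {
    // 15817 hits, 873 misses, as in the exemplary output above
    println!("hit ratio {:.2}%", hit_ratio_percent(15817, 873));
    // prints: hit ratio 94.77%
}
```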

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250604153449.482640-3-c.ebner@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-16 01:52:15 +02:00
Lukas Wagner
f645974503 d/postinst: migrate notification mode default on update
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-5-l.wagner@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-16 01:52:15 +02:00
Lukas Wagner
b289d294c8 ui: datastore options: notification: use radio controls to select mode
This makes it consistent with tape backup job options and PVE's backup
jobs. It also visualizes the dependency of 'notify' and 'notify-user'
on 'notification-mode'.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-11-l.wagner@proxmox.com
2025-07-16 01:12:56 +02:00
Lukas Wagner
00290872a3 ui: datastore options: drop notify and notify-user rows
Even if the notification mode is set to 'notification-system', the
datastore options grid still shows the keys for 'Notify' and 'Notify
User', which have no effect in this mode:

        Notification:      [Use global notification settings]
        Notify:            [Prune: Default(always), etc...]
        Notify User:       [root@pam]

This is quite confusing.

Unfortunately, it seems to be quite hard to dynamically disable/hide
rows in the grid panel used in this view.

For that reason these rows are removed completely for now. The options
are still visible when opening the edit window for the 'Notification'
row.

While this slightly worsens UX in some cases (information is hidden), it
improves clarity by reducing ambiguity, which is also a vital part of
good UX.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-10-l.wagner@proxmox.com
2025-07-16 01:12:56 +02:00
Lukas Wagner
0471464f4e ui: datastore options: notifications: use same jargon as tape-jobs and PVE
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-9-l.wagner@proxmox.com
2025-07-16 01:12:56 +02:00
Lukas Wagner
82963b7d4b ui: one-shot tape backup: use same wording as tape-backup jobs
Change the dialog of one-shot tape-backups in such a way that they use
the same jargon as scheduled tape backup jobs.

The width of the dialog is increased by 150px to 750px so that the
slightly larger amount of text fits nicely.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-8-l.wagner@proxmox.com
2025-07-16 01:12:56 +02:00
Lukas Wagner
025afdb9fe ui: tape backup job: move notification settings to a separate tab
For consistency, use the same UI approach as for PVE's backup jobs. Tape
backup jobs now gain a new tab for all notification related settings:

  ( ) Use global notification settings
  (x) Use sendmail to send an email (legacy)
      Recipient: [              ]

'Recipient' is disabled when the first radio control is selected.

The term 'Notification System' is removed altogether from the UI. It
is not necessarily clear to a user that this refers to the settings
in Configuration > Notifications.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-7-l.wagner@proxmox.com
2025-07-16 01:12:56 +02:00
Lukas Wagner
20053ec216 ui: datastore options view: switch to new notification-mode default
This default is displayed in the grid panel if the datastore config
retrieved from the API does not contain any value for notification-mode.
Since the default changed from 'legacy-sendmail' to
'notification-system' in the schema datatype, the defaultValue field
needs to be adapted as well.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-6-l.wagner@proxmox.com
2025-07-16 01:12:56 +02:00
Lukas Wagner
243e6a5784 cli: manager: add 'migrate-config default-notification-mode' command
This one migrates any datastore or tape backup job that relied on the
old default (legacy-sendmail) to an explicit setting of
legacy-sendmail. This allows us to change the default without
changing behavior for anybody.

This new command is intended to be called by d/postinst on upgrade to
the package version which introduces the new default value for
'notification-mode'.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-4-l.wagner@proxmox.com
2025-07-16 01:12:47 +02:00
Lukas Wagner
9526aee10a cli: manager: move update-to-prune-jobs command to new migrate-config sub-command
The new subcommand is introduced so that we have a common namespace
for any config migration tasks which are triggered by d/postinst (or
potentially by hand).

No functional changes.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-3-l.wagner@proxmox.com
2025-07-16 01:12:47 +02:00
Thomas Lamprecht
99129bbbd1 require pbs-api-types 1.0.1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-16 01:00:45 +02:00
Christian Ebner
4cba26673d docs: rephrase and extend rate limiting description for sync jobs
Since commit 37a85cf6 ("fix: ui: sync job: edit rate limit based on
sync direction") rate limits for sync jobs can be correctly applied
for both directions. State this in the documentation and explicitly
mention the directions to reduce confusion.

Further, also mention the burst parameters, as they were not
mentioned at all.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250623124543.590388-1-c.ebner@proxmox.com
2025-07-16 00:14:55 +02:00