Add an `S3 Clients` item to the navigation tree to allow accessing the
S3 client configuration view and edit windows.
Adds the required source files to the Makefile.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds the view to configure S3 clients in the Configuration section of
the UI.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds an edit window for creating or editing S3 client configurations.
Loosely based on the edit window for the remote configuration.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In preparation for adding the S3-backed datastore variant to the edit
window, introduce a datastore type selector to distinguish between the
creation of regular and removable datastores, instead of using a
checkbox as is currently the case.
This makes it easier to add further datastore type variants later on
while keeping the datastore edit window compact.
Since selecting the type is one of the first steps during datastore
creation, position the component right below the datastore name field
and re-organize the components related to removable datastore
creation, keeping the additional components required for S3-backed
datastore creation in mind.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Implements the garbage collection for datastores backed by an S3
object store.
Take advantage of the local datastore by placing marker files in the
chunk store during phase 1 of the garbage collection, updating their
atime if already present.
This allows us to avoid expensive API calls to update object
metadata, which would only be possible via a copy object operation.
Phase 2 is implemented by fetching a list of all the chunks via the
ListObjectsV2 API call, filtered by the chunk folder prefix.
This operation has to be performed in batches, as the API returns at
most 1000 objects per response.
For each object key, look up the marker file and decide based on the
marker's existence and its atime whether the chunk object needs to be
removed. Deletion happens via the DeleteObjects operation, which
removes multiple chunks with a single request.
This allows efficiently identifying chunks which are no longer in
use, while remaining performant and cost effective.
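To illustrate the shape of phase 2, a minimal sketch in Rust, assuming a
hypothetical `S3Client` interface (`list_objects_v2`, `delete_objects` and
the marker atime lookup are stand-ins, not the actual code):
```
use std::time::SystemTime;

/// Hypothetical minimal S3 interface, for illustration only.
struct ObjectMeta {
    key: String,
}

trait S3Client {
    /// List up to 1000 objects below `prefix`, continuing from `token`.
    fn list_objects_v2(&self, prefix: &str, token: Option<String>)
        -> (Vec<ObjectMeta>, Option<String>);
    /// Remove up to 1000 objects with a single DeleteObjects request.
    fn delete_objects(&self, keys: &[String]);
}

/// Phase 2 sketch: delete chunk objects whose local marker file is missing or
/// was not touched (atime older than the cutoff) during phase 1.
fn sweep_unused_chunks<F>(client: &dyn S3Client, marker_atime: F, cutoff: SystemTime)
where
    F: Fn(&str) -> Option<SystemTime>, // atime lookup for the local marker file
{
    let mut token = None;
    loop {
        // The API returns at most 1000 objects per response, so paginate.
        let (objects, next_token) = client.list_objects_v2(".chunks/", token);
        let stale: Vec<String> = objects
            .into_iter()
            .filter(|object| match marker_atime(&object.key) {
                Some(atime) => atime < cutoff, // marker present, but chunk unused
                None => true,                  // no marker at all
            })
            .map(|object| object.key)
            .collect();
        if !stale.is_empty() {
            client.delete_objects(&stale); // one DeleteObjects request per batch
        }
        match next_token {
            Some(t) => token = Some(t),
            None => break,
        }
    }
}
```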
Baseline runtime performance tests:
-----------------------------------
3 garbage collection runs were performed with hot filesystem caches
(ensured by an additional GC run before the test runs). The PBS
instance was virtualized, with the same virtual disk using ZFS
backing all the local cache stores:
All datastores contained the same encrypted data, with the following
content statistics:
Original data usage: 269.685 GiB
On-Disk usage: 9.018 GiB (3.34%)
On-Disk chunks: 6477
Deduplication factor: 29.90
Average chunk size: 1.426 MiB
The results demonstrate the overhead caused by the additional
ListObjectsV2 API calls and their processing, which varies depending
on the object store backend.
Average garbage collection runtime:
Local datastore: (2.04 ± 0.01) s
Local RADOS gateway (Squid): (3.05 ± 0.01) s
AWS S3: (3.05 ± 0.01) s
Cloudflare R2: (6.71 ± 0.58) s
After pruning of all datastore contents (therefore including
DeleteObjects requests):
Local datastore: 3.04 s
Local RADOS gateway (Squid): 14.08 s
AWS S3: 13.06 s
Cloudflare R2: 78.21 s
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Read or write the ownership information from/to the corresponding
object in the S3 object store. Keep that information available if
the bucket is reused as datastore.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When pruning a backup group or a backup snapshot for a datastore with
an S3 object store backend, remove the associated objects from the
bucket based on their prefix.
In order to exclude protected contents, add filtering based on the
presence of the protected marker.
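A rough sketch of the prefix based removal, reusing the hypothetical
`S3Client` interface from the garbage collection sketch above (the key
layout is illustrative):
```
/// Sketch: remove all objects below a snapshot prefix, unless the snapshot
/// carries a protected marker object.
fn prune_snapshot_objects(client: &dyn S3Client, snapshot_prefix: &str) {
    let mut token = None;
    let mut keys = Vec::new();
    loop {
        let (objects, next_token) = client.list_objects_v2(snapshot_prefix, token);
        // Skip the whole snapshot if the protected marker is present.
        if objects.iter().any(|object| object.key.ends_with("/.protected")) {
            return;
        }
        keys.extend(objects.into_iter().map(|object| object.key));
        match next_token {
            Some(t) => token = Some(t),
            None => break,
        }
    }
    // Delete in batches, as DeleteObjects accepts at most 1000 keys per request.
    for batch in keys.chunks(1000) {
        client.delete_objects(batch);
    }
}
```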
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Commit 8292d3d2 ("api2/admin/datastore: add get/set_protection")
introduced the protected flag for backup snapshots, considering
snapshots as protected based on the presence/absence of the
`.protected` marker file in the corresponding snapshot directory.
To allow independent recovery of a datastore backed by an S3 bucket,
also create/delete the marker file on the object store backend. For
actual checks, still rely on the marker as encountered in the local
cache store.
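A minimal sketch of keeping the two markers in sync, assuming hypothetical
`put_object`/`delete_object` helpers and an illustrative key layout:
```
use std::path::Path;

/// Hypothetical single-object operations, for illustration only.
trait S3ObjectOps {
    fn put_object(&self, key: &str, data: &[u8]);
    fn delete_object(&self, key: &str);
}

/// Sketch: toggle protection by creating/removing the `.protected` marker on
/// both the S3 backend and the local cache store; the actual protection
/// checks keep relying on the local marker only.
fn set_protection(
    s3: &dyn S3ObjectOps,
    snapshot_prefix: &str, // object key prefix of the snapshot (illustrative)
    local_snapshot_dir: &Path,
    protect: bool,
) -> std::io::Result<()> {
    let object_key = format!("{snapshot_prefix}/.protected");
    let local_marker = local_snapshot_dir.join(".protected");
    if protect {
        s3.put_object(&object_key, b""); // empty marker object
        std::fs::File::create(&local_marker)?;
    } else {
        s3.delete_object(&object_key);
        if local_marker.exists() {
            std::fs::remove_file(&local_marker)?;
        }
    }
    Ok(())
}
```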
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The S3 object store only allows storing objects, referenced by their
key. For backup namespaces, however, datastores use directories, so
namespaces cannot be represented as a one-to-one mapping.
Instead, create an empty marker file for each namespace and operate
based on that.
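For illustration, such a mapping could look roughly like this (key layout and
names are assumptions, not the actual implementation):
```
/// Sketch: map a namespace path like "ns1/ns2" to an empty marker object key,
/// since S3 has no notion of directories.
fn namespace_marker_key(store_prefix: &str, namespace: &str) -> String {
    let mut key = String::from(store_prefix);
    for component in namespace.split('/').filter(|c| !c.is_empty()) {
        key.push_str("/ns/");
        key.push_str(component);
    }
    key.push_str("/.namespace-marker");
    key
}

// Creating a namespace then boils down to uploading an empty object under this
// key; listing namespaces filters object keys by the marker suffix.
```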
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
For datastores backed by an S3 compatible object store, rather than
reading the chunks to be verified from the local filesystem, fetch
them via the s3 client from the configured bucket.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In order to fetch chunks from an S3 compatible object store,
instantiate the S3 client in the verify worker by storing the
datastore's backend. This allows reusing the same instance for the
whole verification task.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Get and store the datastore's backend on local chunk reader
instantiation and fetch chunks from either the filesystem or the S3
object store based on the backend variant.
By storing the backend variant, the S3 client is instantiated only
once and reused until the local chunk reader instance is dropped.
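Conceptually, the reader resolves the backend once and dispatches per chunk,
roughly as in the following sketch (types and method names are placeholders):
```
use std::io;

// Placeholder types, for illustration only.
struct ChunkStore;
struct S3Client;

enum DatastoreBackend {
    Filesystem,
    S3(S3Client),
}

struct LocalChunkReader {
    store: ChunkStore,
    // Resolved once on instantiation, so the S3 client (and its connection)
    // is reused until the reader is dropped.
    backend: DatastoreBackend,
}

impl LocalChunkReader {
    fn read_raw_chunk(&self, digest: &[u8; 32]) -> io::Result<Vec<u8>> {
        match &self.backend {
            DatastoreBackend::Filesystem => self.read_from_chunk_store(digest),
            DatastoreBackend::S3(client) => self.fetch_from_s3(client, digest),
        }
    }

    fn read_from_chunk_store(&self, _digest: &[u8; 32]) -> io::Result<Vec<u8>> {
        unimplemented!("read the chunk file from the local chunk store")
    }

    fn fetch_from_s3(&self, _client: &S3Client, _digest: &[u8; 32]) -> io::Result<Vec<u8>> {
        unimplemented!("GET the chunk object from the configured bucket")
    }
}
```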
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Read the chunk based on the datastore's backend, reading from the
local filesystem or fetching from the S3 object store.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If the datastore is backed by an S3 object store, not only insert the
pulled contents into the local cache store, but also upload them to
the S3 backend.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If the datastore is backed by an s3 compatible object store, upload
the client log content to the s3 backend before persisting it to the
local cache store.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Reupload the manifest to the S3 object store backend on manifest
updates, if S3 is configured as backend.
This also triggers the initial manifest upload when finishing a
backup snapshot in the backup API call handler.
Also update the locally cached version for fast and efficient
listing of contents without the need to perform expensive (as in
monetary cost and IO latency) requests.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If the datastore is backed by an S3 compatible object store, upload
the dynamic or fixed index files to the object store after closing
them. The local index files are kept in the local caching datastore
to allow for fast and efficient content lookups, avoiding expensive
(as in monetary cost and IO latency) requests.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Upload blobs to both the local datastore cache and the S3 object
store if S3 is configured as backend.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Upload fixed and dynamically sized chunks to either the filesystem or
the S3 object store, depending on the configured backend.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Get and store the datastore's backend during creation of the backup
runtime environment and upload the chunks to the local filesystem or
the S3 object store based on the backend variant.
By storing the backend variant in the environment, the S3 client is
instantiated only once and reused for all API calls within the same
backup HTTP/2 connection.
Refactor the upgrade method by moving all logic into the async block,
so that the now possible error on backup environment creation gets
propagated to the thread spawn call side.
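A simplified, self-contained sketch of the refactoring pattern (not the
actual handler code):
```
use std::future::Future;

#[derive(Debug)]
struct Error(String);

struct BackupEnvironment;

impl BackupEnvironment {
    /// Creation can now fail, e.g. if instantiating the S3 client does not work.
    fn new() -> Result<Self, Error> {
        Ok(BackupEnvironment)
    }
}

async fn run_backup(_env: BackupEnvironment) -> Result<(), Error> {
    Ok(())
}

/// Sketch: the fallible environment setup lives inside the async block, so a
/// failure is propagated as the future's error to the spawning side instead
/// of having to be handled before spawning.
fn upgrade_handler() -> impl Future<Output = Result<(), Error>> {
    async move {
        let env = BackupEnvironment::new()?;
        run_backup(env).await
    }
}
```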
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Implements an enum with the variants Filesystem and S3 to distinguish
between the available backends. Filesystem is used as default if no
backend is configured in the datastore's configuration. If the
datastore has an S3 backend configured, the backend method
instantiates an S3 client and returns it with the S3 variant.
This allows instantiating the client once, keeping and reusing the
same open connection to the API for the lifetime of a task or job,
e.g. in the backup writer/reader runtime environment.
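A condensed sketch of the idea, with placeholder config and client types:
```
use std::sync::Arc;

// Placeholder types for illustration; the actual client and config differ.
struct S3ClientConfig;
struct S3Client;

impl S3Client {
    fn new(_config: &S3ClientConfig) -> Self {
        S3Client
    }
}

/// Backend of a datastore, as returned by a `backend()` method.
enum DatastoreBackend {
    /// Default: contents live on the local filesystem only.
    Filesystem,
    /// Contents are additionally stored in an S3 bucket; the client keeps its
    /// connection open and is reused for the lifetime of a task or job.
    S3(Arc<S3Client>),
}

struct DataStore {
    /// `None` if no backend is set in the datastore configuration.
    s3_config: Option<S3ClientConfig>,
}

impl DataStore {
    fn backend(&self) -> DatastoreBackend {
        match &self.s3_config {
            None => DatastoreBackend::Filesystem,
            Some(config) => DatastoreBackend::S3(Arc::new(S3Client::new(config))),
        }
    }
}
```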
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds a dedicated api endpoint and a proxmox-backup-manager command to
check if the configured S3 client can reach the bucket.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Check if the configured S3 object store backend can be reached and
the provided secrets have the permissions to access the bucket.
Perform the check before creating the chunk store, so it is not left
behind if the bucket cannot be reached.
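A minimal sketch of the ordering, with placeholder helpers standing in for
the actual access check and chunk store creation:
```
// Sketch of the ordering only; both helpers below are placeholders.
fn create_datastore(has_s3_backend: bool) -> Result<(), String> {
    if has_s3_backend {
        // Fail early: if the bucket is unreachable or the credentials lack
        // access, no chunk store directory is left behind on disk.
        check_s3_bucket_access()?;
    }
    create_chunk_store()
}

fn check_s3_bucket_access() -> Result<(), String> {
    // e.g. issue a cheap request against the configured bucket and map
    // failures to an error
    Ok(())
}

fn create_chunk_store() -> Result<(), String> {
    // create the local chunk store directory structure
    Ok(())
}
```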
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allows creating, listing, modifying and deleting configurations for
S3 clients via the API.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds the client configuration for S3 object stores as dedicated
configuration files.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds helper methods to generate the S3 object keys, given a relative
path and filename for datastore contents, or a digest in the case of
chunk files.
Regular datastore contents are stored by grouping them with a content
prefix in the object key. In order to keep the object key length
small, given the max limit of 1024 bytes [0], `.cnt` is used as
content prefix. Chunks on the other hand are prefixed by `.chunks`,
same as on regular datastores.
The prefix allows for selective listing of either contents or chunks
by providing the prefix to the respective API calls.
[0] https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-keys.html
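Roughly, the helpers map a relative path or a chunk digest to object keys
along these lines (function names and the exact layout are illustrative; the
`.cnt` and `.chunks` prefixes are the ones described above):
```
/// Sketch: object key for regular datastore contents, grouped below the short
/// `.cnt` content prefix to stay well within the 1024 byte key limit.
fn content_object_key(relative_path: &str, filename: &str) -> String {
    format!(".cnt/{}/{}", relative_path.trim_matches('/'), filename)
}

/// Sketch: object key for a chunk, prefixed by `.chunks` like on regular
/// datastores (with the usual 4 hex digit sub-directory level).
fn chunk_object_key(digest: &[u8; 32]) -> String {
    let hex: String = digest.iter().map(|byte| format!("{byte:02x}")).collect();
    format!(".chunks/{}/{}", &hex[..4], hex)
}

// Listing only contents or only chunks then amounts to passing `.cnt/` or
// `.chunks/` as prefix to the ListObjectsV2 calls.
```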
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Some settings on changers prevent changing the encryption parameters
via the application, e.g. some libraries have an 'encryption disabled'
or an 'encryption is library managed' option. While the former
situation can be fixed by setting the library to 'application
managed', the latter is sometimes necessary for FIPS compliance (to
ensure the tape data is encrypted).
When libraries are configured this way, the code currently fails with
'drive does not support AES-GCM encryption'. Instead of failing, check
on the first call to set_encryption whether we can set it, and save
that result.
Only fail when encryption is to be enabled but not allowed, and
ignore the error when the backup should be done unencrypted.
`assert_encryption_mode` must also check if it's possible, and skip
any error if it's not possible and no encryption was wanted.
With these changes, it should be possible to use libraries configured
this way as long as no encryption is configured on the PBS side. (We
currently don't have a library with such capabilities to test with.)
Note that in contrast to normal operation, the tape label will also be
encrypted then and will not be readable in case the encryption key is
lost or changed.
Additionally, return an error for 'drive_set_encryption' in case the
drive reports that it does not support hardware encryption, because this
is now already caught one level above in 'set_encryption'.
Also, slightly change the error message to make it clear that the drive
does not support *setting* encryption, not that it does not support
it at all.
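An illustrative sketch of the changed decision logic (names and types are
not the actual drive API):
```
struct Fingerprint;

struct Drive {
    /// Whether setting encryption worked on the first attempt; `None` until probed.
    encryption_possible: Option<bool>,
}

impl Drive {
    /// Hypothetical low-level call that programs the drive's encryption mode.
    fn drive_set_encryption(&mut self, _key: Option<&Fingerprint>) -> Result<(), String> {
        Err("drive does not support setting AES-GCM encryption".to_string())
    }

    fn set_encryption(&mut self, key: Option<&Fingerprint>) -> Result<(), String> {
        if self.encryption_possible == Some(false) {
            // We already know the library does not let us change encryption.
            return match key {
                Some(_) => Err("drive does not support setting encryption".to_string()),
                None => Ok(()), // unencrypted backup: ignore and continue
            };
        }
        match self.drive_set_encryption(key) {
            Ok(()) => {
                self.encryption_possible = Some(true);
                Ok(())
            }
            Err(err) => {
                self.encryption_possible = Some(false);
                match key {
                    Some(_) => Err(err), // encryption requested but not possible
                    None => Ok(()),      // no encryption wanted anyway
                }
            }
        }
    }
}
```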
This was reported in the community forum:
https://forum.proxmox.com/threads/107383/
https://forum.proxmox.com/threads/164941/
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250416070703.493585-1-d.csapak@proxmox.com
Since postfix (3.9.1-7) the templated `postfix@-` service instance is
gone again and the non-templated postfix.service is back, so cope
with that here.
This mirrors commit 21a6ed782 from pve-manager
Closes: #6537
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We unconditionally showed the consent banner when constructing the
login view, but for an OIDC based authentication flow the user might
visit that view twice: once when first loading the UI and a second
time when getting redirected back by their OIDC provider.
Checking if there was such an OIDC redirect and skipping the banner
in that case avoids this issue.
The fix is similar in principle to what we do for pve-manager when
closing issue #6311, but replaces the if guard with an inverted early
return.
Report: https://bugzilla.proxmox.com/show_bug.cgi?id=6311
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Avoids a rather annoying confirmation prompt from `mv` asking whether
it's OK to move over the file when one calls these targets
repeatedly, like during development edit+install+test cycles.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This fixes extracting any pxar directory with a hardlink.
linkat defaults to not following symlinks for the olddir (source)
path, and only understands the `AT_SYMLINK_FOLLOW` (notice, there is
no "NO") and `AT_EMPTY_PATH` flags, as can be read in the linkat
man page.
The nix::unistd::LinkatFlags::NoSymlinkFollow flag was used here
previously with nix 0.26; it was just a wrapper around AtFlags, with
NoSymlinkFollow resolving to AtFlags::empty() [0].
The nix 0.29 migration did a 1:1 translation from the now deprecated
LinkatFlags to AtFlags, i.e. NoSymlinkFollow to AT_SYMLINK_FOLLOW,
which just cannot work for linkat; one must migrate to empty flags
instead. That nix drops the safer type here seems a bit odd though.
[0]: https://docs.rs/nix/0.26.1/src/nix/unistd.rs.html#1262-1263
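For clarity, the crucial detail shown with a direct `libc::linkat` call
(assuming the `libc` crate; the actual code keeps using the nix wrapper):
not following a symlink at the source path is linkat's default and
corresponds to passing empty flags, not AT_SYMLINK_FOLLOW.
```
use std::ffi::CString;
use std::io;

/// Hard link `oldpath` to `newpath` without following a symlink at `oldpath`.
fn hardlink_no_follow(oldpath: &str, newpath: &str) -> io::Result<()> {
    let old = CString::new(oldpath)?;
    let new = CString::new(newpath)?;
    let rc = unsafe {
        libc::linkat(
            libc::AT_FDCWD,
            old.as_ptr(),
            libc::AT_FDCWD,
            new.as_ptr(),
            // Empty flags: do NOT follow a symlink at oldpath (linkat's default).
            // Passing libc::AT_SYMLINK_FOLLOW here would do the exact opposite.
            0,
        )
    };
    if rc == 0 {
        Ok(())
    } else {
        Err(io::Error::last_os_error())
    }
}
```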
Report: https://forum.proxmox.com/168633/
Fixes: 2a7012f96 ("update pbs-client to nix 0.29 and rustyline 0.14")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This wasn't known at development time, as it needs to be lower than
the version this was first shipped with.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Count the chunk cache hits and misses and display the resulting
values and the hit ratio in the garbage collection task log summary.
This allows investigating possible issues and tuning the cache
capacity, also by being able to compare against other values in the
summary such as the on-disk chunk count.
Example output:
```
2025-05-16T22:31:53+02:00: Chunk cache: hits 15817, misses 873 (hit ratio 94.77%)
2025-05-16T22:31:53+02:00: Removed garbage: 0 B
2025-05-16T22:31:53+02:00: Removed chunks: 0
2025-05-16T22:31:53+02:00: Original data usage: 64.961 GiB
2025-05-16T22:31:53+02:00: On-Disk usage: 1.037 GiB (1.60%)
2025-05-16T22:31:53+02:00: On-Disk chunks: 874
2025-05-16T22:31:53+02:00: Deduplication factor: 62.66
2025-05-16T22:31:53+02:00: Average chunk size: 1.215 MiB
```
Sidenote: the discrepancy between the cache miss counter and the
on-disk chunk count in the output shown above can be attributed to
the all-zero chunk, which is inserted during the atime update check
at the start of garbage collection but not referenced by any index
file in this example.
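For reference, the reported ratio is simply hits / (hits + misses); a
minimal sketch of such counters (names are illustrative):
```
use std::sync::atomic::{AtomicU64, Ordering};

/// Illustrative counters behind the summary line above.
#[derive(Default)]
struct CacheStats {
    hits: AtomicU64,
    misses: AtomicU64,
}

impl CacheStats {
    fn record(&self, hit: bool) {
        if hit {
            self.hits.fetch_add(1, Ordering::Relaxed);
        } else {
            self.misses.fetch_add(1, Ordering::Relaxed);
        }
    }

    fn summary(&self) -> String {
        let hits = self.hits.load(Ordering::Relaxed);
        let misses = self.misses.load(Ordering::Relaxed);
        let total = hits + misses;
        let ratio = if total > 0 {
            hits as f64 * 100.0 / total as f64
        } else {
            0.0
        };
        format!("Chunk cache: hits {hits}, misses {misses} (hit ratio {ratio:.2}%)")
    }
}
```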
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250604153449.482640-3-c.ebner@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This makes it consistent with tape backup job options and PVE's backup
jobs. It also visualizes the dependency of 'notify' and 'notify-user'
on 'notification-mode'.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-11-l.wagner@proxmox.com
Even if the notification mode is set to 'notification-system', the
datastore options grid still shows the keys for 'Notify' and 'Notify
User', which have no effect in this mode:
Notification: [Use global notification settings]
Notify: [Prune: Default(always), etc...]
Notify User: [root@pam]
This is quite confusing.
Unfortunately, it seems to be quite hard to dynamically disable/hide rows
in the grid panel used in this view.
For that reason these rows are removed completely for now. The options
are still visible when opening the edit window for the 'Notification'
row.
While this slightly worsens UX in some cases (information is hidden), it
improves clarity by reducing ambiguity, which is also a vital part of
good UX.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-10-l.wagner@proxmox.com
Change the dialog for one-shot tape backups in such a way that it
uses the same jargon as scheduled tape backup jobs.
The width of the dialog is increased by 150px to 750px so that the
slightly larger amount of text fits nicely.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-8-l.wagner@proxmox.com
For consistency, use the same UI approach as for PVE's backup jobs. Tape
backup jobs now gain a new tab for all notification related settings:
( ) Use global notification settings
(x) Use sendmail to send an email (legacy)
Recipient: [ ]
'Recipient' is disabled when the first radio control is selected.
The term 'Notification System' is dropped from the UI altogether. It
is not necessarily clear to a user that this refers to the settings
in Configuration > Notifications.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-7-l.wagner@proxmox.com
This default is displayed in the grid panel if the datastore config
retrieved from the API does not contain any value for notification-mode.
Since the default changed from 'legacy-sendmail' to 'notification-system'
in the schema datatype, the defaultValue field needs to be adapted as
well.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-6-l.wagner@proxmox.com
This one migrates any datastore or tape backup job that relied on the
old default (legacy-sendmail) to an explicit setting of
legacy-sendmail. This allows us to change the default without changing
behavior for anybody.
This new command is intended to be called by d/postinst on upgrade to
the package version which introduces the new default value for
'notification-mode'.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-4-l.wagner@proxmox.com
The new subcommand is introduced so that we have a common namespace for
any config migration tasks which are triggered by d/postinst (or potentially
by hand).
No functional changes.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-3-l.wagner@proxmox.com
Since commit 37a85cf6 ("fix: ui: sync job: edit rate limit based on
sync direction") rate limits for sync jobs can be correctly applied
for both directions. State this in the documentation and explicitly
mention the directions to reduce confusion.
Further, also mention the burst parameters, as they are not mentioned
at all.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250623124543.590388-1-c.ebner@proxmox.com