Commit Graph

8509 Commits

Christian Ebner
cfc93ebd03 cli: use endpoint over client for s3 endpoint subcommands
In order to be consistent with the UI, where the naming was changed
from `client` to `endpoint` as well, and thereby reduce possible
confusion.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
2838914c21 ui: use S3 endpoint over S3 client for ui elements
To distinguish the actual client from the endpoint configuration,
refer to the endpoint configuration and secrets as `S3 Endpoint`.

Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
96f9096931 docs: Add section describing how to set up an s3 backed datastore
Describe the required basic S3 client setup and possible configuration
options, as well as the actual setup of a datastore using the client
and a bucket as backend.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
c10c8ffeca api/ui: add flag to allow overwriting in-use marker for s3 backend
Datastores backed by an s3 object store mark the corresponding bucket
prefix, given by the datastore name, as in-use to protect from
accidental reuse of the same datastore by other instances.

If the datastore has to be re-created because the Proxmox Backup
Server instance is no longer available, skipping the check and
overwriting the marker with the current hostname is necessary.

Add this flag to the datastore create api endpoint and expose it in
the web ui and the cli command.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
78d9265a15 datastore: run s3-refresh when reusing a datastore with s3 backend
Instead of relying on the user to manually trigger the refresh after
datastore creation, do it automatically in the datastore creation
task, thereby improving ergonomics.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
3cc3c10d27 datastore: mark store as in-use by setting marker on s3 backend
Adds an in-use marker on the S3 store to protect from accidental reuse
of the same datastore by multiple Proxmox Backup Server instances. Set
the marker file on store creation.

The local cache folder, however, is always assumed to be empty and has
to be created on datastore creation to guarantee consistency.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
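
A minimal sketch of how such an in-use marker claim might look, assuming a
hypothetical `S3Backend` trait with `read_object`/`put_object` methods (not
the actual Proxmox S3 client API) and the overwrite flag added in commit
c10c8ffeca:

```rust
use std::io;

/// Hypothetical, simplified S3 accessor; stands in for the real client.
trait S3Backend {
    fn read_object(&self, key: &str) -> io::Result<Option<Vec<u8>>>;
    fn put_object(&self, key: &str, data: &[u8]) -> io::Result<()>;
}

/// Create or validate the in-use marker under the datastore prefix.
///
/// If the marker exists and names a different host, refuse unless
/// `overwrite_in_use` is set, in which case the marker is replaced
/// with the current hostname.
fn claim_bucket_prefix<B: S3Backend>(
    backend: &B,
    datastore_prefix: &str,
    hostname: &str,
    overwrite_in_use: bool,
) -> io::Result<()> {
    let marker_key = format!("{datastore_prefix}/.in-use");

    if let Some(existing) = backend.read_object(&marker_key)? {
        let owner = String::from_utf8_lossy(&existing);
        if owner.trim() != hostname && !overwrite_in_use {
            return Err(io::Error::new(
                io::ErrorKind::AlreadyExists,
                format!("datastore prefix already in use by '{}'", owner.trim()),
            ));
        }
    }

    // (Re-)write the marker with the current hostname.
    backend.put_object(&marker_key, hostname.as_bytes())
}
```
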
Christian Ebner
40a287727f bin: expose reuse-datastore flag for proxmox-backup-manager
It is currently not possible to create a new datastore config that
reuses an existing datastore via the CLI. Expose the `reuse-datastore`
flag for the proxmox-backup-manager command as well, equivalent to
what is already exposed in the WebUI.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
7229d7129c bin: implement client subcommands for s3 configuration manipulation
Implement and expose the proxmox-backup-manager commands to interact
with the s3 client configuration.

This mostly requires inserting the commands into the cli command map
and binding them to the corresponding api methods. The list method is
the only exception, as it requires rendering the output in the
provided output format.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
22cd2711eb datastore: conditionally upload atime marker chunk to s3 backend
Since commit b18eab64 ("fix #5982: garbage collection: check atime
updates are honored"), the 4 MiB fixed-size, unencrypted and
compressed chunk containing all zeros is inserted at datastore
creation if the atime safety check is enabled.

If the datastore is backed by an S3 object store, chunk uploads are
avoided by checking for the presence of the chunks in the local cache
store. Therefore, the all-zero chunk would not be uploaded, since it
is already present locally.

Fix this by conditionally uploading the chunk before performing the
atime update check for datastores backed by S3.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
57ade02bfb ui: expose s3 refresh button for datastores backed by object store
Allows triggering a refresh of the local datastore contents from
the WebUI.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
3a616987c2 ui: render s3 refresh as valid maintenance type and task description
Analogous to the maintenance type `unmount`, show `s3-refresh` as a
translated string in the maintenance mode options and task
description.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
74f3a868dd cli: add dedicated subcommand for datastore s3 refresh
Allows manually triggering an s3 refresh via proxmox-backup-manager
by calling the corresponding api endpoint handler.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
b2ffc83627 api/datastore: implement refresh endpoint for stores with s3 backend
Allows easily refreshing the contents of the local cache store for
datastores backed by an S3 object store.

In order to guarantee that no read or write operations are ongoing,
the store is first set into the maintenance mode `S3Refresh`. Objects
are then fetched into a temporary directory to avoid losing contents
and consistency in case of an error. Once all objects have been
fetched, the existing contents are cleared out and the newly fetched
contents are moved into place.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
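
A rough sketch of the fetch-into-a-temporary-directory-then-swap approach
described in the commit message above, using only `std::fs`; the actual
download step is passed in as a closure and all paths and names are
illustrative:

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Refresh local contents: download into a temporary sibling directory
/// first, then swap it into place, so a failed fetch never leaves the
/// store in a half-updated state.
fn refresh_local_contents(
    store_dir: &Path,
    fetch_all: impl Fn(&Path) -> io::Result<()>,
) -> io::Result<()> {
    let parent = store_dir.parent().ok_or_else(|| {
        io::Error::new(io::ErrorKind::InvalidInput, "store has no parent directory")
    })?;

    let tmp_dir = parent.join(".refresh-tmp");
    let old_dir = parent.join(".refresh-old");

    // Fetch everything into the temporary directory first.
    fs::create_dir_all(&tmp_dir)?;
    fetch_all(&tmp_dir)?;

    // Swap: move the current contents aside, move the fresh contents
    // into place, then clean up the old contents.
    fs::rename(store_dir, &old_dir)?;
    fs::rename(&tmp_dir, store_dir)?;
    fs::remove_dir_all(&old_dir)?;

    Ok(())
}
```
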
Christian Ebner
9072382886 api: backup: add no-cache flag to bypass local datastore cache
Adds the `no-cache` flag so the client can request to bypass the
local datastore cache for chunk uploads. This is mainly intended for
debugging and benchmarking, but can be used in cases where the caching
is known to be ineffective (no possible deduplication).

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
014a049033 backup writer: refactor parameters into backup writer options struct
Introduce a BackupWriterOptions struct, bundling the currently
present writer start parameters in order to limit their number and
make them more easily extensible.

No functional changes intended.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
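
A sketch of the kind of parameter bundling such a refactor does; the field
names below are made up for illustration and do not mirror the real
`BackupWriterOptions` definition:

```rust
/// Illustrative options bundle: instead of passing each value as a
/// separate argument, callers fill one struct, so the start function's
/// signature stays stable when new options are added.
#[derive(Clone, Debug)]
struct BackupWriterOptions<'a> {
    datastore: &'a str,
    namespace: &'a str,
    backup_id: &'a str,
    encrypt: bool,
    // further flags (e.g. a no-cache behaviour as in this series) can
    // be added here without touching every call site
    no_cache: bool,
}

fn start_backup_writer(opts: &BackupWriterOptions<'_>) {
    // Only one parameter needs to be threaded through the call sites.
    println!(
        "starting writer for {}/{}/{} (encrypt={}, no_cache={})",
        opts.datastore, opts.namespace, opts.backup_id, opts.encrypt, opts.no_cache
    );
}

fn main() {
    let opts = BackupWriterOptions {
        datastore: "store1",
        namespace: "ns1",
        backup_id: "vm/100",
        encrypt: false,
        no_cache: false,
    };
    start_backup_writer(&opts);
}
```
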
Christian Ebner
f8304a3d31 datastore: local chunk reader: get cached chunk from local cache store
Check if a chunk is contained in the local cache and, if so, prefer
fetching it from the cache instead of pulling it via the S3 api. This
improves performance and reduces the number of requests to the
backend.

Basic restore performance tests:

Restored a snapshot containing the linux git repository (on-disk size
5.069 GiB, compressed 3.718 GiB) from an AWS S3 backed datastore, with
and without cached contents:
non cached: 691.95 s
all cached:  74.89 s

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
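
The read path described above boils down to a cache-first lookup with an S3
fallback. A simplified sketch, with the S3 fetch stubbed out and a
four-hex-digit grouping assumed for the local on-disk layout:

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Hypothetical S3 fetch; the real code goes through the s3 client.
fn fetch_chunk_from_s3(digest_hex: &str) -> io::Result<Vec<u8>> {
    Err(io::Error::new(
        io::ErrorKind::NotFound,
        format!("no such object: {digest_hex}"),
    ))
}

/// Prefer the locally cached copy of a chunk, fall back to the backend.
fn read_chunk(cache_base: &Path, digest_hex: &str) -> io::Result<Vec<u8>> {
    // Same on-disk layout as a regular chunk store: grouped by the
    // first four hex digits of the digest (an assumption in this sketch).
    let local_path = cache_base.join(&digest_hex[..4]).join(digest_hex);

    match fs::read(&local_path) {
        // cache hit, no request to the backend needed
        Ok(data) => Ok(data),
        // cache miss, fetch from the S3 backend instead
        Err(err) if err.kind() == io::ErrorKind::NotFound => fetch_chunk_from_s3(digest_hex),
        Err(err) => Err(err),
    }
}
```
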
Christian Ebner
0adeafa17b api: reader: use local datastore cache on s3 backend chunk fetching
Take advantage of the local datastore filesystem cache for datastores
backed by an s3 object store in order to reduce the number of requests
and the latency, and to increase throughput.

Also, reducing the number of requests is cost-beneficial for S3 object
stores which charge for fetching objects.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
4bda068654 api: backup: use local datastore cache on s3 backend chunk upload
Take advantage of the local datastore cache to avoid re-uploading
already known chunks. This not only helps improve backup/upload
speeds, but also avoids additional costs by reducing the number of
requests and the payload data transferred to the S3 object store api.

If the cache is present, look up whether it contains the chunk,
skipping the upload altogether if it does. Otherwise, load the chunk
into memory, upload it to the S3 object store api and insert it into
the local datastore cache.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
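
A compact sketch of the upload-side logic: consult the cache, skip the
upload on a hit, otherwise upload and insert. The cache and S3 types below
are stand-ins, not the actual API:

```rust
use std::collections::HashSet;
use std::io;

/// Stand-in for the local datastore cache (digest set + chunk files).
struct LocalCache {
    known_digests: HashSet<[u8; 32]>,
}

impl LocalCache {
    fn contains(&self, digest: &[u8; 32]) -> bool {
        self.known_digests.contains(digest)
    }
    fn insert(&mut self, digest: [u8; 32], _chunk: &[u8]) {
        self.known_digests.insert(digest);
    }
}

/// Stand-in for the S3 upload call.
fn upload_to_s3(_digest: &[u8; 32], _chunk: &[u8]) -> io::Result<()> {
    Ok(())
}

/// Upload a chunk only if it is not already known to the cache.
fn upload_chunk(cache: &mut LocalCache, digest: [u8; 32], chunk: &[u8]) -> io::Result<bool> {
    if cache.contains(&digest) {
        return Ok(false); // already known: skip both upload and insert
    }
    upload_to_s3(&digest, chunk)?;
    cache.insert(digest, chunk);
    Ok(true) // chunk was actually uploaded
}
```
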
Christian Ebner
299276be19 datastore: add local datastore cache for network attached storages
Use a local datastore as cache, using an LRU cache replacement policy,
for operations on a datastore backed by a network, e.g. by an S3
object store backend. The goal is to reduce the number of requests to
the backend and thereby save costs (monetary as well as time).

Cached chunks are stored in the local datastore cache, which already
contains the datastore's contents metadata (namespace, group,
snapshot, owner, index files, etc.) used to perform fast lookups.
The cache itself only stores chunk digests, not the raw data.
When payload data is required, contents are looked up and read from
the local datastore cache filesystem, with a fallback to fetch from
the backend if the presumably cached entry is not found.

The cacher allows fetching cache items on cache misses via the access
method.

The capacity of the cache is derived from the local datastore cache
filesystem, or the user configured value, whichever is smaller.
The capacity is only set on instantiation of the store, and the
current value is kept as long as the datastore remains cached in the
datastore cache. To change the value, the store either has to be set
to offline mode and back, or the services have to be restarted.

Basic performance tests:

Backup and upload of contents of linux git repository to AWS S3,
snapshots removed in-between each backup run to avoid other chunk reuse
optimization of PBS.

no-cache:
    had to backup 5.069 GiB of 5.069 GiB (compressed 3.718 GiB) in 50.76 s (average 102.258 MiB/s)
empty-cache:
    had to backup 5.069 GiB of 5.069 GiB (compressed 3.718 GiB) in 50.42 s (average 102.945 MiB/s)
all-cached:
    had to backup 5.069 GiB of 5.069 GiB (compressed 3.718 GiB) in 43.78 s (average 118.554 MiB/s)

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
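
The capacity rule ("derived from the local cache filesystem, or the user
configured value, whichever is smaller") could be pictured roughly as
follows; the per-chunk size constant is purely illustrative:

```rust
/// Derive the LRU capacity (number of tracked chunks) from the space
/// available on the cache filesystem and an optional configured value;
/// the smaller of the two wins.
fn cache_capacity(available_bytes: u64, configured: Option<u64>) -> u64 {
    // Assume an upper bound per cached chunk for sizing the
    // digest-tracking LRU (illustrative value, not the real constant).
    const ASSUMED_CHUNK_SIZE: u64 = 16 * 1024 * 1024;

    let derived = available_bytes / ASSUMED_CHUNK_SIZE;
    match configured {
        Some(value) => derived.min(value),
        None => derived,
    }
}

fn main() {
    // e.g. 1 TiB of free space, user caps the cache at 32768 chunks
    let capacity = cache_capacity(1 << 40, Some(32768));
    assert_eq!(capacity, 32768);
    println!("cache capacity: {capacity} chunks");
}
```
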
Christian Ebner
0120e1ac21 tools: async lru cache: implement insert, remove and contains methods
Add methods to insert new cache entries without using the cacher,
remove cache entries given their key, and check if the cache contains
a key, marking it as the most recently used one if it does.

These methods will be used to implement the local datastore cache,
which stores the values (chunks) on the filesystem rather than
keeping them in memory in the cache. The lru cache will only be used
to allow for fast lookups and to keep track of the lookup order.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
8c29e18b8e tools: lru cache: add removed callback for evicted cache nodes
Add a callback function to be executed on evicted cache nodes. The
callback gets the key of the removed node, allowing the caller to act
externally based on that value.

Since the callback might fail, extend the current LRU cache api to
return an error on insert, covering the error for the `removed`
callback.

The async lru cache, call sites and tests are adapted to include the
additional callback parameter accordingly.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
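
A self-contained toy illustrating the combination of the last two commits:
an LRU that tracks only keys, marks a key as most recently used on
`contains`, and hands evicted keys to a fallible `removed` callback whose
error is propagated through `insert`. This is not the actual proxmox LRU
cache API:

```rust
use std::collections::VecDeque;

/// Toy LRU that tracks only keys; the values (chunk files) live on the
/// filesystem, mirroring the digest-only idea from the cache commits.
struct LruKeys<K: PartialEq> {
    order: VecDeque<K>, // front = most recently used
    capacity: usize,
}

impl<K: PartialEq> LruKeys<K> {
    fn new(capacity: usize) -> Self {
        Self { order: VecDeque::new(), capacity }
    }

    /// Insert a key; if the cache overflows, the least recently used
    /// key is evicted and handed to `removed`, whose error (if any)
    /// is propagated to the caller.
    fn insert<E>(
        &mut self,
        key: K,
        mut removed: impl FnMut(&K) -> Result<(), E>,
    ) -> Result<(), E> {
        self.order.retain(|k| *k != key);
        self.order.push_front(key);
        if self.order.len() > self.capacity {
            if let Some(evicted) = self.order.pop_back() {
                // e.g. delete the corresponding cached chunk file
                removed(&evicted)?;
            }
        }
        Ok(())
    }

    /// Check for a key, marking it as most recently used if present.
    fn contains(&mut self, key: &K) -> bool {
        match self.order.iter().position(|k| k == key) {
            Some(pos) => {
                let k = self.order.remove(pos).expect("position is valid");
                self.order.push_front(k);
                true
            }
            None => false,
        }
    }
}
```
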
Christian Ebner
982ef637a1 ui: add s3 client selector and bucket field for s3 backend setup
In order to be able to create datastores with an s3 object store
backend, implement an s3 client selector and expose it in the
datastore edit window, together with an additional bucket name field
to associate with the datastore's s3 backend.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
fbdbda907b ui: expose the s3 client view in the navigation tree
Add an `S3 Clients` item to the navigation tree to allow accessing the
S3 client configuration view and edit windows.

Also adds the required source files to the Makefile.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
c8cd77865b ui: add s3 client view for configuration
Adds the view to configure S3 clients in the Configuration section of
the UI.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
f0a9b12078 ui: add s3 client edit window for configuration create/edit
Adds an edit window for creating or editing S3 client configurations.
Loosely based on the same edit window for the remote configuration.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
cd5b188d71 ui: add datastore type selector and reorganize component layout
In preparation for adding the S3 backed datastore variant to the edit
window, introduce a datastore type selector in order to distinguish
between the creation of regular and removable datastores, instead of
using a checkbox as is currently the case.

This allows more easily expanding to further datastore type variants
while keeping the datastore edit window compact.

Since selecting the type is one of the first steps during datastore
creation, position the component right below the datastore name field
and re-organize the components related to the removable datastore
creation, while keeping additional required components for the S3
backed datastore creation in mind.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
6a880e8a44 datastore: implement garbage collection for s3 backend
Implements the garbage collection for datastores backed by an s3
object store.
Take advantage of the local datastore by placing marker files in the
chunk store during phase 1 of the garbage collection, updating their
atime if already present.
This allows us to avoid making expensive API calls to update object
metadata, which would only be possible via a copy object operation.

Phase 2 is implemented by fetching a list of all the chunks via
the ListObjectsV2 API call, filtered by the chunk folder prefix.
This operation has to be performed in batches of 1000 objects, given
the API's response limit.
For each object key, look up the marker file and decide, based on the
marker's existence and its atime, whether the chunk object needs to
be removed. Deletion happens via the delete objects operation,
allowing multiple chunks to be deleted with a single request.

This allows chunks which are no longer in use to be identified
efficiently, while remaining performant and cost effective.

Baseline runtime performance tests:
-----------------------------------

3 garbage collection runs were performed with hot filesystem caches
(by an additional GC run before the test runs). The PBS instance was
virtualized, with the same virtualized disk using ZFS for all the
local cache stores:

All datastores contained the same encrypted data, with the following
content statistics:
Original data usage: 269.685 GiB
On-Disk usage: 9.018 GiB (3.34%)
On-Disk chunks: 6477
Deduplication factor: 29.90
Average chunk size: 1.426 MiB

The results demonstrate the overhead caused by the additional
ListObjectsV2 API calls and their processing, varying depending on
the object store backend.

Average garbage collection runtime:
Local datastore:             (2.04 ± 0.01) s
Local RADOS gateway (Squid): (3.05 ± 0.01) s
AWS S3:                      (3.05 ± 0.01) s
Cloudflare R2:               (6.71 ± 0.58) s

After pruning of all datastore contents (therefore including
DeleteObjects requests):
Local datastore:              3.04 s
Local RADOS gateway (Squid): 14.08 s
AWS S3:                      13.06 s
Cloudflare R2:               78.21 s

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
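
A schematic of the phase-2 loop described above: list objects under the
chunk prefix in batches, and collect every key whose local marker is
missing or too old into a batched delete. The `ObjectStore` trait is a
hypothetical stand-in for the real S3 client:

```rust
use std::time::SystemTime;

/// Hypothetical, minimal view of the S3 operations phase 2 needs.
trait ObjectStore {
    /// List up to 1000 keys under `prefix`, returning the keys and an
    /// optional continuation token for the next batch.
    fn list_objects(&self, prefix: &str, token: Option<String>) -> (Vec<String>, Option<String>);
    /// Delete a batch of keys with a single request.
    fn delete_objects(&self, keys: &[String]);
}

/// Decide per chunk key whether its local marker proves recent use.
fn marker_is_recent(_key: &str, _cutoff: SystemTime) -> bool {
    // In the real implementation this stats the marker file in the
    // local chunk store and compares its atime against the GC cutoff.
    false
}

fn phase2_sweep<S: ObjectStore>(store: &S, cutoff: SystemTime) {
    let mut token = None;
    loop {
        let (keys, next) = store.list_objects(".chunks/", token);

        // Collect keys without a recent marker and delete them in one go.
        let to_delete: Vec<String> = keys
            .into_iter()
            .filter(|key| !marker_is_recent(key, cutoff))
            .collect();
        if !to_delete.is_empty() {
            store.delete_objects(&to_delete);
        }

        match next {
            Some(t) => token = Some(t),
            None => break,
        }
    }
}
```
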
Christian Ebner
57b47366f7 datastore: get and set owner for s3 store backend
Read or write the ownership information from/to the corresponding
object in the S3 object store. Keep that information available if
the bucket is reused as a datastore.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
6ff078a5a0 datastore: prune groups/snapshots from s3 object store backend
When pruning a backup group or a backup snapshot for a datastore with
an S3 object store backend, remove the associated objects based on
their prefix.

In order to exclude protected contents, add filtering based on the
presence of the protected marker.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
b9a2fa4994 datastore: create/delete protected marker file on s3 storage backend
Commit 8292d3d2 ("api2/admin/datastore: add get/set_protection")
introduced the protected flag for backup snapshots, considering
snapshots as protected based on the presence/absence of the
`.protected` marker file in the corresponding snapshot directory.

To allow independent recovery of a datastore backed by an S3 bucket,
also create/delete the marker file on the object store backend. For
actual checks, still rely on the marker as encountered in the local
cache store.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
5ea28683bb datastore: create namespace marker in s3 backend
The S3 object store only allows storing objects, referenced by their
key. Datastores, however, use directories for backup namespaces, so
these cannot be represented as a one-to-one mapping.

Instead, create an empty marker file for each namespace and operate
based on that.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
64031f24af verify: implement chunk verification for stores with s3 backend
For datastores backed by an S3 compatible object store, rather than
reading the chunks to be verified from the local filesystem, fetch
them via the s3 client from the configured bucket.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
adf21cddd3 verify worker: add datastore backend to verify worker
In order to fetch chunks from an S3 compatible object store,
instantiate and store the s3 client in the verify worker by storing
the datastore's backend. This allows reusing the same instance for
the whole verification task.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
e3ca69adb0 datastore: local chunk reader: read chunks based on backend
Get and store the datastore's backend on local chunk reader
instantiation and fetch chunks, based on the variant, from either the
filesystem or the s3 object store.

By storing the backend variant, the s3 client is instantiated only
once and reused until the local chunk reader instance is dropped.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
4124b6a8be api: reader: fetch chunks based on datastore backend
Read the chunk based on the datastore's backend, reading from the
local filesystem or fetching from the S3 object store.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
daf5d46c7c sync: pull: conditionally upload content to s3 backend
If the datastore is backed by an S3 object store, not only insert the
pulled contents into the local cache store, but also upload them to
the S3 backend.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
a97b237828 api: datastore: conditionally upload client log to s3 backend
If the datastore is backed by an s3 compatible object store, upload
the client log content to the s3 backend before persisting it to the
local cache store.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
352a206578 api: backup: conditionally upload manifest to s3 object store backend
Reupload the manifest to the S3 object store backend on manifest
updates, if s3 is configured as backend.
This also triggers the initial manifest upload when finishing a
backup snapshot in the backup api call handler.
Also update the locally cached version for fast and efficient
listing of contents without the need to perform expensive (as in
monetary cost and IO latency) requests.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
9d66f486a4 api: backup: conditionally upload indices to s3 object store backend
If the datastore is backed by an S3 compatible object store, upload
the dynamic or fixed index files to the object store after closing
them. The local index files are kept in the local caching datastore
to allow for fast and efficient content lookups, avoiding expensive
(as in monetary cost and IO latency) requests.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
c9bd69a158 api: backup: conditionally upload blobs to s3 object store backend
Upload blobs to both the local datastore cache and the S3 object
store, if s3 is configured as backend.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
b84aad3660 api: backup: conditionally upload chunks to s3 object store backend
Upload fixed and dynamic sized chunks to either the filesystem or
the S3 object store, depending on the configured backend.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
62b932a874 api: backup: store datastore backend in runtime environment
Get and store the datastore's backend during creation of the backup
runtime environment and upload the chunks to the local filesystem or
s3 object store based on the backend variant.

By storing the backend variant in the environment, the s3 client is
instantiated only once and reused for all api calls in the same
backup http/2 connection.

Refactor the upgrade method by moving all logic into the async block,
such that the now possible error on backup environment creation gets
propagated to the thread spawn call site.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
098ab91bd9 datastore: allow to get the backend for a datastore
Implements an enum with variants Filesystem and S3 to distinguish
between the available backends. Filesystem is used as the default if
no backend is configured in the datastore's configuration. If the
datastore has an s3 backend configured, the backend method will
instantiate an s3 client and return it with the S3 variant.

This allows instantiating the client once, keeping and reusing the
same open connection to the api for the lifetime of a task or job,
e.g. in the backup writer/reader runtime environment.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
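
Schematically, such a backend enum and accessor might look like the
following; `S3Client` is a placeholder type and the config lookup is
simplified to a single optional bucket name:

```rust
use std::sync::Arc;

/// Placeholder for the actual S3 client type.
struct S3Client {
    bucket: String,
}

/// The two backend variants a datastore can have.
enum DatastoreBackend {
    Filesystem,
    S3(Arc<S3Client>),
}

struct DataStore {
    /// `None` means no backend entry in the datastore config.
    s3_bucket: Option<String>,
}

impl DataStore {
    /// Instantiate the client once; callers hold on to the returned
    /// value for the lifetime of their task or job.
    fn backend(&self) -> DatastoreBackend {
        match &self.s3_bucket {
            Some(bucket) => DatastoreBackend::S3(Arc::new(S3Client {
                bucket: bucket.clone(),
            })),
            None => DatastoreBackend::Filesystem,
        }
    }
}
```
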
Christian Ebner
41e1cbd2b8 api/cli: add endpoint and command to check s3 client connection
Adds a dedicated api endpoint and a proxmox-backup-manager command to
check if the configured S3 client can reach the bucket.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
d07ccde395 api: datastore: check s3 backend bucket access on datastore create
Check if the configured S3 object store backend can be reached and
the provided secrets have the permissions to access the bucket.

Perform the check before creating the chunk store, so it is not left
behind if the bucket cannot be reached.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
e8a1971647 api: config: implement endpoints to manipulate and list s3 configs
Allows creating, listing, modifying and deleting configurations for
s3 clients via the api.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
690c6441da config: introduce s3 object store client configuration
Adds the client configuration for s3 object stores as dedicated
configuration files.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
Christian Ebner
aeb4ff4992 datastore: add helpers for path/digest to s3 object key conversion
Adds helper methods to generate the s3 object keys, given a relative
path and filename for datastore contents, or a digest in case of
chunk files.

Regular datastore contents are stored by grouping them with a content
prefix in the object key. In order to keep the object key length
small, given the max limit of 1024 bytes [0], `.cnt` is used as the
content prefix. Chunks, on the other hand, are prefixed by `.chunks`,
same as on regular datastores.

The prefix allows for selective listing of either contents or chunks
by providing the prefix to the respective api calls.

[0] https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-keys.html

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 21:43:43 +02:00
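
The key scheme described above (a short `.cnt` prefix for regular contents,
`.chunks` plus the hex digest for chunks) can be pictured like this; the
four-digit digest grouping is an assumption carried over from the local
chunk store layout:

```rust
/// Build the object key for a regular datastore content file, e.g. an
/// index or manifest, under the short `.cnt` content prefix.
fn content_object_key(relative_path: &str, file_name: &str) -> String {
    format!(".cnt/{relative_path}/{file_name}")
}

/// Build the object key for a chunk from its digest, under `.chunks`.
fn chunk_object_key(digest: &[u8; 32]) -> String {
    let hex: String = digest.iter().map(|b| format!("{b:02x}")).collect();
    // Group by the first four hex digits, like the local chunk store
    // does on the filesystem (assumption for this sketch).
    format!(".chunks/{}/{}", &hex[..4], hex)
}

fn main() {
    println!("{}", content_object_key("ns1/vm/100/2025-07-22T10:00:00Z", "index.json.blob"));
    println!("{}", chunk_object_key(&[0u8; 32]));
}
```
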
Dominik Csapak
096505eaf7 tape: skip setting encryption if we can't and don't want to
Some settings on changers prevents changing the encryption parameters
via the application, e.g. some libraries have a 'encryption disabled' or
'encryption is library managed' option. While the former situation can
be fixed by setting the library to 'application managed', the latter is
sometimes necessary for FIPS compliance (to ensure the tape data is
encrypted).

When libraries are configured this way, the code currently fails with
'drive does not support AES-GCM encryption'. Instead of failing, check
on first call to set_encryption if we could set it, and save that
result.

Only fail when encryption is to be enabled but it is not allowed, but
ignore the error when the backup should be done unencrypted.

`assert_encryption_mode` must also check if it's possible, and skip any
error if it's not possible and we wanted no encryption.

With these changes, it should be possible to use such configured libraries
when there is no encryption configured on the PBS side. (We currently
don't have a library with such capabilities to test.)

Note that in contrast to normal operation, the tape label will also be
encrypted then and will not be readable in case the encryption key is
lost or changed.

Additionally, return an error for 'drive_set_encryption' in case the
drive reports that it does not support hardware encryption, because this
is now already caught one level above in 'set_encryption'.

Also, slightly change the error message to make it clear that the drive
does not support *setting* encryption, not that it does not support
it at all.

This was reported in the community forum:

https://forum.proxmox.com/threads/107383/
https://forum.proxmox.com/threads/164941/

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250416070703.493585-1-d.csapak@proxmox.com
2025-07-22 19:16:44 +02:00
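
The decision logic described above (probe once on the first call, remember
the result, and only fail when encryption is actually required) can be
summarized as follows; the names and types are illustrative, not the real
tape driver code:

```rust
/// Illustrative state kept by a drive handle; not the real tape API.
struct DriveHandle {
    /// `None` until the first set_encryption call probed the drive.
    can_set_encryption: Option<bool>,
}

impl DriveHandle {
    /// Probe whether the library/changer lets the application change
    /// encryption parameters at all (stub for the real SCSI-level check).
    fn probe_set_encryption(&self) -> bool {
        false
    }

    /// Set (or clear) encryption, tolerating libraries that do not let
    /// the application change it, as long as no encryption was
    /// requested on the PBS side.
    fn set_encryption(&mut self, encryption_requested: bool) -> Result<(), String> {
        // Probe once, on the first call, and remember the result.
        if self.can_set_encryption.is_none() {
            let probed = self.probe_set_encryption();
            self.can_set_encryption = Some(probed);
        }

        match (self.can_set_encryption.unwrap(), encryption_requested) {
            // setting is possible: actually issue the command (stubbed here)
            (true, _) => Ok(()),
            // setting is not possible, but no encryption was wanted: ignore
            (false, false) => Ok(()),
            // setting is not possible and encryption was requested: fail
            (false, true) => Err("drive does not support setting AES-GCM encryption".to_string()),
        }
    }
}
```
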
Thomas Lamprecht
04c6015676 api: node system services: postfix is again a non-templated systemd unit
Since postfix (3.9.1-7), the templated postfix@- unit is gone again
and the non-templated postfix.service is back, so cope with that here.

This mirrors commit 21a6ed782 from pve-manager

Closes: #6537
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-22 10:45:11 +02:00