Commit 3cc3c10d ("datastore: mark store as in-use by setting marker
on s3 backend") introduced the marker object on datastores used by
another instance. The check was, however, flawed, as it made the local
chunk store creation dependent on the s3 client instantiation.
Therefore, factor out the DatastoreBackendType determination instead,
use that for the check, and never assume the local cache store to
be pre-existing.
Also, since contents from the s3 store are refreshed anyway, local
contents in the cache store will be removed, except for chunks, which
are now cleaned up on creation.
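A rough sketch of the reworked creation flow; all names, signatures
and the config shape here are illustrative stand-ins, not the actual
proxmox-backup code:

    #[derive(PartialEq)]
    enum DatastoreBackendType { Filesystem, S3 }

    // factored out: derive the backend type from the configuration
    // alone, without instantiating an s3 client
    fn determine_backend_type(backend: Option<&str>) -> DatastoreBackendType {
        match backend {
            Some("s3") => DatastoreBackendType::S3,
            _ => DatastoreBackendType::Filesystem,
        }
    }

    fn create_datastore(backend: Option<&str>) -> Result<(), std::io::Error> {
        if determine_backend_type(backend) == DatastoreBackendType::S3 {
            // the in-use marker check only applies to s3 backed stores
        }
        // never assume the local cache store exists: always create it
        std::fs::create_dir_all("/path/to/chunkstore")?;
        Ok(())
    }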
Fixes: 3cc3c10d ("datastore: mark store as in-use by setting marker on s3 backend")
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250724080233.282783-1-c.ebner@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
namely:
* backup to tape from s3 (including configuring such a job)
* restore to s3 from tape
It currently does not work, and it probably does not make sense to
allow it at all, for several reasons:
* both are designed to be 'off-site', so copying data from one off-site
location to another directly does not make sense most of the time
* (modern) tape operations can reach relatively high speeds (> 300 MB/s),
and up-/downloading to a (most likely remote) s3 storage will slow
down the tape
Note that we could make the check in the restore case more efficient
(since we already have the parsed DataStore struct), but this is done
only once per tape restore operation, and most of the time there
aren't that many datastores involved, so the extra runtime cost is
probably not that bad versus having multiple code paths for the error.
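The added check boils down to something like the following sketch
(names are assumptions for illustration, not the exact code):

    fn assert_not_s3_backed(store: &str, is_s3_backed: bool) -> Result<(), String> {
        if is_s3_backed {
            return Err(format!(
                "datastore '{store}' is backed by an s3 object store, \
                 which is not supported for tape backup/restore"
            ));
        }
        Ok(())
    }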
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250723143152.3829064-1-d.csapak@proxmox.com
this new flow returns HttpOnly cookies, providing an additional layer
of security for clients operating in a browser environment. it is
opt-in only, to not break existing clients.
most of the new protections were implemented by a previous series that
adapted proxmox-auth-api and related crates [1]. this just enables
clients of the api to opt into these protections.
[1]:
https://lore.proxmox.com/pdm-devel/20250304144247.231089-1-s.sterz@proxmox.com/T/#u
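for illustration, a response in the new flow would carry the ticket in
a cookie along these lines (the cookie name is a placeholder here,
attribute semantics per RFC 6265):

    Set-Cookie: <cookie-name>=<ticket>; Secure; HttpOnly; SameSite=Lax; Path=/

the HttpOnly attribute keeps the ticket out of reach of scripts running
in the page (e.g. via document.cookie), which is what provides the
additional protection.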
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Tested-by: Mira Limbeck <m.limbeck@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.proxmox.com/20250723151356.264229-7-s.sterz@proxmox.com
Use the API instead of running uuid_mount/mount directly in the CLI binary.
This ensures that all triggered tasks are handled by the proxy process.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Link: https://lore.proxmox.com/20250721113314.59342-6-h.laimer@proxmox.com
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Ensure sync jobs are triggered only when the datastore is actually
mounted. If the datastore is already mounted, we don't fail,
but sync jobs should not be re-triggered unnecessarily. This change
prevents redundant sync job execution.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Link: https://lore.proxmox.com/20250721113314.59342-5-h.laimer@proxmox.com
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When a datastore is mounted, spawn a new task to run all sync jobs
marked with `run-on-mount`. These jobs run sequentially and include
any job for which the mounted datastore is:
- The source or target in a local pull job
- The source in a push job to a remote datastore
- The target in a pull job from a remote datastore
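A sketch of this selection; the struct and field names are assumed for
illustration and do not mirror the actual SyncJobConfig:

    struct SyncJob {
        store: String,          // the job's local datastore
        remote: Option<String>, // Some(_) when a remote is involved
        remote_store: String,   // the store on the other side
    }

    fn runs_on_mount_of(job: &SyncJob, mounted: &str) -> bool {
        // local pull job: the mounted store may be the target (store) or
        // the source (remote_store with no remote configured); push/pull
        // with a remote: only the local side (store) can be the mounted one
        job.store == mounted
            || (job.remote.is_none() && job.remote_store == mounted)
    }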
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Link: https://lore.proxmox.com/20250721113314.59342-4-h.laimer@proxmox.com
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Sets or clears the run-on-mount flag in sync job configs, removing the
optional value from the config if requested for deletion via the api call.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Link: https://lore.proxmox.com/20250721113314.59342-3-h.laimer@proxmox.com
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This gives immediate feedback to the caller, who is then not required
to re-enter all of the datastore configuration if the bucket cannot
be accessed.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In order to not return the secret key as part of the s3 endpoint
config, split the config into different structs depending on the
use case: either the plain config without id and secret_key, the
struct with id and plain config, or the combined variant with all
3 fields present.
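A rough sketch of the split; the struct and field names here are
illustrative, not the actual types:

    struct S3ClientConfig {
        // plain config, no credentials
        endpoint: String,
        port: Option<u16>,
    }

    struct S3ClientConfigWithId {
        // id plus plain config
        id: String,
        config: S3ClientConfig,
    }

    struct S3ClientConfigWithSecrets {
        // the combined variant with all 3 fields
        id: String,
        secret_key: String,
        config: S3ClientConfig,
    }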
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In order to be consistent with the UI, and thereby reduce possible
confusion, where the naming was changed from `client` to `endpoint`
as well.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Datastores backed by an s3 object store mark the corresponding bucket
prefix given by the datastore name as in-use to protect from
accidental reuse of the same datastore from other instances.
If the datastore has to be re-created because the Proxmox Backup
Server instance is no longer available, skipping the check and
overwriting the marker with the current hostname is necessary.
Add this flag to the datastore create api endpoint and expose it via
the web ui and the cli command.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Instead of relying on the user to manually trigger the refresh after
datastore creation, do it automatically as part of the datastore
creation task, thereby improving ergonomics.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds an in-use marker on the S3 store to protect from accidental reuse
of the same datastore by multiple Proxmox Backup Server instances. Set
the marker file on store creation.
The local cache folder, however, is always assumed to be empty and
needs to be created on datastore creation to guarantee consistency.
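Conceptually, the marker is a small object below the bucket prefix
recording which host uses the store; a sketch of the creation-time
logic (key layout and names are assumptions):

    fn check_and_set_marker(existing: Option<&str>, hostname: &str) -> Result<String, String> {
        match existing {
            Some(other) if other != hostname => Err(format!(
                "bucket prefix already in use by host '{other}'"
            )),
            // no marker yet (or our own): claim it with our hostname
            _ => Ok(hostname.to_string()),
        }
    }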
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It is currently not possible to create a new datastore config and reuse
an existing datastore. Expose the `reuse-datastore` flag also for the
proxmox-backup-manager command, equivalent to what is already exposed in
the WebUI.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Implement and expose the proxmox-backup-manager commands to interact
with the s3 client configuration.
This mostly requires inserting the commands into the cli command map
and binding them to the corresponding api methods. The list method is
the only exception, as it requires rendering the output in the
provided output format.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since commit b18eab64 ("fix #5982: garbage collection: check atime
updates are honored"), the 4 MiB fixed-size, unencrypted and
compressed chunk containing all zeros is inserted at datastore
creation if the atime safety check is enabled.
If the datastore is backed by an S3 object store, chunk uploads are
avoided by checking for the presence of the chunks in the local cache
store. The all-zero chunk will therefore not be uploaded, since it
has already been inserted locally.
Fix this by conditionally uploading the chunk before performing the
atime update check for datastores backed by S3.
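The shape of the fix, as a sketch with assumed names:

    fn ensure_zero_chunk_uploaded(
        is_s3_backed: bool,
        chunk_in_local_cache: bool,
        upload: impl FnOnce() -> Result<(), String>,
    ) -> Result<(), String> {
        // presence in the local cache would normally skip the upload,
        // so trigger it explicitly before the atime check runs
        if is_s3_backed && chunk_in_local_cache {
            upload()?;
        }
        Ok(())
    }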
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allow manually triggering an s3 refresh via proxmox-backup-manager by
calling the corresponding api endpoint handler.
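With this, the refresh can be kicked off from the command line, e.g.
via something like `proxmox-backup-manager datastore s3-refresh
<store>` (the exact subcommand layout follows the cli command map).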
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allow easily refreshing the contents of the local cache store for
datastores backed by an S3 object store.
In order to guarantee that no read or write operations are ongoing,
the store is first set into the maintenance mode `S3Refresh`. Objects
are then fetched into a temporary directory, to avoid losing contents
and consistency in case of an error. Once all objects have been
fetched, the existing contents are cleared out and the newly fetched
contents are moved into place.
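A minimal sketch of the fetch-into-temp-then-swap pattern (paths and
error handling are simplified assumptions):

    use std::fs;
    use std::path::Path;

    fn swap_in_refreshed(tmp: &Path, current: &Path, old: &Path) -> std::io::Result<()> {
        // only reached once *all* objects were fetched into `tmp`
        if current.exists() {
            fs::rename(current, old)?; // clear out the existing contents
        }
        fs::rename(tmp, current)?;     // move fetched contents into place
        fs::remove_dir_all(old).ok();  // best-effort cleanup
        Ok(())
    }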
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds the `no-cache` flag so the client can request to bypass the
local datastore cache for chunk uploads. This is mainly intended for
debugging and benchmarking, but can be used in cases where the caching
is known to be ineffective (no deduplication possible).
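For example, a benchmark-style run bypassing the cache could then be
invoked along the lines of `proxmox-backup-client backup
root.pxar:/ --no-cache` (invocation sketched for illustration).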
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Introduce a BackupWriterOptions struct, bundling the currently
present writer start parameters in order to limit their number and
make them more easily extensible.
No functional changes intended.
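The pattern, sketched with illustrative fields:

    struct BackupWriterOptions<'a> {
        datastore: &'a str,
        backup_id: &'a str,
        debug: bool,
        // further flags can be added here without touching every call site
    }

    fn start_writer(opts: BackupWriterOptions<'_>) {
        let _ = (opts.datastore, opts.backup_id, opts.debug);
    }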
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Take advantage of the local datastore filesystem cache for datastores
backed by an s3 object store, in order to reduce the number of
requests and latency, and to increase throughput.
Reducing the number of requests is also cost-beneficial for S3 object
stores that charge for fetching objects.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Take advantage of the local datastore cache to avoid re-uploading
already known chunks. This not only helps improve the backup/upload
speeds, but also avoids additional costs by reducing the number of
requests and the payload data transferred to the S3 object store api.
If the cache is present, look up whether it contains the chunk,
skipping the upload altogether if it does. Otherwise, load the chunk
into memory, upload it to the S3 object store api and insert it into
the local datastore cache.
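The upload path then roughly follows this sketch (names assumed):

    fn handle_chunk(
        digest: [u8; 32],
        cache_contains: impl Fn(&[u8; 32]) -> bool,
        upload_and_insert: impl FnOnce(&[u8; 32]) -> Result<(), String>,
    ) -> Result<(), String> {
        if cache_contains(&digest) {
            return Ok(()); // already on the backend, skip the re-upload
        }
        // load the chunk, upload it to s3 and insert it into the cache
        upload_and_insert(&digest)
    }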
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When pruning a backup group or a backup snapshot for a datastore with
an S3 object store backend, remove the associated objects based on
their prefix.
In order to exclude protected contents, filter based on the presence
of the protected marker.
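Roughly, the filtering amounts to the following sketch (the marker
name and key handling are assumptions):

    fn keys_to_delete(snapshot_keys: &[String], protected_marker: &str) -> Vec<String> {
        // if any object below the snapshot prefix is the protected
        // marker, keep the whole snapshot
        if snapshot_keys.iter().any(|k| k.ends_with(protected_marker)) {
            Vec::new()
        } else {
            snapshot_keys.to_vec()
        }
    }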
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
For datastores backed by an S3 compatible object store, rather than
reading the chunks to be verified from the local filesystem, fetch
them via the s3 client from the configured bucket.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In order to fetch chunks from an S3 compatible object store,
instantiate and store the s3 client in the verify worker by storing
the datastore's backend. This allows reusing the same instance for
the whole verification task.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Get and store the datastore's backend on local chunk reader
instantiation and fetch chunks based on the variant from either the
filesystem or the s3 object store.
By storing the backend variant, the s3 client is instantiated only
once and reused until the local chunk reader instance is dropped.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Read the chunk based on the datastore's backend, reading from the
local filesystem or fetching from the S3 object store.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If the datastore is backed by an S3 object store, not only insert the
pulled contents into the local cache store, but also upload them to
the S3 backend.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If the datastore is backed by an s3 compatible object store, upload
the client log content to the s3 backend before persisting it to the
local cache store.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Reupload the manifest to the S3 object store backend on manifest
updates, if s3 is configured as backend.
This also triggers the initial manifest upload when finishing a
backup snapshot in the backup api call handler.
Also update the locally cached version, for fast and efficient
listing of contents without the need to perform expensive (as in
monetary cost and IO latency) requests.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If the datastore is backed by an S3 compatible object store, upload
the dynamic or fixed index files to the object store after closing
them. The local index files are kept in the local caching datastore
to allow for fast and efficient content lookups, avoiding expensive
(as in monetary cost and IO latency) requests.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Upload blobs to both the local datastore cache and the S3 object
store, if s3 is configured as backend.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Upload fixed- and dynamic-sized chunks to either the filesystem or
the S3 object store, depending on the configured backend.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Get and store the datastore's backend during creation of the backup
runtime environment and upload the chunks to the local filesystem or
s3 object store based on the backend variant.
By storing the backend variant in the environment, the s3 client is
instantiated only once and reused for all api calls in the same
backup http/2 connection.
Refactor the upgrade method by moving all logic into the async block,
such that the now possible error on backup environment creation gets
propagated to the thread spawn call site.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds a dedicated api endpoint and a proxmox-backup-manager command to
check if the configured S3 client can reach the bucket.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Check if the configured S3 object store backend can be reached and
the provided secrets have the permissions to access the bucket.
Perform the check before creating the chunk store, so it is not left
behind if the bucket cannot be reached.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allow creating, listing, modifying and deleting configurations for s3
clients via the api.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Some settings on changers prevent changing the encryption parameters
via the application, e.g. some libraries have an 'encryption disabled'
or an 'encryption is library managed' option. While the former
situation can be fixed by setting the library to 'application
managed', the latter is sometimes necessary for FIPS compliance (to
ensure the tape data is encrypted).
When libraries are configured this way, the code currently fails with
'drive does not support AES-GCM encryption'. Instead of failing, check
on the first call to set_encryption whether we can set it, and save
that result.
Only fail when encryption is to be enabled but not allowed; ignore
the error when the backup should be done unencrypted.
`assert_encryption_mode` must also check if setting encryption is
possible, and skip any error if it is not possible and no encryption
was wanted.
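A sketch of the check-once-and-remember logic; the names here are
illustrative, not the actual drive code:

    struct DriveState {
        encryption_possible: Option<bool>, // None until first probed
    }

    fn set_encryption(
        state: &mut DriveState,
        encryption_wanted: bool,
        probe: impl FnOnce() -> bool,
    ) -> Result<(), String> {
        // probe only on the first call and save the result
        let possible = *state.encryption_possible.get_or_insert_with(probe);
        if encryption_wanted && !possible {
            return Err("drive does not support setting encryption".to_string());
        }
        // unencrypted backups proceed even if encryption cannot be set
        Ok(())
    }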
With these changes, it should be possible to use libraries configured
this way when there is no encryption configured on the PBS side. (We
currently don't have a library with such capabilities to test.)
Note that in contrast to normal operation, the tape label will also be
encrypted then and will not be readable in case the encryption key is
lost or changed.
Additionally, return an error for 'drive_set_encryption' in case the
drive reports that it does not support hardware encryption, because this
is now already caught one level above in 'set_encryption'.
Also, slightly change the error message to make it clear that the drive
does not support *setting* encryption, not that it does not support
it at all.
This was reported in the community forum:
https://forum.proxmox.com/threads/107383/
https://forum.proxmox.com/threads/164941/
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250416070703.493585-1-d.csapak@proxmox.com
Since postfix (3.9.1-7), the templated postfix@- unit is gone again
and the non-templated postfix.service is back, so cope with that
here.
This mirrors commit 21a6ed782 from pve-manager
Closes: #6537
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This one migrates any datastore or tape backup job that relied on the
old default (legacy-sendmail) to an explicit setting of
legacy-sendmail. This allows us to change the default without
changing behavior for anybody.
This new command is intended to be called by d/postinst on upgrade to
the package version which introduces the new default value for
'notification-mode'.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-4-l.wagner@proxmox.com
The new subcommand is introduced so that we have a common namespace
for any config migration tasks which are triggered by d/postinst (or
potentially run by hand).
No functional changes.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-3-l.wagner@proxmox.com
Copied over pbs2to3 as a base and did minimal adaptations to the
expected code names and the package and kernel versions; might need
more work though.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Instead of passing the VerifyWorker state as a reference to the
various verification related functions, implement them as methods or
associated functions of the VerifyWorker. This not only makes their
correlation clearer, but also reduces the number of function call
parameters and improves readability.
No functional changes intended.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250703131837.786811-8-c.ebner@proxmox.com
similar to the other changes:
- Body to Incoming or proxmox-http's Body
- use adapters between hyper<->tower and hyper<->tokio
- adapt to new proxmox-rest-server interfaces
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Move the helper function to get a backup group's notes file path and
make it a `DataStore` method instead. This allows reusing it when
access to the notes path is required from the datastore itself.
Further, use the plural `notes` wording in the helper as well, to be
consistent with the rest of the codebase.
In preparation for correctly removing the notes file from the backup
group on destruction.
No functional changes intended.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
the below commit accidentally switched this lock to an exclusive lock
when it should just be a shared one, as that is sufficient for a
reader:
e2c1866b: datastore/api/backup: prepare for fix of #3935 by adding
lock helpers
this has already caused failed backups for a user with a sync job that
runs while they are trying to create a new backup.
https://forum.proxmox.com/threads/165038
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>