set the name of the auth cookie when configuring apis to allow the
rest server to remove invalid tickets on 401 requests.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250725112357.247866-5-s.sterz@proxmox.com
In preparation for actual ngettext support for the ExtJS based UIs we
need a no-op method for the native language (English).
It might get improved but for now it's just important to make it
available.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The s3 check performs more than just listing contents, so it makes
more sense to define this using the PUT method instead, following
common REST API practice.
This further allows implementing a list-bucket method with a better
fitting GET method.
Note: This is a breaking api change, however the only internal call
site currently is `proxmox-backup-manager s3 check` and no official
release is based on the current state.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250724062501.68384-1-c.ebner@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The proxmox-backup-client benchmark command unconditionally set the
no-cache flag to true. This is however not backwards compatible, so
expose it as an additional cli flag instead, so the user can enable
it when benchmarking an S3 backend, while defaulting to false.
Reported-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250724090039.443454-2-c.ebner@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Commit 3cc3c10d ("datastore: mark store as in-use by setting marker
on s3 backend") introduced the marker object on datastores used by
another instance. The check was however flawed, as it made the local
chunk store creation dependent on the s3 client instantiation.
Therefore, factor out the DatastoreBackendType determination instead,
use that for the check and never assume the local cache store to be
pre-existing.
Also, since contents from the s3 store are refreshed anyway, local
contents in the cache store will be removed, except for chunks, which
are now cleaned up on create.
Fixes: 3cc3c10d ("datastore: mark store as in-use by setting marker on s3 backend")
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250724080233.282783-1-c.ebner@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Mostly affects docs and some JS UI components, but also changes the
section type name of the s3 client endpoints.
While the s3 client crate is aptly named, the config actually
describing how to access an S3 object storage is not really a client,
but a definition of an endpoint/remote/repo/address.
This is really no problem per se, but such internal names tend to leak
and can cause (a tiny bit of!) confusion for users if they see e.g.
"S3 Endpoints" in the UI while the same thing uses "s3client" in the
config file.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
namely:
* backup to tape from s3 (including configuring such a job)
* restore to s3 from tape
It does not work currently, but it probably does not make sense to allow
that at all for several reasons:
* both are designed to be 'off-site', so copying data from one off-site
location to another directly does not make sense most of the time
* (modern) tape operations can reach relatively high speeds (> 300MB/s)
and up/downloading to a (most likely remote) s3 storage will slow
down the tape
Note that we could make the check in the restore case more efficient
(since we already have the parsed DataStore struct), but this is done
only once for each tape restore operation and most of the time there
aren't that many datastores involved, so the extra runtime cost is
probably not that bad vs. having multiple code paths for the error.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250723143152.3829064-1-d.csapak@proxmox.com
in case we decide to make the HttpOnly flow opt-out or remove the
previous authentication flow entirely, this prepares the client to
properly authenticate against such servers as well
this does not opt the client into the new flow, as that has no real
security benefits. however, doing so would require additional network
traffic and/or state handling on the client to maintain backward
compatibility. this would be rather convoluted. hence, avoid doing so
for now.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Tested-by: Mira Limbeck <m.limbeck@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.proxmox.com/20250723151356.264229-11-s.sterz@proxmox.com
this should add additional protection against cookie stealing and xss
attacks. it also makes it harder to overwrite the cookie from
malicious subdomains.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Tested-by: Mira Limbeck <m.limbeck@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.proxmox.com/20250723151356.264229-9-s.sterz@proxmox.com
this new flow returns HttpOnly cookies providing an additional layer
of security for clients operating in a browser environment. opt-in
only to not break existing clients.
most of the new protections were implemented by a previous series that
adapted proxmox-auth-api and related crates [1]. this just enables
clients of the api to opt into these protections.
[1]:
https://lore.proxmox.com/pdm-devel/20250304144247.231089-1-s.sterz@proxmox.com/T/#u
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Tested-by: Mira Limbeck <m.limbeck@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.proxmox.com/20250723151356.264229-7-s.sterz@proxmox.com
The latest updates to the backup-job UI completely drop the term
"Notification System" from the UI; instead we now use "Global
notification settings", which should hopefully be a bit clearer to
users with regards to what this actually means.
Some of the touched sections were slightly rephrased to improve clarity.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Michael Köppl <m.koeppl@proxmox.com>
Link: https://lore.proxmox.com/20250723112220.278700-1-l.wagner@proxmox.com
Commit 90723828 ("api: backup: add no-cache flag to bypass local
datastore cache") introduced the additional flag to request bypassing
of the datastore cache by the Proxmox Backup Server.
The flag is however included in the backup api request parameters,
which is incompatible with older versions of the server.
Fix this by only setting the flag if explicitly requested on
invocation, as it is then not included for requests to older servers,
and for newer ones the default is false if not present anyway.
Fixes: 90723828 ("api: backup: add no-cache flag to bypass local datastore cache")
Reported-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250723115015.711300-1-c.ebner@proxmox.com
They do nothing as a standalone command and would only ever make sense
in combination as `command-that-can-fail || true`, but for these
situations it's almost always better to output an error message as
`command-that-can-fail || echo "..."` instead, which we already do.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
To reduce friction, provide several provider specific example
configurations as reference.
With vhost style vs. path style bucket addressing, templating and all
the other provider specific configuration options, it can be rather
confusing how to actually configure an S3 endpoint to be used as
PBS datastore backend. So having some concrete examples to look up or
point to can help.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250723080231.189207-3-c.ebner@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
S3 object store providers typically charge not only for storage
usage, but also for API requests. Explicitly warn the user about
this in the docs.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250723080231.189207-2-c.ebner@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This is far from being experimental, but relaying that it's rather
new and might have some rough edges won't hurt.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Link: https://lore.proxmox.com/20250721113314.59342-9-h.laimer@proxmox.com
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This task will complete after the sync jobs triggered by it have been
completed. Given that, the task title has been chosen to reflect this,
as it will show up in the task log after the sync job tasks.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Link: https://lore.proxmox.com/20250721113314.59342-8-h.laimer@proxmox.com
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Exposes the `run-on-mount` flag in the advanced options of the sync job
edit window, allowing to set or clear it from the corresponding config.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Link: https://lore.proxmox.com/20250721113314.59342-7-h.laimer@proxmox.com
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Use the API instead of running uuid_mount/mount directly in the CLI binary.
This ensures that all triggered tasks are handled by the proxy process.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Link: https://lore.proxmox.com/20250721113314.59342-6-h.laimer@proxmox.com
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Ensure sync jobs are triggered only when the datastore is actually
mounted. If the datastore is already mounted, we don't fail,
but sync jobs should not be re-triggered unnecessarily. This change
prevents redundant sync job execution.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Link: https://lore.proxmox.com/20250721113314.59342-5-h.laimer@proxmox.com
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When a datastore is mounted, spawn a new task to run all sync jobs
marked with `run-on-mount`. These jobs run sequentially and include
any job for which the mounted datastore is:
- The source or target in a local pull job
- The source in a push job to a remote datastore
- The target in a pull job from a remote datastore
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Link: https://lore.proxmox.com/20250721113314.59342-4-h.laimer@proxmox.com
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Sets or clears the run-on-mount flag in sync job configs, removing the
optional value from the config if requested for deletion via the api call.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Link: https://lore.proxmox.com/20250721113314.59342-3-h.laimer@proxmox.com
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
As otherwise one wonders what one needs to do or select to enable
that 'More' menu for a "normal" local datastore; rather just hide it
when it can never be used anyway.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
While I originally suggested it, avoid moving this into the remote
panel as a sub-tab for now; rather just rename it to S3 Endpoints for
good visual differentiation, and as the term endpoints is widely used
in the S3 world anyway FWICT.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In order to give immediate feedback to the caller, so it is not
required to re-enter all the datastore configuration if the bucket
cannot be accessed.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In order to not return the secret key as part of the s3 endpoint
config, split the config into different structs depending on the
use case: either the plain config without id and secret_key, the
struct with id and plain config, or the combined variant with all 3
fields present.
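A rough sketch of how such a split can look; the struct and field
names below are illustrative placeholders, not necessarily the actual
types used in the codebase:
```
/// Plain endpoint settings, without identifying or secret fields.
pub struct S3ClientConfig {
    /* endpoint, port, region, ... */
}

/// Variant including the id, e.g. for list/read api responses.
pub struct S3ClientConfigWithId {
    pub id: String,
    pub config: S3ClientConfig,
}

/// Combined variant with all 3 parts; only used where the secret is
/// actually needed, never returned by the read api.
pub struct S3ClientConfigWithSecrets {
    pub id: String,
    pub secret_key: String,
    pub config: S3ClientConfig,
}
```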
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since S3 object store providers might show the endpoint with the https
scheme prefix, allow this as valid user input for the ui, but strip
it when sending the value.
This reduces friction for the user when copy/pasting the value.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The flag only makes sense in combination with the reuse-existing
datastore flag, which is unchecked by default. Therefore, disable the
overwrite-in-use flag while that remains unchecked and hide it if it
is not an s3 datastore.
Reported-by: Lukas Wagner <l.wagner@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The WebUI and CLI have been adapted to use s3 endpoint rather than S3
client, so update the documentation to be consistent.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In order to be consistent with the UI and thereby reduce possible
confusion, where the naming was changed from `client` to `endpoint`
as well.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
To distinguish the actual client from the endpoint configuration,
refer to the endpoint configuration and secrets as `S3 Endpoint`.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Describe required basic S3 client setup and possible configuration
options as well as the actual setup of a datastore using the client and
a bucket as backend.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Datastores backed by an s3 object store mark the corresponding bucket
prefix given by the datastore name as in-use to protect from
accidental reuse of the same datastore by other instances.
If the datastore has to be re-created because the Proxmox Backup
Server instance is no longer available, skipping the check and
overwriting the marker with the current hostname is necessary.
Expose this flag on the datastore create api endpoint as well as in
the web ui and cli command.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Instead of relying on the user to manually trigger the refresh after
datastore creation, do it already automatically in the datastore
creation task, thereby improving ergonomics.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds an in-use marker on the S3 store to protect from accidental reuse
of the same datastore by multiple Proxmox Backup Server instances. Set
the marker file on store creation.
The local cache folder is however always assumed to be empty and needs
to be created on datastore creation to guarantee consistency.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It is currently not possible to create a new datastore config and reuse
an existing datastore. Expose the `reuse-datastore` flag also for the
proxmox-backup-manager command, equivalent to what is already exposed in
the WebUI.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Implement and expose the proxmox-backup-manager commands to interact
with the s3 client configuration.
This mostly requires inserting the commands into the cli command map
and binding them to the corresponding api methods. The list method is
the only
exception, as it requires rendering of the output given the provided
output format.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since commit b18eab64 ("fix #5982: garbage collection: check atime
updates are honored"), the 4 MiB fixed sized, unencypted and
compressed chunk containing all zeros is inserted at datastore
creation if the atime safety check is enabled.
If the datastore is backed by an S3 object store, chunk uploads are
avoided by checking the presence of the chunks in the local cache
store. Therefore, the all zero chunk will however not be uploaded
since already inserted locally.
Fix this by conditionally uploading the chunk before performing the
atime update check for datastores backed by S3.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allows triggering a refresh of the local datastore contents from
the WebUI.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Analogous to the maintenance type `unmount`, show the `s3-refresh` as
translated string in the maintenance mode options and task
description.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allows manually triggering an s3 refresh via proxmox-backup-manager
by calling the corresponding api endpoint handler.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allows easily refreshing the contents of the local cache store for
datastores backed by an S3 object store.
In order to guarantee that no read or write operations are ongoing,
the store is first set into the maintenance mode `S3Refresh`. Objects
are then fetched into a temporary directory to avoid losing contents
and consistency in case of an error. Once all objects have been
fetched, existing contents are cleared out and the newly fetched
contents are moved in place.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds the `no-cache` flag so the client can request to bypass the
local datastore cache for chunk uploads. This is mainly intended for
debugging and benchmarking, but can be used in cases where the caching
is known to be ineffective (no possible deduplication).
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Introduce a BackupWriterOptions struct, bundling the currently
present writer start parameters in order to limit their number
and make this more easily extensible.
No functional changes intended.
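For illustration, such an options struct could look roughly like this;
the field names are assumptions, not the exact parameters used by the
backup writer:
```
/// Bundles the writer start parameters so that new flags extend a
/// struct instead of growing the argument list further.
pub struct BackupWriterOptions<'a> {
    pub datastore: &'a str,
    pub namespace: &'a str,
    pub backup_id: &'a str,
    pub backup_time: i64,
    pub debug: bool,
    pub benchmark: bool,
}
```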
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Check if a chunk is contained in the local cache, and if so prefer
fetching it from the cache instead of pulling it via the S3 api. This
improves performance and reduces the number of requests to the backend.
Basic restore performance tests:
Restored a snapshot containing the linux git repository (on-disk size
5.069 GiB, compressed 3.718 GiB) from an AWS S3 backed datastore, with
and without cached contents:
non cached: 691.95 s
all cached: 74.89 s
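A minimal sketch of this read path, with placeholder type and helper
names (`LocalDatastoreLruCache`, `fetch`, `fetch_chunk`) rather than
the real API:
```
fn read_chunk(
    cache: &LocalDatastoreLruCache,
    s3: &S3Client,
    digest: &[u8; 32],
) -> Result<DataBlob, Error> {
    match cache.fetch(digest)? {
        // cache hit: serve the chunk from the local filesystem, no request
        Some(chunk) => Ok(chunk),
        // cache miss: fall back to a GetObject request against the bucket
        None => s3.fetch_chunk(digest),
    }
}
```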
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Take advantage of the local datastore filesystem cache for datastores
backed by an s3 object store in order to reduce the number of requests
and latency, and to increase throughput.
Also, reducing the number of requests is cost beneficial for S3 object
stores that charge for fetching objects.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Take advantage of the local datastore cache to avoid re-uploading
already known chunks. This not only helps improve the backup/upload
speed, but also avoids additional costs by reducing the number of
requests and the transferred payload data to the S3 object store api.
If the cache is present, look up whether it contains the chunk and
skip the upload altogether if it does. Otherwise, upload the chunk to
the S3 object store api and insert it into the local datastore cache.
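The upload decision roughly follows this shape (a sketch with assumed
helper names, not the actual implementation):
```
fn add_chunk(
    cache: &LocalDatastoreLruCache,
    s3: &S3Client,
    digest: [u8; 32],
    chunk: DataBlob,
) -> Result<(), Error> {
    if cache.contains(&digest) {
        // already known: skip both the request and the payload transfer
        return Ok(());
    }
    // unknown chunk: upload it to the bucket first ...
    s3.upload_chunk(&digest, &chunk)?;
    // ... then insert it into the local cache for later lookups and reads
    cache.insert(digest, chunk)
}
```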
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Use a local datastore as cache, using an LRU cache replacement policy
for operations on a datastore backed by a network, e.g. by an S3
object store backend. The goal is to reduce the number of requests to
the backend and thereby save costs (monetary as well as time).
Cached chunks are stored on the local datastore cache, which already
contains the datastore's contents metadata (namespace, group,
snapshot, owner, index files, etc.), used to perform fast lookups.
The cache itself only stores chunk digests, not the raw data itself.
When payload data is required, contents are looked up and read from
the local datastore cache filesystem, including a fallback to fetch
from the backend if the presumably cached entry is not found.
The cacher allows fetching cache items on cache misses via the access
method.
The capacity of the cache is derived from the local datastore cache
filesystem, or the user configured value, whichever is smaller.
The capacity is only set on instantiation of the store, and the
current value is kept as long as the datastore remains cached in the
datastore cache. To change the value, the store either has to be set
to offline mode and back, or the services restarted.
Basic performance tests:
Backup and upload of contents of linux git repository to AWS S3,
snapshots removed in-between each backup run to avoid other chunk reuse
optimization of PBS.
no-cache:
had to backup 5.069 GiB of 5.069 GiB (compressed 3.718 GiB) in 50.76 s (average 102.258 MiB/s)
empty-cache:
had to backup 5.069 GiB of 5.069 GiB (compressed 3.718 GiB) in 50.42 s (average 102.945 MiB/s)
all-cached:
had to backup 5.069 GiB of 5.069 GiB (compressed 3.718 GiB) in 43.78 s (average 118.554 MiB/s)
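One plausible way to derive the capacity as described above, sketched
with made-up inputs; the real derivation may differ:
```
/// Pick the cache capacity: derive a slot count from the cache
/// filesystem size and cap it by the user configured value, whichever
/// is smaller.
fn cache_capacity(fs_available_bytes: u64, avg_chunk_size: u64, configured: Option<u64>) -> u64 {
    let derived = fs_available_bytes / avg_chunk_size.max(1);
    match configured {
        Some(user_limit) => derived.min(user_limit),
        None => derived,
    }
}
```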
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add methods to insert new cache entries without using the cacher,
remove cache entries given their key and check if the cache contains
a key, marking it as the most recently used one if it does.
These methods will be used to implement the local datastore cache,
which stores the values (chunks) on the filesystem rather than
keeping them in-memory in the cache. The lru cache will only be used
to allow for fast lookups and to keep track of the lookup order.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add a callback function to be executed on evicted cache nodes. The
callback gets the key of the removed node, allowing external action
based on that value.
Since the callback might fail, extend the current LRU cache api to
return an error on insert, covering the error from the `removed`
callback.
The async lru cache, call sites and tests are adapted to include the
additional callback parameter accordingly.
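To illustrate the API change, here is a toy LRU with a fallible
`removed` callback on eviction; the real cache is generic over the
node type and more efficient, so treat this purely as a sketch:
```
use std::collections::{HashMap, VecDeque};

struct LruCache<K, V> {
    map: HashMap<K, V>,
    order: VecDeque<K>, // front = least recently used
    capacity: usize,
}

impl<K: std::hash::Hash + Eq + Clone, V> LruCache<K, V> {
    /// Insert a value; if this evicts an entry, pass its key to `removed`.
    /// An error from the callback is propagated to the caller of `insert`.
    fn insert<F>(&mut self, key: K, value: V, removed: F) -> Result<(), String>
    where
        F: FnOnce(K) -> Result<(), String>,
    {
        self.order.retain(|k| k != &key); // refresh recency of existing keys
        self.order.push_back(key.clone());
        self.map.insert(key, value);
        if self.order.len() > self.capacity {
            if let Some(evicted) = self.order.pop_front() {
                self.map.remove(&evicted);
                // external cleanup, e.g. removing a cached chunk file
                removed(evicted)?;
            }
        }
        Ok(())
    }
}
```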
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In order to be able to create datastores with an s3 object store
backend, implement an s3 client selector and expose it in the
datastore edit window, together with an additional bucket name field
to associate with the datastore's s3 backend.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add a `S3 Clients` item to the navigation tree to allow accessing the
S3 client configuration view and edit windows.
Adds the required source files to the Makefile.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds the view to configure S3 clients in the Configuration section of
the UI.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds an edit window for creating or editing S3 client configurations.
Loosely based on the same edit window for the remote configuration.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In preparation for adding the S3 backed datastore variant to the edit
window, introduce a datastore type selector in order to distinguish
between creation of regular and removable datastores, instead of
using the checkbox as is currently the case.
This allows more easily adding further datastore type variants
while keeping the datastore edit window compact.
Since selecting the type is one of the first steps during datastore
creation, position the component right below the datastore name field
and re-organize the components related to the removable datastore
creation, while keeping additional required components for the S3
backed datastore creation in mind.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Implements the garbage collection for datastores backed by an s3
object store.
Take advantage of the local datastore by placing marker files in the
chunk store during phase 1 of the garbage collection, updating their
atime if already present.
This allows us to avoid making expensive API calls to update object
metadata, which would only be possible via a copy object operation.
The phase 2 is implemented by fetching a list of all the chunks via
the ListObjectsV2 API call, filtered by the chunk folder prefix.
This operation has to be performed in batches of 1000 objects, given
by the API's response limits.
For each object key, look up the marker file and decide based on the
marker's existence and its atime if the chunk object needs to be
removed. Deletion happens via the delete objects operation, allowing
multiple chunks to be removed with a single request.
This allows efficiently looking up chunks which are no longer in use,
while being performant and cost effective.
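The per-batch decision in phase 2 can be pictured roughly like this;
the helper names (`digest_from_object_key`, `local_marker_atime`,
`delete_objects`) are placeholders for illustration only:
```
// One ListObjectsV2 page (at most 1000 keys) filtered by the chunk prefix.
let mut delete_list = Vec::new();
for object in list_page.contents {
    let digest = digest_from_object_key(&object.key)?;
    match local_marker_atime(&digest)? {
        // marker present and recent enough: chunk still referenced, keep it
        Some(atime) if atime >= cutoff => {}
        // no marker or atime older than the cutoff: schedule for deletion
        _ => delete_list.push(object.key),
    }
}
if !delete_list.is_empty() {
    // remove the whole batch with a single DeleteObjects request
    s3_client.delete_objects(&delete_list)?;
}
```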
Baseline runtime performance tests:
-----------------------------------
3 garbage collection runs were performed with hot filesystem caches
(by additional GC run before the test runs). The PBS instance was
virtualized, the same virtualized disk using ZFS for all the local
cache stores:
All datastores contained the same encrypted data, with the following
content statistics:
Original data usage: 269.685 GiB
On-Disk usage: 9.018 GiB (3.34%)
On-Disk chunks: 6477
Deduplication factor: 29.90
Average chunk size: 1.426 MiB
The results demonstrate the overhead caused by the additional
ListObjectsV2 API calls and their processing, which however depends on
the object store backend.
Average garbage collection runtime:
Local datastore: (2.04 ± 0.01) s
Local RADOS gateway (Squid): (3.05 ± 0.01) s
AWS S3: (3.05 ± 0.01) s
Cloudflare R2: (6.71 ± 0.58) s
After pruning of all datastore contents (therefore including
DeleteObjects requests):
Local datastore: 3.04 s
Local RADOS gateway (Squid): 14.08 s
AWS S3: 13.06 s
Cloudflare R2: 78.21 s
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Read or write the ownership information from/to the corresponding
object in the S3 object store. Keep that information available if
the bucket is reused as datastore.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When pruning a backup group or a backup snapshot for a datastore with
S3 object store backend, remove the associated objects based on their
prefix.
In order to exclude protected contents, add filtering based on the
presence of the protected marker.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Commit 8292d3d2 ("api2/admin/datastore: add get/set_protection")
introduced the protected flag for backup snapshots, considering
snapshots as protected based on the presence/absence of the
`.protected` marker file in the corresponding snapshot directory.
To allow independent recovery of a datastore backed by an S3 bucket,
also create/delete the marker file on the object store backend. For
actual checks, still rely on the marker as encountered in the local
cache store.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The S3 object store only allows storing objects, referenced by their
key. For backup namespaces, datastores however use directories, so
they cannot be represented as a one-to-one mapping.
Instead, create an empty marker file for each namespace and operate
based on that.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
For datastores backed by an S3 compatible object store, rather than
reading the chunks to be verified from the local filesystem, fetch
them via the s3 client from the configured bucket.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In order to fetch chunks from an S3 compatible object store,
instantiate and store the s3 client in the verify worker by storing
the datastore's backend. This allows reusing the same instance for
the whole verification task.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Get and store the datastore's backend on local chunk reader
instantiation and fetch chunks based on the variant from either the
filesystem or the s3 object store.
By storing the backend variant, the s3 client is instantiated only
once and reused until the local chunk reader instance is dropped.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Read the chunk based on the datastore's backend, reading from the
local filesystem or fetching from the S3 object store.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If the datastore is backed by an S3 object store, not only insert the
pulled contents into the local cache store, but also upload them to
the S3 backend.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If the datastore is backed by an s3 compatible object store, upload
the client log content to the s3 backend before persisting it to the
local cache store.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Re-upload the manifest to the S3 object store backend on manifest
updates, if s3 is configured as backend.
This also triggers the initial manifest upload when finishing a backup
snapshot in the backup api call handler.
This also updates the locally cached version for fast and efficient
listing of contents without the need to perform expensive (as in
monetary cost and IO latency) requests.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If the datastore is backed by an S3 compatible object store, upload
the dynamic or fixed index files to the object store after closing
them. The local index files are kept in the local caching datastore
to allow for fast and efficient content lookups, avoiding expensive
(as in monetary cost and IO latency) requests.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Upload blobs to both the local datastore cache and the S3 object
store if s3 is configured as backend.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Upload fixed and dynamic sized chunks to either the filesystem or
the S3 object store, depending on the configured backend.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Get and store the datastore's backend during creation of the backup
runtime environment and upload the chunks to the local filesystem or
s3 object store based on the backend variant.
By storing the backend variant in the environment the s3 client is
instantiated only once and reused for all api calls in the same
backup http/2 connection.
Refactor the upgrade method by moving all logic into the async block,
such that the now possible error on backup environment creation gets
propagated to the thread spawn call site.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Implements an enum with the variants Filesystem and S3 to distinguish
between the available backends. Filesystem will be used as default if
no backend is configured in the datastore's configuration. If the
datastore has an s3 backend configured, the backend method will
instantiate an s3 client and return it with the S3 variant.
This allows instantiating the client once, keeping and reusing the
same open connection to the api for the lifetime of a task or job,
e.g. in the backup writer/reader runtime environment.
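In shape, the backend abstraction described here looks roughly like
the following simplified sketch; `S3Client` stands in for the client
type of the s3 crate and is only referenced, not defined:
```
use std::sync::Arc;

pub enum DatastoreBackend {
    /// Default: chunks and metadata live on the local filesystem only.
    Filesystem,
    /// S3 backend configured: a single client instance (and thus its open
    /// connection) is shared for the lifetime of a task or job.
    S3(Arc<S3Client>),
}
```
Call sites can then match on the variant to either touch the local
chunk store or go through the stored client.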
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds a dedicated api endpoint and a proxmox-backup-manager command to
check if the configured S3 client can reach the bucket.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Check if the configured S3 object store backend can be reached and
the provided secrets have the permissions to access the bucket.
Perform the check before creating the chunk store, so it is not left
behind if the bucket cannot be reached.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allows creating, listing, modifying and deleting configurations for s3
clients via the api.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds the client configuration for s3 object store as dedicated
configuration files.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds helper methods to generate the s3 object keys given a relative
path and filename for datastore contents, or a digest in the case of
chunk files.
Regular datastore contents are stored by grouping them with a content
prefix in the object key. In order to keep the object key length
small, given the max limit of 1024 bytes [0], `.cnt` is used as
content prefix. Chunks on the other hand are prefixed by `.chunks`,
same as on regular datastores.
The prefix allows for selective listing of either contents or chunks
by providing the prefix to the respective api calls.
[0] https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-keys.html
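A hedged sketch of what such key helpers amount to; only the
`.cnt`/`.chunks` prefixes are taken from the description above, the
exact grouping of chunk keys is an assumption:
```
fn content_object_key(rel_path: &str, filename: &str) -> String {
    // short content prefix to stay well below the 1024 byte key limit
    format!(".cnt/{rel_path}/{filename}")
}

fn chunk_object_key(digest_hex: &str) -> String {
    // group chunks by the first four hex digits, as in the local chunk store
    format!(".chunks/{}/{}", &digest_hex[..4], digest_hex)
}
```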
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Some settings on changers prevent changing the encryption parameters
via the application, e.g. some libraries have an 'encryption disabled'
or 'encryption is library managed' option. While the former situation can
be fixed by setting the library to 'application managed', the latter is
sometimes necessary for FIPS compliance (to ensure the tape data is
encrypted).
When libraries are configured this way, the code currently fails with
'drive does not support AES-GCM encryption'. Instead of failing, check
on first call to set_encryption if we could set it, and save that
result.
Only fail when encryption is to be enabled but is not allowed;
ignore the error when the backup should be done unencrypted.
`assert_encryption_mode` must also check if it's possible, and skip any
error if it's not possible and we wanted no encryption.
With these changes, it should be possible to use such configured libraries
when there is no encryption configured on the PBS side. (We currently
don't have a library with such capabilities to test.)
Note that in contrast to normal operation, the tape label will also be
encrypted then and will not be readable in case the encryption key is
lost or changed.
Additionally, return an error for 'drive_set_encryption' in case the
drive reports that it does not support hardware encryption, because this
is now already caught one level above in 'set_encryption'.
Also, slightly change the error message to make it clear that the drive
does not support *setting* encryption, not that it does not support
it at all.
This was reported in the community forum:
https://forum.proxmox.com/threads/107383/
https://forum.proxmox.com/threads/164941/
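The resulting behaviour can be summarized by a sketch along these
lines; the field and helper names (`encryption_possible`,
`drive_set_encryption`) are made up for illustration:
```
fn set_encryption(&mut self, key: Option<[u8; 32]>) -> Result<(), Error> {
    if self.encryption_possible == Some(false) {
        if key.is_some() {
            bail!("drive does not support setting AES-GCM encryption");
        }
        return Ok(()); // unencrypted backup requested: ignore the limitation
    }
    match self.drive_set_encryption(key) {
        Ok(()) => {
            self.encryption_possible = Some(true);
            Ok(())
        }
        Err(err) => {
            // remember the result of the first attempt
            self.encryption_possible = Some(false);
            if key.is_some() {
                Err(err) // encryption required but cannot be set: fail
            } else {
                Ok(()) // unencrypted backup: ignore the error
            }
        }
    }
}
```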
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250416070703.493585-1-d.csapak@proxmox.com
Since postfix (3.9.1-7) the templated postfix@- service is gone again
and the non-templated postfix.service is back, so cope with that here.
This mirrors commit 21a6ed782 from pve-manager
Closes: #6537
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We unconditionally showed the consent banner when constructing the
login view, but for an OIDC based authentication flow the user might
visit that view twice: once when first loading the UI and a second
time when getting redirected back by their OIDC provider.
Checking if there was such an OIDC redirect and skipping the banner
in that case avoids this issue.
Fix is similar in principle to what we do for pve-manager when closing
issue #6311 but replaces the if guard with a reverse early-return.
Report: https://bugzilla.proxmox.com/show_bug.cgi?id=6311
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Avoids a rather annoying confirmation prompt from `mv` asking if it's
OK to move over the file when one calls these targets repeatedly, like
during development edit+install+test cycles.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This fixes extracting any pxar directory with a hardlink.
linkat defaults to not following symlinks for the olddir (source)
path, and only understands the `AT_SYMLINK_FOLLOW` (notice, there is
no "NO") and `AT_EMPTY_PATH` flags, as can be read in the linkat
man page.
The nix::unistd::LinkatFlags::NoSymlinkFollow flag was used here
previously with nix 0.26, but it was just a wrapper around the
AtFlags, with NoSymlinkFollow resolving to AtFlags::empty() [0].
The nix 0.29 migration did a 1:1 translation from the now deprecated
LinkatFlags to AtFlags, i.e. NoSymlinkFollow to AT_SYMLINK_FOLLOW,
which just cannot work for linkat; one must migrate to the empty
flags instead. That nix drops a safer type here seems a bit odd
though.
[0]: https://docs.rs/nix/0.26.1/src/nix/unistd.rs.html#1262-1263
Report: https://forum.proxmox.com/168633/
Fixes: 2a7012f96 ("update pbs-client to nix 0.29 and rustyline 0.14")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This wasn't known at development time, as it needs to be lower than
the version this was first shipped with.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Count the chunk cache hits and misses and display the resulting
values and the hit ratio in the garbage collection task log summary.
This allows investigating possible issues and tuning the cache
capacity, also by being able to compare to other values in the
summary, such as the on-disk chunk count.
Exemplary output
```
2025-05-16T22:31:53+02:00: Chunk cache: hits 15817, misses 873 (hit ratio 94.77%)
2025-05-16T22:31:53+02:00: Removed garbage: 0 B
2025-05-16T22:31:53+02:00: Removed chunks: 0
2025-05-16T22:31:53+02:00: Original data usage: 64.961 GiB
2025-05-16T22:31:53+02:00: On-Disk usage: 1.037 GiB (1.60%)
2025-05-16T22:31:53+02:00: On-Disk chunks: 874
2025-05-16T22:31:53+02:00: Deduplication factor: 62.66
2025-05-16T22:31:53+02:00: Average chunk size: 1.215 MiB
```
Sidenote: the discrepancy between the cache miss counter and on-disk
chunk count in the output shown above can be attributed to the all
zero chunk, which is inserted during the atime update check at the
start of garbage collection but not referenced by any index file in
this exemplary case.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250604153449.482640-3-c.ebner@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This makes it consistent with tape backup job options and PVE's backup
jobs. It also visualizes the dependency of 'notify' and 'notify-user'
on 'notification-mode'.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-11-l.wagner@proxmox.com
Even if the notification mode is set to 'notification-system', the
datastore options grid still shows the keys for 'Notify' and 'Notify
User', which have no effect in this mode:
Notification: [Use global notification settings]
Notify: [Prune: Default(always), etc...]
Notify User: [root@pam]
This is quite confusing.
Unfortunately, it seems to be quite hard to dynamically disable/hide
rows in the grid panel used in this view.
For that reason these rows are removed completely for now. The options
are still visible when opening the edit window for the 'Notification'
row.
While this slightly worsens UX in some cases (information is hidden), it
improves clarity by reducing ambiguity, which is also a vital part of
good UX.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-10-l.wagner@proxmox.com
Change the dialog of one-shot tape-backups in such a way that they use
the same jargon as scheduled tape backup jobs.
The width of the dialog is increased by 150px to 750px so that the
slightly larger amount of text fits nicely.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-8-l.wagner@proxmox.com
For consistency, use the same UI approach as for PVE's backup jobs. Tape
backup jobs now gain a new tab for all notification related settings:
( ) Use global notification settings
(x) Use sendmail to send an email (legacy)
Recipient: [ ]
'Recipient' is disabled when the first radio control is selected.
The term 'Notification System' is removed altogether from the UI. It is not
necessarily clear to a user that this refers to the settings in
Configuration > Notifications.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-7-l.wagner@proxmox.com
This default is displayed in the grid panel if the datastore config
retrieved from the API does not contain any value for notification-mode.
Since the default changed from 'legacy-sendmail' to 'notification-system'
in the schema datatype, the defaultValue field needs to be adapted as
well.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-6-l.wagner@proxmox.com
This one migrates any datastore or tape backup job that relied on the
old default (legacy-sendmail) to an explicit setting of
legacy-sendmail. This allows us to change the default without changing
behavior for anybody.
This new command is intended to be called by d/postinst on upgrade to
the package version which introduces the new default value for
'notification-mode'.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-4-l.wagner@proxmox.com
The new subcommand is introduced so that we have a common name space for
any config migration tasks which are triggered by d/postinst (or potentially
by hand).
No functional changes.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250623141315.288681-3-l.wagner@proxmox.com
Since commit 37a85cf6 ("fix: ui: sync job: edit rate limit based on
sync direction") rate limits for sync jobs can be correctly applied
for both directions. State this in the documentation and explicitly
mention the directions to reduce confusion.
Further, also mention the burst parameters, as they are not mentioned
at all.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250623124543.590388-1-c.ebner@proxmox.com
The update from proxmox-backup-restore-image 0.7.0 -> 1.0.0 increased
the size of the initramfs image by a couple of megabytes (~45 -> ~49),
making it too large to be successfully booted in a VM with 192MB of RAM.
This led to a "VM exited before connection could be established (500)"
error in the GUI when attempting to restore a single file,
while /var/log/proxmox-backup/file-restore/qemu.log reported the
following error:
Initramfs unpacking failed: write error
As a stop-gap measure, the minimum RAM allocation is bumped to 256MB.
Since the amount of RAM is based on the number of disks, giving the VM
more memory if a large number of disks is associated with the backup
snapshot, this patch was also tested with 19, 20 and 25 disks to
ensure that the remaining cases still work fine without a bump.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250715101907.303115-1-l.wagner@proxmox.com
Copied over pbs2to3 as base and did minimal adaptations to expected
code names and package and kernel versions; might need more work though.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The missing override caused 'create-datastore' tasks to not be
pretty-printed/localized in any task list in the UI.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250604124815.180174-1-l.wagner@proxmox.com
[TL: fleece in code reformatting changes from adaption of biome]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Instead of passing the VerifyWorker state as reference to the various
verification related functions, implement them as methods or
associated functions of the VerifyWorker. This does not only make
their correlation more clear, but it also reduces the number of
function call parameters and improves readability.
No functional changes intended.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250703131837.786811-8-c.ebner@proxmox.com
Since commit 23be00a4 ("fix #3336: datastore: remove group if the
last snapshot is removed"), a backup group directory is cleaned up
when the new locking mechanism is in use once:
- the group is requested to be destroyed and all the snapshots have
been deleted
- the last snapshot of a group has been destroyed
Since then, the owner file is also cleaned up separately.
However, the owner file might already be missing due to the removal
of the group directory executed when removing the last backup snapshot
of the group, making the subsequent call in the backup group destroy
method fail.
Fix this by ignoring a missing owner file and continuing with trying
to remove the group directory itself.
Fixes: 23be00a4 ("fix #3336: datastore: remove group if the last snapshot is removed")
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250703131837.786811-7-c.ebner@proxmox.com
`cargo rustc` only passes the flags (like `target-feature` in this case) for
the final invocation, not for any dependency compilation.
unfortunately, switching to `cargo build` is not straight-forward:
- during a package build, $CARGO is the cargo wrapper which only honors
RUSTFLAGS in its `prepare-debian` invocation
- rustflags in cargo's config.toml are global/per target
- the unstable override that would allow setting them per profile is broken
- and it would only work for the final invocation anyway, just like `cargo rustc`
as a stop-gap measure, let's duplicate and adapt the generated config.toml, and
select it explicitly when doing the static compilation as part of the package
build. manual `make proxmox-backup-client-static` can still just pass RUSTFLAGS
via the environment..
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
[WB: separate -i and -e in sed invocation, add -r, drop backslashes]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
For better compat with the usrmerge/hermetic-/usr philosophies, and,
well, lintian complaining about this.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
To ensure the ReST synopsis documentation output is compatible with
the Sphinx version from Trixie.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
To fix the following exception one gets with the Sphinx and Python
versions from Trixie:
File "/home/tom/sources/others/pbs/proxmox-backup/build/docs/_ext/proxmox-scanrefs.py", line 92, in write_doc
filename_html = re.sub('.rst', '.html', filename)
File "/usr/lib/python3.13/re/__init__.py", line 208, in sub
return _compile(pattern, flags).sub(repl, string, count)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^
TypeError: expected string or bytes-like object, got '_StrPath'
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Those are left over from early development experimenting but as they
have nothing to do with PBS itself this is definitively the wrong
place. As they are preserved in the git history forever anyway, just
delete them here completely.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
similar to the other changes:
- Body to Incoming or proxmox-http's Body
- use adapters between hyper<->tower and hyper<->tokio
- adapt to new proxmox-rest-server interfaces
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
like pbs-client and proxmox-http.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
similar to the http one:
- Body to Incoming for incoming requests
- Body to proxmox-http's Body for everything else
- use legacy client
- use wrappers for hyper<->tower and hyper<->tokio
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
similar changes to proxmox-http:
- Body to Incoming for incoming requests
- Body to proxmox-http's Body for everything else
- switch to "legacy" pooling client from hyper-util
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
While eslint is an OK linter, its code formatting capabilities are
rather limited, so replace it with [Biome], which has both a good (and
fast!) linter and code formatter.
[Biome]: https://github.com/biomejs/biome
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
when in blocks, the `var` leaks outside to the function scope, so let us
use `let`
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Use `proxmox-biome format --write`, which is a small wrapper around
[Biome] that sets the desired config options for matching the Proxmox
style guide as close as possible.
Note that the space between the function keyword and the parameter
parenthesis for anonymous functions, or function names and parameter
parenthesis for named functions, is rather odd and not per our style
guide, but this is not configurable, and while it would be relatively
simple to change in the formatter code of Biome, we would like to avoid
doing so for both maintenance reasons and because this is seemingly the
default in the "web" industry, as prettier, one of the most popular
formatters for web projects, uses this and Biome copied the style
mostly from there.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[TL: add commit message with some background]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since commit 1e7639bf ("fixup minimum lru capacity") the LRU cache
capacity is set to a minimum value of 1 to avoid issues with the edge
case of 0 capacity.
In commit f1a711c8 ("garbage collection: set phase1 LRU cache
capacity by tuning option") this was not taken into account, allowing
to set values in the range [0, 8*1024*1024] via the datastores tuning
parameters.
Bypass the cache by making it optional and do not use it if the cache
capacity is set to 0, which implies it being disabled.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Since commit 1e7639bf ("fixup minimum lru capacity") the minimum
cache capacity is forced to be 1 to bypass edge cases for it being 0.
Explicitly mention this in the doc comment, as this behavior can be
unexpected.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
`ListGroupsType::new_at` creates a new iterator over all groups of a
given backup type with a provided parent file descriptor.
The parent directory file descriptor is passed to the `read_subdir`
call, which itself uses it to open the type directory via `openat`.
This call does however ignore the passed file handle if the given
path is absolute [0], which is always the case for the type path
generated via `DataStore::type_path`.
Fix this by passing only the type name as relative path to the
`read_subdir` call and using the absolute path only for
`ListGroupType::new`.
This helps avoid re-traversing the absolute path in the
`ListGroups` iterator, and since that is then the only call site for
`ListGroupsType::new_at`, inline the instantiation.
[0] https://linux.die.net/man/2/openat
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
`BackupDir::is_protected` is the general helper method to check the
protected state for a snapshot. This checks for the presence of the
protected marker file, which is performed by stat-ing the file and
requires traversing the full path.
When generating the backup list for a backup group, the snapshot
directory contents are scanned anyway. Take advantage of this by
extending the regex used by scandir to filter contents so that it
also matches the protected marker filename, and set the state based
on its presence/absence, thereby avoiding the additional stat syscall
altogether.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Explicitly mention that the value sets the number of available cache
slots and do not only mention that setting the value to 0 disables the
cache, but also give the default and maximum values.
Reported in the community forum:
https://forum.proxmox.com/threads/164869/post-771224
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Removing the group directory when forgetting a backup group or
removing the final backup snapshot of a group did not take a
potentially present group note file into consideration, causing the
removal to fail.
Further, since the owner file is removed before trying to remove the
(not empty) group directory, the group will not be usable anymore as
the owner check will fail as well.
To fix this, remove the backup group's note file first, if present,
and only after that try to clean up the rest.
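A rough sketch of the cleanup order, with hypothetical file names
standing in for the actual marker files:
```
use std::{fs, io, path::Path};

/// Remove optional marker files first, then the (now empty) group dir.
fn destroy_group_dir(group_dir: &Path) -> io::Result<()> {
    for marker in ["notes", "owner"] {
        match fs::remove_file(group_dir.join(marker)) {
            Ok(()) => {}
            Err(err) if err.kind() == io::ErrorKind::NotFound => {} // optional
            Err(err) => return Err(err),
        }
    }
    // Intentionally not remove_dir_all(): fail if snapshots remain.
    fs::remove_dir(group_dir)
}
```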
Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=6358
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Move the helper function to get a backup group's notes file path and
make it a `DataStore` method instead. This allows it to be reused when
access to the notes path is required from the datastore itself.
Further, use the plural `notes` wording in the helper as well, to be
consistent with the rest of the codebase.
In preparation for correctly removing the notes file from the backup
group on destruction.
No functional changes intended.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
the output of `zpool import` has changed, thus our current parser
fails to find a zpool with ZFS userspace tools in version 2.3.2.
While ZFS 2.3 introduced JSON output for many commands, `zpool import`
still lacks that option [0], so I'd postpone switching to JSON parsing
until all needed zfs/zpool commands provide it.
the change was probably introduced in zfs upstream commit:
5137c132a ("zpool import output is not formated properly.")
[0] https://github.com/openzfs/zfs/issues/17084
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Avoid converting the backup time string to the timestamp and back to
string again. `BackupDir::with_rfc3339` already performs the string
to time conversion, so use it over parsing the timestamp first only
to convert it back to string in `BackupDir::with_group`.
No functional changes intended.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In case we run into a ready check timeout, query the drive, and
increase the timeout to 2 hours and 5 minutes if it's calibrating (5
minutes headroom). This is effectively a generalization of commit
0b1a30aa ("tape: adapt format_media for LTO9+"), which increased the
timeout for the format procedure, while this here also covers tapes
that were not explicitly formatted but get auto-formatted indirectly
by the first action changing a fresh tape, like e.g. barcode labeling.
The actual reason for this is that since LTO-9, initial loading of
tapes into a drive can block up to 2 hours according to the spec. One
can find the IBM and HP LTO SCSI references rather easily [0][1].
As for the timeout, IBM only mentions it in their recommendations:
> Although most optimizations will complete within 60 minutes some
> optimizations may take up to 2 hours.
And HP states:
> Media initialization adds a variable amount of time to the
> initialization process that typically takes between 20 minutes and
> 2 hours.
So it seems there is no hard limit and the duration depends, but most
ordinary setups should be covered, and in my tests it always took
around the 1 hour mark.
0: IBM LTO-9 https://www.ibm.com/support/pages/system/files/inline-files/LTO%20SCSI%20Reference_GA32-0928-05%20(EXTERNAL)_0.pdf
1: HP LTO-9 https://support.hpe.com/hpesc/public/docDisplay?docId=sd00001239en_us&page=GUID-D7147C7F-2016-0901-0921-000000000450.html
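A minimal sketch of the extended wait described above (the
non-calibrating default below is a placeholder, not taken from the
source):
```
use std::time::Duration;

fn ready_check_timeout(drive_is_calibrating: bool) -> Duration {
    if drive_is_calibrating {
        // 2 hours per the LTO-9 spec, plus 5 minutes of headroom.
        Duration::from_secs(2 * 3600 + 5 * 60)
    } else {
        // Placeholder default for illustration only.
        Duration::from_secs(5 * 60)
    }
}
```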
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250415114043.2389789-1-d.csapak@proxmox.com
[TL: extend commit message with info that Dominik provided in a
reply]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
During phase 1 of garbage collection referenced chunks are marked as
in use by iterating over all index files and updating the atime on
the chunks referenced by these.
In an edge case for long running garbage collection jobs, where a
newly added snapshot (created after the start of GC) reused known
chunks from a previous snapshot, but the previous snapshot index
referencing them disappeared before the marking phase could reach
that index (e.g. pruned because only 1 snapshot is to be kept by the
retention setting), known chunks from that previous index file might
not be marked (given that none of the other index files marked them).
Since commit 74361da8 ("garbage collection: generate index file list
via datastore iterators") this is even less likely as now the
iteration reads also index files added during phase 1, and
therefore either the new or the previous index file will account for
these chunks (the previous backup snapshot can only be pruned after
the new one finished, since it is locked). There remains however a small
race window between the reading of the snapshots in the backup group
and the reading of the actual index files for marking.
Fix this race by:
1. checking if the last snapshot of a group disappeared and, if so,
2. generating the list again, looking for new index files previously
not accounted for, and
3. locking the group if the snapshot list changed even after the 10th
time, to avoid possible endless looping (which will lead to
concurrent operations on this group failing).
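A loose, self-contained sketch of that retry loop, with the datastore
specifics abstracted behind closures (the real code checks whether the
group's last snapshot vanished rather than comparing whole lists):
```
fn mark_group_with_retry<S: Eq + Clone>(
    list_snapshots: impl Fn() -> Vec<S>,
    mut mark_index_file: impl FnMut(&S),
    lock_group: impl FnOnce(),
) {
    let mut snapshots = list_snapshots();
    let mut attempt = 0;
    loop {
        for snapshot in &snapshots {
            mark_index_file(snapshot);
        }
        let current = list_snapshots();
        if current == snapshots {
            return; // list stable, nothing was pruned while marking
        }
        attempt += 1;
        if attempt > 10 {
            break;
        }
        snapshots = current;
    }
    // Avoid endless looping: take the group lock so the list cannot
    // change anymore (concurrent operations on this group will now
    // fail), then do one final marking pass.
    lock_group();
    for snapshot in &list_snapshots() {
        mark_index_file(snapshot);
    }
}
```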
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Acked-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Link: https://lore.proxmox.com/20250416105000.270166-3-c.ebner@proxmox.com
Instead of returning None, fail if the open index reader is called
on a blob file. Blobs cannot be read as an index anyway, and this
allows distinguishing cases where the index file cannot be read
because it vanished.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250416105000.270166-2-c.ebner@proxmox.com
the below commit accidentally switched this lock to an exclusive lock
when it should just be a shared one as that is sufficient for a
reader:
e2c1866b: datastore/api/backup: prepare for fix of #3935 by adding
lock helpers
this has already caused failed backups for a user with a sync job that
runs while they are trying to create a new backup.
https://forum.proxmox.com/threads/165038
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Since commit 74361da8 ("garbage collection: generate index file list
via datastore iterators") not only snapshots present at the start of
the garbage collection run are considered for marking, but also newly
added ones. Take these into account by adapting the total index file
counter used for the progress output.
Further, correctly take into account index files which have been
pruned during GC and are therefore present in the list of index files
still to be processed, but never encountered by the datastore
iterators. These would otherwise be incorrectly interpreted as strange
paths and logged accordingly, causing confusion as reported in the
community forum [0].
Fixes: https://forum.proxmox.com/threads/164968/
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
else parallel builds of the static binaries will not work correctly, just like
with the regular .do-cargo-build.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The debian package providing the dynamically linked version of the
proxmox-backup-client is packaged together with the pxar executable.
To be in line with that and for user convenience, include a statically
linked version of pxar in the static package as well.
Renames the STATIC_BIN env variable to STATIC_BINS to reflect that it
now covers multiple binaries, and stores the rustc flags in their own
variable so they can be reused, since `cargo rustc` does not allow
invocations with multiple `--package` arguments at once.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Extends the log messages written to the server's backup worker task
log to include the snapshot name which is used as previous snapshot.
This information facilitates debugging efforts, as the previous
snapshot might have been pruned since.
For example, instead of
```
download 'index.json.blob' from previous backup.
register chunks in 'drive-scsi0.img.fidx' from previous backup.
download 'drive-scsi0.img.fidx' from previous backup.
```
this now logs
```
download 'index.json.blob' from previous backup 'vm/101/2025-04-15T09:02:10Z'.
register chunks in 'drive-scsi0.img.fidx' from previous backup 'vm/101/2025-04-15T09:02:10Z'.
download 'drive-scsi0.img.fidx' from previous backup 'vm/101/2025-04-15T09:02:10Z'.
```
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Discourage the use of the statically linked binary for systems where
the regular one is available.
Moves the previous note into its own section and links to the
installation section.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250410093059.130504-1-c.ebner@proxmox.com
To have something in the docs.
In the long run we want a somewhat fancy and safe mechanism to host
these builds directly on the CDN and implement querying that for
updates, verified with a baked-in public key, but for starters this
very basic documentation has to suffice.
We could also describe how to extract the client from the .deb through
`ar` or `dpkg -x`, but that feels a bit too hacky for the docs, and is
maybe better explained on demand in the forum or the like.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add a note mentioning that the statically linked binary does not use
the same mechanism for name resolution as the regular client, in
particular that this does not support NSS.
The statically linked binary cannot use the `getaddrinfo` based name
resolution because of possible ABI incompatibility. It therefore is
conditionally compiled and linked using the name resolution provided
by hickory-resolver, part of hickory-dns [0].
[0] https://github.com/hickory-dns/hickory-dns
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The dependency on the `getaddrinfo` based `GaiResolver` used by
default for the `HttpClient` is not suitable for the statically
linked binary of the `proxmox-backup-client`, because of the
dependency on glibc NSS libraries, as described in glibc's FAQs [0].
As a workaround, conditionally compile the binary using the `hickory-dns`
resolver.
[0] https://sourceware.org/glibc/wiki/FAQ#Even_statically_linked_programs_need_some_shared_libraries_which_is_not_acceptable_for_me.__What_can_I_do.3F
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
FG: bump proxmox-http dependency
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=4788
Build and package a statically linked binary version of
proxmox-backup-client to facilitate updates and distribution.
This provides a mechanism to obtain and repackage the client for
external parties and Linux distributions.
The statically linked client is provided as dedicated package,
conflicting with the regular package.
Since the RUSTFLAGS env variables are not preserved when building
with dpkg-buildpackage, invoke via `cargo rustc` instead, which allows
setting the required arguments.
Credit goes also to Christoph Heiss, as this patch is loosely based
on his pre-existing work for the proxmox-auto-install-assistant [0],
which provided a good template.
Also, place the libsystemd stub into its own subdirectory for cleaner
separation from the compiled artifacts.
[0] https://lore.proxmox.com/pve-devel/20240816161942.2044889-1-c.heiss@proxmox.com/
Suggested-by: Christoph Heiss <c.heiss@proxmox.com>
Originally-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: fold in fixups
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
since it affects whether cargo puts build artifacts directly into
target/debug (or target/release) or into a target-specific
sub-directory.
the package build will always pass `--target $(DEB_HOST_RUST_TYPE)`,
since it invokes the cargo wrapper in /usr/share/cargo/bin/cargo, so
this change unifies the behaviour across plain `make` and `make
deb`.
direct calls to `cargo build/test/..` will still work as before.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This aligns the tooltips with how we have them in Proxmox VE. Using
"view" instead of "open" should make it clear that this is a safe,
read-only action.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Link: https://lore.proxmox.com/20241118104959.95159-1-a.lauterer@proxmox.com
This section is meant to give a basic overview on how to use
custom templates for notifications. It will be expanded in the
future, providing a more detailed view on how templates are resolved,
existing fallback mechanisms, available templates, template
variables and helpers.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Alexander Zeidler <a.zeidler@proxmox.com>
Link: https://lore.proxmox.com/20250409084628.125951-1-l.wagner@proxmox.com
to avoid interpreting HTML in the message when displaying the mask.
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Explicitly mention how to set the rate limit for sync jobs in push
direction to avoid possible confusion.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250318094756.204368-2-c.ebner@proxmox.com
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Commit 9aa213b8 ("ui: sync job: adapt edit window to be used for pull
and push") adapted the sync job edit window so jobs in both push and
pull direction can be edited using the same window. This however did
not include switching the direction to which the http client rate
limit is applied.
Fix this by additionally adding the edit field for `rate-out` and
conditionally hiding the less useful rate limit direction (rate-out
for pull and rate-in for push). This allows preserving the values if
explicitly set via the sync job config.
Reported in the community forum:
https://forum.proxmox.com/threads/163414/
Fixes: 9aa213b8 ("ui: sync job: adapt edit window to be used for pull and push")
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250318094756.204368-1-c.ebner@proxmox.com
instead of just the top-most context/error, which often excludes
relevant information, such as when locking fails.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the widget type is the most important property as it defines how every
other property will be interpreted, so it should always come first.
Move name right afterwards, as that is almost always the key for how
the data will be sent to the backend and thus also quite relevant.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
That width is already used in a few places, we might even want to
change the edit window default in the future.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Ensure title-case is honored; while at it, drop the "snapshot" for the
advanced options, as we do not use that for non-advanced options like
"Remove Vanished" either. This avoids some field labels wrapping over
multiple lines, at least for English.
For now just in the general datacenter option view, not when editing
the tuning options. To also allow entering this, we should first
provide our backend implementation as WASM to avoid having to redo
this in JavaScript.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This was getting cramped, and it might actually be even nicer to go to
a more verbose style like we use for advanced settings of backup jobs
in Proxmox VE, with actual sentences describing the options' basic
effects and rationale.
But this is way quicker to do and already adds a bit more rationale,
and we can always do more later on when there's less release time
pressure.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds a bullet point to the listed datastore tuning parameters,
describing its functionality, implications and typical values.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/pbs-devel/20250404130713.376630-4-c.ebner@proxmox.com
[TL: address trivial merge conflict from context changes]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Displays and allows editing the configured LRU cache capacity via the
datastore tuning parameters.
A step of 1024 is used in the number field for convenience when using
the buttons; more fine-grained values can be set by typing.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/pbs-devel/20250404130713.376630-3-c.ebner@proxmox.com
[TL: address trivial merge conflict from context changes]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allow to control the capacity of the cache used to track recently
touched chunks via the configured value in the datastore tuning
options. Log the configured value to the task log, if an explicit
value is set, allowing the user to confirm the setting and debug.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/pbs-devel/20250404130713.376630-2-c.ebner@proxmox.com
Skip over snapshots which have not been verified or encrypted if the
sync job has set the flags accordingly.
A snapshot is considered encrypted if all the archives in the
manifest have `CryptMode::Encrypt`. A snapshot is considered verified
when the manifest's verify state is set to `VerifyState::Ok`.
This allows synchronizing only a subset of the snapshots, namely those
which are known to be fine (verified) or which are known to be
encrypted. The latter is of most interest for sync jobs in push
direction to untrusted or less trusted remotes, where it might be
desired to not expose unencrypted contents.
Link to the bugtracker issue:
https://bugzilla.proxmox.com/show_bug.cgi?id=6072
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/pbs-devel/20250404132106.388829-4-c.ebner@proxmox.com
Extend the sync job config api to adapt the 'encrypted-only' and
'verified-only' flags, allowing to include only encrypted and/or
verified backup snapshots, excluding others from the sync.
Set these flags to the sync jobs push or pull parameters on job
invocation.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/pbs-devel/20250404132106.388829-3-c.ebner@proxmox.com
... and move token deletion into a new `do_delete_token` function,
since it'll be reused later on user deletion.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Mostly taken from pve-docs and adapted as needed.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
The comment & default property can be updated for the built-in PBS
realm, which the AuthSimplePanel from widget-toolkit implements.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
This uses the functionality previously introduced in widget-toolkit as
part of this series, which is gated behind this flag.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
The built-in PAM and PBS realms use slightly different API paths,
without the type in the URL, as that would be redundant anyway. Thus
move the setting to a per-realm one.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
[TL: commit subject style fix]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
For the built-in PBS authentication realm, the comment and whether it
should be the default login realm can be updated. Add the required API
plumbing for it.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
For the built-in PAM authentication realm, the comment and whether it
should be the default login realm can be updated. Add the required API
plumbing for it.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Currently, the built-in PAM and PBS authentication realms are (hackily)
hardcoded. Replace that with the new, proper API types for these two
realms, thus treating them like any other authentication realm.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Whenever the `default` field is set to `true` for any realm, the
`default` field must be unset first from all realms to ensure that only
ever exactly one realm is the default.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Now that all the realms support this field, add the required API
plumbing for it.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Document the gc-atime-cutoff option and describe the behavior it
controls, by adding it as additional bullet point to the
documented datastore tuning options.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Allows setting the atime cutoff for phase 2 of garbage collection in
the datastore's tuning parameters. This value changes the time after
which a chunk is not considered in use anymore if it falls outside of
the cutoff window.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Use the user-configured atime cutoff over the default 24h 5m margin
if explicitly set, otherwise fall back to the default.
Move the minimum atime calculation based on the atime cutoff to the
sweep_unused_chunks() call site and pass in the calculated values, so
as to have the logic in the same place.
Add log outputs showing which cutoff and minimum access time are used
by the garbage collection.
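A small sketch of the cutoff handling, assuming the tuning value is
given in minutes:
```
use std::time::{Duration, SystemTime};

/// Default margin of 24h 5m, overridden by an explicitly set cutoff.
fn min_atime(configured_cutoff_minutes: Option<u64>) -> SystemTime {
    let minutes = configured_cutoff_minutes.unwrap_or(24 * 60 + 5);
    SystemTime::now() - Duration::from_secs(minutes * 60)
}
```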
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Document the gc-atime-safety-check flag and describe the behavior it
controls, by adding it as additional bullet point to the documented
datastore tuning options.
This also fixes the indentation of the cli example on how to set the
sync level, to make it clear that it still belongs to the previous
bullet point.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Allow to edit the atime safety check flag via the datastore tuning
options edit window.
Do not expose the flag for datastore creation as it is strongly
discouraged to create datastores on filesystems not correctly handling
atime updates as the garbage collection expects. It is nevertheless
still possible to create a datastore via the cli and pass in the
`--tuning gc-atime-safety-check=false` option.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Check if the filesystem backing the chunk store actually updates the
atime to avoid potential data loss in phase 2 of garbage collection,
in case the atime update is not honored.
Perform the check before phase 1 of garbage collection, as well as
on datastore creation. The latter is done to detect early and disallow
datastore creation on filesystem configurations which otherwise would
most likely lead to data loss. To perform the check also when reusing
an existing datastore, open the chunk store on reuse as well.
Enable the atime update check by default, but allow to opt-out by
setting a datastore tuning parameter flag for backwards compatibility.
This is honored by both, garbage collection and datastore creation.
The check uses a fixed-sized 4 MiB, unencrypted and compressed chunk
as test marker, inserted if not present. This all-zero chunk is very
likely present anyway for unencrypted backup contents with large
all-zero regions using fixed size chunking (e.g. VMs).
To avoid cases where the timestamp will not be updated because of the
Linux kernel's timestamp granularity, sleep for 1 second in-between
the chunk insert (including an atime update if pre-existing) and the
subsequent stat + utimensat.
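A standalone sketch of the underlying idea (not the actual chunk-store
code, which inserts an all-zero 4 MiB chunk and uses utimensat
directly); the `filetime` crate stands in for the explicit atime
update here:
```
use std::{fs, thread::sleep, time::Duration};

fn atime_updates_work(dir: &std::path::Path) -> std::io::Result<bool> {
    let marker = dir.join("atime-check-marker");
    fs::write(&marker, b"\0")?;
    let before = fs::metadata(&marker)?.accessed()?;
    // Wait past the timestamp granularity to avoid false negatives.
    sleep(Duration::from_secs(1));
    filetime::set_file_atime(&marker, filetime::FileTime::now())?;
    let after = fs::metadata(&marker)?.accessed()?;
    fs::remove_file(&marker)?;
    Ok(after > before)
}
```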
Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=5982
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Inserting a new chunk into the chunk store as a process running with
root privileges currently does not set an explicit ownership on the
chunk file. As a consequence this will lead to permission issues if
the chunk is operated on by a codepath executed in the less
privileged proxy task running as the `backup` user.
Therefore, explicitly set the ownership and permissions of the chunk
file upon insert, if the process is executed as the `root` user.
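A sketch of the idea; uid, gid and mode are placeholders here, while
the real code resolves the `backup` user:
```
use std::{fs, os::unix::fs::{chown, PermissionsExt}, path::Path};

fn fixup_chunk_ownership(
    chunk: &Path,
    backup_uid: u32,
    backup_gid: u32,
) -> std::io::Result<()> {
    // Only needed when running as root; the proxy already runs as
    // the less privileged `backup` user.
    if unsafe { libc::geteuid() } == 0 {
        chown(chunk, Some(backup_uid), Some(backup_gid))?;
        fs::set_permissions(chunk, fs::Permissions::from_mode(0o640))?;
    }
    Ok(())
}
```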
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
We are going to add more credentials so it makes sense to have a common
helper to get the secrets.
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
otherwise tokens without comments can no longer be created as the api
will reject the additional `delete` parameter. this bug was introduced
by commit:
3fdf876: api: token: make comment deletable
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
group or namespace removal can fail if the old locking mechanism is
still in use, as it is unsafe to properly clean up in that scenario.
return an error message that explains how to rectify that situation.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
[TL: address simple merge conflict and fine tune message to admins]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this is only needed for removing the group if the last snapshot is
removed, ignore locking failures, as the user can't do anything to
rectify the situation anymore.
log the locking error for debugging purposes, though.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
[TL: line-wrap comment at 100cc and fix bullet-point indentation]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
See commit 27dd7377 ("fix #3935: datastore/api/backup: move datastore
locking to '/run'") for details; as I'll bump PBS now, we can fixate
the version and drop the safety-net "reminder" from d/rules again.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
To reduce the number of atime updates, keep track of the recently
marked chunks in phase 1 of garbage collection to avoid multiple
atime updates via expensive utimensat() calls.
Recently touched chunks are tracked by storing the chunk digests in
an LRU cache of fixed capacity. By inserting a digest, the chunk
becomes the most recently touched one, and if it was already present
in the cache before the insert, the atime update can be skipped. The
cache capacity of 1024 * 1024 was chosen as a compromise between
required memory usage and the size of an index file referencing a
4 TiB fixed-size chunked image (with 4 MiB chunk size).
The previous change to iterate over the datastore contents using the
datastore's iterator helps for increased cache hits, as subsequent
snapshots are most likely to share common chunks.
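A condensed sketch of the marking loop with digest tracking, again
using the public `lru` crate for illustration:
```
use std::num::NonZeroUsize;
use lru::LruCache;

fn mark_chunks(
    digests: impl Iterator<Item = [u8; 32]>,
    mut touch_chunk_atime: impl FnMut(&[u8; 32]),
) {
    let mut recently_touched =
        LruCache::new(NonZeroUsize::new(1024 * 1024).unwrap());
    for digest in digests {
        // `put` returns the previous value, so `Some(_)` means the
        // chunk was touched recently and the expensive utimensat call
        // can be skipped.
        if recently_touched.put(digest, ()).is_none() {
            touch_chunk_atime(&digest);
        }
    }
}
```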
Basic benchmarking:
Number of utimensat calls shows significant reduction:
unpatched: 31591944
patched: 1495136
Total GC runtime shows significant reduction (average of 3 runs):
unpatched: 155.4 ± 3.5 s
patched: 22.8 ± 0.5 s
VmPeak measured via /proc/self/status before and after
`mark_used_chunks` (proxmox-backup-proxy was restarted in between
for normalization, average of 3 runs):
unpatched before: 1196028 ± 0 kB
unpatched after: 1196028 ± 0 kB
patched before: 1163337 ± 28317 kB
patched after: 1330906 ± 29280 kB
delta: 167569 kB
Dependence on the cache capacity:
capacity runtime[s] VmPeakDiff[kB]
1*1024 66.221 0
10*1024 36.164 0
100*1024 23.141 0
1024*1024 22.188 101060
10*1024*1024 23.178 689660
100*1024*1024 25.135 5507292
Description of the PBS host and datastore:
CPU: Intel Xeon E5-2620
Datastore backing storage: ZFS RAID 10 with 3 mirrors of 2x
ST16000NM001G, mirror of 2x SAMSUNG_MZ1LB1T9HALS as special
Namespaces: 45
Groups: 182
Snapshots: 3184
Index files: 6875
Deduplication factor: 44.54
Original data usage: 120.742 TiB
On-Disk usage: 2.711 TiB (2.25%)
On-Disk chunks: 1494727
Average chunk size: 1.902 MiB
Distribution of snapshots (binned by month):
2023-11 11
2023-12 16
2024-01 30
2024-02 38
2024-03 17
2024-04 37
2024-05 17
2024-06 59
2024-07 99
2024-08 96
2024-09 115
2024-10 35
2024-11 42
2024-12 37
2025-01 162
2025-02 489
2025-03 1884
Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=5331
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Instead of iterating over all index files found in the datastore in
an unstructured manner, use the datastore iterators to logically
iterate over them as other datastore operations will.
This allows to better distinguish index files in unexpected locations
from ones in their expected location, warning the user about
unexpected ones so that possible misconfigurations can be acted upon.
Further, this will allow integrating the marking of snapshots with
missing chunks as incomplete/corrupt more easily, and helps improve
cache hits when introducing LRU caching to avoid multiple atime
updates in phase 1 of garbage collection.
This now iterates twice over the index files, as indices in
unexpected locations are still considered by generating the list of
all index files to be found in the datastore and removing regular
index files from that list, leaving unexpected ones behind.
Further, align terminology by renaming the `list_images` method to
a more fitting `list_index_files` and the variable names accordingly.
This will reduce possible confusion, since throughout the codebase and
in the documentation, files referencing the data chunks are referred
to as index files. The term image, on the other hand, is associated
with virtual machine images and other large binary data stored as
fixed-size chunks.
Basic benchmarking:
Total GC runtime shows no significant change (average of 3 runs):
unpatched: 155.4 ± 2.6 s
patched: 155.4 ± 3.5 s
VmPeak measured via /proc/self/status before and after
`mark_used_chunks` (proxmox-backup-proxy was restarted in between
for normalization, no changes for all 3 runs):
unpatched before: 1196032 kB
unpatched after: 1196032 kB
patched before: 1196028 kB
patched after: 1196028 kB
Listing the index files shows a slight increase due to the switch to
a HashSet (average of 3 runs):
unpatched: 64.2 ± 8.4 ms
patched: 72.8 ± 3.7 ms
Description of the PBS host and datastore:
CPU: Intel Xeon E5-2620
Datastore backing storage: ZFS RAID 10 with 3 mirrors of 2x
ST16000NM001G, mirror of 2x SAMSUNG_MZ1LB1T9HALS as special
Namespaces: 45
Groups: 182
Snapshots: 3184
Index files: 6875
Deduplication factor: 44.54
Original data usage: 120.742 TiB
On-Disk usage: 2.711 TiB (2.25%)
On-Disk chunks: 1494727
Average chunk size: 1.902 MiB
Distribution of snapshots (binned by month):
2023-11 11
2023-12 16
2024-01 30
2024-02 38
2024-03 17
2024-04 37
2024-05 17
2024-06 59
2024-07 99
2024-08 96
2024-09 115
2024-10 35
2024-11 42
2024-12 37
2025-01 162
2025-02 489
2025-03 1884
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Refactor the archive type and index file reader opening with its
error handling into a helper method for better reusability.
This allows using the same logic for both expected and unexpected
image paths when iterating through the datastore in a hierarchical
manner.
Improve error handling by switching to anyhow's error context.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Until now, errors are shown ignoring the anyhow error context. In
order to allow the garbage collection to return additional error
context, format the error including the context as a single line.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add a boolean return type to LruCache::insert(), telling if the node
was already present in the cache or if it was newly inserted.
This will allow using the LRU cache for garbage collection, where
it is required to skip atime updates for chunks already marked as in
use.
That improves phase 1 garbage collection performance by avoiding
multiple atime updates.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Currently, the only way to delete a comment on a token is to set it to
just spaces. Since we trim it in the endpoint, it gets deleted as a
side effect. This allows the comment to be deleted properly.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Using a single thread for reading is not optimal in some cases, e.g.
when the underlying storage can handle reads from multiple threads in
parallel.
We use the ParallelHandler to handle the actual reads. Make the
sync_channel buffer size depend on the number of threads so we have
space for two chunks per thread (but keep the minimum at 3 like
before).
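The buffer sizing rule as a tiny sketch:
```
use std::sync::mpsc::{sync_channel, Receiver, SyncSender};

/// Room for two chunks per reader thread, but never less than the
/// previous fixed value of 3.
fn chunk_channel(read_threads: usize) -> (SyncSender<Vec<u8>>, Receiver<Vec<u8>>) {
    sync_channel((read_threads * 2).max(3))
}
```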
How this impacts the backup speed largely depends on the underlying
storage and how the backup is laid out on it.
I benchmarked the following setups:
* Setup A: relatively spread out backup on a virtualized pbs on single HDDs
* Setup B: mostly sequential chunks on a virtualized pbs on single HDDs
* Setup C: backup on virtualized pbs on a fast NVME
* Setup D: backup on bare metal pbs with ZFS in a RAID10 with 6 HDDs
and 2 fast special devices in a mirror
(values are reported in MB/s as seen in the task log, caches were
cleared between runs, backups were bigger than the memory available)
setup 1 thread 2 threads 4 threads 8 threads
A 55 70 80 95
B 110 89 100 108
C 294 294 294 294
D 118 180 300 300
So there are cases where multiple read threads speed up the tape backup
(dramatically). On the other hand there are situations where reading
from a single thread is actually faster, probably because we can read
from the HDD sequentially.
I left the default value of '1' to not change the default behavior.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[TL: update comment about mpsc buffer size for clarity and drop
commented-out debug-code]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In preparation for skipping over snapshots when synchronizing with
encrypted/verified only flags set. In these cases, the manifest has
to be fetched from the remote and its status checked. If the
snapshot should be skipped, the snapshot directory including the
temporary manifest file has to be cleaned up, given the snapshot
directory has been newly created. By reorganizing the current
snapshot pull logic, this can be achieved more easily.
The `corrupt` flag will be set to `false` in the snapshot
prefiltering, so the previous explicit distinction for newly created
snapshot directories must not be preserved.
No functional changes intended.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The template files for this one have simply been copied from PVE,
including the HTML template.
In PBS we actually don't provide any HTML templates for any other type
of notification, so especially with the template override mechanism on
the horizon, it's probably better to remove this template until we
also provide an HTML version for the other types as well.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer; any data needed in the
template is mapped into the new dedicated template data type.
This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.
This commit also tries to unify the style and naming of template
variables.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer; any data needed in the
template is mapped into the new dedicated template data type.
This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.
This commit also tries to unify the style and naming of template
variables.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer; any data needed in the
template is mapped into the new dedicated template data type.
This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.
This commit also tries to unify the style and naming of template
variables.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer; any data needed in the
template is mapped into the new dedicated template data type.
This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.
This commit also tries to unify the style and naming of template
variables.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer; any data needed in the
template is mapped into the new dedicated template data type.
This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.
This commit also tries to unify the style and naming of template
variables.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer; any data needed in the
template is mapped into the new dedicated template data type.
This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.
This commit also tries to unify the style and naming of template
variables.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer; any data needed in the
template is mapped into the new dedicated template data type.
This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.
This commit also tries to unify the style and naming of template
variables.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
This commit adds a separate type for the data passed to this type of
notification template. Also we make sure that we do not expose any
non-primitive types to the template renderer; any data needed in the
template is mapped into the new dedicated template data type.
This ensures that any changes in types defined in other places do not
leak into the template rendering process by accident.
These changes are also preparation for allowing user-overrides for
notification templates.
This commit also tries to unify the style and naming of template
variables.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
The next commit is going to add a separate submodule for notification
template data types.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Empty backup groups are not visible in the API or GUI. This led to a
confusing issue where users were unable to create a group because it
already existed and was still owned by another user. Resolve this
issue by removing the group if its last snapshot is removed.
Also fixes an issue where removing a group used the non-atomic
`remove_dir_all()` function when destroying a group unconditionally.
This could lead to two different threads suddenly holding a lock to
the same group. Make sure that the new locking mechanism is used,
which prevents that, before removing the group. This is also a bit
more conservative now, as it specifically removes the owner file and
group directory separately to avoid accidentally removing snapshots in
case we made an oversight.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
when two clients change the owner of a backup store, a race condition
can arise. add locking to avoid this.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
adds double stat'ing and removes directory hierarchy to bring manifest
locking in-line with other locks used by the BackupDir trait.
if the old locking mechanism is still supposed to be used, this still
falls back to the previous lock file. however, we already add double
stat'ing since it is trivial to do here and should only provide better
safety when it comes to removing locks.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
to avoid issues when removing a group or snapshot directory where two
threads hold a lock to the same directory, move locking to the tmpfs
backed '/run' directory. also adds double stat'ing to make it possible
to remove locks without certain race condition issues.
this new mechanism is only employed when we can be sure that a reboot
has occurred, so that all processes are using the new locking
mechanism. otherwise, two separate processes could assume they have
exclusive rights to a group or snapshot.
bumps the rust version to 1.81 so we can use `std::fs::exists` without
issue.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
[TL: drop unused format_err import]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to avoid duplicate code, add helpers for locking groups and snapshots
to the BackupGroup and BackupDir traits respectively and refactor
existing code to use them.
this also adapts error handling by adding relevant context to each
locking helper call site. otherwise, we might lose valuable
information useful for debugging. note, however, that users that
relied on specific error messages will break.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Since we are now exposing functions to get the password and the
encryption password, this should be private.
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Allows to load credentials passed down by systemd. A possible use-case
is safely storing the server's password in a file encrypted by the
systems TPM, e.g. via
```
systemd-ask-password -n | systemd-creds encrypt --name=proxmox-backup-client.password - my-api-token.cred
```
which then can be used via
```
systemd-run --pipe --wait --property=LoadCredentialEncrypted=proxmox-backup-client.password:my-api-token.cred \
proxmox-backup-client ...
```
or from inside a service.
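A minimal sketch of how such a credential can be consumed on the
client side; the credential name is just the one from the example
above, and the lookup via $CREDENTIALS_DIRECTORY is the standard
systemd mechanism:
```
use std::{env, fs, io, path::PathBuf};

fn load_systemd_credential(name: &str) -> io::Result<Option<String>> {
    match env::var_os("CREDENTIALS_DIRECTORY") {
        Some(dir) => {
            let path = PathBuf::from(dir).join(name);
            Ok(Some(fs::read_to_string(path)?.trim_end().to_owned()))
        }
        None => Ok(None), // not started with systemd-provided credentials
    }
}
```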
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Adapt the description for the backup specification to use
`archive-name` and `type` over `label` and `ext`, to be in line with
the terminology used in the documentation.
Further, explicitly describe the `path` as `source-path` to be less
ambiguous.
In order to avoid formatting issues in the man pages because of line
breaks after a hyphen, show the backup specification description in
multiple lines.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Currently if any `?`/`bail!` happens between mounting and completing
the creation process unmounting will be skipped. Adding this guard
solves that problem and makes it easier to add things in the future
without having to worry about a disk not being unmounted in case of a
failed creation.
Reported-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Tested-by: Christian Ebner <c.ebner@proxmox.com>
fixes the clippy warning on types T implementing Copy:
```
warning: using `clone` on type `T` which implements the `Copy` trait
```
followed by formatting fixups via `cargo fmt`.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Drop the pub scope for `DataStore`s `list_images` method.
This method is only used to generate a list of index files found in
the datastore for iteration during garbage collection. There are no
other call sites and this is intended to only be used within the
module itself. This allows being more flexible for future method
signature adaptations.
No functional changes.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
by switching on deprecations and using some backported types already
available on 0.14:
- use body::HttpBody::collect() instead of to_bytes() directly on Body
- use server::conn::http2::Builder instead of server::conn::Http with
http2_only
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to avoid upgrading to hyper 1 / http 1 right now. this is a Debian/Proxmox
specific workaround.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Commit:
efc09f63c (docs: tech overview: avoid 'we' and other small style fixes/additions)
introduced the comparison with 13 lottery games, but sadly without any
mention of how to arrive at that number.
When calculating I did arrive at 8-9 games (8 is more probable, 9 is
less probable), so rewrite to 'chance is lower than 8 lottery games' and
give the calculation directly inline as a reference.
Fixes: efc09f63 ("docs: tech overview: avoid 'we' and other small style fixes/additions")
Suggested-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[TL: reference commit that introduced this]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The type `(Box<dyn IndexFile + Send>, usize, Vec<(usize, u64)>)` is not
Sync, so it makes more sense to use Rc. This is suggested by clippy.
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Use the UTIME_NOW and UTIME_OMIT constants defined in the libc crate
instead of redefining them. This improves consistency, as utimensat
and its timespec parameter are also defined via the libc crate.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
When wiping a block device with a GUID partition table, the header
backup might get left behind at the end of the disk. This commit also
wipes the last 4096 bytes of the disk, making sure that a GPT header
backup is erased, even from disks with 4k sector sizes.
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Replace the external invocation of `dd` with direct file writes using
`std::os::unix::fs::FileExt::write_all_at` to zero out the start of the
disk.
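A standalone sketch of the `write_all_at` based approach, also
covering the GPT backup header at the end of the device (the 1 MiB
figure for the start is an assumption for illustration):
```
use std::fs::OpenOptions;
use std::io::{Seek, SeekFrom};
use std::os::unix::fs::FileExt;

fn wipe_signatures(device: &str) -> std::io::Result<()> {
    let mut file = OpenOptions::new().write(true).open(device)?;
    let size = file.seek(SeekFrom::End(0))?; // also works for block devices
    file.write_all_at(&vec![0u8; 1024 * 1024], 0)?; // start of the disk
    if size >= 4096 {
        file.write_all_at(&[0u8; 4096], size - 4096)?; // GPT header backup area
    }
    file.sync_all()
}
```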
Co-authored-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Mention in the docs and the api parameter description the limitations
for archive name labels. They must contain alphanumerics, hyphens and
underscores only to match the regex pattern.
By setting this in the api parameter description, it will be included
in the man page for proxmox-backup-client.
Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=6185
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add the new pbs-api-types crate to the cargo override section. Reorder
the overrides to be alphabetic.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Previously we just wrote to syslog directly. This doesn't work anymore
since the tracing update and we won't get any output in the tasklog.
Reported-by: https://forum.proxmox.com/threads/158764/
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Fixes a race condition where the backup upload stream can miss an
error returned by pxar::create_archive, because the error state is
only set after the backup stream was already polled.
On instantiation, `PxarBackupStream` spawns a future handling the
pxar archive creation, which sends the encoded pxar archive stream
(or streams in case of split archives) through a channel, received
by the pxar backup stream on polling.
In case this channel is closed, as signaled by it returning an error,
the poll logic will propagate a possible error that occurred during
pxar creation by taking it from the `PxarBackupStream`.
As this error might not have been set just yet, this can lead to
incorrectly terminating a backup snapshot with success, even though an
error occurred.
To fix this, introduce a dedicated notifier for each stream instance
and wait for the archiver to signal it has finished via this
notification channel. In addition, extend the `PxarBackupStream` by a
`finished` flag to allow early return on subsequent polls, which
would otherwise block, waiting for a new notification.
In case of premature termination of the pxar backup stream, no
additional measures have to be taken, as the abort handle already
terminates the archive creation.
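A simplified sketch of the synchronization idea with tokio's `Notify`
(the real stream additionally keeps a `finished` flag for subsequent
polls):
```
use std::sync::{Arc, Mutex};
use tokio::sync::Notify;

async fn finish_stream(
    archiver_done: Arc<Notify>,
    error_slot: Arc<Mutex<Option<String>>>,
) -> Result<(), String> {
    // Wait until the archiver signalled completion; only then is the
    // error state guaranteed to be final.
    archiver_done.notified().await;
    match error_slot.lock().unwrap().take() {
        Some(err) => Err(err),
        None => Ok(()),
    }
}
```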
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The prune schedule simulator returned an "X/Y is not an integer" error
for a schedule that uses a `start..end` hour range combined with a
`/`-separated time step-gap, while that works out fine for actual
prune jobs in PBS.
Previously, a schedule like `5..23/3` was mistakenly interpreted as
hour-start = `5`, hour-end = `23/3`, hour-step = `1`, resulting in
above parser error for hour-end. By splitting the right hand side on
`/` to extract the step and normalizing that we correctly get
hour-start = `5`, hour-end = `23`, hour-step = `3`.
Short reminder: the hours and minutes parts are treated separately and
can both be declared as a range, a step or a range-step, so `5..23/3:15` does
not mean the step size is 3:15 (i.e. 3.25 hours or 195 minutes) but
rather 3 hours step size and each resulting interval happens on the
15 minute of that hour.
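The simulator itself is JavaScript, but the corrected parsing order
can be sketched like this:
```
fn parse_hour_spec(spec: &str) -> Option<(u32, u32, u32)> {
    // Split off the step first, so "23/3" is never mistaken for an end value.
    let (range, step) = match spec.split_once('/') {
        Some((range, step)) => (range, step.parse().ok()?),
        None => (spec, 1),
    };
    let (start, end) = match range.split_once("..") {
        Some((start, end)) => (start.parse().ok()?, end.parse().ok()?),
        None => {
            let single = range.parse().ok()?;
            (single, single)
        }
    };
    Some((start, end, step))
}

fn main() {
    // hour-start = 5, hour-end = 23, hour-step = 3
    assert_eq!(parse_hour_spec("5..23/3"), Some((5, 23, 3)));
}
```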
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[TL: add context to commit message partially copied from bug report
and add a short reminder how these intervals work, can be confusing]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add new markers so that we can refer to the chapters.
Signed-off-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Use an intermediate variable for the frequently used datastore name
and backup snapshot name; while it's not often the case that this is
worth it, the diff(stat) makes a good argument that it is here.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since commit 8ea00f6e ("allow to abort verify jobs") errors
propagated up to the verify job's worker call site are interpreted as
job aborts.
The manifest update did not honor this, leading to the verify job
being aborted with the misleading log entry:
`verification failed - job aborted`
Instead, handle the manifest update error as non-fatal just like any
other verification related error: log it, including the error message,
and continue verification with the next item.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Fixes: 5b7f4455 ("docs: add manual page for verification.cfg")
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
[TL: add references to commit that this fixes]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Have our docgen tool generate a synopsis for the prune.cfg schema,
use that output in a new prune.cfg manpage, and include it in the
appropriate appendix of our html/pdf rendered admin guide.
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
[TL: expand commit message and keep alphabetical order for configs in
the guide.]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds the missing log context for cases where a prune is not executed
as a dedicated tokio task.
Commit 432de66a ("api: make prune-group a real workertask")
conditionally moved the prune group logic into its own tokio task.
However, the log context was missing for cases where no dedicated
task/thread is started, leading to the worker task state being
unknown after finish, as no logs are written to the worker task log
file.
Reported in the community forum:
https://forum.proxmox.com/threads/161273/
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
* Improved errors when panics occur and the panic message is a
formatted (not static) string. This worked already for &str literals,
but not for Strings.
Downcasting to both &str and String is also done by the Rust Standard
Library in the default panic handler. See:
b605c65b6e/library/std/src/panicking.rs (L777)
* Switched from eprintln! to tracing::error when logging panics in the
task scheduler.
Signed-off-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Fixes the suspicious_open_options clippy lint, for example:
```
warning: file opened with `create`, but `truncate` behavior not defined
--> src/api2/tape/restore.rs:1713:18
|
1713 | .create(true)
| ^^^^^^^^^^^^- help: add: `.truncate(true)`
|
= help: if you intend to overwrite an existing file entirely, call `.truncate(true)`
= help: if you instead know that you may want to keep some parts of the old file, call `.truncate(false)`
= help: alternatively, use `.append(true)` to append to the file instead of overwriting it
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#suspicious_open_options
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
As per its documentation [1]:
> If .create_new(true) is set, .create() and .truncate() are ignored.
This gets rid of the "file opened with `create`, but `truncate`
behavior not defined " clippy warnings.
[1] https://doc.rust-lang.org/std/fs/struct.OpenOptions.html#method.create_new
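For illustration, the pattern that silences the lint:
```
use std::fs::OpenOptions;

fn main() -> std::io::Result<()> {
    // create_new(true) fails if the file already exists, so create()
    // and truncate() are irrelevant here and can simply be dropped.
    let _file = OpenOptions::new()
        .write(true)
        .create_new(true)
        .open("/tmp/example-create-new")?;
    Ok(())
}
```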
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Instead of using and depending on the `http` crate directly, use and
depend on the re-exported `hyper::http`. Adapt namespace prefixes
accordingly.
This makes sure the `hyper::http` types are version compatible and
allows to possibly depend on incompatible versions of `http` in the
workspace in the future.
No functional changes intended.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Currently, the list of backups only shows removed backups and is
missing backups that are kept, though they are shown correctly in the
calendar view.
The reason is that a refactor (see Fixes tag) moved the definition of
a custom field renderer referencing `me` to a scope where `me` is not
defined. This causes the renderer to error out for "kept" backups,
which apparently causes the grid to skip the rows altogether (without
any messages in the console).
Fix this by replacing the broken `me` reference.
Fixes: bb044304 ("prune sim: move PruneList to more static declaration")
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
We moved the whole code from the pbs-api-types subdirectory into the proxmox
git repository and build a rust debian package for the crate.
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Fixes:
warning: needless call to `as_bytes()`
--> pbs-tape/src/sg_pt_changer.rs:913:45
|
913 | let rem = SCSI_VOLUME_TAG_LEN - voltag.as_bytes().len();
| ^^^^^^^^^^^^^^^^^^^^^^^ help: `len()` can be called directly on strings: `voltag.len()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_as_bytes
= note: `#[warn(clippy::needless_as_bytes)]` on by default
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Commit da11d226 ("fix #5710: api: backup: stat known chunks on backup
finish") introduced a seemingly cheap server side check to verify
existence of known chunks in the chunk store by stating. This check
however does not scale for large backup snapshots which might contain
millions of known chunks, as reported in the community forum [0].
Revert the changes for now instead of making this opt-in/opt-out; a
more general approach has to be thought out to mark backup snapshots
which fail verification.
Link to the report in the forum:
[0] https://forum.proxmox.com/threads/158812/
Fixes: da11d226 ("fix #5710: api: backup: stat known chunks on backup finish")
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
[ TL: add tag to subject and shorten it ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this moves the parsing of the concrete DataStoreConfig as well as the
check whether a store is mounted after the authorization checks.
otherwise we always check for all datastores whether they are mounted,
even if the requesting user has no privileges to list the specified
datastore anyway.
this may improve performance for large setups, as we won't need to stat
mounted datastores regardless of the user's privileges. this was
suggested on the mailing list [1].
[1]: https://lore.proxmox.com/pbs-devel/embeb48874-d400-4e69-ae0f-2cc56a39d592@93f95f61.com/
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
QEMU CLI option parsing requires doubling the commas in values; this
seems to also apply when a combined option is used to pass down the
key=value pairs to the internal options, like for the combined -drive
option that was replaced by the slightly lower-level blockdev option
in commit 668b8383 ("file restore: qemu helper: switch to more modern
blockdev option for drives"). So there we could now drop the comma
duplication, as blockdev directly interprets these options and thus no
longer needs the comma escaping.
We missed two instances because they were not part of the "main"
format string, which broke some use cases.
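As a rough illustration of the escaping rule that applied to the old combined
-drive option (the helper below is hypothetical, not the code touched by this
fix):
```
/// Escape a value for embedding in a comma-separated QEMU option string:
/// a literal ',' in the value must be written as ",,".
fn escape_qemu_option_value(value: &str) -> String {
    value.replace(',', ",,")
}

fn main() {
    assert_eq!(
        escape_qemu_option_value("/path/with,comma.img"),
        "/path/with,,comma.img"
    );
}
```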
Fixes: 668b8383 ("file restore: qemu helper: switch to more modern blockdev option for drives")
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Mira Limbeck <m.limbeck@proxmox.com>
[ TL: add more context, but it's a bit guesstimation ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Gotify and webhook targets will use the HTTP proxy settings from
node.cfg; the documentation should mention this.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Change detection mode set to metadata compares a regular file entry's
metadata to the reference metadata archive of the previous run. The
`pxar::format::Stat` as stored in `pxar::Metadata` however does not
include the actual file size; it only partially stores information
gathered from stating the file.
This means that the actual file size is never compared and therefore,
if the file size did change but the other metadata information did not
(including the mtime, which might have been restored), that file will
be incorrectly reused.
A subsequent restore will however fail, because the expected file size
as encoded in the metadata archive does not match the file size as
stored in the payload archive.
Fix this by adding the missing file size check, comparing the size
for the given file against the one stored in the metadata archive.
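Conceptually, the reuse decision now has to look roughly like the following
sketch; the struct and field names are illustrative stand-ins, not the actual
pxar/client types:
```
struct FileInfo {
    mtime: i64,
    mode: u32,
    size: u64, // not covered by pxar::format::Stat, hence the extra check
}

fn is_reusable(on_disk: &FileInfo, reference: &FileInfo) -> bool {
    on_disk.mtime == reference.mtime
        && on_disk.mode == reference.mode
        // the previously missing comparison: matching metadata but a
        // different size must not lead to reuse
        && on_disk.size == reference.size
}

fn main() {
    let on_disk = FileInfo { mtime: 1, mode: 0o644, size: 42 };
    let reference = FileInfo { mtime: 1, mode: 0o644, size: 7 };
    assert!(!is_reusable(&on_disk, &reference));
}
```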
Link to issue reported in community forum:
https://forum.proxmox.com/threads/158722/
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
From the QEMU man page:
> The most explicit way to describe disks is to use a combination of
> -device to specify the hardware device and -blockdev to describe the
> backend. The device defines what the guest sees and the backend
> describes how QEMU handles the data. It is the only guaranteed stable
> interface for describing block devices and as such is recommended for
> management tools and scripting.
> The -drive option combines the device and backend into a single
> command line option which is a more human friendly. There is however
> no interface stability guarantee although some older board models
> still need updating to work with the modern blockdev forms.
From the perspective of live restore, there should be no behavioral
change, except that the used driver is now explicitly specified. The
'-device' options are still the same, the fact that 'if=none' is gone
shouldn't matter, because the '-device' option was already used to
define the interface (i.e. virtio-blk) and the 'id' option needed to
be replaced with 'node-name'.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Remove the `log` dependency in pbs-client and change all the invocations
to tracing logs.
No functional change intended.
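The mechanical shape of such a change is roughly the following (illustrative
only, with a hypothetical call site):
```
// before: going through the `log` facade
// log::info!("starting restore of {}", archive_name);

// after: using the tracing macros directly, like the rest of the workspace
fn log_restore_start(archive_name: &str) {
    tracing::info!("starting restore of {archive_name}");
}

fn main() {
    log_restore_start("root.pxar");
}
```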
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
This was introduced by commit fdea4e53 ("client: implement prepare
reference method") to read a reference metadata archive for detection
of unchanged, reusable files when using change detection mode set to
`metadata`.
Avoid unnecessary cloning of the atomic reference counted
`BackupReader` instance, as it is used exclusively for this codepath.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Fixes the cargo doc warning:
```
warning: unresolved link to `UserInformation`
--> src/auth.rs:418:53
|
418 | /// Check if a userid is enabled and return a [`UserInformation`] handle.
| ^^^^^^^^^^^^^^^ no item named `UserInformation` in scope
|
= help: to escape `[` and `]` characters, add '\' before them like `\[` or `\]`
= note: `#[warn(rustdoc::broken_intra_doc_links)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the cargo doc lint:
```
warning: unclosed HTML tag `uuid`
--> pbs-datastore/src/datastore.rs:60:41
|
60 | /// - could not stat /dev/disk/by-uuid/<uuid>
| ^^^^^^
|
= note: `#[warn(rustdoc::invalid_html_tags)]` on by default
warning: unclosed HTML tag `uuid`
--> pbs-datastore/src/datastore.rs:61:26
|
61 | /// - /dev/disk/by-uuid/<uuid> is not a block device
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Also fix `Entries` link.
Fixes the cargo doc lint:
```
warning: redundant explicit link target
--> pbs-client/src/pxar/extract.rs:212:27
|
212 | /// * The [`Entry`][E]'s filename is invalid (contains nul bytes or a slash)
| ------- ^ explicit target is redundant
| |
| because label contains path that resolves to same destination
|
note: referenced explicit link target defined here
--> pbs-client/src/pxar/extract.rs:221:14
|
221 | /// [E]: pxar::Entry
| ^^^^^^^^^^^
= note: when a link's destination is not specified,
the label is used to resolve intra-doc links
= note: `#[warn(rustdoc::redundant_explicit_links)]` on by default
help: remove explicit link target
|
212 | /// * The [`Entry`]'s filename is invalid (contains nul bytes or a slash)
| ~~~~~~~~~
warning: redundant explicit link target
--> pbs-client/src/pxar/extract.rs:215:37
|
215 | /// fetching the next [`Entry`][E]), the error may be handled by the
| ------- ^ explicit target is redundant
| |
| because label contains path that resolves to same destination
|
note: referenced explicit link target defined here
--> pbs-client/src/pxar/extract.rs:221:14
|
221 | /// [E]: pxar::Entry
| ^^^^^^^^^^^
= note: when a link's destination is not specified,
the label is used to resolve intra-doc links
help: remove explicit link target
|
215 | /// fetching the next [`Entry`]), the error may be handled by the
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the cargo doc lint:
```
warning: this URL is not a hyperlink
--> pbs-datastore/src/data_blob.rs:555:5
|
555 | /// https://github.com/facebook/zstd/blob/dev/lib/common/error_private.h
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: bare URLs are not automatically turned into clickable links
= note: `#[warn(rustdoc::bare_urls)]` on by default
help: use an automatic link instead
|
555 | /// <https://github.com/facebook/zstd/blob/dev/lib/common/error_private.h>
| + +
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
otherwise users will get a `b.store is null` error in the console and
a loading spinner is shown for a while.
the issue in question seems to stem from the event handler that gets
attached when the "Prune & GC Jobs" tab is opened for a specific
datastore. however, that event handler should *not* be attached for
the "Datastore" -> "Prune & GC Jobs" panel. it seems that the event
handler does still get attached, and will fire in the "Datastore"
view if it hasn't fired while opened in a specific datastore
(it should only trigger a single time).
that scenario seems to occur when a different tab was previously
selected in a specific datastore and navigation is triggered via the
side bar from the "Datastore" -> "Prune & GC Jobs" tab to a specific
datastore. that leads to the "Prune & GC Jobs" view for that specific
datastore being opened very briefly in which the event handler gets
attached, navigation then automatically moves to the previously
selected tab. this will stop the store from updating, ensuring that
the event is never triggered. when we then move to
the "Datastore" -> "Prune & GC Jobs" tab again, the event handler will
be triggered, but the store of the view is null, leading to the error.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Tested-by: Fiona Ebner <f.ebner@proxmox.com>
Since we don't want to have lingering file descriptors on any fork +
exec, like the reload code from the proxmox-daemon crate we're using
for the rest-server(s) does, as that can have serious side effects and
even cause hangs.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
[ TL: Reword commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
otherwise:
```
warning: unclosed HTML tag `uid`
--> proxmox-file-restore/src/main.rs:686:63
|
686 | /// "www-data", so we use a custom one in /run/proxmox-backup/<uid> instead.
| ^^^^^
|
= note: `#[warn(rustdoc::invalid_html_tags)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Otherwise:
```
warning: unresolved link to `PRIVILEGES`
--> pbs-config/src/acl.rs:15:71
|
15 | /// Map of pre-defined [Roles](Role) to their associated [privileges](PRIVILEGES) combination
| ^^^^^^^^^^ no item named `PRIVILEGES` in scope
|
= help: to escape `[` and `]` characters, add '\' before them like `\[` or `\]`
= note: `#[warn(rustdoc::broken_intra_doc_links)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
```
warning: field assignment outside of initializer for an instance created with Default::default()
--> pbs-datastore/src/chunker.rs:431:5
|
431 | ctx.total = buffer.len() as u64;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: consider initializing the variable with `chunker::Context { total: buffer.len() as u64, ..Default::default() }` and removing relevant reassignments
--> pbs-datastore/src/chunker.rs:430:5
|
430 | let mut ctx = Context::default();
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#field_reassign_with_default
= note: `#[warn(clippy::field_reassign_with_default)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the clippy lint:
```
warning: empty line after doc comment
--> src/tape/pool_writer/mod.rs:441:5
|
441 | / /// updated.
442 | |
| |_
...
448 | / pub fn append_snapshot_archive(
449 | | &mut self,
450 | | snapshot_reader: &SnapshotReader,
451 | | ) -> Result<(bool, usize), Error> {
| |_____________________________________- the comment documents this method
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#empty_line_after_doc_comments
= help: if the empty line is unintentional remove it
help: if the documentation should include the empty line include it in the comment
|
442 | ///
|
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Use the trait implementations of `ApiVersion` to perform operator
based version comparisons. This makes the comparison more readable
and reduces the risk for errors.
No functional change intended.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Derive and implement the traits to allow comparison of two
`ApiVersion` instances for more direct and easy api version
comparisons. Further, add some basic test cases to reduce risk of
regressions.
This is useful for e.g. feature compatibility checks by comparing api
versions of remote instances.
Example comparison:
```
api_version >= ApiVersion::new(3, 3, 0)
```
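For illustration, a standalone sketch of how the derived ordering behaves; the
real type lives in pbs-api-types, so everything below just mirrors the idea:
```
// Field order matters: deriving Ord/PartialOrd yields a lexicographic
// comparison over (major, minor, release), which is exactly the wanted
// version ordering.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
struct ApiVersion {
    major: u64,
    minor: u64,
    release: u64,
}

impl ApiVersion {
    const fn new(major: u64, minor: u64, release: u64) -> Self {
        Self { major, minor, release }
    }
}

fn main() {
    let remote = ApiVersion::new(3, 2, 9);
    assert!(remote < ApiVersion::new(3, 3, 0));
    assert!(ApiVersion::new(3, 3, 11) >= ApiVersion::new(3, 3, 0));
}
```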
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The `ApiVersion` type was introduced in commit a926803b
("api/api-types: refactor api endpoint version, add api types")
including the `repoid`, added for completeness when converting from
a pre-existing `ApiVersionInfo` instance, as returned by the
`version` api endpoint.
Drop the additional `repoid` field, since this is currently not used,
can be obtained from the `ApiVersionInfo` as well and only hinders the
implementation for easy api version comparison.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Fixes the question_mark clippy lint:
```
warning: this `let...else` may be rewritten with the `?` operator
--> pbs-datastore/src/datastore.rs:101:5
|
101 | / let Some(ref device_uuid) = config.backing_device else {
102 | | return None;
103 | | };
| |______^ help: replace it with: `let ref device_uuid = config.backing_device?;`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#question_mark
= note: `#[warn(clippy::question_mark)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the lines_filter_map_ok clippy lint:
```
warning: `filter_map()` will run forever if the iterator repeatedly produces an `Err`
--> proxmox-restore-daemon/src/proxmox_restore_daemon/disk.rs:195:14
|
195 | .filter_map(Result::ok)
| ^^^^^^^^^^^^^^^^^^^^^^ help: replace with: `map_while(Result::ok)`
|
note: this expression returning a `std::io::Lines` may produce an infinite number of `Err` in case of a read error
--> proxmox-restore-daemon/src/proxmox_restore_daemon/disk.rs:193:18
|
193 | for f in BufReader::new(File::open("/proc/filesystems")?)
| __________________^
194 | | .lines()
| |____________________^
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#lines_filter_map_ok
= note: `#[warn(clippy::lines_filter_map_ok)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the single_component_path_imports clippy lint:
```
warning: this import is redundant
--> proxmox-file-restore/src/block_driver_qemu.rs:15:1
|
15 | use proxmox_systemd;
| ^^^^^^^^^^^^^^^^^^^^ help: remove it entirely
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#single_component_path_imports
= note: `#[warn(clippy::single_component_path_imports)]` on by default
warning: this import is redundant
--> proxmox-backup-client/src/mount.rs:19:1
|
19 | use proxmox_systemd;
| ^^^^^^^^^^^^^^^^^^^^ help: remove it entirely
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#single_component_path_imports
= note: `#[warn(clippy::single_component_path_imports)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the suspicious_doc_comments clippy lints:
```
warning: this is an outer doc comment and does not apply to the parent module or crate
--> proxmox-restore-daemon/src/main.rs:1:1
|
1 | ///! Daemon binary to run inside a micro-VM for secure single file restore of disk images
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#suspicious_doc_comments
= note: `#[warn(clippy::suspicious_doc_comments)]` on by default
help: use an inner doc comment to document the parent module or crate
|
1 | //! Daemon binary to run inside a micro-VM for secure single file restore of disk images
|
warning: this is an outer doc comment and does not apply to the parent module or crate
--> proxmox-restore-daemon/src/proxmox_restore_daemon/mod.rs:1:1
|
1 | ///! File restore VM related functionality
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#suspicious_doc_comments
help: use an inner doc comment to document the parent module or crate
|
1 | //! File restore VM related functionality
|
warning: this is an outer doc comment and does not apply to the parent module or crate
--> proxmox-restore-daemon/src/proxmox_restore_daemon/api.rs:1:1
|
1 | ///! File-restore API running inside the restore VM
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#suspicious_doc_comments
help: use an inner doc comment to document the parent module or crate
|
1 | //! File-restore API running inside the restore VM
|
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
The mount types were probably here for compatibility with older proxmox-sys.
Fixes the useless_conversion clippy lints:
```
warning: useless conversion to the same type: `std::os::fd::OwnedFd`
--> proxmox-backup-client/src/mount.rs:172:23
|
172 | let pr: OwnedFd = pr.into(); // until next sys bump
| ^^^^^^^^^ help: consider removing `.into()`: `pr`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#useless_conversion
= note: `#[warn(clippy::useless_conversion)]` on by default
warning: useless conversion to the same type: `std::os::fd::OwnedFd`
--> proxmox-backup-client/src/mount.rs:173:23
|
173 | let pw: OwnedFd = pw.into();
| ^^^^^^^^^ help: consider removing `.into()`: `pw`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#useless_conversion
warning: useless conversion to the same type: `pbs_api_types::BackupArchiveName`
--> proxmox-file-restore/src/main.rs:484:18
|
484 | &archive_name.try_into()?,
| ^^^^^^^^^^^^^^^^^^^^^^^
|
= help: consider removing `.try_into()`
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#useless_conversion
= note: `#[warn(clippy::useless_conversion)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the needless_option_as_deref clippy lint:
```
warning: derefed type is same as origin
--> proxmox-backup-client/src/main.rs:1154:21
|
1154 | payload_target.as_ref().as_deref(),
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: try: `payload_target.as_ref()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_option_as_deref
= note: `#[warn(clippy::needless_option_as_deref)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the doc_lazy_continuation clippy lint, e.g.:
```
warning: doc list item without indentation
--> src/server/pull.rs:764:5
|
764 | /// -- attempt to pull each NS in turn
| ^
|
= help: if this is supposed to be its own paragraph, add a blank line
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#doc_lazy_continuation
help: indent this line
|
764 | /// -- attempt to pull each NS in turn
| ++
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the get_first clippy lint:
```
warning: accessing first element with `matching_stores.get(0)`
--> src/bin/proxmox_backup_manager/datastore.rs:284:26
|
284 | if let Some(store) = matching_stores.get(0) {
| ^^^^^^^^^^^^^^^^^^^^^^ help: try: `matching_stores.first()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#get_first
= note: `#[warn(clippy::get_first)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
The current version check does not cover cases where the minor
version is 3, but the release version is below 11. Fix this by
extending the check accordingly.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
[ TL: re-sort line to go from bigger to smaller ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Currently, showing the Datastore summary page leads to errors since
the status returned by the API does not contain any fields that are
checked by the component rendering the datastore summary. We solve
this by first checking if the datastore is currently mounted and by
masking the element if it is currently unmounted.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
The tooltip text shown for the remove vanished flag when hovering
is incorrect for push direction. By using `sync target` over `local`,
make the text agnostic to the actual sync direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Our package uses <x>.<y>.<z>-<rev> as version format; here we get
version=<x>.<y> and release=<z>, so we rendered the version like
<x>.<y>-<z>, which is rather wrong.
And while the return value of the API call might be a bit odd and
should probably change (or at least add a full version property), for
now it's what it is, so at least render it correctly.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
I made a mistake and applied the v1 not the v2 of the series, show
this by merging the actual v2; albeit this should not be done too
frequently to avoid making the git history too messy – sorry!
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Without this check, if a mount unit is present, but the file system is
not mounted, it will just get overwritten. The unit might belong to an
existing datastore.
There already is a check against a duplicate datastore, but only after
the mount unit is already overwritten and having the add-datastore
flag present is not a precondition to trigger the issue.
The check is done even if the newly created directory datastore is
removable. While in that case, the mount unit is not overwritten, the
conflict for the mount point is still present, so it is nice to fail
early.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In preparation to check for a pre-existing mount unit.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
Without this check, if a mount unit is present, but the file system is
not mounted, it will just get overwritten. The unit might belong to an
existing datastore.
There already is a check against a duplicate datastore, but only after
the mount unit is already overwritten and having the add-datastore
flag present is not a precondition to trigger the issue.
The check is done even if the newly created directory datastore is
removable. While in that case, the mount unit is not overwritten, the
conflict for the mount point is still present, so it is nice to fail
early.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In preparation to check for a pre-existing mount unit.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[ TL: move format template variable directly into string ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Pass the `-E` option to, quoting its man-page, "don't use a saved
environment (the structure caching all cross-references), but rebuild
it completely".
Do this as, with reusing the environment, one gets some empty results
for synopsis stuff depending on build order: for example, the synopsis
in the command-syntax appendix HTML output is empty while the same
synopsis used for the dedicated HTML page is complete.
By making the build-log more verbose I noticed some emitted
'env-purge-doc' events from sphinx; while this itself might be
harmless (I didn't follow the rat's tail to its end), it made me a bit
suspicious about caching and wrong/missing invalidation.
With ignoring the environment this is fixed; a diffoscope comparison
shows that not only the command-syntax page, but many others have the
various synopsis content added again. There are solely added lines,
none removed nor changed, so it seems fine to enable that option
without an in-depth sphinx review.
Note, I first suspected the use of a separate "doctree pickles" cache
directory (`-d` option), which is used for all output types besides
the man-pages one, which in turn uses the default .doctree directory.
But changing the man-page target to also use the custom doctree cache
had no effect on the build-result whatsoever (compared with
diffoscope).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
these are particularly problematic since GC will walk the whole datastore tree
on the file system, and will thus pick up indices (but not chunks!) from nested
directories that are ignored in other code paths that use our regular
iterators..
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
and improve the variable naming while we are at it. this allows the check to be
re-used in other code paths, like when starting a garbage collection.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
there are two kinds of overlap we need to check here (see the sketch below):
- two removable datastores backed by the same device must not have nested
relative paths on the device
- any two datastores must not have nested absolute paths
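a minimal sketch of the nesting check on absolute paths (illustrative, std
only, not the actual helper):
```
use std::path::Path;

/// two paths overlap if either one is a prefix of the other, i.e. one
/// datastore would live inside the other's directory tree.
fn paths_overlap(a: &Path, b: &Path) -> bool {
    a.starts_with(b) || b.starts_with(a)
}

fn main() {
    assert!(paths_overlap(Path::new("/mnt/store"), Path::new("/mnt/store/nested")));
    // component-wise comparison: "store-b" is not nested below "store"
    assert!(!paths_overlap(Path::new("/mnt/store"), Path::new("/mnt/store-b")));
}
```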
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
To be in line with the updated permission requirements, as
Datastore.Audit is now required to read and edit sync jobs in push
direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Sync jobs in push and pull direction require a different set of
privileges for the various api methods provided. Update the
descriptions to include the push direction and list them
accordingly.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Users require `Datastore.Audit` on the source datastore to read sync
jobs. Further restrict also the permissions to modify sync jobs in
push direction to include the `Datastore.Audit` permission on the
source, as otherwise a user is able to create or edit sync jobs in
push direction, but not able to see them.
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
* consistently use "medium" (singular), as only one is needed for
installation (installation-media.rst not renamed)
* add short introduction to recently added chapter "Installation Media"
* update minimum required flash drive storage space to 2 GB
* remove CD-ROM (too little storage space) but keep DVD
* mention explicitly that data get overwritten on installation media /
installation target disks
* mention that using `dd` will require root privileges
* add accidentally cut off text when copying from PVE docs
* add reference labels to currently needed section titles
* reword some paragraphs for completeness and readability
* mention all installation methods in the intro of "Server Installation"
* add the boot order as possible boot issue
* remove recently added redundant product website hyperlinks (as earlier
with commit 34407477e2)
* fix broken heading level of APT-based PBC repo
* slightly reorder sub-chapters of "Installation":
After adding the chapter "Installation Media" (d363818641), the chapter
order under "Installation" is:
1. System Requirements
2. Installation Media
3. Debian Package Repositories
4. Server Installation
5. Client Installation
But repos are more likely to be configured after installation, and for
other installation methods chapter links exist anyway. So to keep the
chapter order more logical, "Debian Package Repositories" is now moved
after "Client Installation".
Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
pbs-datastore::datastore::is_datastore_mounted_at() verifies that the
mounted file system has the expected UUID. Therefore we don't have to
error out if we try to mount an already mounted removable datastore.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Show the full error context when fetching the remote target
namespaces fails. As logging of the error is handled by the calling
sync job, reformat the error to include the error context before
returning.
Instead of the error
```
TASK ERROR: Fetching remote namespaces failed, remote returned error
```
the user is now presented with an error like
```
TASK ERROR: Fetching remote namespaces failed, remote returned error: datastore 'removable1' is not mounted
```
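A minimal sketch of how such context-preserving reformatting can look with
anyhow (stand-in functions, not the actual sync code):
```
use anyhow::{bail, format_err, Result};

// stand-in for the remote api call failing while the datastore is unmounted
fn fetch_remote_namespaces() -> Result<Vec<String>> {
    bail!("datastore 'removable1' is not mounted")
}

fn list_namespaces() -> Result<Vec<String>> {
    // reformat the error so the full context chain (alternate `{:#}` display)
    // ends up in the message the calling sync job logs
    fetch_remote_namespaces().map_err(|err| {
        format_err!("Fetching remote namespaces failed, remote returned error: {err:#}")
    })
}

fn main() {
    if let Err(err) = list_namespaces() {
        eprintln!("TASK ERROR: {err:#}");
    }
}
```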
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
To ensure we can bind to the emptyText of a display-edit field;
otherwise the empty text can be confusing.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We don't use the abbreviation anywhere else in our UI or docs.
To avoid any confusion about this (loaded) abbreviation, this
commits replaces it with the full word "Namespace".
There is more than enough space in the top bar for the larger button
size, even on low resolution screens (checked on 1280x700).
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
instead of resetting to the originalValue. This makes it behave like
other similar fields (e.g. the combogrid).
Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
In an earlier version of this series the datastore path was absolute
for removable datastores. This is simply a leftover that was missed
when changing that to relative paths.
Reported-by: Markus Frank <m.frank@proxmox.com>
Fixes: 94a068e31 ("api: node: allow creation of removable datastore through directory endpoint")
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
when loading the verification state for a local snapshot, it must
first be ensured that it actually exists, else the lack of manifest
will be interpreted as corrupt snapshot triggering a "resync" that is
actually a sync of all missing snapshots, not just the newer ones,
which is what's actually wanted here.
The diff is best seen by telling git to ignore the whitespace changes.
Fixes: 0974ddfa ("fix #3786: api: add resync-corrupt option to sync jobs")
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
[ TL: reword subject and add a bit to commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fixes a regression introduced when switching from the plain string
to be used for archive names to the BackupArchiveName api type in
commit addfae26 ("api types: introduce `BackupArchiveName` type").
The archive name now always is stored including the server archive
name extension. Adapt the check for which archive types to display
the progress log output to reflect this change.
Fixes: addfae26 ("api types: introduce `BackupArchiveName` type")
Reported-by: Max Carrara <m.carrara@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
the current phrase leads to clumsy log messages such as:
> datastore 'store' is in datastore is being unmounted
this commit re-phrases that to:
> datastore 'store' is unavailable: datastore is being unmounted
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
A lot of removable datastore actions can alter the system state
(mounting, unmounting), so require Sys.Modify for lack of better
alternative.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
[ TL: improve commit subject and add access-description for create,
and delete, where we do a dynamic access check ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
since the SyncJobConfig struct now contains a 'sync-direction' property, we can
omit the 'direction' property of the SyncJobStatus struct. This makes a
few adaptions in the ui necessary:
* use the correct field
* handle 'pull' as default (since we don't necessarily get a
'sync-direction' in that case)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
In case we add another direction or another call site, doing it
without a wildcard match arm seems cleaner and more future-proof.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
[ TL: adapt subject/message slightly ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Jobs for both sync directions are now stored using the same `sync`
config section type, so drop the outdated helpers.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Add the sync direction for the sync job as optional config parameter
and refrain from using the config section type for conditional
direction check, as they are now the same (see previous commit).
Use the configured sync job parameter instead of passing it to the
various methods as function parameter and only filter based on sync
direction if an optional api parameter to distinguish/filter based on
direction is given.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Use `sync` as config section type string for both, sync jobs in push
and pull direction, renaming the now combined config plugin to sync
plugin.
Commit bcd80bf9 ("api types/config: add `sync-push` config type for
push sync jobs") introduced the additional config type with the
intent to reduce possible misconfiguration. Partially revert this to
use the same config type string again, since the misconfiguration
can happen nevertheless (by editing the config type) and currently
sync job configs are only listed partially when fetched via the
config api endpoint. The filtering based on the additional api
parameter is however retained.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
filter by (remote) store, remote, id, owner, direction.
Local store is only included on the global view, not the datastore
specific one.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of just the id, which makes the list in the global datastore
view a bit easier to digest (since it's now sorted by store first)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
but add a separate column for the direction so one still sees the
separate jobs.
change the 'local owner/user' to a single column, but add a tooltip in
the header to explain when it does what.
This makes the 'SyncJobsPullPushView' unnecessary, so delete it.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
so that one can list all sync jobs, both pull and push, at the same
time. To not confuse existing clients that only know of pull syncs, show
only them by default and make the 'all' parameter opt-in. (But add a
todo for 4.x to change that)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Commit addfae26 ("api types: introduce `BackupArchiveName` type")
introduced a dedicated archive name api type to add rust type
checking and bundle helpers to the api type. Since then, the backup
archive name to server archive name mapping is handled by its parser.
This however did not cover the `.conf` extension used for VM config
files. Add the missing `.conf` to `.conf.blob` mapping to the match
statement and the test cases.
Fixes: addfae26 ("api types: introduce `BackupArchiveName` type")
Reported-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Specifically about jobs and how they behave when the datastore is not
mounted, how to create and use devices with multiple datastores on
multiple PBS instances, and options for how to handle failed unmounts.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
... and deserialize with default if field is missing in data.
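Roughly, the relevant serde attribute looks like this; the struct and enum
below are trimmed stand-ins for the real pbs-api-types definitions:
```
use serde::Deserialize;

#[derive(Debug, Default, Deserialize, PartialEq)]
enum MountStatus {
    #[default]
    NotApplicable,
    Mounted,
    NotMounted,
}

#[derive(Debug, Deserialize)]
struct DataStoreListItem {
    store: String,
    // older servers do not send this field, so fall back to the default
    // instead of failing deserialization of the whole list item
    #[serde(default)]
    mount_status: MountStatus,
}

fn main() {
    let item: DataStoreListItem =
        serde_json::from_str(r#"{ "store": "local" }"#).unwrap();
    assert_eq!(item.mount_status, MountStatus::NotApplicable);
    println!("{item:?}");
}
```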
Reported-by: Aaron Lauterer <a.lauterer@proxmox.com>
Fixes: 76609915d6 ("pbs-api-types: add mount_status field to DataStoreListItem")
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
So it is possible to reset it after a failed unmount, or abort an
unmount task by resetting it through the API.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
The same word occurring twice in succession can lead to the brain
skipping the second occurrence. Change the name of the archives in the
example from backup.pxar to archive-name.pxar to avoid that effect.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
[ TL: squash in Christian's suggestion to use 'archive-name.pxar' ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The `Add datastore` window labels the flag for creating a removable
datastore as `Removable datastore`, while creating the datastore via the
storage/disks interface will refer to it as `is removable`.
Use the same `Removable datastore` as label for both locations.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Explain that the change detection mode `data` makes sure that no files
are considered reusable, even if their metadata might match, and that
the use of ctime and inode number is not possible for detection of
unchanged files if the filesystem was synced to a temporary location;
therefore the mtime and size are used for detection.
Also note the reduced deduplication when storing snapshots with
mixed archive formats on the same datastore.
Further, mention the backwards compatibility to older version of the
Proxmox Backup Server.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The example commands in the Change Detection Mode / File Exclusion
section are missing the command in the client invocation. Add the
backup command to the examples, so they are actually valid.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Add onlineHelp link to the consent-banner docs section in the popup when
inserting the consent-banner text.
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
This allows users to add/edit new webhook targets.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-By: Stefan Hanreich <s.hanreich@proxmox.com>
If a datastore with a prune job is removed, the prune job is preserved
as it is stored in /etc/proxmox-backup/prune.cfg. We also create a
default prune job for every datastore – this means that when reusing a
datastore that previously existed, you end up with duplicate prune jobs.
To avoid this we check if a prune job already exists, and when it does,
we refrain from creating the default one. (We also check if specific
keep-options have been added, if yes, then we create the job
nevertheless.)
Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Switch over to using the controller of the info panel directly, avoiding
firing events, and add a single store load to cause the mask-logic
when the status update store goes from succeeding to failure.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
No point in querying RRD metrics if it will fail anyway, so stop them
like we stop the status store, and start them again once it can work.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Disabling basically was already done only on a transition edge from
"success" -> "failure" (= !success), as we stopped the periodic store
load in that case, thus we never trigger "failure" twice in a row
without any user input.
But on success we always unconditionally fired an activate, which
caused the status store to start its store updates, which in turn
immediately triggered a store load. So the verbose status call of the
info panel was now coupled to the 1s update period of the encompassing
summary panel, not the slower 5s period at which it actually wanted to
trigger an update.
So save the last state and check if it actually differs before causing
such action.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Without this, we immediately start the store updates even before the
browser created the (async) mount API request. So it's very likely
that the first store load will still get an error due to the backing
device of the datastore not being mounted yet. That in turn will
trigger our error detection behavior in the load event listener and
disable periodic store updates again.
Move the start of the update into the taskDone handler. We do not need
to check if the task succeeded, as either it did, and we will do
periodic updates, or it did not and we do at least one update to load
the current status and then stop again auto-loading the store anyway.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It's not nice if an existing, always visible button moves around
depending on the datastore type. Rather move the optional buttons to
the right and add a separator for visual grouping.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Some filesystems like FAT don't include a concept of UUIDs.
Instead, tools like blkid derive these
identifiers based on certain filesystem metadata, such as
volume serial numbers or other unique information. This does
however not follow the format specified in RFC 9562[1].
[1] https://datatracker.ietf.org/doc/html/rfc9562
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Data deletion is only possible if the datastore is mounted; we won't
attempt mounting it just for the purpose of deleting data.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
If a device houses multiple datastores, none of them will be mounted
automatically. If a device only contains a single datastore it will be
mounted automatically. The reason for not mounting multiple datastores
automatically is that we don't know which is actually wanted, and since
mounting all means also all have to be unmounted manually, it made sense
to have the user choose which to mount.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
We can't just directly delegate these commands to the API endpoints
since both mounting and unmounting are done in a worker, and that one
would be killed when the parent ends. In this case that would be the CLI
process, which basically ends right after spawning the worker.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Devices can contain multiple datastores.
If the specified path already contains a datastore, `reuse datastore` has
to be set so it'll be added without creating a chunkstore.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Removable datastores can be mounted unless
- they are already mounted
- their device is not present
For unmounting, the maintenance mode is set to `unmount`,
which prohibits the starting of any new tasks involving any
IO. This mode is unset either
- on completion of the unmount
- on abort of the unmount task
If the unmounting itself should fail, the maintenance mode stays in
place and requires manual intervention by unsetting it in the config
file directly. This is intentional, as unmounting should not fail,
and if it should the situation should be looked at.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
... at a specific location. Also adds two additional functions to
get the mount status, and to ensure a removable datastore is mounted.
Co-authored-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
... and add MaintenanceType::Delete to it. We also want to clear any
cache entries if we are deleting the datastore, not just if it is marked
as offline.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
To ensure the adapted handlebars escaper that keeps '=' as is gets
used, required for the consent banner.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Before showing the LoginView, check if we got a non-empty consent text
from the template. If there is a non-empty text, display it in a modal.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Add consent_text option to the node.cfg config. Embed the value into
index.html file using handlebars.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
we already have two different password schemas, `PBS_PASSWORD_SCHEMA`
being the stricter one, which ensures a minimum length of new
passwords. however, this wasn't used on the change password endpoint
before, so add it there too. this is also in-line with NIST's latest
recommendations [1].
[1]: https://pages.nist.gov/800-63-4/sp800-63b.html#passwordver
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
currently if a password is provided, we check whether the user that is
going to be updated can authenticate with it. later on, the password
is then set as the same password. this means that the password here
can only be changed if it is the exact same one that is already used.
so in essence, the password cannot be changed through this endpoint
already. remove all of this logic here in favor of the
`PUT /access/password` endpoint.
to keep the api stable for now, just ignore the parameter and add a
description that explains what to use instead.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Fix switching the source for group filters based on the sync jobs
sync direction.
The helper to set the local namespace for the group filters was
introduced in commit 43a92c8c ("ui: group filter: allow to set
namespace for local datastore"), but never used because it got lost
during subsequent iterations of reworking the patch series.
The switching is corrected by:
- correctly initializing the local store and namespace for the group
filter of sync jobs in push direction in the controller init, if a
datastore is set.
- fixing an incorrect check for the sync direction in the remote
datastore selector change listener.
- conditionally switching namespace to be set for the group filter in
the remote and local namespace selector change listeners.
- conditionally switching datastore to be set for the group filter in
the local datastore selector change listener.
Reported-by: Lukas Wagner <l.wagner@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
These were put in place so that the initial release of the new
notification system for Proxmox Backup Server could already include
improved notification matchers, which at that time had not yet been
merged into proxmox-widget-toolkit.
In the meanwhile, the changes have been merged and released in
proxmox-widget-toolkit 4.2.4, hence we can remove the override.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
We need "notification: matcher: match-field: show known fields/values",
which was released in proxmox-widget-toolkit 4.2.4
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
PathPatterns is hard to distinguish from PathPattern, so would need to be
renamed anyway.. but there isn't really a reason to define a separate API type
just for this.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
When the user is only interested in a subset of the entries stored in
a file-level backup, it is convenient to be able to provide a list of
match patterns for the entries intended to be restored.
The required restore logic is already in place. Therefore, expose it
for the `proxmox-backup-client restore` command by adding the optional
array of patterns as command line argument and parse these before
passing them via the pxar restore options to the archive extractor.
Link to bugtracker issue:
https://bugzilla.proxmox.com/show_bug.cgi?id=2996
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Use the common api type with schema based input validation for all
match pattern parameters exposed via the api macro.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Instead of taking a plain string as input parameter, use the
corresponding api type performing additional input validation.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Introduces a dedicated api type `PathPattern` and the corresponding
format and input validation schema. Further, add a `PathPatterns`
type for collections of path patterns and implement required traits
to be able to replace currently defined api parameters.
In preparation for using this common api type for all api endpoints
exposing a match pattern parameter.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Currently, common details regarding garbage collection are documented
in the backup client and the maintenance task. Deduplicate this
information by moving the details to the background section of the
maintenance task and reference that section in the backup client
part.
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Users should be made aware that the data stored in chunks outlives
the backup snapshots on pruning and that backups created using the
change-detection-mode set to metadata might reference chunks
containing files which have vanished since the previous backup, but
might still be accessible when access to the chunks' raw data is
possible (client or server side).
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add short section explaining the `resync-corrupt` option on the
sync-job.
Originally-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Add the `resync-corrupt` option to the ui and the
`proxmox-backup-manager` cli. It is listed in the `Advanced` section,
because it slows the sync-job down and is useless if no verification
job was run beforehand.
Originally-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
This option allows us to "fix" corrupt snapshots (and/or their chunks)
by pulling them from another remote. When traversing the remote
snapshots, we check if it exists locally, and if it does, we check if the
last verification of it failed. If the local snapshot is broken and the
`resync-corrupt` option is turned on, we pull in the remote snapshot,
overwriting the local one.
This is very useful and has been requested a lot, as there is currently
no way to "fix" corrupt chunks/snapshots even if the user has a healthy
version of it on their offsite instance.
Originally-by: Shannon Sterz <s.sterz@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Add helper functions to retrieve the verify_state from the manifest of a
snapshot. Replaced all the manual "verify_state" parsing with the helper
function.
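A rough sketch of the kind of helper being added, working on plain JSON here
instead of the real `BackupManifest` type (the exact manifest layout is an
assumption):
```
use serde_json::Value;

/// Extract the last verification outcome from a snapshot manifest, if any;
/// returns None when the snapshot was never verified.
fn verify_state(manifest: &Value) -> Option<String> {
    manifest
        .get("unprotected")?
        .get("verify_state")?
        .get("state")?
        .as_str()
        .map(str::to_owned)
}

fn main() {
    let manifest = serde_json::json!({
        "unprotected": { "verify_state": { "state": "failed", "upid": "..." } }
    });
    assert_eq!(verify_state(&manifest).as_deref(), Some("failed"));
}
```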
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Log also empty backup groups with no snapshots encountered during the
sync so the log output contains this additional information as well,
reducing possible confusion.
Nevertheless, continue with the regular logic, so that pruning of
vanished snapshots is honored.
Exemplary output in the sync job's task log:
```
2024-11-22T18:32:40+01:00: Syncing datastore 'datastore', root namespace into datastore 'push-target-store', namespace 'test'
2024-11-22T18:32:40+01:00: Found 2 groups to sync (out of 2 total)
2024-11-22T18:32:40+01:00: skipped: 1 snapshot(s) (2024-11-22T13:40:18Z) - older than the newest snapshot present on sync target
2024-11-22T18:32:40+01:00: Group 'vm/200' contains no snapshots to sync to remote
2024-11-22T18:32:40+01:00: Finished syncing root namespace, current progress: 1 groups, 0 snapshots
```
Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add an anyhow context to errors and display the full error context
in the log output. Further, make it clear which errors stem from api
calls by explicitly mentioning this in the context message.
This also fixes incorrect error handling by placing the error context
on the api result instead of the serde deserialization error for
cases where this was handled incorrectly.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: add missing format!
FG: run cargo fmt
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Parsing of the type based on the archive name extension is now
handled by `BackupArchiveName`.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: add removal of import
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Instead of using the plain String or slices of it for archive names,
use the dedicated api type and its methods to parse and check for
archive type based on archive filename extension.
Thereby, keeping the checks and mappings in the api type and
restricting function parameters by the narrower wrapper type to reduce
potential misuse.
Further, instead of declaring and using the archive name constants
throughout the codebase, use the `BackupArchiveName` helpers to
generate the archive names for manifest, client logs and encryption
keys.
This allows for easy archive name comparisons using the same
`BackupArchiveName` type, at the cost of some extra allocations and
avoids the currently present double constant declaration of
`CATALOG_NAME`.
A positive ergonomic side effect of this is that commands now also
accept the archive type extension optionally, when passing the archive
name.
E.g.
```
proxmox-backup-client restore <snapshot> <name>.pxar.didx <target>
```
is equal to
```
proxmox-backup-client restore <snapshot> <name>.pxar <target>
```
The previously default mapping of any archive name extension to a blob
has been dropped in favor of consistent mapping by the api type
helpers.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: use LazyLock for constant archive names
FG: add missing import
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Introduces a dedicated wrapper type to be used for backup archive
names instead of plain strings and associated helper methods for
archive type checks and archive name mappings.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: use LazyLock for constant archive names reduces churn, and saves some
allocations
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Moving the `ArchiveType` to avoid crate dependencies on
`pbs-datastore`.
In preparation for introducing a dedicated `BackupArchiveName` api
type, allowing to set the corresponding archive type variant when
parsing the archive name based on its filename.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
these can occur in practice, and neither setting nor getting them throws an
error. if "invalid" ACLs are non-restorable, this means that creating a pxar
archive with such an ACL is possible, but restoring it isn't.
reported in our community forum:
https://forum.proxmox.com/threads/155477
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Acked-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
else, error messages using this path_info refer to the parent directory instead
of the actual file entry causing the problem. since this is just for
informational purposes, lossy conversion is acceptable.
Acked-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Known chunks are expected to be present on the datastore a-priori,
allowing clients to only re-index these chunks without uploading the
raw chunk data. The list of reusable known chunks is sent to the
client by the server, deduced from the indexed chunks of the previous
backup snapshot of the group.
If however such a known chunk disappeared (the previous backup
snapshot having been verified before that or not verified just yet),
the backup will finish just fine, leading to a seemingly successful
backup. Only a subsequent verification job will detect the backup
snapshot as being corrupt.
In order to reduce the impact, stat the list of previously known
chunks when finishing the backup. If a missing chunk is detected, the
backup run itself will fail and the previous backup snapshot's verify
state is set to failed.
This prevents the same snapshot from being reused by another,
subsequent backup job.
Note:
The current backup run might have been just fine, if the now missing
known chunk is not indexed. But since there is no straightforward
way to detect which known chunks have not been reused in the fast
incremental mode for fixed index backups, the backup run is
considered failed.
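In rough pseudo-Rust the added check boils down to the following; the helper
and its inputs are hypothetical, not the actual backup environment API:
```
use std::path::Path;

/// Stat every chunk the client was allowed to reuse; a single missing chunk
/// fails the finish call, and the previous snapshot's verify state is set to
/// failed so it is not offered as a reuse base again.
fn check_known_chunks<'a>(chunk_paths: impl Iterator<Item = &'a Path>) -> Result<(), String> {
    for path in chunk_paths {
        if !path.exists() {
            return Err(format!("known chunk {} is missing", path.display()));
        }
    }
    Ok(())
}

fn main() {
    let paths = [Path::new("/nonexistent/chunk")];
    println!("{:?}", check_known_chunks(paths.iter().copied()));
}
```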
link to issue in bugtracker:
https://bugzilla.proxmox.com/show_bug.cgi?id=5710
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Various smaller adaptations such as capitalization of the start of
sentences, expansion of abbreviations and shortening of overly long
error messages.
To improve consistency with the rest of the error messages for the
sync job in push direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: use "skipping" for non-owner-groups - we haven't started uploading at that
point, there is nothing to "abort"
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
so that they are logged in the success case, since the error case already has
its own log messages.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Mixing of terms only makes the errors harder to understand.
In order to make error messages more intuitive, always refer to the
sync push target as remote, mention the remote explicitly and/or
improve messages where needed.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
When updating the datastore config using `proxmox-backup-manager` we
need to make an api-call, because the api-route starts a tokio task to
update the proxy-cache and the client will kill the task if we don't
wait. With an api-call the tokio task will be executed on the api
process and runs in the background while the endpoint handler has
already returned.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
When creating a datastore without the "reuse-datastore" option and the
datastore contains a `lost+found` directory (which is quite common), the
creation fails. Add `lost+found` to the ignore list.
Reported here: https://forum.proxmox.com/threads/bug-when-adding-new-storage-task-error-datastore-path-is-not-empty.157629/#post-721733
Fixes: 6e101ff757 ("fix #5439: allow to reuse existing datastore")
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
FG: slight code style change
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Permissions are stored in the lower 9 bits (rwxrwxrwx),
so we have to mask `st_mode` with 0o777.
The datastore root dir is created with 755, the `.chunks` dir and its
contents with 750, and the `.lock` file with 644; this changes the
expected permissions accordingly.
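A minimal sketch of the masking (plain std, not the actual datastore
check): only the lower 9 permission bits of `st_mode` are compared, since
the upper bits encode the file type.
```
use std::os::unix::fs::MetadataExt;
use std::path::Path;

/// Sketch: compare only the permission bits of `path` against `expected`
/// (e.g. 0o750 for `.chunks`), ignoring the file type bits in `st_mode`.
fn has_expected_perms(path: &Path, expected: u32) -> std::io::Result<bool> {
    let mode = std::fs::metadata(path)?.mode();
    Ok(mode & 0o777 == expected)
}
```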
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Fixes: 6e101ff757 ("fix #5439: allow to reuse existing datastore")
Reviewed-By: Gabriel Goller <g.goller@proxmox.com>
With single ticks, the contained modes and archive formats are
displayed in italics; to be consistent with other sections of the
documentation, use inline code blocks instead.
Adapted line wrappings to the additional line length.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
Currently, the change detection modes are described in the client
usage section, which is not intended as an in-depth explanation of how
these client options work, but rather focuses on how to use them.
Therefore, add a reference to the more detailed technical section
regarding the change detection modes and reduce duplicate
explanations.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
Describe in more detail how the different change detection modes
operate and give insights into the inner workings, especially for the
more complex `metadata` mode, which involves lookahead caching and
padding calculation for reused payload chunks.
Suggested-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
this got reported via e-mail - seems this one occurrence was
forgotten. grepped through the docs (and the whole repo) for 'Mail'
and 'Gateway', and it seems this was the only one.
Fixes: cbd7db1d ("docs: certificates")
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Skip and warn the user for files which returned a stale file handle
error while reading the metadata associated to that file, or the
target in case of a symbolic link.
Instead of returning with a hard error, report the stale file handle
and skip over encoding this file entry in the pxar archive.
Link to issue in bugtracker:
https://bugzilla.proxmox.com/show_bug.cgi?id=5853
Link to thread in community forum:
https://forum.proxmox.com/threads/156822/
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Do not fail hard if a file open fails because of a stale file handle.
Warn the user and ignore the file, just like the client already does
in case of missing privileges to access the file.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Skip over the entries when a stale file handle is encountered during
generation of the entry list of a directory entry.
This will lead to the directory not being backed up if the directory
itself was invalidated, as reading all child entries will then fail as
well, or to the directory being backed up without the entries which
have been invalidated.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Skip over the whole directory in case the file handle was invalidated
and therefore the filesystem type check returns with ESTALE.
Encode the directory start entry in the archive and the catalog only
after the filesystem type check, so the directory can be fully skipped.
At this point it is still possible to ignore the invalidated
directory. If the directory is invalidated afterwards, it will be
backed up only partially.
Introduce a helper method to report entries for which a stale file
handle was encountered, providing an optional path for cases where
the `Archiver`'s state does not store the correct path.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Switch from mutable reference to shared reference on `self` and drop
unused return value.
These helpers only write log messages, there is currently no need for
a mutable reference to `self`, nor to return a `Result`.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
else, combined with remove_vanished everything on the target side would be
removed.
Suggested-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the group filters need adaptations both for pushing and local pulling, so left
those out for now.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to avoid attempting to create them multiple times in case a whole hierarchy is
missing, and misleadingly logging that they were created multiple times as
well.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the error message for failure to sync the whole namespace was too long, so
split it into two lines and make it a warning.
the namespace creation one lacked context (that the error was caused by the
remote side or the connection) and had too much (the datastore, which is
already logged very often) at the same time.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
add a bit more detail for the pull side, and reword some comments on the push
side to make them easier to read.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
`try_exists` will return Ok(false) if the path is or contains a dangling
symlink; treat that as a hard error, just like if `try_exists` had returned
an Err(..).
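Roughly, the handling changes as in this sketch (function name and error
messages are illustrative only): `Ok(false)` no longer short-circuits as
"does not exist" but is treated like an error.
```
use std::path::Path;

use anyhow::{bail, Result};

/// Sketch: treat a dangling symlink (`Ok(false)`) the same as an I/O error.
fn assert_path_exists(path: &Path) -> Result<()> {
    match path.try_exists() {
        Ok(true) => Ok(()),
        Ok(false) => bail!("path {path:?} does not exist or is a dangling symlink"),
        Err(err) => bail!("checking path {path:?} failed - {err}"),
    }
}
```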
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
one million chunks are a bit much; considering that chunks represent 1-2MB
(dynamic) to 4MB (fixed) of input data, that would mean 1-4TB of re-used
input data in a single snapshot.
64k chunks still represent 64-256GB of input data, which should be
plenty (and for such big snapshots with lots of re-used chunks, growing the
allocation of the HashSet should not be the bottleneck), and is also the
default capacity used for pulling.
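As a rough illustration of the numbers involved (not the actual code; the
digest type is a stand-in):
```
use std::collections::HashSet;

fn main() {
    // Sketch: pre-allocate room for 64k chunk digests instead of one million.
    // 65_536 chunks * 4 MiB (fixed chunk size) = 256 GiB of reused input
    // data; the set still grows on demand for even larger snapshots.
    let mut known_chunks: HashSet<[u8; 32]> = HashSet::with_capacity(64 * 1024);
    known_chunks.insert([0u8; 32]);
    println!("capacity >= {}", known_chunks.capacity());
}
```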
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
instead of calling this three times, call it once:
retrieving the highest backup timestamp doesn't need its own request, it can
re-use the "main" result, the corresponding helper can thus be dropped.
remove_vanished can re-use the earlier result - if anybody prunes the backup
group or adds new snapshots while the sync is running, the whole group sync is
racy and might cause spurious errors anyway.
since re-syncing the last already existing snapshot is not possible at the
moment, the code can also be simplified by treating such a snapshot as
already fully synced.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
a vanished namespace is one that
- exists on the target side, below the target prefix
- but within the specified max_depth
- and was not part of the synced namespaces
Co-developed-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
two parameters that only differ by a letter are not very nice for quickly
understanding semantics.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
BackupGroup is serializable as its API parameter components, like BackupDir.
move the (always present) namespace closer to the group to improve readability.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to make it easier to distinguish from missing "Prune" privs when removing
vanished groups.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to make the complex logic code shorter and easier to parse. no semantic changes
intended.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Documents the caveats of sync jobs in push direction, explicitly
recommending setting up dedicated remotes for these sync jobs.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Expose the 'prune-delete-stats' as supported feature, in order for
the sync job in push direction to pass the optional
`error-on-protected=false` flag to the api calls when pruning backup
snapshots, groups or namespaces.
Add and optionally expose the backup group delete statistics by adding the
return type to the corresponding REST API endpoints.
Clients can opt into the new behaviour by setting the new `error-on-protected`
flag to `false` when calling the api endpoints, which results in removal not
erroring out when encountering protected snapshots.
The default value for the flag remains `true` for now, to remain backwards
compatible with older clients expecting this behaviour.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
FG: reworded commit message slightly
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
In order to load data using the same model from different sources,
set the proxy on the store instead of the model.
This allows to use the view to display sync jobs in either pull or
push direction, by setting the `sync-direction` on the view.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Switch the subject and labels to be shown based on the direction of
the sync job, and set the `sync-direction` parameter from the
submit values in case of push direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Show sync jobs in pull and in push direction in two separate grids,
visually separating them to limit possible misconfiguration.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Switch to the local datastore, used as sync source for jobs in push
direction, to get the available group filter options.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The namespace has to be set in order to get the correct groups to be
used as group filter options with a local datastore as source,
required for sync jobs in push direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
`list_sync_jobs` exists as api method in `api2::admin::sync` and
`api2::config::sync`.
Rename the admin api endpoint method to `list_config_sync_jobs` in
order to reduce possible confusion when searching/reviewing.
No functional change intended.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Expose and switch the config type for sync job operations based
on the `sync-direction` parameter, exposed on required api endpoints.
If not set, the default config type is `sync` and the default sync
direction is `pull` for full backwards compatibility. Whenever
possible, determine the sync direction and config type from the sync
job config directly rather than requiring it as optional api
parameter.
Further, extend read and modify access checks by sync direction to
conditionally check for the required permissions in pull and push
direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Move the sync job owner check to its own helper function, for it to
be reused for the owner check for sync jobs in push direction.
No functional change intended.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Read access to sync jobs is not granted to users not having at least
PRIV_DATASTORE_AUDIT permissions on the datastore. However a user is
able to create or modify such jobs, without having the audit
permission.
Therefore, further restrict the modify check by also including the
audit permissions.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Moves and refactors the sync_job_do function into the common server
sync module so that it can be reused for both sync directions, pull
and push.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Expose the sync job in push direction via a dedicated API endpoint,
analogous to the pull direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In order for sync jobs to be either pull or push jobs, allow to
configure the direction of the job.
Adds an additional config type `sync-push` to the sync job config, to
clearly distinguish sync jobs configured in pull and in push
direction and defines and implements the required `SyncDirection` api
type.
This approach was chosen in order to limit possible misconfiguration,
as unintentionally switching the sync direction could potentially
delete still required snapshots.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Adds the functionality required to push datastore contents from a
source to a remote target.
This includes syncing of the namespaces, backup groups and snapshots
based on the provided filters as well as removing vanished contents
from the target when requested.
While trying to mimic the pull direction of sync jobs, the
implementation is different as access to the remote must be performed
via the REST API, which is not needed for the pull job, as that can
access the local datastore via the filesystem directly.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add a dedicated api type for the `version` api endpoint and helper
methods for supported feature comparison.
This will be used to detect api incompatibility of older hosts, not
supporting some features.
Use the new api type to refactor the version endpoint and set it as
return type.
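A hypothetical sketch of such a type and its feature-comparison helper
(field and method names here are illustrative, not the actual api type):
```
use serde::{Deserialize, Serialize};

/// Sketch: return type of the `version` endpoint, carrying a list of
/// supported feature names next to the version information.
#[derive(Serialize, Deserialize)]
struct ApiVersionInfo {
    version: String,
    release: String,
    repoid: String,
    /// names of optional features this host supports
    #[serde(default)]
    features: Vec<String>,
}

impl ApiVersionInfo {
    /// Check whether the remote advertises a given feature, e.g.
    /// "prune-delete-stats", before relying on newer api behaviour.
    fn supports_feature(&self, name: &str) -> bool {
        self.features.iter().any(|f| f == name)
    }
}
```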
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
To correctly account also for the number of deleted backup groups, in
preparation to correctly return the delete statistics when removing
contents via the REST API.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Make the `BackupGroupDeleteStats` exposable via the API by implementing
the ApiTypes trait via the api macro invocation and add an additional
field to account for the number of deleted groups.
Further, add a method to add up the statistics.
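Conceptually, the statistics type now looks roughly like this (a sketch,
field names are not guaranteed to match the exact definition):
```
/// Sketch: per-operation removal statistics that can be summed up across
/// groups or namespaces.
#[derive(Default)]
struct BackupGroupDeleteStats {
    /// number of backup groups that were fully removed
    removed_groups: usize,
    /// number of snapshots removed
    removed_snapshots: usize,
    /// number of snapshots skipped because they are protected
    protected_snapshots: usize,
}

impl BackupGroupDeleteStats {
    /// Add up the statistics of another (e.g. per-group) delete operation.
    fn add(&mut self, other: &Self) {
        self.removed_groups += other.removed_groups;
        self.removed_snapshots += other.removed_snapshots;
        self.protected_snapshots += other.protected_snapshots;
    }
}
```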
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In preparation for the delete stats to be exposed as return type to
the backup group delete api endpoint.
Also, rename the private field `unremoved_protected` to a better
fitting `protected_snapshots` to be in line with the method names.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Adding the privileges to allow backup, namespace creation and prune
on remote targets, to be used for sync jobs in push direction.
Also adds dedicated roles setting the required privileges.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add `remote_acl_path` method which generates the acl path from the sync
job configuration. This helper allows to easily generate the acl path
from a given sync job config for privilege checks.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add a `remote_acl_path` helper method for creating acl paths for
remote namespaces, to be used by the priv checks on remote datastore
namespaces for e.g. the sync job in push direction.
Factor out the common path extension into a dedicated method.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Extend the component limit for ACL paths of `remote` to include
possible namespace components.
This allows to limit the permissions for sync jobs in push direction
to a namespace subset on the remote datastore.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Combine the two if statements checking the datastores ACL path
components, which can be represented more concisely as one.
Further, extend the pre-existing comment to clarify that `datastore`
ACL paths are not limited to the datastore name, but might have
further sub-components specifying the namespace.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add a method `upload_index_chunk_info` to be used for uploading an
existing index and the corresponding chunk stream.
Instead of taking an input stream of raw bytes as the
`upload_stream`, this takes a stream of `MergedChunkInfo` objects
provided by the local chunk reader of the sync jobs source.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In preparation for implementing push support for sync jobs.
Factor out the upload stream for merged chunks, which can be reused
to upload the local chunks to a remote target datastore during a
snapshot sync operation in push direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In preparation for push support in sync jobs.
Extend and move `BackupStats` into `backup_stats` submodule and add
method to create them from `UploadStats`.
Further, introduce `UploadCounters` struct to hold the Arc clones of
the chunk upload statistics counters, simplifying the housekeeping.
By bundling the counters into the struct, they can be passed as
single function parameter when factoring out the common stream future
in the subsequent implementation of the chunk upload for sync jobs
in push direction.
Co-developed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Allow to filter namespaces by given callback function. This will be
used to pre-filter the list of namespaces to push to a remote target
for sync jobs in push direction, based on the privs of the sync job's
local user on the source datastore.
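A possible shape of such a helper (illustrative only, using plain strings
instead of the real namespace type and a made-up privilege check):
```
/// Sketch: keep only namespaces accepted by the given callback, e.g. one
/// checking the sync job's local user privileges on the source datastore.
fn filter_namespaces<F>(namespaces: Vec<String>, callback: F) -> Vec<String>
where
    F: Fn(&str) -> bool,
{
    namespaces.into_iter().filter(|ns| callback(ns)).collect()
}

fn main() {
    let namespaces = vec!["".to_string(), "ns1".to_string(), "ns1/child".to_string()];
    // hypothetical privilege check: only namespaces below "ns1" are allowed
    let allowed = filter_namespaces(namespaces, |ns| ns.starts_with("ns1"));
    assert_eq!(allowed, ["ns1", "ns1/child"]);
}
```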
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
`BackupGroup` implements `cmp::Ord`, so use that implementation for
comparing groups during sorting. Further, only sort the list of
backup groups after filtering, thereby possibly reducing the number
of required comparisons.
No functional changes.
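In essence (a plain std sketch, with strings standing in for `BackupGroup`):
```
fn main() {
    let mut groups = vec!["vm/110", "ct/100", "vm/101", "host/pbs"];

    // filter first, then sort, so fewer elements need to be compared;
    // sorting uses the type's `Ord` implementation directly.
    groups.retain(|group| group.starts_with("vm/"));
    groups.sort();

    assert_eq!(groups, ["vm/101", "vm/110"]);
}
```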
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
To ensure the fix for avoiding printing verbose log levels to stderr
and stdout is included, as that spams the log with the full worker
task logs.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
I plugged in a USB pen drive and the whole disk list UI became
completely unusable because smartctl fails to handle that device due
to some `Unknown USB bridge [0x090c:0x1000 (0x1100)]` error.
That itself might be improvable, but most often I do not care at all
about smart data, and certainly not enough for a failure to gather it
to prevent me from viewing my disks (or the smart data from disks
where it still could be gathered, for that matter!)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
removable datastores will have a PBS-managed mountpoint as path, direct
access to the field needs to be replaced with a helper that can account
for this.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
instead, require 'Tape.Write' or 'Tape.Modify' on '/tape' path.
This makes it possible for a TapeOperator to destroy tapes and for a
TapeAdmin to update the tape status, instead of just root@pam.
I opted for the path '/tape' since we don't have a dedicated acl
structure for single tapes, just '/tape/pool' (which does not apply
since not all tapes have to have a pool), '/tape/device' (which is
intended for drives/changers) and '/tape/jobs' (which is for jobs only).
Also we use that path for e.g. move_tape already.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
To ensure the recent fixes for the "infinite loop on early connection
abort when trying to detect the TLS handshake" problem are included.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Log the path of directory entries matched by an exclude pattern in
order to more conveniently debug possible issues.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
While traversing the filesystem tree, `generate_directory_file_list`
generates the list of entries to include for each directory level,
already matching the entry against the given list of match patterns.
Since this already excludes entries which should not be included in
the archive, the same check in the `add_entry` call is redundant,
as it is executed for each entry which is included in the list
generated by `generate_directory_file_list`.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Factors the kernel version compatibility check into its own method and
adds test cases for a set of expected and unexpected kernel versions.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
To improve the performance of the smartctl checks, especially when a lot
of disks are used, parallelize the checks using the `ParallelHandler`.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Avoid running `lsblk` twice when executing the `list_disk`
endpoint/command. This and the various other small nits improve the
performance of the endpoint.
Does not really fix, but is related to: #4961.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Avoid underflowing the catalog shell's position stack by navigating
below the archive's root directory into the catalog root. Otherwise
the shell will panic, as the root entry is always expected to be
present.
This treats the archive root directory as its own parent directory,
mimicking the behaviour of most common shells.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Disallows creating a datastore in root on the frontend side, by
filtering the '/' path. Add reuse-flag to permit us to open existing
datastores.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Disallow creating datastores in non-empty directories. Allow adding
existing datastores via a 'reuse-datastore' checkmark. This only checks
if all the necessary directories (.chunks + subdirectories and .lock)
exist and have the correct permissions. Note that the reuse-datastore
path does not open the datastore, so that we don't drop the
ProcessLocker of an existing datastore.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Seems this was forgotten while bumping it in Cargo.toml in dcd863e0.
Fixes: dcd863e0 ("bump proxmox-subscription to 0.5.0")
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
make it a bit easier to parse and include some examples of what the resync
might be able to pick up.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Decouple the actual filter logic from the skip reason output logic by
pulling the latter out of the filter closure.
Makes the filtering logic more intuitive.
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The last snapshot synced during the previous sync job might not have
been fully completed just yet (e.g. backup log still missing,
verification still ongoing, ...).
Explicitly mention the reason and that the resync is therefore
intentional by a comment in the filter logic.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
While checking which snapshots to sync, the filter logic incorrectly
included the first snapshot newer than the last synced one
unconditionally, bypassing the transfer last check for that one
snapshot. Following snapshots are correctly handled again.
E.g. of an incorrect sync by excerpt of a task log provided by a user
in the community forum [0], with transfer last set to 1:
```
skipped: 2 snapshot(s) (2024-09-29T18:00:28Z .. 2024-10-20T18:00:29Z) - older than the newest local snapshot
skipped: 5 snapshot(s) (2024-10-28T19:00:28Z .. 2024-11-01T19:00:32Z) - due to transfer-last
sync snapshot vm/110/2024-10-27T19:00:25Z
...
sync snapshot vm/110/2024-11-02T19:00:23Z
```
Not only the last snapshot, but also the first one newer than the
newest local snapshot was incorrectly synced.
By dropping the early return, leading to incorrect inclusion of the
snapshot, the transfer last condition is now correctly checked as
well.
Link to the issue reported in the community forum:
[0] https://forum.proxmox.com/threads/156873/
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Drop the payload offset output for the multi line formatting helper,
as the formatting was skewed anyways and the `stat` output is not
intended for debugging.
Commit 51e8fa96 ("client: pxar: include payload offset in entry
listing") introduced the payload offset output for pxar entries
in case of split archives for both the single line and multi line
formatting helpers, for debugging purposes.
While the payload offset output is fine for the single line entry
formatting (generates the pxar dump output in debugging mode),
it should not be included in the multi line entry formatting helper,
used to generate the output for the `stat` command of the catalog
shell.
Fixes: 51e8fa96 ("client: pxar: include payload offset in entry listing")
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Make the catalog optional and use the pxar accessor for navigation if
the catalog is not provided.
This allows to use the metadata archive for navigation, as for split
pxar archives no dedicated catalog is encoded.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Adds helper functions to reimplement the catalog shell functionality
for snapshots being encoded as split pxar archives.
Just as the `CatalogReader`'s find method, recursively iterate entries
and call the given callback on all entries matched by the match
patterns, starting from the given parent entry.
The helper has been split into 2 functions for the async recursion to
work.
Commit c0302805c "client: backup: conditionally write catalog for
file level backups" drops encoding of the dedicated catalog when
archives are encoded as split metadata/data archives with the
`change-detection-mode` set to `data` or `metadata`.
Since the catalog is not present anymore, fall back to using the pxar
metadata archives in the manifest (if present) for generating the
listing of contents in a compatible manner.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Implements the methods to dump the contents of a metadata pxar
archive using the same output format as used by the catalog dump.
The helper function has been split into 2 for async recursion to
work.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Perform the conversion from pxar file entries to catalog entry
attributes by implementing `TryFrom<&FileEntry<T>>` for
`DirEntryAttribute` and use that.
Allows the reuse for the catalog shell, when using the split pxar
archive instead of the catalog.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Move the logic to generate `FileEntry` paths with a given prefix to
its own helper function for it to be reusable for the catalog shell
implementation of split pxar archives.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Move the `get_remote_pxar_reader` helper function so it can be reused
also for getting the metadata archive reader instance for the catalog
dump.
No functional changes.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Move the `handle_root_with_optional_format_version_prelude` helper,
purely related to handling the root entry for pxar format version 2
archives, to the more fitting pxar tools submodule.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The lookup helper used to generate catalog entries via the metadata
archive for split archive backups is pxar specific, therefore move it
to the appropriate pxar tools submodule.
Change namespace visibility for tools submodule to be accessible from
other crates, to be used for common pxar related helpers.
Switch helpers declared as `pub` to `pub(crate)` in order to keep module
encapsulation, adapt namespace for functions required to be `pub`.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add a short sentence describing the function of the remove vanished
flag since this has not been documented explicitly.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
As node.cfg is a rather general name that could clash with manual
pages from other packages, or at least be a bit confusing if there's
another tool providing a node.cfg.
In the long term we should rename all existing manual pages from
section 5 and 7, i.e. all those that are not directly named after an
executable. As those normally talk about product-specific configs and
topics where just the filename is not specific enough for a system
wide manual page.
Note that there was some off-list discussion with proposal of using
"section suffixes" that man supports and can be used to differ between
manual pages with the same name (and in the same section), for example
`man 3pm Git`, but to me this seems a bit more obscure and potentially
less discoverable; it can still be a great way to provide a link alias
for convenience.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add man page for the node.cfg config file.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
[ TL: pull out sorting of synopsis file list to separate commit ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It was fine as is, but IMO saving a few lines is nice, albeit it makes
the atomic fetch expressions look slightly more complex by wrapping them
directly with the HumanByte and TimeSpan from-constructors.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Spawn a new tokio task which about every minute displays the
cumulative progress of the backup for pxar, ppxar or img archive
streams. Catalog and metadata archive streams are excluded from the
output for better readability, and because the catalog upload lives
for the whole upload time, leading to possible temporal
misalignments in the output. The actual payload data is written via
the other streams anyway.
Add accounting for uploaded chunks, to distinguish from chunks queued
for upload, but not actually uploaded yet.
Example output in the backup task log:
```
...
INFO: processed 2.471 GiB in 1m, uploaded 2.439 GiB
INFO: processed 4.963 GiB in 2m, uploaded 4.929 GiB
INFO: processed 7.349 GiB in 3m, uploaded 7.284 GiB
...
```
This partially fixes issue 5560:
https://bugzilla.proxmox.com/show_bug.cgi?id=5560
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The chunk subdirectories only use the chunk checksum's 2 byte prefix,
given in hex notation.
Also, clarify that chunks are grouped into subdirectories.
Reported in the community forum:
https://forum.proxmox.com/threads/155751/
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reading from the metric cache is somewhat expensive, so validate as many
of the required permissions as possible. For host metrics, we can
do the full check in advance. For datastores, we check if we have
audit permissions for *any* datastore. If we do not have privs for
either of those, we return early and avoid reading from the
cache altogether.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Suggested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This one is modelled exactly as the one in PVE (there it
is available under /cluster/metrics/export).
The returned data format is quite simple, being an array of
metric records, including a value, a metric name, an id to identify
the object (e.g. datastore/foo, host), a timestamp and a type
('gauge', 'derive', ...). The latter property makes the format
self-describing and aids the metric collector in choosing a
representation for storing the metric data.
```
[
    ...
    {
        "metric": "cpu_avg1",
        "value": 0.12,
        "timestamp": 170053205,
        "id": "host",
        "type": "gauge"
    },
    ...
]
```
In terms of permissions, the new endpoint requires Sys.Audit
on /system/status for metrics of the 'host' object,
and Datastore.Audit on /datastore/{store} for 'datastore/{store}'
metric objects.
Via the 'history' and 'start-time' parameters one can query
the last 30mins of metric history. If these parameters
are not provided, only the most recent metric generation
is returned.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Any pull-metric API endpoint can later access the cache to
retrieve metric data for a limited time (30mins).
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
It is only needed there and could be considered an implementation detail
of how this module works.
No functional changes intended.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
With the upcoming pull-metric system/metric caching, these
things should go into a separate module.
No functional changes intended.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
By moving and refactoring the check for a sync job exceeding the
global maximum namespace limit, the same function can be reused for
sync jobs in push direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
By specifying that the snapshot is being skipped because of the
condition met on the sync target instead of 'local', the same message
can be reused for the sync job in push direction without losing its
meaning.
Make `SkipReason` and `SkipInfo` accessible for sync operations of
both direction variants, push and pull.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Rename the `PullSource` trait to `SyncSource` and move the trait and
types implementing it to the common sync module, making them
reusable for both sync directions, push and pull.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Move the `PullReader` trait and the types implementing it to the
common sync module, so this can be reused for the push direction
variant for a sync job as well.
Adapt the naming to be less pull-specific by renaming the `PullReader` trait to
`SyncSourceReader`, `LocalReader` to `LocalSourceReader` and
`RemoteReader` to `RemoteSourceReader`.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Move and rename the `PullStats` to `SyncStats` as well as moving the
`RemovedVanishedStats` to make them reusable for sync operations in
push direction as well as pull direction.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Commit f16c5de757 ("pxar: bin: test `pxar list` with payload-input")
introduced a regression test for listing of split pxar archives. This
test relies on a large pxar blob file, the large size (> 100M) being
overlooked when writing the test.
In order to not depend on this file any further in the future, drop
it and rewrite the test to dynamically generate the files, needed and
further extend the test thereby also cover the archive creation and
extraction for split pxar archives.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
When using the `log` to `tracing` translation layer, the messages get
padded with whitespaces. This bug will get fixed upstream [0], but in
the meantime we switch to the `tracing` macros.
[0]: https://github.com/tokio-rs/tracing/pull/3070
Tested-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
It seems some changers are setting the PVolTag/AVolTag flags in the
ELEMENT STATUS page response, but don't include the actual fields then.
To make it work with such changers, downgrade the errors to warnings, so
we can continue to decode the remaining data.
This is OK since one volume tag is optional and the other is skipped
anyway.
Reported in the forum:
https://forum.proxmox.com/threads/hpe-storeonce-vtl.152547/
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Add tracing logger to all client binaries and remove env_logger.
The reason for this change is twofold: our migration to tracing, and the
behavior when the client calls an api handler directly. Currently the
proxmox-backup-manager calls the api handlers directly for some
commands. This results in no output (on console and task log), as no
tracing logger is instantiated.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
The rate and burst parameters are integers, so mapping the value
with `.as_str()` will always return `None`, effectively never
applying any rate limit at all.
Fix it by turning them into a HumanByte instead of an integer.
To not crowd the parameter section so much, create a
ClientRateLimitConfig struct that gets flattened into the parameter list
of the backup client.
To adapt the description of the parameters, add new schemas that copy
the `HumanByte` schema but change the description.
With this, the rate limit actually works, and there is no lower limit
any more.
The old TRAFFIC_CONTROL_RATE/BURST_SCHEMAs can be deleted since the
client was the only user of them.
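The root cause can be reproduced with plain `serde_json` (a sketch, not
the client code): calling `.as_str()` on an integer value yields `None`,
so the limit was silently dropped.
```
use serde_json::json;

fn main() {
    let rate = json!(10_000_000u64); // integer parameter, as sent previously

    // `.as_str()` only succeeds for JSON strings, so for integers it is
    // always None and the rate limit was never applied:
    assert!(rate.as_str().is_none());

    // reading it as a number (or parsing a HumanByte string) works:
    assert_eq!(rate.as_u64(), Some(10_000_000));
}
```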
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we currently use the behavior of zstd that is not part of the public
api, so this is at risk of being changed without notice.
There is a public api that we could use, but it's only available
with zstd_sys >= 2.0.9, which at this time, is not yet packaged for/by
us.
Add a comment that we can use the public api for this when the
new version of the crate gets available.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Make the `network reload` command in proxmox-backup-manager wait on the
api handler's workertask. Otherwise the task would be killed when the
client exits.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Without this, the button is enabled if no entry at all is selected (e.g.
when switching to the 'User Management' tab), with the button then
(obviously) being a noop.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
rustc warns about creating references to them (although it does allow
using `.as_ref()` on them for some reason), and this will become a
hard error with edition 2024.
Previously we could not use Mutex there as its ::new() was not a
`const fn`, but now we can, so let's drop the `mut`.
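A minimal sketch of the pattern change (the global here is illustrative,
not the actual variable): since `Mutex::new` is a `const fn` on current
rustc, the global no longer needs `static mut`.
```
use std::sync::Mutex;

// Previously this had to be a `static mut` because `Mutex::new()` was not
// usable in const context; now a plain static with interior mutability works.
static LAST_VALUE: Mutex<Option<u64>> = Mutex::new(None);

fn main() {
    *LAST_VALUE.lock().unwrap() = Some(42);
    assert_eq!(*LAST_VALUE.lock().unwrap(), Some(42));
}
```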
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Don't hardcode the debug flag but retrieve the currently enabled level
using tracing. This will change the default log-behavior and disable
some logs that have been printed previously. E.g., the "protocol upgrade
done" message is no longer visible by default because it is printed
at the debug level.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
In the following commit we will make use of std::sync::LazyLock which
was introduced in rust 1.80.
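For illustration, the pattern enabled by the bump looks like this (the
value is just an example):
```
use std::sync::LazyLock;

// std::sync::LazyLock (stable since rust 1.80) initializes the value on
// first access, replacing lazy_static!/once_cell for such constants.
static CATALOG_NAME: LazyLock<String> =
    LazyLock::new(|| "catalog.pcat1.didx".to_string());

fn main() {
    println!("{}", &*CATALOG_NAME);
}
```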
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Otherwise proxmox-daily-update panics if attempting to send a
notification for any available new updates:
"context for proxmox-notify has not been set yet"
Reported on our community forum:
https://forum.proxmox.com/threads/152429/
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Some systemd code got split out from proxmox-sys and left there
re-exported with a deprecation marker, use the newer crate, the
workspace already depends on proxmox-systemd anyway.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Some systemd code got split out from proxmox-sys and left there
re-exported with a deprecation marker, use the newer crate, the
workspace already depends on proxmox-systemd anyway.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Some systemd code got split out from proxmox-sys and left there
re-exported with a deprecation marker, use the newer crate, the
workspace already depends on proxmox-systemd anyway.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
by combining the compression call from both encrypted and unencrypted
paths and deciding on the header magic at one site.
No functional changes intended, besides reusing the same buffer for
compression.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Increase the zstd compression throughput by not using the
`zstd::stream::copy_encode` method, because it seems it uses an
internal buffer size of 32 KiB [0], copies at least once extra in the
target buffer and might have some additional (allocation and/or
syscall) overhead. Due to the amount of wrappers and indirections it's
a bit hard to tell for sure. In any case, reduced throughput can be
observed if the target and source storage and the network are all so
fast that the operations from creating chunks, like
compressions, can become the bottleneck.
Instead use the lower-level `zstd_safe::compress` which avoids (big)
allocations, since we provide the target buffer.
In case of a compression error, just return the uncompressed data;
there's nothing we can do and saving uncompressed data is better than
having none. Additionally, log any such error besides the one for the
target buffer being too small.
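The resulting logic roughly follows this sketch, assuming the
`zstd_safe::compress(dst, src, level)` call mentioned above writes into
the spare capacity of the provided buffer; the special-casing of the
"destination buffer too small" error, the header magic and the exact
buffer sizing are omitted here:
```
/// Sketch only: compress into a pre-allocated buffer, falling back to the
/// uncompressed data on any compression error.
fn compress_chunk(data: &[u8]) -> Vec<u8> {
    // reserve the expected maximum size up front, so the target buffer
    // does not need to reallocate (illustrative headroom only)
    let mut target = Vec::with_capacity(data.len() + 64);

    match zstd_safe::compress(&mut target, data, 1) {
        Ok(_size) => target,
        Err(_) => {
            // saving uncompressed data is better than having none at all
            eprintln!("zstd compression failed, storing data uncompressed");
            data.to_vec()
        }
    }
}
```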
Some benchmarks on my machine (Intel i7-12700K with DDR5-4800 memory
using a ASUS Prime Z690-A motherboard) from a tmpfs to a datastore on
tmpfs:
```
Type                 without patches (MiB/s)   with patches (MiB/s)
.img file            ~614                      ~767
pxar one big file    ~657                      ~807
pxar small files     ~576                      ~627
```
The new approach is faster by a factor of 1.19.
Note that the new approach should not have a measurable negative
impact, e.g. (peak) memory usage wise. That is because we always
reserved a vector with max-data-size (data length + header length) and
thus did not have to add a new buffer, rather we actually removed the
buffer that the high-level zstd wrapper crate used internally.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We want to check the error code of zstd not to be 'Destination buffer
too small' (dstSize_tooSmall), but currently there is no practical API
that is also public. So we introduce a helper that uses the internal
logic of zstd to determine the error.
Since this is not guaranteed to be a stable api, add a test for that
so we catch that error early on build. This should be fine, as long as
the zstd behavior only changes with e.g. major debian upgrades, which
is normally the only time where the zstd version is updated.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ TL: re-order fn, rename test and reword comments ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This is leftover code that is not currently used outside of its own
tests.
Should we need it again, we can just revert this commit.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Add External Metrics page to PBS's documentation. Most of it is copied
from the PVE documentation, minus the Graphite part.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
with the default 8k input buffer size, the client will spend most of the time
polling instead of reading/chunking/uploading.
tested with 16G random data file from tmpfs to fresh datastore backed by tmpfs,
without encryption.
stock:
Time (mean ± σ): 36.064 s ± 0.655 s [User: 21.079 s, System: 26.415 s]
Range (min … max): 35.663 s … 36.819 s 3 runs
patched:
Time (mean ± σ): 23.591 s ± 0.807 s [User: 16.532 s, System: 18.629 s]
Range (min … max): 22.663 s … 24.125 s 3 runs
Summary
patched ran
1.53 ± 0.06 times faster than stock
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
by dropping the print-per-chunk and making the input buffer size configurable
(8k is the default when using `new()`).
this allows benchmarking various input buffer sizes. basically the same code is
used for image-based backups in proxmox-backup-client, but just the
reading and chunking part. looking at the flame graphs the smaller input
buffer sizes clearly show most of time spent polling, instead of
reading+copying (or reading and scanning and copying).
for a fixed chunk size stream with a 16G input file on tmpfs:
fixed 1M ran
1.06 ± 0.17 times faster than fixed 4M
1.22 ± 0.11 times faster than fixed 16M
1.25 ± 0.09 times faster than fixed 512k
1.31 ± 0.10 times faster than fixed 256k
1.55 ± 0.13 times faster than fixed 128k
1.92 ± 0.15 times faster than fixed 64k
3.09 ± 0.31 times faster than fixed 32k
4.76 ± 0.32 times faster than fixed 16k
8.08 ± 0.59 times faster than fixed 8k
(from 15.275s down to 1.890s)
dynamic chunk stream, same input:
dynamic 4M ran
1.01 ± 0.03 times faster than dynamic 1M
1.03 ± 0.03 times faster than dynamic 16M
1.06 ± 0.04 times faster than dynamic 512k
1.07 ± 0.03 times faster than dynamic 128k
1.12 ± 0.03 times faster than dynamic 64k
1.15 ± 0.20 times faster than dynamic 256k
1.23 ± 0.03 times faster than dynamic 32k
1.47 ± 0.04 times faster than dynamic 16k
1.92 ± 0.05 times faster than dynamic 8k
(from 26.5s down to 13.772s)
same input file on ext4 on LVM on CT2000P5PSSD8 (with caches dropped for each run):
fixed 4M ran
1.06 ± 0.02 times faster than fixed 16M
1.10 ± 0.01 times faster than fixed 1M
1.12 ± 0.01 times faster than fixed 512k
1.15 ± 0.02 times faster than fixed 128k
1.17 ± 0.01 times faster than fixed 256k
1.22 ± 0.02 times faster than fixed 64k
1.55 ± 0.05 times faster than fixed 32k
2.00 ± 0.07 times faster than fixed 16k
3.01 ± 0.15 times faster than fixed 8k
(from 19.807s down to 6.574s)
dynamic 4M ran
1.04 ± 0.02 times faster than dynamic 512k
1.04 ± 0.02 times faster than dynamic 128k
1.04 ± 0.02 times faster than dynamic 16M
1.06 ± 0.02 times faster than dynamic 1M
1.06 ± 0.02 times faster than dynamic 256k
1.08 ± 0.02 times faster than dynamic 64k
1.16 ± 0.02 times faster than dynamic 32k
1.34 ± 0.03 times faster than dynamic 16k
1.70 ± 0.04 times faster than dynamic 8k
(from 31.184s down to 18.378s)
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
`cargo build` and `cargo install` pick up different config files, by symlinking
the wrapper config into a place with higher precedence than the one in the
top-level git repo dir, we ensure the package build actually picks up the
desired config instead of the one intended for quick dev builds.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The root namespace is displayed as empty string when used in the
format string. Distinguish and explicitly write out the root namespace
in the sync info message shown in the sync jobs task log.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Describe the `pull` direction of the sync operation more precisely
before adding also a `push` direction as synchronization operation.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Adds a helper to create temporary files in XDG_CACHE_HOME. If we cannot
create a file there, we fallback to /tmp as before.
Note that the temporary files stored by the client might grow
arbitrarily in size, making XDG_RUNTIME_DIR a less desirable option.
Citing the Arch wiki [1]:
> Should not store large files as it may be mounted as a tmpfs.
While the cache directory is most often not backed up by an ephemeral
FS, using the `O_TMPFILE` flag avoids the need for potential cleanup,
e.g. on interruption of a command. As with this flag set the data will
be discarded when the last file descriptor is closed.
[1] https://wiki.archlinux.org/title/XDG_Base_Directory
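A minimal sketch of the fallback and the `O_TMPFILE` usage (helper name
and mode are illustrative, this is not the actual client helper):
```
use std::fs::{File, OpenOptions};
use std::os::unix::fs::OpenOptionsExt;
use std::path::Path;

/// Sketch: create an anonymous temporary file in `dir`, falling back to
/// /tmp. With O_TMPFILE the file has no name and its data is discarded
/// once the last file descriptor is closed, so no cleanup is needed.
fn tmpfile_in(dir: &Path) -> std::io::Result<File> {
    let open = |d: &Path| {
        OpenOptions::new()
            .read(true)
            .write(true)
            .custom_flags(libc::O_TMPFILE)
            .mode(0o600)
            .open(d)
    };
    open(dir).or_else(|_| open(Path::new("/tmp")))
}
```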
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
[ TL: mention TMPFILE flag for clarity ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to pull in the fix for restoring backwards compatibility due to the
digest from that crate using a u8 slice instead of our dedicated
ConfigDigest type, which would serialize to String.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Commit ea584a75 "move more api types for the client" deprecated
the `archive_type` function in favor of the associated function
`ArchiveType::from_path`.
Replace all remaining callers of the deprecated function with its
replacement.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
[WB: and remove the deprecated function]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
When getting the `full_path` of a snapshot we did not use the cached
time string. By using it we avoid a call to the super-slow libc strftime.
This has some minor performance improvements of circa 7%. That is ~100ms
on my datastore with ~5000 snapshots.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
The protected status of the snapshot is retrieved twice. This is slow
because it stat's the .protected file multiple times.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
Add local dependencies for new crates `proxmox-apt-api-types` and
`proxmox-config-digest`. Also fix order of deps.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Use `Confirmation` helper in the wipe-disk command prompt.
Improves: 887d83cb (cli: add interactive confirmation for block device wipe, 2023-11-29)
Cc: Markus Frank <m.frank@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
... if `.chunks/` is not available (deleted/moved), ChunkStore::open
fails, but that would happen after updating the active operations on the
datastore, so no reference that could be dropped is returned, leading to
the operations counter always increasing. This only updates the counter
when a reference is returned, not before.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
The re-authentication request can also fail due to network instability,
and not necessarily only due to an invalid ticket. In that case it makes
sense to retry refreshing the ticket in 15 minutes. Also, the future does
not depend on a failed re-authentication to be cleaned up properly; that
already happens somewhere else, therefore we don't rely on this return
anyway. If the ticket is actually invalid or timed out, the main job
will fail and also terminate the renewal future, same applies if the
network is not just unstable but straight up not working.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
The .pxarexclude-cli encodes the exclude patterns the client was
invoked with in the pxar archive as regular file entry. The current
behaviour of setting the uid and gid to the default 0 (root) however
causes issues when trying to back up and restore the backup as a
non-root user.
Opt for using the uid/gid of the user the executable was called as,
allowing the restore for this user to succeed. Root will be able to
restore anyway.
Link to issue in bugtracker:
https://bugzilla.proxmox.com/show_bug.cgi?id=5304
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
When using the `proxmox-backup-client mount` command, the parent sometimes
exits before we can print any error message. Most notably this happens
when no PBS_REPOSITORY is passed, as this is the first option checked.
If the underlying file descriptor has been closed, wait for the client
to complete and return the error message.
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Suggested-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Commit 08fe5052 introduced functionality to mount split pxar archives
(sharing code with the map command), moving the manifest lookup
exclusive to fixed index archives.
However, the lookup now uses the incorrect archive name, not
containing the `.fidx` extension, which is however required for the
lookup in the manifest.
Fix the issue by calling the method with the correct server archive
name including the required extension.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Fixes: 08fe5052 ("client: mount: make split pxar archives mountable")
[FG: reworded, add proper "Fixes:" trailer.]
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
it builds about 1.5 times faster than regular `make deb` (shaving off a
whopping 100s on my machine). the resulting debs containing executables are of
course bigger (since the debug symbols are not split out into their own
package, and the ELF linkage stripping is also skipped), but other than the
associated file and memory mapping overhead there should be no difference in
behaviour or performance, and such debs are suitable for local testing (both of
the build process, and the built code).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
HashMap::remove() returns the value it removes as an Option<>, so
instead of first checking if the key exists before removing it, just
try to remove it and use the returned Option<> to test whether we
should bail!().
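The pattern, in short (with a stand-in map and key, using `anyhow::bail!`
as in the tree):
```
use std::collections::HashMap;

use anyhow::{bail, Result};

fn remove_entry(map: &mut HashMap<String, u64>, key: &str) -> Result<u64> {
    // `remove()` already tells us whether the key existed, no need for a
    // separate `contains_key()` check beforehand.
    match map.remove(key) {
        Some(value) => Ok(value),
        None => bail!("no entry with key '{key}' found"),
    }
}
```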
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Fixes the clippy warning:
warning: this multiplication by -1 can be written more succinctly
--> pbs-client/src/tools/mod.rs:700:58
|
700 | SignedDuration::Negative(val) => -1 * i64::try_from(val.as_secs())?,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: consider using: `-i64::try_from(val.as_secs())?`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#neg_multiply
= note: `#[warn(clippy::neg_multiply)]` on by default
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the clippy warning:
warning: unnecessary `pub(self)`
--> src/api2/access/mod.rs:35:1
|
35 | pub(self) async fn user_update_auth<S: AsRef<str>>(
| ^^^^^^^^^ help: remove it
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_pub_self
= note: `#[warn(clippy::needless_pub_self)]` on by default
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the clippy warning:
warning: unnecessary use of `get(&user2).is_none()`
--> pbs-config/src/acl.rs:1067:36
|
1067 | assert!(node.users.get(&user2).is_none());
| -----------^^^^^^^^^^^^^^^^^^^^^
| |
| help: replace it with: `!node.users.contains_key(&user2)`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_get_then_check
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
pbs2to3 was missing from the list of to-be-compiled binaries, and thus was only
compiled as a side-effect of running `cargo test` (which is skipped when the
build is using the `nocheck` build profile).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Fixes the compile-time warning:
warning: Cargo.toml: `default_features` is deprecated in favor of `default-features` and will not work in the 2024 edition
(in the `proxmox-router` dependency)
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
else we don't pick up the options set by the wrapper, which include generation
of debug symbols. until rustc 1.77, this was not needed because compiled
binaries always included a non-stripped libstd. now, without this change, the
binaries built with `cargo build --release` have no debug symbols at all,
which triggers a warning. fix this and include debug symbols when building
a package, as was originally intended for release package builds.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to work with cargo 1.77, which changed from
pbs-api-types 0.1.0 (path+file:///home/fgruenbichler/Sources/proxmox-backup/pbs-api-types)
to
path+file:///home/fgruenbichler/Sources/proxmox-backup/pbs-api-types#0.1.0
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Add the command `proxmox-backup-client group forget <group>` so
that we can forget (delete) whole groups with all the containing
snapshots.
To avoid printing full datastore paths (which are in the error messages)
we filter out the most common one (group not found) and rephrase it.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
[WB: rebased & sorted import statements in client's main.rs]
[WB: replace extract_repository_from_value with
remove_repository_from_value since the parameter is rejected on
the remote side]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
'extract_repository_from_value' takes an immutable reference and
doesn't remove the parsed parameter (whereas in contrast in our PVE
codebase, the 'extract_param' method does remove it).
This adds a variant that explicitly removes it called
'remove_repository_from_value'.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Instead of storing the error as a string in the PxarBackupStream, we
store it as an anyhow::Error. As we can't clone an anyhow::Error, we take
it out from the mutex and return it. This won't change anything as
the consumption of the stream will stop if it gets a Some(Err(..)).
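Schematically (a sketch with plain std types, not the actual stream
type): since `anyhow::Error` is not `Clone`, the error is moved out of
the mutex instead of being cloned.
```
use std::sync::{Arc, Mutex};

use anyhow::{anyhow, Error};

fn main() {
    // shared error slot, as held by the backup stream
    let error: Arc<Mutex<Option<Error>>> = Arc::new(Mutex::new(None));

    *error.lock().unwrap() = Some(anyhow!("archiver failed"));

    // take() moves the error out instead of cloning it; the slot is left
    // as None and the consumer stops on the first Some(Err(..)).
    if let Some(err) = error.lock().unwrap().take() {
        eprintln!("backup stream error: {err:#}");
    }
}
```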
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
To create a pxar archive, we recursively traverse the target folder.
If there is an error further down and we add a context using anyhow,
the context will be duplicated and we get an output like:
> Error: error at "xattr/xattr.txt": error at "xattr/xattr.txt": E2BIG [skip]
This is obviously not optimal, so in recursive contexts we can use the
UniqueContext, which quickly checks the context from the last item in
the error chain and only adds it if it is unique.
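A rough sketch of the idea behind such a UniqueContext helper, based on
anyhow; names and details are assumptions, not the actual implementation:
```
use anyhow::Error;
use std::fmt::Display;

trait UniqueContext<T> {
    fn unique_context<C: Display>(self, context: C) -> Result<T, Error>;
}

impl<T> UniqueContext<T> for Result<T, Error> {
    fn unique_context<C: Display>(self, context: C) -> Result<T, Error> {
        self.map_err(|err| {
            let context = context.to_string();
            // only add the context if the last added entry of the error
            // chain does not already carry the exact same message
            let duplicate = err
                .chain()
                .next()
                .map(|last| last.to_string() == context)
                .unwrap_or(false);
            if duplicate {
                err
            } else {
                err.context(context)
            }
        })
    }
}
```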
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
The sole purpose of the ArchiveError was to add the file-path to the
error. Using anyhow::Error we can add this information using the context
and don't need this struct anymore.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
String concatenating a variable with some static text as gettext
parameter cannot really work, and it also does not make sense to do
most of the time, as even if we'd use some overly generic format
string like '{0} (disabled)', it would not be easy to translate
correctly in all languages in such a generic way.
So just use the actual full string, which is already contained in our
translation catalogue anyway…
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This is basically a semantic revert of e5c0d80c ("docs: add note for not
using remote storages") that, while well intended, has a few problems,
e.g.:
- This is the minimal/recommended requirements section, which should
list the rough basic specs a setup must/should have. Listing
everything that is not best to do would bloat this list
significantly and it's just the wrong place for it, i.e., it isn't a
list of things to recommend against.
- while it's true that a remote storage will basically always have
_some_ overhead over using the same HW with a (modern) local storage
(file) system, that does **not** mean that the remote storage has
insufficient performance characteristics. We know of lots of fast
Ceph setups, even release benchmarks for them, or storages like
BlockBridge, that provide high performance while being remote.
So avoid this X-Y-problem style argumentation and focus on what is
actually important, even though I naturally get that there are some
users that use slow NFS attached storages, but breaking style here
won't cure them and I'm sure that they are capable of setting up such
a slow local storage that it won't make a real difference compared to
the NFS one.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adapt to the decoder/accessor method changes in the pxar library,
which were introduced in order to move the consistency check
for metadata and payload data archives.
The new location of the checks allows to access the pxar archive via
a `Split` variant reader instance, without penalization when just
accessing the metadata, not reading any payload data.
This greatly improves performance when accessing fuse mounted
archives.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
bumped dependency after pxar version bump
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
such as NFS or SMB. They will not provide the expected performance
and it's better to recommend against them.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since that leads to errors that we don't currently catch before we
reach the regular early warning on tape.
This can be read/set by the Device Configuration Extension Mode Page.
ignore errors on reading or writing, since it may not be available on
LTO-4
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
that would drop the final byte, and the corresponding code has been removed
from pxar now as well.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Currently, whether to encode the exclude patterns passed via cli as
prelude or via the `.pxar-exclude-cli` file is based on the presence of
a previous metadata accessor.
That however leads to encoding the file entry instead of the
prelude for split archives in `data` mode and for the first snapshot
in a backup, creating undesired padding in the first payload chunk.
Therefore, use the pxar writer variant to make the decision instead.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The current encoding is not extensible, so encode the cli exclude
patterns as json instead. By this, the prelude is easily serialized
and deserialized, while remaining human readable.
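For illustration, a possible shape of such a json prelude (field names
are assumptions, not the actual format):
```
use anyhow::Error;
use serde::{Deserialize, Serialize};

// hypothetical field layout, chosen to stay extensible and human readable
#[derive(Serialize, Deserialize)]
struct PxarPrelude {
    #[serde(default, skip_serializing_if = "Vec::is_empty")]
    exclude: Vec<String>,
}

fn encode_prelude(patterns: &[String]) -> Result<Vec<u8>, Error> {
    Ok(serde_json::to_vec(&PxarPrelude { exclude: patterns.to_vec() })?)
}

fn decode_prelude(data: &[u8]) -> Result<Vec<String>, Error> {
    Ok(serde_json::from_slice::<PxarPrelude>(data)?.exclude)
}
```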
Originally-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Do not attach the payload reader for split pxar archives, as only the
metadata has to be accessed for listing.
This avoids the decoder performing consistency checks with the
payload stream, which would require chunk download and decoding, making
the listing unusably slow.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The current default variant is named `Default`, which is not
future-proof since the default might change in the future. So rename it to
`Legacy` instead.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
by skipping the payload reader entirely, it's not needed for listing contents
and would make accessing larger archives too expensive.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Christian Ebner <c.ebner@proxmox.com>
Only write the catalog when using the regular backup mode, do not write
it when using the split archive mode.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In case of pxar archives with split metadata and payload data, the
metadata archive has to be used to lookup entries for navigation
before performing a single file restore.
Decide based on the archive filename extension whether to use the
`catalog` or the `pxar-lookup` api endpoint.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The `proxmox-file-restore list` command uses the provided path to
lookup and list directory entries via the catalog. Fallback to using
the metadata archive if the catalog is not present for fast lookups in
a backup snapshot.
This is in preparation for dropping encoding of the catalog for
snapshots using split archive encoding. Proxmox VE's storage plugin
uses this to allow single file restore for LXCs.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Payload data archives cannot be used to navigate the content, so
exclude them from the archive listing, as this is used by
Proxmox VE to list contents in the file browser.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Allow to pass the archive name as optional api call parameter instead
of having it as prefix to the path.
If this parameter is given, instead of splitting off the archive name
from the path, the parameter itself is used, leaving the path
untouched.
This allows to restore single files from the archive, without having
to artificially construct the path in case of file restores for split
pxar archives, where the response path of the listing does not
include the archive, as opposed to the response provided by lookup
via the catalog.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add an optional `archive-name` parameter, indicating the metadata
archive to be used for directory content lookups instead of the
catalog. If provided, instead of the catalog reader, a pxar Accessor
instance is created to perform the lookup.
This is in preparation for dropping catalog encoding for snapshots
with split pxar archive encoding.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In preparation to lookup entries via the pxar metadata archive
instead of the catalog, in order to drop encoding the catalog
for snapshots using split pxar archives altogether.
This helper allows to lookup the directory entries via the provided
accessor instance and formats them to be compatible with the output
as produced by lookups via the catalog.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Move code that can be reused when having to perform a lookup via the
pxar metadata archive instead of the catalog out of the thread.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The file path passed to the catalog is base64 encoded, with an exception
for the root.
Factor this check and decoding step out into a helper function to make
it reusable when doing the same for lookups via the metadata archive
instead of the catalog.
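A simplified sketch of such a helper (name, signature and the use of the
legacy `base64::decode` function are assumptions):
```
use anyhow::Error;

fn decode_path(path: &str) -> Result<Vec<u8>, Error> {
    if path == "root" || path == "/" {
        // the root path is the only one passed without base64 encoding
        Ok(vec![b'/'])
    } else {
        Ok(base64::decode(path)?)
    }
}
```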
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The test will fail for all users not having euid/egid set to
1000/1000, as the reference test folder structure cannot be created
with the expected ownership.
Therefore, skip over the test if either euid or egid do not match
this condition.
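Sketched, the guard could look like this (helper and test names are
illustrative; the uid/gid values match the ownership expected by the
reference structure):
```
fn has_expected_ids() -> bool {
    // SAFETY: geteuid()/getegid() are always safe to call
    unsafe { libc::geteuid() == 1000 && libc::getegid() == 1000 }
}

#[test]
fn pxar_create_reference_archive() {
    if !has_expected_ids() {
        // the reference tree cannot be created with the expected
        // ownership, so skip instead of reporting a bogus failure
        return;
    }
    // ... actual test body ...
}
```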
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Setting the uid/gid for the files and folders of the test directory
structure will not work when lacking the permissions.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Describe the motivation and basic principle of the client's change
detection mode and show an example invocation.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The lookahead cache size requires the resource limit for open file
handles to be high in order to allow for efficient reuse of unchanged
file payloads.
Increase the nofile soft limit to the hard limit and dynamically adapt
the cache size to the new soft limit minus half of the previous
soft limit.
The `PxarCreateOptions` and the `Archiver` are therefore extended by
an additional field to store the maximum cache size, with fallback to
a default size of 512 entries.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The default soft limit for open file handles is rather low, as some
apis (e.g. the POSIX `select(2)` syscall) do not work with high file
descriptor numbers [0].
The lookahead cache used during the backup client's metadata comparison
to reuse unchanged files however requires much higher limits to work
effectively.
This helper function allows to raise the soft limit to the hard
limit, as provided by the `getrlimit(2)` syscall.
[0] https://0pointer.net/blog/file-descriptor-limits.html
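A minimal sketch of such a helper using the raw libc bindings (the
actual helper may differ in name, wrapper and return type):
```
use anyhow::{bail, Error};

/// Raise the soft limit for open file handles to the hard limit and
/// return the previous soft limit.
fn set_max_nofile_limit() -> Result<libc::rlim_t, Error> {
    let mut rlimit = libc::rlimit { rlim_cur: 0, rlim_max: 0 };
    if unsafe { libc::getrlimit(libc::RLIMIT_NOFILE, &mut rlimit) } != 0 {
        bail!("getrlimit failed: {}", std::io::Error::last_os_error());
    }
    let previous_soft_limit = rlimit.rlim_cur;
    rlimit.rlim_cur = rlimit.rlim_max;
    if unsafe { libc::setrlimit(libc::RLIMIT_NOFILE, &rlimit) } != 0 {
        bail!("setrlimit failed: {}", std::io::Error::last_os_error());
    }
    Ok(previous_soft_limit)
}
```
The previous soft limit is returned so the caller can, for example,
derive an adapted lookahead cache size from it, as described above.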
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Use the dedicated chunker with boundary suggestions for the payload
stream, by attaching the channel sender to the archiver and the
channel receiver to the payload stream chunker.
The archiver sends the file boundaries for the chunker to consume.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Implement the Chunker trait for a dedicated payload stream chunker,
which extends the regular chunker by the option to suggest boundaries
to be used over the hash-based boundaries whenever possible.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add the Chunker trait and move the current Chunker to ChunkerImpl to
implement the trait instead. This allows to use different chunker
implementations by dynamic dispatch and is in preparation for
implementing a dedicated payload chunker.
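A rough sketch of the resulting trait shape and its use via dynamic
dispatch; the actual trait signature in the code base may differ (e.g.
by taking an additional context parameter):
```
pub trait Chunker {
    /// Scan the given data and return the offset of the next chunk
    /// boundary, or 0 if no boundary was found within this slice.
    fn scan(&mut self, data: &[u8]) -> usize;
}

pub struct ChunkerImpl {
    // rolling-hash state, min/max chunk sizes, ...
}

impl Chunker for ChunkerImpl {
    fn scan(&mut self, _data: &[u8]) -> usize {
        0 // placeholder: the real implementation uses a rolling hash
    }
}

// dynamic dispatch makes it possible to swap in the dedicated payload
// chunker introduced by the following patches
fn next_boundary(chunker: &mut dyn Chunker, buf: &[u8]) -> usize {
    chunker.scan(buf)
}
```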
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Allow to pass an optional input path to mount a split pxar archive
with dedicated payload data file.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Do not list the pxar format version and the prelude entries in the
output of pxar list, as these are not regular entries. Do include them
however when dumping with the debug environment variable set.
Since the prelude is arbitrary in size, only show the content size.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In addition to the entries, also show the padding encountered in-between
referenced payloads.
Example invocation: `PXAR_LOG=debug pxar list archive.mpxar`
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Pxar archives allow to store additional information in a prelude
entry since pxar format version 2.
Add an optional parameter to `pxar` and `proxmox-backup-client` to
specify the path to restore the prelude to and pass this to the
archive extraction by extending the `PxarExtractOptions` by a
corresponding field. If none is given, the prelude is simply skipped
during restore.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Instead of encoding the pxar cli exclude patterns as regular file
within the root directory of an archive, store this information
directly after the pxar format version entry in the entry of kind
Prelude.
This behavior is however currently exclusive to the archives written
with format version 2 in a split metadata and payload case.
This is a breaking change for the encoding of new cli exclude
parameters. Any new exclude parameter will not be added to an already
present .pxar-cliexclude file, and it will not be created if not
present.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Pxar archives with format version 2 allow to store optional
information such as the file format version and prelude entries.
Cover the case for these entries: the file format version entry is
introduced to distinguish between different file formats used for
encoding, while the prelude entry is used to store optional metadata
such as the pxar cli exclude parameters.
Add the logic to accept and decode these prelude entries when
accessing the archive via a decoder instance.
For now simply ignore them.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
With the additional output in case of split pxar archives, the upload
statistics logged by the backup writer following a backup are crowded
and hard to read.
Make the output more concise by merging the current 2 lines per
upload stream, shown as e.g.:
```
data.ppxar: had to backup 4 MiB of 10.943 GiB (compressed 159 B) in 49.30s
data.ppxar: average backup speed: 83.09 KiB/s
```
into a single line, shown as e.g.:
```
data.ppxar: had to back up 4 MiB of 10.943 GiB (159 B compressed) in 49.30 s (average 83.09 KiB/s)
```
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
When walking the file system tree, check for each entry if it is
reusable, meaning that the metadata did not change and the payload
chunks can be reindexed instead of reencoding the whole data.
If the metadata matched, the range of the dynamic index entries for
that file is looked up in the previous payload data index.
Use the range and possible padding introduced by partial reuse of
chunks to decide whether to reuse the dynamic entries and encode
the file payloads as payload reference right away or cache the entry
for now and keep looking ahead.
If however a non-reusable (because changed) entry is encountered
before the padding threshold is reached, the entries on the cache are
flushed to the archive by reencoding them, resetting the cached state.
Reusable chunk digests and size as well as reference offsets to the
start of regular file payloads within the payload stream are injected
into the backup stream by sending them to the chunker via a dedicated
channel, forcing a chunk boundary and inserting the chunks.
If the threshold value for reuse is reached, the chunks are injected
in the payload stream and the references with the corresponding
offsets encoded in the metadata stream.
Since multiple files might be contained within a single chunk,
deduplication is ensured by keeping back the last chunk, so that
following files can reuse that same chunk without indexing it twice.
It is also ensured that this chunk is injected into the stream even
if the following lookups lead to a cache clear and re-encoding.
Directory boundaries are cached as well, and written as part of the
encoding when flushing.
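Highly simplified, the per-entry decision described above boils down to
something like the following (directory boundary tracking and chunk
injection are left out; names are illustrative):
```
enum Action {
    /// flush the cache by encoding the entries as payload references,
    /// reusing the previously indexed chunks
    ReuseCached,
    /// flush the cache by re-encoding the entries, their payload has to
    /// be uploaded again
    ReencodeCached,
    /// not decided yet, keep the entry cached and look further ahead
    KeepLookingAhead,
}

fn lookahead_step(metadata_unchanged: bool, padding: u64, threshold: u64) -> Action {
    if !metadata_unchanged {
        // a changed file forces re-encoding of everything cached so far
        Action::ReencodeCached
    } else if padding <= threshold {
        // reuse is cheap enough, flush the cached entries as references
        Action::ReuseCached
    } else {
        // too much padding for now, following files might share the chunks
        Action::KeepLookingAhead
    }
}
```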
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Move the catalog directory start and end encoding from `add_entry`
to `add_directory`, the latter being called by the former.
By this, the `add_entry` method can be reused to walk the filesystem
tree in the context of an enabled lookahead cache without encoding
anything.
No functional change intended.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add a lookahead cache and the necessary types to store the required
data and keep track of directory boundaries while traversing the
filesystem tree, in order to postpone the decision whether to reuse or
re-encode a given regular file with unchanged metadata.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add method to compare metadata of current file entry against metadata
of the entry looked up in the previous backup snapshot. If the
metadata matches, the start offset pointing to the file's payload
header in the payload stream is returned.
This is in preparation for reusing payload chunks for unchanged files.
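Conceptually (the `Metadata` type below stands in for the pxar metadata
actually compared):
```
#[derive(PartialEq)]
struct Metadata {
    mode: u64,
    mtime: i64,
    uid: u32,
    gid: u32,
    // ...
}

/// Returns the payload offset of the previous entry if the metadata
/// matches, signalling that its payload chunks can be reused.
fn reusable_payload_offset(
    previous: Option<&(Metadata, u64)>, // metadata + payload offset from the last snapshot
    current: &Metadata,
) -> Option<u64> {
    match previous {
        Some((prev, offset)) if *prev == *current => Some(*offset),
        _ => None,
    }
}
```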
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Implement a method that prepares the decoder instance to access a
previous snapshot's metadata index and payload index in order to
pass them to the pxar archiver. The archiver can then utilize these
to compare the metadata for files to the previous state and gather
reusable chunks.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Adds the specification for switching the detection mode used to
identify regular files which changed since a reference backup run.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
To reuse dynamic entries of a previous backup run and index them for
the new snapshot. Adds a non-blocking channel between the pxar
archiver and the chunk stream, as well as the chunk stream and the
backup writer.
The archiver sends forced boundary positions and the dynamic
entries to inject into the chunk stream following this boundary.
The chunk stream consumes these channel inputs as the receiver whenever a
new chunk is requested by the upload stream, forcing a non-regular
chunk boundary in the pxar stream at the requested positions.
The dynamic entries to inject and the boundary are then sent via the
second asynchronous channel to the backup writer's upload stream,
indexing them by inserting the dynamic entries as known chunks into
the upload stream.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
When forcing a boundary, the internal chunker state is not in sync
with the chunk stream anymore. The reset method therefore allows
to reset the internal state.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Adds a dedicated structure to hold the optional sender and receiver
instances and state for injection of reused dynamic entries in the
payload stream for split stream pxar archives.
The asynchronous channels must only be attached to the payload
archive, leaving the current behavior for the metadata archive and
current default encoding without reusing payload chunks of previous
snapshots.
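An illustrative sketch of such a structure; field names and the exact
channel layout are assumptions, not the actual implementation:
```
use std::sync::mpsc;

/// Reused dynamic entries to be injected at a forced chunk boundary.
struct InjectChunks {
    boundary: u64,                // forced boundary offset in the payload stream
    chunks: Vec<([u8; 32], u64)>, // (digest, size) of the reused chunks
}

/// Channel ends and state used by the payload chunk stream; the metadata
/// stream simply carries no such instance and keeps the current behavior.
struct InjectionData {
    /// boundaries and reused chunks sent by the archiver
    boundaries: mpsc::Receiver<InjectChunks>,
    /// forwarded to the backup writer's upload stream for indexing
    injections: mpsc::Sender<InjectChunks>,
}
```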
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In order to be included in the backup's index file, reused payload
chunks have to be injected into the payload upload stream at a
forced boundary. The chunker forces a chunk boundary and sends the
list of reusable dynamic entries to be uploaded.
This implements the logic to receive these dynamic entries via the
corresponding communication channel from the chunker and inject the
entries into the backup upload stream by looking for the matching
chunk boundary, already forced by the chunker.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The helper method allows to lookup the entries of a dynamic index
which fully cover a given offset range. Further, the helper returns
the start padding from the start offset of the dynamic index entry
to the start offset of the given range and the end padding.
This will be used to lookup size and digest for chunks covering the
payload range of a regular file in order to re-use found chunks by
indexing them in the archive's index file instead of re-encoding the
payload.
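A conceptual sketch of the lookup over the cumulative end offsets of a
dynamic index (types and names are illustrative, the real helper works
on the in-memory index):
```
struct DynamicEntry {
    end: u64,          // cumulative end offset of this chunk
    digest: [u8; 32],
}

struct CoveringRange {
    entries: Vec<DynamicEntry>,
    start_padding: u64, // bytes in the first chunk before the requested range
    end_padding: u64,   // bytes in the last chunk after the requested range
}

fn lookup_covering(index: &[DynamicEntry], start: u64, end: u64) -> Option<CoveringRange> {
    let mut entries = Vec::new();
    let mut chunk_start = 0;
    let mut start_padding = 0;
    for entry in index {
        if entry.end > start {
            if entries.is_empty() {
                start_padding = start - chunk_start;
            }
            entries.push(DynamicEntry { end: entry.end, digest: entry.digest });
            if entry.end >= end {
                let end_padding = entry.end - end;
                return Some(CoveringRange { entries, start_padding, end_padding });
            }
        }
        chunk_start = entry.end;
    }
    None // the index does not fully cover the requested range
}
```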
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Also display the payload offset as listing output when the regular file
entry had a payload reference rather than the payload encoded in the
archive. This allows for debugging by inspecting the raw payload data
file at the given offset.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Allows to list entries of split pxar archives. As the decoder skips
over the file payloads, the corresponding payload file has to be
provided. Otherwise the decoder would skip inside the metadata
archive, leading to incorrect decoding.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Allows to pass the optional payload input to restore for cases where the
regular file payloads are stored in the split archive.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Attach the payload data archive as input stream to the decoder
and accessor instances for split archives.
Allows to restore contents from split archives via the
`proxmox-file-restore extract` command, by passing the metadata
archive name.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Factor out the logic to get the pxar reader into a dedicated function
so it can be reused to get the payload data archive reader instance.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Cover the additional `.mpxar` for metadata archive and `.ppxar` for
the payload data for pxar archives written as split archive.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Allows to access the pxar metadata archives for navigation and
download via the Proxmox Backup Server web ui.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Cover the cases where the pxar archive was uploaded as split payload
data and metadata streams. Instantiate the required reader and
decoder instances to access the metadata and payload data archives,
using the corresponding helper methods.
Allows to restore split metadata and payload stream pxar archives via
the catalog shell.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Attach the payload chunk reader for pxar archives which have been
uploaded using split streams for metadata and payload data.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Cover the cases where the pxar archive was uploaded as split payload
data and metadata streams. Instantiate the required reader and
decoder instances to access the metadata and payload data archives.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Cover the additional `.mpxar` for metadata archive and `.ppxar` for
the payload data file in the cli parameter completion callback.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Whenever a split pxar archive is encountered, instantiate and attach
the required dedicated reader instance to the decoder instance on
restore.
Piping the output to stdout is not possible for these, as this would
require a decoder instance which can decode the input stream, while
maintaining the pxar stream format as output.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
With the introduction of split pxar archives, the allowed extensions
are now `.pxar`, `.mpxar` and `.ppxar`. Add a helper function to
allow to check for all valid variants, including the optional
additional `.didx` in case of a server archive name.
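Sketched, such a check could look like this (the helper name and exact
semantics are assumptions):
```
fn has_pxar_filename_extension(name: &str, server_archive: bool) -> bool {
    // on the server side the datastore appends the dynamic index extension
    let name = if server_archive {
        name.strip_suffix(".didx").unwrap_or(name)
    } else {
        name
    };
    name.ends_with(".pxar") || name.ends_with(".mpxar") || name.ends_with(".ppxar")
}
```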
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Helper method that takes an archive name as input and checks if the
given archive is present in the manifest, by also taking possible
split archive extensions into account.
Returns the pxar archive name if found or the split archive names if
the split archive variant is present in the manifest.
If neither is matched, an error is returned signaling that nothing
matched entries in the manifest.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Add module to place helper methods which need to be used in different
submodules of the client.
Add `get_pxar_fuse_reader`, `get_buffered_pxar_reader` and
`get_pxar_fuse_accessor` to create reader instances to access pxar
archives.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
... and attach the split payload writer variant to the pxar archive
creation. By this, metadata and payload data will create different
dynamic indexes, allowing to lookup and reuse payload chunks without
the additional overhead of the pxar archive's metadata.
For now this functionality remains disabled and will be enabled in a
later patch once the logic for reusing the payload chunks is in
place.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Introduce a `PxarWriters` struct to bundle all writer instances
required for the pxar archive creation into a single object to limit
the number of function call parameters.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
... and adapt to the new reader/writer variant for encoder or
decoder/accessor to attach a dedicated payload input/output for split
pxar archives.
In preparation for look-ahead caching, where passing around
per-directory-level encoder instances with internal references is
not feasible.
Previously, a new encoder instance was generated for each directory
level, restricting possible implementation errors. These encoder
instances were internally linked by references to keep track of
the state changes in a parent-child relationship.
This is however not feasible when the encoder has to be passed by
mutable reference, as required by the look-ahead cache
implementation. The encoder has therefore been adapted to use a
single instance implementation with an internal stack keeping track
of the state.
Depends on the bumped pxar library version, including the patches to
attach the corresponding variant for the pxar reader/writer
instantiation.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
In preparation for injecting reused payload chunks in payload streams
for regular files with unchanged metadata. Allows to get the digest
of a dynamic index entry to construct a reusable dynamic entry from
it.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Move the code to get the local chunk reader to a dedicated function
to make it reusable. The same code is required to get the local chunk
reader for the payload stream for split stream archives.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Instead of composing the backup target name and pushing it to the
backup list, push the archive name and extension separately, only
constructing it while iterating the list later.
By this it remains possible to additionally prefix the extension, as
required with the separate pxar metadata and payload indexes.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
currently we don't lock the shadow file when removing or storing a
password. by adding locking here we avoid a situation where storing
and/or removing a password concurrently could lead to a race
condition. in this scenario it is possible that a password isn't
persisted or a password isn't removed. we already do this for
the "token.shadow" file, so just use the same mechanism here.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
With proxmox-widget-toolkit < 4.1.4, loading the UI will fail with
a JavaScript error:
> Uncaught TypeError: Proxmox.Utils.overrideNotificationFieldName is not a function
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The read_interface endpoint uses the wrong path identifier. It has been
renamed to 'iface' some time ago but hasn't been changed here.
When a user had a permission on '/' with 'Admin', they weren't able to
show the config of a single interface, as the non-existent path didn't
match.
Reported-by: https://forum.proxmox.com/threads/permissons-not-working-for-network-settings.147899/
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
The product name is Proxmox Backup Server, not just Backup Server,
that makes no sense on its own, and tools extracting any Medium
Auxiliary Memory (MAM) info really cannot be expected to render it
as `${app_vendor} ${app_name}`.
Drop the comment about ignoring errors, that's pretty clear with
the only-log-error construct.
Instead, add some comments about what the hex numbers refer to and
what their respective length (limit) is. The names were taken from
Table 315 "MAM Host type attributes" in the "IBM LTO SCSI Reference"
for LTO 9.
Slightly off-topic: The tape code really is a mess with sprinkling
those hex numbers hard coded all over the place, often with some
unchecked coupling in other places (like here, the list of set MAM
attrs and the one that gets cleared can easily get out of sync..), but
that's for another time to clean-up (I need to cut a release).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
instead of blocking on input without telling the user what's going on.
Reported on the forum: https://forum.proxmox.com/threads/147058/
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
This new section describes how the notification-mode parameter works.
The section also contains parts of the old notification section
from the maintenance chapter, reusing the description of the
`notify` and `notify-user` parameters.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
When using the legacy notifications the sync mode would pick up the
settings from the prune-job, which default to Error. This completely
disables notifications for successful sync-jobs when using the legacy
system.
Reported in the forum: https://forum.proxmox.com/threads/147018/
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
this commit switches pbs over to generating ed25519 keys when
generating new auth api keys. this also removes the last direct
usages of openssl here and further unifies key handling in the auth
api.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
this commit moves away from using openssl's `PKey` and uses the
wrappers from proxmox-auth-api. this allows us to handle keys in a
more flexible way and enables as to move to ec based crypto for the
authkey in the future.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
if a user's password is not hashed with the latest password hashing
function, re-hash the password with the newest hashing function. we
can only do this on login and after the password has been validated,
as this is the only point at which we have access to the plain text
password and also know that it matched the original password.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
previously we used a self-rolled implementation for csrf tokens. while
it's unlikely to cause issues in reality, as csrf tokens are only
valid for a given ticket's lifetime, there are still theoretical
attacks on our implementation. so move all of this code into the
proxmox-auth-api crate and use hmac instead.
this change should not impact existing installations for now, as this
falls back to the old implementation if a key is already present. hmac
keys will only be used for new installations and if users manually
remove the old key.
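for illustration, an hmac based csrf token can be derived roughly like
this (a conceptual sketch using the hmac/sha2/hex crates; token layout
and input format are assumptions, not the proxmox-auth-api code):
```
use hmac::{Hmac, Mac};
use sha2::Sha256;

fn csrf_token(key: &[u8], timestamp: i64, userid: &str) -> String {
    let mut mac =
        Hmac::<Sha256>::new_from_slice(key).expect("hmac accepts any key length");
    mac.update(format!("{timestamp:08X}:{userid}").as_bytes());
    let tag = mac.finalize().into_bytes();
    // hypothetical "timestamp:signature" layout
    format!("{timestamp:08X}:{}", hex::encode(tag))
}
```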
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
For one, these different views have different columns shown, and more
importantly: with the state being shared one could change sorting in
the global view and then have that applied in the per-datastore view
too, even if one cannot sort that view explicitly otherwise as there's
just one row anyway. This small glitch might lead to a bit of
confusion in the worst case and looks unpolished in any case.
Note that I explicitly decided against encoding the datastore in the
state-id for the per-datastore views for now, as most users will want
to adapt layout (like column width) for all per-datastore views.
Having to re-do that for every datastore separately can be quite a
nuisance while the same user wanting different layout for each
datastore in their per-datastore view seems rather to be an edge case.
And we can always change this, so starting out with the slightly more
restricted design that has less browser local data to be saved seems
better w.r.t. maintainability.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Make columns sortable in the global 'Prune & GC Jobs' view. In the
per-datastore view the columns will not be sortable as there can only be
one job.
Fixes: db3fd213 ("fix #3217: ui: global prune and gc job view")
Co-authored-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
the disk serial given to virtio disks can only be 20 characters, so
looking for a disk with a longer serial will always fail (like
'drive-tpmstate0-backup'). If the serial is longer, also try with the
truncated one. Leave the first try in place in case the limit changes.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
in case we cannot stat a file in the restore vm, log the path and reason
why. This should normally not happen, but when it does, the path and
error might help us find the issue.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since the change in our restore image to ntfs3, non-iso8859-1 filenames
were broken. Fix that by adding the 'iocharset' option to ntfs3.
Leave the ntfs option in place, so that if the image gets booted
with an older kernel for some reason, this still works.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
namely:
Vendor: Proxmox
Name: Backup Server
Version: current running package version
User Label Text: the label text
Media Pool: the current media pool
write it on labeling and when writing a new media-set to a tape.
While we currently don't use this info for anything, this can help users
to identify tapes, even with different backup software.
If we need it in the future, we can e.g. make decisions based on these
fields (e.g. the version).
On format, delete them again.
Note that some VTLs don't correctly delete the attributes from the
virtual tapes.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Some MAM attributes are of type 'TEXT', which is not only ascii, but
controlled by an additional field that specifies various 8-bit text
formats.
For now, simply assume utf8 as the default is ascii, and we don't expect
any data that is not ASCII anyway.
This will be needed when we'll want to write those attributes.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Since we don't query each drive's status separately, but rely on a single
call to the drive listing for that, we now add the option
to query the activity there too. This makes that data available for us
to show in a separate (by default hidden) column.
Also we show the activity in the 'State' column when the drive is idle
from our perspective. This is useful when e.g. an LTO-9 tape is loaded
the first time and is calibrating, since that happens automatically.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when the tape drive has an activity (and the tape is in motion), certain
calls block until the operation is finished. Since we cannot predict how
long it's going to take and it can be quite long in certain cases,
skip those calls when the drive is doing anything.
If we cannot determine the activity, try to do the queries.
We have to extend the check for a loaded drive in the UI, since the
position is not available during any activity.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we use the VHF part from the DT Device Activity page for that.
This is intended to query the drive for its current state and activity.
Currently only the activity is parsed and used.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
and show them on the ui. This can help users see how much a tape
is used.
The value is updated on 'commit' and when the tape is changed during a
backup.
For drives not supporting the volume statistics, this is simply skipped.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
A small example that simply writes pseudo-random chunks to a drive.
This is useful to benchmark throughput on tape drives.
The output and behavior is similar to what the pool writer does, but
without writing multiple files, committing or loading data from disk.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
When writing data on tape, the idea was to sync/commit to tape and
write the catalog to disk every 128GiB of data. For that the counter
'bytes_written' was introduced and checked after every chunk/snapshot
archive.
Sadly we forgot to reset the counter after doing so, which meant that
after 128GiB was written onto the tape, we synced/committed after every
archive on the tape for the remaining length of the tape.
Since syncing to tape and writing to disk takes a bit of time, the drive
had to slow down every time and reduced the available throughput. (In
our tests here from ~300MB/s to ~255MB/s).
By resetting the value to zero after syncing, we avoid that and increase
throughput performance when backups are bigger than 128GiB on tape.
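The fixed pattern, reduced to its core (constants and names are
illustrative, not the actual pool writer code):
```
const COMMIT_THRESHOLD: u64 = 128 * 1024 * 1024 * 1024; // 128 GiB

fn after_write(bytes_written: &mut u64, written: u64, commit: &mut dyn FnMut()) {
    *bytes_written += written;
    if *bytes_written >= COMMIT_THRESHOLD {
        commit(); // sync to tape and write the catalog to disk
        *bytes_written = 0; // the reset that was previously missing
    }
}
```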
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
to ensure that it can handle the recently lifted restrictions on the
organization and bucket parameters correctly by URL encoding them.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Remove the regex for influxdb organizations and buckets. Influxdb does
not place any constraints on these names and allows all characters. This
allows influxdb organization names with slashes.
Also remove a duplicate comment and add some missing ones.
This also aligns the behavior to PVE as there are no restrictions there
either.
The motivation for this patch is this forum post:
https://forum.proxmox.com/threads/influx-db-organization-doesnt-allow-slash.145402/
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
to ensure the next build contains the 78bf05a4 ("fix: use fragmented
block size for space calculation") improvement.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When creating a new sync job and a local namespace is configured
without setting a remote first, the createMaxPrefixLength
was passed an array instead of a string/undefined/null, which
triggered a 'ns2.match is not a function' exception, making the UI
glitchy afterwards.
Fixed by explicitly checking for a string. Verified that the other
user of NamespaceMaxDepthReduced, the prune job edit window, does not
break after the change.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
since the api rejects unknown parameters, deleteEmpty needs to be
unset here, because the endpoint for creating backups does not support
deleting parameters. otherwise a user will get a fairly cryptic error
message in the gui.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
The default mail author for SMTP and Sendmail target is
"Proxmox Backup Server - <hostname>" and not
"Proxmox Backup Server (<hostname>)".
This is just a cosmetic change which affects the empty text for the
'Author' field in the sendmail/smtp edit window.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
* Enabled the "Linux VLAN" option when creating a new interface.
* This requires the updated widget-toolkit to contain the vlan field widget.
Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Folke Gleumes <f.gleumes@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
* Implement setting vlan-id and vlan-raw-device in the create and update api.
* Checking if the provided vlan-raw-device exists
* Moved VLAN_INTERFACE_REGEX to top level network module to use it in
the checking functions there. Changed to match with named capture groups.
* Unit tests to verify parsing vlan_id and vlan_raw_device from the name
  (see the sketch below).
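As a rough illustration of the name parsing with named capture groups
(the actual regex and accepted name formats may differ):
```
use regex::Regex;

/// Parse `eth0.100` style and `vlan100` style interface names into the
/// optional raw device and the vlan id. In real code the regex would be
/// compiled once instead of per call.
fn parse_vlan_name(name: &str) -> Option<(Option<String>, u16)> {
    let re = Regex::new(
        r"^(?:(?P<vlan_raw_device>\S+)\.(?P<vlan_id>\d+)|vlan(?P<vlan_id2>\d+))$",
    )
    .ok()?;
    let caps = re.captures(name)?;
    if let Some(id) = caps.name("vlan_id") {
        let raw_device = caps.name("vlan_raw_device").map(|m| m.as_str().to_string());
        Some((raw_device, id.as_str().parse().ok()?))
    } else {
        Some((None, caps.name("vlan_id2")?.as_str().parse().ok()?))
    }
}
```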
Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Folke Gleumes <f.gleumes@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Support three types of vlan configurations defined in interfaces,
conforming to the PVE configurations:
  iface nic.<vlan-id> inet
  iface vlan<vlan-id> inet
      vlan-raw-device <nic>
  iface <arbitrary-name> inet
      vlan-id <vlan-id>
      vlan-raw-device <nic>
* Add lexer Token enum variants for vlan-id and vlan-raw-device and parse
them in parse_iface_attributes.
* Add tests to verify this works in the above scenarios
Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Folke Gleumes <f.gleumes@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
* Add vlan_id and vlan_raw_device fields to the Interface api type
* Write to the network config the vlan specific properties for vlan
interface type
* Add several tests to verify the functionality
Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Folke Gleumes <f.gleumes@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
All current tests in network/mod.rs only test parser functionality and
should therefore live in the parser module.
Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Folke Gleumes <f.gleumes@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
by checking the whole section config for an existing id, not only the
ones of the given type.
This prevents creation of a drive config with the same name as an
existing changer and vice versa, as it is confusing that existing things
get deleted, and we can get in the situation that we reference a changer
that does not exist anymore, i.e. consider this:
* create a changer with name `foo`
* create a drive with name `foo` and select changer `foo` for it
this would delete the changer config, but still reference it, leading
to errors when trying to use it.
We could implement support for separate id namespaces in section configs
for different types, but this is much easier to do and is enough
for now.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Mention and briefly explain it. The main part of the documentation will
live in the Wiki for now as it applies to not just Proxmox Mail Gateway.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
[ TL: adapt to changes made in the wiki article ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this function is called every time a user tries to log in to check
whether a tfa challenge is required. since the tfa config may need to
be written by the auth api (e.g. when a recovery key is used) this
needs to use a write lock instead of a read lock in order to avoid
potential races.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
Basically just a thin wrapper over the existing LDAP-based realm sync
job, which retrieves the appropriate config and sets the correct user
attributes.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Use the /admin/datastore API instead of /config/datastore to get a list
of all available datastores - this ensures that users can see
datastores even if they only have Datastore.Backup privs.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
These changes have not been applied yet in widget toolkit, but
are very valuable for the initial integration in PBS.
We override modified components and replace them with the patched
variants.
The changes modify the edit window such that known field names and
values are suggested in a combobox. Also, the 'exact' match mode
can now match multiple values.
This can and *should* be removed once the changes from [1] are
merged into the widget toolkit.
[1] https://lists.proxmox.com/pipermail/pve-devel/2024-April/063539.html
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Otherwise, 'Proxmox VE' is shown as the default author in the UI.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Mostly copied from PVE and adapted where it makes sense.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The /system/notifications ACL path is used for configuring the
notification system.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This mechanism allows having nice, translatable notification event
types and fields.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This commit adds the same notification configuration panel that we
already use in Proxmox VE.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If the `notification-mode` parameter is set to `legacy-sendmail`, then
we still use the new infrastructure, but don't consider the
notification config and use a hard-coded sendmail endpoint directly.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If the `notification-mode` parameter is set to `legacy-sendmail`, then
we still use the new infrastructure, but don't consider the
notification config and use a hard-coded sendmail endpoint directly.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If the `notification-mode` parameter is set to `legacy-sendmail`, then
we still use the new infrastructure, but don't consider the
notification config and use a hard-coded sendmail endpoint directly.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If the `notification-mode` parameter is set to `legacy-sendmail`, then
we still use the new infrastructure, but don't consider the
notification config and use a hard-coded sendmail endpoint directly.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If the `notification-mode` parameter is set to `legacy-sendmail`, then
we still use the new infrastructure, but don't consider the
notification config and use a hard-coded sendmail endpoint directly.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Same as with datastores, this option determines whether we send
notifications the old way (send email via sendmail to a user's email
address) or the new way (emit matchable notification events to the
notification stack).
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This one lets the user choose between the old notification behavior
(selecting an email address/user and always/error/never behavior per
datastore) and the new one (emit notification events to the
notification system)
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
These endpoints require Sys.Audit/Sys.Modify permissions on
/system/notifications.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
These endpoints require Sys.Audit/Sys.Modify permissions on
/system/notifications.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
These endpoints require Sys.Audit/Sys.Modify permissions on
/system/notifications.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
These endpoints require Sys.Audit/Sys.Modify permissions on
/system/notifications.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
These endpoints require Sys.Audit/Sys.Modify permissions on
/system/notifications.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
These endpoints require Sys.Audit permissions on
/system/notifications.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This one will be used for configuring the new notification stack.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The notification stack loads handlebar templates for notifications
from /usr/share/proxmox-backup-server/templates/default/. This commit
modifies the build system to install template files from the
'templates' directory at that location. For now, we only have templates
for test notifications.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
- Set the context in proxmox_notify
- Add helper function which queues notifications to a spool
directory
- Set up a worker task, running in the privileged process, which
periodically checks the spool directory for queued notifications
The queuing is needed because on PBS we send most if not all
notifications from the proxy-process running as the `backup` user.
However, to have access to the protected passwords/tokens for various
notification endpoints, we need to read the notification config as
root.
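A very rough sketch of the spool mechanism (paths, file naming and types
are illustrative, not the proxmox_notify API):
```
use std::{fs, path::Path, thread, time::Duration};

const SPOOL_DIR: &str = "/run/proxmox-backup/notifications"; // illustrative path

// unprivileged proxy: serialize the notification and drop it into the spool;
// a real implementation needs unique file names and atomic renames
fn queue_notification(json: &str) -> std::io::Result<()> {
    fs::write(format!("{SPOOL_DIR}/{}.json", std::process::id()), json)
}

// privileged process: periodically pick up queued notifications and send
// them with access to the protected endpoint credentials
fn spool_worker(send: impl Fn(&str)) -> std::io::Result<()> {
    loop {
        for entry in fs::read_dir(Path::new(SPOOL_DIR))? {
            let path = entry?.path();
            if path.extension().and_then(|e| e.to_str()) == Some("json") {
                send(&fs::read_to_string(&path)?);
                fs::remove_file(&path)?;
            }
        }
        thread::sleep(Duration::from_secs(5));
    }
}
```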
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The module will be extended to interact with the proxmox_notify crate,
hence the name change seems to be in order.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the schedule handling is the same whether there was a last run or not, so let's
do it once and not twice. the duration can be stored right away, instead of
using an intermediate variable.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the latter was newly introduced, and they both return basically the same
information now. the new extended (job) status struct is a strict superset of
the old status struct, so this is not a breaking change API wise.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to avoid drifting definitions and reduce duplication. with the next major
release, the 'upid' field could then be renamed and aliased to be in line with
the other jobs, which all use 'last-run-upid'. doing it now would break
existing callers of the GC status endpoint (or consumers of the on-disk status
file).
the main difference is that the GC status fields are now not optional (except
for the UPID) in the job status, since flattening an optional value is not
possible. this only affects datastores that were never GCed at all, and only
direct API consumers, since the UI handles those fields correctly.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
setting `width` and `flex` in a column simultaneously won't work, and
the `flex` value takes priority. So remove the unused `width`
properties.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
before, this was only used where the top list was a fixed size and only
for one datastore (which limits the number of prune jobs a bit).
Since we now show gc jobs for all datastores here too, and all their
prune jobs, this panel can get much bigger.
To improve its scrolling and sizing behavior, make the prune jobs panel
`flex: 1`, so it fills out the rest of the view, and add a splitter
between them so one can resize them on the fly. To prevent making one of
the panels too small, set an appropriate minHeight for both and make the
surrounding panel scrollable.
To not save the height into its state, we have to filter that out for
the GCView.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The ternary ? operator should be at the start of the line if the
expression is split into multiple lines.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
table expands to the full width and relevant data is still visible on a
narrow screen.
Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Show the removed and pending data of the last run formatted with
Proxmox.Utils.format_size for better readability, identical to the data
display in the overview tab.
Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Suggested-by: Lukas Wagner <l.wagner@proxmox.com>
proxmox-backup-manager garbage-collection list
to list the garbage collection job status for all datastores,
including datastores without gc jobs.
Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
[LW: add ref to bugzilla issue to commit message]
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Make the order identical to the local datastore view.
Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
In the global datastore view, extend the prune view to display gc job
status as a table. Use the same widget in the local view and display gc
job status as a single row.
The local PruneAndGC view is parameterized (cbind) with the datastore.
At initialization the only row is selected. This allows the rest of the
grid to act on selected rows and requires far less special casing
depending on whether the datastore is set on the view or not.
Having a single row always selected, and therefore highlighted, is
visually not appealing. Therefore, highlighting of selected rows is
disabled in the local view.
Moved GCView to a different file and enhanced it with last run, next
run, status and duration. Added a button to show the task log.
Changed `render_task_status()` to also take into account UPIDs stored
in other 'columns'.
Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
[LW: include ref to bugzilla in commit message]
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Originally-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Adds an api endpoint on the datastore that reports the gc job status
such as:
- Schedule
- State (of last run)
- Duration (of last run)
- Last Run
- Next Run (if scheduled)
- Pending Chunks (of last run)
- Pending Bytes (of last run)
- Removed Chunks (of last run)
- Removed Bytes (of last run)
Adds a dedicated endpoint admin/gc that reports gc job status for all
datastores, including the ones without a gc-schedule.
Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
Originally-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
we have to iterate over the keys of the state object here, not over the
values. This meant one could not reset the layout from the settings
window.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Maintenance mode Delete locks the datastore. It must not be possible to go
back to normal modes, because the datastore may be in an undefined state.
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
In the edit dialog we already use 'Max. Depth', so it makes sense
to use the same term in the overview.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
The sphinx documentation [0] describes the _static folder as the
location for the custom.js and custom.css so we move the files there, as
we do not need those files outside the directory.
This also removes the error message when building:
WARNING: html_static_path entry '_static' does not exist
[0] https://www.sphinx-doc.org/en/master/development/theming.html#add-your-own-static-files-to-the-build-assets
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
`prune-group` is currently not a real workertask, i.e. it behaves like
one but doesn't start a thread or a task to do its work.
Changed it to start a tokio-task, so that we can delete snapshots
asynchronously. The `dry-run` feature still behaves in the same way and
returns early.
This paves the way for the new logging infra (which uses `task_local` to
define a logger) and improves performance of bigger backup-groups.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
When formatting and creating a filesystem on a disk it's important
that the target directory in `/mnt/datastore/<name>` either doesn't
exist yet, or is empty and not a mountpoint of an existing FS. That
way we ensure that no data is lost, or gets hidden, when creating a new
datastore. Our current check was a bit stricter than required: it
always bailed if the target directory existed, even if it was a plain
and empty directory on the root file system.
So adapt the check and also check whether an existing target directory
is empty and not already mounted, as then it can be used just fine.
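A minimal sketch of such a check (not the actual implementation): a pre-existing directory is acceptable if it contains no entries and its device id matches that of its parent, i.e. nothing is mounted on it.
```rust
use std::os::unix::fs::MetadataExt;
use std::{fs, path::Path};

use anyhow::{bail, Result};

/// Accept a pre-existing target directory only if it is empty and not
/// already a mount point.
fn check_reusable(target: &Path) -> Result<()> {
    if !target.exists() {
        return Ok(()); // will be created later
    }
    if fs::read_dir(target)?.next().is_some() {
        bail!("target directory {:?} is not empty", target);
    }
    // a directory whose device id differs from its parent's is a mount point
    let parent = target.parent().unwrap_or(Path::new("/"));
    if fs::metadata(target)?.dev() != fs::metadata(parent)?.dev() {
        bail!("target directory {:?} is already mounted", target);
    }
    Ok(())
}
```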
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Tested-by: Christian Ebner <c.ebner@proxmox.com>
[ TL: reword subject and commit message to include more details ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Instead of taking ownership of the http client when starting a new
BackupWriter instance, only borrow the client.
This allows reusing the http client later to also start a BackupReader
instance, as required for backup runs with metadata-based file change
detection mode, where both must use the same http client.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
instead of rejecting any non-leaf certificate not pre-validated by OpenSSL,
treat them as valid but keep track of the fact that the pre-validation
result is no longer trustworthy.
certificate chains completely trusted by openssl are still accepted like
before, and leaf certificates without a chain are also handled the same (since
the verify callback is only ever called with depth == 0 in that case).
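The rough shape of such a verify callback, as a sketch with the rust-openssl API (the flag handling and the final leaf check are simplified; in practice the leaf would rather be compared against an expected fingerprint when the pre-validation result is tainted):
```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

use openssl::ssl::{SslConnector, SslMethod, SslVerifyMode};

fn build_connector() -> Result<SslConnector, openssl::error::ErrorStack> {
    // remembers whether OpenSSL failed to pre-validate a non-leaf certificate
    let tainted = Arc::new(AtomicBool::new(false));
    let tainted_cb = Arc::clone(&tainted);

    let mut builder = SslConnector::builder(SslMethod::tls())?;
    builder.set_verify_callback(SslVerifyMode::PEER, move |valid, ctx| {
        if ctx.error_depth() > 0 {
            if !valid {
                // non-leaf certificate not trusted by OpenSSL: accept it,
                // but the pre-validation result is no longer trustworthy
                tainted_cb.store(true, Ordering::SeqCst);
            }
            return true;
        }
        // depth == 0, i.e. the leaf certificate: here the real code would
        // e.g. check a pinned fingerprint when `tainted_cb` is set
        valid && !tainted_cb.load(Ordering::SeqCst)
    });
    Ok(builder.build())
}
```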
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Since both only need a handful of attributes anyway, pass them
explicitly instead of as an LDAP-specific config object, such that these
types can be reused for other realms like the new Active Directory one.
No functional changes.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This will be needed by the AD authenticator as well, so avoid duplicate
code.
No functional changes.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This will be needed by the AD authenticator as well, so avoid duplicate
code.
No functional changes.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It is more readable than using match. We also inline variables in
eprintln!.
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The schedule value for prune jobs can not be empty.
Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
On this `ls` command the shell prompt ('#') was missing.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Added two examples for the `--exclude` parameter of the
`proxmox-backup-client backup` command.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Extend the current task log summary to include a log entry stating the
number of snapshots, backup groups and namespaces that were removed
because they vanished on the source side.
The additional task log line states, e.g.:
> Summary: removed vanished: snapshots: 2, groups: 1, namespaces: 0
The log line is not shown if the sync job's `remove_vanished` flag was
not set and therefore no removed-vanished stats are present.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Include statistics of vanished and therefore removed snapshots, backup
groups and namespaces in the `PullStats`.
In preparation for including these values in the sync job's task log
output.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
No functional change intended: In preparation for including the
removed-vanished groups and snapshots statistics in the sync job's task
log output.
Instead of returning a boolean value showing whether all of the
snapshots of the group have been removed, return an instance of
`BackupGroupDeleteStats`, containing the count of deleted and
protected snapshots, the latter not having been removed from the
group.
The `removed_all` method is introduced as replacement for the previous
boolean return value and can be used to check if all snapshots have
been removed. If there are no protected snapshots, the group is
considered to be deleted.
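A rough sketch of the shape of that statistics type (names follow the commit message, the actual fields and helpers may differ):
```rust
/// Counts of removed and protected snapshots of a deleted backup group.
#[derive(Default)]
pub struct BackupGroupDeleteStats {
    removed_snapshots: usize,
    protected_snapshots: usize,
}

impl BackupGroupDeleteStats {
    pub fn increment_removed_snapshots(&mut self) {
        self.removed_snapshots += 1;
    }

    pub fn increment_protected_snapshots(&mut self) {
        self.protected_snapshots += 1;
    }

    /// Replacement for the previous boolean return value: the group counts
    /// as fully deleted if no protected snapshots were left behind.
    pub fn removed_all(&self) -> bool {
        self.protected_snapshots == 0
    }
}
```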
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When navigating to Datastores -> Content, it is now possible to
right-click on a snapshot/group and copy the name to the clipboard.
This makes the proxmox-backup-client much easier to use, especially when
restoring archives.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The `document.execCommand` call has been deprecated for a few years [0] so I
went ahead and removed it. We only use it to copy stuff to the clipboard
and the recommended way now is to use `navigator.clipboard.writeText`
[1]. `writeText` is kind of new, but I think we'll be alright regarding
compatibility (Compat table is also available at [1]).
Making the handler functions async is okay because extjs executes the
handler and does not expect any result from it, nor does it need to do
some work afterwards.
[0]: https://developer.mozilla.org/en-US/docs/Web/API/document/execCommand
[1]: https://developer.mozilla.org/en-US/docs/Web/API/Clipboard/writeText
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The commands to add a zfs cache and log had the same description.
Differentiate them more clearly by explaining the benefit.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
We keep a DataStore cache, so ChunkStores and lock files are kept by
the proxy process and don't have to be reopened every time. However,
for specific maintenance modes, e.g. 'offline', our process should not
keep files in that datastore open. This clears the cache entry of a
datastore if it is in such a maintenance mode and the last task
finished, which also drops any files still open by the process.
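Conceptually (with hypothetical names, not the real cache type), the eviction boils down to dropping the cached entry once both conditions hold:
```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Illustrative stand-in for the real cached DataStore object, which keeps
// chunk store handles and lock files open.
struct CachedDataStore { /* open file handles, locks, ... */ }

type DataStoreCache = Mutex<HashMap<String, Arc<CachedDataStore>>>;

/// Drop the cache entry once the datastore is in a maintenance mode that
/// forbids keeping files open and the last task using it has finished.
/// Dropping the last Arc then also closes any files still held open.
fn maybe_evict(cache: &DataStoreCache, name: &str, forbids_open: bool, active_tasks: usize) {
    if forbids_open && active_tasks == 0 {
        cache.lock().unwrap().remove(name);
    }
}
```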
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Similar to a recent change in pve-access-control [0], add a new
'confirmation-password' parameter to the change-password endpoint and
require non-root users to confirm their passwords.
Doing so prevents an attacker that has direct access to a computer
where a user is logged in to the PVE interface from changing the
password of said user and thus either prolonging their possibility to
attack, and/or creating a denial-of-service situation, where the
original user cannot log in to the PVE host using their old
credentials.
Note that this might sound worse than it is, as for this attack to
work the attacker needs either:
- physical access to an unlocked computer that is currently logged in
to a PVE host
- having taken over such a computer already through some unrelated
vulnerability
As these required pre-conditions are pretty big implications, which
allow (temporary) access to all of the resources (including PVE ones)
that the user can control, we see this as a slight improvement that
won't hurt, might protect one in some specific cases, and is simply
too cheap not to do.
For now we avoid additional confirmation through a second factor, as
that adds much higher complexity without that much gain; some forms,
like an (unauthenticated) button press on a WebAuthn token or a TOTP
code, would be easy to circumvent in the physical access case, and in
the local access case one might be able to MITM it as well.
[0]: https://git.proxmox.com/?p=pve-access-control.git;a=commit;h=5bcf553e3a193a537d92498f4fee3c23e22d1741
Reported-by: Wouter Arts <security@wth-security.nl>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
[ TL: Extend commit message, squash in UI change ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
no need to keep a copy of that component here, just re-use the common
one from widget-toolkit. That one also provides some more features
that will be used here in a later commit.
Originally-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
[ TL: move switch to common widget up front ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
restoring the old code does not work since we now don't have the
components as macros anymore, so switch to concatcp for it
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Commit 2416aea8d4 accidentally removed this since they looked the
same as the ones we already have in proxmox-schema now. However, we
make use of the *capture groups* here.
Added a comment to the code to avoid this in the future.
Fixes: 2416aea8d4 ("pbs-api-types: use const_format and new api-types from proxmox-schema")
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
[ TL: condense this to something more general ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the default timeout of 30 seconds is too short to properly wait for a
slot transfer. Increase the timeout to a value of 3 minutes. In my
tests, it took about 60 seconds in a very basic changer to move a tape
between two slots, so triple that to account for bigger and more
complicated libraries.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Previously, if there was no data to pull one could get:
> Summary: sync job pulled 0 B in 0 chunks (average rate: NaN B/s)
Now one gets the following log entry in that case:
> Summary: sync job found no new data to pull
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Use the methods provided by HumanByte for the output for consistency
with the rest of the task log and better readability.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds a summary to the end of the task log showing the size and number
of chunks pulled as well as the average transfer rate.
Such an entry looks something like:
> Summary: sync job pulled 214.445 MiB in 166 chunks (average rate: 111.012 MiB/s)
Link: https://bugzilla.proxmox.com/show_bug.cgi?id=5285
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Return basic statistics on pull related methods via `PullStats`
objects, in order to construct a global summary for sync jobs.
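A simplified sketch of how such per-stage statistics can be accumulated (field names are hypothetical):
```rust
/// Statistics returned by the individual pull stages; the caller adds them
/// up to build the final task log summary.
#[derive(Default, Clone, Copy)]
pub struct PullStats {
    pub chunk_count: usize,
    pub bytes: usize,
}

impl PullStats {
    /// Merge the stats of one stage into the running total.
    pub fn add(&mut self, other: PullStats) {
        self.chunk_count += other.chunk_count;
        self.bytes += other.bytes;
    }
}
```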
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
values.username just does not exist, and we do not need to delete the
username part anyway, as that field is used to assemble the full
userid by concatenating the name@realm parts.
While at it, move this over to let-assignments and do not call setting
the expiry explicitly a hack; it's fine and warranted code, because if
one wants to use a datefield's empty value as 0, one needs to do so
explicitly, so nothing hacky there.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the api does not accept a realm property here; it is only needed to
construct a proper user id of the form `{username}@{realm}`. So remove
it before sending the request to the api, to avoid getting an error in
return.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
since that's not a valid api parameter there
we have to pass the `isCreate` value through to the input panel; we
even used it there already, but it was never set.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the prune input panel is used in various contexts (add/editing a
prunejob, adding a datastore, executing a prune). These different api
calls don't all take the same parameters, so we have to correctly set
the `isCreate` to not send a `delete` parameter for those requests if
there was an empty field.
Also set 'max-depth:0' only when recursive was not set *and* we can
set 'recursive', because creating a datastore does not support that in
the api, and for prune job editing we override the whole onGetValues
anyway, so that's not an issue there.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this is not a valid parameter for the create call. To do that in the
onGetValues method, we have to pass the 'isCreate' value through to the
input panels via cbind.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we accidentally always tried to load an existing config, even when
creating a new entry. This returned the list of all configured ones plus
the digest (which gets set by the edit window). When the digest is set,
the edit window will send it along, but that does not exist for the
create api call, so it failed.
To fix it, guard the load behind the `serverid` property, which is only
set when we edit an existing entry.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The "Connection: upgrade" header is strictly expected to be included
in the response sent by the server when an upgrade to a different
protocol is requested by the client.
A detailed explanation as well as additional context follows below.
Background
----------
Neither RFC 9110 (HTTP Semantics) [0] nor RFC 7540 (HTTP/2) [1]
*explicitly state* that the "Connection: upgrade" header must be
included *in the server's response* when a client requests an upgrade
to a different protocol. For clients, however, it is specified [2]:
> A sender of Upgrade MUST also send an "Upgrade" connection option in
> the Connection header field (Section 7.6.1) to inform intermediaries
> not to forward this field.
Yet, the example for a response provided in RFC 9110 [3] does include
the header:
> HTTP/1.1 101 Switching Protocols
> Connection: upgrade
> Upgrade: websocket
>
> [... data stream switches to websocket with an appropriate response
> (as defined by new protocol) to the "GET /hello" request ...]
The example in RFC 7540 [4] also includes the header:
> HTTP/1.1 101 Switching Protocols
> Connection: Upgrade
> Upgrade: h2c
>
> [ HTTP/2 connection ...
Additionally, RFC 9113 [5], which obsoletes RFC 7540 [1], mentions:
> The HTTP/1.1 Upgrade mechanism is deprecated and no longer specified
> in this document. It was never widely deployed, with plaintext
> HTTP/2 users choosing to use the prior-knowledge implementation
> instead.
I therefore initially concluded that whether the "Connection: upgrade"
header should / should not / must / must not be included in the
server's response was unspecified.
Further Revelations
-------------------
As per Thomas's suggestion [6], I opened a discussion over at Caddy's
GitHub issue tracker [7]. This discussion revealed that RFC 7230 [8],
which is obsoleted by RFC 9110 [0], does in fact specify that the
header must be included [9], thus proving my initial conclusion to be
incorrect:
> When a header field aside from Connection is used to supply control
> information for or about the current connection, the sender MUST
> list the corresponding field-name within the Connection header
> field. [...]
The discussion [7] also revealed that the WebSocket RFC 6455 [10]
specifies the usage of the "Connection" header in more detail [11]:
> 3. If the response lacks a |Connection| header field or the
> |Connection| header field doesn't contain a token that is an ASCII
> case-insensitive match for the value "Upgrade", the client MUST
> _Fail the WebSocket Connection_.
Furthermore [12]:
> 5. If the server chooses to accept the incoming connection, it
> MUST reply with a valid HTTP response indicating the following.
>
> [...]
>
> 3. A |Connection| header field with value "Upgrade".
Although we're using the upgrade mechanism for HTTP/2, the WebSocket
RFC [10] specifies its usage more clearly and most importantly, in an
explicit manner.
Final Conclusion
----------------
The "Connection: upgrade" header must therefore definitely be included
as per RFC 7230 section 6.1 [8], even if the newer RFC 9110 [0] does
not specify this explicitly anymore.
Finally, this fixes bug #5217 [13] and allows PBS to be deployed
behind Caddy. Also tested with nginx, which still works as expected.
[0]: https://datatracker.ietf.org/doc/html/rfc9110
[1]: https://datatracker.ietf.org/doc/html/rfc7540
[2]: https://datatracker.ietf.org/doc/html/rfc9110#section-7.8-14
[3]: https://datatracker.ietf.org/doc/html/rfc9110#section-7.8-13
[4]: https://datatracker.ietf.org/doc/html/rfc7540#section-3.2
[5]: https://datatracker.ietf.org/doc/html/rfc9113#appendix-B-2.3
[6]: https://lists.proxmox.com/pipermail/pbs-devel/2024-February/007948.html
[7]: https://github.com/caddyserver/caddy/issues/6134
[8]: https://datatracker.ietf.org/doc/html/rfc7230
[9]: https://datatracker.ietf.org/doc/html/rfc7230#section-6.1
[10]: https://datatracker.ietf.org/doc/html/rfc6455
[11]: https://datatracker.ietf.org/doc/html/rfc6455#section-4.1
[12]: https://datatracker.ietf.org/doc/html/rfc6455#section-4.2.2
[13]: https://bugzilla.proxmox.com/show_bug.cgi?id=5217
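For illustration, a hedged sketch of what such a conforming 101 response looks like when built with a hyper-0.14-style API (not the exact code used in the rest-server):
```rust
use hyper::header::{CONNECTION, UPGRADE};
use hyper::{Body, Response, StatusCode};

/// Build a 101 Switching Protocols response for an "h2c" upgrade; the
/// decisive part for proxies like Caddy is the explicit Connection header.
fn switching_protocols() -> Response<Body> {
    Response::builder()
        .status(StatusCode::SWITCHING_PROTOCOLS)
        .header(UPGRADE, "h2c")
        .header(CONNECTION, "upgrade")
        .body(Body::empty())
        .expect("valid static response")
}
```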
Signed-off-by: Max Carrara <m.carrara@proxmox.com>
While PVE and PMG use a rather brittle "replace whole config" style on
their DNS entry CRUD API, the PBS one was made with a per-entry level
granularity, so that single entries can be modified, or deleted, without
touching the others.
But the UI from the widget-toolkit was made for the older PVE/PMG
behavior and did not send along the delete-array of to-be-deleted
keys.
Since widget-toolkit commit 8d161ac ("dns: update comment to avoid
coupling to downstream dependency") the DNS edit window supports
opting into that by setting the new `deleteEmpty` config parameter.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ TL: expand commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
users that add the correct subscription key otherwise just get
unnecessarily confused by a "value does not match the regex pattern"
error if they accidentally have a stray whitespace at the beginning or
end.
Switch to using our `proxmoxtextfield` component that provides a
`trimValue` config option since widget-toolkit commit 5d7d30d ("text
field: add trimValue config") that was made just for this case.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
[ TL: reference widget toolkit commit ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Move the exclude pattern matching further up to avoid unnecessary
instantiation of the metadata object, which is not needed if the entry
was matched.
No functional change intended.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
The formulation "Keep backups for the last N intervals" might suggest
that intervals without backups also count, which they do not.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Some filesystems (e.g. zfs) support xattrs bigger than 64kB, but sadly
we can't get them because the kernel vfs limits us. The syscalls
listxattr and getxattr will return an E2BIG error in this case.
Added a flag --ignore-e2big-xattr to the client; this will ignore the
metadata (but still back up the file) if this error occurs.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Fixes the clippy lint:
```
warning: in a `match` scrutinee, avoid complex blocks or closures with blocks; instead, move the block or closure higher and bind it with a `let`
--> src/bin/proxmox-backup-proxy.rs:874:58
|
874 | let stats = match tokio::task::spawn_blocking(|| {
| __________________________________________________________^
875 | | let hoststats = collect_host_stats_sync();
876 | | let (hostdisk, datastores) = collect_disk_stats_sync();
877 | | Arc::new((hoststats, hostdisk, datastores))
878 | | })
| |_________^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#blocks_in_conditions
= note: `#[warn(clippy::blocks_in_conditions)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the clippy lint:
```
warning: `to_string` applied to a type that implements `Display` in `writeln!` args
--> src/server/report.rs:141:72
|
141 | let _ = writeln!(out, "error during read-dir - {}", err.to_string());
| ^^^^^^^^^^^^ help: remove this
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#to_string_in_format_args
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the clippy lint:
```
warning: useless conversion to the same type: `std::ffi::OsString`
--> src/tools/disks/mod.rs:1161:9
|
1161 | count_str.into(),
| ^^^^^^^^^^^^^^^^ help: consider removing `.into()`: `count_str`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#useless_conversion
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
two-letter abbreviations should only be used for things that have a very common
meaning (e.g. NS, RE, ..), not arbitrary things.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The api parameter "delete-groups" was missing on the
proxmox-backup-client command. This allows the client to remove
non-empty namespaces.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Fixes the clippy lint
```
warning: accessing first element with `self.transports.get(0)`
--> pbs-tape/src/lib.rs:283:9
|
283 | / self.transports
284 | | .get(0)
| |___________________^ help: try: `self.transports.first()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#get_first
= note: `#[warn(clippy::get_first)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
We need to annotate some cases to allow the compiler to infer the types.
Fixes the clippy lint:
```
warning: use of `or_insert_with` to construct default value
--> src/api2/tape/restore.rs:750:18
|
750 | .or_insert_with(Vec::new);
| ^^^^^^^^^^^^^^^^^^^^^^^^ help: try: `or_default()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unwrap_or_default
= note: `#[warn(clippy::unwrap_or_default)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the following clippy lint:
```
warning: using `SeekFrom::Current` to start from current position
--> src/tape/media_catalog.rs:798:23
|
798 | let pos = file.seek(SeekFrom::Current(0))?; // get current pos
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: replace with: `file.stream_position()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#seek_from_current
= note: `#[warn(clippy::seek_from_current)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes the clippy lint:
```
warning: the borrowed expression implements the required traits
--> src/server/report.rs:193:47
|
193 | get_directory_content(&path)
| ^^^^^ help: change this to: `path`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrows_for_generic_args
= note: `#[warn(clippy::needless_borrows_for_generic_args)]` on by default
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Fixes:
```
warning: redundant explicit link target
--> src/tools/mod.rs:47:42
|
47 | /// Returns a new instance of [`Client`](proxmox_http::client::Client) configured for PBS usage.
| -------- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ explicit target is redundant
| |
| because label contains path that resolves to same destination
|
= note: when a link's destination is not specified,
the label is used to resolve intra-doc links
= note: `#[warn(rustdoc::redundant_explicit_links)]` on by default
help: remove explicit link target
|
47 | /// Returns a new instance of [`Client`] configured for PBS usage.
| ~~~~~~~~~~
```
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
The idea was to limit the number of tapes in a media set, but this was
not enforced when adding a medium to a media set, only on read/parsing
the inventory. With that, it is possible to create media sets greater
than the limit which in turn blocks access to most functions via
api/cli/gui due to the check.
Instead of enforcing an arbitrary limit, simply warn on creation when
the media-set is very large (20).
To restore the whole media set, the time taken would still be at least 38
hours for LTO-4 and 250 hours for LTO-9.
We already have a section in the docs where we explain the
disadvantages of large media sets.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Allow more complex strings for the acr-value when using openid. The
openid documentation only specifies the acr-value *should* be a URI
[0]. Implemented a regex that loosely disallows some of the reserved
URI characters specified in the RFC [1].
Currently values like:
- "urn:mace:incommon:iap:silver"
- "urn:comsolve.nl:idp:contract:rba:location"
do NOT work, although they are correct URIs and common acr tokens.
For Proxmox VE we actually had to make this stricter to align the two,
as there we accepted any string.
[0]: https://openid.net/specs/openid-connect-core-1_0.html
[1]: https://www.rfc-editor.org/rfc/rfc2396.txt
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Since we additionally support declaring a "type" property for
`oneOf` schemas (to use with serde's *internally* tagged enum
representation), this contains an additional `typeProperty` and
`typeSchema` value.
It dumps as follows:
{
"type": "object",
"description": ...,
"typeProperty": "name-of-type-property",
"typeSchema": {
"type": "string",
"enum": [ ... ], // technically not enforced by the code
},
"oneOf": [
{
"title": "<value from the above 'enum' array>",
<schema>,
},
{
"title": "<value from the above 'enum' array>",
<schema>,
},
... <one for each 'enum' above>
// ^ exact match is not technically enforced by code
]
}
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
makes it a bit more readable, as there's less "noise" in the read_label
function, and the separate new fn allows us to nicely use ? for early
returns, as it has an Option in the return signature, avoiding 5 lines
of code while not really getting more terse.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since commit 1343dcaf we automatically try to load the key into the
drive after reading the media-set label, but this cannot work for the case
where we actually restore the key from the tape itself.
To address this special case while preserving the automatic key
loading, everything except the setup of the key has been separated
from the 'read_label' method into a new function named
'read_label_without_loading_key'. Consequently, the 'restore-key' API
endpoint can be switched to utilize this new method, thereby avoiding
the issue.
Fixes: 1343dcaf ("tape: move 'set_encryption' calls to the TapeDriver")
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ TL: reword and shorten commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Nightly rustc now warns about unused private fields in the case of a
non-pub newtype struct, so use an underscore-prefixed dummy field name
to get rid of the warning.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
and use renamed structs from proxmox-rrd
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
[w.bumiller@proxmox.com: squash "and use renamed structs from proxmox-rrd" as build fix]
[w.bumiller@proxmox.com: bump d/control]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
by introducing an 'assert_encryption_mode' that checks the desired
state, and bails out if it's different, called directly where we
previously set the encryption mode (which is now done automatically)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ TL: add drive_ prefix and fleece in comment ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
namely every time we know what the key for the tape has to be:
* after we write the MediaSetLabel
* after reading the MediaSetLabel
When handling data on tape, we always have to have the MediaSetLabel, so
we should always trigger one of these. Because of that, we should not be
able to forget to set the encryption mode.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
For security, we want to automatically unload the encryption key from
the drive when we're done, so there was a Drop handler for SgTape that
handles that. Sadly, the tool we use to set it in the first place also
invoked the Drop handler, thus unloading the keys again immediately.
To fix that, move the Drop handler one logical level higher to the
LtoTapeHandle, which is not used by the 'sg-tape-cmd'.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since sg-tape-cmd is only necessary if we want to load the key, we don't
have to call it when we don't have one.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of LtoTapeHandle. This way, we can simply always call the binary
from LtoTapeHandle, and don't have to concern ourselves with the sg_tape
calling.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Prepares for the use in sg-tape-cmd, since we want to use the SgTape
directly instead of LtoTapeHandle.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
rename the inner 'set_encryption' in sg_tape to drive_set_encryption,
so that it's a bit clearer where it comes from.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This already works in pve and is also possible in pbs when using the
`proxmox-backup-manager user create` command.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Use the existing key, if it's not specified, just like we do in the
PVE API.
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
let them manage it completely themselves, as we cannot really say if a
code-block fits for the whole output, as was the case for the
function that returned a limited output of a 'top' process status
command.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and add it as a hidden column. This now displays all tapes even if there
are some with identical label-texts.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
quite a few parts of our code assume that the label-text is unique in
the inventory, which leads to rather unexpected behaviour when having
more than one tape with the same label-text, e.g. a
`proxmox-tape media destroy <LABEL>`
destroys the first one in the config (same with moving to vault, etc.).
Since having multiple tapes with the same human-readable name is always
confusing, simply disallow that here
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
so we can uniquely identify the tapes with duplicate labels.
The change is intended to be backwards compatible.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
find_media_by_label_text assumes that the label-texts are unique, but
currently this is not necessarily the case. To properly handle that,
change the signature to return a result, and in case there are duplicate
ones, return an error.
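Sketch of the changed lookup semantics (with a stripped-down, hypothetical inventory entry type):
```rust
use anyhow::{bail, Result};

// Hypothetical inventory item for illustration.
struct MediaEntry {
    label_text: String,
}

/// Duplicates are an error now instead of silently picking the first match.
fn find_media_by_label_text<'a>(
    inventory: &'a [MediaEntry],
    label: &str,
) -> Result<Option<&'a MediaEntry>> {
    let mut matches = inventory.iter().filter(|m| m.label_text == label);
    match (matches.next(), matches.next()) {
        (None, _) => Ok(None),
        (Some(entry), None) => Ok(Some(entry)),
        (Some(_), Some(_)) => bail!("multiple media with label text '{label}' found"),
    }
}
```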
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
By this it becomes clear that the error stems from a parsing error when
getting the backup group owner.
See also: https://forum.proxmox.com/threads/139482/
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
as the previous commit: simply keep the previous Display impl and call
it from the new GroupFilter impl
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This checks if including and excluding works as expected. That the
filters are added out of order is on purpose, since it should make no
difference.
Signed-off-by: Philipp Hufnagl <p.hufnagl@proxmox.com>
To make the UI compatible, the Group Filter dialogue has been extended
by a second list, so it now features a list for all include filters and
one for all exclude filters.
Internally, all include as well as exclude filters are managed in one
list. The two-list view is just for a cleaner representation in the UI.
Signed-off-by: Philipp Hufnagl <p.hufnagl@proxmox.com>
After some discussion I changed the include/exclude behavior to first
run all include filters and after that all exclude filters (rather than
allowing to alternate in between). This is done by splitting them into
2 lists, running the include filters first.
A lot of discussion happened about how edge cases should be handled and
we came to the following conclusion:
no include filter + no exclude filter => include all
some include filter + no exclude filter => filter as always
no include filter + some exclude filter => include all, then exclude
Since a GroupFilter now also features a behavior, the struct has been
renamed to GroupType (since simply `type` is a keyword). The new
GroupFilter now carries the behavior as a flag 'is_exclude'.
I considered calling it 'is_include', but a reader later might not
know what the opposite of 'include' is (do not include? deactivate?). I
also considered making a new enum 'behaviour', but since there are only
2 values I considered it over-engineered.
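A minimal sketch of the resulting two-pass semantics; the `pattern` field and `group_matches()` helper are stand-ins for the real type/group/regex matching:
```rust
// Hypothetical, simplified filter type.
struct GroupFilter {
    is_exclude: bool,
    pattern: String,
}

fn group_matches(filter: &GroupFilter, group: &str) -> bool {
    // stand-in for the real matching logic
    group.contains(&filter.pattern)
}

fn apply_filters<'a>(groups: &'a [String], filters: &[GroupFilter]) -> Vec<&'a String> {
    let (excludes, includes): (Vec<_>, Vec<_>) =
        filters.iter().partition(|f| f.is_exclude);
    groups
        .iter()
        // no include filter at all means "include everything"
        .filter(|g| includes.is_empty() || includes.iter().any(|f| group_matches(f, g.as_str())))
        // the exclude pass always runs after the include pass
        .filter(|g| !excludes.iter().any(|f| group_matches(f, g.as_str())))
        .collect()
}
```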
Signed-off-by: Philipp Hufnagl <p.hufnagl@proxmox.com>
allocation length for read element status is a 3-byte field, but it
seems some changers only look at the bottom two bytes. Since we used
0x010000 for it, those changers did not return any data and the calls
failed.
To work around it, request one byte less (0xFFFF) which should still be
enough for the data, but should now work with those buggy
implementations.
Reported by a user in the forum: https://forum.proxmox.com/threads/137391/
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The gdisk package contains the `sgdisk` command, which gets used when
initializing a disk with GPT.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
instead of having it in a property string. For now this should be fine,
and if we need many more such options, we can still move them into a
property string if we want.
Also update the cli command in the docs on how to set it now.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
LTO-9 requires a bit of special handling while formatting/first use, so
document that, so nobody is surprised by this behaviour.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by converting the bool into an option, otherwise having the options not
set at all will fail the unload while deserializing with
'eject-before-unload is not optional'.
Also, if we can automatically decide this in the future, we can now
detect whether the option was explicitly set or not.
Fixes: 66402cdc ("fix #4904: tape changer: add option to eject before unload")
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
some tape libraries need the tape to be ejected from the drive before
doing an unload. Since we cannot easily detect if that's the case,
introduce an 'eject_before_unload' option.
Instead of just adding a bool flag to the config, add a new 'options'
property string where we can put such niche options similar to how we
handle the datastore tuning options.
Extend the LtoTapeHandle with 'medium_present' which just uses a
TEST UNIT READY command to check for present medium, so we don't
try to eject an already ejected tape.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we'll need more info from there in the future, so derive clone for it
and save the whole config instead of adding an additional field.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of wrapping the function body in a 'try_block', simply move the
map_err to the only call site, where we can even add more context than
in the function itself.
aside from better error output, no functional change intended.
This could help in debugging cases like this issue reported in the forum:
https://forum.proxmox.com/threads/137391/
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
starting with LTO9, a FORMAT(04h) command also reinitializes the tape,
which can take up to two hours. Since we don't actually want to do that
every time we format, use 'erase_media' when we want a fast erase.
(On a slow erase, we let it run and wait until the drive is ready
again).
Users have to pre-initialize the tapes before using them for this to
work properly, though.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of hardcoding the default timeout as the only option. This will come
in handy when we need to wait for LTO9+ initialization that can take up
to two hours.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Instead of returning -1 if we can't get the attributes, we use an
Option which will not be serialized on `None`.
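In serde terms this roughly corresponds to the following simplified sketch:
```rust
use serde::Serialize;

/// With an Option the fields are simply omitted from the JSON output
/// instead of reporting a bogus -1.
#[derive(Serialize)]
struct DatastoreStatus {
    #[serde(skip_serializing_if = "Option::is_none")]
    avail: Option<u64>,
    #[serde(skip_serializing_if = "Option::is_none")]
    used: Option<u64>,
}
```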
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Correctly display missing 'avail' and 'used' attributes in the
datastore summary. This simply sets them to 0, so that we don't get any
errors in the console.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
If the CA demands external account binding (EAB) credentials, the user
will be asked for them. If a custom directory is used, the user will be
asked if EAB should be used.
Signed-off-by: Folke Gleumes <f.gleumes@proxmox.com>
The ID_PART_ENTRY_* values describe what kind of partition this is and
thus can be used to implement the `.is_partition()` method which we
use in the next patch to avoid calling out to `lsblk`.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
previously this would always refer to the "top" namespace of the source,
instead of properly iterating over the namespace tree. adapt the trait
accordingly, since this was the only call site.
this fixes a cosmetic issue only.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the snapshot pulling code always selected the "top" namespace of the
source, instead of the passed in namespace parameter.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
without any default value in the viewModel, the resulting url would be:
`<id>?destroy-data=<value>&keep-job-configs=`
which is missing the actual value, so add the default
Fixes: e9979a1a ("ui: add 'keep configs' checkbox to datastore removal window")
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Fiona Ebner <f.ebner@proxmox.com>
to make the system load/status summary look less cramped, as it
recently got the boot-mode information line added.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add a guard clause that checks `job.remote`, otherwise the template
fails to render due to handlebars being configured in strict mode.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
by globally calling the 'status' api once and saving the fingerprint
into the global Proxmox variable.
since not all users might have that permission, ignore errors for that,
and don't show the fingerprint in this case
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this has similar functionality to the 'show fingerprint' button, but
for repository strings that are needed e.g. for the cli. The strings
are included both with and without the current user for convenience
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ TL: squash in window title rename and iconCls fix for light-mode ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Extract and display the build version and kernel
release nicely.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Return a struct with all the components of the kernel version like it
has been done in pve. Also return the legacy `kversion` to keep
backwards compat.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Shows the bootmode of the instance. Options are Legacy BIOS,
EFI, or EFI (Secure Boot).
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Added a field that shows the bootmode of the node. The bootmode is either
Legacy BIOS, EFI, or EFI (Secure Boot). To detect the mode we use the
exact same method as in pve: We check if the `/sys/firmware/efi` folder
exists, then check if the `SecureBoot-xx...` file in the `efivars`
directory has the SecureBoot flag enabled.
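As a sketch of that detection logic (the efivars file name is abbreviated here, since the real one carries the EFI vendor GUID suffix):
```rust
use std::fs;
use std::path::Path;

enum BootMode {
    LegacyBios,
    Efi,
    EfiSecureBoot,
}

fn boot_mode() -> BootMode {
    if !Path::new("/sys/firmware/efi").exists() {
        return BootMode::LegacyBios;
    }
    // efivarfs entries start with a 4-byte attribute header, the variable
    // data follows; for SecureBoot that is a single 0/1 byte.
    let secure = fs::read("/sys/firmware/efi/efivars/SecureBoot-<guid>")
        .map(|data| data.get(4) == Some(&1))
        .unwrap_or(false);
    if secure { BootMode::EfiSecureBoot } else { BootMode::Efi }
}
```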
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
ported over from pve-manager:
'pve7to8: check for proper grub meta-package for bootmode'
`67c655b9333714f31d5115de80961a2abc4b6506`
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
ported over from pve-manager: 'pve7to8: Add check for dkms modules'
`0329876ccf1d78b848897718bb0c2337c6a55fbb`
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
If stdin is a TTY, an interactive prompt is added to confirm the deletion
of a block device, ensuring user verification before proceeding.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
... since the API already accepts a boolean for that.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
[ DC: actually send the option to the api ]
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This reverts commit 3940f48c47 as it's
bogus and was already fixed on master, which is why testing this
change made it look like it was working now compared to the previous
version.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
when editing a local sync job, the field would be empty because of
this and not be set to the previously configured remote-store.
The binding is already used for the local datastore, not sure why it
should even be applied to the target where it might not even be valid.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Set `supportsWipeDisk` to true to enable the wipe button in the web
UI.
The entry for override_task_descriptions is copied from pve-manager.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
A new cli subcommand which calls the api wipe_disk function to wipe a
disk/partition with a specified dev name.
Examples:
proxmox-backup-manager disk wipe sda2
proxmox-backup-manager disk wipe sda
proxmox-backup-manager disk wipe nvme0n1p1
The complete_partition_name from tools/disks/mod.rs is used for
command completion.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
An api function similar to the PVE wipedisk function that takes a
disk/partition dev name as argument to wipe it in a new WorkerTask
thread.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
The wipe_blockdev & change_parttype functions are similar to
PVE::Diskmanage's wipe_blockdev & change_parttype functions.
The partition_by_name & complete_partition_name functions are
modified disk_by_name & complete_disk_name functions for partitions.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
The new regex is similar to BLOCKDEVICE_NAME_REGEX but also allows
numbers at the end of the device name (to also allow partition names).
For nvme partitions it also allows the letter p followed by a number.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>
and show the relevant actions. They will be forwarded to the controller,
so we can reuse that code without a big refactoring into another
class/place.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
... since the store field was cleared when the window opened.
Reported-by: Lukas Wagner <l.wagner@proxmox.com>
Fixes: 9039d6709e
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
previously when an offline key was set it wasn't verified that the
subscription was for the correct product. while pom only applies
subscriptions for the corresponding products, a user could manually
invoke the `subscription set-offline-key` command to circumvent that.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
some libraries (e.g. Qualstar) don't support the DVCID bit in the READ
ELEMENT (B8) command (to return vendor/model of connected drives), so
make that part optional if it fails. We only ever use the serial number
in the `pmtx` tool, so there is not much downside to not having this.
This increases compatibility with such libraries.
Reported in the forum:
https://forum.proxmox.com/threads/cant-query-tape-robot-status.131833/
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
... making the pull logic independent from the actual source
using two traits.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
... since the functions don't actually need to own the value.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Place hyperlinks only at the beginning of a chapter and where it makes
sense, so readers are not distracted by redundant hyperlinks
Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
Improve error message output by showing the full Error context, using
the alternate selector '{:#}" [0].
Without this, only the outermost context is displayed, which in case
of pxar extraction errors is mostly not enough to find the underlying
issue.
[0] https://docs.rs/anyhow/1.0.69/anyhow/struct.Error.html#display-representations
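A small self-contained example of the difference between the two Display selectors (the error text in the comments is illustrative):
```rust
use anyhow::{Context, Result};

fn extract() -> Result<()> {
    std::fs::read("/no/such/archive")
        .context("error extracting archive")?;
    Ok(())
}

fn main() {
    if let Err(err) = extract() {
        // "{}" prints only the outermost context, e.g. "error extracting archive"
        eprintln!("{}", err);
        // "{:#}" appends the underlying causes, e.g.
        // "error extracting archive: No such file or directory (os error 2)"
        eprintln!("{:#}", err);
    }
}
```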
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
creates a default prune job if prune-schedule is set when creating the
datastore.
Auto-generates a name for the prune job with a truncated uuid to avoid
collisions.
Prune settings used to be stored in the datastore config but had no
effect; they are not stored there anymore.
Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
pass the WorkerTaskContext to do_create_prune_job because we want
logging when calling within a worker context.
Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
support for it got added to Proxmox repositories, so there is no need to use
custom logic and manual fetching for this anymore.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Fixed a few rustdoc warnings. Converted some 'html'-links to
intra-doc-links and surrounded paths with '`'.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
as the current table format isn't really a recommended way to encode
tables for reStructuredText, and breaks various editor integrations
(and possibly parsing in the future).
Of the two supported options, i.e., csv-table and list-table, the
first one seems to be easier to maintain in the long run, so go for
that.
https://docutils.sourceforge.io/docs/ref/rst/directives.html#csv-table-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
previously, the snapshot grid returned one of three possible types of
values:
* a list of snapshots
* a list of datastores (if only whole datastores were selected)
* the string 'all' (when all snapshots were selected)
this led to some confusing and wrong code, especially the part:
```
if (source === 'all') {
source = values.store;
}
```
which basically set the selected *target* store as a source (meaning
it tried restoring a datastore with the selected target name,
regardless of whether it existed or not).
This fell through in testing, since we most often only restored to the
same datastore anyway, where the target and source name were the same.
Rework the return value to return the empty array in case all
snapshots are selected, since selecting none is not valid anyway.
This means we always get an array back, which makes the code a bit
cleaner overall.
At the same time, we now correctly differentiate the 'all selected'
case, by setting the selected target as a default target.
So instead of previously having `target=target` as datastore
parameter, we now have `target` which is the correct behavior when we
want to restore the whole media set anyway.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Mira Limbeck <m.limbeck@proxmox.com>
some of the variable names did not really tell the full story, so
extend them a bit. This makes the intention much clearer.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Mira Limbeck <m.limbeck@proxmox.com>
by counting the returned tapes and comparing the count to the sequence number.
If the tape count is lower than the highest sequence number plus one,
there must be a tape missing.
Mark it in the text and add the proxmox-warning-row class.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The option was introduced with sphinx 5.0, back then still using an
empty set as the default value; since the (for us still future) 6.0
release the default will be ['booktabs', 'colorrows'], which looks
better, so use it now already.
https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-latex_table_style
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
avoid a few ugly errors that we get here since basing off the Debian
Bookworm release, which is the first to ship a sphinx version newer
than 5.0, which removed support for allowing None as language [0]
[0]: https://www.sphinx-doc.org/en/master/changes.html#release-5-0-0-released-may-30-2022
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
With this patch it is possible to remove systemd mount units via the webui.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
note, we do not filter by *.list or *.source, so one might also get
files that apt won't read, like .dpkg-dist files or files with typos in
their name, which can be helpful when debugging things.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
allows one to render this via any of the thousands of Markdown viewers
to get better formatting.
We can switch our web UI widget to (optionally) render this as HTML
when a user is viewing it from the UI too.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This commit adds the missing "Connection: upgrade" HTTP header [1]
when requesting an upgrade to HTTP 2.
Doing so is mandated in the HTTP Semantics RFC [2], and without this,
(reverse) proxies that strictly follow the standard could potentially
break.
[1]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Upgrade
[2]: RFC 9110, 7.8. Upgrade: “[...] sender of Upgrade MUST also send
an "Upgrade" connection option in the Connection header [...]”
Reported-By: McTwist <rajb89@hotmail.com>
Signed-off-by: Max Carrara <m.carrara@proxmox.com>
[ TL: added RFC reference and use case to commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Added a lifetime to the `find` function. We need this lifetime because
of the `impl MatchList` parameter, as 'anonymous lifetimes in
`impl Trait` are unstable'.
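A minimal illustration of the underlying limitation, using a plain
iterator instead of the actual `MatchList` type: a reference with an
elided lifetime inside an `impl Trait` argument is rejected, so the
function gets an explicit `'a` instead.
```
// Writing `impl Iterator<Item = &str>` here (without `'a`) fails with
// "anonymous lifetimes in `impl Trait` are unstable".
fn find<'a>(mut haystack: impl Iterator<Item = &'a str>, needle: &str) -> Option<&'a str> {
    haystack.find(|entry| entry.contains(needle))
}
```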
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Switch from serde_json::Value to an empty tuple, to not suggest this
actually returns a value from the API other than a possible error.
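For illustration, the difference in the method signature looks roughly
like this (hypothetical function name):
```
use anyhow::Error;
use serde_json::Value;

// Before: the signature suggests the endpoint returns data, even
// though it only ever produced `null`.
fn remove_entry_old(name: String) -> Result<Value, Error> {
    let _ = name;
    Ok(Value::Null)
}

// After: the empty tuple makes it explicit that only errors are reported.
fn remove_entry_new(name: String) -> Result<(), Error> {
    let _ = name;
    Ok(())
}
```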
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When a snapshot gets deleted (forgotten), the proxmox-backup-client
currently returns
"Result: {
"data": null
}"
This output may confuse users, therefore this patch removes it.
Signed-off-by: Philipp Hufnagl <p.hufnagl@proxmox.com>
When there is no comment for a backup group, the comment of the last
(most recent) snapshot in this group will be shown as dimmed text, as
long as the backup group is collapsed.
Signed-off-by: Philipp Hufnagl <p.hufnagl@proxmox.com>
the ui shows the default 'root' namespace as target, but this only
worked when no namespace was selected. As soon as one source datastore
had a target namespace selected, the other datastores would be
skipped as there was no namespace mapping for them. To fix that, we
simply send a default namespace mapping for each source datastore
without a target (no target means 'root').
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Mira Limbeck <m.limbeck@proxmox.com>
this only existed for a week ~2 years ago [0], making those two variables empty.
0: removed in 67d00d5c0e
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the match arms let one follow the different branches more easily than
the relatively crowded if-condition they replace.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Warnings in the task log/state normally mean that the task actually
did its main job, but there was some detected (potential) issue that
the users should be made aware of. Exiting with an error code in that
case would be a bit odd.
While just exiting with success might not be the best solution either,
it's definitely more correct than a failure exit code, so go for
that for now as a stop-gap.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
[ TL: rebased on current master (v3 was already applied) and rewrite
commit message accordingly ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Use the job start-time as end-time when it's stuck in the
`JobState::Starting` state, no active worker is running and the task
log of the last run doesn't exist.
A user experienced a power loss, which left a GC job in the `Started`
state, but the task log did not exist. This broke the schedule and no
following GC runs were started. Now, the error is simply ignored and a
new GC job is started on the next occurrence.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
When walking through a datastore on a GC run, it can
happen that the snapshot is deleted, and then walked over.
For example:
- read dir entry for group
- walk entries (snapshots)
- snapshot X is removed/pruned
- walking reaches snapshot X, but ENOENT
Previously we bailed here, now we just ignore it.
Backups that are just being created (where an atomic rename from the
tmpdir happens, which might trigger an ENOENT error) are
not a problem here, the GC handles them separately.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Now we make an additional request on `api2/json/.../tasks/{upid}/status` to
get the `exitstatus` of the task. This allows us to `bail` and thus
get a non-zero exit code in the cli.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
This will show the IP address of the client creating
the backup in the logs. For example it will output:
"starting new backup on datastore 'test1' from ::ffff:192.168.1.192:
host/test/2023-08-21T07:28:10Z".
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
this sounded like we need to skip lost+found to avoid pruning too many chunks,
while the opposite is true - it's safe to skip lost+found on EPERM without
pruning too many chunks, but this is not the case for all EPERM situations.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Simply pull out the inner IO error and the affected path first.
Clean up style-wise a bit while touching this anyway, but no semantic
change intended.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the reset button only makes sense for editing existing entries,
not for creating new ones.
This brings it in line with the ZFS create window from PVE.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The match condition has gotten a bit large, and the error case is a
bit more concise with a pattern guard.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Passed a closure with the `stat()` function call to `matches()`. This
will traverse through all patterns and try to match using the path only; if a
`file_mode` is needed, it will run the closure. This means that if we exclude
a file with `MatchType::ANY_FILE_TYPE`, we will skip it without running
`stat()` on it. As we updated the `matches()` function, we also updated all
its invocations.
Added the `pathpatterns` crate to the local overrides in Cargo.toml.
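The gist of the lazy-stat idea, as a hedged sketch (simplified pattern
type and signature, not the exact `pathpatterns::MatchList` API):
```
use std::fs::Metadata;
use std::io;

// Illustrative pattern type: some patterns match any file type,
// some only directories.
enum Pattern {
    AnyFileType(String),
    DirectoryOnly(String),
}

// Match on the name first and only invoke the `get_metadata` closure
// (which would call stat()) if a pattern actually needs the file type.
fn matches(
    patterns: &[Pattern],
    file_name: &str,
    get_metadata: impl FnOnce() -> io::Result<Metadata>,
) -> io::Result<bool> {
    // first: patterns that can be decided by the name alone
    if patterns
        .iter()
        .any(|p| matches!(p, Pattern::AnyFileType(name) if name.as_str() == file_name))
    {
        return Ok(true);
    }
    // only stat() if a type-dependent pattern could still match
    if patterns
        .iter()
        .any(|p| matches!(p, Pattern::DirectoryOnly(name) if name.as_str() == file_name))
    {
        let metadata = get_metadata()?;
        return Ok(metadata.is_dir());
    }
    Ok(false)
}
```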
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
During the redesign of www.proxmox.com the menu structure and therefore
some URLs changed. Update the external link in order to avoid an
unnecessary redirect.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Adds OverwriteFlags for granular control of which entry types should
overwrite entries present on the filesystem during a restore.
The original overwrite flag is refactored in order to cover all of the
other cases.
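Roughly, such granular flags could look like this (hypothetical flag
names and bit layout, not necessarily the actual client code):
```
// Sketch of granular overwrite control for restores.
#[derive(Clone, Copy, Default)]
pub struct OverwriteFlags(u8);

impl OverwriteFlags {
    pub const FILE: Self = Self(1 << 0);
    pub const SYMLINK: Self = Self(1 << 1);
    pub const HARDLINK: Self = Self(1 << 2);
    /// the old boolean `--overwrite` flag maps to "overwrite everything"
    pub const ALL: Self = Self(0b0000_0111);

    pub fn contains(self, other: Self) -> bool {
        self.0 & other.0 == other.0
    }
}

// e.g. while restoring a symlink:
fn may_overwrite_symlink(flags: OverwriteFlags) -> bool {
    flags.contains(OverwriteFlags::SYMLINK)
}
```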
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Creating symlinks or hardlinks might fail if a directory entry with the
same name is already present on the filesystem during restore.
When the overwrite flag is given, on failure unlink the existing entry
(except directories) and retry hard/symlink creation.
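A minimal sketch of that retry logic using std (the real extractor
works at a lower level via nix/pxar, so this is only illustrative):
```
use std::fs;
use std::io;
use std::os::unix::fs::symlink;
use std::path::Path;

// If creating the symlink fails because an entry with the same name
// already exists and overwriting is allowed, remove the existing entry
// (unless it is a directory) and try again once.
fn create_symlink_with_overwrite(target: &Path, link: &Path, overwrite: bool) -> io::Result<()> {
    match symlink(target, link) {
        Ok(()) => Ok(()),
        Err(err) if overwrite && err.kind() == io::ErrorKind::AlreadyExists => {
            let metadata = fs::symlink_metadata(link)?;
            if metadata.is_dir() {
                return Err(err); // directories are not unlinked
            }
            fs::remove_file(link)?;
            symlink(target, link)
        }
        Err(err) => Err(err),
    }
}
```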
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
/sys/firmware/efi is a directory and std::path::Path seems to detect
only regular files with is_file [0].
Reported in our Enterprise support portal.
Quickly tested the fix on a VM.
[0]: https://doc.rust-lang.org/stable/std/path/struct.Path.html#method.is_file
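The fix roughly boils down to checking for existence instead of
`is_file()` (minimal illustration, function name assumed):
```
use std::path::Path;

// /sys/firmware/efi is a directory, so `is_file()` reports false even
// on an EFI system; checking for existence (or `is_dir()`) is what we
// actually want here.
fn booted_with_efi() -> bool {
    Path::new("/sys/firmware/efi").exists()
}
```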
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
LTO-9 tapes have a new density code, which leads to these tapes not
being recognized properly. Add the new density code to TapeDensity to
improve LTO-9 support. Since the documentation states that we support
LTO-5 and above, this constitutes a bug fix for LTO-9 support.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
the wrong info here was rather misleading, especially when encountering errors
that just talk about "blobs" when the actual problem is with a chunk.
chunks did originally have their own magic values, but that got removed in
4ee8f53d07 "remove DataChunk file format - use DataBlob instead"
back in 2019, before the 0.1.1 release(!)
Reported-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Commit d97ff8ae ("use new auth api crate") moved all auth-related code
into its own crate inside the `proxmox` repo, including this file. Thus
drop it here, it's not even included in the compile.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
The crates `proxmox-apt` and `proxmox-openid` have been moved to the `proxmox`
workspace. Adjusted the path in the Cargo.toml file.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
this actually affected the matcher's ability to differentiate between directory
and file patterns, and the alternative would require matching patterns twice
for full coverage, so let's try a different approach altogether.
This reverts commit c8ed10095d.
When executing `proxmox-backup-client backup ...
--exclude "test/test.txt"` it still executed stat() on "test.txt",
which won't work when the current user doesn't have access to the
file or the parent folder. Now we check if the file is excluded,
and if it is not, then we execute stat().
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
since `totp_locked` is not wrapped in an `Option` we need to
explicitly tell serde about its default
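For illustration, the relevant attribute looks like this (field shown
in isolation, struct name is made up):
```
use serde::Deserialize;

#[derive(Deserialize)]
struct UserWithTfa {
    // not an Option, so deserialization would fail if the field is
    // missing; `default` tells serde to fall back to bool::default()
    // (i.e. false) instead
    #[serde(default)]
    totp_locked: bool,
}
```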
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
We check if the manifest contains an index for the requested archive; if
it does not, we avoid downloading it and report a more helpful error
message.
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Having commonly used device paths (like /dev/sdb) in an example
command may cause damage if the user simply copies them without
checking. With a pseudo device path (like /dev/sdX), they would simply
get an error.
Signed-off-by: Philipp Hufnagl <p.hufnagl@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>
If this flag is provided, any errors that occur during the extraction
of a device node are silently ignored.
Signed-off-by: Max Carrara <m.carrara@proxmox.com>
This enum's purpose is to provide context to errors that occur during
the extraction of a pxar archive, making it possible to handle
extraction errors in a more granular manner.
For now, it's only implemented in `ExtractorIter::next()`, but may be
used in other places if necessary or desired.
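A hedged sketch of what such a context enum could look like (variant
names are illustrative, not the actual pbs-client type):
```
/// Tells an error handler *where* during extraction an error occurred,
/// so it can decide whether the error is fatal.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum ErrorContext {
    /// error while reading the next pxar entry
    EntryRead,
    /// error while creating the file/directory/device node itself
    EntryCreate,
    /// error while applying metadata (ownership, mode, xattrs, ...)
    Metadata,
}

// A handler can then decide per context, e.g. to keep going when
// device-node creation fails and the user asked to ignore that:
fn is_fatal(context: ErrorContext, ignore_device_errors: bool) -> bool {
    match context {
        ErrorContext::EntryCreate if ignore_device_errors => false,
        _ => true,
    }
}
```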
Signed-off-by: Max Carrara <m.carrara@proxmox.com>
This change factors the body of `extract_archive()` into a separate
struct named `ExtractorIter` which implements the `Iterator` trait.
This refactor has two goals:
* Make it easier to provide and propagate errors and additional
information via `anyhow::Context`
* Introduce a means to handle errors that occur during extraction,
with the possibility to continue extraction if the handler decides
that the error is not fatal
The latter point benefits from the information provided by the former;
previously, errors could only be handled in certain locations
(e.g. application of metadata), but not on a "per-entry" basis.
Since `extract_archive()` was already using a "desugared" version of
the iterator pattern to begin with, wrapping its body up in an actual
`Iterator` made the most sense, as it didn't require changing the already
existing control flow that much.
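A greatly simplified skeleton of that shape (stand-in types, not the
real pbs-client code): each `next()` call extracts one entry and yields
its result, so callers can decide per entry whether to continue.
```
use anyhow::Error;

struct ExtractorIter<I> {
    entries: I,
}

impl<I> Iterator for ExtractorIter<I>
where
    I: Iterator<Item = Result<String, Error>>, // stand-in for pxar entries
{
    type Item = Result<(), Error>;

    fn next(&mut self) -> Option<Self::Item> {
        let entry = match self.entries.next()? {
            Ok(entry) => entry,
            Err(err) => return Some(Err(err)),
        };
        // extract `entry` here; the real code creates files, directories,
        // sockets, device nodes, ... and applies their metadata
        let _ = entry;
        Some(Ok(()))
    }
}
```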
Signed-off-by: Max Carrara <m.carrara@proxmox.com>
In order to preserve the source(s) of errors, `anyhow::Context` is
used instead of propagating errors via `Result::map_err()` and / or
`anyhow::format_err!()`.
This makes it possible to access e.g. an underlying `io::Error` or
`nix::Errno` etc. that caused an execution path to fail.
Certain usages of `anyhow::bail!()` are also changed / replaced
in order to preserve context.
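The pattern, sketched with a hypothetical helper: `.with_context()`
wraps the error instead of formatting it away, so callers can still
downcast to the original `std::io::Error` if they need to.
```
use std::fs::File;
use std::path::Path;

use anyhow::{Context, Result};

fn open_archive(path: &Path) -> Result<File> {
    File::open(path).with_context(|| format!("failed to open {:?}", path))
}

fn is_not_found(err: &anyhow::Error) -> bool {
    err.downcast_ref::<std::io::Error>()
        .map(|io_err| io_err.kind() == std::io::ErrorKind::NotFound)
        .unwrap_or(false)
}
```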
Signed-off-by: Max Carrara <m.carrara@proxmox.com>
the debug representation of a repository
'BackupRepository { auth_id: Some(Authid { user: Userid { data: "test@pbs", name_len: 4 }, tokenname: None }), host: Some("127.0.0.1"), port: None, store: "tank" }'
is rather verbose and unreadable, so use the plain one
'test@pbs@127.0.0.1:8007:tank'
instead.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
"Commandline", "command line" & "command-line" were being used
interchangeably, which is not correct use command-line when it is an
adjective (e.g. "command-line interface") and use command line when
it is a noun (e.g. "change the setting from the command line")
Signed-off-by: Noel Ullreich <n.ullreich@proxmox.com>
[T: fix typos in commit message and reflow ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
by adding the 'totp-locked' column to the model.
A diff store can only know if a column has changed if the column is
defined in the model, otherwise it'll only load it the first time
(when 'load' is called on the diff store).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Like in PVE.
This means that /access/users is now a 'protected' call to
get access to 'tfa.cfg'.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
this commit makes the ldap realm endpoints check whether a new or
updated configuration works correctly. it uses the new
`check_connection` function to make sure that a configuration can be
successfully used to connect to and query an ldap directory.
doing so allows us to remove the ldap domain regex. instead of relying
on a regex to make sure that a given distinguished name (dn) could be
correct, we simply let the ldap directory tell us whether it accepts
it. this should also aid with usability as a dn that looks correct
could still be invalid.
this also implicitly removes unauthenticated binds, since the new
`check_connection` function does not support those. it will simply
bail out of the check if a `bind_dn` but no password is configured.
therefore, this is a breaking change.
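as a rough illustration of what such a connection check does (sketched
here with the `ldap3` crate and made-up parameter names; not the actual
proxmox implementation): bind with the configured credentials and run a
trivial query, bailing if only a bind_dn but no password is set.
```
use anyhow::{bail, Context, Error};
use ldap3::{LdapConn, Scope};

fn check_connection(
    url: &str,
    base_dn: &str,
    bind_dn: Option<&str>,
    password: Option<&str>,
) -> Result<(), Error> {
    let mut ldap = LdapConn::new(url).context("could not connect to LDAP server")?;
    match (bind_dn, password) {
        (Some(dn), Some(pw)) => {
            ldap.simple_bind(dn, pw)
                .context("LDAP bind failed")?
                .success()
                .context("LDAP bind was rejected")?;
        }
        (Some(_), None) => bail!("bind-dn is configured, but no password is set"),
        _ => {} // anonymous access, just try the search below
    }
    ldap.search(base_dn, Scope::Base, "(objectClass=*)", vec!["*"])
        .context("LDAP search failed")?
        .success()
        .context("LDAP search was rejected")?;
    Ok(())
}
```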
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
but fall back to 'eslint' otherwise
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[T: move into www/manager Makefile directly]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
From rust-lang:
> Why is this bad?
>
> First, it’s more complex, involving two calls instead of one. Second,
> Box::default() can be faster in certain cases.
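The suggestion boils down to this (illustrative type):
```
#[derive(Default)]
struct Config {
    verbose: bool,
}

fn main() {
    // instead of the two nested calls ...
    let a: Box<Config> = Box::new(Config::default());
    // ... a single call does the same and can be faster in certain cases:
    let b: Box<Config> = Box::default();
    let _ = (a, b);
}
```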
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
The function will always be called. This is only bad if it allocates or does some non-trivial amount of work.
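This quoted rationale reads like clippy's `or_fun_call` lint; as a
rough illustration of the eager-vs-lazy difference (made-up example,
not the actual change):
```
fn expensive_default() -> String {
    // imagine an allocation or other non-trivial work here
    "default".repeat(1024)
}

fn pick(value: Option<String>) -> String {
    // `unwrap_or(expensive_default())` would evaluate its argument
    // eagerly, i.e. even when `value` is Some; the closure form is lazy.
    value.unwrap_or_else(expensive_default)
}
```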
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Adds the commands
proxmox-backup-manager user tfa list <userid>
proxmox-backup-manager user tfa delete <userid> <id>
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
In the next commit we expose a command to list the TFA methods of a
user. Without this annotation one would get the following error when
running the proposed CLI command:
"unable to format result: got unexpected data (expected null)".
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
during some tests recently I wondered why a debug log message was not
printed, despite running with PBS_QEMU_DEBUG.
This patch sets the log level for the CLI logger to debug if the
variable is present and non-empty (see qemu_helper.rs for the other
usage).
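Sketch of the intended behavior (exact logger setup code differs):
```
// bump the CLI log level to "debug" whenever PBS_QEMU_DEBUG is set to
// a non-empty value
fn cli_log_level() -> &'static str {
    match std::env::var("PBS_QEMU_DEBUG") {
        Ok(value) if !value.is_empty() => "debug",
        _ => "info",
    }
}
```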
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
zfs_arc_min was raised to 32M (for Linux) in ZFS commit
121b3cae742a0670d902a51bc61d49dc4a3e4445.
While the current logic would still set the min_size to 32M (it's
max(32M, allmem/32), which results in 32M for memory sizes up to
1024M), setting it explicitly to the minimum makes it clear, and it
will still be kept should the restore VM have more than 1G of memory at
some point.
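The described sizing logic, as a small sketch (function name assumed):
```
const MIB: u64 = 1024 * 1024;

// ZFS' lower bound is 32 MiB; allmem/32 only exceeds that once the
// restore VM has more than 1 GiB of RAM.
fn zfs_arc_min_bytes(total_memory: u64) -> u64 {
    (total_memory / 32).max(32 * MIB)
}
```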
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Currently the values set for zfs_arc_min and zfs_arc_max are ignored
by the kernel:
```
Unknown kernel command line parameters... will be passed to user space
```
module parameters provided on the command line usually need to be
prefixed with the module name (e.g. zfs.zfs_arc_min, see [0] for a bit
of related information (the issue itself is not related)).
Paradoxically, ZFS currently prints spurious warnings about
settings being ignored when they are actually set - see [1].
Booting the debug image and connecting the shell on the serial console
confirmed that the values did not seem to be set:
`grep '^c_' /proc/spl/kstat/zfs/arcstats` showed half of the memory
for c_max.
[0] https://github.com/openzfs/zfs/issues/698
[1] https://github.com/openzfs/zfs/issues/12504
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
We recently took into account the selected datastore when restoring
from tape, but the snapshot grid's value may not only be a single
datastore, it can also be a list of snapshots, datastores or 'all'.
Handle these cases and extract the source datastore correctly.
This fixes tape restoration when not a whole datastore is selected.
Reported in the forum:
https://forum.proxmox.com/threads/restore-from-lto-parameter-verification-errors-store.128445
Fixes: df881ed0 ("ui: tape: fix restoring a single datastore")
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
previously the build process was broken for some versions of `awk`
(most notably `mawk`) as they did not understand the shorthand `\s`
notation for matching whitespace. use the more universal and more
explicit `[[:space:]]` instead.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
most had no (significant) change, but were bumped to provide some
version space for future stable-2 updates without clashing with
future master
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
:http-proxy: Set proxy for apt and subscription checks.
:email-from: Fallback email from which notifications will be sent.
:ciphers-tls-1.3: List of TLS ciphers for TLS 1.3 that will be used by the proxy. Colon-separated and in descending priority (https://docs.openssl.org/master/man1/openssl-ciphers/). (The proxy has to be restarted for changes to take effect.)
:ciphers-tls-1.2: List of TLS ciphers for TLS <= 1.2 that will be used by the proxy. Colon-separated and in descending priority (https://docs.openssl.org/master/man1/openssl-ciphers/). (The proxy has to be restarted for changes to take effect.)
:default-lang: Default language used in the GUI.
:description: Node description.
:task-log-max-days: Maximum days to keep task logs.
label = "<fv>FORMAT_VERSION\l|PRELUDE\l|<f0>ENTRY\l|\{XATTR\}\* extended attribute list\l|\{ACL_USER\}\* USER ACL entries\l|\{ACL_GROUP\}\* GROUP ACL entries\l|\[ACL_GROUP_OBJ\] the ACL_GROUP_OBJ \l|\[ACL_DEFAULT\] the various default ACL fields\l|\{ACL_DEFAULT_USER\}\* USER ACL entries\l|\{ACL_DEFAULT_GROUP\}\* GROUP ACL entries\l|\[FCAPS\] file capability in Linux disk format\l|\[QUOTA_PROJECT_ID\] the ext4/xfs quota project ID\l|{<pl> PAYLOAD_REF|SYMLINK|DEVICE|{<de> \{DirectoryEntries\}\*|GOODBYE}}"