When --compare-content is set, the command will compare the
file content instead of relying on mtime to detect modified files.
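A minimal sketch of the idea in Rust (not the actual pxar diff code; names here are illustrative):
```
use std::fs;
use std::io;
use std::path::Path;

// Decide whether `new` counts as modified relative to `old`, either by
// comparing mtimes or, with --compare-content, the full file content.
fn is_modified(old: &Path, new: &Path, compare_content: bool) -> io::Result<bool> {
    if compare_content {
        // read both files and compare the bytes
        Ok(fs::read(old)? != fs::read(new)?)
    } else {
        // fall back to the mtime heuristic
        Ok(fs::metadata(old)?.modified()? != fs::metadata(new)?.modified()?)
    }
}
```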
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
This commit enriches the output of the `diff archive` command,
showing pxar entry type, mode, uid, gid, size, mtime and filename.
Attributes that changed between both snapshots are prefixed
with a "*".
For instance:
$ proxmox-backup-debug diff archive ...
A f 644 10045 10000 0 B 2022-11-28 13:44:51 add.txt
M f 644 10045 10000 6 B *2022-11-28 13:45:05 content.txt
D f 644 10045 10000 0 B 2022-11-28 13:17:09 deleted.txt
M f 644 10045 *29 0 B 2022-11-28 13:16:20 gid.txt
M f *777 10045 10000 0 B 2022-11-28 13:42:47 mode.txt
M f 644 10045 10000 0 B *2022-11-28 13:44:33 mtime.txt
M f 644 10045 10000 *7 B *2022-11-28 13:44:59 *size.txt
M f 644 *64045 10000 0 B 2022-11-28 13:16:18 uid.txt
M *f 644 10045 10000 10 B 2022-11-28 13:44:59 type_changed.txt
Also, this commit ensures that we always show the *new* type.
Previously, the command showed the old type if it was changed.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
tapes that are labeled into a pool but are not in a media-set yet belong
to the special 'all zero' media-set. These will never have a catalog on
them, so skip them.
Fixes the issue that an inventory with 'catalog restore' aborted on
such a tape.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
a tape assigned to a pool but not to a media-set gets the special 'all
zero' media set in its MediaSetLabel. Instead of having that constant
scattered all over the code, hide this fact behind wrapper functions
that initialize it that way and check for it.
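A minimal sketch of such wrapper functions, assuming a plain 16-byte uuid representation (the real code uses its own Uuid/MediaSetLabel types and the names here are illustrative):
```
#[derive(Clone, Copy, PartialEq, Eq)]
pub struct MediaSetUuid([u8; 16]);

impl MediaSetUuid {
    /// the special 'all zero' media-set used for tapes assigned to a
    /// pool but not yet part of a real media-set
    pub fn unassigned() -> Self {
        MediaSetUuid([0u8; 16])
    }

    /// check whether this is the special 'all zero' media-set
    pub fn is_unassigned(&self) -> bool {
        self.0 == [0u8; 16]
    }
}
```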
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we can still do that as notifications for prune jobs weren't released
yet.
We may want to evaluate whether to adapt (some) other notification
types too on the next major release.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
under some conditions, the smartctl exit code has bit 2 set even if the
smartctl call succeeded but only produced e.g. some warnings derived
from the attributes.
We do the same in pve, but it is only the first step in fixing #4353,
since we should probably parse the smartctl output better to include
such warnings.
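A minimal sketch of the idea, not the actual PBS helper: treat only the hard-error bits of the exit code as fatal and ignore informational bits such as bit 2 mentioned above.
```
// smartctl(8) encodes its exit status as a bit field; bits 0 and 1 signal
// command-line or device-open errors, the higher bits are mostly
// informational (see the commit message above).
fn smartctl_is_fatal(exit_code: i32) -> bool {
    exit_code & 0b11 != 0
}
```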
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The API now exposes the field 'available' as well, with which the
unprivileged total is calculated in all corresponding views in the
frontend.
The rrd charts now also display the total as the unprivileged total
if available, otherwise the absolute total is used.
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
The rrd data now includes tracking the available field in disk usage.
The calculation for the estimated_time_full was adapted to use the
total for the unprivileged user, which is the sum of used + available.
The total for unprivileged users is preferable because datastores are
always written to by the backup user, which means that any storage
space reserved for root is unusable for our purposes.
To avoid resetting the estimate when switching to this new version,
the backend will try to use the available value to calculate the
unprivileged total. When that is not an option, it will fall back to
using the absolute total.
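A minimal sketch of that fallback logic (field names are illustrative):
```
// Prefer the unprivileged total (used + available) and fall back to the
// absolute total when 'available' is not known, e.g. for old rrd data.
fn unprivileged_total(used: u64, available: Option<u64>, absolute_total: u64) -> u64 {
    match available {
        Some(avail) => used + avail,
        None => absolute_total,
    }
}
```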
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
The read_tasklog API call now streams the whole log file if the query
parameter 'download' is set to true. If the limit parameter is set to
0, all lines of the task log are returned in json format.
To make a file stream and a json response in the same API call work, I
had to use one of the lower level apimethod types from the
proxmox-router. Therefore, the routing declarations and parameter
schemas have been changed accordingly.
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
This new subcommand compares a pxar archive in two different
snapshots and prints a list of added/modified/deleted file
entries.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
With the old code the rate limit parameters got passed in their own
dictionary under the limit key, but the API expects the rate-limit
settings as top-level keys. This commit correctly sets the rate-limit
parameters so the API actually uses them.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
currently, we don't (f)sync on chunk insertion (or at any point after
that), which can lead to broken chunks in case of e.g. an unexpected
power loss. To fix that, offer a tuning option for datastores that
controls the level of syncs to perform (see the sketch after the list):
* None (default): same as the current state, no (f)syncs done at any point
* Filesystem: at the end of a backup, the datastore issues
a syncfs(2) on the filesystem of the datastore
* File: issues an fsync on each chunk as it gets inserted
(using our 'replace_file' helper) and an fsync on the directory handle
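A minimal sketch of the new tuning option as a Rust enum (the actual api type in the code base may be named and derived differently):
```
#[derive(Clone, Copy, Debug, Default)]
pub enum DatastoreFSyncLevel {
    /// no (f)syncs at any point (current behaviour)
    #[default]
    None,
    /// issue a syncfs(2) on the datastore's filesystem at the end of a backup
    Filesystem,
    /// fsync each chunk on insertion plus an fsync on the directory handle
    File,
}
```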
a small benchmark showed the following (times in mm:ss):
setup: virtual pbs, 4 cores, 8GiB memory, ext4 on spinner
size                  none    filesystem  file
2GiB (fits in ram)    00:13   00:41       01:00
33GiB                 05:21   05:31       13:45
so if the backup fits in memory, there is a large difference between all
of the modes (expected), but as soon as it exceeds the memory size,
the difference between not syncing and syncing the fs at the end becomes
much smaller.
I also tested on an NVMe, but there the syncs basically made no difference.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
in a disaster recovery case, it is useful to not only re-inventorize
the labels + media-sets, but also to try to recover the catalogs
from the tape (to know what's on there). This adds an option to
the inventory api call that tries to do a fast catalog restore
from each tape to be inventorized.
also sets the correct default for 'read-all-labels' in the api and
converts to a bool
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this way we can omit the pattern
```
let status_path = Path::new(TAPE_STATUS_DIR);
some_function(status_path);
```
and pass TAPE_STATUS_DIR directly. In some instances we now have to
spell out TAPE_STATUS_DIR more often, but most often we save a few
intermediary Paths.
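For illustration, a call site then simply becomes (assuming `some_function` from the pattern above accepts anything convertible to a Path):
```
some_function(TAPE_STATUS_DIR);
```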
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we converted the prune settings of datastores to prune-jobs, but did
not actually implement the notifications for them, even though
we had the notification options in the gui (they did not work).
implement the basic ok/error notification for prune jobs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
and use self.inventory_path. This is only used internally (not pub) so there
is no need to have it as a static function.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This makes it consistent with the naming scheme in PVE/GUI.
Keep value for API stability reasons, and remove it in next major version.
Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
no real change for PBS usage - the ApiHandler enum is marked
non_exhaustive now because it has extra values if the new (enabled by
default) "server" feature is enabled.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Some of them could easily be grouped in a kind of
RestoreWorker struct, but that'll still leave one bigger
function that's more annoying to change.
Let's just allow it for now.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
In the auth code we rather #[allow] the binding, because in
this case we explicitly want to assert the type.
In fact, it would make more sense for clippy to not warn
about a unit type if the unit type is explicitly spelled
out.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
the remaining ones are:
- type complexity
- fns with many arguments
- new() without default()
- false positives for redundant closures (where closure returns a static
value)
- expected vs actual length check without match/cmp
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
key location is now in a single place; a missing key and no signature
are not fatal anymore.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The underlying issue seems to be the case where the thread that runs
the IO driver is polling its own tasks; while that happens, the IO
driver/poller won't run and thus work stealing won't happen, meaning
that idle and parked threads will stay parked even if there's
pending work they could do.
A promising solution for tokio is proposed in its issue tracker [0],
but it isn't implemented yet. So, as a stop gap, spawn a separate
thread that periodically spawns a no-op ready future in the runtime,
which would unpark a worker in the aforementioned case and thus
should break the bogus idleness. Choose a 3s period for that without
any overly elaborate reasoning; our main goal is to ensure we accept
incoming connections, and 3s is well below an HTTP timeout and leaves
some room for high network latencies while not causing too many
additional wakeups on systems that are really idling.
[0]: https://github.com/tokio-rs/tokio/issues/4730#issuecomment-1147975074
Link: https://github.com/tokio-rs/tokio/issues/4730
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
not much value in waiting an extra minute; that doesn't really
guarantee better scheduling (as in, less impact on startup).
Dropping that also makes it easy to drop the counter, by just moving
the sleep to the beginning of the loop.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This is a stop-gap measure to prevent snapshot listing from
blocking the main async worker threads as it can potentially
do a *lot* of I/O.
Ideally we'll move to a proper streaming API, but this will
be an API break.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
when snapshots vanish during tape backup, we skip them. Until now,
we also warned with the error and failed the task at the end.
Since deleting snapshots during tape backup does not really interfere
with it, don't fail the whole task, and only add a log line that it
was skipped.
To differentiate from other errors (e.g. permission problems),
introduce a 'SnapshotBackupResult' which is returned by 'backup_snapshot'.
Also remove the 'pub' there since we don't want to leak the
SnapshotBackupResult type and it's not used anywhere outside this file.
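A minimal sketch of such a result type (illustrative; the actual variants may differ):
```
enum SnapshotBackupResult {
    /// snapshot was backed up to tape
    Success,
    /// snapshot vanished while the tape backup was running -> only log it
    Ignored,
    /// a real error (e.g. permission problem) -> fail the task at the end
    Error,
}
```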
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
the superuser's email will be used to notify them that certificate
renewal has failed.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
but in contrast to pve, we split the api by type of the section config,
since we cannot handle multiple types in the updater
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
and keep the data as similar as possible to pve (tags/fields)
datastores get their own 'object' type and reside in the "blockstat"
measurement
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
that way we can reuse the stats gathered
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
No semantic change intended. IMO the interface of "both a datastore
and NS mapping must be present" is still a bit weird, at least in how
it's used here to decide what to skip and what not; maybe we can
implement this in a clearer way (or maybe I'm just overlooking
something that makes it clearer as is).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Use the moved 'fs_info' helpers from the proxmox-sys crate (available
from there since proxmox-sys 0.3.0) as replacement for 'disk_usage'
in the workspace local tools crate and remove the latter as we do not
need it anymore.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ T: squashed in removal of now unused import and reworded commit
message to include version availability info, among other things ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
avoid assembling a hash mapping of namespaces only to not use it,
i.e., throw it away then anyway
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The split out helpers will (partially) be used in later patches for
call sites where we only need parts of the info assembled here.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
not wanting to play code golf here, but bloat in code often also makes
it harder to read, so try to reduce some of that without making it
too terse.
No semantic change intended.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Make the assumption that if a user has any privilege that would make
an NS and (parts of) its content visible, they should also be able to
know about the datastore and very basic errors on lookup (path
existence and maintenance mode), even if that NS doesn't even exist
(yet), as they could, e.g., make or view a backup and find out
anyway.
This avoids iterating over parts of the whole datastore folder tree
on disk and doing a priv check on each; swapping IO for virtual
in-memory checks on info we already have available is always a good
idea after all.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to avoid the problematic opening of a fresh datastore with a fresh
chunk store and, that's the actual problematic part, a fresh process
locker. The latter uses posix record locks, which are pretty dangerous
as they operate on a path level (not FD level), and thus closing any
file opened on that path (even if it wasn't opened for locking at all)
drops all active locks on the same file on completely unrelated file
descriptors -.-
Also, using no operation wasn't exactly correct for this thing in the
first place, but we cannot use Operation::Lookup either, as we're
currently indeed using a rather stupid-simple way and *are* reading.
So until we optimize this to allow querying the AclTree if there's
any priv XYZ below a path, use Operation::Read.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This endpoint only lists all accessible namespaces, and one doesn't
necessarily need to have permissions on the parent itself just to
have OK ACLs on namespaces deeper down.
So, drop the upfront check on the parent, but explicitly avoid leaking
whether a NS exists or not, i.e., only do so if they got access on the
parent NS.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
I.e., for those that only got permissions on a sub namespace and
those that only got BACKUP_READ, as both could just list and
count themselves too after all, so this is not exactly secret info.
The UI needs some adaptions to cope with gc-stats and usage being
optional; that will be done in a next commit.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we can now use it for the error case and will further use it for the
can access namespace but not datastore case in a future patch
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
instead move the acl_path helper to BackupNamespace, and introduce a new
helper for printing a store+ns when logging/generating error messages.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
these all contain the path in the error message already, so no (new)
potential for leakage..
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
it includes the path, which might be helpful when users are switching to
using namespaces. datastore and namespace lookup happens after, so this
doesn't leak anything.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
instead of doing a manual lookup and check - this changes the returned
error slightly since check_privs will include the checked ACL path, but
that is okay here, checks are before we even lookup the namespace/store,
so no chance to leak anything.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
where appropriate. these should never leak anything sensitive, as we
check privs before checking existence or existence is already known at
that point via other privileges.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
no redundant store+namespace mapping, and synchronize namespace creation
check with that of manual creation and creation as part of sync.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
syncing to a namespace only requires privileges on the namespace (and
potentially its children during execution).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the namespace is optional, but should be captured to allow ACL checks
for unprivileged non-job-owners.
also add FIXME for other job types and workers that (might) need
updating.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
replacing them with chunks of zero bytes.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Hannes Laimer <h.laimer@proxmox.com>
instead of a string. The underlying catalog implementation has to
care about how this is formatted, not the external caller
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
use the relatively new variant of ListAccessibleBackupGroups to also
allow pruning the groups that one doesn't own but has the respective
privileges on their namespace level.
This was previously handled by the API endpoint itself, which was ok
as long as only one level was looked at.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Not only check all owned backup groups, but also all that an auth_id
has DATASTORE_AUDIT or DATASTORE_READ on the whole namespace.
best viewed with whitespace change ignore (-w)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The "owner override" privs will skip the owner check completely if
the authid has a permission for any of the bitwise OR'd privs
requested on the namespace level.
The "owner and privs" are for the case where being the owner is not
enough, e.g., pruning, if set they need to match all, not just any,
on the namespace, otherwise we don't even look at the groups from the
current NS level.
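A minimal sketch of how these two priv sets interact (names and the bit-flag representation are illustrative):
```
fn may_use_group(
    user_privs: u64,
    owner_override: u64,
    owner_and_privs: u64,
    is_owner: bool,
) -> bool {
    if user_privs & owner_override != 0 {
        // any "owner override" priv skips the owner check completely
        return true;
    }
    // otherwise the user must own the group *and* hold all of the
    // "owner and privs" bits on the namespace level
    is_owner && (user_privs & owner_and_privs) == owner_and_privs
}
```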
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This reverts commit 7a1a5d206d.
We could already cause that behavior by simply setting ignore-verified
to false, as that flag is basically an on/off switch for even
considering outdated-after or not.
So avoid the extra logic and just make the gui use the previously
existing way.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
properly encode the namespace as separate field both for manual prunes
and the job. fix the access checks as well now that the job doesn't use
the jobid as workerid anymore.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
after all we couldn't delete all that got requested; ideally this
should become a task where we can log what got deleted and what
not...
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
And make that opt-in in the API endpoint, to avoid bad surprises by
default.
If not set we'll only prune empty namespaces.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
by adding the 'default' serde hint and renaming 'recursion_depth' to
'max_depth' (to be in line with sync job config)
also add the logic to actually add/update the tape backup job config
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the snapshot string format is not backwards compatible since it now has
an in-line namespace prefix. it's possible to select which magic to use
at the start of the backup, since a tape backup job knows whether it
operates on non-root namespaces up-front.
the MediaCatalog itself also has a similar incompatible change, but
there
- updating existing catalogs in-place
- not knowing what the catalog will contain in the future when initially
creating/opening it
make bumping the magic there harder. Since the tape contents are
sufficiently guarded by the other two bumps, ignoring the
backwards-incompatible change of the on-disk catalogs seems like an
okay tradeoff.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
by adding a new parameter 'namespaces', which contains a mapping
for a namespace like this:
store=datastore,source=foo,target=bar,max-depth=2
if source or target is omitted, the root namespace is used for its value
this mapping can be given several times (on the cli) or as an array (via
api) to have mappings for multiple datastores
if a specific snapshot list is given simultaneously, the given snapshots
will be restored according to this mapping, or to the source namespace
if no mapping was found.
to do this, we reuse the restore_list_worker, but change it so that
it does not hold a lock for the duration of the restore and instead
fails at the end if the snapshot already exists. Also, the snapshot
will now be temporarily restored into the '.tmp/<media-set-uuid>'
folder of the target datastore.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
checks the privileges for the target namespace. If it does not exist,
try to recursively create it while checking the privileges.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
both used the 'Display' trait of pbs_datastore::BackupDir, which is not
intended to be serialized anywhere. Instead, manually format the path
using the print_ns_and_snapshot helper, and conversely, parse with
'parse_ns_and_snapshot'. to be a bit safer, change the register_snapshot
signature to take a BackupNamespace and BackupDir instead of a string.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when continuing a media set, we first move to the end of the tape and
start with the next (chunk) archive. If that takes long, the task
log's last line is 'moving to end of media' even if we already started
writing. To make this less confusing, log that we arrived at the
end of the media.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
to handle the unlikely case of `ns` being deeper than `remote-ns`,
`max-depth` being set to `None` and a too-deep sub-ns of `ns` existing.
such a sub-ns cannot have been created by a previous run of this sync
job, so avoid unexpectedly removing it.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
into the regular one (with default == MAX) and the one used for
pull/sync, where the default is 'None' which actually means the remote
end reduces the scope of sync automatically (or, if needed,
backwards-compat mode without any remote namespaces at all).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
and use it when creating a sync job, and simplify the check on updating
(only check the final, resulting config instead of each intermediate
version).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
else (grand)-parents and siblings/cousins of remote-ns are also
included, and mapping the remote-ns prefix fails.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
and fall back to only syncing the root namespace, if possible. the sync
job will still be marked as failed to prompt the admin to resolve the
situation:
- explicitly mark the job as syncing *only* the root namespace
- or upgrade remote end to support namespaces
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
There is a race upon reload, where it can happen that:
1. systemd forks off /bin/kill -HUP $MAINPID
2. Current instance forks off new one and notifies systemd with the
new MAINPID.
3. systemd sets new MAINPID.
4. systemd receives SIGCHLD for the kill process (which is the current
control process for the service) and reads the PID of the old
instance from the PID file, resetting MAINPID to the PID of the old
instance.
5. Old instance exits.
6. systemd receives SIGCHLD for the old instance, reads the PID of the
old instance from the PID file once more. systemd sees that the
MAINPID matches the child PID and considers the service exited.
7. systemd receives the notification from the new PID and is confused.
The service won't become active, because the notification wasn't
handled.
To fix it, update the PID file before sending the MAINPID
notification, similar to what a comment in systemd's
src/core/service.c suggests:
> /* Forking services may occasionally move to a new PID.
> * As long as they update the PID file before exiting the old
> * PID, they're fine. */
but for our Type=notify "before sending the notification" rather than
"before exiting", because otherwise, the mix-up in 4. could still
happen (although it might not actually be problematic without the
mix-up in 6., it still seems better to avoid).
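A minimal sketch of that ordering (the PID file handling and the notify helper are illustrative stand-ins, not the actual proxmox daemon code):
```
use std::fs;
use std::io;

fn hand_over_to_new_instance(pid_file: &str, new_pid: u32) -> io::Result<()> {
    // 1. persist the new main PID first ...
    fs::write(pid_file, format!("{}\n", new_pid))?;
    // 2. ... and only then tell systemd about the new MAINPID
    notify_systemd(&format!("MAINPID={}", new_pid))
}

// stand-in for sd_notify(3); the real code talks to $NOTIFY_SOCKET
fn notify_systemd(state: &str) -> io::Result<()> {
    eprintln!("sd_notify: {}", state);
    Ok(())
}
```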
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
without that one gets a "failed to lookup datastore X" in the log for
every datastore that is in read-only or offline maintenance mode,
even if they aren't scheduled for GC anyway.
Avoid that by first opening the datastore through a Lookup operation,
and only re-open it as Write op once we know that GC needs to get
scheduled for it.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
RDD update did not use lookup_datastore() and therefore bypassed
the maintenance mode checks. This adds the needed check directly.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Allow pulling all groups from a certain source namespace, and
possibly sub namespaces until max-depth, into a target namespace.
If any sub-namespaces get pulled, they will be mapped relatively from
the source parent namespace to the target parent namespace.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we do not have normal GET variables available in the checks provided
by the rest server from the api macro, so do it manually.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this is the most common sequence of checks we have in this file, so
let's have a single place where we implement it.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
these happen together very often.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
else this could leak the existence of the datastore.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the helper now takes both high-privilege and lesser-privilege privs, so
the resulting bool can be used to quickly check whether additional
checks like group ownership are needed or not.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We decided to go this route because it'll most likely be
safer in the API as we need to explicitly add namespaces
support to the various API endpoints this way.
For example, 'pull' should have 2 namespaces: local and
remote, and the GroupFilter (which would otherwise contain
exactly *one* namespace parameter) needs to be applied for
both sides (to decide what to pull from the remote, and what
to *remove* locally as cleanup).
The *datastore* types still contain the namespace and have a
`.backup_ns()` getter.
Note that the datastore's `Display` implementations are no
longer safe to use as a deserializable string.
Additionally, some datastore based methods now have been
exposed via the BackupGroup/BackupDir types to avoid a
"round trip" in code.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We can probably combine the base permission + owner check, but for
now add explicit ones up front so that the change is simpler, as only
one thing is done.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
allow to list any namespace with privileges on it and allow to create
and delete namespaces if the user has modify permissions on the parent
namespace.
Creation is only allowed if the parent NS already exists.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
these are the ones for non-#[api] methods, also fill in the
namespace in prune operations
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Make it easier by adding a helper accepting either a group or a
directory.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
used_datastores returned the 'target', but in the full_restore_worker,
we interpreted it as the source and searched for a mapping
(which we then locked)
since we cannot return a HashSet of Arc<T> (missing Hash trait on DataStore),
we now have a map of source -> target
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
thanks to commit 70142e607dda43fc778f39d52dc7bb3bba088cd3 from
the proxmox repo's proxmox-http crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and remove already fixed fixmes.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
[ T: squash in cargo fmt fixup for some trailing ws ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
pull_store is the entrypoint used by other code, the rest does not need
to be visible at all.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
else this might remove groups which are not part of the pull scope. note
that setting/using remove_vanished already checks the required privs
earlier.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
On reload the old process hands over to the new process, but needs to
keep running until all its worker tasks are finished to avoid
breaking an in-progress action like a xterm.js web shell or a backup
creation/restore.
During that wait time the receiving channel was already closed, but
the TCP socket accept listener was still left active by mistake.
That, paired with `SO_REUSEPORT` being set on the underlying socket,
made the kernel choose either the old or the new process for new
incoming connections, as both still listened for them, and reuse-port
+ multiple processes is often used as a load-balancer mechanism.
As the old proxy accepted connections but didn't process them anymore,
one could observe sporadic connection failures on any API call, well,
any new connection to the proxy, depending on which process got it
assigned.
The fix is to stop accepting new connections once we shut down, so
poll the shutdown_future too during accept and just exit the
accept-loop on shutdown.
Note: neither this part of the code nor other parts that could
influence it were changed at all in recent times, so it's still
unresolved why this pops up only now.
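A minimal sketch of such an accept loop (assuming tokio and a watch channel as shutdown signal; the real rest-server code differs in detail):
```
use tokio::net::TcpListener;
use tokio::sync::watch;

async fn accept_loop(listener: TcpListener, mut shutdown: watch::Receiver<bool>) {
    loop {
        tokio::select! {
            res = listener.accept() => {
                match res {
                    // hand the connection off to the request handling code
                    Ok((conn, peer)) => { let _ = (conn, peer); }
                    Err(err) => eprintln!("accept failed: {err}"),
                }
            }
            _ = shutdown.changed() => {
                // old process is shutting down: stop accepting new connections
                break;
            }
        }
    }
}
```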
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Co-authored-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
[ T: add more (root cause) info and reword a bit ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Returning the GC status was dropped by mistake in commit 762f7d15
("datastore status: factor out api type DataStoreStatusListItem")
As this is considered a breaking change, which we also felt due to
the gc-status being used in the web interface for the datastore
overview list (not the dashboard), re-add it.
Fixes: 762f7d15
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ T: add reference to breaking commit, reword message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
data blobs can only appear in a BackupDir (snapshot) in the backup
hierarchy, so it makes more sense that this lives there.
As it wasn't widely used anyway it's easy to move the single
non-package call site over to the new one directly and drop the
implementation from Datastore completely.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
> The hash set will be able to hold at least capacity elements
> without reallocating. If capacity is 0, the hash set will not
> allocate.
-- rustdoc, HashSet::with_capacity
So, the number we pass is the amount of chunk "IDs" we save, which is
then 64Ki, not 16Ki, and thus the size we can reference is also
256 GiB, not 64 GiB.
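A quick sanity check of those numbers, assuming the usual 4 MiB (dynamic) chunk size:
```
const CHUNK_IDS: u64 = 64 * 1024;               // 64Ki entries passed to with_capacity
const CHUNK_SIZE: u64 = 4 * 1024 * 1024;        // assumed average chunk size: 4 MiB
const REFERENCED: u64 = CHUNK_IDS * CHUNK_SIZE; // 2^16 * 2^22 = 2^38 bytes

fn main() {
    // 2^38 bytes == 256 GiB, not 64 GiB
    assert_eq!(REFERENCED, 256 * 1024 * 1024 * 1024);
}
```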
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
And drop the base_path parameter on a first bunch of
functions (more reordering will follow).
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
like for the index, instead of manually stripping it.
this (and the previous change) is backwards-compatible since `Remote`
already skipped serializing empty strings, so the returned JSON is
identical.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
And use the api-types for their contents.
These are supposed to be instances for a datastore, the pure
specifications are the ones in pbs_api_types which should be
preferred in crates like clients which do not need to deal
with the datastore directly.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
datastore's group_path will be moved to BackupDir soon and
this is required to be able to properly distinguish them
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
The type is a real enum.
All are API types and implement Display and FromStr. The
ordering is the same as it is in pbs-datastore.
Also, they are now flattened into a few structs instead of
being copied manually.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Avoid collecting the whole group list in memory only to iterate and
filter over it again.
Note that the change could result in an indentation change, so it is
best viewed with the `-w` flag.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The restore_key api-endpoint is tape/drive/{drive}/restore-key.
Since I cannot set the url parameter for the drive name to null or
undefined when restoring from an exported key, I moved the added
restore_key api code to
"create_key aka POST api2/json/config/tape-encryption-keys" and
added an ApiHandler call in the cli's "restore_key" to call
"create_key" in the api.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
by using the newly added 'create_tar' and the 'ZstdEncoder'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
namely /admin/datastore/{store}/snapshots
and /nodes/{node}/tasks
since those are api calls where the result can get quite large.
With this change, the serialization is now streaming instead of making
a `Value` in memory.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
namely 'StreamingSync' and 'StreamingAsync'
in rest-server by using the new formatter function,
and in the debug binary by using 'to_value'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
`&Value` itself implements `Deserializer` and can therefore
be passed directly to `T::deserialize` without requiring an
intermediate `clone()`. (This also enables optionally
borrowing strings if the result has a short enough lifetime)
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Saves the currently active read/write operation counts in a file. The
file is updated whenever a reference returned by lookup_datastore is
dropped and whenever a reference is returned by lookup_datastore. The
files are locked before every access; there is one file per datastore.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
by not bubbling up most errors, and continuing on. This avoids
stopping the cleanup just because e.g. one directory was missing.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
As serde_json will otherwise read files 1 byte at a time.
Writing is a bit better, but syntactical elements (quotes, braces,
commas) still often show up as single write syscalls, so use BufWriter
there as well.
Note that while we do store the file in the resulting objects, we do not
need to keep the buffered read/writers as we always `seek` to the
beginning on further file operations.
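A minimal sketch of the pattern, assuming a JSON state file and serde_json (not the actual call sites):
```
use std::fs::File;
use std::io::{BufReader, BufWriter, Write};

fn load(path: &str) -> Result<serde_json::Value, Box<dyn std::error::Error>> {
    let file = File::open(path)?;
    // buffered reads: serde_json no longer hits the kernel 1 byte at a time
    Ok(serde_json::from_reader(BufReader::new(file))?)
}

fn store(path: &str, value: &serde_json::Value) -> Result<(), Box<dyn std::error::Error>> {
    let mut writer = BufWriter::new(File::create(path)?);
    // buffered writes: quotes/braces/commas no longer become single syscalls
    serde_json::to_writer(&mut writer, value)?;
    writer.flush()?;
    Ok(())
}
```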
Reported-by: Mark Schouten <mark@tuxis.nl>
Link: https://lists.proxmox.com/pipermail/pbs-devel/2022-April/004909.html
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
added a parameter to the cli for importing a tape key via a
json-parameter or via reading an exported paperkey-file or json-file.
For this I also added a backupkey parameter to the api, but here it
only accepts json.
The cli interprets the parameter first as a json-string, then as a
json-file and last as a paperkey-file.
functionality:
proxmox-tape key paperkey [fingerprint of existing key] > paperkey.backup
proxmox-tape key restore --backupkey paperkey.backup # key from line above
proxmox-tape key restore --backupkey paperkey.json # only the json
proxmox-tape key restore --backupkey '{"kdf": {"Scrypt": ...' # json as string
For importing the key as a paperkey-file it is irrelevant whether the
paperkey was exported as html or txt.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
add support for multi-line comments to node.cfg and the api, similar to
how pve handles multi-line comments
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
the latest changes to this api call changed/removed some things that
were actually necessary for the gui. Re-add those and document them
this time.
The change from u64 to i64 limits us to 8EiB of datastore size
(instead of 16EiB), but if we ever reach that, we must adapt most
other parts to use 128-bit sizes anyway.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when we use http to make the api call, we have to parse the parameters
beforehand, else we might send the string "true" instead of the boolean
true, and the api rejects it with a 'Parameter verification error'.
We already have all api call schemas here, so parsing is possible.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
'objset_id' already contains that, so the error was
"could not parse 'objset-objset-0xFFFF' stat file"
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
this can only really fail for two reasons:
* the format is wrong:
this should not happen unless the format changed, then it will
happen every time
* the file can't be read:
this can happen if a user deletes and recreates a dataset manually,
since the mapped file does not exist anymore but the dataset does
for the second case, delete the mapping from the hashmap, so that the
next call will refresh the mapping with the correct file
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
the api should return a 404 error for entries that do not exist
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
when using the 'extjs' formatter, it marks them in a way, so that
the gui can mark the form fields with the error
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
currently, we sort chunks by inode when verifying or backing up to tape.
we get the inode# by stat'ing each chunk, which may be more expensive
than the gains of reading the chunks in order
Since that is highly dependent on the underlying storage of the datastore,
introduce a tuning option so that the admin can tune that behaviour
for each datastore.
The default stays the same (sorting by inode)
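A minimal sketch of that tuning option (the actual api type may be named and derived differently):
```
#[derive(Clone, Copy, Debug, Default)]
pub enum ChunkOrder {
    /// read chunks in the order the index references them
    None,
    /// stat each chunk and sort by inode number before reading (default)
    #[default]
    Inode,
}
```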
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
avoid that the acme renewal is skipped due to bailing out earlier
from a subscription or apt update error.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and move the comment from the local io_bail in pbs-client/src/pxar/fuse.rs
to the only use
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
currently, the iterator goes over *all* chunks of the index, even
those already backed up by a previous snapshot in the same tape
backup. This is bad since for each iteration we stat each chunk to
sort by inode number. So to avoid stat'ing the same chunks over
and over for consecutive snapshots, add a 'skip_fn' to the iterator
and the pool writer, and check the catalog_set there to decide
whether a chunk can be skipped.
This means we can drop the later check of the catalog_set
(since we don't modify it here).
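A minimal sketch of the 'skip_fn' idea (illustrative; the real iterator lives in the datastore/pool writer code):
```
fn chunks_to_backup<'a>(
    index_digests: &'a [[u8; 32]],
    skip_fn: impl Fn(&[u8; 32]) -> bool + 'a,
) -> impl Iterator<Item = &'a [u8; 32]> {
    // the caller's skip_fn consults the catalog_set, so chunks already
    // written in this tape backup are never stat'ed or sorted again
    index_digests.iter().filter(move |&digest| !skip_fn(digest))
}
```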
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Adds the '--force' flag to the proxmox-tape command allowing users
with root privileges to overwrite the passphrase of a given key.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
When force is used, the current passphrase is not required. Instead
it will be read from the file pointed to by TAPE_KEYS_FILENAME and
the old key configuration will be overwritten using the new
passphrase. Requires super user privileges.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
This is something that is checked all the time. Having it further up
saves on scrolling and brings it into better alignment with PVE & PMG
regarding where in the report the info is located.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
So that we can make 'log::debug' messages actually appear in the
syslog.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
For the API, the parameter --hint is not optional. This patch fixes
the man page, and the cli command no longer sends an API call if the
parameter is missing.
Signed-off-by: Markus Frank <m.frank@proxmox.com>