Compare commits


14 Commits

Author SHA1 Message Date
Thomas Lamprecht
58fb448be5 bump version to 3.4.1-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-16 14:45:45 +02:00
Dominik Csapak
07a21616c2 tape: wait for calibration of LTO-9 tapes
In case we run into a ready check timeout, query the drive, and
increase the timeout to 2 hours and 5 minutes if it's calibrating (5
minutes headroom). This is effectively a generalization of commit
0b1a30aa ("tape: adapt format_media for LTO9+"), which increased the
timeout for the format procedure, while this here also covers tapes
that were not explicitly formatted but get auto-formatted indirectly
on the first action changing a fresh tape, e.g. barcode labeling.

The actual reason for this is that since LTO-9, initial loading of
tapes into a drive can block for up to 2 hours according to the spec.
The IBM and HP LTO SCSI references can be found rather easily [0][1].

As for the timeout, IBM only mentions it in their recommendations:
> Although most optimizations will complete within 60 minutes some
> optimizations may take up to 2 hours.

And HP states:
> Media initialization adds a variable amount of time to the
> initialization process that typically takes between 20 minutes and
> 2 hours.

So it seems there is no hard limit and the duration depends on the
drive, but most ordinary setups should be covered; in my tests it
always took around the 1 hour mark.

0: IBM LTO-9 https://www.ibm.com/support/pages/system/files/inline-files/LTO%20SCSI%20Reference_GA32-0928-05%20(EXTERNAL)_0.pdf
1: HP LTO-9 https://support.hpe.com/hpesc/public/docDisplay?docId=sd00001239en_us&page=GUID-D7147C7F-2016-0901-0921-000000000450.html
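
In code, the generalized wait might look roughly like the following
self-contained sketch, with stubbed SCSI helpers standing in for the
real pbs-tape ones (the actual change to SgTape::wait_until_ready is
in the diff near the end of this compare):

```
use std::time::{Duration, SystemTime};

// Stubs standing in for the real SCSI helpers of pbs-tape.
enum DeviceActivity { Calibrating, Other }
fn test_unit_ready() -> bool { false }
fn read_device_activity() -> Result<DeviceActivity, String> { Ok(DeviceActivity::Calibrating) }

fn wait_until_ready(timeout_secs: u64) -> Result<(), String> {
    let start = SystemTime::now();
    let mut max_wait = Duration::new(timeout_secs, 0);
    let mut increased_timeout = false;

    loop {
        if test_unit_ready() {
            return Ok(());
        }
        std::thread::sleep(Duration::new(1, 0));
        if start.elapsed().map_err(|err| err.to_string())? > max_wait {
            // LTO-9+ media optimization can block the drive for up to 2 hours,
            // so extend the timeout once (plus 5 minutes headroom) if the
            // drive reports that it is calibrating.
            if !increased_timeout {
                if let Ok(DeviceActivity::Calibrating) = read_device_activity() {
                    max_wait = Duration::new(2 * 60 * 60 + 5 * 60, 0);
                    increased_timeout = true;
                    continue;
                }
            }
            return Err("wait_until_ready failed - got timeout".into());
        }
    }
}
```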

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250415114043.2389789-1-d.csapak@proxmox.com
 [TL: extend commit message with info that Dominik provided in a
  reply]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-16 14:28:04 +02:00
Christian Ebner
cb9814e331 garbage collection: fix rare race in chunk marking phase
During phase 1 of garbage collection, referenced chunks are marked as
in use by iterating over all index files and updating the atime of
the chunks they reference.

In an edge case for long running garbage collection jobs, a newly
added snapshot (created after the start of GC) may reuse known chunks
from a previous snapshot, while the previous snapshot's index
referencing them disappears before the marking phase can reach it
(e.g. pruned because a retention setting keeps only 1 snapshot). The
known chunks from that previous index file might then never be
marked, given that no other index file references them.

Since commit 74361da8 ("garbage collection: generate index file list
via datastore iterators") this is even less likely, as the iteration
now also reads index files added during phase 1, so either the new or
the previous index file will account for these chunks (the previous
backup snapshot can only be pruned after the new one has finished,
since it is locked). There remains, however, a small race window
between the reading of the snapshots in the backup group and the
reading of the actual index files for marking.

Fix this race by:
1. checking whether the last snapshot of a group disappeared, and if so
2. generating the list again, looking for new index files previously
   not accounted for,
3. locking the group if the snapshot list has still changed after the
   10th retry, to avoid possible endless looping (this causes
   concurrent operations on this group to fail), as sketched below.
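
Schematically, the retry scheme boils down to the following minimal,
self-contained sketch (stub types for illustration; the real change
is in the datastore.rs hunk further down):

```
// Minimal sketch of the retry scheme with stubbed types; the real code is in
// the pbs-datastore mark-used-chunks phase shown in the diff below.
struct Group;

impl Group {
    // stands in for the real group lock guard
    fn lock(&self) -> Result<(), String> {
        Ok(())
    }
    fn list_snapshots(&self) -> Vec<&'static str> {
        vec!["vm/101/2025-04-16T10:00:00Z"]
    }
}

fn mark_group(group: &Group) -> Result<(), String> {
    let mut retry_counter = 0;
    'retry: loop {
        // Lock the group only after ten failed rounds: the lock makes
        // concurrent operations on this group fail, so it is a last resort.
        let _lock = match retry_counter {
            0..=9 => None,
            10 => Some(group.lock().map_err(|_| "exhausted retries and failed to lock group")?),
            _ => return Err("exhausted retries and unexpected counter overrun".into()),
        };
        // Iterate newest-first: if the newest snapshot's index vanished
        // (count == 0), a newer snapshot referencing the reused chunks must
        // exist by now, so re-list the group and retry.
        for (count, snapshot) in group.list_snapshots().into_iter().rev().enumerate() {
            let index_vanished = false; // stand-in for open_index_reader() returning None
            if index_vanished && count == 0 {
                retry_counter += 1;
                continue 'retry;
            }
            let _ = snapshot; // mark the chunks referenced by this snapshot here
        }
        break Ok(());
    }
}
```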

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Acked-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Link: https://lore.proxmox.com/20250416105000.270166-3-c.ebner@proxmox.com
2025-04-16 14:17:24 +02:00
Christian Ebner
31dbaf69ab garbage collection: fail on ArchiveType::Blob in open index reader
Instead of returning None, fail if the open index reader is called
on a blob file. Blobs cannot be read as an index anyway, and this
allows distinguishing cases where the index file cannot be read
because it vanished.
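
The changed contract in a nutshell, as a stubbed sketch (the actual
hunk is in the datastore.rs diff below): Ok(None) is reserved for
vanished files, while anything unreadable as an index becomes a hard
error.

```
// Stubbed sketch: Ok(None) now strictly means "index file vanished", while
// blob or unknown archive types are hard errors instead of a silent None.
enum ArchiveType { FixedIndex, DynamicIndex, Blob }

fn open_index_reader(archive_type: Result<ArchiveType, ()>) -> Result<Option<&'static str>, String> {
    match archive_type {
        Ok(ArchiveType::FixedIndex) => Ok(Some("fixed index reader")),
        Ok(ArchiveType::DynamicIndex) => Ok(Some("dynamic index reader")),
        // previously Ok(None), indistinguishable from a vanished index file
        Ok(ArchiveType::Blob) | Err(_) => Err("unexpected archive type".into()),
    }
}
```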

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250416105000.270166-2-c.ebner@proxmox.com
2025-04-16 14:17:24 +02:00
Shannon Sterz
af5ff86a26 sync: switch reader back to a shared lock
the below commit accidentally switched this lock to an exclusive
lock, when it should just be a shared one, as that is sufficient for
a reader:

e2c1866b: datastore/api/backup: prepare for fix of #3935 by adding
lock helpers

this has already caused failed backups for a user whose sync job ran
while they were trying to create a new backup.

https://forum.proxmox.com/threads/165038
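
For illustration, the shared-vs-exclusive distinction using the fs2
crate's file locks (an assumption for this sketch; PBS uses its own
BackupDir lock helpers introduced by e2c1866b):

```
// Sketch with the `fs2` crate (assumed here for illustration only): readers
// take a shared flock so they do not block each other or concurrent backups,
// while writers take the exclusive one.
use fs2::FileExt;
use std::fs::File;

fn main() -> std::io::Result<()> {
    let lock = File::create("/tmp/snapshot.lock")?;
    lock.lock_shared()?; // reader: multiple shared holders are fine
    // lock.lock_exclusive()?; // writer: blocks while any shared lock is held
    lock.unlock()?;
    Ok(())
}
```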

Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
2025-04-16 11:35:27 +02:00
Christian Ebner
5fc281cd89 garbage collection: fix: account for created/deleted index files
Since commit 74361da8 ("garbage collection: generate index file list
via datastore iterators") not only snapshots present at the start of
the garbage collection run are considered for marking, but also newly
added ones. Take these into account by adapting the total index file
counter used for the progress output.

Further, correctly take into account index files which have been
pruned during GC: these are present in the list of index files still
to be processed, but are never encountered by the datastore
iterators. They would otherwise be incorrectly interpreted as strange
paths and logged accordingly, causing confusion, as reported in the
community forum [0].
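
A condensed, self-contained sketch of the corrected accounting
(hypothetical helper for illustration; the actual change is in the
datastore.rs hunk below):

```
use std::collections::HashSet;
use std::path::PathBuf;

// Hypothetical condensed helper: adjust the totals for index files created
// or pruned while GC was already running.
fn account(mut unprocessed: HashSet<PathBuf>, seen_by_iterators: &[PathBuf]) {
    let mut index_count = unprocessed.len();
    for path in seen_by_iterators {
        // Files created after GC start are missing from the initial list:
        // bump the total so the progress output stays consistent.
        if !unprocessed.remove(path) {
            index_count += 1;
        }
    }
    // The remainder was either pruned during GC (vanished, nothing strange)
    // or genuinely lives outside the expected directory scheme; the exists()
    // check stands in for open_index_reader() returning Ok(None).
    let strange_paths_count = unprocessed.iter().filter(|path| path.exists()).count();
    if strange_paths_count > 0 {
        eprintln!("found {strange_paths_count} index files outside of expected directory scheme");
    }
    println!("total: {index_count} index files");
}
```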

Fixes: https://forum.proxmox.com/threads/164968/
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-15 12:17:21 +02:00
Fabian Grünbichler
6c6257b94e build: add .do-static-cargo-build target
else parallel builds of the static binaries will not work correctly, just like
with the regular .do-cargo-build.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2025-04-15 12:16:51 +02:00
Christian Ebner
c644f7bc85 build: include pxar in static binary compilation and package
The debian package providing the dynamically linked version of the
proxmox-backup-client is packaged together with the pxar executable.

To be consistent, and for user convenience, include a statically
linked version of pxar in the static package as well.

Renames the STATIC_BIN make variable to STATIC_BINS to reflect that
this now covers multiple binaries, and stores the rustc flags in
their own variable so they can be reused, since `cargo rustc` does
not allow invocations with multiple `--package` arguments at once.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-15 12:16:51 +02:00
Christian Ebner
4a022e1a3f api: backup: include previous snapshot name in log message
Extends the log messages written to the server's backup worker task
log to include the name of the snapshot used as the previous snapshot.

This information facilitates debugging efforts, as the previous
snapshot might have been pruned since.

For example, instead of
```
download 'index.json.blob' from previous backup.
register chunks in 'drive-scsi0.img.fidx' from previous backup.
download 'drive-scsi0.img.fidx' from previous backup.
```

this now logs
```
download 'index.json.blob' from previous backup 'vm/101/2025-04-15T09:02:10Z'.
register chunks in 'drive-scsi0.img.fidx' from previous backup 'vm/101/2025-04-15T09:02:10Z'.
download 'drive-scsi0.img.fidx' from previous backup 'vm/101/2025-04-15T09:02:10Z'.
```

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2025-04-15 12:12:06 +02:00
Christian Ebner
9247d57fdf docs: describe the intend for the statically linked pbs client
Discourage the use of the statically linked binary on systems where
the regular one is available.

Moves the previous note into its own section and links to the
installation section.

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250410093059.130504-1-c.ebner@proxmox.com
2025-04-10 21:01:24 +02:00
Gabriel Goller
427c687e35 restrict consent-banner text length
Add a maxLength of 64*1024 in the frontend and the api. We allow a
max body size of 512*1024 in the api (with patch [0]), so we should
be fine.

[0]: https://git.proxmox.com/?p=proxmox.git;a=commit;h=cf9e6c03a092acf8808ce83dad9249414fe4d588
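
As a sketch of the bound being enforced (validate_consent_text is a
hypothetical helper; the real change adds max_length to the api
schema and maxLength to the ExtJS field, see the hunks below):

```
// Hypothetical validation helper illustrating the new bound: the consent
// text is capped at 64 KiB, well below the 512 KiB maximum request body.
const CONSENT_TEXT_MAX_LENGTH: usize = 64 * 1024;

fn validate_consent_text(text: &str) -> Result<(), String> {
    if text.len() > CONSENT_TEXT_MAX_LENGTH {
        return Err(format!(
            "consent text too long ({} > {CONSENT_TEXT_MAX_LENGTH} bytes)",
            text.len()
        ));
    }
    Ok(())
}
```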

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Link: https://lore.proxmox.com/20250410082052.53097-1-g.goller@proxmox.com
2025-04-10 11:40:51 +02:00
Lukas Wagner
f9532a3a84 ui: token view: rephrase token regenerate dialog message
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.proxmox.com/20250410085124.81931-2-l.wagner@proxmox.com
2025-04-10 11:38:51 +02:00
Lukas Wagner
d400673641 ui: token view: fix typo in 'lose'
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250410085124.81931-1-l.wagner@proxmox.com
2025-04-10 11:38:51 +02:00
Thomas Lamprecht
cdc710a736 d/control: normalize with wrap-and-sort -tkn
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-04-09 18:16:35 +02:00
16 changed files with 189 additions and 72 deletions

Cargo.toml

@@ -1,5 +1,5 @@
 [workspace.package]
-version = "3.4.0"
+version = "3.4.1"
 authors = [
     "Dietmar Maurer <dietmar@proxmox.com>",
     "Dominik Csapak <d.csapak@proxmox.com>",

Makefile

@@ -50,6 +50,8 @@ COMPILEDIR := target/$(DEB_HOST_RUST_TYPE)/debug
 STATIC_COMPILEDIR := $(STATIC_TARGET_DIR)/$(DEB_HOST_RUST_TYPE)/debug
 endif

+STATIC_RUSTC_FLAGS := -C target-feature=+crt-static -L $(STATIC_COMPILEDIR)/deps-stubs/
+
 ifeq ($(valgrind), yes)
 CARGO_BUILD_ARGS += --features valgrind
 endif
@@ -59,8 +61,8 @@ CARGO ?= cargo
 COMPILED_BINS := \
 	$(addprefix $(COMPILEDIR)/,$(USR_BIN) $(USR_SBIN) $(SERVICE_BIN) $(RESTORE_BIN))

-STATIC_BIN := \
-	$(addprefix $(STATIC_COMPILEDIR)/,proxmox-backup-client-static)
+STATIC_BINS := \
+	$(addprefix $(STATIC_COMPILEDIR)/,proxmox-backup-client-static pxar-static)

 export DEB_VERSION DEB_VERSION_UPSTREAM
@@ -153,7 +155,7 @@ clean: clean-deb
 	$(foreach i,$(SUBDIRS), \
 	    $(MAKE) -C $(i) clean ;)
 	$(CARGO) clean
-	rm -f .do-cargo-build
+	rm -f .do-cargo-build .do-static-cargo-build

 # allows one to avoid running cargo clean when one just wants to tidy up after a package build
 clean-deb:
@@ -202,12 +204,25 @@ $(COMPILED_BINS) $(COMPILEDIR)/dump-catalog-shell-cli $(COMPILEDIR)/docgen: .do-
 	    --bin sg-tape-cmd
 	touch "$@"

+.PHONY: proxmox-backup-client-static
+proxmox-backup-client-static:
+	rm -f .do-static-cargo-build
+	$(MAKE) $(STATIC_BINS)
+
+$(STATIC_BINS): .do-static-cargo-build
+
+.do-static-cargo-build:
+	mkdir -p $(STATIC_COMPILEDIR)/deps-stubs/ && \
+	  echo '!<arch>' > $(STATIC_COMPILEDIR)/deps-stubs/libsystemd.a # workaround for to greedy linkage and proxmox-systemd
+	$(CARGO) rustc $(CARGO_BUILD_ARGS) --package pxar-bin --bin pxar \
+	    --target-dir $(STATIC_TARGET_DIR) -- $(STATIC_RUSTC_FLAGS)
+	$(CARGO) rustc $(CARGO_BUILD_ARGS) --package proxmox-backup-client --bin proxmox-backup-client \
+	    --target-dir $(STATIC_TARGET_DIR) -- $(STATIC_RUSTC_FLAGS)
+
 .PHONY: lint
 lint:
 	cargo clippy -- -A clippy::all -D clippy::correctness

-install: $(COMPILED_BINS) $(STATIC_BIN)
+install: $(COMPILED_BINS) $(STATIC_BINS)
 	install -dm755 $(DESTDIR)$(BINDIR)
 	install -dm755 $(DESTDIR)$(ZSH_COMPL_DEST)
 	$(foreach i,$(USR_BIN), \
@@ -227,6 +242,7 @@ install: $(COMPILED_BINS) $(STATIC_BIN)
 	$(foreach i,$(SERVICE_BIN), \
 	    install -m755 $(COMPILEDIR)/$(i) $(DESTDIR)$(LIBEXECDIR)/proxmox-backup/ ;)
 	install -m755 $(STATIC_COMPILEDIR)/proxmox-backup-client $(DESTDIR)$(BINDIR)/proxmox-backup-client-static
+	install -m755 $(STATIC_COMPILEDIR)/pxar $(DESTDIR)$(BINDIR)/pxar-static
 	$(MAKE) -C www install
 	$(MAKE) -C docs install
 	$(MAKE) -C templates install
@@ -241,14 +257,3 @@ upload: $(SERVER_DEB) $(CLIENT_DEB) $(RESTORE_DEB) $(DOC_DEB) $(STATIC_CLIENT_DE
 	tar cf - $(CLIENT_DEB) $(CLIENT_DBG_DEB) | ssh -X repoman@repo.proxmox.com upload --product "pve,pmg,pbs-client" --dist $(UPLOAD_DIST)
 	tar cf - $(STATIC_CLIENT_DEB) $(STATIC_CLIENT_DBG_DEB) | ssh -X repoman@repo.proxmox.com upload --product "pbs-client" --dist $(UPLOAD_DIST)
 	tar cf - $(RESTORE_DEB) $(RESTORE_DBG_DEB) | ssh -X repoman@repo.proxmox.com upload --product "pve" --dist $(UPLOAD_DIST)
-
-.PHONY: proxmox-backup-client-static
-proxmox-backup-client-static:
-	rm -f $(STATIC_BIN)
-	$(MAKE) $(STATIC_BIN)
-
-$(STATIC_BIN):
-	mkdir -p $(STATIC_COMPILEDIR)/deps-stubs/ && \
-	  echo '!<arch>' > $(STATIC_COMPILEDIR)/deps-stubs/libsystemd.a # workaround for to greedy linkage and proxmox-systemd
-	$(CARGO) rustc $(CARGO_BUILD_ARGS) --package proxmox-backup-client --bin proxmox-backup-client \
-	    --target-dir $(STATIC_TARGET_DIR) -- -C target-feature=+crt-static -L $(STATIC_COMPILEDIR)/deps-stubs/

debian/changelog

@@ -1,3 +1,26 @@
+rust-proxmox-backup (3.4.1-1) bookworm; urgency=medium
+
+  * ui: token view: fix typo in 'lose' and rephrase token regenerate dialog
+    message for more clarity.
+
+  * restrict consent-banner text length to 64 KiB.
+
+  * docs: describe the intend for the statically linked pbs client.
+
+  * api: backup: include previous snapshot name in log message.
+
+  * garbage collection: account for created/deleted index files concurrently
+    to GC to avoid potentially confusing log messages.
+
+  * garbage collection: fix rare race in chunk marking phase for setups doing
+    high frequent backups in quick succession while immediately pruning to a
+    single backup snapshot being left over after each such backup.
+
+  * tape: wait for calibration of LTO-9 tapes in general, not just in the
+    initial tape format procedure.
+
+ -- Proxmox Support Team <support@proxmox.com>  Wed, 16 Apr 2025 14:45:37 +0200
+
 rust-proxmox-backup (3.4.0-1) bookworm; urgency=medium

   * fix #4788: build statically linked version of the proxmox-backup-client

debian/control

@@ -208,7 +208,7 @@ Description: Proxmox Backup Client tools
 Package: proxmox-backup-client-static
 Architecture: any
 Depends: qrencode, ${misc:Depends},
-Conflicts: proxmox-backup-client
+Conflicts: proxmox-backup-client,
 Description: Proxmox Backup Client tools (statically linked)
  This package contains the Proxmox Backup client, which provides a
  simple command line tool to create and restore backups.


@@ -1 +1,2 @@
 debian/proxmox-backup-client.bc proxmox-backup-client
+debian/pxar.bc pxar


@@ -1,2 +1,4 @@
 usr/share/man/man1/proxmox-backup-client.1
+usr/share/man/man1/pxar.1
 usr/share/zsh/vendor-completions/_proxmox-backup-client
+usr/share/zsh/vendor-completions/_pxar


@@ -34,13 +34,13 @@ usr/share/man/man5/media-pool.cfg.5
 usr/share/man/man5/notifications-priv.cfg.5
 usr/share/man/man5/notifications.cfg.5
 usr/share/man/man5/proxmox-backup.node.cfg.5
+usr/share/man/man5/prune.cfg.5
 usr/share/man/man5/remote.cfg.5
 usr/share/man/man5/sync.cfg.5
 usr/share/man/man5/tape-job.cfg.5
 usr/share/man/man5/tape.cfg.5
 usr/share/man/man5/user.cfg.5
 usr/share/man/man5/verification.cfg.5
-usr/share/man/man5/prune.cfg.5
 usr/share/proxmox-backup/templates/default/acme-err-body.txt.hbs
 usr/share/proxmox-backup/templates/default/acme-err-subject.txt.hbs
 usr/share/proxmox-backup/templates/default/gc-err-body.txt.hbs

debian/rules

@@ -49,6 +49,7 @@ override_dh_auto_install:
 	    LIBDIR=/usr/lib/$(DEB_HOST_MULTIARCH)
 	mkdir -p debian/proxmox-backup-client-static/usr/bin
 	mv debian/tmp/usr/bin/proxmox-backup-client-static debian/proxmox-backup-client-static/usr/bin/proxmox-backup-client
+	mv debian/tmp/usr/bin/pxar-static debian/proxmox-backup-client-static/usr/bin/pxar

 override_dh_installsystemd:
 	dh_installsystemd -pproxmox-backup-server proxmox-backup-daily-update.timer


@@ -46,11 +46,21 @@ user\@pbs!token@host:store ``user@pbs!token`` host:8007 store
 [ff80::51]:1234:mydatastore      ``root@pam``       [ff80::51]:1234    mydatastore
 ================================ ================== ================== ===========

-.. Note:: If you are using the statically linked binary of proxmox backup client
-   name resolution will not be performed via the mechanisms provided by libc,
-   but uses a resolver written purely in the Rust programming language.
-   Therefore, features and modules provided by Name Service Switch cannot be
-   used.
+.. _statically_linked_client:
+
+Statically Linked Backup Client
+-------------------------------
+
+A statically linked version of the Proxmox Backup client is available for Linux
+based systems where the regular client is not available. Please note that it is
+recommended to use the regular client when possible, as the statically linked
+client is not a full replacement. For example, name resolution will not be
+performed via the mechanisms provided by libc, but uses a resolver written
+purely in the Rust programming language. Therefore, features and modules
+provided by Name Service Switch cannot be used.
+
+The statically linked client is available via the ``pbs-client`` repository as
+described in the :ref:`installation <install_pbc>` section.

 .. _environment-variables:


@@ -1031,13 +1031,15 @@ impl DataStore {
         Ok(list)
     }

-    // Similar to open index, but ignore index files with blob or unknown archive type.
-    // Further, do not fail if file vanished.
-    fn open_index_reader(&self, absolute_path: &Path) -> Result<Option<Box<dyn IndexFile>>, Error> {
+    // Similar to open index, but return with Ok(None) if index file vanished.
+    fn open_index_reader(
+        &self,
+        absolute_path: &Path,
+    ) -> Result<Option<Box<dyn IndexFile>>, Error> {
         let archive_type = match ArchiveType::from_path(absolute_path) {
-            Ok(archive_type) => archive_type,
-            // ignore archives with unknown archive type
-            Err(_) => return Ok(None),
+            Ok(ArchiveType::Blob) | Err(_) => bail!("unexpected archive type"),
+            Ok(archive_type) => archive_type,
         };

         if absolute_path.is_relative() {
@@ -1064,7 +1066,7 @@ impl DataStore {
                     .with_context(|| format!("can't open dynamic index '{absolute_path:?}'"))?;
                 Ok(Some(Box::new(reader)))
             }
-            ArchiveType::Blob => Ok(None),
+            ArchiveType::Blob => bail!("unexpected archive type blob"),
         }
     }
@@ -1129,7 +1131,7 @@ impl DataStore {
         // the detected index files not following the iterators logic.
         let mut unprocessed_index_list = self.list_index_files()?;

-        let index_count = unprocessed_index_list.len();
+        let mut index_count = unprocessed_index_list.len();

         let mut chunk_lru_cache = LruCache::new(cache_capacity);
         let mut processed_index_files = 0;
@@ -1143,57 +1145,108 @@ impl DataStore {
                 let namespace = namespace.context("iterating namespaces failed")?;
                 for group in arc_self.iter_backup_groups(namespace)? {
                     let group = group.context("iterating backup groups failed")?;
-                    let mut snapshots = group.list_backups().context("listing snapshots failed")?;
-
-                    // Sort by snapshot timestamp to iterate over consecutive snapshots for each image.
-                    BackupInfo::sort_list(&mut snapshots, true);
-                    for snapshot in snapshots {
-                        for file in snapshot.files {
-                            worker.check_abort()?;
-                            worker.fail_on_shutdown()?;
-
-                            let mut path = snapshot.backup_dir.full_path();
-                            path.push(file);
-
-                            let index = match self.open_index_reader(&path)? {
-                                Some(index) => index,
-                                None => continue,
-                            };
-                            self.index_mark_used_chunks(
-                                index,
-                                &path,
-                                &mut chunk_lru_cache,
-                                status,
-                                worker,
-                            )?;
-
-                            unprocessed_index_list.remove(&path);
-
-                            let percentage = (processed_index_files + 1) * 100 / index_count;
-                            if percentage > last_percentage {
-                                info!(
-                                    "marked {percentage}% ({} of {index_count} index files)",
-                                    processed_index_files + 1,
-                                );
-                                last_percentage = percentage;
-                            }
-                            processed_index_files += 1;
-                        }
-                    }
+                    // Avoid race between listing/marking of snapshots by GC and pruning the last
+                    // snapshot in the group, following a new snapshot creation. Otherwise known chunks
+                    // might only be referenced by the new snapshot, so it must be read as well.
+                    let mut retry_counter = 0;
+                    'retry: loop {
+                        let _lock = match retry_counter {
+                            0..=9 => None,
+                            10 => Some(
+                                group
+                                    .lock()
+                                    .context("exhausted retries and failed to lock group")?,
+                            ),
+                            _ => bail!("exhausted retries and unexpected counter overrun"),
+                        };
+
+                        let mut snapshots = match group.list_backups() {
+                            Ok(snapshots) => snapshots,
+                            Err(err) => {
+                                if group.exists() {
+                                    return Err(err).context("listing snapshots failed")?;
+                                }
+                                break 'retry;
+                            }
+                        };
+
+                        // Always start iteration with the last snapshot of the group to reduce race
+                        // window with concurrent backup+prune previous last snapshot. Allows to retry
+                        // without the need to keep track of already processed index files for the
+                        // current group.
+                        BackupInfo::sort_list(&mut snapshots, true);
+                        for (count, snapshot) in snapshots.into_iter().rev().enumerate() {
+                            for file in snapshot.files {
+                                worker.check_abort()?;
+                                worker.fail_on_shutdown()?;
+
+                                match ArchiveType::from_path(&file) {
+                                    Ok(ArchiveType::FixedIndex) | Ok(ArchiveType::DynamicIndex) => (),
+                                    Ok(ArchiveType::Blob) | Err(_) => continue,
+                                }
+
+                                let mut path = snapshot.backup_dir.full_path();
+                                path.push(file);
+
+                                let index = match self.open_index_reader(&path)? {
+                                    Some(index) => index,
+                                    None => {
+                                        unprocessed_index_list.remove(&path);
+                                        if count == 0 {
+                                            retry_counter += 1;
+                                            continue 'retry;
+                                        }
+                                        continue;
+                                    }
+                                };
+                                self.index_mark_used_chunks(
+                                    index,
+                                    &path,
+                                    &mut chunk_lru_cache,
+                                    status,
+                                    worker,
+                                )?;
+
+                                if !unprocessed_index_list.remove(&path) {
+                                    info!("Encountered new index file '{path:?}', increment total index file count");
+                                    index_count += 1;
+                                }
+
+                                let percentage = (processed_index_files + 1) * 100 / index_count;
+                                if percentage > last_percentage {
+                                    info!(
+                                        "marked {percentage}% ({} of {index_count} index files)",
+                                        processed_index_files + 1,
+                                    );
+                                    last_percentage = percentage;
+                                }
+                                processed_index_files += 1;
+                            }
+                        }
+                        break;
+                    }
                 }
             }

-        let strange_paths_count = unprocessed_index_list.len();
-        if strange_paths_count > 0 {
-            warn!("found {strange_paths_count} index files outside of expected directory scheme");
-        }
+        let mut strange_paths_count = unprocessed_index_list.len();
         for path in unprocessed_index_list {
             let index = match self.open_index_reader(&path)? {
                 Some(index) => index,
-                None => continue,
+                None => {
+                    // do not count vanished (pruned) backup snapshots as strange paths.
+                    strange_paths_count -= 1;
+                    continue;
+                }
             };
             self.index_mark_used_chunks(index, &path, &mut chunk_lru_cache, status, worker)?;
             warn!("Marked chunks for unexpected index file at '{path:?}'");
         }
+        if strange_paths_count > 0 {
+            warn!("Found {strange_paths_count} index files outside of expected directory scheme");
+        }

         Ok(())
     }


@@ -659,7 +659,8 @@ impl SgTape {
     pub fn wait_until_ready(&mut self, timeout: Option<u64>) -> Result<(), Error> {
         let start = SystemTime::now();
         let timeout = timeout.unwrap_or(Self::SCSI_TAPE_DEFAULT_TIMEOUT as u64);
-        let max_wait = std::time::Duration::new(timeout, 0);
+        let mut max_wait = std::time::Duration::new(timeout, 0);
+        let mut increased_timeout = false;

         loop {
             match self.test_unit_ready() {
@@ -667,6 +668,16 @@ impl SgTape {
                 _ => {
                     std::thread::sleep(std::time::Duration::new(1, 0));
                     if start.elapsed()? > max_wait {
+                        if !increased_timeout {
+                            if let Ok(DeviceActivity::Calibrating) =
+                                read_device_activity(&mut self.file)
+                            {
+                                log::info!("Detected drive calibration, increasing timeout to 2 hours 5 minutes");
+                                max_wait = std::time::Duration::new(2 * 60 * 60 + 5 * 60, 0);
+                                increased_timeout = true;
+                                continue;
+                            }
+                        }
                         bail!("wait_until_ready failed - got timeout");
                     }
                 }


@@ -853,8 +853,8 @@ fn download_previous(
         };

         if let Some(index) = index {
             env.log(format!(
-                "register chunks in '{}' from previous backup.",
-                archive_name
+                "register chunks in '{archive_name}' from previous backup '{}'.",
+                last_backup.backup_dir.dir(),
             ));

             for pos in 0..index.index_count() {
@@ -865,7 +865,10 @@ fn download_previous(
             }
         }

-        env.log(format!("download '{}' from previous backup.", archive_name));
+        env.log(format!(
+            "download '{archive_name}' from previous backup '{}'.",
+            last_backup.backup_dir.dir(),
+        ));
         crate::api2::helpers::create_download_response(path).await
     }
     .boxed()


@@ -174,6 +174,11 @@ pub enum Translation {
         "description" : {
             optional: true,
             schema: MULTI_LINE_COMMENT_SCHEMA,
         },
+        "consent-text" : {
+            optional: true,
+            type: String,
+            max_length: 64 * 1024,
+        }
     },
)]


@@ -480,7 +480,7 @@ impl SyncSource for LocalSource {
     ) -> Result<Arc<dyn SyncSourceReader>, Error> {
         let dir = self.store.backup_dir(ns.clone(), dir.clone())?;
         let guard = dir
-            .lock()
+            .lock_shared()
             .with_context(|| format!("while reading snapshot '{dir:?}' for a sync job"))?;
         Ok(Arc::new(LocalSourceReader {
             _dir_lock: Arc::new(Mutex::new(guard)),


@@ -59,6 +59,9 @@ Ext.define('PBS.NodeOptionView', {
             name: 'consent-text',
             text: gettext('Consent Text'),
             deleteEmpty: true,
+            fieldOpts: {
+                maxLength: 64 * 1024,
+            },
             onlineHelp: 'consent_banner',
         },
     ],


@@ -193,7 +193,7 @@ Ext.define('PBS.config.TokenView', {
             handler: 'regenerateToken',
             dangerous: true,
             confirmMsg: rec => Ext.String.format(
-                gettext("Regenerate the secret of the API token '{0}'? All current use-sites will loose access!"),
+                gettext("Regenerate the secret of the API token '{0}'? All users of the previous token secret will lose access!"),
                 rec.data.tokenid,
             ),
         },