Mirror of https://git.proxmox.com/git/proxmox-backup
Instead of iterating over all index files found in the datastore in an
unstructured manner, use the datastore iterators to logically iterate over
them as other datastore operations do. This allows index files in unexpected
locations to be distinguished from ones in their expected location, warning
the user about unexpected ones so that possible misconfigurations can be
acted upon. Further, this will make it easier to mark snapshots with missing
chunks as incomplete/corrupt and helps improve cache hits when introducing
LRU caching to avoid multiple atime updates in phase 1 of garbage collection.

This now iterates twice over the index files, as indices in unexpected
locations are still considered: the list of all index files found in the
datastore is generated, regular index files are removed from that list, and
the unexpected ones are left behind.

Further, align terminology by renaming the `list_images` method to a more
fitting `list_index_files`, and the variable names accordingly. This reduces
possible confusion, since throughout the codebase and in the documentation
the files referencing the data chunks are referred to as index files. The
term image, on the other hand, is associated with virtual machine images and
other large binary data stored as fixed-size chunks.

Basic benchmarking:

Total GC runtime shows no significant change (average of 3 runs):
  unpatched: 155.4 ± 2.6 s
  patched:   155.4 ± 3.5 s

VmPeak measured via /proc/self/status before and after `mark_used_chunks`
(proxmox-backup-proxy was restarted in between for normalization, no changes
for all 3 runs):
  unpatched before: 1196032 kB
  unpatched after:  1196032 kB
  patched before:   1196028 kB
  patched after:    1196028 kB

Listing of index files shows a slight increase due to the switch to a HashSet
(average of 3 runs):
  unpatched: 64.2 ± 8.4 ms
  patched:   72.8 ± 3.7 ms

Description of the PBS host and datastore:

  CPU: Intel Xeon E5-2620
  Datastore backing storage: ZFS RAID 10 with 3 mirrors of 2x ST16000NM001G,
    mirror of 2x SAMSUNG_MZ1LB1T9HALS as special
  Namespaces: 45
  Groups: 182
  Snapshots: 3184
  Index files: 6875
  Deduplication factor: 44.54
  Original data usage: 120.742 TiB
  On-disk usage: 2.711 TiB (2.25%)
  On-disk chunks: 1494727
  Average chunk size: 1.902 MiB

Distribution of snapshots (binned by month):
  2023-11    11
  2023-12    16
  2024-01    30
  2024-02    38
  2024-03    17
  2024-04    37
  2024-05    17
  2024-06    59
  2024-07    99
  2024-08    96
  2024-09   115
  2024-10    35
  2024-11    42
  2024-12    37
  2025-01   162
  2025-02   489
  2025-03  1884

Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
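The two-pass approach described in the commit message could look roughly like
the following. This is a minimal, self-contained sketch, not the actual
pbs-datastore code: the ``Datastore`` and ``Snapshot`` types and the method
signatures here are simplified stand-ins, and the real implementation walks
namespaces, groups and snapshots via dedicated iterators::

  use std::collections::HashSet;
  use std::path::PathBuf;

  // Hypothetical stand-ins for the real pbs-datastore types and iterators;
  // this only illustrates the two-pass structure, not the actual patch.
  struct Snapshot {
      index_files: Vec<PathBuf>,
  }

  struct Datastore {
      snapshots: Vec<Snapshot>,
      // Result of scanning the whole datastore directory tree for index files.
      all_index_files: Vec<PathBuf>,
  }

  impl Datastore {
      /// First pass: collect every index file found anywhere in the datastore.
      fn list_index_files(&self) -> HashSet<PathBuf> {
          self.all_index_files.iter().cloned().collect()
      }

      /// Second pass: walk the snapshots in their logical order and remove
      /// the index files reached that way from the full listing; whatever
      /// remains afterwards sits in an unexpected location.
      fn find_unexpected_index_files(&self) -> HashSet<PathBuf> {
          let mut unexpected = self.list_index_files();
          for snapshot in &self.snapshots {
              for index in &snapshot.index_files {
                  unexpected.remove(index);
              }
          }
          unexpected
      }
  }

  fn main() {
      let store = Datastore {
          snapshots: vec![Snapshot {
              index_files: vec![PathBuf::from("vm/100/2025-03-01/drive.fidx")],
          }],
          all_index_files: vec![
              PathBuf::from("vm/100/2025-03-01/drive.fidx"),
              PathBuf::from("vm/100/stale.fidx"),
          ],
      };
      for path in store.find_unexpected_index_files() {
          eprintln!("warning: index file in unexpected location: {}", path.display());
      }
  }

The ``HashSet`` makes removing the expected entries cheap, which is the
switch the benchmark above attributes the slightly slower listing to.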
Top-level files and directories:

  .cargo
  debian
  docs
  etc
  examples
  pbs-buildcfg
  pbs-client
  pbs-config
  pbs-datastore
  pbs-fuse-loop
  pbs-key-config
  pbs-pxar-fuse
  pbs-tape
  pbs-tools
  proxmox-backup-banner
  proxmox-backup-client
  proxmox-file-restore
  proxmox-restore-daemon
  pxar-bin
  src
  templates
  tests
  www
  zsh-completions
  .gitignore
  Cargo.toml
  defines.mk
  Makefile
  README.rst
  rustfmt.toml
  TODO.rst
Build & Release Notes ********************* ``rustup`` Toolchain ==================== We normally want to build with the ``rustc`` Debian package (see below). If you still want to use ``rustup`` for other reasons (e.g. to easily switch between the official stable, beta, and nightly compilers), you should set the following ``rustup`` configuration to use the Debian-provided ``rustc`` compiler by default: # rustup toolchain link system /usr # rustup default system Versioning of proxmox helper crates =================================== To use current git master code of the proxmox* helper crates, add:: git = "git://git.proxmox.com/git/proxmox" or:: path = "../proxmox/proxmox" to the proxmox dependency, and update the version to reflect the current, pre-release version number (e.g., "0.1.1-dev.1" instead of "0.1.0"). Local cargo config ================== This repository ships with a ``.cargo/config.toml`` that replaces the crates.io registry with packaged crates located in ``/usr/share/cargo/registry``. A similar config is also applied building with dh_cargo. Cargo.lock needs to be deleted when switching between packaged crates and crates.io, since the checksums are not compatible. To reference new dependencies (or updated versions) that are not yet packaged, the dependency needs to point directly to a path or git source (e.g., see example for proxmox crate above). Build ===== on Debian 12 Bookworm Setup: 1. # echo 'deb http://download.proxmox.com/debian/devel/ bookworm main' | sudo tee /etc/apt/sources.list.d/proxmox-devel.list 2. # sudo wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg 3. # sudo apt update 4. # sudo apt install devscripts debcargo clang 5. # git clone git://git.proxmox.com/git/proxmox-backup.git 6. # cd proxmox-backup; sudo mk-build-deps -ir Note: 2. may be skipped if you already added the PVE or PBS package repository You are now able to build using the Makefile or cargo itself, e.g.:: # make deb # # or for a non-package build # cargo build --all --release Design Notes ************ Here are some random thought about the software design (unless I find a better place). Large chunk sizes ================= It is important to notice that large chunk sizes are crucial for performance. We have a multi-user system, where different people can do different operations on a datastore at the same time, and most operation involves reading a series of chunks. So what is the maximal theoretical speed we can get when reading a series of chunks? Reading a chunk sequence need the following steps: - seek to the first chunk's start location - read the chunk data - seek to the next chunk's start location - read the chunk data - ... Lets use the following disk performance metrics: :AST: Average Seek Time (second) :MRS: Maximum sequential Read Speed (bytes/second) :ACS: Average Chunk Size (bytes) The maximum performance you can get is:: MAX(ACS) = ACS /(AST + ACS/MRS) Please note that chunk data is likely to be sequential arranged on disk, but this it is sort of a best case assumption. 
For a typical rotational disk, we assume the following values::

  AST: 10ms
  MRS: 170MB/s

  MAX(4MB)  = 115.37 MB/s
  MAX(1MB)  =  61.85 MB/s
  MAX(64KB) =   6.02 MB/s
  MAX(4KB)  =   0.39 MB/s
  MAX(1KB)  =   0.10 MB/s

Modern SSDs are much faster; let's assume the following::

  max IOPS: 20000 => AST = 0.00005
  MRS: 500MB/s

  MAX(4MB)  = 474 MB/s
  MAX(1MB)  = 465 MB/s
  MAX(64KB) = 354 MB/s
  MAX(4KB)  =  67 MB/s
  MAX(1KB)  =  18 MB/s

Also, the average chunk size directly relates to the number of chunks
produced by a backup::

  CHUNK_COUNT = BACKUP_SIZE / ACS

Here are some statistics from my developer workstation::

  Disk Usage:   65 GB
  Directories:  58971
  Files:        726314
  Files < 64KB: 617541

As you can see, there are really many small files. If we did file-level
deduplication, i.e. generated one chunk per file, we would end up with more
than 700000 chunks.

Instead, our current algorithm only produces large chunks with an average
chunk size of 4MB. With the above data, this produces about 15000 chunks
(a factor of 50 fewer chunks).
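The chunk-count arithmetic can be checked against the workstation numbers
above. A minimal sketch, assuming the 65 GB figure means 65 × 10^9 bytes and
a 4 MiB average chunk size (both are assumptions made for illustration, not
measurements from this repository)::

  fn main() {
      // Developer workstation example from above.
      let backup_size: f64 = 65e9; // assume "65 GB" = 65 * 10^9 bytes
      let file_count: u64 = 726_314;

      // File-level deduplication: roughly one chunk per file.
      println!("file-level chunks: ~{file_count}");

      // CHUNK_COUNT = BACKUP_SIZE / ACS, with a 4 MiB average chunk size.
      let acs = 4.0 * 1024.0 * 1024.0;
      let chunk_count = (backup_size / acs).round() as u64;
      println!("4 MiB chunks:      ~{chunk_count}"); // about 15000, ~50x fewer
  }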