From 24406ebc0cf20eb89620809c431599c86ccacfa6 Mon Sep 17 00:00:00 2001
From: Thomas Lamprecht
Date: Wed, 8 Jul 2020 18:15:33 +0200
Subject: [PATCH] docs: move host sysadmin out to own chapter, fix ZFS one

Signed-off-by: Thomas Lamprecht
---
 docs/administration-guide.rst | 15 +++----
 docs/conf.py                  |  2 +-
 docs/index.rst                |  1 +
 docs/local-zfs.rst            | 81 +++++++++++++++++++++++------------
 docs/sysadmin.rst             |  8 +---
 5 files changed, 64 insertions(+), 43 deletions(-)

diff --git a/docs/administration-guide.rst b/docs/administration-guide.rst
index 51ec4d14..ae14ac4b 100644
--- a/docs/administration-guide.rst
+++ b/docs/administration-guide.rst
@@ -1,9 +1,8 @@
-Administration Guide
-====================
+Backup Management
+=================
 
-The administration guide.
-
-.. todo:: either add a bit more explanation or remove the previous sentence
+.. The administration guide.
+   .. todo:: either add a bit more explanation or remove the previous sentence
 
 Terminology
 -----------
@@ -182,6 +181,7 @@ File Layout
 After creating a datastore, the following default layout will appear:
 
 .. code-block:: console
+
   # ls -arilh /backup/disk1/store1
   276493 -rw-r--r-- 1 backup backup       0 Jul  8 12:35 .lock
   276490 drwxr-x--- 1 backup backup 1064960 Jul  8 12:35 .chunks
@@ -192,6 +192,7 @@ The `.chunks` directory contains folders, starting from `0000` and taking hexadecimal values until `ffff`. These
 directories will store the chunked data after a backup operation has been executed.
 
 .. code-block:: console
+
   # ls -arilh /backup/disk1/store1/.chunks
   545824 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 ffff
   545823 drwxr-x--- 2 backup backup 4.0K Jul  8 12:35 fffe
@@ -933,7 +934,3 @@ After that you should be able to see storage status with:
 .. include:: command-line-tools.rst
 
 .. include:: services.rst
-
-.. include host system admin at the end
-
-.. include:: sysadmin.rst
diff --git a/docs/conf.py b/docs/conf.py
index 7d18003e..0c40ffb5 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -112,7 +112,7 @@ exclude_patterns = [
     'pxar/man1.rst',
     'epilog.rst',
     'pbs-copyright.rst',
-    'sysadmin.rst',
+    'local-zfs.rst',
     'package-repositories.rst',
 ]
diff --git a/docs/index.rst b/docs/index.rst
index 61c44a98..55459b4f 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -19,6 +19,7 @@ in the section entitled "GNU Free Documentation License".
 
    introduction.rst
    installation.rst
    administration-guide.rst
+   sysadmin.rst
 
 .. raw:: latex
diff --git a/docs/local-zfs.rst b/docs/local-zfs.rst
index fd56474a..74d103bf 100644
--- a/docs/local-zfs.rst
+++ b/docs/local-zfs.rst
@@ -1,6 +1,5 @@
 ZFS on Linux
-=============
-.. code-block:: console.. code-block:: console.. code-block:: console
+------------
 
 ZFS is a combined file system and logical volume manager designed by
 Sun Microsystems. There is no need to manually compile ZFS modules - all
@@ -31,7 +30,7 @@ General ZFS advantages
 * Encryption
 
 Hardware
----------
+~~~~~~~~~
 
 ZFS depends heavily on memory, so you need at least 8GB to start. In
 practice, use as much as you can get for your hardware/budget. To prevent
@@ -47,10 +46,8 @@
 HBA adapter is the way to go, or something like LSI controller flashed
 in ``IT`` mode.
-
-
 
 ZFS Administration
-------------------
+~~~~~~~~~~~~~~~~~~
 
 This section gives you some usage examples for common tasks. ZFS
 itself is really powerful and provides many options. The main commands
@@ -58,61 +55,68 @@ to manage ZFS are `zfs` and `zpool`. Both commands come with great
 manual pages, which can be read with:
 
 .. code-block:: console
+
   # man zpool
   # man zfs
 
 Create a new zpool
-~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^
 
 To create a new pool, at least one disk is needed. The `ashift` should
 have the same sector-size (2 power of `ashift`) or larger as the
 underlying disk.
 
 .. code-block:: console
+
   # zpool create -f -o ashift=12 <pool> <device>
 
 Create a new pool with RAID-0
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Minimum 1 disk
 
 .. code-block:: console
+
   # zpool create -f -o ashift=12 <pool> <device1> <device2>
 
 Create a new pool with RAID-1
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Minimum 2 disks
 
 .. code-block:: console
+
   # zpool create -f -o ashift=12 <pool> mirror <device1> <device2>
 
 Create a new pool with RAID-10
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Minimum 4 disks
 
 .. code-block:: console
+
   # zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>
 
 Create a new pool with RAIDZ-1
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Minimum 3 disks
 
 .. code-block:: console
+
   # zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>
 
 Create a new pool with RAIDZ-2
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Minimum 4 disks
 
 .. code-block:: console
+
   # zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
 
 Create a new pool with cache (L2ARC)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 It is possible to use a dedicated cache drive partition to increase
 the performance (use SSD).
@@ -121,10 +125,11 @@
 As `<device>` it is possible to use more devices, as shown in
 "Create a new pool with RAID*".
 
 .. code-block:: console
+
   # zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
 
 Create a new pool with log (ZIL)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 It is possible to use a dedicated log drive partition to increase
 the performance (use SSD).
@@ -133,10 +138,11 @@
 As `<device>` it is possible to use more devices, as shown in
 "Create a new pool with RAID*".
 
 .. code-block:: console
+
   # zpool create -f -o ashift=12 <pool> <device> log <log_device>
 
 Add cache and log to an existing pool
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 If you have a pool without cache and log, first partition the SSD into
 two partitions with `parted` or `gdisk`
@@ -148,18 +154,20 @@
 physical memory, so this is usually quite small.
 The rest of the SSD can be used as cache.
 
 .. code-block:: console
+
   # zpool add -f <pool> log <device-part1> cache <device-part2>
 
 
 Changing a failed device
-~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^
 
 .. code-block:: console
+
   # zpool replace -f <pool> <old device> <new device>
 
 
 Changing a failed bootable device
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Depending on how Proxmox Backup was installed, it is either using `grub` or
 `systemd-boot` as bootloader.
 
 The first steps of copying the partition table, reissuing GUIDs and replacing
@@ -169,6 +177,7 @@ the ZFS partition are the same. To make the system bootable from the new disk,
 different steps are needed which depend on the bootloader in use.
 
 .. code-block:: console
+
   # sgdisk <healthy bootable device> -R <new device>
   # sgdisk -G <new device>
   # zpool replace -f <pool> <old zfs partition> <new zfs partition>
@@ -178,6 +187,7 @@
 
 With `systemd-boot`:
 
 .. code-block:: console
+
   # pve-efiboot-tool format <new disk's ESP>
   # pve-efiboot-tool init <new disk's ESP>
 
@@ -190,12 +200,13 @@ With `grub`:
 
 Usually `grub.cfg` is located in `/boot/grub/grub.cfg`
 
 .. code-block:: console
+
   # grub-install <new disk>
   # grub-mkconfig -o /path/to/grub.cfg
 
 
 Activate E-Mail Notification
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 ZFS comes with an event daemon, which monitors events generated by the
 ZFS kernel module. The daemon can also send emails on ZFS events like
@@ -203,12 +214,14 @@ pool errors. Newer ZFS packages ship the daemon in a separate package,
 and you can install it using `apt-get`:
 
 .. code-block:: console
+
   # apt-get install zfs-zed
 
 To activate the daemon it is necessary to edit `/etc/zfs/zed.d/zed.rc` with
 your favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:
 
 .. code-block:: console
+
   ZED_EMAIL_ADDR="root"
 
 Please note Proxmox Backup forwards mails to `root` to the email address
@@ -218,7 +231,7 @@
 IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
 other settings are optional.
 
 Limit ZFS Memory Usage
-~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^
 
 It is good to use at most 50 percent (which is the default) of the
 system memory for ZFS ARC to prevent performance shortage of the
@@ -226,6 +239,7 @@ host. Use your preferred editor to change the configuration in
 `/etc/modprobe.d/zfs.conf` and insert:
 
 .. code-block:: console
+
   options zfs zfs_arc_max=8589934592
 
 This example setting limits the usage to 8GB.
@@ -233,11 +247,12 @@ This example setting limits the usage to 8GB.
 .. IMPORTANT:: If your root file system is ZFS you must update your
    initramfs every time this value changes:
 
 .. code-block:: console
+
   # update-initramfs -u
 
 
 SWAP on ZFS
-~~~~~~~~~~~
+^^^^^^^^^^^
 
 Swap-space created on a zvol may generate some troubles, like blocking the
 server or generating a high IO load, often seen when starting a Backup
@@ -251,31 +266,35 @@ installer.
 
 Additionally, you can lower the `swappiness` value. A good value for
 servers is 10:
 
 .. code-block:: console
+
   # sysctl -w vm.swappiness=10
 
 To make the swappiness persistent, open `/etc/sysctl.conf` with
 an editor of your choice and add the following line:
 
 .. code-block:: console
+
   vm.swappiness = 10
 
 .. table:: Linux kernel `swappiness` parameter values
    :widths:auto
 
-   ========= ============
+
+   ==================== ===============================================================
    Value                Strategy
-   ========= ============
+   ==================== ===============================================================
    vm.swappiness = 0    The kernel will swap only to avoid an 'out of memory' condition
    vm.swappiness = 1    Minimum amount of swapping without disabling it entirely.
-   vm.swappiness = 10   This value is sometimes recommended to improve performance when sufficient memory exists in a system.
+   vm.swappiness = 10   Sometimes recommended to improve performance when sufficient memory exists in a system.
    vm.swappiness = 60   The default value.
    vm.swappiness = 100  The kernel will swap aggressively.
-   ========= ============
+   ==================== ===============================================================
 
 
 ZFS Compression
-~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^
 
 To activate compression:
 
 .. code-block:: console
+
   # zpool set compression=lz4 <pool>
 
 We recommend using the `lz4` algorithm, since it adds very little CPU overhead.
@@ -286,12 +305,13 @@ I/O performance.
 
 You can disable compression at any time with:
 
 .. code-block:: console
+
   # zfs set compression=off <dataset>
 
 Only new blocks will be affected by this change.
 
 
 ZFS Special Device
-~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^
 
 Since version 0.8.0 ZFS supports `special` devices. A `special` device in a
 pool is used to store metadata, deduplication tables, and optionally small
@@ -312,11 +332,13 @@ performance. Use fast SSDs for the `special` device.
 
 Create a pool with `special` device and RAID-1:
 
 .. code-block:: console
+
   # zpool create -f -o ashift=12 <pool> mirror <device1> <device2> special mirror <device3> <device4>
 
 Adding a `special` device to an existing pool with RAID-1:
 
 .. code-block:: console
+
   # zpool add <pool> special mirror <device1> <device2>
 
 ZFS datasets expose the `special_small_blocks=<size>` property. `size` can be
@@ -335,20 +357,23 @@ in the pool will opt in for small file blocks).
 
 Opt in for all files smaller than 4K-blocks pool-wide:
 
 .. code-block:: console
+
   # zfs set special_small_blocks=4K <pool>
 
 Opt in for small file blocks for a single dataset:
 
 .. code-block:: console
+
   # zfs set special_small_blocks=4K <pool>/<filesystem>
 
 Opt out from small file blocks for a single dataset:
 
 .. code-block:: console
+
   # zfs set special_small_blocks=0 <pool>/<filesystem>
 
 
 Troubleshooting
-~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^
 
 Corrupted cachefile
 
@@ -358,11 +383,13 @@ boot until mounted manually later.
 For each pool, run:
 
 .. code-block:: console
+
   # zpool set cachefile=/etc/zfs/zpool.cache POOLNAME
 
 and afterwards update the `initramfs` by running:
 
 .. code-block:: console
+
   # update-initramfs -u -k all
 
 and finally reboot your node.
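The local-zfs.rst hunks above limit the ARC with `options zfs zfs_arc_max=8589934592`, described as an 8GB limit. As a quick sanity check on that byte value, here is a small Python sketch; the helper name `arc_max_bytes` is illustrative and not part of the docs or of ZFS:

```python
def arc_max_bytes(gib: int) -> int:
    """Convert a desired ARC limit in GiB to the byte value zfs_arc_max expects."""
    return gib * 1024**3

# The chapter's example limits the ARC to 8 GiB.
limit = arc_max_bytes(8)
print(limit)                               # 8589934592
# A matching modprobe.d line, as shown in the docs:
print(f"options zfs zfs_arc_max={limit}")
```

This confirms the documented value is exactly 8 GiB expressed in bytes.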
diff --git a/docs/sysadmin.rst b/docs/sysadmin.rst
index 3c573998..9c564f03 100644
--- a/docs/sysadmin.rst
+++ b/docs/sysadmin.rst
@@ -1,5 +1,5 @@
 Host System Administration
---------------------------
+==========================
 
 `Proxmox Backup`_ is based on the famous Debian_ Linux
 distribution. That means that you have access to the whole world of
@@ -23,8 +23,4 @@ either explain things which are different on `Proxmox Backup`_, or
 tasks which are commonly used on `Proxmox Backup`_. For other topics,
 please refer to the standard Debian documentation.
 
-ZFS
-~~~
-
-.. todo:: Add local ZFS admin guide (local.zfs.adoc)
-
+.. include:: local-zfs.rst
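A side note on the `conf.py` hunk in this patch: entries in Sphinx's `exclude_patterns` list need their trailing commas. If the comma after `'local-zfs.rst'` is dropped, Python's implicit string-literal concatenation silently merges it with the next entry instead of raising an error, so neither file is actually excluded. A minimal illustration (list names are made up for the example):

```python
# With trailing commas, two separate patterns are excluded.
with_comma = [
    'local-zfs.rst',
    'package-repositories.rst',
]

# Dropping the comma after the first entry triggers implicit string
# concatenation: the two literals silently become one bogus pattern.
without_comma = [
    'local-zfs.rst'
    'package-repositories.rst',
]

print(len(with_comma))    # 2
print(without_comma)      # ['local-zfs.rstpackage-repositories.rst']
```

This kind of bug produces no error at build time, only unexpected Sphinx output, which is why it is worth double-checking in review.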