mirror of https://git.proxmox.com/git/pve-docs

zfs: improve and expand on ZIL and cache sections

The ZIL section was copied over from the cache one, but not fully adapted, so
it wrongly talked about cache devices, as recently reported by a user. Improve
on that and expand those sections in general.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>

parent 065b214708
commit 5f440d2c67

@@ -387,53 +387,107 @@ Minimum 4 disks

Create a new pool with cache (L2ARC)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to use a dedicated device, or partition, as a second-level
cache to increase performance. Such a cache device will especially help with
random-read workloads of data that is mostly static. As it acts as an
additional caching layer between the actual storage and the in-memory ARC, it
can also help if the ARC must be reduced due to memory constraints.
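
For example, if the ARC must be capped because of memory constraints, you can
set the `zfs_arc_max` module parameter; a minimal sketch, with the 8 GiB value
chosen purely for illustration:

----
# cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=8589934592
----

If your root file system is ZFS, refresh the initramfs afterwards (for
example with `update-initramfs -u`) and reboot for the new limit to apply.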

.Create ZFS pool with an on-disk cache
----
# zpool create -f -o ashift=12 <pool> <device> cache <cache-device>
----

Here only a single `<device>` and a single `<cache-device>` were used, but it
is possible to use more devices, as shown in
xref:sysadmin_zfs_create_new_zpool_raid0[Create a new pool with RAID].

Note that for cache devices no mirror or RAID modes exist; they are all simply
accumulated.

If any cache device produces errors on read, ZFS will transparently divert
that request to the underlying storage layer.
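
To check whether the cache actually gets used, you can look at the per-device
I/O statistics; a quick sketch, assuming a hypothetical pool named `tank`:

----
# zpool iostat -v tank
----

The cache device shows up in its own section of the output, including its
current size and read/write activity.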

[[sysadmin_zfs_create_new_zpool_with_log]]
Create a new pool with log (ZIL)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to use a dedicated drive, or partition, for the ZFS Intent Log
(ZIL). It is mainly used to provide safe synchronous transactions, so it often
matters in performance-critical paths like databases, or other programs that
issue `fsync` operations frequently.
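
How synchronous requests are honored also depends on the dataset's `sync`
property; to inspect it on a hypothetical pool `tank`:

----
# zfs get sync tank
----

With the default value `standard`, only writes that the application explicitly
requests to be synchronous (for example via `fsync`) go through the ZIL.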

The pool is used as the default ZIL location; diverting the ZIL IO load to a
separate device can help reduce transaction latencies while relieving the main
pool at the same time, increasing overall performance.

For disks to be used as log devices, directly or through a partition, it's
recommended to:

- Use fast SSDs with power-loss protection, as those have much smaller commit
  latencies.

- Use at least a few GB for the partition (or whole device), but using more
  than half of your installed memory won't provide you with any real advantage
  (see the quick check below).
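
To put a number on "half of your installed memory", check the total installed
memory first, for example with:

----
# free -h
----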

.Create ZFS pool with separate log device
----
# zpool create -f -o ashift=12 <pool> <device> log <log-device>
----

In the above example a single `<device>` and a single `<log-device>` are used,
but you can also combine this with other RAID variants, as described in the
xref:sysadmin_zfs_create_new_zpool_raid0[Create a new pool with RAID] section.

You can also mirror the log device to multiple devices; this is mainly useful
to ensure that performance doesn't immediately degrade if a single log device
fails.
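
A sketch of such a mirrored log at pool creation time, with both log devices
being placeholders:

----
# zpool create -f -o ashift=12 <pool> <device> log mirror <log-device1> <log-device2>
----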

If all log devices fail, the ZFS main pool itself will be used again, until
the log device(s) get replaced.

[[sysadmin_zfs_add_cache_and_log_dev]]
Add cache and log to an existing pool
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you have a pool without cache and log, you can still add both, or just one
of them, at any time.

For example, let's assume you got a good enterprise SSD with power-loss
protection that you want to use for improving the overall performance of your
pool.

As the maximum size of a log device should be about half the size of the
installed physical memory, the ZIL will most likely only take up a relatively
small part of the SSD; the remaining space can be used as cache.

First you have to create two GPT partitions on the SSD with `parted` or
`gdisk`.

IMPORTANT: Always use GPT partition tables.
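
A sketch of such a partitioning with `parted`; the device path, the partition
names and the 16 GiB log size are placeholders you would adapt to your setup:

----
# parted /dev/<ssd> mklabel gpt
# parted /dev/<ssd> mkpart zil 1MiB 16GiB
# parted /dev/<ssd> mkpart l2arc 16GiB 100%
----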

Then you're ready to add them to a pool:

.Add both, a separate log device and a second-level cache, to an existing pool
----
# zpool add -f <pool> log <device-part1> cache <device-part2>
----

Just replace `<pool>`, `<device-part1>` and `<device-part2>` with the pool
name and the two `/dev/disk/by-id/` paths to the partitions.
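
Filled in with a hypothetical pool name and a made-up disk ID, the call could
look like:

----
# zpool add -f tank log /dev/disk/by-id/ata-EXAMPLE_SSD_1-part1 \
    cache /dev/disk/by-id/ata-EXAMPLE_SSD_1-part2
----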

You can also add ZIL and cache separately.

.Add a log device to an existing ZFS pool
----
# zpool add <pool> log <log-device>
----
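
The cache counterpart works the same way; a sketch:

.Add a cache device to an existing ZFS pool
----
# zpool add <pool> cache <cache-device>
----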

[[sysadmin_zfs_change_failed_dev]]
Changing a failed device
^^^^^^^^^^^^^^^^^^^^^^^^

----
# zpool replace -f <pool> <old-device> <new-device>
----
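
To find out which device failed in the first place, check the pool health;
assuming a hypothetical pool `tank`:

----
# zpool status -v tank
----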

.Changing a failed bootable device