update screenshots and add some more
all automatically generated by the selenium driver

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
16 existing screenshots under images/screenshot/ regenerated (binary changes, file sizes updated)
images/screenshot/gui-node-ceph-cephfs-panel.png          | new file (83 KiB)
images/screenshot/gui-node-ceph-install-wizard-step0.png  | new file (46 KiB)
@@ -318,8 +318,7 @@ This is the default when creating OSDs since Ceph Luminous.
 pveceph createosd /dev/sd[X]
 ----
 
-Block.db and block.wal
-^^^^^^^^^^^^^^^^^^^^^^
+.Block.db and block.wal
 
 If you want to use a separate DB/WAL device for your OSDs, you can specify it
 through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, if not
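As a reference for the options touched by this hunk, a minimal sketch of how a separate DB/WAL device could be passed to `pveceph createosd`; the device names are placeholders and only the option spelling from the documentation text above is assumed:

----
# DB on a separate device; the WAL is co-located with the DB if not given separately
pveceph createosd /dev/sdX -db_dev /dev/sdY

# DB and WAL each on their own device
pveceph createosd /dev/sdX -db_dev /dev/sdY -wal_dev /dev/sdZ
----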
@@ -515,6 +514,8 @@ cluster, this way even high load will not overload a single host, which can be
 an issue with traditional shared filesystem approaches, like `NFS`, for
 example.
 
+[thumbnail="screenshot/gui-node-ceph-cephfs-panel.png"]
+
 {pve} supports both, using an existing xref:storage_cephfs[CephFS as storage]
 to save backups, ISO files or container templates and creating a
 hyper-converged CephFS itself.
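For the storage use case mentioned in this hunk, a minimal sketch of what a CephFS entry in `/etc/pve/storage.cfg` could look like; the storage ID `cephfs-store`, the mount path and the chosen content types are illustrative assumptions, not part of this change:

----
cephfs: cephfs-store
        path /mnt/pve/cephfs-store
        content backup,iso,vztmpl
----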
@@ -548,8 +549,7 @@ will always poll the active one, so that it can take over faster as it is in a
 `warm` state. But naturally, the active polling will cause some additional
 performance impact on your system and active `MDS`.
 
-Multiple Active MDS
-^^^^^^^^^^^^^^^^^^^
+.Multiple Active MDS
 
 Since Luminous (12.2.x) you can also have multiple active metadata servers
 running, but this is normally only useful for a high count on parallel clients,
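For the multiple active MDS setup described in this hunk, a hedged sketch using the upstream Ceph tooling; the filesystem name `cephfs` and the count of two active daemons are assumptions:

----
# allow up to two active MDS daemons for the CephFS named 'cephfs'
ceph fs set cephfs max_mds 2
----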