ceph: nautilus followup

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht 2019-06-11 15:42:27 +02:00
parent 9bddef40d0
commit 352c803f9e
2 changed files with 13 additions and 14 deletions

@@ -86,6 +86,7 @@ recommended.
[width="100%",cols="5*d",options="header"]
|===========================================================
| {pve} Version | Debian Version | First Release | Debian EOL | Proxmox EOL
+| {pve} 6.x | Debian 10 (Buster)| tba | tba | tba
| {pve} 5.x | Debian 9 (Stretch)| 2017-07 | tba | tba
| {pve} 4.x | Debian 8 (Jessie) | 2015-10 | 2018-06 | 2018-06
| {pve} 3.x | Debian 7 (Wheezy) | 2013-05 | 2016-04 | 2017-02

@@ -115,10 +115,10 @@ failure event during recovery.
In general SSDs will provide more IOPs than spinning disks. This fact and the
higher cost may make a xref:pve_ceph_device_classes[class based] separation of
pools appealing. Another possibility to speedup OSDs is to use a faster disk
-as journal or DB/WAL device, see xref:pve_ceph_osds[creating Ceph OSDs]. If a
-faster disk is used for multiple OSDs, a proper balance between OSD and WAL /
-DB (or journal) disk must be selected, otherwise the faster disk becomes the
-bottleneck for all linked OSDs.
+as journal or DB/**W**rite-**A**head-**L**og device, see
+xref:pve_ceph_osds[creating Ceph OSDs]. If a faster disk is used for multiple
+OSDs, a proper balance between OSD and WAL / DB (or journal) disk must be
+selected, otherwise the faster disk becomes the bottleneck for all linked OSDs.
Aside from the disk type, Ceph performs best with an evenly sized and distributed
amount of disks per node. For example, 4 x 500 GB disks within each node is
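To make the shared DB/WAL layout described above more concrete, the following is a minimal, illustrative sketch, not part of this commit: the device paths are placeholders and the '-db_dev' option name is an assumption (only the '-db_size'/'-wal_size' parameters are named further down); check the pveceph man page for the exact syntax.

[source,bash]
----
# Illustrative sketch only: device paths are placeholders and the -db_dev
# option name is assumed, see 'man pveceph' for the exact parameters.
# Three spinning-disk OSDs put their DB/WAL on one shared, faster NVMe device.
pveceph osd create /dev/sda -db_dev /dev/nvme0n1
pveceph osd create /dev/sdb -db_dev /dev/nvme0n1
pveceph osd create /dev/sdc -db_dev /dev/nvme0n1
----

With three OSDs behind one DB/WAL device, that device must sustain roughly three times the per-OSD metadata load, which is exactly the balance the paragraph above warns about.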
@@ -334,10 +334,11 @@ You can directly choose the size for those with the '-db_size' and '-wal_size'
parameters respectively. If they are not given, the following values (in order)
will be used:
-* bluestore_block_{db,wal}_size in ceph config database section 'osd'
-* bluestore_block_{db,wal}_size in ceph config database section 'global'
-* bluestore_block_{db,wal}_size in ceph config section 'osd'
-* bluestore_block_{db,wal}_size in ceph config section 'global'
+* bluestore_block_{db,wal}_size from ceph configuration...
+** ... database, section 'osd'
+** ... database, section 'global'
+** ... file, section 'osd'
+** ... file, section 'global'
* 10% (DB)/1% (WAL) of OSD size
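As a hedged sketch of how these defaults can be set, assuming illustrative sizes and placeholder device names (the '-db_dev' option and the unit of '-db_size' are assumptions; only '-db_size'/'-wal_size' are named above):

[source,bash]
----
# Illustrative values only. In the Ceph configuration database the
# bluestore_block_{db,wal}_size options take byte values.
ceph config set osd bluestore_block_db_size 3221225472   # 3 GiB DB
ceph config set osd bluestore_block_wal_size 1073741824  # 1 GiB WAL

# Alternatively, choose the size for a single OSD at creation time; the device
# paths and the size unit used here are placeholders.
pveceph osd create /dev/sdX -db_dev /dev/nvme0n1 -db_size 3
----

If none of the listed sources provide a value, the 10%/1% fallback applies: a 4 TB OSD would then get roughly a 400 GB DB and a 40 GB WAL.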
NOTE: The DB stores BlueStore's internal metadata and the WAL is BlueStore's
@@ -348,13 +349,10 @@ NVRAM for better performance.
Ceph Filestore
~~~~~~~~~~~~~~
-Until Ceph Luminous, Filestore was used as storage type for Ceph OSDs. It can
-still be used and might give better performance in small setups, when backed by
-an NVMe SSD or similar.
+Before Ceph Luminous, Filestore was used as default storage type for Ceph OSDs.
Starting with Ceph Nautilus, {pve} does not support creating such OSDs with
-pveceph anymore. If you still want to create filestore OSDs, use 'ceph-volume'
-directly.
+'pveceph' anymore. If you still want to create filestore OSDs, use
+'ceph-volume' directly.
[source,bash]
----