mirror of https://git.proxmox.com/git/pve-docs
ceph: add anchors for use in troubleshooting section
Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>

[AL]: revert and fix subchapter anchor to autogenerated one to not break links

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
commit dc1e865824 (parent 4cd548b5cc)
@@ -1,3 +1,4 @@
+[[disk_health_monitoring]]
 Disk Health Monitoring
 ----------------------
 ifdef::wiki[]
@@ -82,6 +82,7 @@ and vocabulary
 footnote:[Ceph glossary {cephdocs-url}/glossary].
 
 
+[[_recommendations_for_a_healthy_ceph_cluster]]
 Recommendations for a Healthy Ceph Cluster
 ------------------------------------------
 
@@ -95,6 +96,7 @@ NOTE: The recommendations below should be seen as a rough guidance for choosing
 hardware. Therefore, it is still essential to adapt it to your specific needs.
 You should test your setup and monitor health and performance continuously.
 
+[[pve_ceph_recommendation_cpu]]
 .CPU
 Ceph services can be classified into two categories:
 
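As an aside to the "monitor health and performance continuously" advice in the hunk above, cluster health can be checked from any node with the standard Ceph CLI (not part of this commit):

    # overall cluster status, including MON/OSD/PG summary
    ceph -s
    # verbose explanation of any active warnings or errors
    ceph health detail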
@@ -122,6 +124,7 @@ IOPS load over 100'000 with sub millisecond latency, each OSD can use multiple
 CPU threads, e.g., four to six CPU threads utilized per NVMe backed OSD is
 likely for very high performance disks.
 
+[[pve_ceph_recommendation_memory]]
 .Memory
 Especially in a hyper-converged setup, the memory consumption needs to be
 carefully planned out and monitored. In addition to the predicted memory usage
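To put the four-to-six-threads figure above into numbers: a hypothetical node with four NVMe-backed OSDs at roughly five threads each should budget around 20 CPU threads for the OSD daemons alone, before accounting for MON/MGR services and guest workloads.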
@@ -137,6 +140,7 @@ normal operation, but rather leave some headroom to cope with outages.
 The OSD service itself will use additional memory. The Ceph BlueStore backend of
 the daemon requires by default **3-5 GiB of memory** (adjustable).
 
+[[pve_ceph_recommendation_network]]
 .Network
 We recommend a network bandwidth of at least 10 Gbps, or more, to be used
 exclusively for Ceph traffic. A meshed network setup
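The "3-5 GiB of memory (adjustable)" mentioned in the hunk above refers to the BlueStore memory target. A minimal sketch of how it could be tuned, assuming the standard osd_memory_target option (the value below is only an example, not part of this commit):

    # set the per-OSD memory target to 4 GiB (value in bytes)
    ceph config set osd osd_memory_target 4294967296
    # check the value effective for a single OSD, e.g. osd.0
    ceph config get osd.0 osd_memory_target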
@@ -172,6 +176,7 @@ high-performance setups:
 * one medium bandwidth (1 Gbps) exclusive for the latency sensitive corosync
 cluster communication.
 
+[[pve_ceph_recommendation_disk]]
 .Disks
 When planning the size of your Ceph cluster, it is important to take the
 recovery time into consideration. Especially with small clusters, recovery
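The network dedicated exclusively to Ceph traffic, recommended in the .Network subsection, is usually expressed through the public_network and cluster_network options; a sketch with placeholder subnets (not part of this commit):

    # /etc/pve/ceph.conf (excerpt)
    [global]
        public_network  = 10.10.10.0/24
        cluster_network = 10.10.20.0/24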
@@ -197,6 +202,7 @@ You also need to balance OSD count and single OSD capacity. More capacity
 allows you to increase storage density, but it also means that a single OSD
 failure forces Ceph to recover more data at once.
 
+[[pve_ceph_recommendation_raid]]
 .Avoid RAID
 As Ceph handles data object redundancy and multiple parallel writes to disks
 (OSDs) on its own, using a RAID controller normally doesn't improve
@@ -1018,6 +1024,7 @@ to act as standbys.
 Ceph maintenance
 ----------------
 
+[[pve_ceph_osd_replace]]
 Replace OSDs
 ~~~~~~~~~~~~
 
@@ -1131,6 +1138,7 @@ ceph osd unset noout
 You can now start up the guests. Highly available guests will change their state
 to 'started' when they power on.
 
+[[pve_ceph_mon_and_ts]]
 Ceph Monitoring and Troubleshooting
 -----------------------------------
 
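The 'ceph osd unset noout' shown in the hunk context above is the closing step of the usual node maintenance pattern; roughly:

    # before maintenance: keep stopped OSDs from being marked out and rebalanced
    ceph osd set noout
    # ... shut down guests, perform maintenance, reboot the node ...
    # after maintenance: re-enable automatic out-marking, then start the guests
    ceph osd unset noout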
@@ -506,6 +506,7 @@ if it loses quorum.
 NOTE: {pve} assigns a single vote to each node by default.
 
 
+[[pvecm_cluster_network]]
 Cluster Network
 ---------------
 
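For reference, the per-node vote assignment mentioned in the NOTE above can be inspected on any cluster node with the standard {pve} tooling (not part of this commit):

    # show quorum information, including the number of votes per node
    pvecm status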