remove ceph-server.adoc, merged into pveceph.adoc

Dietmar Maurer 2017-06-21 09:09:57 +02:00
parent b9ff41f19f
commit 21394e7091
4 changed files with 170 additions and 151 deletions

ceph-server.adoc

@@ -1,140 +0,0 @@
Ceph Server on Proxmox VE Cluster
---------------------------------
It is possible to install the Ceph storage server directly on the
Proxmox VE cluster nodes. The VMs and Containers can access that
storage using the
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)] storage
driver.
Precondition
~~~~~~~~~~~~
There should be at least three (preferably identical) servers which together form a Proxmox VE cluster.
A 10Gb network, exclusively used for Ceph, is recommended. If there are no 10Gb switches available, a meshed network is
also an option, see {webwiki-url}Full_Mesh_Network_for_Ceph_Server[wiki].
Also check the recommendations from http://docs.ceph.com/docs/jewel/start/hardware-recommendations/[Ceph's Website].
Installation of Ceph Packages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
On each node run the installation script as follows:
[source,bash]
----
pveceph install -version jewel
----
This sets up an 'apt' package repository in /etc/apt/sources.list.d/ceph.list and installs the required software.
Creating initial Ceph configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
After installation of packages, you need to create an initial Ceph configuration on just one node, based on your network (10.10.10.0/24 in the following example) dedicated for Ceph:
[source,bash]
----
pveceph init --network 10.10.10.0/24
----
This creates an initial config at /etc/pve/ceph.conf. That file is automatically distributed to all Proxmox VE nodes by using xref:chapter_pmxcfs[pmxcfs]. The command also creates a symbolic link from /etc/ceph/ceph.conf pointing to that file. So you can simply run Ceph commands without the need to specify a configuration file.
Creating Ceph Monitors
~~~~~~~~~~~~~~~~~~~~~~
On each node where a monitor is requested (at least 3 are recommended), create it by using the "Ceph" item in the GUI or run:
[source,bash]
----
pveceph createmon
----
Creating Ceph OSDs
~~~~~~~~~~~~~~~~~~
Create OSDs via the GUI, or via the CLI as follows:
[source,bash]
----
pveceph createosd /dev/sd[X]
----
If you want to use a dedicated SSD journal disk:
NOTE: In order to use a dedicated journal disk (SSD), the disk needs to have a GPT partition table. You can create this with 'gdisk /dev/sd(x)'. If there is no GPT, you cannot select the disk as journal. Currently the journal size is fixed to 5 GB.
[source,bash]
----
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[X]
----
Example: use /dev/sdf as the data disk (4TB) and /dev/sdb as the dedicated SSD journal disk
[source,bash]
----
pveceph createosd /dev/sdf -journal_dev /dev/sdb
----
This partitions the disk (data and journal partition), creates filesystems and starts the OSD; afterwards it is running and fully functional. Please create at least 12 OSDs, distributed among your nodes (4 on each node).
It should be noted that this command refuses to initialize a disk when it detects existing data. So if you want to overwrite a disk, you should remove the existing data first. You can do that using:
[source,bash]
----
ceph-disk zap /dev/sd[X]
----
You can create OSDs containing both journal and data partitions, or you can place the journal on a dedicated SSD. Using an SSD journal disk is highly recommended if you expect good performance.
Ceph Pools
~~~~~~~~~~
The standard installation creates the pool 'rbd' by default; additional pools can be created via the GUI.
Ceph Client
~~~~~~~~~~~
You can then configure Proxmox VE to use such pools to store VM images; just use the GUI ("Add Storage": RBD, see section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).
You also need to copy the keyring to a predefined location.
NOTE: The file name needs to be storage id + .keyring - the storage id is the expression after 'rbd:' in /etc/pve/storage.cfg, which is 'my-ceph-storage' in the following example:
[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----

hyper-converged-infrastructure.adoc

@@ -1,15 +1,27 @@
 [[chapter_hyper_converged_infrastructure]]
 Hyper-converged Infrastructure
 ==============================
-ifndef::manvolnum[]
+ifdef::wiki[]
 :pve-toplevel:
-endif::manvolnum[]
+endif::wiki[]
-Proxmox VE has all the https://en.wikipedia.org/wiki/Hyper-converged_infrastructure[Hyper-converged Infrastructure] capabilities needed to deploy and manage a complete open source hyper-converged
-infrastructure.
-It integrates tightly compute, networking, and storage resources into a single deployment unit and you can manage everything with the centralized web management interface.
-Proxmox VE unifies your compute and storage systems, i.e. you can use the same physical nodes within a cluster for both computing (processing VMs and Containers) as well as for replicated storage.
-include::ceph-server.adoc[]
+{pve} has all the
+https://en.wikipedia.org/wiki/Hyper-converged_infrastructure[Hyper-converged Infrastructure]
+capabilities needed to deploy and manage a complete open source hyper-converged
+infrastructure.
+It integrates tightly compute, networking, and storage resources into
+a single deployment unit and you can manage everything with the
+centralized web management interface. {pve} unifies your compute
+and storage systems, i.e. you can use the same physical nodes within a
+cluster for both computing (processing VMs and Containers) as well as
+for replicated storage.
+ifdef::wiki[]
+See Also
+--------
+* xref:chapter_pveceph[pveceph - Manage Ceph Services on Proxmox VE Nodes]
+endif::wiki[]

pve-admin-guide.adoc

@@ -27,6 +27,10 @@ include::sysadmin.adoc[]
 include::hyper-converged-infrastructure.adoc[]
+:leveloffset: 2
+include::pveceph.adoc[]
+:leveloffset: 1
 include::pve-gui.adoc[]
 include::pvecm.adoc[]

@@ -69,7 +73,6 @@ Useful Command Line Tools
 -------------------------
 :leveloffset: 2
-include::pveceph.adoc[]
 include::pvesubscription.adoc[]

pveceph.adoc

@@ -7,7 +7,7 @@ pveceph(1)
 NAME
 ----
-pveceph - Manage CEPH Services on Proxmox VE Nodes
+pveceph - Manage Ceph Services on Proxmox VE Nodes
 SYNOPSIS
 --------

@@ -18,11 +18,155 @@ DESCRIPTION
 -----------
 endif::manvolnum[]
 ifndef::manvolnum[]
-pveceph - Manage CEPH Services on Proxmox VE Nodes
+pveceph - Manage Ceph Services on Proxmox VE Nodes
 ==================================================
 endif::manvolnum[]
-Tool to manage http://ceph.com[CEPH] services on {pve} nodes.
+It is possible to install the {ceph} storage server directly on the
Proxmox VE cluster nodes. The VMs and Containers can access that
storage using the xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]
storage driver.
To simplify management, we provide 'pveceph' - a tool to install and
manage {ceph} services on {pve} nodes.
Precondition
------------
There should be at least three (preferably identical) servers
which together form a Proxmox VE cluster.
A 10Gb network, exclusively used for Ceph, is recommended. If there
are no 10Gb switches available, a meshed network is also an option, see
{webwiki-url}Full_Mesh_Network_for_Ceph_Server[wiki].
Also check the recommendations from
http://docs.ceph.com/docs/jewel/start/hardware-recommendations/[Ceph's website].
Installation of Ceph Packages
-----------------------------
On each node run the installation script as follows:
[source,bash]
----
pveceph install -version jewel
----
This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.
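If you want to verify that the expected release was pulled in, the installed Ceph version can be queried on each node afterwards (a simple sanity check; the exact version string depends on the packages installed):
[source,bash]
----
# should report a Jewel (10.2.x) release on every node
ceph --version
----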
Creating initial Ceph configuration
-----------------------------------
After installation of the packages, you need to create an initial Ceph
configuration on just one node, based on the network (`10.10.10.0/24`
in the following example) dedicated to Ceph:
[source,bash]
----
pveceph init --network 10.10.10.0/24
----
This creates an initial config at `/etc/pve/ceph.conf`. That file is
automatically distributed to all Proxmox VE nodes by using
xref:chapter_pmxcfs[pmxcfs]. The command also creates a symbolic link
from `/etc/ceph/ceph.conf` pointing to that file. So you can simply run
Ceph commands without the need to specify a configuration file.
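For orientation, `/etc/pve/ceph.conf` is a plain Ceph configuration file. With the example network above, its `[global]` section contains entries roughly like the following excerpt; the exact defaults and the fsid are generated per cluster:
----
[global]
     # illustrative excerpt only - the generated file contains
     # additional defaults and a cluster-specific fsid
     cluster network = 10.10.10.0/24
     public network = 10.10.10.0/24
     fsid = <generated-uuid>
----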
Creating Ceph Monitors
----------------------
On each node where a monitor is requested (three monitors are recommended),
create it by using the "Ceph" item in the GUI or run:
[source,bash]
----
pveceph createmon
----
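Once the monitors are created, the standard Ceph tooling can be used to confirm that they have formed a quorum, for example:
[source,bash]
----
# list the monitors and show which of them are in quorum
ceph mon stat
----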
Creating Ceph OSDs
------------------
Create OSDs via the GUI, or via the CLI as follows:
[source,bash]
----
pveceph createosd /dev/sd[X]
----
If you want to use a dedicated SSD journal disk:
NOTE: In order to use a dedicated journal disk (SSD), the disk needs
to have a https://en.wikipedia.org/wiki/GUID_Partition_Table[GPT]
partition table. You can create this with `gdisk /dev/sd[X]`. If there
is no GPT, you cannot select the disk as journal. Currently the
journal size is fixed to 5 GB.
[source,bash]
----
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[X]
----
Example: Use /dev/sdf as the data disk (4TB) and /dev/sdb as the dedicated
SSD journal disk.
[source,bash]
----
pveceph createosd /dev/sdf -journal_dev /dev/sdb
----
This partitions the disk (data and journal partition), creates
filesystems, and starts the OSD; afterwards it is running and fully
functional. Please create at least 12 OSDs, distributed among your
nodes (4 OSDs on each node).
It should be noted that this command refuses to initialize a disk when
it detects existing data. So if you want to overwrite a disk, you
should remove the existing data first. You can do that using:
[source,bash]
----
ceph-disk zap /dev/sd[X]
----
You can create OSDs containing both journal and data partitions, or you
can place the journal on a dedicated SSD. Using an SSD journal disk is
highly recommended if you expect good performance.
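After creating OSDs on all nodes, the usual Ceph status commands give a quick overview of whether they are up and how they are distributed across hosts:
[source,bash]
----
# overall cluster health, including the number of OSDs that are up and in
ceph -s
# OSDs grouped by host, as seen by the CRUSH map
ceph osd tree
----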
Ceph Pools
----------
The standard installation creates the pool 'rbd' by default;
additional pools can be created via the GUI.
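Pools can also be created with the standard Ceph command line tools if you prefer; the pool name, placement group count, and replica count below are only example values:
[source,bash]
----
# create a pool with 64 placement groups and keep 3 replicas of each object
ceph osd pool create mypool 64
ceph osd pool set mypool size 3
----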
Ceph Client
-----------
You can then configure {pve} to use such pools to store VM or
Container images. Simply use the GUI to add a new `RBD` storage (see
section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).
You also need to copy the keyring to a predefined location.
NOTE: The file name needs to be `<storage_id>` + `.keyring` - `<storage_id>` is
the expression after 'rbd:' in `/etc/pve/storage.cfg`, which is
`my-ceph-storage` in the following example:
[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----
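For reference, the corresponding entry in `/etc/pve/storage.cfg` could look roughly like the following; the storage id `my-ceph-storage`, the pool name, and the monitor addresses are example values matching the setup assumed above:
----
rbd: my-ceph-storage
     monhost 10.10.10.1 10.10.10.2 10.10.10.3
     pool rbd
     content images
     username admin
----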
ifdef::manvolnum[]