pvecm.adoc: improve document structure

commit ceabe189d9 (parent 960f634409)
Author: Dietmar Maurer
Date:   2016-04-09 17:21:50 +02:00

@@ -58,27 +58,29 @@ Requirements
 * All nodes must be in the same network as corosync uses IP Multicast
   to communicate between nodes (also see
-  http://www.corosync.org[Corosync Cluster Engine]). NOTE: Some
-  switches do not support IP multicast by default and must be manually
-  enabled first.
+  http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
+  ports 5404 and 5405 for cluster communication.
++
+NOTE: Some switches do not support IP multicast by default and must be
+manually enabled first.
 * Date and time have to be synchronized.
-* SSH tunnel on port 22 between nodes is used.
+* SSH tunnel on TCP port 22 between nodes is used.
-* If you are interested in High Availability too, for reliable quorum
-  you must have at least 3 nodes (all nodes should have the same
-  version).
+* If you are interested in High Availability, you need to have at
+  least three nodes for reliable quorum. All nodes should have the
+  same version.
 * We recommend a dedicated NIC for the cluster traffic, especially if
   you use shared storage.

 NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
-Proxmox VE 4.0 cluster.
+Proxmox VE 4.0 cluster nodes.

-Cluster Setup
--------------
+Preparing Nodes
+---------------

 First, install {PVE} on all nodes. Make sure that each node is
 installed with the final hostname and IP configuration. Changing the
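
Before creating a cluster, the multicast requirement listed above can be
checked with the `omping` utility (a sketch; it assumes `omping` is
installed on every node, and the hostnames `hp1`, `hp2` and `hp3` are
placeholders for your own nodes):

----
# run in parallel on all nodes; each node should report ~0% loss
omping -c 10000 -i 0.001 -F -q hp1 hp2 hp3
----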
@@ -89,7 +91,7 @@ need to login via 'ssh'.
 Create the Cluster
-~~~~~~~~~~~~~~~~~~
+------------------

 Login via 'ssh' to the first Proxmox VE node. Use a unique name for
 your cluster. This name cannot be changed later.
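
Creating the cluster is a single command on that first node; a sketch,
where `YOUR-CLUSTER-NAME` is a placeholder:

----
# pvecm create YOUR-CLUSTER-NAME
----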
@@ -102,7 +104,7 @@ To check the state of your cluster use:
 Adding Nodes to the Cluster
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------

 Login via 'ssh' to the node you want to add.
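
Joining is again a single command, run on the node being added; a
sketch, where `IP-ADDRESS-CLUSTER` stands for the IP of an existing
cluster member:

----
# pvecm add IP-ADDRESS-CLUSTER
----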
@@ -111,15 +113,16 @@ Login via 'ssh' to the node you want to add.
 For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.

 CAUTION: A new node cannot hold any VMs, because you would get
-conflicts about identical VM IDs. To workaround, use vzdump to backup
-and to restore to a different VMID after adding the node to the
-cluster.
+conflicts about identical VM IDs. Also, all existing configuration is
+overwritten when you join a new node to the cluster. To work around
+this, use vzdump to back up and restore to a different VMID after
+adding the node to the cluster.
 To check the state of the cluster:

  # pvecm status

-.Check Cluster Status
+.Cluster status after adding 4 nodes
 ----
 hp2# pvecm status
 Quorum information
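
The vzdump workaround from the CAUTION above could look like this (a
sketch; VMIDs `100` and `105`, the dump directory and the archive name
are hypothetical examples):

----
# before joining: back up the VM that would conflict
vzdump 100 --dumpdir /root/migration

# after joining: restore the backup under a free VMID
qmrestore /root/migration/vzdump-qemu-100-<timestamp>.vma 105
----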
@@ -167,7 +170,7 @@ Membership information
 Remove a Cluster Node
-~~~~~~~~~~~~~~~~~~~~~
+---------------------

 CAUTION: Read the procedure carefully before proceeding, as it may
 not be what you want or need.
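
For orientation, removing a node boils down to powering it off and then
running `pvecm delnode` from one of the remaining cluster members (a
sketch; the node name `hp4` and the prompt `hp1#` are placeholders):

----
hp1# pvecm delnode hp4
----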