mirror of https://git.proxmox.com/git/pve-docs, synced 2025-06-25 11:10:53 +00:00
commit 0593681f9d (parent 448c1d393b)

pmxcfs: language and style fixup

Minor language fixup; replace usage of 'Proxmox VE' with '{pve}'.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
 pmxcfs.adoc | 68
@@ -30,17 +30,17 @@ cluster nodes using `corosync`. We use this to store all PVE related
 configuration files.
 
 Although the file system stores all data inside a persistent database
-on disk, a copy of the data resides in RAM. That imposes restriction
+on disk, a copy of the data resides in RAM. This imposes restrictions
 on the maximum size, which is currently 30MB. This is still enough to
 store the configuration of several thousand virtual machines.
 
 This system provides the following advantages:
 
-* seamless replication of all configuration to all nodes in real time
-* provides strong consistency checks to avoid duplicate VM IDs
-* read-only when a node loses quorum
-* automatic updates of the corosync cluster configuration to all nodes
-* includes a distributed locking mechanism
+* Seamless replication of all configuration to all nodes in real time
+* Provides strong consistency checks to avoid duplicate VM IDs
+* Read-only when a node loses quorum
+* Automatic updates of the corosync cluster configuration to all nodes
+* Includes a distributed locking mechanism
 
 
 POSIX Compatibility
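As a side note on the 30MB limit this hunk touches (an illustration, not part of the commit): the persistent database backing the in-RAM copy is the `config.db` file named later in this document, and its current size can be checked on any node:

 # a rough sketch; the path is the one quoted in the Recovery section below
 ls -lh /var/lib/pve-cluster/config.db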
@@ -49,13 +49,13 @@ POSIX Compatibility
 The file system is based on FUSE, so the behavior is POSIX like. But
 some feature are simply not implemented, because we do not need them:
 
-* you can just generate normal files and directories, but no symbolic
+* You can just generate normal files and directories, but no symbolic
   links, ...
 
-* you can't rename non-empty directories (because this makes it easier
+* You can't rename non-empty directories (because this makes it easier
   to guarantee that VMIDs are unique).
 
-* you can't change file permissions (permissions are based on path)
+* You can't change file permissions (permissions are based on paths)
 
 * `O_EXCL` creates were not atomic (like old NFS)
 
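The restrictions listed in this hunk are easy to observe on a live node (an illustrative sketch, not part of the commit; the `demo` names are made up, run as root and clean up afterwards):

 touch /etc/pve/demo.txt            # creating regular files works
 mkdir /etc/pve/demodir             # creating directories works
 ln -s demo.txt /etc/pve/demolink   # expected to fail: symbolic links are not implemented
 chmod 600 /etc/pve/demo.txt        # expected to fail: permissions are derived from the path
 rm /etc/pve/demo.txt && rmdir /etc/pve/demodir   # clean up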
@@ -67,13 +67,11 @@ File Access Rights
 
 All files and directories are owned by user `root` and have group
 `www-data`. Only root has write permissions, but group `www-data` can
-read most files. Files below the following paths:
+read most files. Files below the following paths are only accessible by root:
 
  /etc/pve/priv/
  /etc/pve/nodes/${NAME}/priv/
 
-are only accessible by root.
-
 
 Technology
 ----------
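The path-based access rules this hunk clarifies can be verified with standard tools (a sketch, not part of the commit; `user.cfg` and `priv/authkey.key` are examples of files present on a default setup):

 sudo -u www-data cat /etc/pve/user.cfg          # group www-data can read most files
 sudo -u www-data cat /etc/pve/priv/authkey.key  # expected to fail: priv/ is root-only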
@@ -157,25 +155,25 @@ And disable verbose syslog messages with:
 Recovery
 --------
 
-If you have major problems with your Proxmox VE host, e.g. hardware
-issues, it could be helpful to just copy the pmxcfs database file
-`/var/lib/pve-cluster/config.db` and move it to a new Proxmox VE
+If you have major problems with your {pve} host, for example hardware
+issues, it could be helpful to copy the pmxcfs database file
+`/var/lib/pve-cluster/config.db`, and move it to a new {pve}
 host. On the new host (with nothing running), you need to stop the
-`pve-cluster` service and replace the `config.db` file (needed permissions
-`0600`). Second, adapt `/etc/hostname` and `/etc/hosts` according to the
-lost Proxmox VE host, then reboot and check. (And don't forget your
-VM/CT data)
+`pve-cluster` service and replace the `config.db` file (required permissions
+`0600`). Following this, adapt `/etc/hostname` and `/etc/hosts` according to the
+lost {pve} host, then reboot and check (and don't forget your
+VM/CT data).
 
 
-Remove Cluster configuration
+Remove Cluster Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The recommended way is to reinstall the node after you removed it from
-your cluster. This makes sure that all secret cluster/ssh keys and any
+The recommended way is to reinstall the node after you remove it from
+your cluster. This ensures that all secret cluster/ssh keys and any
 shared configuration data is destroyed.
 
 In some cases, you might prefer to put a node back to local mode without
-reinstall, which is described in
+reinstalling, which is described in
 <<pvecm_separate_node_without_reinstall,Separate A Node Without Reinstalling>>
 
 
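Spelled out as commands, the recovery steps this hunk rewords might look as follows on the new host (a sketch, not part of the commit; `/root/config.db.backup` is a placeholder for wherever the database was salvaged to):

 systemctl stop pve-cluster                  # nothing else should be running yet
 cp /root/config.db.backup /var/lib/pve-cluster/config.db
 chown root:root /var/lib/pve-cluster/config.db
 chmod 0600 /var/lib/pve-cluster/config.db   # the required permissions
 # adapt /etc/hostname and /etc/hosts to match the lost host, then:
 reboot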
@@ -183,28 +181,28 @@ Recovering/Moving Guests from Failed Nodes
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 For the guest configuration files in `nodes/<NAME>/qemu-server/` (VMs) and
-`nodes/<NAME>/lxc/` (containers), {pve} sees the containing node `<NAME>` as
+`nodes/<NAME>/lxc/` (containers), {pve} sees the containing node `<NAME>` as the
 owner of the respective guest. This concept enables the usage of local locks
 instead of expensive cluster-wide locks for preventing concurrent guest
 configuration changes.
 
-As a consequence, if the owning node of a guest fails (e.g., because of a power
-outage, fencing event, ..), a regular migration is not possible (even if all
-the disks are located on shared storage) because such a local lock on the
-(dead) owning node is unobtainable. This is not a problem for HA-managed
+As a consequence, if the owning node of a guest fails (for example, due to a power
+outage, fencing event, etc.), a regular migration is not possible (even if all
+the disks are located on shared storage), because such a local lock on the
+(offline) owning node is unobtainable. This is not a problem for HA-managed
 guests, as {pve}'s High Availability stack includes the necessary
 (cluster-wide) locking and watchdog functionality to ensure correct and
 automatic recovery of guests from fenced nodes.
 
 If a non-HA-managed guest has only shared disks (and no other local resources
-which are only available on the failed node are configured), a manual recovery
+which are only available on the failed node), a manual recovery
 is possible by simply moving the guest configuration file from the failed
-node's directory in `/etc/pve/` to an alive node's directory (which changes the
+node's directory in `/etc/pve/` to an online node's directory (which changes the
 logical owner or location of the guest).
 
-For example, recovering the VM with ID `100` from a dead `node1` to another
-node `node2` works with the following command executed when logged in as root
-on any member node of the cluster:
+For example, recovering the VM with ID `100` from an offline `node1` to another
+node `node2` works by running the following command as root on any member node
+of the cluster:
 
  mv /etc/pve/nodes/node1/qemu-server/100.conf /etc/pve/nodes/node2/
 
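Before running the `mv` shown above, it is worth double-checking from a surviving node that the failed node really is down, which is exactly what the next hunk's warning is about (a sketch, not part of the commit):

 pvecm status   # inspect quorum and membership; node1 must not show up as online
 mv /etc/pve/nodes/node1/qemu-server/100.conf /etc/pve/nodes/node2/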
@@ -213,8 +211,8 @@ that the failed source node is really powered off/fenced. Otherwise {pve}'s
 locking principles are violated by the `mv` command, which can have unexpected
 consequences.
 
-WARNING: Guest with local disks (or other local resources which are only
-available on the dead node) are not recoverable like this. Either wait for the
+WARNING: Guests with local disks (or other local resources which are only
+available on the offline node) are not recoverable like this. Either wait for the
 failed node to rejoin the cluster or restore such guests from backups.
 
 ifdef::manvolnum[]
|