mirror of https://git.proxmox.com/git/pve-docs

commit a35aad4add (parent beb0ab8082)

fix typos/wording

Signed-off-by: David Limbeck <d.limbeck@proxmox.com>
@@ -137,7 +137,7 @@ resource of type `vm` (virtual machine) with the ID 100.
 
 For now we have two important resources types - virtual machines and
 containers. One basic idea here is that we can bundle related software
-into such VM or container, so there is no need to compose one big
+into such a VM or container, so there is no need to compose one big
 service from other services, like it was done with `rgmanager`. In
 general, a HA managed resource should not depend on other resources.
 
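For context, the `vm:100` naming used above can be exercised directly with the `ha-manager` command line tool; a minimal sketch, assuming a VM with ID 100 already exists:

----
# put VM 100 under HA management (resource ID vm:100)
ha-manager add vm:100
----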
@@ -156,7 +156,7 @@ GUI, or simply use the command line tool, for example:
 
 The HA stack now tries to start the resources and keeps it
 running. Please note that you can configure the ``requested''
-resources state. For example you may want that the HA stack stops the
+resources state. For example you may want the HA stack to stop the
 resource:
 
 ----
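The requested resource state mentioned above is set per resource; a minimal sketch, assuming the `--state` option of `ha-manager set` as described in this chapter:

----
# ask the HA stack to keep vm:100 stopped
ha-manager set vm:100 --state stopped

# ask for it to be started again
ha-manager set vm:100 --state started
----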
@@ -225,7 +225,7 @@ the following command:
 
 NOTE: This does not start or stop the resource.
 
-But all HA related task can be done on the GUI, so there is no need to
+But all HA related tasks can be done in the GUI, so there is no need to
 use the command line at all.
 
 
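For reference, the same information the GUI shows can be read back on the command line; a short sketch using the standard `ha-manager` subcommands:

----
# list the configured HA resources and their requested states
ha-manager config

# show the current HA manager and resource status
ha-manager status
----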
@@ -253,7 +253,7 @@ handles node fencing.
 .Locks in the LRM & CRM
 [NOTE]
 Locks are provided by our distributed configuration file system (pmxcfs).
-They are used to guarantee that each LRM is active once and working. As a
+They are used to guarantee that each LRM is active once and working. As an
 LRM only executes actions when it holds its lock, we can mark a failed node
 as fenced if we can acquire its lock. This lets us then recover any failed
 HA services securely without any interference from the now unknown failed node.
@@ -369,7 +369,7 @@ The LRM lost its lock, this means a failure happened and quorum was lost.
 After the LRM gets in the active state it reads the manager status
 file in `/etc/pve/ha/manager_status` and determines the commands it
 has to execute for the services it owns.
-For each command a worker gets started, this workers are running in
+For each command a worker gets started, these workers are running in
 parallel and are limited to at most 4 by default. This default setting
 may be changed through the datacenter configuration key `max_worker`.
 When finished the worker process gets collected and its result saved for
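If the worker limit needs changing as described above, it is a cluster-wide setting in `/etc/pve/datacenter.cfg`; a hedged sketch, assuming the key is spelled `max_workers` there (the text above refers to it as `max_worker`, so verify the exact name in the datacenter configuration reference):

----
# /etc/pve/datacenter.cfg
# allow up to 8 concurrent LRM workers instead of the default 4
max_workers: 8
----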
@@ -381,19 +381,19 @@ The default value of at most 4 concurrent workers may be unsuited for
 a specific setup. For example may 4 live migrations happen at the same
 time, which can lead to network congestions with slower networks and/or
 big (memory wise) services. Ensure that also in the worst case no congestion
-happens and lower the `max_worker` value if needed. In the contrary, if you
+happens and lower the `max_worker` value if needed. On the contrary, if you
 have a particularly powerful high end setup you may also want to increase it.
 
-Each command requested by the CRM is uniquely identifiable by an UID, when
-the worker finished its result will be processed and written in the LRM
+Each command requested by the CRM is uniquely identifiable by a UID, when
+the worker finishes its result will be processed and written in the LRM
 status file `/etc/pve/nodes/<nodename>/lrm_status`. There the CRM may collect
 it and let its state machine - respective the commands output - act on it.
 
 The actions on each service between CRM and LRM are normally always synced.
-This means that the CRM requests a state uniquely marked by an UID, the LRM
+This means that the CRM requests a state uniquely marked by a UID, the LRM
 then executes this action *one time* and writes back the result, also
 identifiable by the same UID. This is needed so that the LRM does not
-executes an outdated command.
+execute an outdated command.
 With the exception of the `stop` and the `error` command,
 those two do not depend on the result produced and are executed
 always in the case of the stopped state and once in the case of
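Because both status files mentioned above live on pmxcfs, the command/result handshake can be observed directly; a minimal sketch, assuming the files are plain JSON as they appear in practice:

----
# manager status the CRM works from
cat /etc/pve/ha/manager_status

# per-node LRM status with the written-back results
cat /etc/pve/nodes/$(hostname)/lrm_status
----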
@@ -430,11 +430,11 @@ lost agent lock::
 
 The CRM lost its lock, this means a failure happened and quorum was lost.
 
-It main task is to manage the services which are configured to be highly
+Its main task is to manage the services which are configured to be highly
 available and try to always enforce the requested state. For example, a
 service with the requested state 'started' will be started if its not
 already running. If it crashes it will be automatically started again.
-Thus the CRM dictates the actions which the LRM needs to execute.
+Thus the CRM dictates the actions the LRM needs to execute.
 
 When an node leaves the cluster quorum, its state changes to unknown.
 If the current CRM then can secure the failed nodes lock, the services
@@ -468,7 +468,7 @@ Resources
 
 The resource configuration file `/etc/pve/ha/resources.cfg` stores
 the list of resources managed by `ha-manager`. A resource configuration
-inside that list look like this:
+inside that list looks like this:
 
 ----
 <type>: <name>
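To make the `<type>: <name>` skeleton above concrete, an illustrative (hypothetical) entry could look like this, using the `state`, `group` and `comment` properties from the ha-manager documentation:

----
# /etc/pve/ha/resources.cfg
vm: 501
	state started
	group mygroup
	comment web frontend
----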
@@ -689,7 +689,7 @@ Start Failure Policy
 ---------------------
 
 The start failure policy comes in effect if a service failed to start on a
-node once ore more times. It can be used to configure how often a restart
+node one or more times. It can be used to configure how often a restart
 should be triggered on the same node and how often a service should be
 relocated so that it gets a try to be started on another node.
 The aim of this policy is to circumvent temporary unavailability of shared
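The counters behind this policy are ordinary resource properties; a hedged example, assuming `max_restart` and `max_relocate` can be set through `ha-manager set` (values are illustrative):

----
# restart vm:100 at most twice on the same node, then relocate at most once
ha-manager set vm:100 --max_restart 2 --max_relocate 1
----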
@@ -223,7 +223,7 @@ of predefined roles which satisfies most needs.
 
 You can see the whole set of predefined roles on the GUI.
 
-Adding new roles can currently only be done from the command line, like
+Adding new roles can be done via both GUI and the command line, like
 this:
 
 [source,bash]
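A command line example in the spirit of the `[source,bash]` block above, assuming the usual `pveum roleadd` syntax (role name and privilege list are illustrative):

[source,bash]
----
pveum roleadd PVE_Power-only -privs "VM.PowerMgmt VM.Console"
----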
qm.adoc
@@ -442,7 +442,7 @@ minimum amount you specified is always available to the VM, and if RAM usage on
 the host is below 80%, will dynamically add memory to the guest up to the
 maximum memory specified.
 
-When the host is becoming short on RAM, the VM will then release some memory
+When the host is running low on RAM, the VM will then release some memory
 back to the host, swapping running processes if needed and starting the oom
 killer in last resort. The passing around of memory between host and guest is
 done via a special `balloon` kernel driver running inside the guest, which will
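The fixed maximum and guaranteed minimum described above map to the `memory` and `balloon` VM options; a hedged sketch with illustrative VM ID and sizes:

----
# VM 100 may grow to 4 GiB, but is guaranteed at least 1 GiB
qm set 100 --memory 4096 --balloon 1024
----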
@@ -452,14 +452,14 @@ footnote:[A good explanation of the inner workings of the balloon driver can be
 When multiple VMs use the autoallocate facility, it is possible to set a
 *Shares* coefficient which indicates the relative amount of the free host memory
 that each VM should take. Suppose for instance you have four VMs, three of them
-running a HTTP server and the last one is a database server. To cache more
+running an HTTP server and the last one is a database server. To cache more
 database blocks in the database server RAM, you would like to prioritize the
 database VM when spare RAM is available. For this you assign a Shares property
 of 3000 to the database VM, leaving the other VMs to the Shares default setting
 of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving 32
 * 80/100 - 16 = 9GB RAM to be allocated to the VMs. The database VM will get 9 *
 3000 / (3000 + 1000 + 1000 + 1000) = 4.5 GB extra RAM and each HTTP server will
-get 1/5 GB.
+get 1.5 GB.
 
 All Linux distributions released after 2010 have the balloon kernel driver
 included. For Windows OSes, the balloon driver needs to be added manually and can
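The worked example above corresponds to raising the `shares` weight of the database VM while leaving the others at the default; a hedged sketch with an illustrative VM ID:

----
# database VM gets weight 3000, the HTTP servers keep the default of 1000
qm set 104 --shares 3000

# free auto-ballooning RAM: 32 * 80/100 - 16 = 9 GB
# database VM: 9 * 3000 / 6000 = 4.5 GB; each HTTP VM: 9 * 1000 / 6000 = 1.5 GB
----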
@@ -516,7 +516,7 @@ of packets transferred.
 
 //http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
 When using the VirtIO driver with {pve}, each NIC network queue is passed to the
-host kernel, where the queue will be processed by a kernel thread spawn by the
+host kernel, where the queue will be processed by a kernel thread spawned by the
 vhost driver. With this option activated, it is possible to pass _multiple_
 network queues to the host kernel for each NIC.
 
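Multiqueue is enabled per virtual NIC in the network device string; a hedged sketch, assuming the `queues` sub-option of `--netX` (VM ID, bridge and queue count are illustrative):

----
# enable 4 network queues on net0 of VM 100
qm set 100 --net0 virtio,bridge=vmbr0,queues=4
----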
@@ -25,7 +25,7 @@ Backup and Restore
 :pve-toplevel:
 endif::manvolnum[]
 
-Backups are a requirements for any sensible IT deployment, and {pve}
+Backups are a requirement for any sensible IT deployment, and {pve}
 provides a fully integrated solution, using the capabilities of each
 storage and each guest system type. This allows the system
 administrator to fine tune via the `mode` option between consistency
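The `mode` trade-off mentioned above is chosen per backup invocation or job; a minimal sketch with an illustrative storage name:

----
# snapshot-mode backup of VM 100 to the storage named 'local'
vzdump 100 --mode snapshot --storage local
----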