fix typos and whitespace all around

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Author: Dominik Csapak, 2017-09-27 10:18:50 +02:00 (committed by Fabian Grünbichler)
parent 0378225172
commit 470d43137c
21 changed files with 55 additions and 55 deletions


@@ -269,8 +269,8 @@ Screenshots
 [thumbnail="gui-datacenter-search.png"]
 First, it should be noted that we can display screenshots on 'html'
-and 'wiki' pages, and we can include them in printed doumentation. But
-ith is not possible to render them inside manual pages. So screenshot
+and 'wiki' pages, and we can include them in printed documentation. But
+it is not possible to render them inside manual pages. So screenshot
 inside manual pages should be optional, i.e. the text should not
 depend on the visibility of the screenshot. You can include a
 screenshot by setting the 'thumbnail' attribute on a paragraph:
@@ -304,7 +304,7 @@ all image attributes (from debian package 'imagemagick')
 [thumbnail="gui-datacenter-search.png"]
 We normally display screenshots as small thumbnail on the right side
-of a paraprah. On printed docomentation, we render the full sized
+of a paragraph. On printed documentation, we render the full sized
 graphic just before the paragraph, or between the title and the text
 if the paragraph has a title. It is usually a good idea to add a title
 to paragraph with screenshots.


@@ -45,7 +45,7 @@ Commercial Support
 {proxmoxGmbh} also offers commercial
 http://www.proxmox.com/proxmox-ve/pricing[{pve} Subscription Service
 Plans]. System Administrators with a standard subscription plan can access a
-dedicated support portal with guaranteed reponse time, where {pve}
+dedicated support portal with guaranteed response time, where {pve}
 developers help them should an issue appear.
 Please contact the mailto:office@proxmox.com[Proxmox sales
 team] for more information or volume discounts.


@@ -478,7 +478,7 @@ properties:
 include::ha-resources-opts.adoc[]
 Here is a real world example with one VM and one container. As you see,
-the syntax of those files is really simple, so it is even posiible to
+the syntax of those files is really simple, so it is even possible to
 read or edit those files using your favorite editor:
 .Configuration Example (`/etc/pve/ha/resources.cfg`)
@@ -534,7 +534,7 @@ an unrestricted group with a single member:
 For bigger clusters, it makes sense to define a more detailed failover
 behavior. For example, you may want to run a set of services on
 `node1` if possible. If `node1` is not available, you want to run them
-equally splitted on `node2` and `node3`. If those nodes also fail the
+equally split on `node2` and `node3`. If those nodes also fail the
 services should run on `node4`. To achieve this you could set the node
 list to:
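The node list itself lies outside this hunk. For the scenario described, a plausible `ha-manager` group definition would be (group name and exact priorities are illustrative; a higher priority wins):

----
# node1 preferred, node2/node3 as equal fallbacks, node4 as last resort
ha-manager groupadd prefer_node1 --nodes "node1:3,node2:2,node3:2,node4:1"
----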
@@ -790,7 +790,7 @@ from ``shutdown'', because the node immediately starts again.
 The LRM tells the CRM that it wants to restart, and waits until the
 CRM puts all resources into the `freeze` state (same mechanism is used
-for xref:ha_manager_package_updates[Pakage Updates]). This prevents
+for xref:ha_manager_package_updates[Package Updates]). This prevents
 that those resources are moved to other nodes. Instead, the CRM start
 the resources after the reboot on the same node.


@@ -8,7 +8,7 @@ The HA group identifier.
 `max_relocate`: `<integer> (0 - N)` ('default =' `1`)::
-Maximal number of service relocate tries when a service failes to start.
+Maximal number of service relocate tries when a service fails to start.
 `max_restart`: `<integer> (0 - N)` ('default =' `1`)::


@@ -25,7 +25,7 @@ then you should try to get it into the reference documentation. The reference
 documentation is written in the easy to use 'asciidoc' document format.
 Editing the official documentation requires to clone the git repository at
 `git://git.proxmox.com/git/pve-docs.git` and then follow the
-https://git.proxmox.com/?p=pve-docs.git;a=blob_plain;f=README.adoc;hb=HEAD[REAME.adoc] document.
+https://git.proxmox.com/?p=pve-docs.git;a=blob_plain;f=README.adoc;hb=HEAD[README.adoc] document.
 Improving the documentation is just as easy as editing a Wikipedia
 article and is an interesting foray in the development of a large
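For reference, the clone step mentioned above is simply:

----
git clone git://git.proxmox.com/git/pve-docs.git
----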


@@ -262,7 +262,7 @@ Activate E-Mail Notification
 ZFS comes with an event daemon, which monitors events generated by the
 ZFS kernel module. The daemon can also send emails on ZFS events like
-pool errors. Newer ZFS packages ships the daemon in a sparate package,
+pool errors. Newer ZFS packages ships the daemon in a separate package,
 and you can install it using `apt-get`:
 ----
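The listing that follows is truncated by the hunk; since the separate daemon package on Debian is `zfs-zed`, the install step presumably reads:

----
apt-get install zfs-zed
----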


@@ -143,7 +143,7 @@ and will not be moved.
 Modification of a file can be prevented by adding a `.pve-ignore.`
 file for it. For instance, if the file `/etc/.pve-ignore.hosts`
 exists then the `/etc/hosts` file will not be touched. This can be a
-simple empty file creatd via:
+simple empty file created via:
 # touch /etc/.pve-ignore.hosts
@@ -323,17 +323,17 @@ group/others model.
 Backup of Containers mount points
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-By default additional mount points besides the RootDisk mount point are not
-included in backups. You can reverse this default behavior by setting the
+By default additional mount points besides the Root Disk mount point are not
+included in backups. You can reverse this default behavior by setting the
 *Backup* option on a mount point.
 // see PVE::VZDump::LXC::prepare()
 Replication of Containers mount points
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-By default additional mount points are replicated when the RootDisk
+By default additional mount points are replicated when the Root Disk
 is replicated. If you want the {pve} storage replication mechanism to skip a
-mount point when starting a replication job, you can set the
+mount point when starting a replication job, you can set the
 *Skip replication* option on that mount point. +
 As of {pve} 5.0, replication requires a storage of type `zfspool`, so adding a
 mount point to a different type of storage when the container has replication


@@ -52,5 +52,5 @@ Here is an example configuration for influxdb (on your influxdb server):
 batch-size = 1000
 batch-timeout = "1s"
-With this configuration, your server listens on all IP adresses on
+With this configuration, your server listens on all IP addresses on
 port 8089, and writes the data in the *proxmox* database
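Only the tail of the influxdb example is visible here. A UDP listener block in `influxdb.conf` consistent with these context lines would look roughly like:

----
[[udp]]
   enabled = true
   bind-address = "0.0.0.0:8089"
   database = "proxmox"
   batch-size = 1000
   batch-timeout = "1s"
----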


@@ -56,7 +56,7 @@
 |===========================================================
 [horizontal]
-'Ceph':: Ceph Storage Cluster traffic (Ceph Monitors, OSD & MDS Deamons)
+'Ceph':: Ceph Storage Cluster traffic (Ceph Monitors, OSD & MDS Daemons)
 [width="100%",options="header"]
 |===========================================================


@@ -55,9 +55,9 @@ Login
 [thumbnail="gui-login-window.png"]
-When you connect to the server, you will first see the longin window.
+When you connect to the server, you will first see the login window.
 {pve} supports various authentication backends ('Realm'), and
-you can select the langauage here. The GUI is translated to more
+you can select the language here. The GUI is translated to more
 than 20 languages.
 NOTE: You can save the user name on the client side by selection the
@@ -111,7 +111,7 @@ login name, reset saved layout).
 The rightmost part of the header contains four buttons:
 [horizontal]
-Help :: Opens a new browser window showing the reference documenation.
+Help :: Opens a new browser window showing the reference documentation.
 Create&nbsp;VM :: Opens the virtual machine creation wizard.
@@ -183,7 +183,7 @@ When you select something in the resource tree, the corresponding
 object displays configuration and status information in the content
 panel. The following sections give a brief overview of the
 functionality. Please refer to the individual chapters inside the
-reference documentatin to get more detailed information.
+reference documentation to get more detailed information.
 Datacenter
@@ -266,7 +266,7 @@ In the main management center the VM navigation begin if a VM is selected in the
 The top header contains important VM operation commands like 'Start', 'Shutdown', 'Rest',
 'Remove', 'Migrate', 'Console' and 'Help'.
 Two of them have hidden buttons like 'Shutdown' has 'Stop' and
-'Console' contains the different consolen typs 'SPICE' or 'noVNC'.
+'Console' contains the different console types 'SPICE' or 'noVNC'.
 On the right side the content switch white the focus of the option.


@@ -162,7 +162,7 @@ auto lo
 iface lo inet loopback
 auto eno0
-#real IP adress
+#real IP address
 iface eno1 inet static
         address 192.168.10.2
         netmask 255.255.255.0
@@ -297,7 +297,7 @@ iface eno1 inet manual
 iface eno2 inet manual
 auto bond0
-iface bond0 inet maunal
+iface bond0 inet manual
         slaves eno1 eno2
         bond_miimon 100
         bond_mode 802.3ad
@@ -316,5 +316,5 @@ iface vmbr0 inet static
 ////
 TODO: explain IPv6 support?
-TODO: explan OVS
+TODO: explain OVS
 ////


@@ -107,7 +107,7 @@ feature to create clones.
 [width="100%",cols="m,m,3*d",options="header"]
 |==============================================================================
 |Content types |Image formats |Shared |Snapshots |Clones
-|images rootdir vztempl iso backup |raw qcow2 vmdk subvol |no |qcow2 |qcow2
+|images rootdir vztmpl iso backup |raw qcow2 vmdk subvol |no |qcow2 |qcow2
 |==============================================================================


@@ -66,7 +66,7 @@ snapshot/clone implementation.
 [width="100%",cols="m,m,3*d",options="header"]
 |==============================================================================
 |Content types |Image formats |Shared |Snapshots |Clones
-|images vztempl iso backup |raw qcow2 vmdk |yes |qcow2 |qcow2
+|images vztmpl iso backup |raw qcow2 vmdk |yes |qcow2 |qcow2
 |==============================================================================
 ifdef::wiki[]


@@ -69,7 +69,7 @@ to implement snapshots and cloning.
 [width="100%",cols="m,m,3*d",options="header"]
 |==============================================================================
 |Content types |Image formats |Shared |Snapshots |Clones
-|images rootdir vztempl iso backup |raw qcow2 vmdk subvol |yes |qcow2 |qcow2
+|images rootdir vztmpl iso backup |raw qcow2 vmdk subvol |yes |qcow2 |qcow2
 |==============================================================================
 Examples


@@ -36,7 +36,7 @@ contain any important data.
 Instructions for GNU/Linux
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
-You can simply use `dd` on UNUX like systems. First download the ISO
+You can simply use `dd` on UNIX like systems. First download the ISO
 image, then plug in the USB stick. You need to find out what device
 name gets assigned to the USB stick (see below). Then run:
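The command itself falls outside the hunk; a typical invocation, with `/dev/XYZ` standing in for the device name found above, would be:

----
dd bs=1M conv=fdatasync if=./proxmox-ve_*.iso of=/dev/XYZ
----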
@@ -109,7 +109,7 @@ Instructions for Windows
 Download Etcher from https://etcher.io , select the ISO and your USB Drive.
-If this doesn't work, alternatively use the OSForsenics USB
+If this doesn't work, alternatively use the OSForensics USB
 installer from http://www.osforensics.com/portability.html


@@ -53,7 +53,7 @@ Precondition
 To build a Proxmox Ceph Cluster there should be at least three (preferably)
 identical servers for the setup.
-A 10Gb network, exclusively used for Ceph, is recommmended. A meshed
+A 10Gb network, exclusively used for Ceph, is recommended. A meshed
 network setup is also an option if there are no 10Gb switches
 available, see {webwiki-url}Full_Mesh_Network_for_Ceph_Server[wiki] .


@@ -528,7 +528,7 @@ addresses. You may use plain IP addresses or also hostnames here. If you use
 hostnames ensure that they are resolvable from all nodes.
 In my example I want to switch my cluster communication to the 10.10.10.1/25
-network. So I replace all 'ring0_addr' respectively. I also set the bindetaddr
+network. So I replace all 'ring0_addr' respectively. I also set the bindnetaddr
 in the totem section of the config to an address of the new network. It can be
 any address from the subnet configured on the new network interface.
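For context, `bindnetaddr` lives in the `interface` subsection of `totem`; a minimal sketch for the 10.10.10.1/25 example might be:

----
totem {
  version: 2
  interface {
    ringnumber: 0
    bindnetaddr: 10.10.10.1
  }
}
----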
@@ -708,7 +708,7 @@ stopped on all nodes start it one after the other again.
 Corosync Configuration
 ----------------------
-The `/ect/pve/corosync.conf` file plays a central role in {pve} cluster. It
+The `/etc/pve/corosync.conf` file plays a central role in {pve} cluster. It
 controls the cluster member ship and its network.
 For reading more about it check the corosync.conf man page:
 [source,bash]
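The listing after `[source,bash]` is cut off by the hunk; it presumably contains just:

----
man corosync.conf
----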
@@ -879,7 +879,7 @@ Migration Type
 The migration type defines if the migration data should be sent over a
 encrypted (`secure`) channel or an unencrypted (`insecure`) one.
 Setting the migration type to insecure means that the RAM content of a
-virtual guest gets also transfered unencrypted, which can lead to
+virtual guest gets also transferred unencrypted, which can lead to
 information disclosure of critical data from inside the guest (for
 example passwords or encryption keys).


@@ -88,7 +88,7 @@ Schedule Format
 ---------------
 {pve} has a very flexible replication scheduler. It is based on the systemd
-time calendar event format.footnote:[see `man 7 sytemd.time` for more information]
+time calendar event format.footnote:[see `man 7 systemd.time` for more information]
 Calendar events may be used to refer to one or more points in time in a
 single expression.
@@ -260,7 +260,7 @@ This ID must only be specified manually if the CLI tool is used.
 Command Line Interface Examples
 -------------------------------
-Create a replication job which will run all 5 min with limited bandwidth of
+Create a replication job which will run all 5 minutes with limited bandwidth of
 10 mbps (megabytes per second) for the guest with guest ID 100.
 ----

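The example command is likewise cut off at the hunk boundary. Based on the description, the `pvesr` call would be of roughly this form (job ID `100-0` and target node `pve1` are illustrative):

----
pvesr create-local-job 100-0 pve1 --schedule "*/5" --rate 10
----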

@@ -85,7 +85,7 @@ realm, the realms have to be configured in `/etc/pve/domains.cfg`.
 The following realms (authentication methods) are available:
 Linux PAM standard authentication::
-In this case a system user has to exist (eg. created via the `adduser`
+In this case a system user has to exist (e.g. created via the `adduser`
 command) on all nodes the user is allowed to login, and the user
 authenticates with their usual system password.
 +
@@ -106,7 +106,7 @@ installations where users do not need access to anything outside of
 change their own passwords via the GUI.
 LDAP::
-It is possible to authenticate users via an LDAP server (eq.
+It is possible to authenticate users via an LDAP server (e.g.
 openldap). The server and an optional fallback server can be
 configured and the connection can be encrypted via SSL.
 +
@@ -137,7 +137,7 @@ If {pve} needs to authenticate (bind) to the ldap server before being
 able to query and authenticate users, a bind domain name can be
 configured via the `bind_dn` property in `/etc/pve/domains.cfg`. Its
 password then has to be stored in `/etc/pve/priv/ldap/<realmname>.pw`
-(eg. `/etc/pve/priv/ldap/my-ldap.pw`). This file should contain a
+(e.g. `/etc/pve/priv/ldap/my-ldap.pw`). This file should contain a
 single line containing the raw password.
 Microsoft Active Directory::
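As a sketch, creating that password file could look like this (realm name taken from the example above; the password is a placeholder):

----
echo 'S3cr3tP@ss' > /etc/pve/priv/ldap/my-ldap.pw
----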
@@ -356,7 +356,7 @@ option is specified, then its specified parameter is required even if the
 API call's schema otherwise lists it as being optional.
 `["userid-group", [ <privileges>... ], <options>...]`::
-The callermust have any of the listed privileges on `/access/groups`. In
+The caller must have any of the listed privileges on `/access/groups`. In
 addition there are two possible checks depending on whether the
 `groups_param` option is set:
 +
@@ -375,7 +375,7 @@ privileges.)
 `["userid-param", "Realm.AllocateUser"]`::
 The user needs `Realm.AllocateUser` access to `/access/realm/<realm>`, with
-`<realm>` refering to the realm of the user passed via the `userid`
+`<realm>` referring to the realm of the user passed via the `userid`
 parameter. Note that the user does not need to exist in order to be
 associated with a realm, since user IDs are passed in the form of
 `<username>@<realm>`.
@@ -483,7 +483,7 @@ Example1: Allow user `joe@pve` to see all virtual machines
 Delegate User Management
 ~~~~~~~~~~~~~~~~~~~~~~~~
-If you want to delegate user managenent to user `joe@pve` you can do
+If you want to delegate user management to user `joe@pve` you can do
 that with:
 [source,bash]

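The actual command lies outside the hunk; delegation is presumably an ACL grant along these lines (the role name is an assumption):

----
pveum aclmod /access -user joe@pve -role PVEUserAdmin
----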
qm.adoc (24 changes)

@@ -74,7 +74,7 @@ Qemu can present to the guest operating system _paravirtualized devices_, where
 the guest OS recognizes it is running inside Qemu and cooperates with the
 hypervisor.
-Qemu relies on the virtio virtualization standard, and is thus able to presente
+Qemu relies on the virtio virtualization standard, and is thus able to present
 paravirtualized virtio devices, which includes a paravirtualized generic disk
 controller, a paravirtualized network card, a paravirtualized serial port,
 a paravirtualized SCSI controller, etc ...
@@ -231,7 +231,7 @@ execution on the host system. If you're not sure about the workload of your VM,
 it is usually a safe bet to set the number of *Total cores* to 2.
 NOTE: It is perfectly safe to set the _overall_ number of total cores in all
-your VMs to be greater than the number of of cores you have on your server (ie.
+your VMs to be greater than the number of of cores you have on your server (i.e.
 4 VMs with each 4 Total cores running in a 8 core machine is OK) In that case
 the host system will balance the Qemu execution threads between your server
 cores just like if you were running a standard multithreaded application.
@@ -316,12 +316,12 @@ footnote:[A good explanation of the inner workings of the balloon driver can be
 When multiple VMs use the autoallocate facility, it is possible to set a
 *Shares* coefficient which indicates the relative amount of the free host memory
-that each VM shoud take. Suppose for instance you have four VMs, three of them
+that each VM should take. Suppose for instance you have four VMs, three of them
 running a HTTP server and the last one is a database server. To cache more
 database blocks in the database server RAM, you would like to prioritize the
 database VM when spare RAM is available. For this you assign a Shares property
 of 3000 to the database VM, leaving the other VMs to the Shares default setting
-of 1000. The host server has 32GB of RAM, and is curring using 16GB, leaving 32
+of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving 32
 * 80/100 - 16 = 9GB RAM to be allocated to the VMs. The database VM will get 9 *
 3000 / (3000 + 1000 + 1000 + 1000) = 4.5 GB extra RAM and each HTTP server will
 get 1.5 GB.
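The arithmetic in that paragraph can be checked with a quick shell one-liner (numbers taken straight from the example):

----
awk 'BEGIN {
  free  = 9                          # GB left for the VMs, per the text
  total = 3000 + 1000 + 1000 + 1000  # sum of all Shares coefficients
  printf "database VM: %.1f GB extra\n", free * 3000 / total   # 4.5
  printf "each HTTP VM: %.1f GB extra\n", free * 1000 / total  # 1.5
}'
----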
@@ -332,7 +332,7 @@ incur a slowdown of the guest, so we don't recommend using it on critical
 systems.
 // see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/
-When allocating RAMs to your VMs, a good rule of thumb is always to leave 1GB
+When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
 of RAM available to the host.
@@ -357,15 +357,15 @@ when importing a VM from another hypervisor.
 {pve} will generate for each NIC a random *MAC address*, so that your VM is
 addressable on Ethernet networks.
-The NIC you added to the VM can follow one of two differents models:
+The NIC you added to the VM can follow one of two different models:
 * in the default *Bridged mode* each virtual NIC is backed on the host by a
 _tap device_, ( a software loopback device simulating an Ethernet NIC ). This
 tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
 have direct access to the Ethernet LAN on which the host is located.
 * in the alternative *NAT mode*, each virtual NIC will only communicate with
-the Qemu user networking stack, where a builting router and DHCP server can
-provide network access. This built-in DHCP will serve adresses in the private
+the Qemu user networking stack, where a built-in router and DHCP server can
+provide network access. This built-in DHCP will serve addresses in the private
 10.0.2.0/24 range. The NAT mode is much slower than the bridged mode, and
 should only be used for testing.
@@ -376,7 +376,7 @@ network device*.
 If you are using the VirtIO driver, you can optionally activate the
 *Multiqueue* option. This option allows the guest OS to process networking
 packets using multiple virtual CPUs, providing an increase in the total number
-of packets transfered.
+of packets transferred.
 //http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
 When using the VirtIO driver with {pve}, each NIC network queue is passed to the
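Multiqueue is enabled per NIC through the `queues` sub-option of `netX`; a hedged example (VMID 100 and 4 queues assumed, matching the vCPU count):

----
qm set 100 --net0 virtio,bridge=vmbr0,queues=4
----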
@@ -407,7 +407,7 @@ USB Passthrough
 There are two different types of USB passthrough devices:
-* Host USB passtrough
+* Host USB passthrough
 * SPICE USB passthrough
 Host USB passthrough works by giving a VM a USB device of the host.
@@ -426,7 +426,7 @@ usb controllers).
 If a device is present in a VM configuration when the VM starts up,
 but the device is not present in the host, the VM can boot without problems.
-As soon as the device/port ist available in the host, it gets passed through.
+As soon as the device/port is available in the host, it gets passed through.
 WARNING: Using this kind of USB passthrough means that you cannot move
 a VM online to another host, since the hardware is only available
@@ -449,7 +449,7 @@ implementation. SeaBIOS is a good choice for most standard setups.
 There are, however, some scenarios in which a BIOS is not a good firmware
 to boot from, e.g. if you want to do VGA passthrough. footnote:[Alex Williamson has a very good blog entry about this.
 http://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
-In such cases, you should rather use *OVMF*, which is an open-source UEFI implemenation. footnote:[See the OVMF Project http://www.tianocore.org/ovmf/]
+In such cases, you should rather use *OVMF*, which is an open-source UEFI implementation. footnote:[See the OVMF Project http://www.tianocore.org/ovmf/]
 If you want to use OVMF, there are several things to consider:
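Selecting OVMF for a VM goes through the `bios` option, for instance (VMID assumed):

----
qm set 100 --bios ovmf
----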


@@ -126,7 +126,7 @@ supports snapshots. Using the `backup=no` mount point option individual volumes
 can be excluded from the backup (and thus this requirement).
 // see PVE::VZDump::LXC::prepare()
-NOTE: By default additional mount points besides the RootDisk mount point are
+NOTE: By default additional mount points besides the Root Disk mount point are
 not included in backups. For volume mount points you can set the *Backup* option
 to include the mount point in the backup. Device and bind mounts are never
 backed up as their content is managed outside the {pve} storage library.