fix some typos

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht 2020-02-11 18:01:34 +01:00
parent bc27b00ca3
commit 5f318cc038
8 changed files with 13 additions and 13 deletions


@@ -13,7 +13,7 @@ to draw nice borders around tables. It additionally transforms some
 values into human-readable text, for example:
 - Unix epoch is displayed as ISO 8601 date string.
-- Durations are displayed as week/day/hour/miniute/secound count, i.e `1d 5h`.
+- Durations are displayed as week/day/hour/minute/second count, i.e `1d 5h`.
 - Byte sizes value include units (`B`, `KiB`, `MiB`, `GiB`, `TiB`, `PiB`).
 - Fractions are display as percentage, i.e. 1.0 is displayed as 100%.


@@ -56,7 +56,7 @@
 |===========================================================
 [horizontal]
-'Ceph':: Ceph Storage Cluster traffic (Ceph Monitors, OSD & MDS Deamons)
+'Ceph':: Ceph Storage Cluster traffic (Ceph Monitors, OSD & MDS Daemons)
 [width="100%",options="header"]
 |===========================================================


@@ -465,7 +465,7 @@ VM/CT incoming/outgoing DROP/REJECT
 This drops or rejects all the traffic to the VMs, with some exceptions for
 DHCP, NDP, Router Advertisement, MAC and IP filtering depending on the set
 configuration. The same rules for dropping/rejecting packets are inherited
-from the datacenter, while the exceptions for accepted incomming/outgoing
+from the datacenter, while the exceptions for accepted incoming/outgoing
 traffic of the host do not apply.
 Again, you can use xref:pve_firewall_iptables_inspect[iptables-save (see above)]


@@ -224,7 +224,7 @@ Advanced ZFS Configuration Options
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 The installer creates the ZFS pool `rpool`. No swap space is created but you can
 reserve some unpartitioned space on the install disks for swap. You can also
-create a swap zvol after the installation, altough this can lead to problems.
+create a swap zvol after the installation, although this can lead to problems.
 (see <<zfs_swap,ZFS swap notes>>).
 `ashift`::
@@ -250,7 +250,7 @@ semantics, and why this does not replace redundancy on disk-level.
 `hdsize`::
 Defines the total hard disk size to be used. This is useful to save free space
-on the hard disk(s) for further partitioning (for exmaple to create a
+on the hard disk(s) for further partitioning (for example to create a
 swap-partition). `hdsize` is only honored for bootable disks, that is only the
 first disk or mirror for RAID0, RAID1 or RAID10, and all disks in RAID-Z[123].


@@ -101,7 +101,7 @@ services on the same network and may even break the {pve} cluster stack.
 Further, estimate your bandwidth needs. While one HDD might not saturate a 1 Gb
 link, multiple HDD OSDs per node can, and modern NVMe SSDs will even saturate
-10 Gbps of bandwidth quickly. Deploying a network capable of even more bandwith
+10 Gbps of bandwidth quickly. Deploying a network capable of even more bandwidth
 will ensure that it isn't your bottleneck and won't be anytime soon, 25, 40 or
 even 100 GBps are possible.
@@ -378,7 +378,7 @@ pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
 ----
 You can directly choose the size for those with the '-db_size' and '-wal_size'
-paremeters respectively. If they are not given the following values (in order)
+parameters respectively. If they are not given the following values (in order)
 will be used:
 * bluestore_block_{db,wal}_size from ceph configuration...


@@ -730,7 +730,7 @@ resolve to can be changed without touching corosync or the node it runs on -
 which may lead to a situation where an address is changed without thinking
 about implications for corosync.
-A seperate, static hostname specifically for corosync is recommended, if
+A separate, static hostname specifically for corosync is recommended, if
 hostnames are preferred. Also, make sure that every node in the cluster can
 resolve all hostnames correctly.
@@ -739,7 +739,7 @@ entry. Only the resolved IP is then saved to the configuration.
 Nodes that joined the cluster on earlier versions likely still use their
 unresolved hostname in `corosync.conf`. It might be a good idea to replace
-them with IPs or a seperate hostname, as mentioned above.
+them with IPs or a separate hostname, as mentioned above.
 [[pvecm_redundancy]]
@@ -758,7 +758,7 @@ physical network connection.
 Links are used according to a priority setting. You can configure this priority
 by setting 'knet_link_priority' in the corresponding interface section in
-`corosync.conf`, or, preferrably, using the 'priority' parameter when creating
+`corosync.conf`, or, preferably, using the 'priority' parameter when creating
 your cluster with `pvecm`:
 ----
@@ -888,7 +888,7 @@ example 2+1 nodes).
 QDevice Technical Overview
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
-The Corosync Quroum Device (QDevice) is a daemon which runs on each cluster
+The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
 node. It provides a configured number of votes to the clusters quorum
 subsystem based on an external running third-party arbitrator's decision.
 Its primary use is to allow a cluster to sustain more node failures than


@@ -126,7 +126,7 @@ qm set 123 --ipconfig0 ip=10.0.10.123/24,gw=10.0.10.1
 ----
 You can also configure all the Cloud-Init options using a single command
-only. We have simply splitted the above example to separate the
+only. We have simply split the above example to separate the
 commands for reducing the line length. Also make sure to adopt the IP
 setup for your specific environment.


@@ -144,7 +144,7 @@ hardware layout of the VM's virtual motherboard. You can choose between the
 default https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the
 https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
 chipset, which also provides a virtual PCIe bus, and thus may be desired if
-one want's to pass through PCIe hardware.
+one wants to pass through PCIe hardware.
 [[qm_hard_disk]]
 Hard Disk