Mirror of https://git.proxmox.com/git/pve-docs
fix some typos
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
commit 5f318cc038 (parent bc27b00ca3)
@@ -13,7 +13,7 @@ to draw nice borders around tables. It additionally transforms some
 values into human-readable text, for example:
 
 - Unix epoch is displayed as ISO 8601 date string.
-- Durations are displayed as week/day/hour/miniute/secound count, i.e `1d 5h`.
+- Durations are displayed as week/day/hour/minute/second count, i.e `1d 5h`.
 - Byte sizes value include units (`B`, `KiB`, `MiB`, `GiB`, `TiB`, `PiB`).
 - Fractions are display as percentage, i.e. 1.0 is displayed as 100%.
 
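For context, the section touched above documents the CLI table formatter. A quick way to see both renderings is to query the same API path once with the default text output and once as raw JSON; the node name below is a placeholder:

----
# default bordered table: epochs as ISO 8601 dates, durations as '1d 5h',
# sizes with units, fractions as percentages
pvesh get /nodes/<nodename>/status

# raw values (Unix epoch seconds, byte counts, 0..1 fractions), useful for scripting
pvesh get /nodes/<nodename>/status --output-format json
----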
@@ -56,7 +56,7 @@
 |===========================================================
 
 [horizontal]
-'Ceph':: Ceph Storage Cluster traffic (Ceph Monitors, OSD & MDS Deamons)
+'Ceph':: Ceph Storage Cluster traffic (Ceph Monitors, OSD & MDS Daemons)
 
 [width="100%",options="header"]
 |===========================================================
@@ -465,7 +465,7 @@ VM/CT incoming/outgoing DROP/REJECT
 This drops or rejects all the traffic to the VMs, with some exceptions for
 DHCP, NDP, Router Advertisement, MAC and IP filtering depending on the set
 configuration. The same rules for dropping/rejecting packets are inherited
-from the datacenter, while the exceptions for accepted incomming/outgoing
+from the datacenter, while the exceptions for accepted incoming/outgoing
 traffic of the host do not apply.
 
 Again, you can use xref:pve_firewall_iptables_inspect[iptables-save (see above)]
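As a rough illustration of that inspection step, the generated ruleset can be dumped on the host and filtered for a guest's network device; the VM ID and device name below are placeholders:

----
# dump all rules generated by pve-firewall and look at the chains for VM 123
iptables-save | grep 'tap123i0'
----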
@@ -224,7 +224,7 @@ Advanced ZFS Configuration Options
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 The installer creates the ZFS pool `rpool`. No swap space is created but you can
 reserve some unpartitioned space on the install disks for swap. You can also
-create a swap zvol after the installation, altough this can lead to problems.
+create a swap zvol after the installation, although this can lead to problems.
 (see <<zfs_swap,ZFS swap notes>>).
 
 `ashift`::
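The swap zvol mentioned in this hunk is usually created along these lines after installation; the 4 GiB size and the property choices are illustrative, not mandated by the installer:

----
# create a zvol sized for swap and with caching/compression tuned down
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
    -o sync=always -o primarycache=metadata -o secondarycache=none \
    -o com.sun:auto-snapshot=false rpool/swap
mkswap -f /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap
----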
@@ -250,7 +250,7 @@ semantics, and why this does not replace redundancy on disk-level.
 `hdsize`::
 
 Defines the total hard disk size to be used. This is useful to save free space
-on the hard disk(s) for further partitioning (for exmaple to create a
+on the hard disk(s) for further partitioning (for example to create a
 swap-partition). `hdsize` is only honored for bootable disks, that is only the
 first disk or mirror for RAID0, RAID1 or RAID10, and all disks in RAID-Z[123].
 
@@ -101,7 +101,7 @@ services on the same network and may even break the {pve} cluster stack.
 
 Further, estimate your bandwidth needs. While one HDD might not saturate a 1 Gb
 link, multiple HDD OSDs per node can, and modern NVMe SSDs will even saturate
-10 Gbps of bandwidth quickly. Deploying a network capable of even more bandwith
+10 Gbps of bandwidth quickly. Deploying a network capable of even more bandwidth
 will ensure that it isn't your bottleneck and won't be anytime soon, 25, 40 or
 even 100 GBps are possible.
 
@@ -378,7 +378,7 @@ pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
 ----
 
 You can directly choose the size for those with the '-db_size' and '-wal_size'
-paremeters respectively. If they are not given the following values (in order)
+parameters respectively. If they are not given the following values (in order)
 will be used:
 
 * bluestore_block_{db,wal}_size from ceph configuration...
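Using the parameters named in this hunk, an explicit sizing might look like the following sketch; the device names and sizes are placeholders:

----
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -db_size <size> -wal_dev /dev/sd[Z] -wal_size <size>
----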
@@ -730,7 +730,7 @@ resolve to can be changed without touching corosync or the node it runs on -
 which may lead to a situation where an address is changed without thinking
 about implications for corosync.
 
-A seperate, static hostname specifically for corosync is recommended, if
+A separate, static hostname specifically for corosync is recommended, if
 hostnames are preferred. Also, make sure that every node in the cluster can
 resolve all hostnames correctly.
 
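One way to provide such a separate, static hostname is a dedicated entry per node in `/etc/hosts` on every cluster member; the names and addresses below are made up for illustration:

----
# /etc/hosts
10.10.10.1  corosync1
10.10.10.2  corosync2
10.10.10.3  corosync3
----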
@@ -739,7 +739,7 @@ entry. Only the resolved IP is then saved to the configuration.
 
 Nodes that joined the cluster on earlier versions likely still use their
 unresolved hostname in `corosync.conf`. It might be a good idea to replace
-them with IPs or a seperate hostname, as mentioned above.
+them with IPs or a separate hostname, as mentioned above.
 
 
 [[pvecm_redundancy]]
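A node entry in `corosync.conf` that already uses a resolved IP rather than a hostname looks roughly like this; the values are illustrative:

----
node {
  name: node1
  nodeid: 1
  quorum_votes: 1
  ring0_addr: 10.10.10.1
}
----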
@@ -758,7 +758,7 @@ physical network connection.
 
 Links are used according to a priority setting. You can configure this priority
 by setting 'knet_link_priority' in the corresponding interface section in
-`corosync.conf`, or, preferrably, using the 'priority' parameter when creating
+`corosync.conf`, or, preferably, using the 'priority' parameter when creating
 your cluster with `pvecm`:
 
 ----
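The hunk ends right before such a `pvecm` example; a sketch of what it could look like, with made-up cluster name, addresses and priority values:

----
pvecm create CLUSTERNAME --link0 10.10.10.1,priority=15 --link1 10.20.20.1,priority=20
----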
@@ -888,7 +888,7 @@ example 2+1 nodes).
 QDevice Technical Overview
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The Corosync Quroum Device (QDevice) is a daemon which runs on each cluster
+The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
 node. It provides a configured number of votes to the clusters quorum
 subsystem based on an external running third-party arbitrator's decision.
 Its primary use is to allow a cluster to sustain more node failures than
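For reference, such an external arbitrator is attached to an existing cluster via `pvecm`; the address below is a placeholder:

----
pvecm qdevice setup <QDEVICE-IP>
----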
@@ -126,7 +126,7 @@ qm set 123 --ipconfig0 ip=10.0.10.123/24,gw=10.0.10.1
 ----
 
 You can also configure all the Cloud-Init options using a single command
-only. We have simply splitted the above example to separate the
+only. We have simply split the above example to separate the
 commands for reducing the line length. Also make sure to adopt the IP
 setup for your specific environment.
 
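A sketch of what such a single combined call could look like; the additional options shown here are illustrative and not necessarily the ones used in the surrounding example:

----
qm set 123 --ipconfig0 ip=10.0.10.123/24,gw=10.0.10.1 --nameserver 10.0.10.1 --ciuser demo
----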
qm.adoc
@@ -144,7 +144,7 @@ hardware layout of the VM's virtual motherboard. You can choose between the
 default https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the
 https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
 chipset, which also provides a virtual PCIe bus, and thus may be desired if
-one want's to pass through PCIe hardware.
+one wants to pass through PCIe hardware.
 
 [[qm_hard_disk]]
 Hard Disk
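Switching an existing VM to the Q35 machine type is done via `qm set`; the VM ID below is a placeholder:

----
qm set <vmid> --machine q35
----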