Mirror of https://git.proxmox.com/git/pve-docs, synced 2025-07-12 21:52:32 +00:00
formatting cleanup

* reformat most existing 'text' as either `text` or ``text''
* reformat most existing "text" as either ``text'' or `text`
* reformat 'x' as `x`
* harmonize bullet list syntax to use '*' instead of '-'
This commit is contained in: parent 128b18c0e5, commit 8c1189b640
@@ -12,7 +12,7 @@ datacenter.cfg - Proxmox VE Datacenter Configuration
 SYNOPSYS
 --------
 
-'/etc/pve/datacenter.cfg'
+`/etc/pve/datacenter.cfg`
 
 
 DESCRIPTION
@@ -25,7 +25,7 @@ Datacenter Configuration
 include::attributes.txt[]
 endif::manvolnum[]
 
-The file '/etc/pve/datacenter.cfg' is a configuration file for
+The file `/etc/pve/datacenter.cfg` is a configuration file for
 {pve}. It contains cluster wide default values used by all nodes.
 
 File Format
@@ -36,7 +36,7 @@ the following format:
 
 OPTION: value
 
-Blank lines in the file are ignored, and lines starting with a '#'
+Blank lines in the file are ignored, and lines starting with a `#`
 character are treated as comments and are also ignored.
 
 
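For illustration, a `datacenter.cfg` in this format might look like the following sketch; the `keyboard` and `http_proxy` option names come from the option reference, while the values here are placeholders only:

----
# /etc/pve/datacenter.cfg
keyboard: en-us
http_proxy: http://proxy.example.com:3128
----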
@@ -57,43 +57,41 @@ sometimes impossible because you cannot modify the software
 yourself. The following solutions works without modifying the
 software:
 
-* Use reliable "server" components
+* Use reliable ``server'' components
 
 NOTE: Computer components with same functionality can have varying
 reliability numbers, depending on the component quality. Most vendors
-sell components with higher reliability as "server" components -
+sell components with higher reliability as ``server'' components -
 usually at higher price.
 
 * Eliminate single point of failure (redundant components)
-
-- use an uninterruptible power supply (UPS)
-- use redundant power supplies on the main boards
-- use ECC-RAM
-- use redundant network hardware
-- use RAID for local storage
-- use distributed, redundant storage for VM data
+** use an uninterruptible power supply (UPS)
+** use redundant power supplies on the main boards
+** use ECC-RAM
+** use redundant network hardware
+** use RAID for local storage
+** use distributed, redundant storage for VM data
 
 * Reduce downtime
-
-- rapidly accessible administrators (24/7)
-- availability of spare parts (other nodes in a {pve} cluster)
-- automatic error detection ('ha-manager')
-- automatic failover ('ha-manager')
+** rapidly accessible administrators (24/7)
+** availability of spare parts (other nodes in a {pve} cluster)
+** automatic error detection (provided by `ha-manager`)
+** automatic failover (provided by `ha-manager`)
 
 Virtualization environments like {pve} make it much easier to reach
-high availability because they remove the "hardware" dependency. They
+high availability because they remove the ``hardware'' dependency. They
 also support to setup and use redundant storage and network
 devices. So if one host fail, you can simply start those services on
 another host within your cluster.
 
-Even better, {pve} provides a software stack called 'ha-manager',
+Even better, {pve} provides a software stack called `ha-manager`,
 which can do that automatically for you. It is able to automatically
 detect errors and do automatic failover.
 
-{pve} 'ha-manager' works like an "automated" administrator. First, you
+{pve} `ha-manager` works like an ``automated'' administrator. First, you
 configure what resources (VMs, containers, ...) it should
-manage. 'ha-manager' then observes correct functionality, and handles
-service failover to another node in case of errors. 'ha-manager' can
+manage. `ha-manager` then observes correct functionality, and handles
+service failover to another node in case of errors. `ha-manager` can
 also handle normal user requests which may start, stop, relocate and
 migrate a service.
 
@@ -105,7 +103,7 @@ costs.
 
 TIP: Increasing availability from 99% to 99.9% is relatively
 simply. But increasing availability from 99.9999% to 99.99999% is very
-hard and costly. 'ha-manager' has typical error detection and failover
+hard and costly. `ha-manager` has typical error detection and failover
 times of about 2 minutes, so you can get no more than 99.999%
 availability.
 
@@ -119,7 +117,7 @@ Requirements
 * hardware redundancy (everywhere)
 
 * hardware watchdog - if not available we fall back to the
-  linux kernel software watchdog ('softdog')
+  linux kernel software watchdog (`softdog`)
 
 * optional hardware fencing devices
 
@@ -127,16 +125,16 @@ Requirements
 Resources
 ---------
 
-We call the primary management unit handled by 'ha-manager' a
-resource. A resource (also called "service") is uniquely
+We call the primary management unit handled by `ha-manager` a
+resource. A resource (also called ``service'') is uniquely
 identified by a service ID (SID), which consists of the resource type
-and an type specific ID, e.g.: 'vm:100'. That example would be a
-resource of type 'vm' (virtual machine) with the ID 100.
+and an type specific ID, e.g.: `vm:100`. That example would be a
+resource of type `vm` (virtual machine) with the ID 100.
 
 For now we have two important resources types - virtual machines and
 containers. One basic idea here is that we can bundle related software
 into such VM or container, so there is no need to compose one big
-service from other services, like it was done with 'rgmanager'. In
+service from other services, like it was done with `rgmanager`. In
 general, a HA enabled resource should not depend on other resources.
 
 
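A hedged sketch of how such a SID is used on the command line (assuming VM 100 already exists and the HA stack is set up):

----
# ha-manager add vm:100
# ha-manager status
----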
@@ -148,14 +146,14 @@ internals. It describes how the CRM and the LRM work together.
 
 To provide High Availability two daemons run on each node:
 
-'pve-ha-lrm'::
+`pve-ha-lrm`::
 
 The local resource manager (LRM), it controls the services running on
 the local node.
 It reads the requested states for its services from the current manager
 status file and executes the respective commands.
 
-'pve-ha-crm'::
+`pve-ha-crm`::
 
 The cluster resource manager (CRM), it controls the cluster wide
 actions of the services, processes the LRM results and includes the state
@@ -174,7 +172,7 @@ lock.
 Local Resource Manager
 ~~~~~~~~~~~~~~~~~~~~~~
 
-The local resource manager ('pve-ha-lrm') is started as a daemon on
+The local resource manager (`pve-ha-lrm`) is started as a daemon on
 boot and waits until the HA cluster is quorate and thus cluster wide
 locks are working.
 
@@ -187,11 +185,11 @@ It can be in three states:
 and quorum was lost.
 
 After the LRM gets in the active state it reads the manager status
-file in '/etc/pve/ha/manager_status' and determines the commands it
+file in `/etc/pve/ha/manager_status` and determines the commands it
 has to execute for the services it owns.
 For each command a worker gets started, this workers are running in
 parallel and are limited to maximal 4 by default. This default setting
-may be changed through the datacenter configuration key "max_worker".
+may be changed through the datacenter configuration key `max_worker`.
 When finished the worker process gets collected and its result saved for
 the CRM.
 
@@ -201,12 +199,12 @@ The default value of 4 maximal concurrent Workers may be unsuited for
 a specific setup. For example may 4 live migrations happen at the same
 time, which can lead to network congestions with slower networks and/or
 big (memory wise) services. Ensure that also in the worst case no congestion
-happens and lower the "max_worker" value if needed. In the contrary, if you
+happens and lower the `max_worker` value if needed. In the contrary, if you
 have a particularly powerful high end setup you may also want to increase it.
 
 Each command requested by the CRM is uniquely identifiable by an UID, when
 the worker finished its result will be processed and written in the LRM
-status file '/etc/pve/nodes/<nodename>/lrm_status'. There the CRM may collect
+status file `/etc/pve/nodes/<nodename>/lrm_status`. There the CRM may collect
 it and let its state machine - respective the commands output - act on it.
 
 The actions on each service between CRM and LRM are normally always synced.
@@ -214,7 +212,7 @@ This means that the CRM requests a state uniquely marked by an UID, the LRM
 then executes this action *one time* and writes back the result, also
 identifiable by the same UID. This is needed so that the LRM does not
 executes an outdated command.
-With the exception of the 'stop' and the 'error' command,
+With the exception of the `stop` and the `error` command,
 those two do not depend on the result produced and are executed
 always in the case of the stopped state and once in the case of
 the error state.
@@ -230,7 +228,7 @@ the same command for the pve-ha-crm on the node which is the current master.
 Cluster Resource Manager
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-The cluster resource manager ('pve-ha-crm') starts on each node and
+The cluster resource manager (`pve-ha-crm`) starts on each node and
 waits there for the manager lock, which can only be held by one node
 at a time. The node which successfully acquires the manager lock gets
 promoted to the CRM master.
@@ -261,12 +259,12 @@ Configuration
 -------------
 
 The HA stack is well integrated in the Proxmox VE API2. So, for
-example, HA can be configured via 'ha-manager' or the PVE web
+example, HA can be configured via `ha-manager` or the PVE web
 interface, which both provide an easy to use tool.
 
 The resource configuration file can be located at
-'/etc/pve/ha/resources.cfg' and the group configuration file at
-'/etc/pve/ha/groups.cfg'. Use the provided tools to make changes,
+`/etc/pve/ha/resources.cfg` and the group configuration file at
+`/etc/pve/ha/groups.cfg`. Use the provided tools to make changes,
 there shouldn't be any need to edit them manually.
 
 Node Power Status
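As an illustrative sketch only (the exact keys vary by version), an entry the tools write to `/etc/pve/ha/resources.cfg` might look like:

----
vm: 100
	group mygroup
	state enabled
----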
@@ -347,7 +345,7 @@ Configure Hardware Watchdog
 By default all watchdog modules are blocked for security reasons as they are
 like a loaded gun if not correctly initialized.
 If you have a hardware watchdog available remove its kernel module from the
-blacklist, load it with insmod and restart the 'watchdog-mux' service or reboot
+blacklist, load it with insmod and restart the `watchdog-mux` service or reboot
 the node.
 
 Recover Fenced Services
@@ -449,7 +447,7 @@ Service Operations
 ------------------
 
 This are how the basic user-initiated service operations (via
-'ha-manager') work.
+`ha-manager`) work.
 
 enable::
 
@@ -470,9 +468,9 @@ current state will not be touched.
 
 start/stop::
 
-start and stop commands can be issued to the resource specific tools
-(like 'qm' or 'pct'), they will forward the request to the
-'ha-manager' which then will execute the action and set the resulting
+`start` and `stop` commands can be issued to the resource specific tools
+(like `qm` or `pct`), they will forward the request to the
+`ha-manager` which then will execute the action and set the resulting
 service state (enabled, disabled).
 
 
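For example, for an HA-managed VM and container (the IDs are placeholders), the regular tools can be used and the request is forwarded to the HA stack:

----
# qm start 100
# pct stop 200
----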
@@ -82,9 +82,9 @@ Configuration Options
 [width="100%",options="header"]
 |===========================================================
 | File name |Download link
-| '/etc/pve/datacenter.cfg' | link:datacenter.cfg.5.html[datacenter.cfg.5]
-| '/etc/pve/qemu-server/<VMID>.conf' | link:qm.conf.5.html[qm.conf.5]
-| '/etc/pve/lxc/<CTID>.conf' | link:pct.conf.5.html[pct.conf.5]
+| `/etc/pve/datacenter.cfg` | link:datacenter.cfg.5.html[datacenter.cfg.5]
+| `/etc/pve/qemu-server/<VMID>.conf` | link:qm.conf.5.html[qm.conf.5]
+| `/etc/pve/lxc/<CTID>.conf` | link:pct.conf.5.html[pct.conf.5]
 |===========================================================
 
 
@@ -6,7 +6,7 @@ Most people install {pve} directly on a local disk. The {pve}
 installation CD offers several options for local disk management, and
 the current default setup uses LVM. The installer let you select a
 single disk for such setup, and uses that disk as physical volume for
-the **V**olume **G**roup (VG) 'pve'. The following output is from a
+the **V**olume **G**roup (VG) `pve`. The following output is from a
 test installation using a small 8GB disk:
 
 ----
@@ -30,7 +30,7 @@ VG:
 swap pve -wi-ao---- 896.00m
 ----
 
-root:: Formatted as 'ext4', and contains the operation system.
+root:: Formatted as `ext4`, and contains the operation system.
 
 swap:: Swap partition
 
@@ -64,12 +64,12 @@ increase the overall performance significantly.
 IMPORTANT: Do not use ZFS on top of hardware controller which has it's
 own cache management. ZFS needs to directly communicate with disks. An
 HBA adapter is the way to go, or something like LSI controller flashed
-in 'IT' mode.
+in ``IT'' mode.
 
 If you are experimenting with an installation of {pve} inside a VM
-(Nested Virtualization), don't use 'virtio' for disks of that VM,
+(Nested Virtualization), don't use `virtio` for disks of that VM,
 since they are not supported by ZFS. Use IDE or SCSI instead (works
-also with 'virtio' SCSI controller type).
+also with `virtio` SCSI controller type).
 
 
 Installation as root file system
@@ -80,11 +80,11 @@ root file system. You need to select the RAID type at installation
 time:
 
 [horizontal]
-RAID0:: Also called 'striping'. The capacity of such volume is the sum
-of the capacity of all disks. But RAID0 does not add any redundancy,
+RAID0:: Also called ``striping''. The capacity of such volume is the sum
+of the capacities of all disks. But RAID0 does not add any redundancy,
 so the failure of a single drive makes the volume unusable.
 
-RAID1:: Also called mirroring. Data is written identically to all
+RAID1:: Also called ``mirroring''. Data is written identically to all
 disks. This mode requires at least 2 disks with the same size. The
 resulting capacity is that of a single disk.
 
@@ -97,12 +97,12 @@ RAIDZ-2:: A variation on RAID-5, double parity. Requires at least 4 disks.
 RAIDZ-3:: A variation on RAID-5, triple parity. Requires at least 5 disks.
 
 The installer automatically partitions the disks, creates a ZFS pool
-called 'rpool', and installs the root file system on the ZFS subvolume
-'rpool/ROOT/pve-1'.
+called `rpool`, and installs the root file system on the ZFS subvolume
+`rpool/ROOT/pve-1`.
 
-Another subvolume called 'rpool/data' is created to store VM
+Another subvolume called `rpool/data` is created to store VM
 images. In order to use that with the {pve} tools, the installer
-creates the following configuration entry in '/etc/pve/storage.cfg':
+creates the following configuration entry in `/etc/pve/storage.cfg`:
 
 ----
 zfspool: local-zfs
@@ -112,7 +112,7 @@ zfspool: local-zfs
 ----
 
 After installation, you can view your ZFS pool status using the
-'zpool' command:
+`zpool` command:
 
 ----
 # zpool status
@@ -133,7 +133,7 @@ config:
 errors: No known data errors
 ----
 
-The 'zfs' command is used configure and manage your ZFS file
+The `zfs` command is used configure and manage your ZFS file
 systems. The following command lists all file systems after
 installation:
 
@@ -167,8 +167,8 @@ ZFS Administration
 
 This section gives you some usage examples for common tasks. ZFS
 itself is really powerful and provides many options. The main commands
-to manage ZFS are 'zfs' and 'zpool'. Both commands comes with great
-manual pages, worth to read:
+to manage ZFS are `zfs` and `zpool`. Both commands come with great
+manual pages, which can be read with:
 
 ----
 # man zpool
@@ -177,8 +177,8 @@ manual pages, worth to read:
 
 .Create a new ZPool
 
-To create a new pool, at least one disk is needed. The 'ashift' should
-have the same sector-size (2 power of 'ashift') or larger as the
+To create a new pool, at least one disk is needed. The `ashift` should
+have the same sector-size (2 power of `ashift`) or larger as the
 underlying disk.
 
 	zpool create -f -o ashift=12 <pool> <device>
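To make the exponent concrete: `ashift=12` selects 2^12 = 4096-byte sectors, matching modern 4K (``Advanced Format'') disks, while `ashift=9` would select 2^9 = 512 bytes. A hedged example with a hypothetical pool name and device:

----
# zpool create -f -o ashift=12 tank /dev/sdb
----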
@@ -222,7 +222,7 @@ Minimum 4 Disks
 It is possible to use a dedicated cache drive partition to increase
 the performance (use SSD).
 
-As '<device>' it is possible to use more devices, like it's shown in
+As `<device>` it is possible to use more devices, like it's shown in
 "Create a new pool with RAID*".
 
 	zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
@@ -232,7 +232,7 @@ As '<device>' it is possible to use more devices, like it's shown in
 It is possible to use a dedicated cache drive partition to increase
 the performance(SSD).
 
-As '<device>' it is possible to use more devices, like it's shown in
+As `<device>` it is possible to use more devices, like it's shown in
 "Create a new pool with RAID*".
 
 	zpool create -f -o ashift=12 <pool> <device> log <log_device>
@@ -240,7 +240,7 @@ As '<device>' it is possible to use more devices, like it's shown in
 .Add Cache and Log to an existing pool
 
 If you have an pool without cache and log. First partition the SSD in
-2 partition with parted or gdisk
+2 partition with `parted` or `gdisk`
 
 IMPORTANT: Always use GPT partition tables (gdisk or parted).
 
@@ -262,14 +262,15 @@ ZFS comes with an event daemon, which monitors events generated by the
 ZFS kernel module. The daemon can also send E-Mails on ZFS event like
 pool errors.
 
-To activate the daemon it is necessary to edit /etc/zfs/zed.d/zed.rc with your favored editor, and uncomment the 'ZED_EMAIL_ADDR' setting:
+To activate the daemon it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
+favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:
 
 	ZED_EMAIL_ADDR="root"
 
-Please note {pve} forwards mails to 'root' to the email address
+Please note {pve} forwards mails to `root` to the email address
 configured for the root user.
 
-IMPORTANT: the only settings that is required is ZED_EMAIL_ADDR. All
+IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
 other settings are optional.
 
 
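A minimal sketch for verifying the resulting setting; the service name `zfs-zed` is an assumption about the local ZFS packaging:

----
# grep '^ZED_EMAIL_ADDR' /etc/zfs/zed.d/zed.rc
ZED_EMAIL_ADDR="root"
# systemctl restart zfs-zed
----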
@@ -279,7 +280,7 @@ Limit ZFS memory usage
 It is good to use maximal 50 percent (which is the default) of the
 system memory for ZFS ARC to prevent performance shortage of the
 host. Use your preferred editor to change the configuration in
-/etc/modprobe.d/zfs.conf and insert:
+`/etc/modprobe.d/zfs.conf` and insert:
 
 	options zfs zfs_arc_max=8589934592
 
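The value is given in bytes; 8589934592 is exactly 8 GiB (8 * 1024^3), which a quick shell check confirms:

----
# echo $((8 * 1024 * 1024 * 1024))
8589934592
----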
@@ -302,16 +303,16 @@ to an external Storage.
 
 We strongly recommend to use enough memory, so that you normally do not
 run into low memory situations. Additionally, you can lower the
-'swappiness' value. A good value for servers is 10:
+``swappiness'' value. A good value for servers is 10:
 
 	sysctl -w vm.swappiness=10
 
-To make the swappiness persistence, open '/etc/sysctl.conf' with
+To make the swappiness persistent, open `/etc/sysctl.conf` with
 an editor of your choice and add the following line:
 
 	vm.swappiness = 10
 
-.Linux Kernel 'swappiness' parameter values
+.Linux kernel `swappiness` parameter values
 [width="100%",cols="<m,2d",options="header"]
 |===========================================================
 | Value | Strategy
pct.adoc (116 changed lines)
@@ -101,9 +101,9 @@ unprivileged containers are safe by design.
 Configuration
 -------------
 
-The '/etc/pve/lxc/<CTID>.conf' file stores container configuration,
-where '<CTID>' is the numeric ID of the given container. Like all
-other files stored inside '/etc/pve/', they get automatically
+The `/etc/pve/lxc/<CTID>.conf` file stores container configuration,
+where `<CTID>` is the numeric ID of the given container. Like all
+other files stored inside `/etc/pve/`, they get automatically
 replicated to all other cluster nodes.
 
 NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
@@ -121,11 +121,11 @@ rootfs: local:107/vm-107-disk-1.raw,size=7G
 ----
 
 Those configuration files are simple text files, and you can edit them
-using a normal text editor ('vi', 'nano', ...). This is sometimes
+using a normal text editor (`vi`, `nano`, ...). This is sometimes
 useful to do small corrections, but keep in mind that you need to
 restart the container to apply such changes.
 
-For that reason, it is usually better to use the 'pct' command to
+For that reason, it is usually better to use the `pct` command to
 generate and modify those files, or do the whole thing using the GUI.
 Our toolkit is smart enough to instantaneously apply most changes to
 running containers. This feature is called "hot plug", and there is no
@@ -140,7 +140,7 @@ format. Each line has the following format:
 # this is a comment
 OPTION: value
 
-Blank lines in those files are ignored, and lines starting with a '#'
+Blank lines in those files are ignored, and lines starting with a `#`
 character are treated as comments and are also ignored.
 
 It is possible to add low-level, LXC style configuration directly, for
@@ -157,9 +157,9 @@ Those settings are directly passed to the LXC low-level tools.
 Snapshots
 ~~~~~~~~~
 
-When you create a snapshot, 'pct' stores the configuration at snapshot
+When you create a snapshot, `pct` stores the configuration at snapshot
 time into a separate snapshot section within the same configuration
-file. For example, after creating a snapshot called 'testsnapshot',
+file. For example, after creating a snapshot called ``testsnapshot'',
 your configuration file will look like this:
 
 .Container Configuration with Snapshot
@@ -176,10 +176,11 @@ snaptime: 1457170803
 ...
 ----
 
-There are a few snapshot related properties like 'parent' and
-'snaptime'. The 'parent' property is used to store the parent/child
-relationship between snapshots. 'snaptime' is the snapshot creation
-time stamp (unix epoch).
+There are a few snapshot related properties like `parent` and
+`snaptime`. The `parent` property is used to store the parent/child
+relationship between snapshots. `snaptime` is the snapshot creation
+time stamp (Unix epoch).
+
 
 Guest Operating System Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -226,12 +227,12 @@ simple empty file creatd via:
 
 Most modifications are OS dependent, so they differ between different
 distributions and versions. You can completely disable modifications
-by manually setting the 'ostype' to 'unmanaged'.
+by manually setting the `ostype` to `unmanaged`.
 
 OS type detection is done by testing for certain files inside the
 container:
 
-Ubuntu:: inspect /etc/lsb-release ('DISTRIB_ID=Ubuntu')
+Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)
 
 Debian:: test /etc/debian_version
 
@@ -245,7 +246,7 @@ Alpine:: test /etc/alpine-release
 
 Gentoo:: test /etc/gentoo-release
 
-NOTE: Container start fails if the configured 'ostype' differs from the auto
+NOTE: Container start fails if the configured `ostype` differs from the auto
 detected type.
 
 Options
@@ -257,16 +258,16 @@ include::pct.conf.5-opts.adoc[]
 Container Images
 ----------------
 
-Container Images, sometimes also referred to as "templates" or
-"appliances", are 'tar' archives which contain everything to run a
+Container images, sometimes also referred to as ``templates'' or
+``appliances'', are `tar` archives which contain everything to run a
 container. You can think of it as a tidy container backup. Like most
-modern container toolkits, 'pct' uses those images when you create a
+modern container toolkits, `pct` uses those images when you create a
 new container, for example:
 
 	pct create 999 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz
 
 Proxmox itself ships a set of basic templates for most common
-operating systems, and you can download them using the 'pveam' (short
+operating systems, and you can download them using the `pveam` (short
 for {pve} Appliance Manager) command line utility. You can also
 download https://www.turnkeylinux.org/[TurnKey Linux] containers using
 that tool (or the graphical user interface).
@@ -281,8 +282,8 @@ After that you can view the list of available images using:
 
 	pveam available
 
-You can restrict this large list by specifying the 'section' you are
-interested in, for example basic 'system' images:
+You can restrict this large list by specifying the `section` you are
+interested in, for example basic `system` images:
 
 .List available system images
 ----
@@ -299,14 +300,14 @@ system ubuntu-15.10-standard_15.10-1_amd64.tar.gz
 ----
 
 Before you can use such a template, you need to download them into one
-of your storages. You can simply use storage 'local' for that
+of your storages. You can simply use storage `local` for that
 purpose. For clustered installations, it is preferred to use a shared
 storage so that all nodes can access those images.
 
 	pveam download local debian-8.0-standard_8.0-1_amd64.tar.gz
 
 You are now ready to create containers using that image, and you can
-list all downloaded images on storage 'local' with:
+list all downloaded images on storage `local` with:
 
 ----
 # pveam list local
@@ -325,8 +326,8 @@ Container Storage
 
 Traditional containers use a very simple storage model, only allowing
 a single mount point, the root file system. This was further
-restricted to specific file system types like 'ext4' and 'nfs'.
-Additional mounts are often done by user provided scripts. This turend
+restricted to specific file system types like `ext4` and `nfs`.
+Additional mounts are often done by user provided scripts. This turned
 out to be complex and error prone, so we try to avoid that now.
 
 Our new LXC based container model is more flexible regarding
@@ -339,9 +340,9 @@ application.
 
 The second big improvement is that you can use any storage type
 supported by the {pve} storage library. That means that you can store
-your containers on local 'lvmthin' or 'zfs', shared 'iSCSI' storage,
-or even on distributed storage systems like 'ceph'. It also enables us
-to use advanced storage features like snapshots and clones. 'vzdump'
+your containers on local `lvmthin` or `zfs`, shared `iSCSI` storage,
+or even on distributed storage systems like `ceph`. It also enables us
+to use advanced storage features like snapshots and clones. `vzdump`
 can also use the snapshot feature to provide consistent container
 backups.
 
@@ -398,7 +399,7 @@ cannot make snapshots or deal with quotas from inside the container. With
 unprivileged containers you might run into permission problems caused by the
 user mapping and cannot use ACLs.
 
-NOTE: The contents of bind mount points are not backed up when using 'vzdump'.
+NOTE: The contents of bind mount points are not backed up when using `vzdump`.
 
 WARNING: For security reasons, bind mounts should only be established
 using source directories especially reserved for this purpose, e.g., a
@@ -410,8 +411,8 @@ NOTE: The bind mount source path must not contain any symlinks.
 
 For example, to make the directory `/mnt/bindmounts/shared` accessible in the
 container with ID `100` under the path `/shared`, use a configuration line like
-'mp0: /mnt/bindmounts/shared,mp=/shared' in '/etc/pve/lxc/100.conf'.
-Alternatively, use 'pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared' to
+`mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
+Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
 achieve the same result.
 
 
@@ -426,7 +427,7 @@ NOTE: Device mount points should only be used under special circumstances. In
 most cases a storage backed mount point offers the same performance and a lot
 more features.
 
-NOTE: The contents of device mount points are not backed up when using 'vzdump'.
+NOTE: The contents of device mount points are not backed up when using `vzdump`.
 
 
 FUSE mounts
@@ -481,7 +482,7 @@ Container Network
 -----------------
 
 You can configure up to 10 network interfaces for a single
-container. The corresponding options are called 'net0' to 'net9', and
+container. The corresponding options are called `net0` to `net9`, and
 they can contain the following setting:
 
 include::pct-network-opts.adoc[]
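A sketch of one such option as it would appear in a container configuration file (addresses are placeholders, mirroring the `pct set` example later in this document):

----
net0: name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
----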
@@ -493,27 +494,28 @@ Backup and Restore
 Container Backup
 ~~~~~~~~~~~~~~~~
 
-It is possible to use the 'vzdump' tool for container backup. Please
-refer to the 'vzdump' manual page for details.
+It is possible to use the `vzdump` tool for container backup. Please
+refer to the `vzdump` manual page for details.
 
 
 Restoring Container Backups
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Restoring container backups made with 'vzdump' is possible using the
-'pct restore' command. By default, 'pct restore' will attempt to restore as much
+Restoring container backups made with `vzdump` is possible using the
+`pct restore` command. By default, `pct restore` will attempt to restore as much
 of the backed up container configuration as possible. It is possible to override
 the backed up configuration by manually setting container options on the command
-line (see the 'pct' manual page for details).
+line (see the `pct` manual page for details).
 
-NOTE: 'pvesm extractconfig' can be used to view the backed up configuration
+NOTE: `pvesm extractconfig` can be used to view the backed up configuration
 contained in a vzdump archive.
 
 There are two basic restore modes, only differing by their handling of mount
 points:
 
 
-"Simple" restore mode
-^^^^^^^^^^^^^^^^^^^^^
+``Simple'' Restore Mode
+^^^^^^^^^^^^^^^^^^^^^^^
 
 If neither the `rootfs` parameter nor any of the optional `mpX` parameters
 are explicitly set, the mount point configuration from the backed up
@@ -535,11 +537,11 @@ This simple mode is also used by the container restore operations in the web
 interface.
 
 
-"Advanced" restore mode
-^^^^^^^^^^^^^^^^^^^^^^^
+``Advanced'' Restore Mode
+^^^^^^^^^^^^^^^^^^^^^^^^^
 
 By setting the `rootfs` parameter (and optionally, any combination of `mpX`
-parameters), the 'pct restore' command is automatically switched into an
+parameters), the `pct restore` command is automatically switched into an
 advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
 configuration options contained in the backup archive, and instead only
 uses the options explicitly provided as parameters.
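A hedged sketch of an advanced-mode invocation; the archive name, target storage and size are placeholders, so check the `pct` manual page for the exact syntax:

----
# pct restore 100 local:backup/vzdump-lxc-100.tar -rootfs local:8
----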
@@ -553,10 +555,10 @@ individually
 * Restore to device and/or bind mount points (limited to root user)
 
 
-Managing Containers with 'pct'
+Managing Containers with `pct`
 ------------------------------
 
-'pct' is the tool to manage Linux Containers on {pve}. You can create
+`pct` is the tool to manage Linux Containers on {pve}. You can create
 and destroy containers, and control execution (start, stop, migrate,
 ...). You can use pct to set parameters in the associated config file,
 like network configuration or memory limits.
@@ -585,7 +587,7 @@ Display the configuration
 
 	pct config 100
 
-Add a network interface called eth0, bridged to the host bridge vmbr0,
+Add a network interface called `eth0`, bridged to the host bridge `vmbr0`,
 set the address and gateway, while it's running
 
 	pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
@@ -598,7 +600,7 @@ Reduce the memory of the container to 512MB
 Files
 ------
 
-'/etc/pve/lxc/<CTID>.conf'::
+`/etc/pve/lxc/<CTID>.conf`::
 
 Configuration file for the container '<CTID>'.
 
@@ -606,24 +608,24 @@ Configuration file for the container '<CTID>'.
 Container Advantages
 --------------------
 
-- Simple, and fully integrated into {pve}. Setup looks similar to a normal
+* Simple, and fully integrated into {pve}. Setup looks similar to a normal
 VM setup.
 
-* Storage (ZFS, LVM, NFS, Ceph, ...)
+** Storage (ZFS, LVM, NFS, Ceph, ...)
 
-* Network
+** Network
 
-* Authentification
+** Authentication
 
-* Cluster
+** Cluster
 
-- Fast: minimal overhead, as fast as bare metal
+* Fast: minimal overhead, as fast as bare metal
 
-- High density (perfect for idle workloads)
+* High density (perfect for idle workloads)
 
-- REST API
+* REST API
 
-- Direct hardware access
+* Direct hardware access
 
 
 Technology Overview
@@ -12,7 +12,7 @@ pct.conf - Proxmox VE Container Configuration
 SYNOPSYS
 --------
 
-'/etc/pve/lxc/<CTID>.conf'
+`/etc/pve/lxc/<CTID>.conf`
 
 
 DESCRIPTION
@@ -25,8 +25,8 @@ Container Configuration
 include::attributes.txt[]
 endif::manvolnum[]
 
-The '/etc/pve/lxc/<CTID>.conf' files stores container configuration,
-where "CTID" is the numeric ID of the given container.
+The `/etc/pve/lxc/<CTID>.conf` files stores container configuration,
+where `CTID` is the numeric ID of the given container.
 
 NOTE: IDs <= 100 are reserved for internal purposes.
 
@@ -39,10 +39,10 @@ the following format:
 
 OPTION: value
 
-Blank lines in the file are ignored, and lines starting with a '#'
+Blank lines in the file are ignored, and lines starting with a `#`
 character are treated as comments and are also ignored.
 
-One can use the 'pct' command to generate and modify those files.
+One can use the `pct` command to generate and modify those files.
 
 It is also possible to add low-level lxc style configuration directly, for
 example:
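Such a low-level line is simply appended to the file; `lxc.init_cmd` is a real LXC 1.x key, while the value here is a placeholder:

----
lxc.init_cmd: /sbin/my_own_init
----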
pmxcfs.adoc (72 changed lines)
@@ -23,9 +23,9 @@ Proxmox Cluster File System (pmxcfs)
 include::attributes.txt[]
 endif::manvolnum[]
 
-The Proxmox Cluster file system (pmxcfs) is a database-driven file
+The Proxmox Cluster file system (``pmxcfs'') is a database-driven file
 system for storing configuration files, replicated in real time to all
-cluster nodes using corosync. We use this to store all PVE related
+cluster nodes using `corosync`. We use this to store all PVE related
 configuration files.
 
 Although the file system stores all data inside a persistent database
@@ -63,8 +63,8 @@ some feature are simply not implemented, because we do not need them:
 File access rights
 ------------------
 
-All files and directories are owned by user 'root' and have group
-'www-data'. Only root has write permissions, but group 'www-data' can
+All files and directories are owned by user `root` and have group
+`www-data`. Only root has write permissions, but group `www-data` can
 read most files. Files below the following paths:
 
 /etc/pve/priv/
@@ -93,25 +93,25 @@ Files
 
 [width="100%",cols="m,d"]
 |=======
-|corosync.conf |corosync cluster configuration file (previous to {pve} 4.x this file was called cluster.conf)
-|storage.cfg |{pve} storage configuration
-|datacenter.cfg |{pve} datacenter wide configuration (keyboard layout, proxy, ...)
-|user.cfg |{pve} access control configuration (users/groups/...)
-|domains.cfg |{pve} Authentication domains
-|authkey.pub | public key used by ticket system
-|pve-root-ca.pem | public certificate of cluster CA
-|priv/shadow.cfg | shadow password file
-|priv/authkey.key | private key used by ticket system
-|priv/pve-root-ca.key | private key of cluster CA
-|nodes/<NAME>/pve-ssl.pem | public ssl certificate for web server (signed by cluster CA)
-|nodes/<NAME>/pve-ssl.key | private ssl key for pve-ssl.pem
-|nodes/<NAME>/pveproxy-ssl.pem | public ssl certificate (chain) for web server (optional override for pve-ssl.pem)
-|nodes/<NAME>/pveproxy-ssl.key | private ssl key for pveproxy-ssl.pem (optional)
-|nodes/<NAME>/qemu-server/<VMID>.conf | VM configuration data for KVM VMs
-|nodes/<NAME>/lxc/<VMID>.conf | VM configuration data for LXC containers
-|firewall/cluster.fw | Firewall config applied to all nodes
-|firewall/<NAME>.fw | Firewall config for individual nodes
-|firewall/<VMID>.fw | Firewall config for VMs and Containers
+|`corosync.conf` | Corosync cluster configuration file (previous to {pve} 4.x this file was called cluster.conf)
+|`storage.cfg` | {pve} storage configuration
+|`datacenter.cfg` | {pve} datacenter wide configuration (keyboard layout, proxy, ...)
+|`user.cfg` | {pve} access control configuration (users/groups/...)
+|`domains.cfg` | {pve} authentication domains
+|`authkey.pub` | Public key used by ticket system
+|`pve-root-ca.pem` | Public certificate of cluster CA
+|`priv/shadow.cfg` | Shadow password file
+|`priv/authkey.key` | Private key used by ticket system
+|`priv/pve-root-ca.key` | Private key of cluster CA
+|`nodes/<NAME>/pve-ssl.pem` | Public SSL certificate for web server (signed by cluster CA)
+|`nodes/<NAME>/pve-ssl.key` | Private SSL key for `pve-ssl.pem`
+|`nodes/<NAME>/pveproxy-ssl.pem` | Public SSL certificate (chain) for web server (optional override for `pve-ssl.pem`)
+|`nodes/<NAME>/pveproxy-ssl.key` | Private SSL key for `pveproxy-ssl.pem` (optional)
+|`nodes/<NAME>/qemu-server/<VMID>.conf` | VM configuration data for KVM VMs
+|`nodes/<NAME>/lxc/<VMID>.conf` | VM configuration data for LXC containers
+|`firewall/cluster.fw` | Firewall configuration applied to all nodes
+|`firewall/<NAME>.fw` | Firewall configuration for individual nodes
+|`firewall/<VMID>.fw` | Firewall configuration for VMs and Containers
 |=======
 
 Symbolic links
@@ -119,9 +119,9 @@ Symbolic links
 
 [width="100%",cols="m,m"]
 |=======
-|local |nodes/<LOCAL_HOST_NAME>
-|qemu-server |nodes/<LOCAL_HOST_NAME>/qemu-server/
-|lxc |nodes/<LOCAL_HOST_NAME>/lxc/
+|`local` | `nodes/<LOCAL_HOST_NAME>`
+|`qemu-server` | `nodes/<LOCAL_HOST_NAME>/qemu-server/`
+|`lxc` | `nodes/<LOCAL_HOST_NAME>/lxc/`
 |=======
 
 Special status files for debugging (JSON)
@@ -129,11 +129,11 @@ Special status files for debugging (JSON)
 
 [width="100%",cols="m,d"]
 |=======
-| .version |file versions (to detect file modifications)
-| .members |Info about cluster members
-| .vmlist |List of all VMs
-| .clusterlog |Cluster log (last 50 entries)
-| .rrd |RRD data (most recent entries)
+|`.version` |File versions (to detect file modifications)
+|`.members` |Info about cluster members
+|`.vmlist` |List of all VMs
+|`.clusterlog` |Cluster log (last 50 entries)
+|`.rrd` |RRD data (most recent entries)
 |=======
 
 Enable/Disable debugging
@ -153,11 +153,11 @@ Recovery
|
|||||||
|
|
||||||
If you have major problems with your Proxmox VE host, e.g. hardware
|
If you have major problems with your Proxmox VE host, e.g. hardware
|
||||||
issues, it could be helpful to just copy the pmxcfs database file
|
issues, it could be helpful to just copy the pmxcfs database file
|
||||||
/var/lib/pve-cluster/config.db and move it to a new Proxmox VE
|
`/var/lib/pve-cluster/config.db` and move it to a new Proxmox VE
|
||||||
host. On the new host (with nothing running), you need to stop the
|
host. On the new host (with nothing running), you need to stop the
|
||||||
pve-cluster service and replace the config.db file (needed permissions
|
`pve-cluster` service and replace the `config.db` file (needed permissions
|
||||||
0600). Second, adapt '/etc/hostname' and '/etc/hosts' according to the
|
`0600`). Second, adapt `/etc/hostname` and `/etc/hosts` according to the
|
||||||
lost Proxmox VE host, then reboot and check. (And don´t forget your
|
lost Proxmox VE host, then reboot and check. (And don't forget your
|
||||||
VM/CT data)
|
VM/CT data)
|
||||||
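A minimal sketch of the recovery steps described above, assuming the
database file was already copied over from the failed host (the source
path `/root/config.db` is illustrative):

----
# systemctl stop pve-cluster
# cp /root/config.db /var/lib/pve-cluster/config.db
# chmod 0600 /var/lib/pve-cluster/config.db
----

Then adapt `/etc/hostname` and `/etc/hosts` as described, and reboot.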
|
|
||||||
Remove Cluster configuration
|
Remove Cluster configuration
|
||||||
@ -170,7 +170,7 @@ shared configuration data is destroyed.
|
|||||||
In some cases, you might prefer to put a node back to local mode
|
In some cases, you might prefer to put a node back to local mode
|
||||||
without reinstall, which is described here:
|
without reinstalling, which is described here:
|
||||||
|
|
||||||
* stop the cluster file system in '/etc/pve/'
|
* stop the cluster file system in `/etc/pve/`
|
||||||
|
|
||||||
# systemctl stop pve-cluster
|
# systemctl stop pve-cluster
|
||||||
|
|
||||||
|
@ -103,7 +103,7 @@ include::qm.1-synopsis.adoc[]
|
|||||||
|
|
||||||
:leveloffset: 0
|
:leveloffset: 0
|
||||||
|
|
||||||
*qmrestore* - Restore QemuServer 'vzdump' Backups
|
*qmrestore* - Restore QemuServer `vzdump` Backups
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
:leveloffset: 1
|
:leveloffset: 1
|
||||||
|
@ -28,8 +28,8 @@ NOTE: VMs and Containers can be both 32-bit and/or 64-bit.
|
|||||||
|
|
||||||
Does my CPU support virtualization?::
|
Does my CPU support virtualization?::
|
||||||
|
|
||||||
To check if your CPU is virtualization compatible, check for the "vmx"
|
To check if your CPU is virtualization compatible, check for the `vmx`
|
||||||
or "svm" tag in this command output:
|
or `svm` tag in this command output:
|
||||||
+
|
+
|
||||||
----
|
----
|
||||||
egrep '(vmx|svm)' /proc/cpuinfo
|
egrep '(vmx|svm)' /proc/cpuinfo
|
||||||
@ -96,14 +96,14 @@ complete OS inside a container, where you log in as ssh, add users,
|
|||||||
run apache, etc...
|
run apache, etc...
|
||||||
+
|
+
|
||||||
LXD is building on top of LXC to provide a new, better user
|
LXD builds on top of LXC to provide a new, better user
|
||||||
experience. Under the hood, LXD uses LXC through 'liblxc' and its Go
|
experience. Under the hood, LXD uses LXC through `liblxc` and its Go
|
||||||
binding to create and manage the containers. It's basically an
|
binding to create and manage the containers. It's basically an
|
||||||
alternative to LXC's tools and distribution template system with the
|
alternative to LXC's tools and distribution template system with the
|
||||||
added features that come from being controllable over the network.
|
added features that come from being controllable over the network.
|
||||||
+
|
+
|
||||||
Proxmox Containers also aims at *system virtualization*, and thus uses
|
Proxmox Containers also aims at *system virtualization*, and thus uses
|
||||||
LXC as the basis of its own container offer. The Proxmox Container
|
LXC as the basis of its own container offer. The Proxmox Container
|
||||||
Toolkit is called 'pct', and is tightly coupled with {pve}. That means
|
Toolkit is called `pct`, and is tightly coupled with {pve}. That means
|
||||||
that it is aware of the cluster setup, and it can use the same network
|
that it is aware of the cluster setup, and it can use the same network
|
||||||
and storage resources as fully virtualized VMs. You can even use the
|
and storage resources as fully virtualized VMs. You can even use the
|
||||||
{pve} firewall, create and restore backups, or manage containers using
|
{pve} firewall, create and restore backups, or manage containers using
|
||||||
|
@ -32,7 +32,7 @@ containers. Features like firewall macros, security groups, IP sets
|
|||||||
and aliases helps to make that task easier.
|
and aliases helps to make that task easier.
|
||||||
|
|
||||||
While all configuration is stored on the cluster file system, the
|
While all configuration is stored on the cluster file system, the
|
||||||
iptables based firewall runs on each cluster node, and thus provides
|
`iptables`-based firewall runs on each cluster node, and thus provides
|
||||||
full isolation between virtual machines. The distributed nature of
|
full isolation between virtual machines. The distributed nature of
|
||||||
this system also provides much higher bandwidth than a central
|
this system also provides much higher bandwidth than a central
|
||||||
firewall solution.
|
firewall solution.
|
||||||
@ -64,17 +64,17 @@ Configuration Files
|
|||||||
|
|
||||||
All firewall related configuration is stored on the proxmox cluster
|
All firewall-related configuration is stored on the Proxmox cluster
|
||||||
file system. So those files are automatically distributed to all
|
file system. So those files are automatically distributed to all
|
||||||
cluster nodes, and the 'pve-firewall' service updates the underlying
|
cluster nodes, and the `pve-firewall` service updates the underlying
|
||||||
iptables rules automatically on changes.
|
`iptables` rules automatically on changes.
|
||||||
|
|
||||||
You can configure anything using the GUI (i.e. Datacenter -> Firewall,
|
You can configure anything using the GUI (i.e. Datacenter -> Firewall,
|
||||||
or on a Node -> Firewall), or you can edit the configuration files
|
or on a Node -> Firewall), or you can edit the configuration files
|
||||||
directly using your preferred editor.
|
directly using your preferred editor.
|
||||||
|
|
||||||
Firewall configuration files contains sections of key-value
|
Firewall configuration files contain sections of key-value
|
||||||
pairs. Lines beginning with a '#' and blank lines are considered
|
pairs. Lines beginning with a `#` and blank lines are considered
|
||||||
comments. Sections starts with a header line containing the section
|
comments. Sections start with a header line containing the section
|
||||||
name enclosed in '[' and ']'.
|
name enclosed in `[` and `]`.
|
||||||
|
|
||||||
|
|
||||||
Cluster Wide Setup
|
Cluster Wide Setup
|
||||||
@ -86,25 +86,25 @@ The cluster wide firewall configuration is stored at:
|
|||||||
|
|
||||||
The configuration can contain the following sections:
|
The configuration can contain the following sections:
|
||||||
|
|
||||||
'[OPTIONS]'::
|
`[OPTIONS]`::
|
||||||
|
|
||||||
This is used to set cluster wide firewall options.
|
This is used to set cluster wide firewall options.
|
||||||
|
|
||||||
include::pve-firewall-cluster-opts.adoc[]
|
include::pve-firewall-cluster-opts.adoc[]
|
||||||
|
|
||||||
'[RULES]'::
|
`[RULES]`::
|
||||||
|
|
||||||
This sections contains cluster wide firewall rules for all nodes.
|
This section contains cluster wide firewall rules for all nodes.
|
||||||
|
|
||||||
'[IPSET <name>]'::
|
`[IPSET <name>]`::
|
||||||
|
|
||||||
Cluster wide IP set definitions.
|
Cluster wide IP set definitions.
|
||||||
|
|
||||||
'[GROUP <name>]'::
|
`[GROUP <name>]`::
|
||||||
|
|
||||||
Cluster wide security group definitions.
|
Cluster wide security group definitions.
|
||||||
|
|
||||||
'[ALIASES]'::
|
`[ALIASES]`::
|
||||||
|
|
||||||
Cluster wide Alias definitions.
|
Cluster wide Alias definitions.
|
||||||
|
|
||||||
@ -135,7 +135,7 @@ enabling the firewall. That way you still have access to the host if
|
|||||||
something goes wrong .
|
something goes wrong.
|
||||||
|
|
||||||
To simplify that task, you can instead create an IPSet called
|
To simplify that task, you can instead create an IPSet called
|
||||||
'management', and add all remote IPs there. This creates all required
|
``management'', and add all remote IPs there. This creates all required
|
||||||
firewall rules to access the GUI from remote.
|
firewall rules to access the GUI from remote.
|
||||||
|
|
||||||
|
|
||||||
@ -146,17 +146,17 @@ Host related configuration is read from:
|
|||||||
|
|
||||||
/etc/pve/nodes/<nodename>/host.fw
|
/etc/pve/nodes/<nodename>/host.fw
|
||||||
|
|
||||||
This is useful if you want to overwrite rules from 'cluster.fw'
|
This is useful if you want to overwrite rules from `cluster.fw`
|
||||||
config. You can also increase log verbosity, and set netfilter related
|
config. You can also increase log verbosity, and set netfilter related
|
||||||
options. The configuration can contain the following sections:
|
options. The configuration can contain the following sections:
|
||||||
|
|
||||||
'[OPTIONS]'::
|
`[OPTIONS]`::
|
||||||
|
|
||||||
This is used to set host related firewall options.
|
This is used to set host related firewall options.
|
||||||
|
|
||||||
include::pve-firewall-host-opts.adoc[]
|
include::pve-firewall-host-opts.adoc[]
|
||||||
|
|
||||||
'[RULES]'::
|
`[RULES]`::
|
||||||
|
|
||||||
This sections contains host specific firewall rules.
|
This section contains host specific firewall rules.
|
||||||
|
|
||||||
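As a hedged example, a minimal `host.fw` could raise the incoming log
verbosity and add one host specific rule (the values are illustrative):

----
# /etc/pve/nodes/<nodename>/host.fw
[OPTIONS]
log_level_in: info

[RULES]
IN SSH(ACCEPT) -source 192.168.2.0/24
----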
@ -170,21 +170,21 @@ VM firewall configuration is read from:
|
|||||||
|
|
||||||
and contains the following data:
|
and contains the following data:
|
||||||
|
|
||||||
'[OPTIONS]'::
|
`[OPTIONS]`::
|
||||||
|
|
||||||
This is used to set VM/Container related firewall options.
|
This is used to set VM/Container related firewall options.
|
||||||
|
|
||||||
include::pve-firewall-vm-opts.adoc[]
|
include::pve-firewall-vm-opts.adoc[]
|
||||||
|
|
||||||
'[RULES]'::
|
`[RULES]`::
|
||||||
|
|
||||||
This sections contains VM/Container firewall rules.
|
This section contains VM/Container firewall rules.
|
||||||
|
|
||||||
'[IPSET <name>]'::
|
`[IPSET <name>]`::
|
||||||
|
|
||||||
IP set definitions.
|
IP set definitions.
|
||||||
|
|
||||||
'[ALIASES]'::
|
`[ALIASES]`::
|
||||||
|
|
||||||
IP Alias definitions.
|
IP Alias definitions.
|
||||||
|
|
||||||
@ -194,7 +194,7 @@ Enabling the Firewall for VMs and Containers
|
|||||||
|
|
||||||
Each virtual network device has its own firewall enable flag. So you
|
Each virtual network device has its own firewall enable flag. So you
|
||||||
can selectively enable the firewall for each interface. This is
|
can selectively enable the firewall for each interface. This is
|
||||||
required in addition to the general firewall 'enable' option.
|
required in addition to the general firewall `enable` option.
|
||||||
|
|
||||||
The firewall requires a special network device setup, so you need to
|
The firewall requires a special network device setup, so you need to
|
||||||
restart the VM/container after enabling the firewall on a network
|
restart the VM/container after enabling the firewall on a network
|
||||||
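As a hypothetical example, you would set the general option in the
guest's firewall file and the per-device flag in its configuration (the
MAC address below is illustrative):

----
# /etc/pve/firewall/<VMID>.fw
[OPTIONS]
enable: 1
----

----
# in the VM configuration
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,firewall=1
----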
@ -206,7 +206,8 @@ Firewall Rules
|
|||||||
|
|
||||||
Firewall rules consists of a direction (`IN` or `OUT`) and an
|
Firewall rules consists of a direction (`IN` or `OUT`) and an
|
||||||
action (`ACCEPT`, `DENY`, `REJECT`). You can also specify a macro
|
action (`ACCEPT`, `DENY`, `REJECT`). You can also specify a macro
|
||||||
name. Macros contain predifined sets of rules and options. Rules can be disabled by prefixing them with '|'.
|
name. Macros contain predefined sets of rules and options. Rules can be
|
||||||
|
disabled by prefixing them with `|`.
|
||||||
|
|
||||||
.Firewall rules syntax
|
.Firewall rules syntax
|
||||||
----
|
----
|
||||||
@ -240,12 +241,13 @@ IN DROP # drop all incoming packages
|
|||||||
OUT ACCEPT # accept all outgoing packages
|
OUT ACCEPT # accept all outgoing packages
|
||||||
----
|
----
|
||||||
|
|
||||||
|
|
||||||
Security Groups
|
Security Groups
|
||||||
---------------
|
---------------
|
||||||
|
|
||||||
A security group is a collection of rules, defined at cluster level, which
|
A security group is a collection of rules, defined at cluster level, which
|
||||||
can be used in all VMs' rules. For example you can define a group named
|
can be used in all VMs' rules. For example you can define a group named
|
||||||
`webserver` with rules to open the http and https ports.
|
``webserver'' with rules to open the 'http' and 'https' ports.
|
||||||
|
|
||||||
----
|
----
|
||||||
# /etc/pve/firewall/cluster.fw
|
# /etc/pve/firewall/cluster.fw
|
||||||
@ -291,7 +293,7 @@ using detected local_network: 192.168.0.0/20
|
|||||||
The firewall automatically sets up rules to allow everything needed
|
The firewall automatically sets up rules to allow everything needed
|
||||||
for cluster communication (corosync, API, SSH) using this alias.
|
for cluster communication (corosync, API, SSH) using this alias.
|
||||||
|
|
||||||
The user can overwrite these values in the cluster.fw alias
|
The user can overwrite these values in the `cluster.fw` alias
|
||||||
section. If you use a single host on a public network, it is better to
|
section. If you use a single host on a public network, it is better to
|
||||||
explicitly assign the local IP address
|
explicitly assign the local IP address
|
||||||
|
|
||||||
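For example, a sketch of such an override in the `cluster.fw` alias
section (the address is illustrative):

----
[ALIASES]
local_network 1.2.3.4 # use the single public IP address
----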
@ -332,7 +334,8 @@ communication. (multicast,ssh,...)
|
|||||||
192.168.2.10/24
|
192.168.2.10/24
|
||||||
----
|
----
|
||||||
|
|
||||||
Standard IP set 'blacklist'
|
|
||||||
|
Standard IP set `blacklist`
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
Traffic from these ips is dropped by every host's and VM's firewall.
|
Traffic from these IPs is dropped by every host's and VM's firewall.
|
||||||
@ -345,8 +348,9 @@ Traffic from these ips is dropped by every host's and VM's firewall.
|
|||||||
213.87.123.0/24
|
213.87.123.0/24
|
||||||
----
|
----
|
||||||
|
|
||||||
|
|
||||||
[[ipfilter-section]]
|
[[ipfilter-section]]
|
||||||
Standard IP set 'ipfilter-net*'
|
Standard IP set `ipfilter-net*`
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
These filters belong to a VM's network interface and are mainly used to prevent
|
These filters belong to a VM's network interface and are mainly used to prevent
|
||||||
@ -378,7 +382,7 @@ The firewall runs two service daemons on each node:
|
|||||||
* pvefw-logger: NFLOG daemon (ulogd replacement).
|
* `pvefw-logger`: NFLOG daemon (`ulogd` replacement).
|
||||||
* pve-firewall: updates iptables rules
|
* `pve-firewall`: updates `iptables` rules
|
||||||
|
|
||||||
There is also a CLI command named 'pve-firewall', which can be used to
|
There is also a CLI command named `pve-firewall`, which can be used to
|
||||||
start and stop the firewall service:
|
start and stop the firewall service:
|
||||||
|
|
||||||
# pve-firewall start
|
# pve-firewall start
|
||||||
@ -403,12 +407,12 @@ How to allow FTP
|
|||||||
~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
FTP is an old style protocol which uses port 21 and several other dynamic ports. So you
|
FTP is an old-style protocol which uses port 21 and several other dynamic ports, so you
|
||||||
need a rule to accept port 21. In addition, you need to load the 'ip_conntrack_ftp' module.
|
need a rule to accept port 21. In addition, you need to load the `ip_conntrack_ftp` module.
|
||||||
So please run:
|
So please run:
|
||||||
|
|
||||||
modprobe ip_conntrack_ftp
|
modprobe ip_conntrack_ftp
|
||||||
|
|
||||||
and add `ip_conntrack_ftp` to '/etc/modules' (so that it works after a reboot) .
|
and add `ip_conntrack_ftp` to `/etc/modules` (so that it works after a reboot).
|
||||||
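A short sketch of both steps, assuming a VM level rule (the rule syntax
follows the format described earlier):

----
# echo ip_conntrack_ftp >> /etc/modules
----

----
# /etc/pve/firewall/<VMID>.fw
[RULES]
IN ACCEPT -p tcp -dport 21
----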
|
|
||||||
|
|
||||||
Suricata IPS integration
|
Suricata IPS integration
|
||||||
@ -429,7 +433,7 @@ Install suricata on proxmox host:
|
|||||||
# modprobe nfnetlink_queue
|
# modprobe nfnetlink_queue
|
||||||
----
|
----
|
||||||
|
|
||||||
Don't forget to add `nfnetlink_queue` to '/etc/modules' for next reboot.
|
Don't forget to add `nfnetlink_queue` to `/etc/modules` for next reboot.
|
||||||
|
|
||||||
Then, enable IPS for a specific VM with:
|
Then, enable IPS for a specific VM with:
|
||||||
|
|
||||||
@ -450,8 +454,9 @@ Available queues are defined in
|
|||||||
NFQUEUE=0
|
NFQUEUE=0
|
||||||
----
|
----
|
||||||
|
|
||||||
Avoiding link-local addresses on tap and veth devices
|
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
Avoiding `link-local` Addresses on `tap` and `veth` Devices
|
||||||
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
With IPv6 enabled by default every interface gets a MAC-derived link local
|
With IPv6 enabled by default, every interface gets a MAC-derived link-local
|
||||||
address. However, most devices on a typical {pve} setup are connected to a
|
address. However, most devices on a typical {pve} setup are connected to a
|
||||||
@ -519,7 +524,7 @@ The firewall contains a few IPv6 specific options. One thing to note is that
|
|||||||
IPv6 does not use the ARP protocol anymore, and instead uses NDP (Neighbor
|
IPv6 does not use the ARP protocol anymore, and instead uses NDP (Neighbor
|
||||||
Discovery Protocol) which works on IP level and thus needs IP addresses to
|
Discovery Protocol) which works on IP level and thus needs IP addresses to
|
||||||
succeed. For this purpose link-local addresses derived from the interface's MAC
|
succeed. For this purpose link-local addresses derived from the interface's MAC
|
||||||
address are used. By default the 'NDP' option is enabled on both host and VM
|
address are used. By default the `NDP` option is enabled on both host and VM
|
||||||
level to allow neighbor discovery (NDP) packets to be sent and received.
|
level to allow neighbor discovery (NDP) packets to be sent and received.
|
||||||
|
|
||||||
Beside neighbor discovery NDP is also used for a couple of other things, like
|
Beside neighbor discovery NDP is also used for a couple of other things, like
|
||||||
@ -528,14 +533,14 @@ autoconfiguration and advertising routers.
|
|||||||
By default VMs are allowed to send out router solicitation messages (to query
|
By default VMs are allowed to send out router solicitation messages (to query
|
||||||
for a router), and to receive router advetisement packets. This allows them to
|
for a router), and to receive router advertisement packets. This allows them to
|
||||||
use stateless auto configuration. On the other hand VMs cannot advertise
|
use stateless autoconfiguration. On the other hand, VMs cannot advertise
|
||||||
themselves as routers unless the 'Allow Router Advertisement' (`radv: 1`) option
|
themselves as routers unless the ``Allow Router Advertisement'' (`radv: 1`) option
|
||||||
is set.
|
is set.
|
||||||
|
|
||||||
As for the link local addresses required for NDP, there's also an 'IP Filter'
|
As for the link-local addresses required for NDP, there's also an ``IP Filter''
|
||||||
(`ipfilter: 1`) option which can be enabled which has the same effect as adding
|
(`ipfilter: 1`) option which can be enabled which has the same effect as adding
|
||||||
an `ipfilter-net*` ipset for each of the VM's network interfaces containing the
|
an `ipfilter-net*` ipset for each of the VM's network interfaces containing the
|
||||||
corresponding link local addresses. (See the
|
corresponding link-local addresses. (See the
|
||||||
<<ipfilter-section,Standard IP set 'ipfilter-net*'>> section for details.)
|
<<ipfilter-section,Standard IP set `ipfilter-net*`>> section for details.)
|
||||||
|
|
||||||
|
|
||||||
Ports used by Proxmox VE
|
Ports used by Proxmox VE
|
||||||
|
@ -61,14 +61,14 @@ BIOS is unable to read the boot block from the disk.
|
|||||||
|
|
||||||
Test Memory::
|
Test Memory::
|
||||||
|
|
||||||
Runs 'memtest86+'. This is useful to check if your memory if
|
Runs `memtest86+`. This is useful to check if your memory is
|
||||||
functional and error free.
|
functional and error-free.
|
||||||
|
|
||||||
You normally select *Install Proxmox VE* to start the installation.
|
You normally select *Install Proxmox VE* to start the installation.
|
||||||
After that you get prompted to select the target hard disk(s). The
|
After that you get prompted to select the target hard disk(s). The
|
||||||
`Options` button lets you select the target file system, which
|
`Options` button lets you select the target file system, which
|
||||||
defaults to `ext4`. The installer uses LVM if you select 'ext3',
|
defaults to `ext4`. The installer uses LVM if you select `ext3`,
|
||||||
'ext4' or 'xfs' as file system, and offers additional option to
|
`ext4` or `xfs` as file system, and offers an additional option to
|
||||||
restrict LVM space (see <<advanced_lvm_options,below>>)
|
restrict LVM space (see <<advanced_lvm_options,below>>)
|
||||||
|
|
||||||
If you have more than one disk, you can also use ZFS as file system.
|
If you have more than one disk, you can also use ZFS as file system.
|
||||||
@ -121,7 +121,7 @@ system.
|
|||||||
`maxvz`::
|
`maxvz`::
|
||||||
|
|
||||||
Define the size of the `data` volume, which is mounted at
|
Define the size of the `data` volume, which is mounted at
|
||||||
'/var/lib/vz'.
|
`/var/lib/vz`.
|
||||||
|
|
||||||
`minfree`::
|
`minfree`::
|
||||||
|
|
||||||
|
@ -119,7 +119,7 @@ Local storage types supported are:
|
|||||||
Integrated Backup and Restore
|
Integrated Backup and Restore
|
||||||
-----------------------------
|
-----------------------------
|
||||||
|
|
||||||
The integrated backup tool (vzdump) creates consistent snapshots of
|
The integrated backup tool (`vzdump`) creates consistent snapshots of
|
||||||
running Containers and KVM guests. It basically creates an archive of
|
running Containers and KVM guests. It basically creates an archive of
|
||||||
the VM or CT data which includes the VM/CT configuration files.
|
the VM or CT data which includes the VM/CT configuration files.
|
||||||
|
|
||||||
@ -150,11 +150,14 @@ bonding/aggregation are possible. In this way it is possible to build
|
|||||||
complex, flexible virtual networks for the Proxmox VE hosts,
|
complex, flexible virtual networks for the Proxmox VE hosts,
|
||||||
leveraging the full power of the Linux network stack.
|
leveraging the full power of the Linux network stack.
|
||||||
|
|
||||||
|
|
||||||
Integrated Firewall
|
Integrated Firewall
|
||||||
-------------------
|
-------------------
|
||||||
|
|
||||||
The intergrated firewall allows you to filter network packets on
|
The integrated firewall allows you to filter network packets on
|
||||||
any VM or Container interface. Common sets of firewall rules can be grouped into 'security groups'.
|
any VM or Container interface. Common sets of firewall rules can
|
||||||
|
be grouped into ``security groups''.
|
||||||
|
|
||||||
|
|
||||||
Why Open Source
|
Why Open Source
|
||||||
---------------
|
---------------
|
||||||
|
@ -15,14 +15,14 @@ VLANs (IEEE 802.1q) and network bonding, also known as "link
|
|||||||
aggregation". That way it is possible to build complex and flexible
|
aggregation". That way it is possible to build complex and flexible
|
||||||
virtual networks.
|
virtual networks.
|
||||||
|
|
||||||
Debian traditionally uses the 'ifup' and 'ifdown' commands to
|
Debian traditionally uses the `ifup` and `ifdown` commands to
|
||||||
configure the network. The file '/etc/network/interfaces' contains the
|
configure the network. The file `/etc/network/interfaces` contains the
|
||||||
whole network setup. Please refer to to manual page ('man interfaces')
|
whole network setup. Please refer to the manual page (`man interfaces`)
|
||||||
for a complete format description.
|
for a complete format description.
|
||||||
|
|
||||||
NOTE: {pve} does not write changes directly to
|
NOTE: {pve} does not write changes directly to
|
||||||
'/etc/network/interfaces'. Instead, we write into a temporary file
|
`/etc/network/interfaces`. Instead, we write into a temporary file
|
||||||
called '/etc/network/interfaces.new', and commit those changes when
|
called `/etc/network/interfaces.new`, and commit those changes when
|
||||||
you reboot the node.
|
you reboot the node.
|
||||||
|
|
||||||
It is worth mentioning that you can directly edit the configuration
|
It is worth mentioning that you can directly edit the configuration
|
||||||
@ -52,7 +52,7 @@ Default Configuration using a Bridge
|
|||||||
|
|
||||||
The installation program creates a single bridge named `vmbr0`, which
|
The installation program creates a single bridge named `vmbr0`, which
|
||||||
is connected to the first ethernet card `eth0`. The corresponding
|
is connected to the first ethernet card `eth0`. The corresponding
|
||||||
configuration in '/etc/network/interfaces' looks like this:
|
configuration in `/etc/network/interfaces` looks like this:
|
||||||
|
|
||||||
----
|
----
|
||||||
auto lo
|
auto lo
|
||||||
@ -87,13 +87,13 @@ TIP: Some providers allows you to register additional MACs on there
|
|||||||
management interface. This avoids the problem, but is clumsy to
|
management interface. This avoids the problem, but is clumsy to
|
||||||
configure because you need to register a MAC for each of your VMs.
|
configure because you need to register a MAC for each of your VMs.
|
||||||
|
|
||||||
You can avoid the problem by "routing" all traffic via a single
|
You can avoid the problem by ``routing'' all traffic via a single
|
||||||
interface. This makes sure that all network packets use the same MAC
|
interface. This makes sure that all network packets use the same MAC
|
||||||
address.
|
address.
|
||||||
|
|
||||||
A common scenario is that you have a public IP (assume 192.168.10.2
|
A common scenario is that you have a public IP (assume `192.168.10.2`
|
||||||
for this example), and an additional IP block for your VMs
|
for this example), and an additional IP block for your VMs
|
||||||
(10.10.10.1/255.255.255.0). We recommend the following setup for such
|
(`10.10.10.1/255.255.255.0`). We recommend the following setup for such
|
||||||
situations:
|
situations:
|
||||||
|
|
||||||
----
|
----
|
||||||
@ -118,8 +118,8 @@ iface vmbr0 inet static
|
|||||||
----
|
----
|
||||||
|
|
||||||
|
|
||||||
Masquerading (NAT) with iptables
|
Masquerading (NAT) with `iptables`
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
In some cases you may want to use private IPs behind your Proxmox
|
In some cases you may want to use private IPs behind your Proxmox
|
||||||
host's true IP, and masquerade the traffic using NAT:
|
host's true IP, and masquerade the traffic using NAT:
|
||||||
|
@ -5,17 +5,17 @@ include::attributes.txt[]
|
|||||||
All Debian based systems use
|
All Debian based systems use
|
||||||
http://en.wikipedia.org/wiki/Advanced_Packaging_Tool[APT] as package
|
http://en.wikipedia.org/wiki/Advanced_Packaging_Tool[APT] as package
|
||||||
management tool. The list of repositories is defined in
|
management tool. The list of repositories is defined in
|
||||||
'/etc/apt/sources.list' and '.list' files found inside
|
`/etc/apt/sources.list` and `.list` files found inside
|
||||||
'/etc/apt/sources.d/'. Updates can be installed directly using
|
`/etc/apt/sources.list.d/`. Updates can be installed directly using
|
||||||
'apt-get', or via the GUI.
|
`apt-get`, or via the GUI.
|
||||||
|
|
||||||
Apt 'sources.list' files list one package repository per line, with
|
Apt `sources.list` files list one package repository per line, with
|
||||||
the most preferred source listed first. Empty lines are ignored, and a
|
the most preferred source listed first. Empty lines are ignored, and a
|
||||||
'#' character anywhere on a line marks the remainder of that line as a
|
`#` character anywhere on a line marks the remainder of that line as a
|
||||||
comment. The information available from the configured sources is
|
comment. The information available from the configured sources is
|
||||||
acquired by 'apt-get update'.
|
acquired by `apt-get update`.
|
||||||
|
|
||||||
.File '/etc/apt/sources.list'
|
.File `/etc/apt/sources.list`
|
||||||
----
|
----
|
||||||
deb http://ftp.debian.org/debian jessie main contrib
|
deb http://ftp.debian.org/debian jessie main contrib
|
||||||
|
|
||||||
@ -33,7 +33,7 @@ all {pve} subscription users. It contains the most stable packages,
|
|||||||
and is suitable for production use. The `pve-enterprise` repository is
|
and is suitable for production use. The `pve-enterprise` repository is
|
||||||
enabled by default:
|
enabled by default:
|
||||||
|
|
||||||
.File '/etc/apt/sources.list.d/pve-enterprise.list'
|
.File `/etc/apt/sources.list.d/pve-enterprise.list`
|
||||||
----
|
----
|
||||||
deb https://enterprise.proxmox.com/debian jessie pve-enterprise
|
deb https://enterprise.proxmox.com/debian jessie pve-enterprise
|
||||||
----
|
----
|
||||||
@ -48,7 +48,7 @@ repository. We offer different support levels, and you can find further
|
|||||||
details at http://www.proxmox.com/en/proxmox-ve/pricing.
|
details at http://www.proxmox.com/en/proxmox-ve/pricing.
|
||||||
|
|
||||||
NOTE: You can disable this repository by commenting out the above line
|
NOTE: You can disable this repository by commenting out the above line
|
||||||
using a '#' (at the start of the line). This prevents error messages
|
using a `#` (at the start of the line). This prevents error messages
|
||||||
if you do not have a subscription key. Please configure the
|
if you do not have a subscription key. Please configure the
|
||||||
`pve-no-subscription` repository in that case.
|
`pve-no-subscription` repository in that case.
|
||||||
|
|
||||||
@ -61,9 +61,9 @@ this repository. It can be used for testing and non-production
|
|||||||
use. Its not recommended to run on production servers, as these
|
use. It's not recommended to run on production servers, as these
|
||||||
packages are not always heavily tested and validated.
|
packages are not always heavily tested and validated.
|
||||||
|
|
||||||
We recommend to configure this repository in '/etc/apt/sources.list'.
|
We recommend configuring this repository in `/etc/apt/sources.list`.
|
||||||
|
|
||||||
.File '/etc/apt/sources.list'
|
.File `/etc/apt/sources.list`
|
||||||
----
|
----
|
||||||
deb http://ftp.debian.org/debian jessie main contrib
|
deb http://ftp.debian.org/debian jessie main contrib
|
||||||
|
|
||||||
@ -82,7 +82,7 @@ deb http://security.debian.org jessie/updates main contrib
|
|||||||
Finally, there is a repository called `pvetest`. This one contains the
|
Finally, there is a repository called `pvetest`. This one contains the
|
||||||
latest packages and is heavily used by developers to test new
|
latest packages and is heavily used by developers to test new
|
||||||
features. As usual, you can configure this using
|
features. As usual, you can configure this using
|
||||||
'/etc/apt/sources.list' by adding the following line:
|
`/etc/apt/sources.list` by adding the following line:
|
||||||
|
|
||||||
.sources.list entry for `pvetest`
|
.sources.list entry for `pvetest`
|
||||||
----
|
----
|
||||||
@ -96,7 +96,7 @@ for testing new features or bug fixes.
|
|||||||
SecureApt
|
SecureApt
|
||||||
~~~~~~~~~
|
~~~~~~~~~
|
||||||
|
|
||||||
We use GnuPG to sign the 'Release' files inside those repositories,
|
We use GnuPG to sign the `Release` files inside those repositories,
|
||||||
and APT uses that signatures to verify that all packages are from a
|
and APT uses those signatures to verify that all packages are from a
|
||||||
trusted source.
|
trusted source.
|
||||||
|
|
||||||
@ -128,7 +128,7 @@ ifdef::wiki[]
|
|||||||
{pve} 3.x Repositories
|
{pve} 3.x Repositories
|
||||||
~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
{pve} 3.x is based on Debian 7.x ('wheezy'). Please note that this
|
{pve} 3.x is based on Debian 7.x (``wheezy''). Please note that this
|
||||||
release is out of date, and you should update your
|
release is out of date, and you should update your
|
||||||
installation. Nevertheless, we still provide access to those
|
installation. Nevertheless, we still provide access to those
|
||||||
repositories at our download servers.
|
repositories at our download servers.
|
||||||
@ -144,17 +144,17 @@ deb http://download.proxmox.com/debian wheezy pve-no-subscription
|
|||||||
deb http://download.proxmox.com/debian wheezy pvetest
|
deb http://download.proxmox.com/debian wheezy pvetest
|
||||||
|===========================================================
|
|===========================================================
|
||||||
|
|
||||||
NOTE: Apt 'sources.list' configuration files are basically the same as
|
NOTE: Apt `sources.list` configuration files are basically the same as
|
||||||
in newer 4.x versions - just replace 'jessie' with 'wheezy'.
|
in newer 4.x versions - just replace `jessie` with `wheezy`.
|
||||||
|
|
||||||
Outdated: 'stable' Repository 'pve'
|
Outdated: `stable` Repository `pve`
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
This repository is a leftover to easy the update to 3.1. It will not
|
This repository is a leftover to ease the update to 3.1. It will not
|
||||||
get any updates after the release of 3.1. Therefore you need to remove
|
get any updates after the release of 3.1. Therefore you need to remove
|
||||||
this repository after you upgraded to 3.1.
|
this repository after you upgraded to 3.1.
|
||||||
|
|
||||||
.File '/etc/apt/sources.list'
|
.File `/etc/apt/sources.list`
|
||||||
----
|
----
|
||||||
deb http://ftp.debian.org/debian wheezy main contrib
|
deb http://ftp.debian.org/debian wheezy main contrib
|
||||||
|
|
||||||
@ -169,11 +169,11 @@ deb http://security.debian.org/ wheezy/updates main contrib
|
|||||||
Outdated: {pve} 2.x Repositories
|
Outdated: {pve} 2.x Repositories
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
{pve} 2.x is based on Debian 6.0 ('squeeze') and outdated. Please
|
{pve} 2.x is based on Debian 6.0 (``squeeze'') and outdated. Please
|
||||||
upgrade to latest version as soon as possible. In order to use the
|
upgrade to the latest version as soon as possible. In order to use the
|
||||||
stable 'pve' 2.x repository, check your sources.list:
|
stable `pve` 2.x repository, check your `sources.list`:
|
||||||
|
|
||||||
.File '/etc/apt/sources.list'
|
.File `/etc/apt/sources.list`
|
||||||
----
|
----
|
||||||
deb http://ftp.debian.org/debian squeeze main contrib
|
deb http://ftp.debian.org/debian squeeze main contrib
|
||||||
|
|
||||||
@ -188,7 +188,7 @@ deb http://security.debian.org/ squeeze/updates main contrib
|
|||||||
Outdated: {pve} VE 1.x Repositories
|
Outdated: {pve} VE 1.x Repositories
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
{pve} 1.x is based on Debian 5.0 (Lenny) and very outdated. Please
|
{pve} 1.x is based on Debian 5.0 (``lenny'') and very outdated. Please
|
||||||
upgrade to latest version as soon as possible.
|
upgrade to the latest version as soon as possible.
|
||||||
|
|
||||||
|
|
||||||
|
@ -9,7 +9,7 @@ storage. A directory is a file level storage, so you can store any
|
|||||||
content type like virtual disk images, containers, templates, ISO images
|
content type like virtual disk images, containers, templates, ISO images
|
||||||
or backup files.
|
or backup files.
|
||||||
|
|
||||||
NOTE: You can mount additional storages via standard linux '/etc/fstab',
|
NOTE: You can mount additional storages via the standard Linux `/etc/fstab`,
|
||||||
and then define a directory storage for that mount point. This way you
|
and then define a directory storage for that mount point. This way you
|
||||||
can use any file system supported by Linux.
|
can use any file system supported by Linux.
|
||||||
|
|
||||||
@ -31,10 +31,10 @@ storage backends.
|
|||||||
[width="100%",cols="d,m",options="header"]
|
[width="100%",cols="d,m",options="header"]
|
||||||
|===========================================================
|
|===========================================================
|
||||||
|Content type |Subdir
|
|Content type |Subdir
|
||||||
|VM images |images/<VMID>/
|
|VM images |`images/<VMID>/`
|
||||||
|ISO images |template/iso/
|
|ISO images |`template/iso/`
|
||||||
|Container templates |template/cache
|
|Container templates |`template/cache/`
|
||||||
|Backup files |dump/
|
|Backup files |`dump/`
|
||||||
|===========================================================
|
|===========================================================
|
||||||
|
|
||||||
Configuration
|
Configuration
|
||||||
@ -44,7 +44,7 @@ This backend supports all common storage properties, and adds an
|
|||||||
additional property called `path` to specify the directory. This
|
additional property called `path` to specify the directory. This
|
||||||
needs to be an absolute file system path.
|
needs to be an absolute file system path.
|
||||||
|
|
||||||
.Configuration Example ('/etc/pve/storage.cfg')
|
.Configuration Example (`/etc/pve/storage.cfg`)
|
||||||
----
|
----
|
||||||
dir: backup
|
dir: backup
|
||||||
path /mnt/backup
|
path /mnt/backup
|
||||||
@ -54,7 +54,7 @@ dir: backup
|
|||||||
|
|
||||||
Above configuration defines a storage pool called `backup`. That pool
|
The above configuration defines a storage pool called `backup`. That pool
|
||||||
can be used to store up to 7 backups (`maxfiles 7`) per VM. The real
|
can be used to store up to 7 backups (`maxfiles 7`) per VM. The real
|
||||||
path for the backup files is '/mnt/backup/dump/...'.
|
path for the backup files is `/mnt/backup/dump/...`.
|
||||||
|
|
||||||
|
|
||||||
File naming conventions
|
File naming conventions
|
||||||
@ -70,13 +70,13 @@ This specifies the owner VM.
|
|||||||
|
|
||||||
`<NAME>`::
|
`<NAME>`::
|
||||||
|
|
||||||
This can be an arbitrary name (`ascii`) without white spaces. The
|
This can be an arbitrary name (`ascii`) without white space. The
|
||||||
backend uses `disk-[N]` as default, where `[N]` is replaced by an
|
backend uses `disk-[N]` as default, where `[N]` is replaced by an
|
||||||
integer to make the name unique.
|
integer to make the name unique.
|
||||||
|
|
||||||
`<FORMAT>`::
|
`<FORMAT>`::
|
||||||
|
|
||||||
Species the image format (`raw|qcow2|vmdk`).
|
Specifies the image format (`raw|qcow2|vmdk`).
|
||||||
|
|
||||||
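For example, the first `qcow2` image of VM 100 would typically be stored
as:

 images/100/vm-100-disk-1.qcow2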
When you create a VM template, all VM images are renamed to indicate
|
When you create a VM template, all VM images are renamed to indicate
|
||||||
that they are now read-only, and can be uses as a base image for clones:
|
that they are now read-only, and can be used as a base image for clones:
|
||||||
|
@ -9,7 +9,7 @@ design, runs on commodity hardware, and can provide a highly available
|
|||||||
enterprise storage at low costs. Such system is capable of scaling to
|
enterprise storage at low costs. Such a system is capable of scaling to
|
||||||
several petabytes, and can handle thousands of clients.
|
several petabytes, and can handle thousands of clients.
|
||||||
|
|
||||||
NOTE: After a node/brick crash, GlusterFS does a full 'rsync' to make
|
NOTE: After a node/brick crash, GlusterFS does a full `rsync` to make
|
||||||
sure data is consistent. This can take a very long time with large
|
sure data is consistent. This can take a very long time with large
|
||||||
files, so this backend is not suitable to store large VM images.
|
files, so this backend is not suitable to store large VM images.
|
||||||
|
|
||||||
@ -36,7 +36,7 @@ GlusterFS Volume.
|
|||||||
GlusterFS transport: `tcp`, `unix` or `rdma`
|
GlusterFS transport: `tcp`, `unix` or `rdma`
|
||||||
|
|
||||||
|
|
||||||
.Configuration Example ('/etc/pve/storage.cfg')
|
.Configuration Example (`/etc/pve/storage.cfg`)
|
||||||
----
|
----
|
||||||
glusterfs: Gluster
|
glusterfs: Gluster
|
||||||
server 10.2.3.4
|
server 10.2.3.4
|
||||||
|
@ -10,13 +10,13 @@ source iSCSI target solutions available,
|
|||||||
e.g. http://www.openmediavault.org/[OpenMediaVault], which is based on
|
e.g. http://www.openmediavault.org/[OpenMediaVault], which is based on
|
||||||
Debian.
|
Debian.
|
||||||
|
|
||||||
To use this backend, you need to install the 'open-iscsi'
|
To use this backend, you need to install the `open-iscsi`
|
||||||
package. This is a standard Debian package, but it is not installed by
|
package. This is a standard Debian package, but it is not installed by
|
||||||
default to save resources.
|
default to save resources.
|
||||||
|
|
||||||
# apt-get install open-iscsi
|
# apt-get install open-iscsi
|
||||||
|
|
||||||
Low-level iscsi management task can be done using the 'iscsiadm' tool.
|
Low-level iSCSI management tasks can be done using the `iscsiadm` tool.
|
||||||
|
|
||||||
|
|
||||||
Configuration
|
Configuration
|
||||||
@ -34,7 +34,7 @@ target::
|
|||||||
iSCSI target.
|
iSCSI target.
|
||||||
|
|
||||||
|
|
||||||
.Configuration Example ('/etc/pve/storage.cfg')
|
.Configuration Example (`/etc/pve/storage.cfg`)
|
||||||
----
|
----
|
||||||
iscsi: mynas
|
iscsi: mynas
|
||||||
portal 10.10.10.1
|
portal 10.10.10.1
|
||||||
|
@ -5,7 +5,7 @@ include::attributes.txt[]
|
|||||||
Storage pool type: `iscsidirect`
|
Storage pool type: `iscsidirect`
|
||||||
|
|
||||||
This backend provides basically the same functionality as the
|
This backend provides basically the same functionality as the
|
||||||
Open-iSCSI backed, but uses a user-level library (package 'libiscsi2')
|
Open-iSCSI backend, but uses a user-level library (package `libiscsi2`)
|
||||||
to implement it.
|
to implement it.
|
||||||
|
|
||||||
It should be noted that there are no kernel drivers involved, so this
|
It should be noted that there are no kernel drivers involved, so this
|
||||||
@ -19,7 +19,7 @@ Configuration
|
|||||||
The user mode iSCSI backend uses the same configuration options as the
|
The user mode iSCSI backend uses the same configuration options as the
|
||||||
Open-iSCSI backed.
|
Open-iSCSI backend.
|
||||||
|
|
||||||
.Configuration Example ('/etc/pve/storage.cfg')
|
.Configuration Example (`/etc/pve/storage.cfg`)
|
||||||
----
|
----
|
||||||
iscsidirect: faststore
|
iscsidirect: faststore
|
||||||
portal 10.10.10.1
|
portal 10.10.10.1
|
||||||
|
@ -37,9 +37,9 @@ sure that all data gets erased.
|
|||||||
|
|
||||||
`saferemove_throughput`::
|
`saferemove_throughput`::
|
||||||
|
|
||||||
Wipe throughput ('cstream -t' parameter value).
|
Wipe throughput (`cstream -t` parameter value).
|
||||||
|
|
||||||
.Configuration Example ('/etc/pve/storage.cfg')
|
.Configuration Example (`/etc/pve/storage.cfg`)
|
||||||
----
|
----
|
||||||
lvm: myspace
|
lvm: myspace
|
||||||
vgname myspace
|
vgname myspace
|
||||||
|
@ -10,7 +10,7 @@ called thin-provisioning, because volumes can be much larger than
|
|||||||
physically available space.
|
physically available space.
|
||||||
|
|
||||||
You can use the normal LVM command line tools to manage and create LVM
|
You can use the normal LVM command line tools to manage and create LVM
|
||||||
thin pools (see 'man lvmthin' for details). Assuming you already have
|
thin pools (see `man lvmthin` for details). Assuming you already have
|
||||||
a LVM volume group called `pve`, the following commands create a new
|
an LVM volume group called `pve`, the following commands create a new
|
||||||
LVM thin pool (size 100G) called `data`:
|
LVM thin pool (size 100G) called `data`:
|
||||||
|
|
||||||
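A sketch of those commands, using the names and size mentioned above:

----
# lvcreate -L 100G -n data pve
# lvconvert --type thin-pool pve/data
----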
@ -35,7 +35,7 @@ LVM volume group name. This must point to an existing volume group.
|
|||||||
The name of the LVM thin pool.
|
The name of the LVM thin pool.
|
||||||
|
|
||||||
|
|
||||||
.Configuration Example ('/etc/pve/storage.cfg')
|
.Configuration Example (`/etc/pve/storage.cfg`)
|
||||||
----
|
----
|
||||||
lvmthin: local-lvm
|
lvmthin: local-lvm
|
||||||
thinpool data
|
thinpool data
|
||||||
|
@ -8,7 +8,7 @@ The NFS backend is based on the directory backend, so it shares most
|
|||||||
properties. The directory layout and the file naming conventions are
|
properties. The directory layout and the file naming conventions are
|
||||||
the same. The main advantage is that you can directly configure the
|
the same. The main advantage is that you can directly configure the
|
||||||
NFS server properties, so the backend can mount the share
|
NFS server properties, so the backend can mount the share
|
||||||
automatically. There is no need to modify '/etc/fstab'. The backend
|
automatically. There is no need to modify `/etc/fstab`. The backend
|
||||||
can also test if the server is online, and provides a method to query
|
can also test if the server is online, and provides a method to query
|
||||||
the server for exported shares.
|
the server for exported shares.
|
||||||
|
|
||||||
@ -34,13 +34,13 @@ You can also set NFS mount options:
|
|||||||
|
|
||||||
path::
|
path::
|
||||||
|
|
||||||
The local mount point (defaults to '/mnt/pve/`<STORAGE_ID>`/').
|
The local mount point (defaults to `/mnt/pve/<STORAGE_ID>/`).
|
||||||
|
|
||||||
options::
|
options::
|
||||||
|
|
||||||
NFS mount options (see `man nfs`).
|
NFS mount options (see `man nfs`).
|
||||||
|
|
||||||
.Configuration Example ('/etc/pve/storage.cfg')
|
.Configuration Example (`/etc/pve/storage.cfg`)
|
||||||
----
|
----
|
||||||
nfs: iso-templates
|
nfs: iso-templates
|
||||||
path /mnt/pve/iso-templates
|
path /mnt/pve/iso-templates
|
||||||
|
@ -46,7 +46,7 @@ krbd::
|
|||||||
Access rbd through krbd kernel module. This is required if you want to
|
Access RBD through the `krbd` kernel module. This is required if you want to
|
||||||
use the storage for containers.
|
use the storage for containers.
|
||||||
|
|
||||||
.Configuration Example ('/etc/pve/storage.cfg')
|
.Configuration Example (`/etc/pve/storage.cfg`)
|
||||||
----
|
----
|
||||||
rbd: ceph3
|
rbd: ceph3
|
||||||
monhost 10.1.1.20 10.1.1.21 10.1.1.22
|
monhost 10.1.1.20 10.1.1.21 10.1.1.22
|
||||||
@ -55,15 +55,15 @@ rbd: ceph3
|
|||||||
username admin
|
username admin
|
||||||
----
|
----
|
||||||
|
|
||||||
TIP: You can use the 'rbd' utility to do low-level management tasks.
|
TIP: You can use the `rbd` utility to do low-level management tasks.
|
||||||
|
|
||||||
Authentication
|
Authentication
|
||||||
~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~
|
||||||
|
|
||||||
If you use cephx authentication, you need to copy the keyfile from
|
If you use `cephx` authentication, you need to copy the keyfile from
|
||||||
Ceph to Proxmox VE host.
|
Ceph to the Proxmox VE host.
|
||||||
|
|
||||||
Create the directory '/etc/pve/priv/ceph' with
|
Create the directory `/etc/pve/priv/ceph` with
|
||||||
|
|
||||||
mkdir /etc/pve/priv/ceph
|
mkdir /etc/pve/priv/ceph
|
||||||
|
|
||||||
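and then copy the keyring over, e.g. for a storage named `ceph3` (the
source host is illustrative):

 scp cephserver:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/ceph3.keyring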
|
@ -27,7 +27,7 @@ sparse::
|
|||||||
Use ZFS thin-provisioning. A sparse volume is a volume whose
|
Use ZFS thin-provisioning. A sparse volume is a volume whose
|
||||||
reservation is not equal to the volume size.
|
reservation is not equal to the volume size.
|
||||||
|
|
||||||
.Configuration Example ('/etc/pve/storage.cfg')
|
.Configuration Example (`/etc/pve/storage.cfg`)
|
||||||
----
|
----
|
||||||
zfspool: vmdata
|
zfspool: vmdata
|
||||||
pool tank/vmdata
|
pool tank/vmdata
|
||||||
|
@ -24,7 +24,7 @@ Container Images
|
|||||||
include::attributes.txt[]
|
include::attributes.txt[]
|
||||||
endif::manvolnum[]
|
endif::manvolnum[]
|
||||||
|
|
||||||
Command line tool to manage container images. See 'man pct' for usage
|
Command line tool to manage container images. See `man pct` for usage
|
||||||
examples.
|
examples.
|
||||||
|
|
||||||
ifdef::manvolnum[]
|
ifdef::manvolnum[]
|
||||||
|
37
pvecm.adoc
37
pvecm.adoc
@ -23,13 +23,13 @@ Cluster Manager
|
|||||||
include::attributes.txt[]
|
include::attributes.txt[]
|
||||||
endif::manvolnum[]
|
endif::manvolnum[]
|
||||||
|
|
||||||
The {PVE} cluster manager 'pvecm' is a tool to create a group of
|
The {PVE} cluster manager `pvecm` is a tool to create a group of
|
||||||
physical servers. Such group is called a *cluster*. We use the
|
physical servers. Such a group is called a *cluster*. We use the
|
||||||
http://www.corosync.org[Corosync Cluster Engine] for reliable group
|
http://www.corosync.org[Corosync Cluster Engine] for reliable group
|
||||||
communication, and such cluster can consists of up to 32 physical nodes
|
communication, and such a cluster can consist of up to 32 physical nodes
|
||||||
(probably more, dependent on network latency).
|
(probably more, dependent on network latency).
|
||||||
|
|
||||||
'pvecm' can be used to create a new cluster, join nodes to a cluster,
|
`pvecm` can be used to create a new cluster, join nodes to a cluster,
|
||||||
leave the cluster, get status information and do various other cluster
|
leave the cluster, get status information and do various other cluster
|
||||||
related tasks. The Proxmox Cluster file system (pmxcfs) is used to
|
related tasks. The Proxmox Cluster file system (`pmxcfs`) is used to
|
||||||
transparently distribute the cluster configuration to all cluster
|
transparently distribute the cluster configuration to all cluster
|
||||||
@ -41,9 +41,8 @@ Grouping nodes into a cluster has the following advantages:
|
|||||||
|
|
||||||
* Multi-master clusters: Each node can do all management task
|
* Multi-master clusters: Each node can do all management tasks
|
||||||
|
|
||||||
* Proxmox Cluster file system (pmxcfs): Database-driven file system
|
* `pmxcfs`: database-driven file system for storing configuration files,
|
||||||
for storing configuration files, replicated in real-time on all
|
replicated in real-time on all nodes using `corosync`.
|
||||||
nodes using corosync.
|
|
||||||
|
|
||||||
* Easy migration of Virtual Machines and Containers between physical
|
* Easy migration of Virtual Machines and Containers between physical
|
||||||
hosts
|
hosts
|
||||||
@ -56,7 +55,7 @@ Grouping nodes into a cluster has the following advantages:
|
|||||||
Requirements
|
Requirements
|
||||||
------------
|
------------
|
||||||
|
|
||||||
* All nodes must be in the same network as corosync uses IP Multicast
|
* All nodes must be in the same network, as `corosync` uses IP Multicast
|
||||||
to communicate between nodes (also see
|
to communicate between nodes (also see
|
||||||
http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
|
http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
|
||||||
ports 5404 and 5405 for cluster communication.
|
ports 5404 and 5405 for cluster communication.
|
||||||
@ -87,13 +86,13 @@ installed with the final hostname and IP configuration. Changing the
|
|||||||
hostname and IP is not possible after cluster creation.
|
hostname and IP is not possible after cluster creation.
|
||||||
|
|
||||||
Currently the cluster creation has to be done on the console, so you
|
Currently the cluster creation has to be done on the console, so you
|
||||||
need to login via 'ssh'.
|
need to log in via `ssh`.
|
||||||
|
|
||||||
Create the Cluster
|
Create the Cluster
|
||||||
------------------
|
------------------
|
||||||
|
|
||||||
Login via 'ssh' to the first Proxmox VE node. Use a unique name for
|
Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
|
||||||
your cluster. This name cannot be changed later.
|
This name cannot be changed later.
|
||||||
|
|
||||||
hp1# pvecm create YOUR-CLUSTER-NAME
|
hp1# pvecm create YOUR-CLUSTER-NAME
|
||||||
|
|
||||||
@ -109,7 +108,7 @@ To check the state of your cluster use:
|
|||||||
Adding Nodes to the Cluster
|
Adding Nodes to the Cluster
|
||||||
---------------------------
|
---------------------------
|
||||||
|
|
||||||
Login via 'ssh' to the node you want to add.
|
Log in via `ssh` to the node you want to add.
|
||||||
|
|
||||||
hp2# pvecm add IP-ADDRESS-CLUSTER
|
hp2# pvecm add IP-ADDRESS-CLUSTER
|
||||||
|
|
||||||
@ -117,8 +116,8 @@ For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.
|
|||||||
|
|
||||||
CAUTION: A new node cannot hold any VM´s, because you would get
|
CAUTION: A new node cannot hold any VMs, because you would get
|
||||||
conflicts about identical VM IDs. Also, all existing configuration in
|
conflicts about identical VM IDs. Also, all existing configuration in
|
||||||
'/etc/pve' is overwritten when you join a new node to the cluster. To
|
`/etc/pve` is overwritten when you join a new node to the cluster. To
|
||||||
workaround, use vzdump to backup and restore to a different VMID after
|
work around this, use `vzdump` to back up and restore to a different VMID after
|
||||||
adding the node to the cluster.
|
adding the node to the cluster.
|
||||||
|
|
||||||
To check the state of cluster:
|
To check the state of cluster:
|
||||||
@ -181,7 +180,7 @@ not be what you want or need.
|
|||||||
Move all virtual machines from the node. Make sure you have no local
|
Move all virtual machines from the node. Make sure you have no local
|
||||||
data or backups you want to keep, or save them accordingly.
|
data or backups you want to keep, or save them accordingly.
|
||||||
|
|
||||||
Log in to one remaining node via ssh. Issue a 'pvecm nodes' command to
|
Log in to one remaining node via `ssh`. Issue a `pvecm nodes` command to
|
||||||
identify the node ID:
|
identify the node ID:
|
||||||
|
|
||||||
----
|
----
|
||||||
@ -230,12 +229,12 @@ Membership information
|
|||||||
----
|
----
|
||||||
|
|
||||||
Log in to one remaining node via ssh. Issue the delete command (here
|
Log in to one remaining node via `ssh`. Issue the delete command (here
|
||||||
deleting node hp4):
|
deleting node `hp4`):
|
||||||
|
|
||||||
hp1# pvecm delnode hp4
|
hp1# pvecm delnode hp4
|
||||||
|
|
||||||
If the operation succeeds no output is returned, just check the node
|
If the operation succeeds, no output is returned; just check the node
|
||||||
list again with 'pvecm nodes' or 'pvecm status'. You should see
|
list again with `pvecm nodes` or `pvecm status`. You should see
|
||||||
something like:
|
something like:
|
||||||
|
|
||||||
----
|
----
|
||||||
@ -308,11 +307,11 @@ It is obvious that a cluster is not quorate when all nodes are
|
|||||||
offline. This is a common case after a power failure.
|
offline. This is a common case after a power failure.
|
||||||
|
|
||||||
NOTE: It is always a good idea to use an uninterruptible power supply
|
NOTE: It is always a good idea to use an uninterruptible power supply
|
||||||
('UPS', also called 'battery backup') to avoid this state. Especially if
|
(``UPS'', also called ``battery backup'') to avoid this state, especially if
|
||||||
you want HA.
|
you want HA.
|
||||||
|
|
||||||
On node startup, service 'pve-manager' is started and waits for
|
On node startup, service `pve-manager` is started and waits for
|
||||||
quorum. Once quorate, it starts all guests which have the 'onboot'
|
quorum. Once quorate, it starts all guests which have the `onboot`
|
||||||
flag set.
|
flag set.
|
||||||
|
|
||||||
When you turn on nodes, or when power comes back after power failure,
|
When you turn on nodes, or when power comes back after power failure,
|
||||||
|
@ -24,11 +24,11 @@ ifndef::manvolnum[]
|
|||||||
include::attributes.txt[]
|
include::attributes.txt[]
|
||||||
endif::manvolnum[]
|
endif::manvolnum[]
|
||||||
|
|
||||||
This daemom exposes the whole {pve} API on 127.0.0.1:85. It runs as
|
This daemon exposes the whole {pve} API on `127.0.0.1:85`. It runs as
|
||||||
'root' and has permission to do all priviledged operations.
|
`root` and has permission to do all privileged operations.
|
||||||
|
|
||||||
NOTE: The daemon listens to a local address only, so you cannot access
|
NOTE: The daemon listens to a local address only, so you cannot access
|
||||||
it from outside. The 'pveproxy' daemon exposes the API to the outside
|
it from outside. The `pveproxy` daemon exposes the API to the outside
|
||||||
world.
|
world.
|
||||||
|
|
||||||
|
|
||||||
|
@ -25,9 +25,9 @@ include::attributes.txt[]
|
|||||||
endif::manvolnum[]
|
endif::manvolnum[]
|
||||||
|
|
||||||
This daemon exposes the whole {pve} API on TCP port 8006 using
|
This daemon exposes the whole {pve} API on TCP port 8006 using
|
||||||
HTTPS. It runs as user 'www-data' and has very limited permissions.
|
HTTPS. It runs as user `www-data` and has very limited permissions.
|
||||||
Operation requiring more permissions are forwarded to the local
|
Operations requiring more permissions are forwarded to the local
|
||||||
'pvedaemon'.
|
`pvedaemon`.
|
||||||
|
|
||||||
Requests targeted for other nodes are automatically forwarded to those
|
Requests targeted for other nodes are automatically forwarded to those
|
||||||
nodes. This means that you can manage your whole cluster by connecting
|
nodes. This means that you can manage your whole cluster by connecting
|
||||||
@@ -36,8 +36,8 @@ to a single {pve} node.
 Host based Access Control
 -------------------------
 
-It is possible to configure "apache2" like access control
-lists. Values are read from file '/etc/default/pveproxy'. For example:
+It is possible to configure ``apache2''-like access control
+lists. Values are read from file `/etc/default/pveproxy`. For example:
 
 ----
 ALLOW_FROM="10.0.0.1-10.0.0.5,192.168.0.0/22"
@@ -46,9 +46,9 @@ POLICY="allow"
 ----
 
 IP addresses can be specified using any syntax understood by `Net::IP`. The
-name 'all' is an alias for '0/0'.
+name `all` is an alias for `0/0`.
 
-The default policy is 'allow'.
+The default policy is `allow`.
 
 [width="100%",options="header"]
 |===========================================================
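To illustrate the options documented here, a deny-by-default variant of `/etc/default/pveproxy` might look like the following sketch (the address range is a placeholder):

----
ALLOW_FROM="192.168.0.0/22"
DENY_FROM="all"
POLICY="deny"
----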
@@ -63,7 +63,7 @@ The default policy is 'allow'.
 SSL Cipher Suite
 ----------------
 
-You can define the cipher list in '/etc/default/pveproxy', for example
+You can define the cipher list in `/etc/default/pveproxy`, for example
 
  CIPHERS="HIGH:MEDIUM:!aNULL:!MD5"
 
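A sketch of applying a custom cipher list (assuming the systemd unit is named `pveproxy`; choose the list to match your own security policy):

----
echo 'CIPHERS="HIGH:!aNULL:!MD5"' >> /etc/default/pveproxy
systemctl restart pveproxy
----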
@@ -75,12 +75,12 @@ Diffie-Hellman Parameters
 -------------------------
 
 You can define the used Diffie-Hellman parameters in
-'/etc/default/pveproxy' by setting `DHPARAMS` to the path of a file
+`/etc/default/pveproxy` by setting `DHPARAMS` to the path of a file
 containing DH parameters in PEM format, for example
 
  DHPARAMS="/path/to/dhparams.pem"
 
-If this option is not set, the built-in 'skip2048' parameters will be
+If this option is not set, the built-in `skip2048` parameters will be
 used.
 
 NOTE: DH parameters are only used if a cipher suite utilizing the DH key
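For illustration, such a parameter file could be generated with OpenSSL (a sketch; the output path is a placeholder and generation may take several minutes):

----
openssl dhparam -out /etc/pve/local/dhparams.pem 2048
echo 'DHPARAMS="/etc/pve/local/dhparams.pem"' >> /etc/default/pveproxy
systemctl restart pveproxy
----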
@@ -89,20 +89,21 @@ exchange algorithm is negotiated.
 Alternative HTTPS certificate
 -----------------------------
 
-By default, pveproxy uses the certificate '/etc/pve/local/pve-ssl.pem'
-(and private key '/etc/pve/local/pve-ssl.key') for HTTPS connections.
+By default, pveproxy uses the certificate `/etc/pve/local/pve-ssl.pem`
+(and private key `/etc/pve/local/pve-ssl.key`) for HTTPS connections.
 This certificate is signed by the cluster CA certificate, and therefor
 not trusted by browsers and operating systems by default.
 
 In order to use a different certificate and private key for HTTPS,
 store the server certificate and any needed intermediate / CA
-certificates in PEM format in the file '/etc/pve/local/pveproxy-ssl.pem'
+certificates in PEM format in the file `/etc/pve/local/pveproxy-ssl.pem`
 and the associated private key in PEM format without a password in the
-file '/etc/pve/local/pveproxy-ssl.key'.
+file `/etc/pve/local/pveproxy-ssl.key`.
 
 WARNING: Do not replace the automatically generated node certificate
-files in '/etc/pve/local/pve-ssl.pem'/'etc/pve/local/pve-ssl.key' or
-the cluster CA files in '/etc/pve/pve-root-ca.pem'/'/etc/pve/priv/pve-root-ca.key'.
+files in `/etc/pve/local/pve-ssl.pem` and `etc/pve/local/pve-ssl.key` or
+the cluster CA files in `/etc/pve/pve-root-ca.pem` and
+`/etc/pve/priv/pve-root-ca.key`.
 
 ifdef::manvolnum[]
 include::pve-copyright.adoc[]
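A minimal sketch of installing such a custom certificate (assuming `fullchain.pem` and `privkey.pem` are hypothetical files holding the certificate chain and the unencrypted key):

----
cp fullchain.pem /etc/pve/local/pveproxy-ssl.pem
cp privkey.pem   /etc/pve/local/pveproxy-ssl.key
systemctl restart pveproxy
----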

pvesm.adoc (26 changed lines)

@@ -36,7 +36,7 @@ live-migrate running machines without any downtime, as all nodes in
 the cluster have direct access to VM disk images. There is no need to
 copy VM image data, so live migration is very fast in that case.
 
-The storage library (package 'libpve-storage-perl') uses a flexible
+The storage library (package `libpve-storage-perl`) uses a flexible
 plugin system to provide a common interface to all storage types. This
 can be easily adopted to include further storage types in future.
 
@@ -81,13 +81,13 @@ snapshots and clones.
 |=========================================================
 
 TIP: It is possible to use LVM on top of an iSCSI storage. That way
-you get a 'shared' LVM storage.
+you get a `shared` LVM storage.
 
 Thin provisioning
 ~~~~~~~~~~~~~~~~~
 
-A number of storages, and the Qemu image format `qcow2`, support _thin
-provisioning_. With thin provisioning activated, only the blocks that
+A number of storages, and the Qemu image format `qcow2`, support 'thin
+provisioning'. With thin provisioning activated, only the blocks that
 the guest system actually use will be written to the storage.
 
 Say for instance you create a VM with a 32GB hard disk, and after
@@ -99,7 +99,7 @@ available storage blocks. You can create large disk images for your
 VMs, and when the need arises, add more disks to your storage without
 resizing the VMs filesystems.
 
-All storage types which have the 'Snapshots' feature also support thin
+All storage types which have the ``Snapshots'' feature also support thin
 provisioning.
 
 CAUTION: If a storage runs full, all guests using volumes on that
@@ -112,12 +112,12 @@ Storage Configuration
 ---------------------
 
 All {pve} related storage configuration is stored within a single text
-file at '/etc/pve/storage.cfg'. As this file is within '/etc/pve/', it
+file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
 gets automatically distributed to all cluster nodes. So all nodes
 share the same storage configuration.
 
 Sharing storage configuration make perfect sense for shared storage,
-because the same 'shared' storage is accessible from all nodes. But is
+because the same ``shared'' storage is accessible from all nodes. But is
 also useful for local storage types. In this case such local storage
 is available on all nodes, but it is physically different and can have
 totally different content.
@@ -140,11 +140,11 @@ them come with reasonable default. In that case you can omit the value.
 
 To be more specific, take a look at the default storage configuration
 after installation. It contains one special local storage pool named
-`local`, which refers to the directory '/var/lib/vz' and is always
+`local`, which refers to the directory `/var/lib/vz` and is always
 available. The {pve} installer creates additional storage entries
 depending on the storage type chosen at installation time.
 
-.Default storage configuration ('/etc/pve/storage.cfg')
+.Default storage configuration (`/etc/pve/storage.cfg`)
 ----
 dir: local
         path /var/lib/vz
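To illustrate the same file format, an additional (hypothetical) NFS storage entry in `/etc/pve/storage.cfg` could look like this sketch; the export path and server address are placeholders:

----
nfs: backup-store
        export /export/backup
        server 10.0.0.10
        content backup
----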
@@ -195,7 +195,7 @@ Container templates.
 
 backup:::
 
-Backup files ('vzdump').
+Backup files (`vzdump`).
 
 iso:::
 
@@ -248,7 +248,7 @@ To get the filesystem path for a `<VOLUME_ID>` use:
 Volume Ownership
 ~~~~~~~~~~~~~~~~
 
-There exists an ownership relation for 'image' type volumes. Each such
+There exists an ownership relation for `image` type volumes. Each such
 volume is owned by a VM or Container. For example volume
 `local:230/example-image.raw` is owned by VM 230. Most storage
 backends encodes this ownership information into the volume name.
@@ -266,8 +266,8 @@ of those low level operations on the command line. Normally,
 allocation and removal of volumes is done by the VM and Container
 management tools.
 
-Nevertheless, there is a command line tool called 'pvesm' ({pve}
-storage manager), which is able to perform common storage management
+Nevertheless, there is a command line tool called `pvesm` (``{pve}
+Storage Manager''), which is able to perform common storage management
 tasks.
 
 
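A few typical `pvesm` invocations, as a sketch (the storage name `local` and VM ID 100 are placeholders):

----
pvesm status                  # list configured storages and their usage
pvesm list local              # list volumes on the `local` storage
pvesm alloc local 100 '' 4G   # allocate a 4GB volume owned by VM 100
----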

pveum.adoc (41 changed lines)

@@ -37,7 +37,7 @@ objects (VM´s, storages, nodes, etc.) granular access can be defined.
 Authentication Realms
 ---------------------
 
-Proxmox VE stores all user attributes in '/etc/pve/user.cfg'. So there
+Proxmox VE stores all user attributes in `/etc/pve/user.cfg`. So there
 must be an entry for each user in that file. The password is not
 stored, instead you can use configure several realms to verify
 passwords.
@@ -48,9 +48,9 @@ LDAP::
 
 Linux PAM standard authentication::
 
-You need to create the system users first with 'adduser'
-(e.g. adduser heinz) and possibly the group as well. After that you
-can create the user on the GUI!
+You need to create the system users first with `adduser`
+(e.g. `adduser heinz`) and possibly the group as well. After that you
+can create the user on the GUI.
 
 [source,bash]
 ----
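The `[source,bash]` block that this hunk truncates presumably holds the user and group setup; judging by the `usermod -a -G watchman heinz` context of the next hunk, the full sequence is probably along these lines (a sketch, not the verbatim file content):

----
adduser heinz
groupadd watchman
usermod -a -G watchman heinz
----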
@@ -63,7 +63,7 @@ usermod -a -G watchman heinz
 Proxmox VE authentication server::
 
 This is a unix like password store
-('/etc/pve/priv/shadow.cfg'). Password are encrypted using the SHA-256
+(`/etc/pve/priv/shadow.cfg`). Password are encrypted using the SHA-256
 hash method. Users are allowed to change passwords.
 
 Terms and Definitions
@@ -76,7 +76,7 @@ A Proxmox VE user name consists of two parts: `<userid>@<realm>`. The
 login screen on the GUI shows them a separate items, but it is
 internally used as single string.
 
-We store the following attribute for users ('/etc/pve/user.cfg'):
+We store the following attribute for users (`/etc/pve/user.cfg`):
 
 * first name
 * last name
@@ -88,7 +88,7 @@ We store the following attribute for users ('/etc/pve/user.cfg'):
 Superuser
 ^^^^^^^^^
 
-The traditional unix superuser account is called 'root@pam'. All
+The traditional unix superuser account is called `root@pam`. All
 system mails are forwarded to the email assigned to that account.
 
 Groups
@@ -103,8 +103,8 @@ Objects and Paths
 ~~~~~~~~~~~~~~~~~
 
 Access permissions are assigned to objects, such as a virtual machines
-('/vms/\{vmid\}') or a storage ('/storage/\{storeid\}') or a pool of
-resources ('/pool/\{poolname\}'). We use filesystem like paths to
+(`/vms/{vmid}`) or a storage (`/storage/{storeid}`) or a pool of
+resources (`/pool/{poolname}`). We use file system like paths to
 address those objects. Those paths form a natural tree, and
 permissions can be inherited down that hierarchy.
 
@@ -221,7 +221,7 @@ Pools
 ~~~~~
 
 Pools can be used to group a set of virtual machines and data
-stores. You can then simply set permissions on pools ('/pool/\{poolid\}'),
+stores. You can then simply set permissions on pools (`/pool/{poolid}`),
 which are inherited to all pool members. This is a great way simplify
 access control.
 
@@ -229,8 +229,8 @@ Command Line Tool
 -----------------
 
 Most users will simply use the GUI to manage users. But there is also
-a full featured command line tool called 'pveum' (short for 'Proxmox
-VE User Manager'). I will use that tool in the following
+a full featured command line tool called `pveum` (short for ``**P**roxmox
+**VE** **U**ser **M**anager''). I will use that tool in the following
 examples. Please note that all Proxmox VE command line tools are
 wrappers around the API, so you can also access those function through
 the REST API.
@@ -302,12 +302,12 @@ Auditors
 You can give read only access to users by assigning the `PVEAuditor`
 role to users or groups.
 
-Example1: Allow user 'joe@pve' to see everything
+Example1: Allow user `joe@pve` to see everything
 
 [source,bash]
 pveum aclmod / -user joe@pve -role PVEAuditor
 
-Example1: Allow user 'joe@pve' to see all virtual machines
+Example1: Allow user `joe@pve` to see all virtual machines
 
 [source,bash]
 pveum aclmod /vms -user joe@pve -role PVEAuditor
@@ -315,24 +315,25 @@ Example1: Allow user 'joe@pve' to see all virtual machines
 Delegate User Management
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-If you want to delegate user managenent to user 'joe@pve' you can do
+If you want to delegate user managenent to user `joe@pve` you can do
 that with:
 
 [source,bash]
 pveum aclmod /access -user joe@pve -role PVEUserAdmin
 
-User 'joe@pve' can now add and remove users, change passwords and
+User `joe@pve` can now add and remove users, change passwords and
 other user attributes. This is a very powerful role, and you most
 likely want to limit that to selected realms and groups. The following
-example allows 'joe@pve' to modify users within realm 'pve' if they
-are members of group 'customers':
+example allows `joe@pve` to modify users within realm `pve` if they
+are members of group `customers`:
 
 [source,bash]
 pveum aclmod /access/realm/pve -user joe@pve -role PVEUserAdmin
 pveum aclmod /access/groups/customers -user joe@pve -role PVEUserAdmin
 
 NOTE: The user is able to add other users, but only if they are
-members of group 'customers' and within realm 'pve'.
+members of group `customers` and within realm `pve`.
 
+
 Pools
 ~~~~~
@@ -359,7 +360,7 @@ Now we create a new user which is a member of that group
 
 NOTE: The -password parameter will prompt you for a password
 
-I assume we already created a pool called 'dev-pool' on the GUI. So we can now assign permission to that pool:
+I assume we already created a pool called ``dev-pool'' on the GUI. So we can now assign permission to that pool:
 
 [source,bash]
 pveum aclmod /pool/dev-pool/ -group developers -role PVEAdmin
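The group and user creation that this hunk's context refers to could look like the following sketch (the names match the surrounding example, but the exact invocations are illustrative):

----
pveum groupadd developers -comment "Our Development Team"
pveum useradd developer1@pve -group developers -password
----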

qm.adoc (6 changed lines)

@@ -401,7 +401,7 @@ with a press of the ESC button during boot), or you have to choose
 SPICE as the display type.
 
 
-Managing Virtual Machines with 'qm'
+Managing Virtual Machines with `qm`
 ------------------------------------
 
 qm is the tool to manage Qemu/Kvm virtual machines on {pve}. You can
@@ -437,7 +437,7 @@ All configuration files consists of lines in the form
  PARAMETER: value
 
 Configuration files are stored inside the Proxmox cluster file
-system, and can be accessed at '/etc/pve/qemu-server/<VMID>.conf'.
+system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
 
 Options
 ~~~~~~~
@@ -448,7 +448,7 @@ include::qm.conf.5-opts.adoc[]
 Locks
 -----
 
-Online migrations and backups ('vzdump') set a lock to prevent incompatible
+Online migrations and backups (`vzdump`) set a lock to prevent incompatible
 concurrent actions on the affected VMs. Sometimes you need to remove such a
 lock manually (e.g., after a power failure).
 
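Clearing such a stale lock is done with `qm unlock`; a one-line sketch (the VM ID is a placeholder):

----
qm unlock 100
----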
@@ -25,7 +25,7 @@ ifndef::manvolnum[]
 include::attributes.txt[]
 endif::manvolnum[]
 
-The '/etc/pve/qemu-server/<VMID>.conf' files stores VM configuration,
+The `/etc/pve/qemu-server/<VMID>.conf` files stores VM configuration,
 where "VMID" is the numeric ID of the given VM.
 
 NOTE: IDs <= 100 are reserved for internal purposes.
@@ -39,10 +39,10 @@ the following format:
 
  OPTION: value
 
-Blank lines in the file are ignored, and lines starting with a '#'
+Blank lines in the file are ignored, and lines starting with a `#`
 character are treated as comments and are also ignored.
 
-One can use the 'qm' command to generate and modify those files.
+One can use the `qm` command to generate and modify those files.
 
 
 Options
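Rather than editing the file by hand, the `qm` command mentioned above can read and modify these options; a short sketch (VM ID and value are placeholders):

----
qm config 100             # print the current configuration of VM 100
qm set 100 -memory 2048   # set the `memory` option to 2048 MB
----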
@@ -6,7 +6,7 @@ include::attributes.txt[]
 NAME
 ----
 
-qmrestore - Restore QemuServer 'vzdump' Backups
+qmrestore - Restore QemuServer `vzdump` Backups
 
 SYNOPSYS
 --------
@@ -24,9 +24,9 @@ include::attributes.txt[]
 endif::manvolnum[]
 
 
-Restore the QemuServer vzdump backup 'archive' to virtual machine
-'vmid'. Volumes are allocated on the original storage if there is no
-'storage' specified.
+Restore the QemuServer vzdump backup `archive` to virtual machine
+`vmid`. Volumes are allocated on the original storage if there is no
+`storage` specified.
 
 ifdef::manvolnum[]
 include::pve-copyright.adoc[]
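A usage sketch for the command described above (archive path, target VM ID and storage name are all placeholders):

----
qmrestore /var/lib/vz/dump/vzdump-qemu-777.vma.lzo 601 -storage local
----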
@@ -32,15 +32,15 @@ machines and container.
 
 This daemon listens on TCP port 3128, and implements an HTTP proxy to
 forward 'CONNECT' request from the SPICE client to the correct {pve}
-VM. It runs as user 'www-data' and has very limited permissions.
+VM. It runs as user `www-data` and has very limited permissions.
 
 
 Host based Access Control
 -------------------------
 
 It is possible to configure "apache2" like access control
-lists. Values are read from file '/etc/default/pveproxy'.
-See 'pveproxy' documentation for details.
+lists. Values are read from file `/etc/default/pveproxy`.
+See `pveproxy` documentation for details.
 
 
 ifdef::manvolnum[]
@@ -57,7 +57,7 @@ Recommended system requirements
 
 * RAM: 8 GB is good, more is better
 
-* Hardware RAID with batteries protected write cache (BBU) or flash
+* Hardware RAID with batteries protected write cache (``BBU'') or flash
   based protection
 
 * Fast hard drives, best results with 15k rpm SAS, Raid10
@@ -4,12 +4,12 @@ include::attributes.txt[]
 
 We provide regular package updates on all repositories. You can
 install those update using the GUI, or you can directly run the CLI
-command 'apt-get':
+command `apt-get`:
 
  apt-get update
  apt-get dist-upgrade
 
-NOTE: The 'apt' package management system is extremely flexible and
+NOTE: The `apt` package management system is extremely flexible and
 provides countless of feature - see `man apt-get` or <<Hertzog13>> for
 additional information.
 

vzdump.adoc (16 changed lines)

@@ -80,7 +80,7 @@ This mode provides the lowest operation downtime, at the cost of a
 small inconstancy risk. It works by performing a Proxmox VE live
 backup, in which data blocks are copied while the VM is running. If the
 guest agent is enabled (`agent: 1`) and running, it calls
-'guest-fsfreeze-freeze' and 'guest-fsfreeze-thaw' to improve
+`guest-fsfreeze-freeze` and `guest-fsfreeze-thaw` to improve
 consistency.
 
 A technical overview of the Proxmox VE live backup for QemuServer can
@@ -122,7 +122,7 @@ snapshot content will be archived in a tar file. Finally, the temporary
 snapshot is deleted again.
 
 NOTE: `snapshot` mode requires that all backed up volumes are on a storage that
-supports snapshots. Using the `backup=no` mountpoint option individual volumes
+supports snapshots. Using the `backup=no` mount point option individual volumes
 can be excluded from the backup (and thus this requirement).
 
 NOTE: bind and device mountpoints are skipped during backup operations, like
@@ -156,13 +156,13 @@ For details see the corresponding manual pages.
 Configuration
 -------------
 
-Global configuration is stored in '/etc/vzdump.conf'. The file uses a
+Global configuration is stored in `/etc/vzdump.conf`. The file uses a
 simple colon separated key/value format. Each line has the following
 format:
 
  OPTION: value
 
-Blank lines in the file are ignored, and lines starting with a '#'
+Blank lines in the file are ignored, and lines starting with a `#`
 character are treated as comments and are also ignored. Values from
 this file are used as default, and can be overwritten on the command
 line.
@@ -172,7 +172,7 @@ We currently support the following options:
 include::vzdump.conf.5-opts.adoc[]
 
 
-.Example 'vzdump.conf' Configuration
+.Example `vzdump.conf` Configuration
 ----
 tmpdir: /mnt/fast_local_disk
 storage: my_backup_storage
@@ -186,14 +186,14 @@ Hook Scripts
 You can specify a hook script with option `--script`. This script is
 called at various phases of the backup process, with parameters
 accordingly set. You can find an example in the documentation
-directory ('vzdump-hook-script.pl').
+directory (`vzdump-hook-script.pl`).
 
 File Exclusions
 ---------------
 
 NOTE: this option is only available for container backups.
 
-'vzdump' skips the following files by default (disable with the option
+`vzdump` skips the following files by default (disable with the option
 `--stdexcludes 0`)
 
  /tmp/?*
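Combining the two features above, a container backup run with a hook script and the standard exclusions disabled could be invoked like this sketch (the script path is a placeholder):

----
vzdump 777 --script /usr/local/bin/backup-hook.pl --stdexcludes 0
----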
@@ -214,7 +214,7 @@ Examples
 
 Simply dump guest 777 - no snapshot, just archive the guest private area and
 configuration files to the default dump directory (usually
-'/var/lib/vz/dump/').
+`/var/lib/vz/dump/`).
 
  # vzdump 777
 