Mirror of https://git.proxmox.com/git/pve-docs, synced 2025-08-01 22:30:16 +00:00
spelling/grammar/capitalization
especially:
- filesystem -> file system
- mountpoint -> mount point
- lxc -> LXC
- pve -> PVE
- it's -> its (where appropriate)
- VM´s -> VMs (and similar)
- Ressource -> Resource
- maximal -> maximum or at most
- format of section headers and block headers
commit 5eba07434f (parent 8c1189b640)
@@ -165,7 +165,7 @@ Locks are provided by our distributed configuration file system (pmxcfs).
 They are used to guarantee that each LRM is active once and working. As a
 LRM only executes actions when it holds its lock we can mark a failed node
 as fenced if we can acquire its lock. This lets us then recover any failed
-HA services securely without any interference from the now unknown failed Node.
+HA services securely without any interference from the now unknown failed node.
 This all gets supervised by the CRM which holds currently the manager master
 lock.

@@ -188,14 +188,14 @@ After the LRM gets in the active state it reads the manager status
 file in `/etc/pve/ha/manager_status` and determines the commands it
 has to execute for the services it owns.
 For each command a worker gets started, this workers are running in
-parallel and are limited to maximal 4 by default. This default setting
+parallel and are limited to at most 4 by default. This default setting
 may be changed through the datacenter configuration key `max_worker`.
 When finished the worker process gets collected and its result saved for
 the CRM.

-.Maximal Concurrent Worker Adjustment Tips
+.Maximum Concurrent Worker Adjustment Tips
 [NOTE]
-The default value of 4 maximal concurrent Workers may be unsuited for
+The default value of at most 4 concurrent workers may be unsuited for
 a specific setup. For example may 4 live migrations happen at the same
 time, which can lead to network congestions with slower networks and/or
 big (memory wise) services. Ensure that also in the worst case no congestion
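A minimal illustration of the worker limit described above, assuming the key is
set in the datacenter configuration file and using an arbitrary value:

----
# /etc/pve/datacenter.cfg
max_worker: 8
----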
@@ -300,7 +300,7 @@ a watchdog reset.
 Fencing
 -------

-What Is Fencing
+What is Fencing
 ~~~~~~~~~~~~~~~

 Fencing secures that on a node failure the dangerous node gets will be rendered
@@ -384,13 +384,13 @@ relative meaning only.

 restricted::

-resources bound to this group may only run on nodes defined by the
+Resources bound to this group may only run on nodes defined by the
 group. If no group node member is available the resource will be
 placed in the stopped state.

 nofailback::

-the resource won't automatically fail back when a more preferred node
+The resource won't automatically fail back when a more preferred node
 (re)joins the cluster.


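A sketch of how these group options could look in the HA group configuration,
assuming the file `/etc/pve/ha/groups.cfg` and placeholder node names:

----
# /etc/pve/ha/groups.cfg
group: prefer-node1
        nodes node1,node2
        restricted 1
        nofailback 1
----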
@@ -411,12 +411,12 @@ specific for each resource.

 max_restart::

-maximal number of tries to restart an failed service on the actual
+Maximum number of tries to restart an failed service on the actual
 node. The default is set to one.

 max_relocate::

-maximal number of tries to relocate the service to a different node.
+Maximum number of tries to relocate the service to a different node.
 A relocate only happens after the max_restart value is exceeded on the
 actual node. The default is set to one.

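A sketch of these two options in the HA resource configuration, assuming the
file `/etc/pve/ha/resources.cfg` and a placeholder VM ID:

----
# /etc/pve/ha/resources.cfg
vm: 100
        max_restart 2
        max_relocate 1
----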
@@ -433,7 +433,7 @@ placed in an error state. In this state the service won't get touched
 by the HA stack anymore. To recover from this state you should follow
 these steps:

-* bring the resource back into an safe and consistent state (e.g:
+* bring the resource back into a safe and consistent state (e.g.,
 killing its process)

 * disable the ha resource to place it in an stopped state
@@ -451,19 +451,19 @@ This are how the basic user-initiated service operations (via

 enable::

-the service will be started by the LRM if not already running.
+The service will be started by the LRM if not already running.

 disable::

-the service will be stopped by the LRM if running.
+The service will be stopped by the LRM if running.

 migrate/relocate::

-the service will be relocated (live) to another node.
+The service will be relocated (live) to another node.

 remove::

-the service will be removed from the HA managed resource list. Its
+The service will be removed from the HA managed resource list. Its
 current state will not be touched.

 start/stop::
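These operations roughly correspond to `ha-manager` invocations; the exact
subcommand names below are assumptions and may differ between versions:

----
ha-manager enable vm:100
ha-manager disable vm:100
ha-manager migrate vm:100 node2
ha-manager remove vm:100
----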
@@ -5,11 +5,11 @@ include::attributes.txt[]
 ZFS is a combined file system and logical volume manager designed by
 Sun Microsystems. Starting with {pve} 3.4, the native Linux
 kernel port of the ZFS file system is introduced as optional
-file-system and also as an additional selection for the root
-file-system. There is no need for manually compile ZFS modules - all
+file system and also as an additional selection for the root
+file system. There is no need for manually compile ZFS modules - all
 packages are included.

-By using ZFS, its possible to achieve maximal enterprise features with
+By using ZFS, its possible to achieve maximum enterprise features with
 low budget hardware, but also high performance systems by leveraging
 SSD caching or even SSD only setups. ZFS can replace cost intense
 hardware raid cards by moderate CPU and memory load combined with easy
@@ -23,7 +23,7 @@ management.

 * Protection against data corruption

-* Data compression on file-system level
+* Data compression on file system level

 * Snapshots

@@ -61,7 +61,7 @@ If you use a dedicated cache and/or log disk, you should use a
 enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can
 increase the overall performance significantly.

-IMPORTANT: Do not use ZFS on top of hardware controller which has it's
+IMPORTANT: Do not use ZFS on top of hardware controller which has its
 own cache management. ZFS needs to directly communicate with disks. An
 HBA adapter is the way to go, or something like LSI controller flashed
 in ``IT'' mode.
@@ -72,7 +72,7 @@ since they are not supported by ZFS. Use IDE or SCSI instead (works
 also with `virtio` SCSI controller type).


-Installation as root file system
+Installation as Root File System
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 When you install using the {pve} installer, you can choose ZFS for the
@@ -175,7 +175,7 @@ manual pages, which can be read with:
 # man zfs
 -----

-.Create a new ZPool
+.Create a new zpool

 To create a new pool, at least one disk is needed. The `ashift` should
 have the same sector-size (2 power of `ashift`) or larger as the
@@ -183,7 +183,7 @@ underlying disk.

 zpool create -f -o ashift=12 <pool> <device>

-To activate the compression
+To activate compression

 zfs set compression=lz4 <pool>

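For example, a disk with 4 KiB physical sectors matches `ashift=12` (2^12 = 4096
bytes), as used in the command above, while 512-byte-sector disks correspond to
`ashift=9`.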
@@ -217,7 +217,7 @@ Minimum 4 Disks

 zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>

-.Create a new pool with Cache (L2ARC)
+.Create a new pool with cache (L2ARC)

 It is possible to use a dedicated cache drive partition to increase
 the performance (use SSD).
@@ -227,7 +227,7 @@ As `<device>` it is possible to use more devices, like it's shown in

 zpool create -f -o ashift=12 <pool> <device> cache <cache_device>

-.Create a new pool with Log (ZIL)
+.Create a new pool with log (ZIL)

 It is possible to use a dedicated cache drive partition to increase
 the performance(SSD).
@@ -237,7 +237,7 @@ As `<device>` it is possible to use more devices, like it's shown in

 zpool create -f -o ashift=12 <pool> <device> log <log_device>

-.Add Cache and Log to an existing pool
+.Add cache and log to an existing pool

 If you have an pool without cache and log. First partition the SSD in
 2 partition with `parted` or `gdisk`
@@ -246,11 +246,11 @@ IMPORTANT: Always use GPT partition tables (gdisk or parted).

 The maximum size of a log device should be about half the size of
 physical memory, so this is usually quite small. The rest of the SSD
-can be used to the cache.
+can be used as cache.

 zpool add -f <pool> log <device-part1> cache <device-part2>

-.Changing a failed Device
+.Changing a failed device

 zpool replace -f <pool> <old device> <new-device>

@@ -259,7 +259,7 @@ Activate E-Mail Notification
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 ZFS comes with an event daemon, which monitors events generated by the
-ZFS kernel module. The daemon can also send E-Mails on ZFS event like
+ZFS kernel module. The daemon can also send emails on ZFS events like
 pool errors.

 To activate the daemon it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
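A minimal sketch of that edit, assuming only the required setting is changed and
using a placeholder address:

----
# /etc/zfs/zed.d/zed.rc
ZED_EMAIL_ADDR="admin@example.com"
----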
@@ -274,22 +274,24 @@ IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
 other settings are optional.


-Limit ZFS memory usage
+Limit ZFS Memory Usage
 ~~~~~~~~~~~~~~~~~~~~~~

-It is good to use maximal 50 percent (which is the default) of the
+It is good to use at most 50 percent (which is the default) of the
 system memory for ZFS ARC to prevent performance shortage of the
 host. Use your preferred editor to change the configuration in
 `/etc/modprobe.d/zfs.conf` and insert:

-options zfs zfs_arc_max=8589934592
+--------
+options zfs zfs_arc_max=8589934592
+--------

 This example setting limits the usage to 8GB.

 [IMPORTANT]
 ====
-If your root fs is ZFS you must update your initramfs every
-time this value changes.
+If your root file system is ZFS you must update your initramfs every
+time this value changes:

 update-initramfs -u
 ====
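As a sanity check on the value used above: 8 GiB = 8 × 1024^3 bytes = 8589934592,
so `zfs_arc_max=8589934592` caps the ARC at 8 GiB; a 4 GiB cap would
correspondingly be `zfs_arc_max=4294967296`.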
pct.adoc (39)
@@ -75,7 +75,8 @@ The good news is that LXC uses many kernel security features like
 AppArmor, CGroups and PID and user namespaces, which makes containers
 usage quite secure. We distinguish two types of containers:

-Privileged containers
+
+Privileged Containers
 ~~~~~~~~~~~~~~~~~~~~~

 Security is done by dropping capabilities, using mandatory access
@@ -86,11 +87,12 @@ and quick fix. So you should use this kind of containers only inside a
 trusted environment, or when no untrusted task is running as root in
 the container.

-Unprivileged containers
+
+Unprivileged Containers
 ~~~~~~~~~~~~~~~~~~~~~~~

 This kind of containers use a new kernel feature called user
-namespaces. The root uid 0 inside the container is mapped to an
+namespaces. The root UID 0 inside the container is mapped to an
 unprivileged user outside the container. This means that most security
 issues (container escape, resource abuse, ...) in those containers
 will affect a random unprivileged user, and so would be a generic
@@ -131,6 +133,7 @@ Our toolkit is smart enough to instantaneously apply most changes to
 running containers. This feature is called "hot plug", and there is no
 need to restart the container in that case.

+
 File Format
 ~~~~~~~~~~~

@@ -154,6 +157,7 @@ or

 Those settings are directly passed to the LXC low-level tools.

+
 Snapshots
 ~~~~~~~~~

@@ -162,7 +166,7 @@ time into a separate snapshot section within the same configuration
 file. For example, after creating a snapshot called ``testsnapshot'',
 your configuration file will look like this:

-.Container Configuration with Snapshot
+.Container configuration with snapshot
 ----
 memory: 512
 swap: 512
@@ -249,6 +253,7 @@ Gentoo:: test /etc/gentoo-release
 NOTE: Container start fails if the configured `ostype` differs from the auto
 detected type.

+
 Options
 ~~~~~~~

@@ -316,7 +321,7 @@ local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz 190.20MB

 The above command shows you the full {pve} volume identifiers. They include
 the storage name, and most other {pve} commands can use them. For
-examply you can delete that image later with:
+example you can delete that image later with:

 pveam remove local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz

@@ -364,27 +369,27 @@ include::pct-mountpoint-opts.adoc[]
 Currently there are basically three types of mount points: storage backed
 mount points, bind mounts and device mounts.

-.Typical Container `rootfs` configuration
+.Typical container `rootfs` configuration
 ----
 rootfs: thin1:base-100-disk-1,size=8G
 ----


-Storage backed mount points
+Storage Backed Mount Points
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^

 Storage backed mount points are managed by the {pve} storage subsystem and come
 in three different flavors:

-- Image based: These are raw images containing a single ext4 formatted file
+- Image based: these are raw images containing a single ext4 formatted file
   system.
-- ZFS Subvolumes: These are technically bind mounts, but with managed storage,
+- ZFS subvolumes: these are technically bind mounts, but with managed storage,
   and thus allow resizing and snapshotting.
 - Directories: passing `size=0` triggers a special case where instead of a raw
   image a directory is created.


-Bind mount points
+Bind Mount Points
 ^^^^^^^^^^^^^^^^^

 Bind mounts allow you to access arbitrary directories from your Proxmox VE host
@@ -416,7 +421,7 @@ Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
 achieve the same result.


-Device mount points
+Device Mount Points
 ^^^^^^^^^^^^^^^^^^^

 Device mount points allow to mount block devices of the host directly into the
@@ -430,7 +435,7 @@ more features.
 NOTE: The contents of device mount points are not backed up when using `vzdump`.


-FUSE mounts
+FUSE Mounts
 ~~~~~~~~~~~

 WARNING: Because of existing issues in the Linux kernel's freezer
@@ -443,7 +448,7 @@ technologies, it is possible to establish the FUSE mount on the Proxmox host
 and use a bind mount point to make it accessible inside the container.


-Using quotas inside containers
+Using Quotas Inside Containers
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Quotas allow to set limits inside a container for the amount of disk
@@ -470,10 +475,10 @@ NOTE: You need to run the above commands for every mount point by passing
 the mount point's path instead of just `/`.


-Using ACLs inside containers
+Using ACLs Inside Containers
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-The standard Posix Access Control Lists are also available inside containers.
+The standard Posix **A**ccess **C**ontrol **L**ists are also available inside containers.
 ACLs allow you to set more detailed file ownership than the traditional user/
 group/others model.

|
|||||||
Backup and Restore
|
Backup and Restore
|
||||||
------------------
|
------------------
|
||||||
|
|
||||||
|
|
||||||
Container Backup
|
Container Backup
|
||||||
~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
@ -563,11 +569,12 @@ and destroy containers, and control execution (start, stop, migrate,
|
|||||||
...). You can use pct to set parameters in the associated config file,
|
...). You can use pct to set parameters in the associated config file,
|
||||||
like network configuration or memory limits.
|
like network configuration or memory limits.
|
||||||
|
|
||||||
|
|
||||||
CLI Usage Examples
|
CLI Usage Examples
|
||||||
~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
Create a container based on a Debian template (provided you have
|
Create a container based on a Debian template (provided you have
|
||||||
already downloaded the template via the webgui)
|
already downloaded the template via the web interface)
|
||||||
|
|
||||||
pct create 100 /var/lib/vz/template/cache/debian-8.0-standard_8.0-1_amd64.tar.gz
|
pct create 100 /var/lib/vz/template/cache/debian-8.0-standard_8.0-1_amd64.tar.gz
|
||||||
|
|
||||||
|
@@ -44,7 +44,7 @@ character are treated as comments and are also ignored.

 One can use the `pct` command to generate and modify those files.

-It is also possible to add low-level lxc style configuration directly, for
+It is also possible to add low-level LXC-style configuration directly, for
 example:

 lxc.init_cmd: /sbin/my_own_init
@@ -53,7 +53,7 @@ or

 lxc.init_cmd = /sbin/my_own_init

-Those settings are directly passed to the lxc low-level tools.
+Those settings are directly passed to the LXC low-level tools.


 Options
pmxcfs.adoc (17)
@@ -30,7 +30,7 @@ configuration files.

 Although the file system stores all data inside a persistent database
 on disk, a copy of the data resides in RAM. That imposes restriction
-on the maximal size, which is currently 30MB. This is still enough to
+on the maximum size, which is currently 30MB. This is still enough to
 store the configuration of several thousand virtual machines.

 This system provides the following advantages:
@@ -41,6 +41,7 @@ This system provides the following advantages:
 * automatic updates of the corosync cluster configuration to all nodes
 * includes a distributed locking mechanism

+
 POSIX Compatibility
 -------------------

@@ -60,7 +61,7 @@ some feature are simply not implemented, because we do not need them:
 * `O_TRUNC` creates are not atomic (FUSE restriction)


-File access rights
+File Access Rights
 ------------------

 All files and directories are owned by user `root` and have group
@@ -78,10 +79,10 @@ Technology

 We use the http://www.corosync.org[Corosync Cluster Engine] for
 cluster communication, and http://www.sqlite.org[SQlite] for the
-database file. The filesystem is implemented in user space using
+database file. The file system is implemented in user space using
 http://fuse.sourceforge.net[FUSE].

-File system layout
+File System Layout
 ------------------

 The file system is mounted at:
@@ -114,6 +115,7 @@ Files
 |`firewall/<VMID>.fw` | Firewall configuration for VMs and Containers
 |=======

+
 Symbolic links
 ~~~~~~~~~~~~~~

@@ -124,6 +126,7 @@ Symbolic links
 |`lxc` | `nodes/<LOCAL_HOST_NAME>/lxc/`
 |=======

+
 Special status files for debugging (JSON)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -136,6 +139,7 @@ Special status files for debugging (JSON)
 |`.rrd` |RRD data (most recent entries)
 |=======

+
 Enable/Disable debugging
 ~~~~~~~~~~~~~~~~~~~~~~~~

@@ -160,6 +164,7 @@ host. On the new host (with nothing running), you need to stop the
 lost Proxmox VE host, then reboot and check. (And don't forget your
 VM/CT data)

+
 Remove Cluster configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -178,7 +183,7 @@ without reinstall, which is described here:

 # pmxcfs -l

-* remove the cluster config
+* remove the cluster configuration

 # rm /etc/pve/cluster.conf
 # rm /etc/cluster/cluster.conf
@@ -188,7 +193,7 @@ without reinstall, which is described here:

 # systemctl stop pve-cluster

-* restart pve services (or reboot)
+* restart PVE services (or reboot)

 # systemctl start pve-cluster
 # systemctl restart pvedaemon
@@ -215,7 +215,7 @@ include::pmxcfs.8-cli.adoc[]

 :leveloffset: 0

-*pve-ha-crm* - Cluster Ressource Manager Daemon
+*pve-ha-crm* - Cluster Resource Manager Daemon
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 :leveloffset: 1
@@ -223,7 +223,7 @@ include::pve-ha-crm.8-synopsis.adoc[]

 :leveloffset: 0

-*pve-ha-lrm* - Local Ressource Manager Daemon
+*pve-ha-lrm* - Local Resource Manager Daemon
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 :leveloffset: 1
@@ -32,7 +32,7 @@ include::attributes.txt[]
 ASIN B01BBVQZT6

 [bibliography]
-.Books about related Technology
+.Books about related technology

 - [[[Hertzog13]]] Raphaël Hertzog & Roland Mas.
 https://debian-handbook.info/download/stable/debian-handbook.pdf[The Debian Administrator\'s Handbook: Debian Jessie from Discovery to Mastery],
@@ -81,7 +81,7 @@ include::attributes.txt[]
 ISBN 978-0596521189

 [bibliography]
-.Books about related Topics
+.Books about related topics

 - [[[Bessen09]]] James Bessen & Michael J. Meurer,
 'Patent Failure: How Judges, Bureaucrats, and Lawyers Put Innovators at Risk'.
@@ -21,7 +21,7 @@ version 3.

 Will {pve} run on a 32bit processor?::

-{pve} works only on 64-bit CPU´s (AMD or Intel). There is no plan
+{pve} works only on 64-bit CPUs (AMD or Intel). There is no plan
 for 32-bit for the platform.
 +
 NOTE: VMs and Containers can be both 32-bit and/or 64-bit.
@@ -29,7 +29,7 @@ Proxmox VE Firewall provides an easy way to protect your IT
 infrastructure. You can setup firewall rules for all hosts
 inside a cluster, or define rules for virtual machines and
 containers. Features like firewall macros, security groups, IP sets
-and aliases helps to make that task easier.
+and aliases help to make that task easier.

 While all configuration is stored on the cluster file system, the
 `iptables`-based firewall runs on each cluster node, and thus provides
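A minimal sketch of a cluster-wide firewall configuration, assuming the file
`/etc/pve/firewall/cluster.fw` and a placeholder management network:

----
[OPTIONS]
enable: 1

[RULES]
IN SSH(ACCEPT) -source 192.168.2.0/24
----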
@@ -139,7 +139,7 @@ To simplify that task, you can instead create an IPSet called
 firewall rules to access the GUI from remote.


-Host specific Configuration
+Host Specific Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Host related configuration is read from:
@@ -161,7 +161,7 @@ include::pve-firewall-host-opts.adoc[]
 This sections contains host specific firewall rules.


-VM/Container configuration
+VM/Container Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~

 VM firewall configuration is read from:
@@ -276,7 +276,8 @@ name. You can then refer to those names:
 * inside IP set definitions
 * in `source` and `dest` properties of firewall rules

-Standard IP alias `local_network`
+
+Standard IP Alias `local_network`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 This alias is automatically defined. Please use the following command
@@ -303,6 +304,7 @@ explicitly assign the local IP address
 local_network 1.2.3.4 # use the single ip address
 ----

+
 IP Sets
 -------

@@ -315,11 +317,12 @@ set.

 IN HTTP(ACCEPT) -source +management

+
 Standard IP set `management`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 This IP set applies only to host firewalls (not VM firewalls). Those
-ips are allowed to do normal management tasks (PVE GUI, VNC, SPICE,
+IPs are allowed to do normal management tasks (PVE GUI, VNC, SPICE,
 SSH).

 The local cluster network is automatically added to this IP set (alias
@@ -338,7 +341,7 @@ communication. (multicast,ssh,...)
 Standard IP set `blacklist`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Traffic from these ips is dropped by every host's and VM's firewall.
+Traffic from these IPs is dropped by every host's and VM's firewall.

 ----
 # /etc/pve/firewall/cluster.fw
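A sketch of how entries in that `blacklist` set might look in the same file
(addresses are placeholders):

----
[IPSET blacklist]
77.240.159.182
213.87.123.0/24
----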
@@ -531,7 +534,7 @@ Beside neighbor discovery NDP is also used for a couple of other things, like
 autoconfiguration and advertising routers.

 By default VMs are allowed to send out router solicitation messages (to query
-for a router), and to receive router advetisement packets. This allows them to
+for a router), and to receive router advertisement packets. This allows them to
 use stateless auto configuration. On the other hand VMs cannot advertise
 themselves as routers unless the ``Allow Router Advertisement'' (`radv: 1`) option
 is set.
@@ -551,7 +554,7 @@ Ports used by Proxmox VE
 * SPICE proxy: 3128
 * sshd (used for cluster actions): 22
 * rpcbind: 111
 * corosync multicast (if you run a cluster): 5404, 5405 UDP


 ifdef::manvolnum[]
@@ -6,7 +6,7 @@ include::attributes.txt[]
 NAME
 ----

-pve-ha-crm - PVE Cluster Ressource Manager Daemon
+pve-ha-crm - PVE Cluster Resource Manager Daemon


 SYNOPSYS
@@ -19,12 +19,12 @@ DESCRIPTION
 endif::manvolnum[]

 ifndef::manvolnum[]
-Cluster Ressource Manager Daemon
+Cluster Resource Manager Daemon
 ================================
 include::attributes.txt[]
 endif::manvolnum[]

-This is the Cluster Ressource Manager Daemon.
+This is the Cluster Resource Manager Daemon.

 ifdef::manvolnum[]
 include::pve-copyright.adoc[]
@@ -6,7 +6,7 @@ include::attributes.txt[]
 NAME
 ----

-pve-ha-lrm - PVE Local Ressource Manager Daemon
+pve-ha-lrm - PVE Local Resource Manager Daemon


 SYNOPSYS
@@ -19,12 +19,12 @@ DESCRIPTION
 endif::manvolnum[]

 ifndef::manvolnum[]
-Local Ressource Manager Daemon
+Local Resource Manager Daemon
 ==============================
 include::attributes.txt[]
 endif::manvolnum[]

-This is the Local Ressource Manager Daemon.
+This is the Local Resource Manager Daemon.

 ifdef::manvolnum[]
 include::pve-copyright.adoc[]
@@ -32,6 +32,7 @@ within a few minutes, including the following:
 NOTE: By default, the complete server is used and all existing data is
 removed.

+
 Using the {pve} Installation CD-ROM
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -95,7 +96,7 @@ and higher, and Google Chrome.


 [[advanced_lvm_options]]
-Advanced LVM configuration options
+Advanced LVM Configuration Options
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 The installer creates a Volume Group (VG) called `pve`, and additional
@@ -23,13 +23,13 @@ While many people start with a single node, {pve} can scale out to a
 large set of clustered nodes. The cluster stack is fully integrated
 and ships with the default installation.

-Unique Multi-master Design::
+Unique Multi-Master Design::

 The integrated web-based management interface gives you a clean
 overview of all your KVM guests and Linux containers and even of your
 whole cluster. You can easily manage your VMs and containers, storage
 or cluster from the GUI. There is no need to install a separate,
-complex, and pricy management server.
+complex, and pricey management server.

 Proxmox Cluster File System (pmxcfs)::

@@ -48,7 +48,7 @@ cluster file system.
 Web-based Management Interface::

 Proxmox VE is simple to use. Management tasks can be done via the
-included web based managment interface - there is no need to install a
+included web based management interface - there is no need to install a
 separate management tool or any additional management node with huge
 databases. The multi-master tool allows you to manage your whole
 cluster from any node of your cluster. The central web-based
@@ -74,7 +74,7 @@ hosting environments.

 Role-based Administration::

-You can define granular access for all objects (like VM´s, storages,
+You can define granular access for all objects (like VMs, storages,
 nodes, etc.) by using the role based user- and permission
 management. This allows you to define privileges and helps you to
 control access to objects. This concept is also known as access
@@ -88,7 +88,7 @@ Active Directory, LDAP, Linux PAM standard authentication or the
 built-in Proxmox VE authentication server.


 Flexible Storage
 ----------------

 The Proxmox VE storage model is very flexible. Virtual machine images
@@ -116,6 +116,7 @@ Local storage types supported are:
 * Directory (storage on existing filesystem)
 * ZFS

+
 Integrated Backup and Restore
 -----------------------------

@@ -128,6 +129,7 @@ NFS, iSCSI LUN, Ceph RBD or Sheepdog. The new backup format is
 optimized for storing VM backups fast and effective (sparse files, out
 of order data, minimized I/O).

+
 High Availability Cluster
 -------------------------

@@ -136,6 +138,7 @@ available virtual servers. The Proxmox VE HA Cluster is based on
 proven Linux HA technologies, providing stable and reliable HA
 services.

+
 Flexible Networking
 -------------------

@@ -154,7 +157,7 @@ leveraging the full power of the Linux network stack.
 Integrated Firewall
 -------------------

-The intergrated firewall allows you to filter network packets on
+The integrated firewall allows you to filter network packets on
 any VM or Container interface. Common sets of firewall rules can
 be grouped into ``security groups''.

@@ -179,6 +182,7 @@ the product always meets professional quality criteria.
 Open source software also helps to keep your costs low and makes your
 core infrastructure independent from a single vendor.

+
 Your benefit with {pve}
 -----------------------

@@ -191,11 +195,12 @@ Your benefit with {pve}
 * Huge active community
 * Low administration costs and simple deployment

+
 Project History
 ---------------

 The project started in 2007, followed by a first stable version in
-2008. By that time we used OpenVZ for containers, and KVM for virtual
+2008. At the time we used OpenVZ for containers, and KVM for virtual
 machines. The clustering features were limited, and the user interface
 was simple (server generated web page).

@@ -207,8 +212,8 @@ the user. Managing a cluster of 16 nodes is as simple as managing a
 single node.

 We also introduced a new REST API, with a complete declarative
-spezification written in JSON-Schema. This enabled other people to
-integrate {pve} into their infrastructur, and made it easy provide
+specification written in JSON-Schema. This enabled other people to
+integrate {pve} into their infrastructure, and made it easy to provide
 additional services.

 Also, the new REST API made it possible to replace the original user
@@ -225,7 +230,7 @@ are extremely cost effective.

 When we started we were among the first companies providing
 commercial support for KVM. The KVM project itself continuously
-evolved, and is now a widely used hypervisor. New features arrives
+evolved, and is now a widely used hypervisor. New features arrive
 with each release. We developed the KVM live backup feature, which
 makes it possible to create snapshot backups on any storage type.

@@ -30,6 +30,7 @@ file. All {pve} tools tries hard to keep such direct user
 modifications. Using the GUI is still preferable, because it
 protect you from errors.

+
 Naming Conventions
 ~~~~~~~~~~~~~~~~~~

@@ -37,6 +37,7 @@ storage backends.
 |Backup files |`dump/`
 |===========================================================

+
 Configuration
 ~~~~~~~~~~~~~

@@ -79,7 +80,7 @@ integer to make the name unique.
 Specifies the image format (`raw|qcow2|vmdk`).

 When you create a VM template, all VM images are renamed to indicate
-that they are now read-only, and can be uses as a base image for clones:
+that they are now read-only, and can be used as a base image for clones:

 base-<VMID>-<NAME>.<FORMAT>

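For instance, under this scheme a template disk of VM 100 could be named
`base-100-disk-1.qcow2` (hypothetical VM ID, name and format).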
@@ -88,6 +89,7 @@ important that those files are read-only, and never get modified. The
 backend changes the access mode to `0444`, and sets the immutable flag
 (`chattr +i`) if the storage supports that.

+
 Storage Features
 ~~~~~~~~~~~~~~~~

@@ -45,12 +45,14 @@ glusterfs: Gluster
 content images,iso
 ----

+
 File naming conventions
 ~~~~~~~~~~~~~~~~~~~~~~~

-The directory layout and the file naming conventions are inhertited
+The directory layout and the file naming conventions are inherited
 from the `dir` backend.

+
 Storage Features
 ~~~~~~~~~~~~~~~~

@@ -46,6 +46,7 @@ TIP: If you want to use LVM on top of iSCSI, it make sense to set
 `content none`. That way it is not possible to create VMs using iSCSI
 LUNs directly.

+
 File naming conventions
 ~~~~~~~~~~~~~~~~~~~~~~~

@@ -14,6 +14,7 @@ can easily manage space on that iSCSI LUN, which would not be possible
 otherwise, because the iSCSI specification does not define a
 management interface for space allocation.

+
 Configuration
 ~~~~~~~~~~~~~

@@ -22,7 +22,7 @@ used to configure the NFS server:
 server::

 Server IP or DNS name. To avoid DNS lookup delays, it is usually
-preferrable to use an IP address instead of a DNS name - unless you
+preferable to use an IP address instead of a DNS name - unless you
 have a very reliable DNS server, or list the server in the local
 `/etc/hosts` file.

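A sketch of a complete NFS storage definition using that option, assuming
`/etc/pve/storage.cfg` and placeholder server and export values:

----
# /etc/pve/storage.cfg
nfs: iso-templates
        path /mnt/pve/iso-templates
        server 10.0.0.10
        export /space/iso-templates
        content iso,vztmpl
        options vers=3,soft
----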
@@ -59,7 +59,7 @@ client side. For read-only content, it is worth to consider the NFS
 Storage Features
 ~~~~~~~~~~~~~~~~

-NFS does not support snapshots, but the backend use `qcow2` features
+NFS does not support snapshots, but the backend uses `qcow2` features
 to implement snapshots and cloning.

 .Storage features for backend `nfs`
@@ -16,7 +16,7 @@ storage, and you get the following advantages:
 * self healing
 * no single point of failure
 * scalable to the exabyte level
-* kernel and unser space implementation available
+* kernel and user space implementation available

 NOTE: For smaller deployments, it is also possible to run Ceph
 services directly on your {pve} nodes. Recent hardware has plenty
@@ -4,9 +4,10 @@ include::attributes.txt[]

 Storage pool type: `zfspool`

-This backend allows you to access local ZFS pools (or ZFS filesystems
+This backend allows you to access local ZFS pools (or ZFS file systems
 inside such pools).

+
 Configuration
 ~~~~~~~~~~~~~

@@ -35,6 +36,7 @@ zfspool: vmdata
 sparse
 ----

+
 File naming conventions
 ~~~~~~~~~~~~~~~~~~~~~~~

@@ -50,7 +52,7 @@ This specifies the owner VM.

 `<NAME>`::

-This scan be an arbitrary name (`ascii`) without white spaces. The
+This can be an arbitrary name (`ascii`) without white space. The
 backend uses `disk[N]` as default, where `[N]` is replaced by an
 integer to make the name unique.

@@ -71,14 +73,15 @@ on the parent dataset.
 |images rootdir |raw subvol |no |yes |yes
 |==============================================================================

+
 Examples
 ~~~~~~~~

-It is recommended to create and extra ZFS filesystem to store your VM images:
+It is recommended to create an extra ZFS file system to store your VM images:

 # zfs create tank/vmdata

-To enable compression on that newly allocated filesystem:
+To enable compression on that newly allocated file system:

 # zfs set compression=on tank/vmdata

pvecm.adoc (12)
@@ -26,7 +26,7 @@ endif::manvolnum[]
 The {PVE} cluster manager `pvecm` is a tool to create a group of
 physical servers. Such a group is called a *cluster*. We use the
 http://www.corosync.org[Corosync Cluster Engine] for reliable group
-communication, and such cluster can consists of up to 32 physical nodes
+communication, and such clusters can consist of up to 32 physical nodes
 (probably more, dependent on network latency).

 `pvecm` can be used to create a new cluster, join nodes to a cluster,
@ -39,12 +39,12 @@ Grouping nodes into a cluster has the following advantages:
|
|||||||
|
|
||||||
* Centralized, web based management
|
* Centralized, web based management
|
||||||
|
|
||||||
* Multi-master clusters: Each node can do all management task
|
* Multi-master clusters: each node can do all management task
|
||||||
|
|
||||||
* `pmxcfs`: database-driven file system for storing configuration files,
|
* `pmxcfs`: database-driven file system for storing configuration files,
|
||||||
replicated in real-time on all nodes using `corosync`.
|
replicated in real-time on all nodes using `corosync`.
|
||||||
|
|
||||||
* Easy migration of Virtual Machines and Containers between physical
|
* Easy migration of virtual machines and containers between physical
|
||||||
hosts
|
hosts
|
||||||
|
|
||||||
* Fast deployment
|
* Fast deployment
|
||||||
@@ -114,7 +114,7 @@ Login via `ssh` to the node you want to add.

 For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.

-CAUTION: A new node cannot hold any VM´s, because you would get
+CAUTION: A new node cannot hold any VMs, because you would get
 conflicts about identical VM IDs. Also, all existing configuration in
 `/etc/pve` is overwritten when you join a new node to the cluster. To
 workaround, use `vzdump` to backup and restore to a different VMID after
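As a hedged illustration of the join step documented in this hunk (the address is a placeholder, not taken from the commit):

----
# run on the node that should join, pointing at any existing cluster member
# pvecm add 192.168.10.1
----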
@@ -157,7 +157,7 @@ If you only want the list of all nodes use:

  # pvecm nodes

-.List Nodes in a Cluster
+.List nodes in a cluster
 ----
 hp2# pvecm nodes

@@ -295,7 +295,7 @@ ____

 In case of network partitioning, state changes requires that a
 majority of nodes are online. The cluster switches to read-only mode
-if it loose quorum.
+if it loses quorum.

 NOTE: {pve} assigns a single vote to each node by default.

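A quick worked example of the majority rule this hunk refers to (node counts chosen purely for illustration):

----
# one vote per node; a partition is quorate with floor(votes/2) + 1 votes
5-node cluster: quorum reached with 3 nodes online
6-node cluster: quorum reached with 4 nodes online
----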
@@ -86,6 +86,7 @@ used.
 NOTE: DH parameters are only used if a cipher suite utilizing the DH key
 exchange algorithm is negotiated.


 Alternative HTTPS certificate
 -----------------------------

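If you want to try out custom DH parameters as mentioned above, generating a parameter file with OpenSSL looks roughly like this; the output path is an assumption, and the file actually read by `pveproxy` is configured as described earlier in that file:

----
# generate 2048-bit DH parameters for pveproxy to use
openssl dhparam -out /etc/pve/local/dhparams.pem 2048
----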
pvesm.adoc (22 lines changed)

@@ -83,7 +83,8 @@ snapshots and clones.
 TIP: It is possible to use LVM on top of an iSCSI storage. That way
 you get a `shared` LVM storage.

-Thin provisioning
+
+Thin Provisioning
 ~~~~~~~~~~~~~~~~~

 A number of storages, and the Qemu image format `qcow2`, support 'thin
@@ -91,13 +92,13 @@ provisioning'. With thin provisioning activated, only the blocks that
 the guest system actually use will be written to the storage.

 Say for instance you create a VM with a 32GB hard disk, and after
-installing the guest system OS, the root filesystem of the VM contains
+installing the guest system OS, the root file system of the VM contains
 3 GB of data. In that case only 3GB are written to the storage, even
 if the guest VM sees a 32GB hard drive. In this way thin provisioning
 allows you to create disk images which are larger than the currently
 available storage blocks. You can create large disk images for your
 VMs, and when the need arises, add more disks to your storage without
-resizing the VMs filesystems.
+resizing the VMs' file systems.

 All storage types which have the ``Snapshots'' feature also support thin
 provisioning.
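The effect described in this hunk is easy to see on a file-based storage; a hedged sketch with made-up file names and sizes:

----
# create a 32G qcow2 image; the file starts out tiny and only grows as the guest writes
qemu-img create -f qcow2 vm-100-disk-0.qcow2 32G
du -h vm-100-disk-0.qcow2    # only a few hundred KiB are allocated initially
----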
@@ -108,6 +109,7 @@ and may corrupt your data. So it is advisable to avoid
 over-provisioning of your storage resources, or carefully observe
 free space to avoid such conditions.


 Storage Configuration
 ---------------------

@@ -122,10 +124,12 @@ also useful for local storage types. In this case such local storage
 is available on all nodes, but it is physically different and can have
 totally different content.


 Storage Pools
 ~~~~~~~~~~~~~

-Each storage pool has a `<type>`, and is uniquely identified by its `<STORAGE_ID>`. A pool configuration looks like this:
+Each storage pool has a `<type>`, and is uniquely identified by its
+`<STORAGE_ID>`. A pool configuration looks like this:

 ----
 <type>: <STORAGE_ID>
@@ -163,6 +167,7 @@ zfspool: local-zfs
 	content images,rootdir
 ----


 Common Storage Properties
 ~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -211,7 +216,7 @@ You can use this flag to disable the storage completely.

 maxfiles::

-Maximal number of backup files per VM. Use `0` for unlimted.
+Maximum number of backup files per VM. Use `0` for unlimited.

 format::

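A hedged sketch of how `maxfiles` might be used in a storage definition; the storage name, path and limit below are made up for illustration:

----
dir: backup-store
	path /mnt/backup
	content backup
	maxfiles 3
----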
@@ -241,10 +246,11 @@ like:

  iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

-To get the filesystem path for a `<VOLUME_ID>` use:
+To get the file system path for a `<VOLUME_ID>` use:

  pvesm path <VOLUME_ID>


 Volume Ownership
 ~~~~~~~~~~~~~~~~

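On a default installation the command from this hunk would resolve a volume on the `local` directory storage roughly as follows; the volume name and the printed path are assumptions, not output captured for this commit:

----
# pvesm path local:100/vm-100-disk-0.qcow2
/var/lib/vz/images/100/vm-100-disk-0.qcow2
----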
@@ -312,7 +318,7 @@ you pass an empty string as `<name>`

  pvesm alloc local <VMID> '' 4G

 Free volumes

  pvesm free <VOLUME_ID>

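Putting the two commands from this hunk together, a hedged end-to-end sketch; the VMID and the resulting volume name are assumptions:

----
# allocate a 4 GiB volume for VM 100 on the local storage, letting the backend pick a name
pvesm alloc local 100 '' 4G
# remove it again, using the volume ID printed by the previous command
pvesm free local:100/vm-100-disk-1.raw
----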
@@ -338,7 +344,7 @@ List container templates

  pvesm list <STORAGE_ID> --vztmpl

-Show filesystem path for a volume
+Show file system path for a volume

  pvesm path <VOLUME_ID>

@@ -23,7 +23,7 @@ ifndef::manvolnum[]
 include::attributes.txt[]
 endif::manvolnum[]

-This daemom queries the status of VMs, storages and containers at
+This daemon queries the status of VMs, storages and containers at
 regular intervals. The result is sent to all nodes in the cluster.

 ifdef::manvolnum[]
pveum.adoc (19 lines changed)

@@ -32,7 +32,8 @@ Active Directory, LDAP, Linux PAM or the integrated Proxmox VE
 authentication server.

 By using the role based user- and permission management for all
-objects (VM´s, storages, nodes, etc.) granular access can be defined.
+objects (VMs, storages, nodes, etc.) granular access can be defined.


 Authentication Realms
 ---------------------
@@ -66,9 +67,11 @@ This is a unix like password store
 (`/etc/pve/priv/shadow.cfg`). Password are encrypted using the SHA-256
 hash method. Users are allowed to change passwords.


 Terms and Definitions
 ---------------------


 Users
 ~~~~~

@@ -85,12 +88,14 @@ We store the following attribute for users (`/etc/pve/user.cfg`):
 * flag to enable/disable account
 * comment


 Superuser
 ^^^^^^^^^

 The traditional unix superuser account is called `root@pam`. All
 system mails are forwarded to the email assigned to that account.


 Groups
 ~~~~~~

@@ -99,6 +104,7 @@ way to organize access permissions. You should always grant permission
 to groups instead of using individual users. That way you will get a
 much shorter access control list which is easier to handle.


 Objects and Paths
 ~~~~~~~~~~~~~~~~~

@@ -108,6 +114,7 @@ resources (`/pool/{poolname}`). We use file system like paths to
 address those objects. Those paths form a natural tree, and
 permissions can be inherited down that hierarchy.


 Privileges
 ~~~~~~~~~~

@@ -157,6 +164,7 @@ Storage related privileges::
 * `Datastore.AllocateTemplate`: allocate/upload templates and iso images
 * `Datastore.Audit`: view/browse a datastore


 Roles
 ~~~~~

@@ -200,10 +208,11 @@ When a subject requests an action on an object, the framework looks up
 the roles assigned to that subject (using the object path). The set of
 roles defines the granted privileges.


 Inheritance
 ^^^^^^^^^^^

-As mentioned earlier, object paths forms a filesystem like tree, and
+As mentioned earlier, object paths form a file system like tree, and
 permissions can be inherited down that tree (the propagate flag is set
 by default). We use the following inheritance rules:

@@ -211,12 +220,14 @@ by default). We use the following inheritance rules:
 * permission for groups apply when the user is member of that group.
 * permission set at higher level always overwrites inherited permissions.


 What permission do I need?
 ^^^^^^^^^^^^^^^^^^^^^^^^^^

 The required API permissions are documented for each individual
 method, and can be found at http://pve.proxmox.com/pve-docs/api-viewer/


 Pools
 ~~~~~

@@ -274,11 +285,12 @@ Create a new role:
 Real World Examples
 -------------------


 Administrator Group
 ~~~~~~~~~~~~~~~~~~~

 One of the most wanted features was the ability to define a group of
-users with full administartor rights (without using the root account).
+users with full administrator rights (without using the root account).

 Define the group:

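The actual commands after "Define the group:" lie outside this hunk; as a hedged sketch, defining such a group and granting it the Administrator role would look roughly like this (group name and comment are assumptions):

----
pveum groupadd admin -comment "System Administrators"
pveum aclmod / -group admin -role Administrator
----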
@@ -312,6 +324,7 @@ Example1: Allow user `joe@pve` to see all virtual machines
 [source,bash]
 pveum aclmod /vms -user joe@pve -role PVEAuditor


 Delegate User Management
 ~~~~~~~~~~~~~~~~~~~~~~~~

qm.adoc (7 lines changed)

@@ -29,7 +29,7 @@ endif::manvolnum[]
 // http://pve.proxmox.com/wiki/KVM
 // http://pve.proxmox.com/wiki/Qemu_Server

-Qemu (short form for Quick Emulator) is an opensource hypervisor that emulates a
+Qemu (short form for Quick Emulator) is an open source hypervisor that emulates a
 physical computer. From the perspective of the host system where Qemu is
 running, Qemu is a user program which has access to a number of local resources
 like partitions, files, network cards which are then passed to an
@@ -56,6 +56,7 @@ module.
 Qemu inside {pve} runs as a root process, since this is required to access block
 and PCI devices.


 Emulated devices and paravirtualized devices
 --------------------------------------------

@@ -86,12 +87,14 @@ up to three times the throughput of an emulated Intel E1000 network card, as
 measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
 http://www.linux-kvm.org/page/Using_VirtIO_NIC]


 Virtual Machines settings
 -------------------------
 Generally speaking {pve} tries to choose sane defaults for virtual machines
 (VM). Make sure you understand the meaning of the settings you change, as it
 could incur a performance slowdown, or putting your data at risk.


 General Settings
 ~~~~~~~~~~~~~~~~
 General settings of a VM include
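As a hedged illustration of the paravirtualized devices discussed in this hunk, switching an existing guest's first network device to the VirtIO model could look like this; the VMID and bridge name are placeholders:

----
qm set 100 --net0 virtio,bridge=vmbr0
----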
@@ -101,6 +104,7 @@ General settings of a VM include
 * *Name*: a free form text string you can use to describe the VM
 * *Resource Pool*: a logical group of VMs


 OS Settings
 ~~~~~~~~~~~
 When creating a VM, setting the proper Operating System(OS) allows {pve} to
@@ -108,6 +112,7 @@ optimize some low level parameters. For instance Windows OS expect the BIOS
 clock to use the local time, while Unix based OS expect the BIOS clock to have
 the UTC time.


 Hard Disk
 ~~~~~~~~~
 Qemu can emulate a number of storage controllers:
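A hedged command-line sketch tying the OS setting and the hard disk choice from the surrounding hunks together; the VMID, sizes and storage name are assumptions:

----
# create a Linux guest (ostype l26) with a 32G SCSI disk on the local-lvm storage
qm create 100 --name demo --ostype l26 --memory 2048 --scsi0 local-lvm:32
----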
@@ -62,7 +62,7 @@ Recommended system requirements

 * Fast hard drives, best results with 15k rpm SAS, Raid10

-* At least two NIC´s, depending on the used storage technology you need more
+* At least two NICs, depending on the used storage technology you need more

 ifdef::wiki[]

@@ -106,9 +106,9 @@ a second rsync copies changed files. After that, the container is
 started (resumed) again. This results in minimal downtime, but needs
 additional space to hold the container copy.
 +
-When the container is on a local filesystem and the target storage of
+When the container is on a local file system and the target storage of
 the backup is an NFS server, you should set `--tmpdir` to reside on a
-local filesystem too, as this will result in a many fold performance
+local file system too, as this will result in a many fold performance
 improvement. Use of a local `tmpdir` is also required if you want to
 backup a local container using ACLs in suspend mode if the backup
 storage is an NFS server.
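A hedged invocation matching the advice in this hunk; the container ID, target storage and temporary directory are placeholders:

----
# suspend-mode backup of container 107 to an NFS-backed storage,
# keeping the temporary copy on a local directory
vzdump 107 --mode suspend --storage nfs-backup --tmpdir /var/tmp
----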
@@ -125,8 +125,8 @@ NOTE: `snapshot` mode requires that all backed up volumes are on a storage that
 supports snapshots. Using the `backup=no` mount point option individual volumes
 can be excluded from the backup (and thus this requirement).

-NOTE: bind and device mountpoints are skipped during backup operations, like
-volume mountpoints with the backup option disabled.
+NOTE: bind and device mount points are skipped during backup operations, like
+volume mount points with the backup option disabled.


 Backup File Names
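For reference, the kind of mount point exclusion mentioned in the note above would appear in a container configuration roughly like this; the volume and mount path are illustrative assumptions:

----
# excerpt from a container configuration - this volume is skipped by vzdump
mp0: local-zfs:subvol-107-disk-1,mp=/data,backup=0
----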