Mirror of https://git.proxmox.com/git/pve-docs
pct: fix text width to 80cc
use `git show --word-diff=color` to see that almost no word change happened,
only realignment.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
parent 6d718b9b2c
commit 69ab602f6a
pct.adoc (387 changed lines)
@@ -85,13 +85,14 @@ Technology Overview

* Container setup from host (network, DNS, storage, etc.)

Security Considerations
-----------------------

Containers use the kernel of the host system. This creates a big attack surface
for malicious users. This should be considered if containers are provided to
untrustworthy people. In general, full virtual machines provide better
isolation.

However, LXC uses many security features like AppArmor, CGroups and kernel
namespaces to reduce the attack surface.
@@ -108,8 +109,8 @@ To trace AppArmor activity, use:

Guest Operating System Configuration
------------------------------------

{pve} tries to detect the Linux distribution in the container, and modifies
some files. Here is a short list of things done at container startup:

set /etc/hostname:: to set the container name

@@ -135,22 +136,20 @@ Changes made by {PVE} are enclosed by comment markers:

# --- END PVE ---
----

Those markers will be inserted at a reasonable location in the file. If such a
section already exists, it will be updated in place and will not be moved.

Modification of a file can be prevented by adding a `.pve-ignore.` file for it.
For instance, if the file `/etc/.pve-ignore.hosts` exists then the `/etc/hosts`
file will not be touched. This can be a simple empty file created via:

----
# touch /etc/.pve-ignore.hosts
----

Most modifications are OS dependent, so they differ between different
distributions and versions. You can completely disable modifications by
manually setting the `ostype` to `unmanaged`.
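
For example, switching a container to unmanaged mode could be done with
`pct set` (a minimal sketch; the container ID `100` is only illustrative):

----
# pct set 100 -ostype unmanaged
----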

OS type detection is done by testing for certain files inside the
container:
@@ -178,20 +177,21 @@ Container Images
----------------

Container images, sometimes also referred to as ``templates'' or
``appliances'', are `tar` archives which contain everything to run a container.
`pct` uses them to create a new container, for example:

----
# pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----

{pve} itself provides a variety of basic templates for the most common Linux
distributions. They can be downloaded using the GUI or the `pveam` (short for
{pve} Appliance Manager) command line utility.
Additionally, https://www.turnkeylinux.org/[TurnKey Linux] container templates
are also available to download.

The list of available templates is updated daily via cron. To trigger it
manually:

----
# pveam update
@@ -229,26 +229,26 @@ system ubuntu-19.04-standard_19.04-1_amd64.tar.gz
system ubuntu-19.10-standard_19.10-1_amd64.tar.gz
----

Before you can use such a template, you need to download it into one of your
storages. You can simply use storage `local` for that purpose. For clustered
installations, it is preferred to use a shared storage so that all nodes can
access those images.

----
# pveam download local debian-10.0-standard_10.0-1_amd64.tar.gz
----

You are now ready to create containers using that image, and you can list all
downloaded images on storage `local` with:

----
# pveam list local
local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz 219.95MB
----

The above command shows you the full {pve} volume identifiers. They include the
storage name, and most other {pve} commands can use them. For example, you can
delete that image later with:

----
# pveam remove local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
@@ -259,12 +259,13 @@ Container Storage
-----------------

The {pve} LXC container storage model is more flexible than traditional
container storage models. A container can have multiple mount points. This
makes it possible to use the best suited storage for each application.

For example, the root file system of the container can be on slow and cheap
storage while the database can be on fast and distributed storage via a second
mount point. See section <<pct_mount_points, Mount Points>> for further
details.
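
Such a split could look like this in the container configuration (a sketch
only; the storage names `local-zfs` and `ceph-pool` are hypothetical):

----
rootfs: local-zfs:subvol-100-disk-0,size=8G
mp0: ceph-pool:vm-100-disk-1,mp=/var/lib/mysql,size=32G
----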

Any storage type supported by the {pve} storage library can be used. This means
that containers can be stored on local (for example `lvm`, `zfs` or directory),
@@ -282,10 +283,9 @@ between containers.
FUSE Mounts
~~~~~~~~~~~

WARNING: Because of existing issues in the Linux kernel's freezer subsystem,
the usage of FUSE mounts inside a container is strongly advised against, as
containers need to be frozen for suspend or snapshot mode backups.

If FUSE mounts cannot be replaced by other mounting mechanisms or storage
technologies, it is possible to establish the FUSE mount on the Proxmox host
@@ -295,29 +295,29 @@ and use a bind mount point to make it accessible inside the container.

Using Quotas Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Quotas allow you to set limits inside a container for the amount of disk space
that each user can use.

NOTE: This only works on ext4 image based storage types and currently only
works with privileged containers.

Activating the `quota` option causes the following mount options to be used for
a mount point:
`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`

This allows quotas to be used like on any other system. You can initialize the
`/aquota.user` and `/aquota.group` files by running:

----
# quotacheck -cmug /
# quotaon /
----

Then edit the quotas using the `edquota` command. Refer to the documentation of
the distribution running inside the container for details.
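
For example, to edit the quota of a single user inside the container (the user
name `alice` is only illustrative):

----
# edquota -u alice
----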

NOTE: You need to run the above commands for every mount point by passing the
mount point's path instead of just `/`.


Using ACLs Inside Containers
@@ -347,15 +347,15 @@ mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1

NOTE: When creating a new mount point in the GUI, this option is enabled by
default.

To disable backups for a mount point, add `backup=0` in the way described
above, or uncheck the *Backup* checkbox on the GUI.

Replication of Container Mount Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, additional mount points are replicated when the Root Disk is
replicated. If you want the {pve} storage replication mechanism to skip a mount
point, you can set the *Skip replication* option for that mount point.
As of {pve} 5.0, replication requires a storage of type `zfspool`. Adding a
mount point to a different type of storage when the container has replication
configured requires *Skip replication* to be enabled for that mount point.
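
On the command line, the *Skip replication* option corresponds to the
`replicate=0` flag of a mount point; a minimal sketch (the container ID,
storage name and size are only illustrative):

----
# pct set 100 -mp1 local-lvm:8,mp=/data/no-replica,replicate=0
----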
@@ -373,44 +373,45 @@ General Settings

General settings of a container include

* the *Node*: the physical server on which the container will run
* the *CT ID*: a unique number in this {pve} installation used to identify your
container
* *Hostname*: the hostname of the container
* *Resource Pool*: a logical group of containers and VMs
* *Password*: the root password of the container
* *SSH Public Key*: a public key for connecting to the root account over SSH
* *Unprivileged container*: this option allows you to choose at creation time
if you want to create a privileged or unprivileged container.

Unprivileged Containers
^^^^^^^^^^^^^^^^^^^^^^^

Unprivileged containers use a new kernel feature called user namespaces.
The root UID 0 inside the container is mapped to an unprivileged user outside
the container. This means that most security issues (container escape, resource
abuse, etc.) in these containers will affect a random unprivileged user, and
would be a generic kernel security bug rather than an LXC issue. The LXC team
thinks unprivileged containers are safe by design.

This is the default option when creating a new container.

NOTE: If the container uses systemd as an init system, please be aware that the
systemd version running inside the container should be equal to or greater than
220.


Privileged Containers
^^^^^^^^^^^^^^^^^^^^^

Security in containers is achieved by using mandatory access control
('AppArmor'), 'seccomp' filters and namespaces. The LXC team considers this
kind of container as unsafe, and they will not consider new container escape
exploits to be security issues worthy of a CVE and quick fix. That's why
privileged containers should only be used in trusted environments.

Although it is not recommended, AppArmor can be disabled for a container. This
brings security risks with it. Some syscalls can lead to privilege escalation
when executed within a container if the system is misconfigured or if an LXC or
Linux kernel vulnerability exists.

To disable AppArmor for a container, add the following line to the container
configuration file located at `/etc/pve/lxc/CTID.conf`:
@@ -419,8 +420,7 @@ configuration file located at `/etc/pve/lxc/CTID.conf`:
lxc.apparmor_profile = unconfined
----

WARNING: Please note that this is not recommended for production use.


[[pct_cpu]]
@@ -431,9 +431,10 @@ CPU

You can restrict the number of visible CPUs inside the container using the
`cores` option. This is implemented using the Linux 'cpuset' cgroup
(**c**ontrol *group*).
A special task inside `pvestatd` tries to distribute running containers among
available CPUs periodically.
To view the assigned CPUs, run the following command:

----
# pct cpusets
@@ -451,21 +452,20 @@ control options.

[horizontal]

`cpulimit`: :: You can use this option to further limit assigned CPU time.
Please note that this is a floating point number, so it is perfectly valid to
assign two cores to a container, but restrict overall CPU consumption to half a
core.
+
----
cores: 2
cpulimit: 0.5
----

`cpuunits`: :: This is a relative weight passed to the kernel scheduler. The
larger the number is, the more CPU time this container gets. The number is
relative to the weights of all the other running containers. The default is
1024. You can use this setting to prioritize some containers.
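+
A minimal sketch of giving one container twice the default weight (the value is
only illustrative):
+
----
cpuunits: 2048
----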


[[pct_memory]]
@@ -478,13 +478,12 @@ Container memory is controlled using the cgroup memory controller.

[horizontal]

`memory`: :: Limit overall memory usage. This corresponds to the
`memory.limit_in_bytes` cgroup setting.

`swap`: :: Allows the container to use additional swap memory from the host
swap space. This corresponds to the `memory.memsw.limit_in_bytes` cgroup
setting, which is set to the sum of both values (`memory + swap`).
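
For example, to give a container 1024 MiB of RAM plus 512 MiB of swap, the
configuration could contain (a sketch with illustrative values only):

----
memory: 1024
swap: 512
----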


[[pct_mount_points]]
@@ -494,13 +493,13 @@ Mount Points

[thumbnail="screenshot/gui-create-ct-root-disk.png"]

The root mount point is configured with the `rootfs` property. You can
configure up to 256 additional mount points. The corresponding options are
called `mp0` to `mp255`. They can contain the following settings:

include::pct-mountpoint-opts.adoc[]

Currently there are three types of mount points: storage backed mount points,
bind mounts, and device mounts.

.Typical container `rootfs` configuration
----
@@ -523,10 +522,15 @@ in three different flavors:

NOTE: The special option syntax `STORAGE_ID:SIZE_IN_GB` for storage backed
mount point volumes will automatically allocate a volume of the specified size
on the specified storage. For example, calling

----
pct set 100 -mp0 thin1:10,mp=/path/in/container
----

will allocate a 10GB volume on the storage `thin1`, replace the volume ID
placeholder `10` with the allocated volume ID, and set up the mount point in
the container at `/path/in/container`.


Bind Mount Points
@@ -546,11 +550,10 @@ user mapping and cannot use ACLs.

NOTE: The contents of bind mount points are not backed up when using `vzdump`.

WARNING: For security reasons, bind mounts should only be established using
source directories especially reserved for this purpose, e.g., a directory
hierarchy under `/mnt/bindmounts`. Never bind mount system directories like
`/`, `/var` or `/etc` into a container - this poses a great security risk.
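
As an illustration, a host directory reserved for this purpose could be made
available inside a container like this (the path and CT ID are only
illustrative):

----
# pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared
----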

NOTE: The bind mount source path must not contain any symlinks.

@@ -572,7 +575,8 @@ NOTE: Device mount points should only be used under special circumstances. In
most cases a storage backed mount point offers the same performance and a lot
more features.

NOTE: The contents of device mount points are not backed up when using
`vzdump`.


[[pct_container_network]]
@@ -581,9 +585,9 @@ Network

[thumbnail="screenshot/gui-create-ct-network.png"]

You can configure up to 10 network interfaces for a single container.
The corresponding options are called `net0` to `net9`, and they can contain the
following settings:

include::pct-network-opts.adoc[]

@@ -604,24 +608,24 @@ interface or run the following command:

// use the screenshot from qemu - it's the same
[thumbnail="screenshot/gui-qemu-edit-start-order.png"]

If you want to fine-tune the boot order of your containers, you can use the
following parameters:

* *Start/Shutdown order*: Defines the start order priority. For example, set it
to 1 if you want the CT to be the first to be started. (We use the reverse
startup order for shutdown, so a container with a start order of 1 would be
the last to be shut down)
* *Startup delay*: Defines the interval between this container's start and
subsequent container starts. For example, set it to 240 if you want to wait
240 seconds before starting other containers.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
for the container to be offline after issuing a shutdown command.
By default this value is set to 60, which means that {pve} will issue a
shutdown request, wait 60s for the machine to be offline, and if after 60s
the machine is still online, it will notify that the shutdown action failed.

Please note that containers without a Start/Shutdown order parameter will
always start after those where the parameter is set, and this parameter only
makes sense between the machines running locally on a host, and not
cluster-wide.
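
On the command line, these parameters map to the `startup` option; a minimal
sketch setting order, startup delay and shutdown timeout for container 100
(all values are only illustrative):

----
# pct set 100 -startup order=1,up=240,down=60
----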
@@ -634,8 +638,8 @@ You can add a hook script to CTs with the config property `hookscript`.
# pct set 100 -hookscript local:snippets/hookscript.pl
----

It will be called during various phases of the guest's lifetime. For an example
and documentation see the example script under
`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.

Backup and Restore
@@ -645,18 +649,18 @@ Backup and Restore

Container Backup
~~~~~~~~~~~~~~~~

It is possible to use the `vzdump` tool for container backup. Please refer to
the `vzdump` manual page for details.
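
For instance, a one-off snapshot-mode backup of container 100 might look like
this (a sketch; the ID and mode are only illustrative):

----
# vzdump 100 --mode snapshot
----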


Restoring Container Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restoring container backups made with `vzdump` is possible using the `pct
restore` command. By default, `pct restore` will attempt to restore as much of
the backed up container configuration as possible. It is possible to override
the backed up configuration by manually setting container options on the
command line (see the `pct` manual page for details).

NOTE: `pvesm extractconfig` can be used to view the backed up configuration
contained in a vzdump archive.
@@ -668,15 +672,16 @@ points:

``Simple'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^

If neither the `rootfs` parameter nor any of the optional `mpX` parameters are
explicitly set, the mount point configuration from the backed up configuration
file is restored using the following steps:

. Extract mount points and their options from backup
. Create volumes for storage backed mount points (on storage provided with the
`storage` parameter, or default local storage if unset)
. Extract files from backup archive
. Add bind and device mount points to restored configuration (limited to root
user)

NOTE: Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
@@ -694,14 +699,14 @@ interface.
By setting the `rootfs` parameter (and optionally, any combination of `mpX`
parameters), the `pct restore` command is automatically switched into an
advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
configuration options contained in the backup archive, and instead only uses
the options explicitly provided as parameters.

This mode allows flexible configuration of mount point settings at restore
time, for example:

* Set target storages, volume sizes and other options for each mount point
individually
* Redistribute backed up files according to a new mount point scheme
* Restore to device and/or bind mount points (limited to root user)
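
Such an advanced mode restore could look like this (a sketch only; the archive
name, storage names and sizes are all hypothetical):

----
# pct restore 123 local:backup/vzdump-lxc-100-2021_01_01-12_00_00.tar.gz \
    -rootfs local:8 -mp0 thin1:10,mp=/data
----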
@@ -718,8 +723,8 @@ network configuration or memory limits.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a container based on a Debian template (provided you have already
downloaded the template via the web interface)

----
# pct create 100 /var/lib/vz/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz
@@ -749,8 +754,8 @@ Display the configuration
# pct config 100
----

Add a network interface called `eth0`, bridged to the host bridge `vmbr0`, set
the address and gateway, while it's running

----
# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
@@ -774,9 +779,8 @@ the container's ID):
# lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log
----

This command will attempt to start the container in foreground mode; to stop
the container, run `pct shutdown ID` or `pct stop ID` in a second terminal.

The collected debug log is written to `/tmp/lxc-ID.log`.

@@ -798,23 +802,23 @@ This works as long as your Container is offline. If it has local volumes or
mount points defined, the migration will copy the content over the network to
the target host if the same storage is defined there.

If you want to migrate online Containers, the only way is to use restart
migration. This can be initiated with the -restart flag and the optional
-timeout parameter.

A restart migration will shut down the Container and kill it after the
specified timeout (the default is 180 seconds). Then it will migrate the
Container like an offline migration and when finished, it starts the Container
on the target node.
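
A sketch of such a restart migration from the command line (the container ID
and target node name are only illustrative):

----
# pct migrate 100 targetnode --restart --timeout 180
----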

[[pct_configuration]]
Configuration
-------------

The `/etc/pve/lxc/<CTID>.conf` file stores container configuration, where
`<CTID>` is the numeric ID of the given container. Like all other files stored
inside `/etc/pve/`, it gets automatically replicated to all other cluster
nodes.

NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster wide.
@@ -830,38 +834,37 @@ net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----

The configuration files are simple text files. You can edit them using a normal
text editor (`vi`, `nano`, etc).
This is sometimes useful to do small corrections, but keep in mind that you
need to restart the container to apply such changes.

For that reason, it is usually better to use the `pct` command to generate and
modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to running
containers. This feature is called "hot plug", and there is no need to restart
the container in that case.

In cases where a change cannot be hot plugged, it will be registered as a
pending change (shown in red color in the GUI).
Such pending changes will only be applied after rebooting the container.


File Format
~~~~~~~~~~~

The container configuration file uses a simple colon separated key/value
format. Each line has the following format:

-----
# this is a comment
OPTION: value
-----

Blank lines in those files are ignored, and lines starting with a `#` character
are treated as comments and are also ignored.

It is possible to add low-level, LXC style configuration directly, for example:

----
lxc.init_cmd: /sbin/my_own_init
@@ -880,10 +883,10 @@ The settings are passed directly to the LXC low-level tools.

Snapshots
~~~~~~~~~

When you create a snapshot, `pct` stores the configuration at snapshot time
into a separate snapshot section within the same configuration file. For
example, after creating a snapshot called ``testsnapshot'', your configuration
file will look like this:

.Container configuration with snapshot
----
@@ -899,10 +902,9 @@ snaptime: 1457170803
...
----

There are a few snapshot related properties like `parent` and `snaptime`. The
`parent` property is used to store the parent/child relationship between
snapshots. `snaptime` is the snapshot creation time stamp (Unix epoch).


[[pct_options]]
@@ -915,16 +917,16 @@ include::pct.conf.5-opts.adoc[]

Locks
-----

Container migrations, snapshots and backups (`vzdump`) set a lock to prevent
incompatible concurrent actions on the affected container. Sometimes you need
to remove such a lock manually (e.g., after a power failure).

----
# pct unlock <CTID>
----

CAUTION: Only do this if you are sure the action which set the lock is no
longer running.


ifdef::manvolnum[]
@@ -939,10 +941,3 @@ Configuration file for the container '<CTID>'.

include::pve-copyright.adoc[]

endif::manvolnum[]