rewrite and extend pct documentation

* rephrase some parts
* update old information
* add info about pending changes and other "new" features

Co-Authored-by: Aaron Lauterer <a.lauterer@proxmox.com>
Co-Authored-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
commit 14e978110c (parent f1447c8d20)

pct.adoc: 437 changed lines
@@ -28,41 +28,35 @@ ifdef::wiki[]
 :title: Linux Container
 endif::wiki[]

-Containers are a lightweight alternative to fully virtualized
-VMs. Instead of emulating a complete Operating System (OS), containers
-simply use the OS of the host they run on. This implies that all
-containers use the same kernel, and that they can access resources
-from the host directly.
+Containers are a lightweight alternative to fully virtualized machines (VMs).
+They use the kernel of the host system that they run on, instead of emulating a
+full operating system (OS). This means that containers can access resources on
+the host system directly.

-This is great because containers do not waste CPU power nor memory due
-to kernel emulation. Container run-time costs are close to zero and
-usually negligible. But there are also some drawbacks you need to
-consider:
+The runtime costs for containers is low, usually negligible. However, there
+are some drawbacks that need be considered:

-* You can only run Linux based OS inside containers, i.e. it is not
-possible to run FreeBSD or MS Windows inside.
+* Only Linux distributions can be run in containers. (It is not
+possible to run FreeBSD or MS Windows inside a container.)

-* For security reasons, access to host resources needs to be
-restricted. This is done with AppArmor, SecComp filters and other
-kernel features. Be prepared that some syscalls are not allowed
-inside containers.
+* For security reasons, access to host resources needs to be restricted. Containers
+run in their own separate namespaces. Additionally some syscalls are not
+allowed within containers.

 {pve} uses https://linuxcontainers.org/[LXC] as underlying container
-technology. We consider LXC as low-level library, which provides
-countless options. It would be too difficult to use those tools
-directly. Instead, we provide a small wrapper called `pct`, the
-"Proxmox Container Toolkit".
+technology. The ``Proxmox Container Toolkit'' (`pct`) simplifies the usage of LXC
+containers.

-The toolkit is tightly coupled with {pve}. That means that it is aware
-of the cluster setup, and it can use the same network and storage
-resources as fully virtualized VMs. You can even use the {pve}
-firewall, or manage containers using the HA framework.
+Containers are tightly integrated with {pve}. This means that they are aware of
+the cluster setup, and they can use the same network and storage resources as
+virtual machines. You can also use the {pve} firewall, or manage containers
+using the HA framework.

 Our primary goal is to offer an environment as one would get from a
 VM, but without the additional overhead. We call this "System
 Containers".

-NOTE: If you want to run micro-containers (with docker, rkt, ...), it
+NOTE: If you want to run micro-containers (with docker, rkt, etc.) it
 is best to run them inside a VM.

@@ -79,38 +73,43 @@ Technology Overview

 * lxcfs to provide containerized /proc file system

+* CGroups (control groups) for resource allocation
+
 * AppArmor/Seccomp to improve security

-* CRIU: for live migration (planned)
-
-* Runs on modern Linux kernels
+* Modern Linux kernels

 * Image based deployment (templates)

-* Use {pve} storage library
+* Uses {pve} storage library

-* Container setup from host (network, DNS, storage, ...)
+* Container setup from host (network, DNS, storage, etc.)

 Security Considerations
 -----------------------

-Containers use the same kernel as the host, so there is a big attack
-surface for malicious users. You should consider this fact if you
-provide containers to totally untrusted people. In general, fully
-virtualized VMs provide better isolation.
+Containers use the kernel of the host system. This creates a big attack
+surface for malicious users. This should be considered if containers
+are provided to untrustworthy people. In general, full
+virtual machines provide better isolation.

-The good news is that LXC uses many kernel security features like
-AppArmor, CGroups and PID and user namespaces, which makes containers
-usage quite secure.
+However, LXC uses many security features like AppArmor, CGroups and kernel
+namespaces to reduce the attack surface.
+
+AppArmor profiles are used to restrict access to possibly dangerous actions.
+Some system calls, i.e. `mount`, are prohibited from execution.
+
+To trace AppArmor activity, use:
+
+----
+# dmesg | grep apparmor
+----

 Guest Operating System Configuration
 ------------------------------------

-We normally try to detect the operating system type inside the
-container, and then modify some files inside the container to make
-them work as expected. Here is a short list of things we do at
-container startup:
+{pve} tries to detect the Linux distribution in the container, and modifies some
+files. Here is a short list of things done at container startup:

 set /etc/hostname:: to set the container name

@@ -145,7 +144,9 @@ file for it. For instance, if the file `/etc/.pve-ignore.hosts`
 exists then the `/etc/hosts` file will not be touched. This can be a
 simple empty file created via:

-# touch /etc/.pve-ignore.hosts
+----
+# touch /etc/.pve-ignore.hosts
+----

 Most modifications are OS dependent, so they differ between different
 distributions and versions. You can completely disable modifications
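As a minimal sketch of disabling all of these automatic adjustments for a single container, the `ostype` setting can be switched to `unmanaged` (CT ID 100 is a placeholder):

----
# pct set 100 -ostype unmanaged
----

With `unmanaged` set, {pve} should leave files such as `/etc/hostname` and `/etc/hosts` untouched inside that container.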
@@ -178,27 +179,29 @@ Container Images

 Container images, sometimes also referred to as ``templates'' or
 ``appliances'', are `tar` archives which contain everything to run a
-container. You can think of it as a tidy container backup. Like most
-modern container toolkits, `pct` uses those images when you create a
-new container, for example:
+container. `pct` uses them to create a new container, for example:

-pct create 999 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz
+----
+# pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
+----

-{pve} itself ships a set of basic templates for most common
-operating systems, and you can download them using the `pveam` (short
-for {pve} Appliance Manager) command line utility. You can also
-download https://www.turnkeylinux.org/[TurnKey Linux] containers using
-that tool (or the graphical user interface).
+{pve} itself provides a variety of basic templates for the most common
+Linux distributions. They can be downloaded using the GUI or the
+`pveam` (short for {pve} Appliance Manager) command line utility.
+Additionally, https://www.turnkeylinux.org/[TurnKey Linux]
+container templates are also available to download.

-Our image repositories contain a list of available images, and there
-is a cron job run each day to download that list. You can trigger that
-update manually with:
+The list of available templates is updated daily via cron. To trigger it manually:

-pveam update
+----
+# pveam update
+----

-After that you can view the list of available images using:
+To view the list of available images run:

-pveam available
+----
+# pveam available
+----

 You can restrict this large list by specifying the `section` you are
 interested in, for example basic `system` images:
@@ -206,15 +209,24 @@ interested in, for example basic `system` images:
 .List available system images
 ----
 # pveam available --section system
-system archlinux-base_2015-24-29-1_x86_64.tar.gz
-system centos-7-default_20160205_amd64.tar.xz
-system debian-6.0-standard_6.0-7_amd64.tar.gz
-system debian-7.0-standard_7.0-3_amd64.tar.gz
-system debian-8.0-standard_8.0-1_amd64.tar.gz
-system ubuntu-12.04-standard_12.04-1_amd64.tar.gz
-system ubuntu-14.04-standard_14.04-1_amd64.tar.gz
-system ubuntu-15.04-standard_15.04-1_amd64.tar.gz
-system ubuntu-15.10-standard_15.10-1_amd64.tar.gz
+system alpine-3.10-default_20190626_amd64.tar.xz
+system alpine-3.9-default_20190224_amd64.tar.xz
+system archlinux-base_20190924-1_amd64.tar.gz
+system centos-6-default_20191016_amd64.tar.xz
+system centos-7-default_20190926_amd64.tar.xz
+system centos-8-default_20191016_amd64.tar.xz
+system debian-10.0-standard_10.0-1_amd64.tar.gz
+system debian-8.0-standard_8.11-1_amd64.tar.gz
+system debian-9.0-standard_9.7-1_amd64.tar.gz
+system fedora-30-default_20190718_amd64.tar.xz
+system fedora-31-default_20191029_amd64.tar.xz
+system gentoo-current-default_20190718_amd64.tar.xz
+system opensuse-15.0-default_20180907_amd64.tar.xz
+system opensuse-15.1-default_20190719_amd64.tar.xz
+system ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz
+system ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz
+system ubuntu-19.04-standard_19.04-1_amd64.tar.gz
+system ubuntu-19.10-standard_19.10-1_amd64.tar.gz
 ----

 Before you can use such a template, you need to download them into one
@@ -222,54 +234,49 @@ of your storages. You can simply use storage `local` for that
 purpose. For clustered installations, it is preferred to use a shared
 storage so that all nodes can access those images.

-pveam download local debian-8.0-standard_8.0-1_amd64.tar.gz
+----
+# pveam download local debian-10.0-standard_10.0-1_amd64.tar.gz
+----

 You are now ready to create containers using that image, and you can
 list all downloaded images on storage `local` with:

 ----
 # pveam list local
-local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz 190.20MB
+local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz 219.95MB
 ----

 The above command shows you the full {pve} volume identifiers. They include
 the storage name, and most other {pve} commands can use them. For
 example you can delete that image later with:

-pveam remove local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz
+----
+# pveam remove local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
+----

 [[pct_container_storage]]
 Container Storage
 -----------------

-Traditional containers use a very simple storage model, only allowing
-a single mount point, the root file system. This was further
-restricted to specific file system types like `ext4` and `nfs`.
-Additional mounts are often done by user provided scripts. This turned
-out to be complex and error prone, so we try to avoid that now.
+The {pve} LXC container storage model is more flexible than traditional
+container storage models. A container can have multiple mount points. This makes
+it possible to use the best suited storage for each application.

-Our new LXC based container model is more flexible regarding
-storage. First, you can have more than a single mount point. This
-allows you to choose a suitable storage for each application. For
-example, you can use a relatively slow (and thus cheap) storage for
-the container root file system. Then you can use a second mount point
-to mount a very fast, distributed storage for your database
-application. See section <<pct_mount_points,Mount Points>> for further
-details.
+For example the root file system of the container can be on slow and cheap
+storage while the database can be on fast and distributed storage via a second
+mount point. See section <<pct_mount_points, Mount Points>> for further details.

-The second big improvement is that you can use any storage type
-supported by the {pve} storage library. That means that you can store
-your containers on local `lvmthin` or `zfs`, shared `iSCSI` storage,
-or even on distributed storage systems like `ceph`. It also enables us
-to use advanced storage features like snapshots and clones. `vzdump`
-can also use the snapshot feature to provide consistent container
-backups.
+Any storage type supported by the {pve} storage library can be used. This means
+that containers can be stored on local (for example `lvm`, `zfs` or directory),
+shared external (like `iSCSI`, `NFS`) or even distributed storage systems like
+Ceph. Advanced storage features like snapshots or clones can be used if the
+underlying storage supports them. The `vzdump` backup tool can use snapshots to
+provide consistent container backups.

-Last but not least, you can also mount local devices directly, or
-mount local directories using bind mounts. That way you can access
-local storage inside containers with zero overhead. Such bind mounts
-also provide an easy way to share data between different containers.
+Furthermore, local devices or local directories can be mounted directly using
+'bind mounts'. This gives access to local resources inside a container with
+practically zero overhead. Bind mounts can be used as an easy way to share data
+between containers.


 FUSE Mounts
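A sketch of the multi-mount-point model described above: a container created with its root file system on one storage and a second mount point for a database on another. The storage names `local-lvm` and `tank`, the CT ID and the sizes (in GiB) are placeholders:

----
# pct create 101 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz \
    -rootfs local-lvm:8 \
    -mp0 tank:32,mp=/var/lib/mysql
----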
@@ -289,20 +296,21 @@ Using Quotas Inside Containers
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Quotas allow to set limits inside a container for the amount of disk
-space that each user can use. This only works on ext4 image based
-storage types and currently does not work with unprivileged
-containers.
+space that each user can use.
+
+NOTE: This only works on ext4 image based storage types and currently only works
+with privileged containers.

 Activating the `quota` option causes the following mount options to be
 used for a mount point:
 `usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`

-This allows quotas to be used like you would on any other system. You
+This allows quotas to be used like on any other system. You
 can initialize the `/aquota.user` and `/aquota.group` files by running

 ----
-quotacheck -cmug /
-quotaon /
+# quotacheck -cmug /
+# quotaon /
 ----

 and edit the quotas via the `edquota` command. Refer to the documentation
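Building on the `quotacheck`/`quotaon` steps, a per-user limit could also be set non-interactively with `setquota` instead of `edquota`. The user name and the block counts (1 KiB blocks, soft then hard) are only examples:

----
# setquota -u www-data 1000000 1100000 0 0 /
----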
@@ -315,30 +323,42 @@ the mount point's path instead of just `/`.
 Using ACLs Inside Containers
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-The standard Posix **A**ccess **C**ontrol **L**ists are also available inside containers.
-ACLs allow you to set more detailed file ownership than the traditional user/
-group/others model.
+The standard Posix **A**ccess **C**ontrol **L**ists are also available inside
+containers. ACLs allow you to set more detailed file ownership than the
+traditional user/group/others model.


-Backup of Containers mount points
+Backup of Container mount points
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-By default additional mount points besides the Root Disk mount point are not
-included in backups. You can reverse this default behavior by setting the
-*Backup* option on a mount point.
-// see PVE::VZDump::LXC::prepare()
+To include a mount point in backups, enable the `backup` option for it in the
+container configuration. For an existing mount point `mp0`
+
+----
+mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G
+----
+
+add `backup=1` to enable it.
+
+----
+mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
+----
+
+NOTE: When creating a new mount point in the GUI, this option is enabled by
+default.
+
+To disable backups for a mount point, add `backup=0` in the way described above,
+or uncheck the *Backup* checkbox on the GUI.

 Replication of Containers mount points
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-By default additional mount points are replicated when the Root Disk
-is replicated. If you want the {pve} storage replication mechanism to skip a
-mount point when starting a replication job, you can set the
-*Skip replication* option on that mount point. +
-As of {pve} 5.0, replication requires a storage of type `zfspool`, so adding a
-mount point to a different type of storage when the container has replication
-configured requires to *Skip replication* for that mount point.
+By default, additional mount points are replicated when the Root Disk is
+replicated. If you want the {pve} storage replication mechanism to skip a mount
+point, you can set the *Skip replication* option for that mount point. +
+As of {pve} 5.0, replication requires a storage of type `zfspool`. Adding a
+mount point to a different type of storage when the container has replication
+configured requires to have *Skip replication* enabled for that mount point.


 [[pct_settings]]
 Container Settings
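The same mount point flags can also be toggled from the command line with `pct set`, which rewrites the whole mount point value. The volume and path below are taken from the example above; `backup=1` enables backups and, assuming the `replicate` flag for mount points, `replicate=0` excludes the volume from replication:

----
# pct set 100 -mp0 guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
# pct set 100 -mp0 guests:subvol-100-disk-1,mp=/root/files,size=8G,replicate=0
----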
@@ -361,33 +381,47 @@ General settings of a container include
 * *Unprivileged container*: this option allows to choose at creation time
 if you want to create a privileged or unprivileged container.

+Unprivileged Containers
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Unprivileged containers use a new kernel feature called user namespaces. The
+root UID 0 inside the container is mapped to an unprivileged user outside the
+container. This means that most security issues (container escape, resource
+abuse, etc.) in these containers will affect a random unprivileged user, and
+would be a generic kernel security bug rather than an LXC issue. The LXC team
+thinks unprivileged containers are safe by design.
+
+This is the default option when creating a new container.
+
+NOTE: If the container uses systemd as an init system, please be
+aware the systemd version running inside the container should be equal to
+or greater than 220.
+
+
 Privileged Containers
 ^^^^^^^^^^^^^^^^^^^^^

-Security is done by dropping capabilities, using mandatory access
-control (AppArmor), SecComp filters and namespaces. The LXC team
-considers this kind of container as unsafe, and they will not consider
-new container escape exploits to be security issues worthy of a CVE
-and quick fix. So you should use this kind of containers only inside a
-trusted environment, or when no untrusted task is running as root in
-the container.
+Security in containers is achieved by using mandatory access control
+(AppArmor), SecComp filters and namespaces. The LXC team considers this kind of
+container as unsafe, and they will not consider new container escape exploits
+to be security issues worthy of a CVE and quick fix. That's why privileged
+containers should only be used in trusted environments.
+
+WARNING: Although it is not recommended, AppArmor can be disabled for a
+container. This brings security risks with it. Some syscalls can lead to
+privilege escalation when executed within a container if the system is
+misconfigured or if a LXC or Linux Kernel vulnerability exists.
+
+To disable AppArmor for a container, add the following line to the container
+configuration file located at `/etc/pve/lxc/CTID.conf`:
+
+----
+lxc.apparmor_profile = unconfined
+----
+
+Please note that this is not recommended for production use.


-Unprivileged Containers
-^^^^^^^^^^^^^^^^^^^^^^^
-
-This kind of containers use a new kernel feature called user
-namespaces. The root UID 0 inside the container is mapped to an
-unprivileged user outside the container. This means that most security
-issues (container escape, resource abuse, ...) in those containers
-will affect a random unprivileged user, and so would be a generic
-kernel security bug rather than an LXC issue. The LXC team thinks
-unprivileged containers are safe by design.
-
-NOTE: If the container uses systemd as an init system, please be
-aware the systemd version running inside the container should be equal
-or greater than 220.
-
 [[pct_cpu]]
 CPU
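For illustration, explicitly creating an unprivileged container from the command line might look like the following sketch; the CT ID, template and the `local-lvm` storage are placeholders:

----
# pct create 200 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz \
    -unprivileged 1 -storage local-lvm
----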
@@ -395,11 +429,11 @@ CPU

 [thumbnail="screenshot/gui-create-ct-cpu.png"]

-You can restrict the number of visible CPUs inside the container using
-the `cores` option. This is implemented using the Linux 'cpuset'
-cgroup (**c**ontrol *group*). A special task inside `pvestatd` tries
-to distribute running containers among available CPUs. You can view
-the assigned CPUs using the following command:
+You can restrict the number of visible CPUs inside the container using the
+`cores` option. This is implemented using the Linux 'cpuset' cgroup
+(**c**ontrol *group*). A special task inside `pvestatd` tries to distribute
+running containers among available CPUs. To view the assigned CPUs run
+the following command:

 ----
 # pct cpusets
@@ -410,10 +444,10 @@ the assigned CPUs using the following command:
 ---------------------
 ----

-Containers use the host kernel directly, so all task inside a
-container are handled by the host CPU scheduler. {pve} uses the Linux
-'CFS' (**C**ompletely **F**air **S**cheduler) scheduler by default,
-which has additional bandwidth control options.
+Containers use the host kernel directly. All tasks inside a container are
+handled by the host CPU scheduler. {pve} uses the Linux 'CFS' (**C**ompletely
+**F**air **S**cheduler) scheduler by default, which has additional bandwidth
+control options.

 [horizontal]

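As a sketch of how these CPU options can be combined, the visible core count and the CFS bandwidth settings can be changed in one `pct set` call; the values below are arbitrary examples:

----
# pct set 100 -cores 2 -cpulimit 1.5 -cpuunits 2048
----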
@@ -459,14 +493,14 @@ Mount Points

 [thumbnail="screenshot/gui-create-ct-root-disk.png"]

-The root mount point is configured with the `rootfs` property, and you can
-configure up to 10 additional mount points. The corresponding options
-are called `mp0` to `mp9`, and they can contain the following setting:
+The root mount point is configured with the `rootfs` property. You can
+configure up to 256 additional mount points. The corresponding options
+are called `mp0` to `mp255`. They can contain the following settings:

 include::pct-mountpoint-opts.adoc[]

-Currently there are basically three types of mount points: storage backed
-mount points, bind mounts and device mounts.
+Currently there are three types of mount points: storage backed
+mount points, bind mounts, and device mounts.

 .Typical container `rootfs` configuration
 ----
@@ -558,26 +592,27 @@ include::pct-network-opts.adoc[]
 Automatic Start and Shutdown of Containers
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-After creating your containers, you probably want them to start automatically
-when the host system boots. For this you need to select the option 'Start at
-boot' from the 'Options' Tab of your container in the web interface, or set it with
-the following command:
+To automatically start a container when the host system boots, select the
+option 'Start at boot' in the 'Options' panel of the container in the web
+interface or run the following command:

-pct set <ctid> -onboot 1
+----
+# pct set CTID -onboot 1
+----

 .Start and Shutdown Order
 // use the screenshot from qemu - its the same
 [thumbnail="screenshot/gui-qemu-edit-start-order.png"]

 If you want to fine tune the boot order of your containers, you can use the following
-parameters :
+parameters:

-* *Start/Shutdown order*: Defines the start order priority. E.g. set it to 1 if
+* *Start/Shutdown order*: Defines the start order priority. For example, set it to 1 if
 you want the CT to be the first to be started. (We use the reverse startup
 order for shutdown, so a container with a start order of 1 would be the last to
 be shut down)
 * *Startup delay*: Defines the interval between this container start and subsequent
-containers starts . E.g. set it to 240 if you want to wait 240 seconds before starting
+containers starts. For example, set it to 240 if you want to wait 240 seconds before starting
 other containers.
 * *Shutdown timeout*: Defines the duration in seconds {pve} should wait
 for the container to be offline after issuing a shutdown command.
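The order, delay and timeout described above are stored together in the `startup` property and could be set in one go, for example (all values are placeholders):

----
# pct set 100 -startup order=1,up=30,down=60
----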
@@ -595,7 +630,9 @@ Hookscripts

 You can add a hook script to CTs with the config property `hookscript`.

-pct set 100 -hookscript local:snippets/hookscript.pl
+----
+# pct set 100 -hookscript local:snippets/hookscript.pl
+----

 It will be called during various phases of the guests lifetime.
 For an example and documentation see the example script under
@@ -672,11 +709,11 @@ individually
 Managing Containers with `pct`
 ------------------------------

-`pct` is the tool to manage Linux Containers on {pve}. You can create
-and destroy containers, and control execution (start, stop, migrate,
-...). You can use pct to set parameters in the associated config file,
-like network configuration or memory limits.
+The "Proxmox Container Toolkit" (`pct`) is the command line tool to manage {pve}
+containers. It enables you to create or destroy containers, as well as control the
+container execution (start, stop, reboot, migrate, etc.). It can be used to set
+parameters in the config file of a container, for example the network
+configuration or memory limits.

 CLI Usage Examples
 ~~~~~~~~~~~~~~~~~~
@@ -684,32 +721,46 @@ CLI Usage Examples
 Create a container based on a Debian template (provided you have
 already downloaded the template via the web interface)

-pct create 100 /var/lib/vz/template/cache/debian-8.0-standard_8.0-1_amd64.tar.gz
+----
+# pct create 100 /var/lib/vz/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz
+----

 Start container 100

-pct start 100
+----
+# pct start 100
+----

 Start a login session via getty

-pct console 100
+----
+# pct console 100
+----

 Enter the LXC namespace and run a shell as root user

-pct enter 100
+----
+# pct enter 100
+----

 Display the configuration

-pct config 100
+----
+# pct config 100
+----

 Add a network interface called `eth0`, bridged to the host bridge `vmbr0`,
 set the address and gateway, while it's running

-pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
+----
+# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
+----

 Reduce the memory of the container to 512MB

-pct set 100 -memory 512
+----
+# pct set 100 -memory 512
+----


 Obtaining Debugging Logs
@@ -719,9 +770,13 @@ In case `pct start` is unable to start a specific container, it might be
 helpful to collect debugging output by running `lxc-start` (replace `ID` with
 the container's ID):

-lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log
+----
+# lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log
+----

-This command will attempt to start the container in foreground mode, to stop the container run `pct shutdown ID` or `pct stop ID` in a second terminal.
+This command will attempt to start the container in foreground mode,
+to stop the container run `pct shutdown ID` or `pct stop ID` in a
+second terminal.

 The collected debug log is written to `/tmp/lxc-ID.log`.

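Since containers are started through a systemd unit on the host, the host journal can be another source of hints. This sketch assumes the unit is named `pve-container@<CTID>` and uses CT 101 as a placeholder:

----
# journalctl -b -u pve-container@101
----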
@@ -735,10 +790,12 @@ Migration

 If you have a cluster, you can migrate your Containers with

-pct migrate <vmid> <target>
+----
+# pct migrate <ctid> <target>
+----

 This works as long as your Container is offline. If it has local volumes or
-mountpoints defined, the migration will copy the content over the network to
+mount points defined, the migration will copy the content over the network to
 the target host if the same storage is defined there.

 If you want to migrate online Containers, the only way is to use
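A restart-mode migration of a running container could look like the following sketch, assuming the `--restart` and `--timeout` options of `pct migrate` and `node2` as a placeholder target node:

----
# pct migrate 101 node2 --restart --timeout 120
----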
@@ -773,8 +830,8 @@ net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
 rootfs: local:107/vm-107-disk-1.raw,size=7G
 ----

-Those configuration files are simple text files, and you can edit them
-using a normal text editor (`vi`, `nano`, ...). This is sometimes
+The configuration files are simple text files. You can edit them
+using a normal text editor (`vi`, `nano`, etc). This is sometimes
 useful to do small corrections, but keep in mind that you need to
 restart the container to apply such changes.

@@ -784,12 +841,16 @@ Our toolkit is smart enough to instantaneously apply most changes to
 running containers. This feature is called "hot plug", and there is no
 need to restart the container in that case.

+In cases where a change cannot be hot plugged, it will be registered
+as a pending change (shown in red color in the GUI). They will only
+be applied after rebooting the container.
+

 File Format
 ~~~~~~~~~~~

-Container configuration files use a simple colon separated key/value
-format. Each line has the following format:
+The container configuration file uses a simple colon separated
+key/value format. Each line has the following format:

 -----
 # this is a comment
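If the installed `pct` version provides it, pending changes registered this way can be listed before the next reboot; CT ID 100 is a placeholder:

----
# pct pending 100
----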
@@ -802,13 +863,17 @@ character are treated as comments and are also ignored.
 It is possible to add low-level, LXC style configuration directly, for
 example:

-lxc.init_cmd: /sbin/my_own_init
+----
+lxc.init_cmd: /sbin/my_own_init
+----

 or

-lxc.init_cmd = /sbin/my_own_init
+----
+lxc.init_cmd = /sbin/my_own_init
+----

-Those settings are directly passed to the LXC low-level tools.
+The settings are passed directly to the LXC low-level tools.


 [[pct_snapshots]]
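As a quick sketch of the snapshot workflow that the `[[pct_snapshots]]` section refers to, with CT 100 and `mysnap` as placeholder values:

----
# pct snapshot 100 mysnap
# pct listsnapshot 100
# pct rollback 100 mysnap
----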
@@ -854,9 +919,11 @@ Container migrations, snapshots and backups (`vzdump`) set a lock to
 prevent incompatible concurrent actions on the affected container. Sometimes
 you need to remove such a lock manually (e.g., after a power failure).

-pct unlock <CTID>
+----
+# pct unlock <CTID>
+----

-CAUTION: Only do that if you are sure the action which set the lock is
+CAUTION: Only do this if you are sure the action which set the lock is
 no longer running.
