mirror of https://git.proxmox.com/git/pve-docs, synced 2025-08-10 09:29:51 +00:00
[PATCH v2 docs] fix typos in various adoc files

Checked for common misspellings. Some of the changes (like favourite vs.
favorite, or virtualization vs. virtualisation) are because of US vs. UK
English.

Reviewed-by: Dylan Whyte <d.whyte@proxmox.com>
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
parent 169a0fc1d8
commit 3a433e9bb0
@@ -313,7 +313,7 @@ root@proxmox:~# pvenode acme plugin config example_plugin
 └────────┴──────────────────────────────────────────┘
 ----
 
-At last you can configure the domain you want to get certitficates for and
+At last you can configure the domain you want to get certificates for and
 place the certificate order for it:
 
 ----

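For context, the step this hunk touches usually pairs the domain configuration with the order command from the same chapter (a sketch; the domain and plugin names are placeholders carried over from the hunk):

----
root@proxmox:~# pvenode config set --acmedomain0 example.com,plugin=example_plugin
root@proxmox:~# pvenode acme cert order
----
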
@@ -249,7 +249,7 @@ Bootloader
 
 {pve} uses xref:sysboot_proxmox_boot_tool[`proxmox-boot-tool`] to manage the
 bootloader configuration.
-See the chapter on xref:sysboot[{pve} host bootladers] for details.
+See the chapter on xref:sysboot[{pve} host bootloaders] for details.
 
 
 ZFS Administration

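As a quick reference for the tool this hunk names, two common `proxmox-boot-tool` invocations (a sketch; see the sysboot chapter for the full command set):

----
# show which ESPs are configured and which bootloader they use
proxmox-boot-tool status
# re-sync kernels and bootloader config to all configured ESPs
proxmox-boot-tool refresh
----
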
@@ -439,7 +439,7 @@ and you can install it using `apt-get`:
 ----
 
 To activate the daemon it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
-favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:
+favorite editor, and uncomment the `ZED_EMAIL_ADDR` setting:
 
 --------
 ZED_EMAIL_ADDR="root"

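Putting the surrounding steps together (a sketch; `zfs-zed` is the Debian package the hunk's context installs via `apt-get`):

----
apt-get install zfs-zed
# then edit /etc/zfs/zed.d/zed.rc and uncomment:
ZED_EMAIL_ADDR="root"
----
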
@@ -515,7 +515,7 @@ to an external Storage.
 
 We strongly recommend to use enough memory, so that you normally do not
 run into low memory situations. Should you need or want to add swap, it is
-preferred to create a partition on a physical disk and use it as swapdevice.
+preferred to create a partition on a physical disk and use it as a swap device.
 You can leave some space free for this purpose in the advanced options of the
 installer. Additionally, you can lower the
 ``swappiness'' value. A good value for servers is 10:

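The ``swappiness'' value the hunk ends on is a standard Linux sysctl; a minimal sketch of applying the suggested server value of 10:

----
# apply at runtime
sysctl -w vm.swappiness=10
# persist across reboots
echo 'vm.swappiness = 10' >> /etc/sysctl.conf
----
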
@@ -475,7 +475,7 @@ Logging of firewall rules
 -------------------------
 
 By default, all logging of traffic filtered by the firewall rules is disabled.
-To enable logging, the `loglevel` for incommig and/or outgoing traffic has to be
+To enable logging, the `loglevel` for incoming and/or outgoing traffic has to be
 set in *Firewall* -> *Options*. This can be done for the host as well as for the
 VM/CT firewall individually. By this, logging of {PVE}'s standard firewall rules
 is enabled and the output can be observed in *Firewall* -> *Log*.

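The GUI options described here correspond to entries in the firewall configuration files; a hedged sketch for the host firewall (`/etc/pve/nodes/<nodename>/host.fw`, option names assumed from the pve-firewall configuration format):

----
[OPTIONS]
log_level_in: info
log_level_out: info
----
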
@@ -92,7 +92,7 @@ machines and containers, you must also account for having enough memory
 available for Ceph to provide excellent and stable performance.
 
 As a rule of thumb, for roughly **1 TiB of data, 1 GiB of memory** will be used
-by an OSD. Especially during recovery, rebalancing or backfilling.
+by an OSD. Especially during recovery, re-balancing or backfilling.
 
 The daemon itself will use additional memory. The Bluestore backend of the
 daemon requires by default **3-5 GiB of memory** (adjustable). In contrast, the

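A quick worked example of the rule of thumb in this hunk (hypothetical node; the per-TiB and per-daemon figures come straight from the text above):

----
# 4 OSDs with 4 TiB of data each:
#   data rule of thumb:   4 x 4 TiB x 1 GiB/TiB = ~16 GiB
#   Bluestore daemons:    4 x 3-5 GiB           = ~12-20 GiB
#   memory to budget for Ceph on this node      = ~28-36 GiB
----
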
@@ -121,7 +121,7 @@ might take long. It is recommended that you use SSDs instead of HDDs in small
 setups to reduce recovery time, minimizing the likelihood of a subsequent
 failure event during recovery.
 
-In general SSDs will provide more IOPs than spinning disks. With this in mind,
+In general, SSDs will provide more IOPS than spinning disks. With this in mind,
 in addition to the higher cost, it may make sense to implement a
 xref:pve_ceph_device_classes[class based] separation of pools. Another way to
 speed up OSDs is to use a faster disk as a journal or

@@ -623,7 +623,7 @@ NOTE: Further information can be found in the Ceph documentation, under the
 section CRUSH map footnote:[CRUSH map {cephdocs-url}/rados/operations/crush-map/].
 
 This map can be altered to reflect different replication hierarchies. The object
-replicas can be separated (eg. failure domains), while maintaining the desired
+replicas can be separated (e.g., failure domains), while maintaining the desired
 distribution.
 
 A common configuration is to use different classes of disks for different Ceph

@@ -672,7 +672,7 @@ ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
 |<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
 |<root>|which crush root it should belong to (default ceph root "default")
 |<failure-domain>|at which failure-domain the objects should be distributed (usually host)
-|<class>|what type of OSD backing store to use (eg. nvme, ssd, hdd)
+|<class>|what type of OSD backing store to use (e.g., nvme, ssd, hdd)
 |===
 
 Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.

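Filling in the placeholders from the table above (a sketch; the rule and pool names are hypothetical):

----
# replicated rule restricted to SSD-class OSDs, with host as failure domain
ceph osd crush rule create-replicated ssd-only default host ssd
# tell an existing pool to use it
ceph osd pool set mypool crush_rule ssd-only
----
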
@@ -112,7 +112,7 @@ You can define the cipher list in `/etc/default/pveproxy`, for example
 Above is the default. See the ciphers(1) man page from the openssl
 package for a list of all available options.
 
-Additionally you can define that the client choses the used cipher in
+Additionally, you can set the client to choose the cipher used in
 `/etc/default/pveproxy` (default is the first cipher in the list available to
 both client and `pveproxy`):
 

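A hedged sketch of what that client-side cipher choice looks like in `/etc/default/pveproxy` (the `HONOR_CIPHER_ORDER` key is assumed from the pveproxy documentation; `0` lets the client pick):

----
HONOR_CIPHER_ORDER="0"
----
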
pveum.adoc (13 changed lines)

@@ -171,7 +171,7 @@ description: This is the first test user.
 The 'Base Domain Name' would be `ou=People,dc=ldap-test,dc=com` and the user
 attribute would be `uid`.
 +
-If {pve} needs to authenticate (bind) to the ldap server before being
+If {pve} needs to authenticate (bind) to the LDAP server before being
 able to query and authenticate users, a bind domain name can be
 configured via the `bind_dn` property in `/etc/pve/domains.cfg`. Its
 password then has to be stored in `/etc/pve/priv/ldap/<realmname>.pw`

@@ -181,14 +181,13 @@ single line containing the raw password.
 To verify certificates, you need to to set `capath`. You can set it either
 directly to the CA certificate of your LDAP server, or to the system path
 containing all trusted CA certificates (`/etc/ssl/certs`).
-Additionally, you need to set the `verify` option, which can also be doen over
+Additionally, you need to set the `verify` option, which can also be done over
 the web interface.
 
 Microsoft Active Directory::
 
-A server and authentication domain need to be specified. Like with
-ldap an optional fallback server, optional port, and SSL
-encryption can be configured.
+A server and authentication domain need to be specified. Like with LDAP, an
+optional fallback server, port, and SSL encryption can be configured.
 
 [[pveum_ldap_sync]]
 Syncing LDAP-based realms

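A minimal sketch of a bind-enabled LDAP realm in `/etc/pve/domains.cfg`, using the properties named in these hunks (realm name and DNs are hypothetical; per the hunk above, the bind password would live in `/etc/pve/priv/ldap/example-realm.pw` as a single line):

----
ldap: example-realm
    server1 ldap.example.com
    base_dn ou=People,dc=ldap-test,dc=com
    user_attr uid
    bind_dn cn=pve-bind,dc=ldap-test,dc=com
----
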
@@ -409,7 +408,7 @@ of predefined roles which satisfies most needs.
 * `PVETemplateUser`: view and clone templates
 * `PVEUserAdmin`: user administration
 * `PVEVMAdmin`: fully administer VMs
-* `PVEVMUser`: view, backup, config CDROM, VM console, VM power management
+* `PVEVMUser`: view, backup, config CD-ROM, VM console, VM power management
 
 You can see the whole set of predefined roles on the GUI.
 

@@ -464,7 +463,7 @@ Virtual machine related privileges::
 * `VM.Audit`: view VM config
 * `VM.Clone`: clone/copy a VM
 * `VM.Config.Disk`: add/modify/delete Disks
-* `VM.Config.CDROM`: eject/change CDROM
+* `VM.Config.CDROM`: eject/change CD-ROM
 * `VM.Config.CPU`: modify CPU settings
 * `VM.Config.Memory`: modify Memory settings
 * `VM.Config.Network`: add/modify/delete Network devices

@@ -5,7 +5,7 @@ ifdef::wiki[]
 :pve-toplevel:
 endif::wiki[]
 
-http://cloudinit.readthedocs.io[Cloud-Init] is the defacto
+http://cloudinit.readthedocs.io[Cloud-Init] is the de facto
 multi-distribution package that handles early initialization of a
 virtual machine instance. Using Cloud-Init, configuration of network
 devices and ssh keys on the hypervisor side is possible. When the VM

@@ -32,7 +32,7 @@ needs to store an encrypted version of that password inside the
 Cloud-Init data.
 
 {pve} generates an ISO image to pass the Cloud-Init data to the VM. For
-that purpose all Cloud-Init VMs need to have an assigned CDROM drive.
+that purpose, all Cloud-Init VMs need to have an assigned CD-ROM drive.
 Also many Cloud-Init images assume to have a serial console, so it is
 recommended to add a serial console and use it as display for those VMs.
 

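The serial-console recommendation at the end of this hunk maps to a one-line `qm` call (a sketch; VMID 9000 is carried over from the neighbouring hunks):

----
qm set 9000 --serial0 socket --vga serial0
----
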
@@ -70,11 +70,11 @@ qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-1
 NOTE: Ubuntu Cloud-Init images require the `virtio-scsi-pci`
 controller type for SCSI drives.
 
-.Add Cloud-Init CDROM drive
+.Add Cloud-Init CD-ROM drive
 
 [thumbnail="screenshot/gui-cloudinit-hardware.png"]
 
-The next step is to configure a CDROM drive which will be used to pass
+The next step is to configure a CD-ROM drive, which will be used to pass
 the Cloud-Init data to the VM.
 
 ----

@@ -84,7 +84,7 @@ qm set 9000 --ide2 local-lvm:cloudinit
 To be able to boot directly from the Cloud-Init image, set the
 `bootdisk` parameter to `scsi0`, and restrict BIOS to boot from disk
 only. This will speed up booting, because VM BIOS skips the testing for
-a bootable CDROM.
+a bootable CD-ROM.
 
 ----
 qm set 9000 --boot c --bootdisk scsi0

@@ -309,7 +309,7 @@ Mediated Devices (vGPU, GVT-g)
 
 Mediated devices are another method to reuse features and performance from
 physical hardware for virtualized hardware. These are found most common in
-virtualized GPU setups such as Intels GVT-g and Nvidias vGPUs used in their
+virtualized GPU setups such as Intel's GVT-g and NVIDIA's vGPUs used in their
 GRID technology.
 
 With this, a physical Card is able to create virtual cards, similar to SR-IOV.

@@ -324,7 +324,7 @@ In general your card's driver must support that feature, otherwise it will
 not work. So please refer to your vendor for compatible drivers and how to
 configure them.
 
-Intels drivers for GVT-g are integrated in the Kernel and should work
+Intel's drivers for GVT-g are integrated in the Kernel and should work
 with 5th, 6th and 7th generation Intel Core Processors, as well as E3 v4, E3
 v5 and E3 v6 Xeon Processors.
 

qm.adoc (24 changed lines)

@@ -36,9 +36,9 @@ like partitions, files, network cards which are then passed to an
 emulated computer which sees them as if they were real devices.
 
 A guest operating system running in the emulated computer accesses these
-devices, and runs as it were running on real hardware. For instance you can pass
-an iso image as a parameter to Qemu, and the OS running in the emulated computer
-will see a real CDROM inserted in a CD drive.
+devices, and runs as if it were running on real hardware. For instance, you can pass
+an ISO image as a parameter to Qemu, and the OS running in the emulated computer
+will see a real CD-ROM inserted into a CD drive.
 
 Qemu can emulate a great variety of hardware from ARM to Sparc, but {pve} is
 only concerned with 32 and 64 bits PC clone emulation, since it represents the

@@ -49,8 +49,8 @@ architecture.
 
 NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
 It means that Qemu is running with the support of the virtualization processor
-extensions, via the Linux kvm module. In the context of {pve} _Qemu_ and
-_KVM_ can be used interchangeably as Qemu in {pve} will always try to load the kvm
+extensions, via the Linux KVM module. In the context of {pve} _Qemu_ and
+_KVM_ can be used interchangeably, as Qemu in {pve} will always try to load the KVM
 module.
 
 Qemu inside {pve} runs as a root process, since this is required to access block

@@ -61,7 +61,7 @@ Emulated devices and paravirtualized devices
 --------------------------------------------
 
 The PC hardware emulated by Qemu includes a mainboard, network controllers,
-scsi, ide and sata controllers, serial ports (the complete list can be seen in
+SCSI, IDE and SATA controllers, serial ports (the complete list can be seen in
 the `kvm(1)` man page) all of them emulated in software. All these devices
 are the exact software equivalent of existing hardware devices, and if the OS
 running in the guest has the proper drivers it will use the devices as if it

@@ -138,7 +138,7 @@ snapshots) more intelligently.
 
 {pve} allows to boot VMs with different firmware and machine types, namely
 xref:qm_bios_and_uefi[SeaBIOS and OVMF]. In most cases you want to switch from
-the default SeabBIOS to OVMF only if you plan to use
+the default SeaBIOS to OVMF only if you plan to use
 xref:qm_pci_passthrough[PCIe pass through]. A VMs 'Machine Type' defines the
 hardware layout of the VM's virtual motherboard. You can choose between the
 default https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the

@@ -271,10 +271,10 @@ However some software licenses depend on the number of sockets a machine has,
 in that case it makes sense to set the number of sockets to what the license
 allows you.
 
-Increasing the number of virtual cpus (cores and sockets) will usually provide a
+Increasing the number of virtual CPUs (cores and sockets) will usually provide a
 performance improvement though that is heavily dependent on the use of the VM.
-Multithreaded applications will of course benefit from a large number of
-virtual cpus, as for each virtual cpu you add, Qemu will create a new thread of
+Multi-threaded applications will of course benefit from a large number of
+virtual CPUs, as for each virtual cpu you add, Qemu will create a new thread of
 execution on the host system. If you're not sure about the workload of your VM,
 it is usually a safe bet to set the number of *Total cores* to 2.
 

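For reference, the cores and sockets discussed here are plain `qm` options (a sketch; the VMID is a placeholder):

----
# 1 socket with 2 cores, the safe default suggested above
qm set <vmid> --sockets 1 --cores 2
----
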
@@ -282,7 +282,7 @@ NOTE: It is perfectly safe if the _overall_ number of cores of all your VMs
 is greater than the number of cores on the server (e.g., 4 VMs with each 4
 cores on a machine with only 8 cores). In that case the host system will
 balance the Qemu execution threads between your server cores, just like if you
-were running a standard multithreaded application. However, {pve} will prevent
+were running a standard multi-threaded application. However, {pve} will prevent
 you from starting VMs with more virtual CPU cores than physically available, as
 this will only bring the performance down due to the cost of context switches.
 

@@ -492,7 +492,7 @@ vCPU hot-plug
 ^^^^^^^^^^^^^
 
 Modern operating systems introduced the capability to hot-plug and, to a
-certain extent, hot-unplug CPUs in a running systems. Virtualisation allows us
+certain extent, hot-unplug CPUs in a running system. Virtualization allows us
 to avoid a lot of the (physical) problems real hardware can cause in such
 scenarios.
 Still, this is a rather new and complicated feature, so its use should be

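The hot-plug capability described here is driven via the `vcpus` option (a hedged sketch; it assumes CPU hotplug is enabled for the VM and supported by the guest OS):

----
# VM configured with 4 cores; start with 2 plugged in, hot-plug up to 4 later
qm set <vmid> --cores 4 --vcpus 2
----
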
@@ -76,7 +76,7 @@ perl packages installed on your system. For Debian/Ubuntu:
 Sending the Translation
 ~~~~~~~~~~~~~~~~~~~~~~~
 You can send the finished translation (`.po` file) to the Proxmox team at the address
-office(at)proxmox.com, along with a signed contributor licence agreement.
+office(at)proxmox.com, along with a signed contributor license agreement.
 Alternatively, if you have some developer experience, you can send it as a
 patch to the {pve} development mailing list. See
 {webwiki-url}Developer_Documentation[Developer Documentation].

@@ -449,7 +449,7 @@ but will match relative to any subdirectory. For example:
 
 # vzdump 777 --exclude-path bar
 
-excludes any file or directoy named `/bar`, `/var/bar`, `/var/foo/bar`, and
+excludes any file or directory named `/bar`, `/var/bar`, `/var/foo/bar`, and
 so on, but not `/bar2`.
 
 Configuration files are also stored inside the backup archive

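Expanding the `--exclude-path` example in this hunk (a sketch; the extra path is illustrative, and the option can be given multiple times):

----
# skip any path named bar (relative match) and everything under /tmp/
vzdump 777 --exclude-path bar --exclude-path /tmp/
----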