change http links to https

checked that they work -- some returned certificate errors, so those
were left unchanged.

also updated some that no longer pointed at the right target
(open-iscsi, and the list of supported CPUs, which was returning an
empty result).
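
(a quick way to re-check such links -- a rough sketch, not the exact
check used for this commit; 'links.txt' is a hypothetical file with one
URL per line:)

----
while read -r url; do
    # try the https variant; curl exits non-zero on certificate errors
    if curl -fsSI --max-time 10 -o /dev/null "${url/http:/https:}"; then
        echo "ok   ${url/http:/https:}"
    else
        echo "keep $url"
    fi
done < links.txt
----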

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Author:    Oguz Bektas <o.bektas@proxmox.com>
Date:      2021-04-29 13:59:42 +02:00
Committer: Thomas Lamprecht
Commit:    a55d30db1d (parent 024d3706c0)

13 changed files with 25 additions and 24 deletions


@@ -78,10 +78,10 @@ are only accessible by root.
 Technology
 ----------
 
-We use the http://www.corosync.org[Corosync Cluster Engine] for
-cluster communication, and http://www.sqlite.org[SQlite] for the
+We use the https://www.corosync.org[Corosync Cluster Engine] for
+cluster communication, and https://www.sqlite.org[SQlite] for the
 database file. The file system is implemented in user space using
-http://fuse.sourceforge.net[FUSE].
+https://github.com/libfuse/libfuse[FUSE].
 
 File System Layout
 ------------------


@@ -15,4 +15,4 @@ Affero General Public License for more details.
 
 You should have received a copy of the GNU Affero General Public
 License along with this program. If not, see
-http://www.gnu.org/licenses/
+https://www.gnu.org/licenses/


@@ -12,7 +12,7 @@ receive various stats about your hosts, virtual guests and storages.
 
 Currently supported are:
 
-* Graphite (see http://graphiteapp.org )
+* Graphite (see https://graphiteapp.org )
 * InfluxDB (see https://www.influxdata.com/time-series-platform/influxdb/ )
 
 The external metric server definitions are saved in '/etc/pve/status.cfg', and
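
For context, a metric server definition in '/etc/pve/status.cfg' looks
roughly like the following -- a sketch only; the section name, address,
port and path are placeholder values:

----
graphite: example
        server 192.168.1.5
        port 2003
        path proxmox
----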


@@ -17,7 +17,7 @@ ADD NEW FAQS TO THE BOTTOM OF THIS SECTION TO MAINTAIN NUMBERING
 
 What distribution is {pve} based on?::
 
-{pve} is based on http://www.debian.org[Debian GNU/Linux]
+{pve} is based on https://www.debian.org[Debian GNU/Linux]
 
 What license does the {pve} project use?::
@@ -43,13 +43,14 @@ egrep '(vmx|svm)' /proc/cpuinfo
 
 Supported Intel CPUs::
 
 64-bit processors with
-http://en.wikipedia.org/wiki/Virtualization_Technology#Intel_virtualization_.28VT-x.29[Intel
-Virtualization Technology (Intel VT-x)] support. (http://ark.intel.com/search/advanced/?s=t&VTX=true&InstructionSet=64-bit[List of processors with Intel VT and 64-bit])
+https://en.wikipedia.org/wiki/Virtualization_Technology#Intel_virtualization_.28VT-x.29[Intel
+Virtualization Technology (Intel VT-x)] support.
+(https://ark.intel.com/content/www/us/en/ark/search/featurefilter.html?productType=873&2_VTX=True&2_InstructionSet=64-bit[List of processors with Intel VT and 64-bit])
 
 Supported AMD CPUs::
 
 64-bit processors with
-http://en.wikipedia.org/wiki/Virtualization_Technology#AMD_virtualization_.28AMD-V.29[AMD
+https://en.wikipedia.org/wiki/Virtualization_Technology#AMD_virtualization_.28AMD-V.29[AMD
 Virtualization Technology (AMD-V)] support.
 
 What is a container/virtual environment (VE)/virtual private server (VPS)?::


@@ -562,7 +562,7 @@ and add `ip_conntrack_ftp` to `/etc/modules` (so that it works after a reboot).
 Suricata IPS integration
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-If you want to use the http://suricata-ids.org/[Suricata IPS]
+If you want to use the https://suricata-ids.org/[Suricata IPS]
 (Intrusion Prevention System), it's possible.
 
 Packets will be forwarded to the IPS only after the firewall ACCEPTed
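
As a rough outline of what that integration involves (a sketch based on
this section of the docs; the VMID and queue number are placeholders):

----
# apt-get install suricata
# modprobe nfnetlink_queue
----

Then enable the IPS for a guest in its firewall config, e.g.
'/etc/pve/firewall/<VMID>.fw':

----
[OPTIONS]
ips: 1
ips_queues: 0
----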


@@ -304,10 +304,10 @@ Video Tutorials
 ---------------
 
 * List of all official tutorials on our
-http://www.youtube.com/proxmoxve[{pve} YouTube Channel]
+https://www.youtube.com/proxmoxve[{pve} YouTube Channel]
 
 * Tutorials in Spanish language on
-http://www.youtube.com/playlist?list=PLUULBIhA5QDBdNf1pcTZ5UXhek63Fij8z[ITexperts.es
+https://www.youtube.com/playlist?list=PLUULBIhA5QDBdNf1pcTZ5UXhek63Fij8z[ITexperts.es
 YouTube Play List]


@@ -169,7 +169,7 @@ Why Open Source
 
 {pve} uses a Linux kernel and is based on the Debian GNU/Linux
 Distribution. The source code of {pve} is released under the
-http://www.gnu.org/licenses/agpl-3.0.html[GNU Affero General Public
+https://www.gnu.org/licenses/agpl-3.0.html[GNU Affero General Public
 License, version 3]. This means that you are free to inspect the
 source code at any time or contribute to the project yourself.

@@ -209,7 +209,7 @@ machines. The clustering features were limited, and the user interface
 was simple (server generated web page).
 
 But we quickly developed new features using the
-http://corosync.github.io/corosync/[Corosync] cluster stack, and the
+https://corosync.github.io/corosync/[Corosync] cluster stack, and the
 introduction of the new Proxmox cluster file system (pmxcfs) was a big
 step forward, because it completely hides the cluster complexity from
 the user. Managing a cluster of 16 nodes is as simple as managing a

@@ -229,7 +229,7 @@ to manage your VMs.
 The support for various storage types is another big task. Notably,
 {pve} was the first distribution to ship ZFS on Linux by default in
 2014. Another milestone was the ability to run and manage
-http://ceph.com/[Ceph] storage on the hypervisor nodes. Such setups
+https://ceph.com/[Ceph] storage on the hypervisor nodes. Such setups
 are extremely cost effective.
 
 When we started we were among the first companies providing


@@ -8,7 +8,7 @@ endif::wiki[]
 
 Storage pool type: `cephfs`
 
-CephFS implements a POSIX-compliant filesystem, using a http://ceph.com[Ceph]
+CephFS implements a POSIX-compliant filesystem, using a https://ceph.com[Ceph]
 storage cluster to store its data. As CephFS builds upon Ceph, it shares most of
 its properties. This includes redundancy, scalability, self-healing, and high
 availability.


@@ -11,11 +11,11 @@ Storage pool type: `iscsi`
 
 iSCSI is a widely employed technology used to connect to storage
 servers. Almost all storage vendors support iSCSI. There are also open
 source iSCSI target solutions available,
-e.g. http://www.openmediavault.org/[OpenMediaVault], which is based on
+e.g. https://www.openmediavault.org/[OpenMediaVault], which is based on
 Debian.
 
 To use this backend, you need to install the
-http://www.open-iscsi.org/[Open-iSCSI] (`open-iscsi`) package. This is a
+https://www.open-iscsi.com/[Open-iSCSI] (`open-iscsi`) package. This is a
 standard Debian package, but it is not installed by default to save
 resources.
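
For illustration, installing the package and defining an iSCSI storage
might look like this -- a sketch; the storage ID, portal address and
target IQN are placeholders:

----
# apt-get install open-iscsi
----

with a matching entry in '/etc/pve/storage.cfg':

----
iscsi: mynas
        portal 10.10.10.1
        target iqn.2006-01.example.com:tsn.dcb5aaaddd
        content none
----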


@@ -8,7 +8,7 @@ endif::wiki[]
 
 Storage pool type: `rbd`
 
-http://ceph.com[Ceph] is a distributed object store and file system
+https://ceph.com[Ceph] is a distributed object store and file system
 designed to provide excellent performance, reliability and
 scalability. RADOS block devices implement a feature rich block level
 storage, and you get the following advantages:


@@ -537,7 +537,7 @@ What permission do I need?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The required API permissions are documented for each individual
-method, and can be found at http://pve.proxmox.com/pve-docs/api-viewer/
+method, and can be found at https://pve.proxmox.com/pve-docs/api-viewer/
 
 The permissions are specified as a list which can be interpreted as a
 tree of logic and access-check functions:


@@ -5,7 +5,7 @@ ifdef::wiki[]
 :pve-toplevel:
 endif::wiki[]
 
-http://cloudinit.readthedocs.io[Cloud-Init] is the de facto
+https://cloudinit.readthedocs.io[Cloud-Init] is the de facto
 multi-distribution package that handles early initialization of a
 virtual machine instance. Using Cloud-Init, configuration of network
 devices and ssh keys on the hypervisor side is possible. When the VM
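
Hedged example of that hypervisor-side setup with `qm` -- a sketch; the
VMID, storage name and addresses are placeholders:

----
# qm set 9000 --ide2 local-lvm:cloudinit
# qm set 9000 --sshkeys ~/.ssh/id_rsa.pub
# qm set 9000 --ipconfig0 ip=10.0.10.8/24,gw=10.0.10.1
----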


@@ -85,7 +85,7 @@ versus an emulated IDE controller will double the sequential write throughput,
 as measured with `bonnie++(8)`. Using the virtio network interface can deliver
 up to three times the throughput of an emulated Intel E1000 network card, as
 measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
-http://www.linux-kvm.org/page/Using_VirtIO_NIC]
+https://www.linux-kvm.org/page/Using_VirtIO_NIC]
 
 [[qm_virtual_machines_settings]]
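
A guest opts into these paravirtualized devices through its hardware
configuration, e.g. (a sketch; VMID, disk volume and bridge name are
placeholders):

----
# qm set 100 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-100-disk-0
# qm set 100 --net0 virtio,bridge=vmbr0
----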
@@ -735,8 +735,8 @@ standard setups.
 
 There are, however, some scenarios in which a BIOS is not a good firmware
 to boot from, e.g. if you want to do VGA passthrough. footnote:[Alex Williamson has a very good blog entry about this.
-http://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
-In such cases, you should rather use *OVMF*, which is an open-source UEFI implementation. footnote:[See the OVMF Project http://www.tianocore.org/ovmf/]
+https://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
+In such cases, you should rather use *OVMF*, which is an open-source UEFI implementation. footnote:[See the OVMF Project https://github.com/tianocore/tianocore.github.io/wiki/OVMF]
 
 If you want to use OVMF, there are several things to consider:
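
(The considerations themselves follow in the docs; as a quick
orientation, switching a VM to OVMF could look like this -- a sketch;
VMID and storage are placeholders, and OVMF needs an EFI vars disk to
store its boot settings:)

----
# qm set 100 --bios ovmf --efidisk0 local-lvm:1
----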