pveceph: add attribute ceph_codename

To allow changing the Ceph codename in a single place, the patch adds the
asciidoc attribute 'ceph_codename'. It also replaces the outdated references
to luminous and switches the links in pveceph.adoc from http to https.

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Author: Alwin Antreich, 2019-11-06 15:09:11 +01:00 (committed by Thomas Lamprecht)
parent 081cb76105
commit 2798d126f3
2 changed files with 16 additions and 15 deletions


@@ -15,4 +15,5 @@ author=Proxmox Server Solutions Gmbh
 email=support@proxmox.com
 endif::docinfo1[]
 ceph=http://ceph.com[Ceph]
+ceph_codename=nautilus
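With the attribute defined once, every `{ceph_codename}` reference is substituted at build time. A quick way to see all places that pick up the value (assuming a checkout of the docs repository):

[source,bash]
----
# List every reference that will be substituted when the docs are built
grep -n '{ceph_codename}' pveceph.adoc
----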


@@ -58,15 +58,15 @@ and VMs on the same node is possible.
 To simplify management, we provide 'pveceph' - a tool to install and
 manage {ceph} services on {pve} nodes.
-.Ceph consists of a couple of Daemons footnote:[Ceph intro http://docs.ceph.com/docs/luminous/start/intro/], for use as a RBD storage:
+.Ceph consists of a couple of Daemons footnote:[Ceph intro https://docs.ceph.com/docs/{ceph_codename}/start/intro/], for use as a RBD storage:
 - Ceph Monitor (ceph-mon)
 - Ceph Manager (ceph-mgr)
 - Ceph OSD (ceph-osd; Object Storage Daemon)
 TIP: We highly recommend to get familiar with Ceph's architecture
-footnote:[Ceph architecture http://docs.ceph.com/docs/luminous/architecture/]
+footnote:[Ceph architecture https://docs.ceph.com/docs/{ceph_codename}/architecture/]
 and vocabulary
-footnote:[Ceph glossary http://docs.ceph.com/docs/luminous/glossary].
+footnote:[Ceph glossary https://docs.ceph.com/docs/{ceph_codename}/glossary].
 Precondition
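To see these daemons on a running cluster at a glance (illustrative only; assumes Ceph is already installed on the node):

[source,bash]
----
# Overview of monitors, managers and OSDs plus overall cluster health
ceph -s
----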
@@ -76,7 +76,7 @@ To build a hyper-converged Proxmox + Ceph Cluster there should be at least
 three (preferably) identical servers for the setup.
 Check also the recommendations from
-http://docs.ceph.com/docs/luminous/start/hardware-recommendations/[Ceph's website].
+https://docs.ceph.com/docs/{ceph_codename}/start/hardware-recommendations/[Ceph's website].
 .CPU
 Higher CPU core frequency reduce latency and should be preferred. As a simple
@@ -237,7 +237,7 @@ configuration file.
 Ceph Monitor
 -----------
 The Ceph Monitor (MON)
-footnote:[Ceph Monitor http://docs.ceph.com/docs/luminous/start/intro/]
+footnote:[Ceph Monitor https://docs.ceph.com/docs/{ceph_codename}/start/intro/]
 maintains a master copy of the cluster map. For high availability you need to
 have at least 3 monitors. One monitor will already be installed if you
 used the installation wizard. You won't need more than 3 monitors as long
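As a sketch, adding a further monitor from the shell could look like this (assuming the PVE 6 `pveceph` subcommand syntax; check `pveceph help` for your version):

[source,bash]
----
# Run on each node that should host an additional monitor
pveceph mon create
----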
@@ -282,7 +282,7 @@ Ceph Manager
 ------------
 The Manager daemon runs alongside the monitors. It provides an interface to
 monitor the cluster. Since the Ceph luminous release at least one ceph-mgr
-footnote:[Ceph Manager http://docs.ceph.com/docs/luminous/mgr/] daemon is
+footnote:[Ceph Manager https://docs.ceph.com/docs/{ceph_codename}/mgr/] daemon is
 required.
 Create Manager
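A minimal sketch for creating an additional manager, again assuming the PVE 6 CLI syntax:

[source,bash]
----
# Usually created on the monitor nodes
pveceph mgr create
----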
@@ -355,7 +355,7 @@ WARNING: The above command will destroy data on the disk!
 Starting with the Ceph Kraken release, a new Ceph OSD storage type was
 introduced, the so called Bluestore
-footnote:[Ceph Bluestore http://ceph.com/community/new-luminous-bluestore/].
+footnote:[Ceph Bluestore https://ceph.com/community/new-luminous-bluestore/].
 This is the default when creating OSDs since Ceph Luminous.
 [source,bash]
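For illustration, creating a Bluestore OSD on an unused disk might look like this (assuming the PVE 6 `pveceph` syntax; `/dev/sdX` is a placeholder):

[source,bash]
----
# WARNING: destroys all data on the given disk
pveceph osd create /dev/sdX
----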
@@ -452,7 +452,7 @@ NOTE: The default number of PGs works for 2-5 disks. Ceph throws a
 It is advised to calculate the PG number depending on your setup, you can find
 the formula and the PG calculator footnote:[PG calculator
-http://ceph.com/pgcalc/] online. While PGs can be increased later on, they can
+https://ceph.com/pgcalc/] online. While PGs can be increased later on, they can
 never be decreased.
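The commonly cited rule of thumb behind the calculator is (target PGs per OSD x number of OSDs) / replica count, rounded up to the next power of two. A rough back-of-the-envelope check with illustrative values:

[source,bash]
----
# 12 OSDs, 3 replicas, target 100 PGs per OSD -> 400, round up to 512
echo $(( (12 * 100) / 3 ))
----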
@@ -470,7 +470,7 @@ mark the checkbox "Add storages" in the GUI or use the command line option
 Further information on Ceph pool handling can be found in the Ceph pool
 operation footnote:[Ceph pool operation
-http://docs.ceph.com/docs/luminous/rados/operations/pools/]
+https://docs.ceph.com/docs/{ceph_codename}/rados/operations/pools/]
 manual.
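A sketch of creating a pool and registering it as {pve} storage in one step; the exact CLI option name is an assumption here, so consult `pveceph pool create --help` before use:

[source,bash]
----
# Create a replicated pool and add it to the storage configuration
pveceph pool create mypool --add_storages
----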
@@ -503,7 +503,7 @@ advantage that no central index service is needed. CRUSH works with a map of
 OSDs, buckets (device locations) and rulesets (data replication) for pools.
 NOTE: Further information can be found in the Ceph documentation, under the
-section CRUSH map footnote:[CRUSH map http://docs.ceph.com/docs/luminous/rados/operations/crush-map/].
+section CRUSH map footnote:[CRUSH map https://docs.ceph.com/docs/{ceph_codename}/rados/operations/crush-map/].
 This map can be altered to reflect different replication hierarchies. The object
 replicas can be separated (eg. failure domains), while maintaining the desired
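To inspect the current CRUSH map in readable form (standard Ceph tooling, not specific to {pve}):

[source,bash]
----
# Dump the binary CRUSH map and decompile it to text
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
----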
@@ -649,7 +649,7 @@ Since Luminous (12.2.x) you can also have multiple active metadata servers
 running, but this is normally only useful for a high count on parallel clients,
 as else the `MDS` seldom is the bottleneck. If you want to set this up please
 refer to the ceph documentation. footnote:[Configuring multiple active MDS
-daemons http://docs.ceph.com/docs/luminous/cephfs/multimds/]
+daemons https://docs.ceph.com/docs/{ceph_codename}/cephfs/multimds/]
 [[pveceph_fs_create]]
 Create CephFS
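As a sketch, an MDS is created per node and multiple active MDS daemons are then allowed on the file system; the `pveceph` subcommand is assumed from the PVE 6 CLI and `cephfs` is the example file system name:

[source,bash]
----
# Create a metadata server on this node
pveceph mds create
# Allow two active MDS daemons for the file system 'cephfs'
ceph fs set cephfs max_mds 2
----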
@@ -681,7 +681,7 @@ This creates a CephFS named `'cephfs'' using a pool for its data named
 Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
 Ceph documentation for more information regarding a fitting placement group
 number (`pg_num`) for your setup footnote:[Ceph Placement Groups
-http://docs.ceph.com/docs/luminous/rados/operations/placement-groups/].
+https://docs.ceph.com/docs/{ceph_codename}/rados/operations/placement-groups/].
 Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
 storage configuration after it was created successfully.
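Putting the mentioned options together, a CephFS creation from the command line could look like this; the option names are taken from the surrounding text and the `pg_num` value is only an example:

[source,bash]
----
# Create a CephFS named 'cephfs' and add it to the storage configuration
pveceph fs create --name cephfs --pg_num 128 --add-storage
----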
@@ -763,7 +763,7 @@ object in a PG for its health. There are two forms of Scrubbing, daily
 (metadata compare) and weekly. The weekly reads the objects and uses checksums
 to ensure data integrity. If a running scrub interferes with business needs,
 you can adjust the time when scrubs footnote:[Ceph scrubbing
-https://docs.ceph.com/docs/nautilus/rados/configuration/osd-config-ref/#scrubbing]
+https://docs.ceph.com/docs/{ceph_codename}/rados/configuration/osd-config-ref/#scrubbing]
 are executed.
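For example, to restrict scrubbing to a nightly window you could set the begin/end hours; these are standard Ceph OSD options, adjust the values to your needs:

[source,bash]
----
# Only allow scrubs between 22:00 and 06:00
ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6
----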
@@ -787,10 +787,10 @@ pve# ceph -w
 To get a more detailed view, every ceph service has a log file under
 `/var/log/ceph/` and if there is not enough detail, the log level can be
-adjusted footnote:[Ceph log and debugging http://docs.ceph.com/docs/luminous/rados/troubleshooting/log-and-debug/].
+adjusted footnote:[Ceph log and debugging https://docs.ceph.com/docs/{ceph_codename}/rados/troubleshooting/log-and-debug/].
 You can find more information about troubleshooting
-footnote:[Ceph troubleshooting http://docs.ceph.com/docs/luminous/rados/troubleshooting/]
+footnote:[Ceph troubleshooting https://docs.ceph.com/docs/{ceph_codename}/rados/troubleshooting/]
 a Ceph cluster on the official website.
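To follow a specific daemon's log and temporarily raise its log level, something like the following can be used; the daemon id and debug value are placeholders, and `injectargs` changes the setting only at runtime:

[source,bash]
----
# Follow the log of OSD 0 on this node
tail -f /var/log/ceph/ceph-osd.0.log
# Temporarily raise the OSD debug level at runtime
ceph tell osd.0 injectargs '--debug-osd 10'
----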