Due to Ceph dropping privileges when running the 'ceph-crash' daemon
[0], it is necessary to allow the daemon to authenticate with its
cluster in a safe manner.
In order to avoid exposing sensitive keyrings or somehow escalating
its privileges again, 'ceph-crash' is therefore provided with its own
keyring in the '/etc/pve/ceph' directory. This directory, due to being
on 'pmxcfs', may be read by members of the 'www-data' group, which
'ceph-crash' is made part of [1].
Expected Configuration
----------------------
1. A keyring file named '/etc/pve/ceph/ceph.client.crash.keyring'
exists
2. A section named 'client.crash' exists in '/etc/pve/ceph.conf'
3. The 'client.crash' section has a key named 'keyring' which
references the keyring file as '/etc/pve/ceph/$cluster.$name.keyring'
4. The 'client.crash' section has *no* key named 'key'
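For illustration, the corresponding part of '/etc/pve/ceph.conf' would thus
look roughly like this (a sketch of the expected state, not the literal
generated file):

    [client.crash]
             keyring = /etc/pve/ceph/$cluster.$name.keyring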
New Clusters
------------
The keyring file is created and the conf file is updated after the first
monitor has been created (when calling `pveceph mon create`).
Existing Clusters
-----------------
A new helper script creates and configures the 'client.crash' keyring in
`postinst`, if:
* Ceph is installed
* Ceph is initialized ('/etc/pve/ceph.conf' and '/etc/pve/ceph' exist)
* Connection to RADOS is successful
If the above conditions are met, the helper script ensures that the
existing configuration matches the expected configuration mentioned
above.
The configuration is not changed if it is already as expected.
The helper script may be called again manually if the `postinst` hook
fails. It is installed to '/usr/share/pve-manager/helpers/pve-init-ceph-crash'.
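For example, to re-run it manually:

    /usr/share/pve-manager/helpers/pve-init-ceph-crash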
Existing `client.crash` Key
---------------------------
If a key named 'client.crash' already exists within the cluster, it is
reused and not regenerated.
[0]: https://github.com/ceph/ceph/pull/48713
[1]: https://git.proxmox.com/?p=ceph.git;a=commitdiff;h=f72c698a55905d93e9a0b7b95674616547deba8a
Signed-off-by: Max Carrara <m.carrara@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
[ TL: also improve if-expression wrapping ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The Ceph monitor removal assertion contains a condition that checks
whether the given mon ID actually exists and thus may be removed.
The first part of the condition checks whether the hash returned by
`get_services_info` [0] contains the key "mon.$monid". However, the
hash's keys are never prefixed with "mon.", which makes this check
incorrect.
This is fixed by just using "$monid" directly.
The second part checks whether the mon hashes returned by
Ceph contain the "name" key before comparing the key with the given
mon ID. This key existence check is also incorrect; in particular:
* If the lookup `$_->{name}` evaluates to e.g. "foo", the check
passes, because "foo" is truthy. [1]
* If the lookup `$_->{name}` evaluates to "0", the check fails,
because "0" is falsy (due to it being equivalent to the number 0,
according to Perl [1]).
This is solved by using the inbuilt `defined()` instead of relying on
Perl's definition of truthiness.
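A minimal Perl sketch of the corrected condition (variable names like
$monhash, $monlist and $monid are illustrative, not the actual code):

    # keys of the services info hash carry no 'mon.' prefix
    my $found = defined($monhash->{$monid})
        # defined() avoids misclassifying a mon with ID "0" as missing
        || grep { defined($_->{name}) && $_->{name} eq $monid } @$monlist;
    die "no such monitor id '$monid'\n" if !$found;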
[0]: https://git.proxmox.com/?p=pve-manager.git;a=blob;f=PVE/Ceph/Services.pm;h=e0f31e8eb6bc9b3777b3d0d548497276efaa5c41;hb=HEAD#l112
[1]: https://perldoc.perl.org/perldata#Scalar-values
Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=5198
Signed-off-by: Max Carrara <m.carrara@proxmox.com>
The pve_verify_cidr{,v4,v6} functions were originally intended for
the /etc/network/interfaces API endpoints and thus are a bit
restrictive. For example, as reported in the community forum[0],
pve_verify_cidr() does not consider '0::/0' and '0::/1' to be valid.
The error message in this scenario being
> value does not look like a valid CIDR network
is also confusing, as the first thought of users will be that it comes
from the passed-in monitor address.
The public networks are not written here; they are read from the Ceph config
and via a RADOS mon command, so there is no need to try and verify them. If
something really were to go wrong during parsing, the
get_local_ip_from_cidr() call would complain afterwards.
[0]: https://forum.proxmox.com/threads/125226/
Suggested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
to include a more complete description of the returned data.
Sort properties in alphabetical order if the list is longer.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Because $mon->{addr} might come with a port attached (affects monitors
created with PVE 5.4, as reported in the community forum [0]), or might even
be a hostname (according to the code in Ceph/Services.pm), although the
latter shouldn't happen for configurations created by PVE.
[0]: https://forum.proxmox.com/threads/105904/
Fixes: 9e989449 ("api: ceph: mon: fix handling of IPv6 addresses in assert_mon_prerequisites")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Multiple public networks can be defined in the ceph.conf. The networks need to
be routed to each other.
Support handling multiple IPs for a single monitor. By default, one address from
each public network is selected for monitor creation, but, as before, it can be
overwritten with the mon-address parameter, now taking a list of addresses.
On removal, make sure that all addresses are removed from the mon_host entry in
the ceph configuration.
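A usage sketch (assuming the list is given comma-separated; the exact
accepted separators may differ):

    pveceph mon create --mon-address 192.0.2.10,2001:db8::10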
Originally-by: Alwin Antreich <a.antreich@proxmox.com>
[handling of multiple addresses]
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
by also comparing the canonical form to decide when to remove an address. When
getting the IP from the RADOS information, also drop any brackets, so our
existing function can handle it. Add the brackets back within the
remove_addr_from_mon_host function.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Partially based on pve-storage's CephConfig.pm get_monaddr_list, but the
interface is not the best for the use case here.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
in preparation for supporting multiple addresses. The config section does not
allow more than one public_addr.
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
mostly relevant to prepare support for IPv4/IPv6 dual stack mode as a special
case of the planned support for multiple public networks.
As before, only set the false value when we are dealing with the first address,
but also be explicit about the IPv4 case as the defaults might change in the
future.
Then, when an address of a different type comes along later, set the relevant
bind option to true.
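As a rough sketch of the intended result (the exact placement of the options
in ceph.conf may differ): if the first address is IPv6, the configuration ends
up with

    ms_bind_ipv6 = true
    ms_bind_ipv4 = false

and once an address of the other type (IPv4 here) comes along later,
ms_bind_ipv4 is set back to true, enabling dual stack.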
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
nautilus 14.2.20 and octopus 15.2.11 fixed a security issue with
reclaiming the global ID auth (CVE-2021-20288). As fixing this issue
means that older clients won't be able to connect anymore, the fix was
done behind a switch, with a HEALTH warning if it was not active
(i.e., if connections from older clients were still allowed).
New installations also have this switch at the insecure level, for
compat reasons, so let's disable the insecure setting ourselves after
monitor creation to avoid the health warning and the slightly insecure
setup (in a default PVE Ceph setup the whole issue was of rather low
impact/risk anyway). But only do so when creating the first monitor of
a Ceph cluster, to avoid breaking existing setups by accident.
An admin can always switch it back again, e.g., if they're recovering
from some failure and need to set up fresh monitors but still have old
clients.
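The switch in question is presumably the auth_allow_insecure_global_id_reclaim
option introduced alongside the CVE fix; switching it back would then roughly
amount to:

    ceph config set mon auth_allow_insecure_global_id_reclaim true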
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Luminous, Nautilus and Octopus. In Octopus the mon_status was dropped.
Also the ceph status was cleaned up and doesn't provide the mgrmap and
monmap.
The rados queries used in the ceph status API endpoints (cluster / node)
were factored out and merged to one place.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
The public_addr option for creating a new MON is only valid for manual
startup (since Ceph Jewel) and is just ignored by ceph-mon during setup.
As the MON is started after creation through systemd without an IP
specified, it tries to auto-select an IP.
Before this patch the public_addr was only explicitly written to the
ceph.conf if no public_network was set. The mon_address is only needed
in the config on the first start of the MON.
The ceph-mon itself tries to select an IP under the following conditions:
- neither public_network nor public_addr is in the ceph.conf
  * startup fails
- public_network is in the ceph.conf
  * with a single network, take the first available IP
  * on multiple networks, walk through the list in order and start on
    the first network where an IP is found
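A sketch of the relevant option as it would then appear in ceph.conf (the
address is illustrative; the exact section it is written to is not spelled
out here):

    public_addr = 10.10.10.1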
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
instead of having multiple regexes in various places for the name,
define a 'SERVICE_REGEX' in PVE::Ceph::Services, and use that
everywhere in the api where we need it
additionally limit new services to 200 characters, since
systemd units have a limit of 256 characters [0] (including the suffix), and
200 seems to be enough.
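A hypothetical sketch of such a shared constant (the real pattern is defined
in PVE::Ceph::Services and may differ, as may the way the 200-character cap
is enforced):

    # hypothetical: alphanumeric service IDs, optionally with '-', capped at 200 chars
    our $SERVICE_REGEX = qr/[a-zA-Z0-9]([a-zA-Z0-9\-]{0,198}[a-zA-Z0-9])?/;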
users can now create ceph services on machines with hostnames
longer than 32 characters
[0]: https://www.freedesktop.org/software/systemd/man/systemd.unit.html
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of silently ignoring them. Since we are in a task worker here,
this is especially important - otherwise the task status/result is also
wrong!
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
nautilus also puts non-running monitors in the monmap, so only show a
monitor as running when it has quorum
this is also not 100% correct, but the only 'correct' alternative is
to try and get/parse the systemd status of the units and broadcast it
to the pmxcfs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this fixes an issue where having only one monitor in mon_host, which is
offline, prevents a client connection
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
ms_bind_ipv4 is true by default and osds look for both
ipv6 and ipv4 addresses in the cluster network/public network.
Since we only allow one network each (which must be either
ipv4 or ipv6), we disallow ipv4 if ipv6 is detected.
This fixes osds not starting on an ipv6-enabled, newly set-up cluster.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when there is no 'public_network' in the config, the monitor
can only find an ip if it is given explicitly, either via the command line
(not possible with systemd) or via the ceph.conf
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
a 'mon remove' does this already for us, so do not stop it
this led to a race where we could stop the second-to-last monitor
before it was removed from the cluster, leading to a state
where two monitors were needed for quorum, but only one existed
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we need to remove an IP, IP:port or an IP vector from mon_host,
so use multiple regex search-and-replaces for this.
This does not look really nice, but due to the strange format
of the line (e.g. ',' is a separator both inside and outside of a vector,
and ipv6 addresses may be surrounded with [] but so are vectors),
I found no better way.
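For illustration, such a mon_host line may look roughly like this (addresses
are made up):

    mon_host = 10.0.0.1,[v2:10.0.0.2:3300,v1:10.0.0.2:6789],[2001:db8::1]:6789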
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
otherwise it is possible that multiple users create monitors at the same
time, resulting in a wrong ceph.conf and probably worse
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
in nautilus, the default msgr protocol is v2, but it has to be
explicitly given to monmaptool; also, we don't want to use the
monitor sections anymore, so only update mon_host
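A sketch of explicitly passing the v2 (and legacy v1) address to monmaptool
(mon ID, addresses and path are illustrative):

    monmaptool --create --addv pve1 '[v2:10.0.0.1:3300,v1:10.0.0.1:6789]' /tmp/monmap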
ceph can cope with mixed mon_host and monitor sections, so this is
not a problem
also the ceph-create-keys part is not necessary anymore since
this is done by the monitor itself now
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by using our new 'get_services_info',
which already checks the nautilus+ style 'mon_host' key in the ceph.conf
for the ip address
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
it makes no sense to have the mon key inside the client.admin.keyring;
also, the order and operations did not make much sense.
Additionally, create the client admin keyring when creating the config
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if we already have a monitor, we can try to get the public_network via
the ceph configuration database
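Presumably via the MON config database, i.e. roughly the CLI equivalent of:

    ceph config get mon public_network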
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
in a case where we cannot connect to any monitor, we did not get
any info, even though we have some via the pmxcfs.
So get the RADOS object in an eval, get the info we have from the
config/pmxcfs, and set the state to unknown if we cannot query via RADOS
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
similar to the MDS api, so that DELETE and POST calls can operate on
the same path. This does not change the CLI pveceph interface
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
since we will have a separate gui for the manager, we do not need this
anymore
this is a breaking api change
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
and use the broadcast when a service is added/removed
we will use 'get_cluster_service' in the future when we generate a list
of services of a specific type
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
From Nautilus release changelog[0]:
> The auid property for cephx users and RADOS pools has been removed.
> This was an undocumented and partially implemented capability that
> allowed cephx users to map capabilities to RADOS pools that they
> “owned”. Because there are no users we have removed this support.
[0]: https://ceph.com/releases/v14-2-0-nautilus-released/
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>