mirror of https://git.proxmox.com/git/ceph.git (synced 2025-08-05 08:56:23 +00:00)

import 12.2.13 release

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>

parent 39cfebf25a
commit b9c3bfeb3d
Makefile | 2 +-
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 RELEASE=5.4
 
 PACKAGE=ceph
-VER=12.2.12
+VER=12.2.13
 DEBREL=pve1
 
 SRCDIR=ceph
@@ -1,7 +1,7 @@
 cmake_minimum_required(VERSION 2.8.11)
 
 project(ceph)
-set(VERSION 12.2.12)
+set(VERSION 12.2.13)
 
 if(POLICY CMP0046)
   # Tweak policies (this one disables "missing" dependency warning)
@@ -1,3 +1,48 @@
+12.2.13
+-------
+
+* Ceph now packages python bindings for python3.6 instead of
+  python3.4, because EPEL7 recently switched from python3.4 to
+  python3.6 as the native python3. See the `announcement <https://lists.fedoraproject.org/archives/list/epel-announce@lists.fedoraproject.org/message/EGUMKAIMPK2UD5VSHXM53BH2MBDGDWMO/>`_
+  for more details on the background of this change.
+
+* We now have telemetry support via a ceph-mgr module. The telemetry module is
+  strictly on an opt-in basis, and is meant to collect generic cluster
+  information and push it to a central endpoint. By default, we're pushing it
+  to a project endpoint at https://telemetry.ceph.com/report, but this is
+  customizable by setting the 'url' config option with::
+
+    ceph telemetry config-set url '<your url>'
+
+  You will have to opt-in on sharing your information with::
+
+    ceph telemetry on
+
+  You can view exactly what information will be reported first with::
+
+    ceph telemetry show
+
+  Should you opt-in, your information will be licensed under the
+  Community Data License Agreement - Sharing - Version 1.0, which you can
+  read at https://cdla.io/sharing-1-0/
+
+  The telemetry module reports information about CephFS file systems,
+  including:
+
+  - how many MDS daemons (in total and per file system)
+  - which features are (or have been) enabled
+  - how many data pools
+  - approximate file system age (year + month of creation)
+  - how much metadata is being cached per file system
+
+  As well as:
+
+  - whether IPv4 or IPv6 addresses are used for the monitors
+  - whether RADOS cache tiering is enabled (and which mode)
+  - whether pools are replicated or erasure coded, and
+    which erasure code profile plugin and parameters are in use
+  - how many RGW daemons, zones, and zonegroups are present; which RGW frontends are in use
+  - aggregate stats about the CRUSH map, like which algorithms are used, how big buckets are, how many rules are defined, and what tunables are in use
+
 12.2.12
 -------
 * In 12.2.9 and earlier releases, keyring caps were not checked for validity,
@@ -182,3 +227,29 @@
 'ceph osd set pglog_hardlimit' after completely upgrading to 12.2.11. Once all the OSDs
 have this flag set, the length of the pg log will be capped by a hard limit. We do not
 recommend unsetting this flag beyond this point.
+
+* A health warning is now generated if the average osd heartbeat ping
+  time exceeds a configurable threshold for any of the intervals
+  computed. The OSD computes 1 minute, 5 minute and 15 minute
+  intervals with average, minimum and maximum values. The new configuration
+  option ``mon_warn_on_slow_ping_ratio`` specifies a percentage of
+  ``osd_heartbeat_grace`` to determine the threshold. A value of zero
+  disables the warning. The new configuration option
+  ``mon_warn_on_slow_ping_time``, specified in milliseconds, overrides the
+  computed value and causes a warning
+  when OSD heartbeat pings take longer than the specified amount.
+  The new admin command ``ceph daemon mgr.# dump_osd_network [threshold]`` will
+  list all connections with a ping time longer than the specified threshold or
+  the value determined by the config options, for the average of any of the 3 intervals.
+  The new admin command ``ceph daemon osd.# dump_osd_network [threshold]`` will
+  do the same but only include heartbeats initiated by the specified OSD.
+
+* The configuration value ``osd_calc_pg_upmaps_max_stddev`` used for upmap
+  balancing has been removed. Instead use the mgr balancer config
+  ``upmap_max_deviation``, which now is an integer number of PGs of deviation
+  from the target PGs per OSD. This can be set with a command like
+  ``ceph config set mgr mgr/balancer/upmap_max_deviation 2``. The default
+  ``upmap_max_deviation`` is 1. There are situations where crush rules
+  would not allow a pool to ever have completely balanced PGs. For example, if
+  crush requires 1 replica on each of 3 racks, but there are fewer OSDs in 1 of
+  the racks. In those cases, the configuration value can be increased.
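For illustration, the new commands named above can be exercised directly against a daemon's admin socket; a minimal sketch, assuming a mgr named ``x``, an OSD ``0``, and a 100 ms threshold (all illustrative)::

  ceph daemon mgr.x dump_osd_network 100
  ceph daemon osd.0 dump_osd_network 100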
@@ -20,8 +20,8 @@ if command -v dpkg >/dev/null; then
         exit 1
     fi
 elif command -v yum >/dev/null; then
-    for package in python-devel python-pip python-virtualenv doxygen ditaa ant libxml2-devel libxslt-devel Cython graphviz; do
-        if ! rpm -q $package >/dev/null ; then
+    for package in python36-devel python36-pip python36-virtualenv doxygen ditaa ant libxml2-devel libxslt-devel python36-Cython graphviz; do
+        if ! rpm -q --whatprovides $package >/dev/null ; then
             missing="${missing:+$missing }$package"
         fi
     done
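The move to ``rpm -q --whatprovides`` is what lets the check pass when a requirement is satisfied by a differently named package. A minimal sketch of the difference (package name illustrative)::

  rpm -q python36-Cython                 # succeeds only if a package with exactly this name is installed
  rpm -q --whatprovides python36-Cython  # also succeeds if any installed package provides this capability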
@@ -57,7 +57,7 @@ cd build-doc
 [ -z "$vdir" ] && vdir="$TOPDIR/build-doc/virtualenv"
 
 if [ ! -e $vdir ]; then
-    virtualenv --system-site-packages $vdir -p python2
+    virtualenv --python=python3 --system-site-packages $vdir
 fi
 $vdir/bin/pip install --quiet -r $TOPDIR/admin/doc-requirements.txt
 
@@ -1,3 +1,4 @@
-Sphinx == 1.6.3
--e git+https://github.com/ceph/sphinx-ditaa.git@py3#egg=sphinx-ditaa
-breathe == 4.11.1
+Sphinx == 2.1.2
+git+https://github.com/ceph/sphinx-ditaa.git@py3#egg=sphinx-ditaa
+breathe == 4.13.1
+pyyaml >= 5.1.2
@@ -1,7 +1,7 @@
 # Contributor: John Coyle <dx9err@gmail.com>
 # Maintainer: John Coyle <dx9err@gmail.com>
 pkgname=ceph
-pkgver=12.2.12
+pkgver=12.2.13
 pkgrel=0
 pkgdesc="Ceph is a distributed object store and file system"
 pkgusers="ceph"
@@ -63,7 +63,7 @@ makedepends="
     xmlstarlet
     yasm
 "
-source="ceph-12.2.12.tar.bz2"
+source="ceph-12.2.13.tar.bz2"
 subpackages="
     $pkgname-base
     $pkgname-common
@@ -116,7 +116,7 @@ _sysconfdir=/etc
 _udevrulesdir=/etc/udev/rules.d
 _python_sitelib=/usr/lib/python2.7/site-packages
 
-builddir=$srcdir/ceph-12.2.12
+builddir=$srcdir/ceph-12.2.13
 
 build() {
     export CEPH_BUILD_VIRTUALENV=$builddir
@@ -15,30 +15,27 @@
 # Please submit bugfixes or comments via http://tracker.ceph.com/
 #
 %bcond_without ocf
-%bcond_without cephfs_java
-%if 0%{?suse_version}
-%bcond_with ceph_test_package
-%else
-%bcond_without ceph_test_package
-%endif
 %bcond_with make_check
+%bcond_without ceph_test_package
 %ifarch s390 s390x
 %bcond_with tcmalloc
 %else
 %bcond_without tcmalloc
 %endif
-%bcond_with lowmem_builder
 %if 0%{?fedora} || 0%{?rhel}
 %bcond_without selinux
+%bcond_without cephfs_java
+%bcond_with lowmem_builder
+%bcond_without lttng
 %endif
 %if 0%{?suse_version}
 %bcond_with selinux
-%endif
-# LTTng-UST enabled on Fedora, RHEL 6+, and SLE (not openSUSE)
-%if 0%{?fedora} || 0%{?rhel} >= 6 || 0%{?suse_version}
-%if ! 0%{?is_opensuse}
+%bcond_with cephfs_java
+%bcond_without lowmem_builder
+%ifarch x86_64 aarch64
 %bcond_without lttng
+%else
+%bcond_with lttng
 %endif
 %endif
 
@@ -50,7 +47,8 @@
 %{!?_udevrulesdir: %global _udevrulesdir /lib/udev/rules.d}
 %{!?tmpfiles_create: %global tmpfiles_create systemd-tmpfiles --create}
 %{!?python3_pkgversion: %global python3_pkgversion 3}
+%{!?python3_version_nodots: %global python3_version_nodots 3}
+%{!?python3_version: %global python3_version 3}
 # unify libexec for all targets
 %global _libexecdir %{_exec_prefix}/lib
 
@@ -61,7 +59,7 @@
 # main package definition
 #################################################################################
 Name: ceph
-Version: 12.2.12
+Version: 12.2.13
 Release: 0%{?dist}
 %if 0%{?fedora} || 0%{?rhel}
 Epoch: 2
@@ -77,7 +75,7 @@ License: LGPL-2.1 and CC-BY-SA-3.0 and GPL-2.0 and BSL-1.0 and BSD-3-Clause and
 Group: System/Filesystems
 %endif
 URL: http://ceph.com/
-Source0: http://ceph.com/download/ceph-12.2.12.tar.bz2
+Source0: http://ceph.com/download/ceph-12.2.13.tar.bz2
 %if 0%{?suse_version}
 %if 0%{?is_opensuse}
 ExclusiveArch: x86_64 aarch64 ppc64 ppc64le
@@ -113,6 +111,7 @@ BuildRequires: python-numpy-devel
 %endif
 BuildRequires: python-coverage
 BuildRequires: python-pecan
+BuildRequires: python-tox
 BuildRequires: socat
 %endif
 BuildRequires: bc
@@ -196,9 +195,9 @@ BuildRequires: python-sphinx
 %endif
 # python34-... for RHEL, python3-... for all other supported distros
 %if 0%{?rhel}
-BuildRequires: python34-devel
-BuildRequires: python34-setuptools
-BuildRequires: python34-Cython
+BuildRequires: python%{python3_pkgversion}-devel
+BuildRequires: python%{python3_pkgversion}-setuptools
+BuildRequires: python%{python3_version_nodots}-Cython
 %else
 BuildRequires: python3-devel
 BuildRequires: python3-setuptools
@@ -501,6 +500,7 @@ Group: Development/Languages/Python
 %endif
 Requires: librgw2 = %{_epoch_prefix}%{version}-%{release}
 Requires: python-rados = %{_epoch_prefix}%{version}-%{release}
+%{?python_provide:%python_provide python-rgw}
 Obsoletes: python-ceph < %{_epoch_prefix}%{version}-%{release}
 %description -n python-rgw
 This package contains Python 2 libraries for interacting with Cephs RADOS
@@ -513,6 +513,7 @@ Group: Development/Languages/Python
 %endif
 Requires: librgw2 = %{_epoch_prefix}%{version}-%{release}
 Requires: python%{python3_pkgversion}-rados = %{_epoch_prefix}%{version}-%{release}
+%{?python_provide:%python_provide python%{python3_pkgversion}-rgw}
 %description -n python%{python3_pkgversion}-rgw
 This package contains Python 3 libraries for interacting with Cephs RADOS
 gateway.
@@ -523,6 +524,7 @@ Summary: Python 2 libraries for the RADOS object store
 Group: Development/Languages/Python
 %endif
 Requires: librados2 = %{_epoch_prefix}%{version}-%{release}
+%{?python_provide:%python_provide python-rados}
 Obsoletes: python-ceph < %{_epoch_prefix}%{version}-%{release}
 %description -n python-rados
 This package contains Python 2 libraries for interacting with Cephs RADOS
@@ -535,6 +537,7 @@ Group: Development/Languages/Python
 %endif
 Requires: python%{python3_pkgversion}
 Requires: librados2 = %{_epoch_prefix}%{version}-%{release}
+%{?python_provide:%python_provide python%{python3_pkgversion}-rados}
 %description -n python%{python3_pkgversion}-rados
 This package contains Python 3 libraries for interacting with Cephs RADOS
 object store.
@@ -603,6 +606,7 @@ Group: Development/Languages/Python
 %endif
 Requires: librbd1 = %{_epoch_prefix}%{version}-%{release}
 Requires: python-rados = %{_epoch_prefix}%{version}-%{release}
+%{?python_provide:%python_provide python-rbd}
 Obsoletes: python-ceph < %{_epoch_prefix}%{version}-%{release}
 %description -n python-rbd
 This package contains Python 2 libraries for interacting with Cephs RADOS
@@ -615,6 +619,7 @@ Group: Development/Languages/Python
 %endif
 Requires: librbd1 = %{_epoch_prefix}%{version}-%{release}
 Requires: python%{python3_pkgversion}-rados = %{_epoch_prefix}%{version}-%{release}
+%{?python_provide:%python_provide python%{python3_pkgversion}-rbd}
 %description -n python%{python3_pkgversion}-rbd
 This package contains Python 3 libraries for interacting with Cephs RADOS
 block device.
@@ -655,9 +660,8 @@ Summary: Python 2 libraries for Ceph distributed file system
 Group: Development/Languages/Python
 %endif
 Requires: libcephfs2 = %{_epoch_prefix}%{version}-%{release}
-%if 0%{?suse_version}
-Recommends: python-rados = %{_epoch_prefix}%{version}-%{release}
-%endif
+Requires: python-rados = %{_epoch_prefix}%{version}-%{release}
+%{?python_provide:%python_provide python-cephfs}
 Obsoletes: python-ceph < %{_epoch_prefix}%{version}-%{release}
 %description -n python-cephfs
 This package contains Python 2 libraries for interacting with Cephs distributed
@@ -670,6 +674,7 @@ Group: Development/Languages/Python
 %endif
 Requires: libcephfs2 = %{_epoch_prefix}%{version}-%{release}
 Requires: python%{python3_pkgversion}-rados = %{_epoch_prefix}%{version}-%{release}
+%{?python_provide:%python_provide python%{python3_pkgversion}-cephfs}
 %description -n python%{python3_pkgversion}-cephfs
 This package contains Python 3 libraries for interacting with Cephs distributed
 file system.
@@ -679,6 +684,7 @@ Summary: Python 3 utility libraries for Ceph CLI
 %if 0%{?suse_version}
 Group: Development/Languages/Python
 %endif
+%{?python_provide:%python_provide python%{python3_pkgversion}-ceph-argparse}
 %description -n python%{python3_pkgversion}-ceph-argparse
 This package contains types and routines for Python 3 used by the Ceph CLI as
 well as the RESTful interface. These have to do with querying the daemons for
@@ -788,7 +794,7 @@ python-rbd, python-rgw or python-cephfs instead.
 # common
 #################################################################################
 %prep
-%autosetup -p1 -n ceph-12.2.12
+%autosetup -p1 -n ceph-12.2.13
 
 %build
 %if 0%{with cephfs_java}
@@ -833,7 +839,7 @@ cmake .. \
     -DCMAKE_INSTALL_DOCDIR=%{_docdir}/ceph \
     -DCMAKE_INSTALL_INCLUDEDIR=%{_includedir} \
     -DWITH_MANPAGE=ON \
-    -DWITH_PYTHON3=ON \
+    -DWITH_PYTHON3=%{python3_version} \
     -DWITH_SYSTEMD=ON \
 %if 0%{?rhel} && ! 0%{?centos}
     -DWITH_SUBMAN=ON \
@ -15,30 +15,27 @@
|
|||||||
# Please submit bugfixes or comments via http://tracker.ceph.com/
|
# Please submit bugfixes or comments via http://tracker.ceph.com/
|
||||||
#
|
#
|
||||||
%bcond_without ocf
|
%bcond_without ocf
|
||||||
%bcond_without cephfs_java
|
|
||||||
%if 0%{?suse_version}
|
|
||||||
%bcond_with ceph_test_package
|
|
||||||
%else
|
|
||||||
%bcond_without ceph_test_package
|
|
||||||
%endif
|
|
||||||
%bcond_with make_check
|
%bcond_with make_check
|
||||||
|
%bcond_without ceph_test_package
|
||||||
%ifarch s390 s390x
|
%ifarch s390 s390x
|
||||||
%bcond_with tcmalloc
|
%bcond_with tcmalloc
|
||||||
%else
|
%else
|
||||||
%bcond_without tcmalloc
|
%bcond_without tcmalloc
|
||||||
%endif
|
%endif
|
||||||
%bcond_with lowmem_builder
|
|
||||||
%if 0%{?fedora} || 0%{?rhel}
|
%if 0%{?fedora} || 0%{?rhel}
|
||||||
%bcond_without selinux
|
%bcond_without selinux
|
||||||
|
%bcond_without cephfs_java
|
||||||
|
%bcond_with lowmem_builder
|
||||||
|
%bcond_without lttng
|
||||||
%endif
|
%endif
|
||||||
%if 0%{?suse_version}
|
%if 0%{?suse_version}
|
||||||
%bcond_with selinux
|
%bcond_with selinux
|
||||||
%endif
|
%bcond_with cephfs_java
|
||||||
|
%bcond_without lowmem_builder
|
||||||
# LTTng-UST enabled on Fedora, RHEL 6+, and SLE (not openSUSE)
|
%ifarch x86_64 aarch64
|
||||||
%if 0%{?fedora} || 0%{?rhel} >= 6 || 0%{?suse_version}
|
|
||||||
%if ! 0%{?is_opensuse}
|
|
||||||
%bcond_without lttng
|
%bcond_without lttng
|
||||||
|
%else
|
||||||
|
%bcond_with lttng
|
||||||
%endif
|
%endif
|
||||||
%endif
|
%endif
|
||||||
|
|
||||||
@@ -50,7 +47,8 @@
 %{!?_udevrulesdir: %global _udevrulesdir /lib/udev/rules.d}
 %{!?tmpfiles_create: %global tmpfiles_create systemd-tmpfiles --create}
 %{!?python3_pkgversion: %global python3_pkgversion 3}
+%{!?python3_version_nodots: %global python3_version_nodots 3}
+%{!?python3_version: %global python3_version 3}
 # unify libexec for all targets
 %global _libexecdir %{_exec_prefix}/lib
 
@@ -113,6 +111,7 @@ BuildRequires: python-numpy-devel
 %endif
 BuildRequires: python-coverage
 BuildRequires: python-pecan
+BuildRequires: python-tox
 BuildRequires: socat
 %endif
 BuildRequires: bc
@@ -196,9 +195,9 @@ BuildRequires: python-sphinx
 %endif
 # python34-... for RHEL, python3-... for all other supported distros
 %if 0%{?rhel}
-BuildRequires: python34-devel
-BuildRequires: python34-setuptools
-BuildRequires: python34-Cython
+BuildRequires: python%{python3_pkgversion}-devel
+BuildRequires: python%{python3_pkgversion}-setuptools
+BuildRequires: python%{python3_version_nodots}-Cython
 %else
 BuildRequires: python3-devel
 BuildRequires: python3-setuptools
@@ -501,6 +500,7 @@ Group: Development/Languages/Python
 %endif
 Requires: librgw2 = %{_epoch_prefix}%{version}-%{release}
 Requires: python-rados = %{_epoch_prefix}%{version}-%{release}
+%{?python_provide:%python_provide python-rgw}
 Obsoletes: python-ceph < %{_epoch_prefix}%{version}-%{release}
 %description -n python-rgw
 This package contains Python 2 libraries for interacting with Cephs RADOS
@@ -513,6 +513,7 @@ Group: Development/Languages/Python
 %endif
 Requires: librgw2 = %{_epoch_prefix}%{version}-%{release}
 Requires: python%{python3_pkgversion}-rados = %{_epoch_prefix}%{version}-%{release}
+%{?python_provide:%python_provide python%{python3_pkgversion}-rgw}
 %description -n python%{python3_pkgversion}-rgw
 This package contains Python 3 libraries for interacting with Cephs RADOS
 gateway.
@@ -523,6 +524,7 @@ Summary: Python 2 libraries for the RADOS object store
 Group: Development/Languages/Python
 %endif
 Requires: librados2 = %{_epoch_prefix}%{version}-%{release}
+%{?python_provide:%python_provide python-rados}
 Obsoletes: python-ceph < %{_epoch_prefix}%{version}-%{release}
 %description -n python-rados
 This package contains Python 2 libraries for interacting with Cephs RADOS
@@ -535,6 +537,7 @@ Group: Development/Languages/Python
 %endif
 Requires: python%{python3_pkgversion}
 Requires: librados2 = %{_epoch_prefix}%{version}-%{release}
+%{?python_provide:%python_provide python%{python3_pkgversion}-rados}
 %description -n python%{python3_pkgversion}-rados
 This package contains Python 3 libraries for interacting with Cephs RADOS
 object store.
@@ -603,6 +606,7 @@ Group: Development/Languages/Python
 %endif
 Requires: librbd1 = %{_epoch_prefix}%{version}-%{release}
 Requires: python-rados = %{_epoch_prefix}%{version}-%{release}
+%{?python_provide:%python_provide python-rbd}
 Obsoletes: python-ceph < %{_epoch_prefix}%{version}-%{release}
 %description -n python-rbd
 This package contains Python 2 libraries for interacting with Cephs RADOS
@@ -615,6 +619,7 @@ Group: Development/Languages/Python
 %endif
 Requires: librbd1 = %{_epoch_prefix}%{version}-%{release}
 Requires: python%{python3_pkgversion}-rados = %{_epoch_prefix}%{version}-%{release}
+%{?python_provide:%python_provide python%{python3_pkgversion}-rbd}
 %description -n python%{python3_pkgversion}-rbd
 This package contains Python 3 libraries for interacting with Cephs RADOS
 block device.
@@ -655,9 +660,8 @@ Summary: Python 2 libraries for Ceph distributed file system
 Group: Development/Languages/Python
 %endif
 Requires: libcephfs2 = %{_epoch_prefix}%{version}-%{release}
-%if 0%{?suse_version}
-Recommends: python-rados = %{_epoch_prefix}%{version}-%{release}
-%endif
+Requires: python-rados = %{_epoch_prefix}%{version}-%{release}
+%{?python_provide:%python_provide python-cephfs}
 Obsoletes: python-ceph < %{_epoch_prefix}%{version}-%{release}
 %description -n python-cephfs
 This package contains Python 2 libraries for interacting with Cephs distributed
@@ -670,6 +674,7 @@ Group: Development/Languages/Python
 %endif
 Requires: libcephfs2 = %{_epoch_prefix}%{version}-%{release}
 Requires: python%{python3_pkgversion}-rados = %{_epoch_prefix}%{version}-%{release}
+%{?python_provide:%python_provide python%{python3_pkgversion}-cephfs}
 %description -n python%{python3_pkgversion}-cephfs
 This package contains Python 3 libraries for interacting with Cephs distributed
 file system.
@@ -679,6 +684,7 @@ Summary: Python 3 utility libraries for Ceph CLI
 %if 0%{?suse_version}
 Group: Development/Languages/Python
 %endif
+%{?python_provide:%python_provide python%{python3_pkgversion}-ceph-argparse}
 %description -n python%{python3_pkgversion}-ceph-argparse
 This package contains types and routines for Python 3 used by the Ceph CLI as
 well as the RESTful interface. These have to do with querying the daemons for
@@ -833,7 +839,7 @@ cmake .. \
     -DCMAKE_INSTALL_DOCDIR=%{_docdir}/ceph \
     -DCMAKE_INSTALL_INCLUDEDIR=%{_includedir} \
     -DWITH_MANPAGE=ON \
-    -DWITH_PYTHON3=ON \
+    -DWITH_PYTHON3=%{python3_version} \
     -DWITH_SYSTEMD=ON \
 %if 0%{?rhel} && ! 0%{?centos}
     -DWITH_SUBMAN=ON \
@@ -1,3 +1,9 @@
+ceph (12.2.13-1) stable; urgency=medium
+
+  * New upstream release
+
+ -- Ceph Release Team <ceph-maintainers@ceph.com>  Thu, 30 Jan 2020 20:52:35 +0000
+
 ceph (12.2.12-1) stable; urgency=medium
 
   * New upstream release
@@ -16,49 +16,21 @@
 
 if(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|AARCH64")
   set(HAVE_ARM 1)
-  set(save_quiet ${CMAKE_REQUIRED_QUIET})
-  set(CMAKE_REQUIRED_QUIET true)
-  include(CheckCXXSourceCompiles)
+  include(CheckCCompilerFlag)
 
-  check_cxx_source_compiles("
-    #define CRC32CX(crc, value) __asm__(\"crc32cx %w[c], %w[c], %x[v]\":[c]\"+r\"(crc):[v]\"r\"(value))
-    asm(\".arch_extension crc\");
-    unsigned int foo(unsigned int ret) {
-      CRC32CX(ret, 0);
-      return ret;
-    }
-    int main() { foo(0); }" HAVE_ARMV8_CRC)
-  check_cxx_source_compiles("
-    asm(\".arch_extension crypto\");
-    unsigned int foo(unsigned int ret) {
-      __asm__(\"pmull v2.1q, v2.1d, v1.1d\");
-      return ret;
-    }
-    int main() { foo(0); }" HAVE_ARMV8_CRYPTO)
-
-  set(CMAKE_REQUIRED_QUIET ${save_quiet})
-  if(HAVE_ARMV8_CRC)
-    message(STATUS " aarch64 crc extensions supported")
-  endif()
-
-  if(HAVE_ARMV8_CRYPTO)
-    message(STATUS " aarch64 crypto extensions supported")
-  endif()
-  CHECK_C_COMPILER_FLAG(-march=armv8-a+crc+crypto HAVE_ARMV8_CRC_CRYPTO_MARCH)
-
-  # don't believe only the -march support; gcc 4.8.5 on RHEL/CentOS says
-  # it supports +crc but hasn't got the intrinsics or arm_acle.h. Test for
-  # the actual presence of one of the intrinsic functions.
-  if(HAVE_ARMV8_CRC_CRYPTO_MARCH)
-    check_cxx_source_compiles("
-      #include <inttypes.h>
-      int main() { uint32_t a; uint8_t b; __builtin_aarch64_crc32b(a, b); }
-      " HAVE_ARMV8_CRC_CRYPTO_INTRINSICS)
-  endif()
 
+  check_c_compiler_flag(-march=armv8-a+crc+crypto HAVE_ARMV8_CRC_CRYPTO_INTRINSICS)
   if(HAVE_ARMV8_CRC_CRYPTO_INTRINSICS)
-    message(STATUS " aarch64 crc+crypto intrinsics supported")
-    set(ARMV8_CRC_COMPILE_FLAGS "${ARMV8_CRC_COMPILE_FLAGS} -march=armv8-a+crc+crypto")
+    set(ARMV8_CRC_COMPILE_FLAGS "-march=armv8-a+crc+crypto")
+    set(HAVE_ARMV8_CRC TRUE)
+    set(HAVE_ARMV8_CRYPTO TRUE)
+  else()
+    check_c_compiler_flag(-march=armv8-a+crc HAVE_ARMV8_CRC)
+    check_c_compiler_flag(-march=armv8-a+crypto HAVE_ARMV8_CRYPTO)
+    if(HAVE_ARMV8_CRC)
+      set(ARMV8_CRC_COMPILE_FLAGS "-march=armv8-a+crc")
+    elseif(HAVE_ARMV8_CRYPTO)
+      set(ARMV8_CRC_COMPILE_FLAGS "-march=armv8-a+crypto")
+    endif()
+  endif()
 
   CHECK_C_COMPILER_FLAG(-march=armv8-a+simd HAVE_ARMV8_SIMD)
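The rewritten probe trusts a plain compiler-flag check instead of compiling inline-assembly test programs. The same check can be sketched at a shell prompt (the compiler name and flag spelling are assumptions; any aarch64 toolchain would do)::

  echo 'int main(void){return 0;}' | cc -march=armv8-a+crc+crypto -x c - -o /dev/null && echo flag supported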
@@ -886,6 +886,7 @@ Package: python-rgw
 Architecture: linux-any
 Section: python
 Depends: librgw2 (>= ${binary:Version}),
+         python-rados (= ${binary:Version}),
          ${misc:Depends},
          ${python:Depends},
          ${shlibs:Depends},
@ -920,6 +921,7 @@ Package: python3-rgw
|
|||||||
Architecture: linux-any
|
Architecture: linux-any
|
||||||
Section: python
|
Section: python
|
||||||
Depends: librgw2 (>= ${binary:Version}),
|
Depends: librgw2 (>= ${binary:Version}),
|
||||||
|
python3-rados (= ${binary:Version}),
|
||||||
${misc:Depends},
|
${misc:Depends},
|
||||||
${python3:Depends},
|
${python3:Depends},
|
||||||
${shlibs:Depends},
|
${shlibs:Depends},
|
||||||
@ -952,6 +954,7 @@ Package: python-cephfs
|
|||||||
Architecture: linux-any
|
Architecture: linux-any
|
||||||
Section: python
|
Section: python
|
||||||
Depends: libcephfs2 (= ${binary:Version}),
|
Depends: libcephfs2 (= ${binary:Version}),
|
||||||
|
python-rados (= ${binary:Version}),
|
||||||
${misc:Depends},
|
${misc:Depends},
|
||||||
${python:Depends},
|
${python:Depends},
|
||||||
${shlibs:Depends},
|
${shlibs:Depends},
|
||||||
@ -986,6 +989,7 @@ Package: python3-cephfs
|
|||||||
Architecture: linux-any
|
Architecture: linux-any
|
||||||
Section: python
|
Section: python
|
||||||
Depends: libcephfs2 (= ${binary:Version}),
|
Depends: libcephfs2 (= ${binary:Version}),
|
||||||
|
python3-rados (= ${binary:Version}),
|
||||||
${misc:Depends},
|
${misc:Depends},
|
||||||
${python3:Depends},
|
${python3:Depends},
|
||||||
${shlibs:Depends},
|
${shlibs:Depends},
|
||||||
@@ -1,4 +1,5 @@
-#!/bin/sh -x
+#!/usr/bin/env bash
+set -x
 git submodule update --init --recursive
 if test -e build; then
     echo 'build dir already exists; rm -rf build and re-run'
@@ -97,8 +97,11 @@ User-visible PG States
 *down*
   a replica with necessary data is down, so the pg is offline
 
-*replay*
-  the PG is waiting for clients to replay operations after an OSD crashed
+*recovery_unfound*
+  recovery could not finish because object(s) are unfound.
+
+*backfill_unfound*
+  backfill could not finish because object(s) are unfound.
 
 *splitting*
   the PG is being split into multiple PGs (not functional as of 2012-02)
@@ -123,20 +126,9 @@ User-visible PG States
 *recovering*
   objects are being migrated/synchronized with replicas
 
-*recovery_wait*
-  the PG is waiting for the local/remote recovery reservations
-
-*backfilling*
-  a special case of recovery, in which the entire contents of
-  the PG are scanned and synchronized, instead of inferring what
-  needs to be transferred from the PG logs of recent operations
-
 *backfill_wait*
   the PG is waiting in line to start backfill
 
-*backfill_toofull*
-  backfill reservation rejected, OSD too full
-
 *incomplete*
   a pg is missing a necessary period of history from its
   log. If you see this state, report a bug, and try to start any
log. If you see this state, report a bug, and try to start any
|
log. If you see this state, report a bug, and try to start any
|
||||||
@ -149,3 +141,64 @@ User-visible PG States
|
|||||||
*remapped*
|
*remapped*
|
||||||
the PG is temporarily mapped to a different set of OSDs from what
|
the PG is temporarily mapped to a different set of OSDs from what
|
||||||
CRUSH specified
|
CRUSH specified
|
||||||
|
|
||||||
|
*deep*
|
||||||
|
In conjunction with *scrubbing* the scrub is a deep scrub
|
||||||
|
|
||||||
|
*backfilling*
|
||||||
|
a special case of recovery, in which the entire contents of
|
||||||
|
the PG are scanned and synchronized, instead of inferring what
|
||||||
|
needs to be transferred from the PG logs of recent operations
|
||||||
|
|
||||||
|
*backfill_toofull*
|
||||||
|
backfill reservation rejected, OSD too full
|
||||||
|
|
||||||
|
*recovery_wait*
|
||||||
|
the PG is waiting for the local/remote recovery reservations
|
||||||
|
|
||||||
|
*undersized*
|
||||||
|
the PG can't select enough OSDs given its size
|
||||||
|
|
||||||
|
*activating*
|
||||||
|
the PG is peered but not yet active
|
||||||
|
|
||||||
|
*peered*
|
||||||
|
the PG peered but can't go active
|
||||||
|
|
||||||
|
*snaptrim*
|
||||||
|
the PG is trimming snaps
|
||||||
|
|
||||||
|
*snaptrim_wait*
|
||||||
|
the PG is queued to trim snaps
|
||||||
|
|
||||||
|
*recovery_toofull*
|
||||||
|
recovery reservation rejected, OSD too full
|
||||||
|
|
||||||
|
*snaptrim_error*
|
||||||
|
the PG could not complete snap trimming due to errors
|
||||||
|
|
||||||
|
*forced_recovery*
|
||||||
|
the PG has been marked for highest priority recovery
|
||||||
|
|
||||||
|
*forced_backfill*
|
||||||
|
the PG has been marked for highest priority backfill
|
||||||
|
|
||||||
|
=======
|
||||||
|
OMAP STATISTICS
|
||||||
|
===============
|
||||||
|
|
||||||
|
Omap statistics are gathered during deep scrub and displayed in the output of
|
||||||
|
the following commands::
|
||||||
|
|
||||||
|
ceph pg dump
|
||||||
|
ceph pg dump all
|
||||||
|
ceph pg dump summary
|
||||||
|
ceph pg dump pgs
|
||||||
|
ceph pg dump pools
|
||||||
|
ceph pg ls
|
||||||
|
|
||||||
|
As these statistics are not updated continuously they may be quite inaccurate in
|
||||||
|
an environment where deep scrubs are run infrequently and/or there is a lot of
|
||||||
|
omap activity. As such they should not be relied on for exact accuracy but
|
||||||
|
rather used as a guide. Running a deep scrub and checking these statistics
|
||||||
|
immediately afterwards should give a good indication of current omap usage.
|
||||||
|
@@ -19,7 +19,7 @@ Synopsis
 | **ceph-bluestore-tool** show-label --dev *device* ...
 | **ceph-bluestore-tool** prime-osd-dir --dev *device* --path *osd path*
 | **ceph-bluestore-tool** bluefs-export --path *osd path* --out-dir *dir*
-| **ceph-bluestore-tool** bluefs-export --path *osd path* --out-dir *dir*
+| **ceph-bluestore-tool** free-dump|free-score --path *osd path* [ --allocator block/bluefs-wal/bluefs-db/bluefs-slow ]
 
 
 Description
@ -59,6 +59,15 @@ Commands
|
|||||||
|
|
||||||
Show device label(s).
|
Show device label(s).
|
||||||
|
|
||||||
|
:command:`free-dump` --path *osd path* [ --allocator block/bluefs-wal/bluefs-db/bluefs-slow ]
|
||||||
|
|
||||||
|
Dump all free regions in allocator.
|
||||||
|
|
||||||
|
:command:`free-score` --path *osd path* [ --allocator block/bluefs-wal/bluefs-db/bluefs-slow ]
|
||||||
|
|
||||||
|
Give a [0-1] number that represents quality of fragmentation in allocator.
|
||||||
|
0 represents case when all free space is in one chunk. 1 represents worst possible fragmentation.
|
||||||
|
|
||||||
Options
|
Options
|
||||||
=======
|
=======
|
||||||
|
|
||||||
@ -87,6 +96,10 @@ Options
|
|||||||
|
|
||||||
deep scrub/repair (read and validate object data, not just metadata)
|
deep scrub/repair (read and validate object data, not just metadata)
|
||||||
|
|
||||||
|
.. option:: --allocator *name*
|
||||||
|
|
||||||
|
Useful for *free-dump* and *free-score* actions. Selects allocator(s).
|
||||||
|
|
||||||
Device labels
|
Device labels
|
||||||
=============
|
=============
|
||||||
|
|
||||||
@@ -11,6 +11,12 @@ Synopsis
 
 | **osdmaptool** *mapfilename* [--print] [--createsimple *numosd*
   [--pgbits *bitsperosd* ] ] [--clobber]
+| **osdmaptool** *mapfilename* [--import-crush *crushmap*]
+| **osdmaptool** *mapfilename* [--export-crush *crushmap*]
+| **osdmaptool** *mapfilename* [--upmap *file*] [--upmap-max *max-optimizations*]
+  [--upmap-deviation *max-deviation*] [--upmap-pool *poolname*]
+  [--upmap-save *file*] [--upmap-save *newosdmap*] [--upmap-active]
+| **osdmaptool** *mapfilename* [--upmap-cleanup] [--upmap-save *newosdmap*]
 
 
 Description
@@ -19,6 +25,8 @@ Description
 **osdmaptool** is a utility that lets you create, view, and manipulate
 OSD cluster maps from the Ceph distributed storage system. Notably, it
 lets you extract the embedded CRUSH map or import a new CRUSH map.
+It can also simulate the upmap balancer mode so you can get a sense of
+what is needed to balance your PGs.
 
 
 Options
@@ -58,6 +66,46 @@ Options
 will print out the summary of all placement groups and the mappings
 from them to the mapped OSDs.
 
+.. option:: --mark-out
+
+   mark an osd as out (but do not persist)
+
+.. option:: --health
+
+   dump health checks
+
+.. option:: --with-default-pool
+
+   include default pool when creating map
+
+.. option:: --upmap-cleanup <file>
+
+   clean up pg_upmap[_items] entries, writing commands to <file> [default: - for stdout]
+
+.. option:: --upmap <file>
+
+   calculate pg upmap entries to balance pg layout, writing commands to <file> [default: - for stdout]
+
+.. option:: --upmap-max <max-optimizations>
+
+   set max upmap entries to calculate [default: 10]
+
+.. option:: --upmap-deviation <max-deviation>
+
+   max deviation from target [default: 5]
+
+.. option:: --upmap-pool <poolname>
+
+   restrict upmap balancing to 1 pool; the option can be repeated for multiple pools
+
+.. option:: --upmap-save
+
+   write modified OSDMap with upmap changes
+
+.. option:: --upmap-active
+
+   Act like an active balancer, keep applying changes until balanced
+
+
 Example
 =======
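Combining the new options, a plausible invocation (map file, output file, and pool name are illustrative) computes at most 10 upmap entries for a single pool::

  osdmaptool osdmap --upmap out.txt --upmap-pool rbd --upmap-max 10 --upmap-deviation 5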
@ -70,19 +118,19 @@ To view the result::
|
|||||||
|
|
||||||
osdmaptool --print osdmap
|
osdmaptool --print osdmap
|
||||||
|
|
||||||
To view the mappings of placement groups for pool 0::
|
To view the mappings of placement groups for pool 1::
|
||||||
|
|
||||||
osdmaptool --test-map-pgs-dump rbd --pool 0
|
osdmaptool osdmap --test-map-pgs-dump --pool 1
|
||||||
|
|
||||||
pool 0 pg_num 8
|
pool 0 pg_num 8
|
||||||
0.0 [0,2,1] 0
|
1.0 [0,2,1] 0
|
||||||
0.1 [2,0,1] 2
|
1.1 [2,0,1] 2
|
||||||
0.2 [0,1,2] 0
|
1.2 [0,1,2] 0
|
||||||
0.3 [2,0,1] 2
|
1.3 [2,0,1] 2
|
||||||
0.4 [0,2,1] 0
|
1.4 [0,2,1] 0
|
||||||
0.5 [0,2,1] 0
|
1.5 [0,2,1] 0
|
||||||
0.6 [0,1,2] 0
|
1.6 [0,1,2] 0
|
||||||
0.7 [1,0,2] 1
|
1.7 [1,0,2] 1
|
||||||
#osd count first primary c wt wt
|
#osd count first primary c wt wt
|
||||||
osd.0 8 5 5 1 1
|
osd.0 8 5 5 1 1
|
||||||
osd.1 8 1 1 1 1
|
osd.1 8 1 1 1 1
|
||||||
@ -97,7 +145,7 @@ To view the mappings of placement groups for pool 0::
|
|||||||
size 3 8
|
size 3 8
|
||||||
|
|
||||||
In which,
|
In which,
|
||||||
#. pool 0 has 8 placement groups. And two tables follow:
|
#. pool 1 has 8 placement groups. And two tables follow:
|
||||||
#. A table for placement groups. Each row presents a placement group. With columns of:
|
#. A table for placement groups. Each row presents a placement group. With columns of:
|
||||||
|
|
||||||
* placement group id,
|
* placement group id,
|
||||||
@ -141,6 +189,56 @@ placement group distribution, whose standard deviation is 1.41421::
|
|||||||
size 20
|
size 20
|
||||||
size 364
|
size 364
|
||||||
|
|
||||||
|
To simulate the active balancer in upmap mode::
|
||||||
|
|
||||||
|
osdmaptool --upmap upmaps.out --upmap-active --upmap-deviation 6 --upmap-max 11 osdmap
|
||||||
|
|
||||||
|
osdmaptool: osdmap file 'osdmap'
|
||||||
|
writing upmap command output to: upmaps.out
|
||||||
|
checking for upmap cleanups
|
||||||
|
upmap, max-count 11, max deviation 6
|
||||||
|
pools movies photos metadata data
|
||||||
|
prepared 11/11 changes
|
||||||
|
Time elapsed 0.00310404 secs
|
||||||
|
pools movies photos metadata data
|
||||||
|
prepared 11/11 changes
|
||||||
|
Time elapsed 0.00283402 secs
|
||||||
|
pools data metadata movies photos
|
||||||
|
prepared 11/11 changes
|
||||||
|
Time elapsed 0.003122 secs
|
||||||
|
pools photos metadata data movies
|
||||||
|
prepared 11/11 changes
|
||||||
|
Time elapsed 0.00324372 secs
|
||||||
|
pools movies metadata data photos
|
||||||
|
prepared 1/11 changes
|
||||||
|
Time elapsed 0.00222609 secs
|
||||||
|
pools data movies photos metadata
|
||||||
|
prepared 0/11 changes
|
||||||
|
Time elapsed 0.00209916 secs
|
||||||
|
Unable to find further optimization, or distribution is already perfect
|
||||||
|
osd.0 pgs 41
|
||||||
|
osd.1 pgs 42
|
||||||
|
osd.2 pgs 42
|
||||||
|
osd.3 pgs 41
|
||||||
|
osd.4 pgs 46
|
||||||
|
osd.5 pgs 39
|
||||||
|
osd.6 pgs 39
|
||||||
|
osd.7 pgs 43
|
||||||
|
osd.8 pgs 41
|
||||||
|
osd.9 pgs 46
|
||||||
|
osd.10 pgs 46
|
||||||
|
osd.11 pgs 46
|
||||||
|
osd.12 pgs 46
|
||||||
|
osd.13 pgs 41
|
||||||
|
osd.14 pgs 40
|
||||||
|
osd.15 pgs 40
|
||||||
|
osd.16 pgs 39
|
||||||
|
osd.17 pgs 46
|
||||||
|
osd.18 pgs 46
|
||||||
|
osd.19 pgs 39
|
||||||
|
osd.20 pgs 42
|
||||||
|
Total time elapsed 0.0167765 secs, 5 rounds
|
||||||
|
|
||||||
|
|
||||||
Availability
|
Availability
|
||||||
============
|
============
|
||||||
@@ -28,6 +28,12 @@ Options
 
   Interact with the given pool. Required by most commands.
 
+.. option:: --pgid
+
+  As an alternative to ``--pool``, ``--pgid`` also allows users to specify the
+  PG id to which the command will be directed. With this option, certain
+  commands like ``ls`` allow users to limit the scope of the command to the given PG.
+
 .. option:: -s snap, --snap snap
 
   Read from the given pool snapshot. Valid for all pool-specific read operations.
@ -107,7 +113,7 @@ Pool specific commands
|
|||||||
List the watchers of object name.
|
List the watchers of object name.
|
||||||
|
|
||||||
:command:`ls` *outfile*
|
:command:`ls` *outfile*
|
||||||
List objects in given pool and write to outfile.
|
List objects in the given pool and write to outfile. Instead of ``--pool`` if ``--pgid`` will be specified, ``ls`` will only list the objects in the given PG.
|
||||||
|
|
||||||
:command:`lssnap`
|
:command:`lssnap`
|
||||||
List snapshots for given pool.
|
List snapshots for given pool.
|
||||||
@ -189,6 +195,10 @@ To get a list object in pool foo sent to stdout::
|
|||||||
|
|
||||||
rados -p foo ls -
|
rados -p foo ls -
|
||||||
|
|
||||||
|
To get a list of objects in PG 0.6::
|
||||||
|
|
||||||
|
rados --pgid 0.6 ls
|
||||||
|
|
||||||
To write an object::
|
To write an object::
|
||||||
|
|
||||||
rados -p foo put myobject blah.txt
|
rados -p foo put myobject blah.txt
|
||||||
|
@ -72,7 +72,10 @@ which are as follows:
|
|||||||
Remove access key.
|
Remove access key.
|
||||||
|
|
||||||
:command:`bucket list`
|
:command:`bucket list`
|
||||||
List all buckets.
|
List buckets, or, if bucket specified with --bucket=<bucket>,
|
||||||
|
list its objects. If bucket specified adding --allow-unordered
|
||||||
|
removes ordering requirement, possibly generating results more
|
||||||
|
quickly in buckets with large number of objects.
|
||||||
|
|
||||||
:command:`bucket link`
|
:command:`bucket link`
|
||||||
Link bucket to specified user.
|
Link bucket to specified user.
|
||||||
@ -226,6 +229,20 @@ which are as follows:
|
|||||||
:command:`orphans finish`
|
:command:`orphans finish`
|
||||||
Clean up search for leaked rados objects
|
Clean up search for leaked rados objects
|
||||||
|
|
||||||
|
:command:`reshard add`
|
||||||
|
Schedule a resharding of a bucket
|
||||||
|
|
||||||
|
:command:`reshard list`
|
||||||
|
List all bucket resharding or scheduled to be resharded
|
||||||
|
|
||||||
|
:command:`reshard process`
|
||||||
|
Process of scheduled reshard jobs
|
||||||
|
|
||||||
|
:command:`reshard status`
|
||||||
|
Resharding status of a bucket
|
||||||
|
|
||||||
|
:command:`reshard cancel`
|
||||||
|
Cancel resharding a bucket
|
||||||
|
|
||||||
Options
|
Options
|
||||||
=======
|
=======
|
||||||
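A sketch of the workflow these subcommands enable (the bucket name and shard count are assumptions; check ``radosgw-admin help`` for the exact flags)::

  radosgw-admin reshard add --bucket mybucket --num-shards 64
  radosgw-admin reshard list
  radosgw-admin reshard process
  radosgw-admin reshard status --bucket mybucket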
@@ -34,4 +34,5 @@ sensible.
     Zabbix plugin <zabbix>
     Prometheus plugin <prometheus>
     Influx plugin <influx>
+    Telemetry plugin <telemetry>
 
ceph/doc/mgr/telemetry.rst | 36 (new file)

--- /dev/null
+++ b/ceph/doc/mgr/telemetry.rst
@@ -0,0 +1,36 @@
+Telemetry plugin
+================
+The telemetry plugin sends anonymous data about the cluster in which it is running back to the Ceph project.
+
+The data being sent back to the project does not contain any sensitive data like pool names, object names, object contents or hostnames.
+
+It contains counters and statistics on how the cluster has been deployed, the version of Ceph, the distribution of the hosts and other parameters which help the project to gain a better understanding of the way Ceph is used.
+
+Data is sent over HTTPS to *telemetry.ceph.com*
+
+Enabling
+--------
+
+The *telemetry* module is enabled with::
+
+  ceph mgr module enable telemetry
+
+
+Interval
+--------
+The module compiles and sends a new report every 72 hours by default.
+
+Contact and Description
+-----------------------
+A contact and description can be added to the report; this is optional::
+
+  ceph telemetry config-set contact 'John Doe <john.doe@example.com>'
+  ceph telemetry config-set description 'My first Ceph cluster'
+
+Show report
+-----------
+The report is sent in JSON format, and can be printed::
+
+  ceph telemetry show
+
+So you can inspect the content if you have privacy concerns.
@ -91,7 +91,8 @@
|
|||||||
"attr_value_mismatch",
|
"attr_value_mismatch",
|
||||||
"attr_name_mismatch",
|
"attr_name_mismatch",
|
||||||
"snapset_inconsistency",
|
"snapset_inconsistency",
|
||||||
"hinfo_inconsistency"
|
"hinfo_inconsistency",
|
||||||
|
"size_too_large"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
"minItems": 0,
|
"minItems": 0,
|
||||||
|
@@ -231,12 +231,13 @@ The configured cache memory budget can be used in a few different ways:
 * BlueStore data (i.e., recently read or written object data)

 Cache memory usage is governed by the following options:
-``bluestore_cache_meta_ratio``, ``bluestore_cache_kv_ratio``, and
-``bluestore_cache_kv_max``. The fraction of the cache devoted to data
-is 1.0 minus the meta and kv ratios. The memory devoted to kv
-metadata (the RocksDB cache) is capped by ``bluestore_cache_kv_max``
-since our testing indicates there are diminishing returns beyond a
-certain point.
+``bluestore_cache_meta_ratio`` and ``bluestore_cache_kv_ratio``.
+The fraction of the cache devoted to data
+is governed by the effective bluestore cache size (depending on
+``bluestore_cache_size[_ssd|_hdd]`` settings and the device class of the primary
+device) as well as the meta and kv ratios.
+The data fraction can be calculated by
+``<effective_cache_size> * (1 - bluestore_cache_meta_ratio - bluestore_cache_kv_ratio)``

 ``bluestore_cache_size``
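A worked example of the data-fraction formula above, under assumed values (a 3 GiB effective cache with both ratios at the new ``.4`` default)::

    # <effective_cache_size> * (1 - meta_ratio - kv_ratio) = 3072 MiB * 0.2
    echo "$(( 3072 * 20 / 100 )) MiB left for object data"   # -> 614 MiB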
@@ -264,14 +265,14 @@ certain point.
 :Description: The ratio of cache devoted to metadata.
 :Type: Floating point
 :Required: Yes
-:Default: ``.01``
+:Default: ``.4``

 ``bluestore_cache_kv_ratio``

 :Description: The ratio of cache devoted to key/value data (rocksdb).
 :Type: Floating point
 :Required: Yes
-:Default: ``.99``
+:Default: ``.4``

 ``bluestore_cache_kv_max``

@@ -395,6 +395,25 @@ by setting it in the ``[mon]`` section of the configuration file.
 :Default: True


+``mon warn on slow ping ratio``
+
+:Description: Issue a ``HEALTH_WARN`` in cluster log if any heartbeat
+              between OSDs exceeds ``mon warn on slow ping ratio``
+              of ``osd heartbeat grace``.  The default is 5%.
+:Type: Float
+:Default: ``0.05``
+
+
+``mon warn on slow ping time``
+
+:Description: Override ``mon warn on slow ping ratio`` with a specific value.
+              Issue a ``HEALTH_WARN`` in cluster log if any heartbeat
+              between OSDs exceeds ``mon warn on slow ping time``
+              milliseconds.  The default is 0 (disabled).
+:Type: Integer
+:Default: ``0``
+
+
 ``mon cache target full warn ratio``

 :Description: Position between pool's ``cache_target_full`` and
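A sketch of tightening the new warning at runtime via the generic ``injectargs`` mechanism (the 500 ms value is only an example)::

    ceph tell mon.* injectargs '--mon_warn_on_slow_ping_time 500'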
@@ -24,10 +24,8 @@ monitoring the Ceph Storage Cluster.
 OSDs Check Heartbeats
 =====================

-Each Ceph OSD Daemon checks the heartbeat of other Ceph OSD Daemons every 6
-seconds. You can change the heartbeat interval by adding an ``osd heartbeat
-interval`` setting under the ``[osd]`` section of your Ceph configuration file,
-or by setting the value at runtime. If a neighboring Ceph OSD Daemon doesn't
+Each Ceph OSD Daemon checks the heartbeat of other Ceph OSD Daemons at random
+intervals less than every 6 seconds. If a neighboring Ceph OSD Daemon doesn't
 show a heartbeat within a 20 second grace period, the Ceph OSD Daemon may
 consider the neighboring Ceph OSD Daemon ``down`` and report it back to a Ceph
 Monitor, which will update the Ceph Cluster Map. You may change this grace
@@ -379,6 +377,15 @@ OSD Settings
 :Default: ``30``


+``osd mon heartbeat stat stale``
+
+:Description: Stop reporting on heartbeat ping times which haven't been updated for
+              this many seconds.  Set to zero to disable this action.
+
+:Type: 32-bit Integer
+:Default: ``3600``
+
+
 ``osd mon report interval max``

 :Description: The maximum time in seconds that a Ceph OSD Daemon can wait before
@@ -335,6 +335,22 @@ scrubbing operations.
 :Default: 512 KB. ``524288``


+``osd scrub auto repair``
+
+:Description: Setting this to ``true`` will enable automatic pg repair when errors
+              are found in deep-scrub.  However, if more than ``osd scrub auto repair num errors``
+              errors are found, a repair is NOT performed.
+:Type: Boolean
+:Default: ``false``
+
+
+``osd scrub auto repair num errors``
+
+:Description: Auto repair will not occur if more than this many errors are found.
+:Type: 32-bit Integer
+:Default: ``5``
+
+
 .. index:: OSD; operations settings

 Operations
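A sketch of turning the new auto-repair behaviour on across all OSDs at runtime (the values shown are the documented defaults; the same options can be persisted under ``[osd]`` in ``ceph.conf``)::

    ceph tell osd.* injectargs '--osd_scrub_auto_repair true'
    ceph tell osd.* injectargs '--osd_scrub_auto_repair_num_errors 5'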
@@ -401,8 +417,7 @@ recovery operations to ensure optimal performance during recovery.

 ``osd client op priority``

-:Description: The priority set for client operations. It is relative to
-              ``osd recovery op priority``.
+:Description: The priority set for client operations.

 :Type: 32-bit Integer
 :Default: ``63``
@@ -411,8 +426,7 @@ recovery operations to ensure optimal performance during recovery.

 ``osd recovery op priority``

-:Description: The priority set for recovery operations. It is relative to
-              ``osd client op priority``.
+:Description: The priority set for recovery operations, if not specified by the pool's ``recovery_op_priority``.

 :Type: 32-bit Integer
 :Default: ``3``
@@ -421,23 +435,70 @@ recovery operations to ensure optimal performance during recovery.

 ``osd scrub priority``

-:Description: The priority set for scrub operations. It is relative to
-              ``osd client op priority``.
+:Description: The default priority set for a scheduled scrub work queue when the
+              pool doesn't specify a value of ``scrub_priority``.  This can be
+              boosted to the value of ``osd client op priority`` when scrub is
+              blocking client operations.

 :Type: 32-bit Integer
 :Default: ``5``
 :Valid Range: 1-63


+``osd requested scrub priority``
+
+:Description: The priority set for user requested scrub on the work queue.  If
+              this value were to be smaller than ``osd client op priority``, it
+              can be boosted to the value of ``osd client op priority`` when
+              scrub is blocking client operations.
+
+:Type: 32-bit Integer
+:Default: ``120``
+
+
 ``osd snap trim priority``

-:Description: The priority set for snap trim operations. It is relative to
-              ``osd client op priority``.
+:Description: The priority set for the snap trim work queue.

 :Type: 32-bit Integer
 :Default: ``5``
 :Valid Range: 1-63

+``osd snap trim sleep``
+
+:Description: Time in seconds to sleep before the next snap trim op.
+              Increasing this value will slow down snap trimming.
+              This option overrides backend-specific variants.
+
+:Type: Float
+:Default: ``0``
+
+
+``osd snap trim sleep hdd``
+
+:Description: Time in seconds to sleep before the next snap trim op
+              for HDDs.
+
+:Type: Float
+:Default: ``5``
+
+
+``osd snap trim sleep ssd``
+
+:Description: Time in seconds to sleep before the next snap trim op
+              for SSDs.
+
+:Type: Float
+:Default: ``0``
+
+
+``osd snap trim sleep hybrid``
+
+:Description: Time in seconds to sleep before the next snap trim op
+              when osd data is on HDD and osd journal is on SSD.
+
+:Type: Float
+:Default: ``2``
+
 ``osd op thread timeout``

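A sketch of throttling snap trimming at runtime with the new sleep option (the 2.0 s value is arbitrary; per the description above it overrides the hdd/ssd/hybrid variants)::

    ceph tell osd.* injectargs '--osd_snap_trim_sleep 2.0'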
@@ -455,49 +516,6 @@ recovery operations to ensure optimal performance during recovery.
 :Default: ``30``


-``osd disk threads``
-
-:Description: The number of disk threads, which are used to perform background
-              disk intensive OSD operations such as scrubbing and snap
-              trimming.
-
-:Type: 32-bit Integer
-:Default: ``1``
-
-``osd disk thread ioprio class``
-
-:Description: Warning: it will only be used if both ``osd disk thread
-              ioprio class`` and ``osd disk thread ioprio priority`` are
-              set to a non default value.  Sets the ioprio_set(2) I/O
-              scheduling ``class`` for the disk thread. Acceptable
-              values are ``idle``, ``be`` or ``rt``. The ``idle``
-              class means the disk thread will have lower priority
-              than any other thread in the OSD. This is useful to slow
-              down scrubbing on an OSD that is busy handling client
-              operations. ``be`` is the default and is the same
-              priority as all other threads in the OSD. ``rt`` means
-              the disk thread will have precendence over all other
-              threads in the OSD. Note: Only works with the Linux Kernel
-              CFQ scheduler. Since Jewel scrubbing is no longer carried
-              out by the disk iothread, see osd priority options instead.
-:Type: String
-:Default: the empty string
-
-``osd disk thread ioprio priority``
-
-:Description: Warning: it will only be used if both ``osd disk thread
-              ioprio class`` and ``osd disk thread ioprio priority`` are
-              set to a non default value. It sets the ioprio_set(2)
-              I/O scheduling ``priority`` of the disk thread ranging
-              from 0 (highest) to 7 (lowest). If all OSDs on a given
-              host were in class ``idle`` and compete for I/O
-              (i.e. due to controller congestion), it can be used to
-              lower the disk thread priority of one OSD to 7 so that
-              another OSD with priority 0 can have priority.
-              Note: Only works with the Linux Kernel CFQ scheduler.
-:Type: Integer in the range of 0 to 7 or -1 if not to be used.
-:Default: ``-1``
-
 ``osd op history size``

 :Description: The maximum number of completed operations to track.
@@ -971,6 +989,16 @@ perform well in a degraded state.
 :Type: Float
 :Default: ``0.025``


+``osd recovery priority``
+
+:Description: The default priority set for the recovery work queue.  Not
+              related to a pool's ``recovery_priority``.
+
+:Type: 32-bit Integer
+:Default: ``5``
+
+
 Tiering
 =======

@@ -264,6 +264,20 @@ Ceph configuration file.
 :Type: Float
 :Default: ``2``

+``osd recovery priority``
+
+:Description: Priority of recovery in the work queue.
+
+:Type: Integer
+:Default: ``5``
+
+``osd recovery op priority``
+
+:Description: Default priority used for recovery operations if the pool doesn't override it.
+
+:Type: Integer
+:Default: ``3``
+
 .. _pool: ../../operations/pools
 .. _Monitoring OSDs and PGs: ../../operations/monitoring-osd-pg#peering
 .. _Weighting Bucket Items: ../../operations/crush-map#weightingbucketitems
@@ -218,6 +218,59 @@ You can either raise the pool quota with::

 or delete some existing data to reduce utilization.

+BLUEFS_AVAILABLE_SPACE
+______________________
+
+To check how much space is free for BlueFS, do::
+
+  ceph daemon osd.123 bluestore bluefs available
+
+This will output up to 3 values: `BDEV_DB free`, `BDEV_SLOW free` and
+`available_from_bluestore`. `BDEV_DB` and `BDEV_SLOW` report the amount of
+space that has been acquired by BlueFS and is considered free. The value
+`available_from_bluestore` denotes the ability of BlueStore to relinquish more
+space to BlueFS. It is normal that this value differs from the amount of
+BlueStore free space, as the BlueFS allocation unit is typically larger than
+the BlueStore allocation unit. This means that only part of the BlueStore free
+space will be acceptable for BlueFS.
+
+BLUEFS_LOW_SPACE
+________________
+
+If BlueFS is running low on available free space and there is little
+`available_from_bluestore`, one can consider reducing the BlueFS allocation
+unit size. To simulate the available space with a different allocation unit,
+do::
+
+  ceph daemon osd.123 bluestore bluefs available <alloc-unit-size>
+
+BLUESTORE_FRAGMENTATION
+_______________________
+
+As BlueStore works, free space on the underlying storage will get fragmented.
+This is normal and unavoidable, but excessive fragmentation will cause
+slowdown. To inspect BlueStore fragmentation, do::
+
+  ceph daemon osd.123 bluestore allocator score block
+
+The score is given in the [0-1] range:
+
+* [0.0 .. 0.4] tiny fragmentation
+* [0.4 .. 0.7] small, acceptable fragmentation
+* [0.7 .. 0.9] considerable, but safe fragmentation
+* [0.9 .. 1.0] severe fragmentation, may impact BlueFS's ability to get space from BlueStore
+
+If a detailed report of free fragments is required, do::
+
+  ceph daemon osd.123 bluestore allocator dump block
+
+When handling an OSD process that is not running, fragmentation can be
+inspected with `ceph-bluestore-tool` instead. Get the fragmentation score::
+
+  ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-123 --allocator block free-score
+
+And dump the detailed free chunks::
+
+  ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-123 --allocator block free-dump
+
 Data health (pools & placement groups)
 --------------------------------------
|
|||||||
:doc:`placement-groups#Choosing-the-number-of-Placement-Groups` for
|
:doc:`placement-groups#Choosing-the-number-of-Placement-Groups` for
|
||||||
more information.
|
more information.
|
||||||
|
|
||||||
|
TOO_FEW_OSDS
|
||||||
|
____________
|
||||||
|
|
||||||
|
The number of OSDs in the cluster is below the configurable
|
||||||
|
threshold of ``osd_pool_default_size``.
|
||||||
|
|
||||||
SMALLER_PGP_NUM
|
SMALLER_PGP_NUM
|
||||||
_______________
|
_______________
|
||||||
|
|
||||||
@ -525,3 +584,36 @@ happen if they are misplaced or degraded (see *PG_AVAILABILITY* and
|
|||||||
You can manually initiate a scrub of a clean PG with::
|
You can manually initiate a scrub of a clean PG with::
|
||||||
|
|
||||||
ceph pg deep-scrub <pgid>
|
ceph pg deep-scrub <pgid>
|
||||||
|
|
||||||
|
|
||||||
|
Miscellaneous
|
||||||
|
-------------
|
||||||
|
|
||||||
|
TELEMETRY_CHANGED
|
||||||
|
_________________
|
||||||
|
|
||||||
|
Telemetry has been enabled, but the contents of the telemetry report
|
||||||
|
have changed since that time, so telemetry reports will not be sent.
|
||||||
|
|
||||||
|
The Ceph developers periodically revise the telemetry feature to
|
||||||
|
include new and useful information, or to remove information found to
|
||||||
|
be useless or sensitive. If any new information is included in the
|
||||||
|
report, Ceph will require the administrator to re-enable telemetry to
|
||||||
|
ensure they have an opportunity to (re)review what information will be
|
||||||
|
shared.
|
||||||
|
|
||||||
|
To review the contents of the telemetry report,::
|
||||||
|
|
||||||
|
ceph telemetry show
|
||||||
|
|
||||||
|
Note that the telemetry report consists of several optional channels
|
||||||
|
that may be independently enabled or disabled. For more information, see
|
||||||
|
:ref:`telemetry`.
|
||||||
|
|
||||||
|
To re-enable telemetry (and make this warning go away),::
|
||||||
|
|
||||||
|
ceph telemetry on
|
||||||
|
|
||||||
|
To disable telemetry (and make this warning go away),::
|
||||||
|
|
||||||
|
ceph telemetry off
|
||||||
|
@@ -230,15 +230,15 @@ few cases:
 Placement group IDs consist of the pool number (not pool name) followed
 by a period (.) and the placement group ID--a hexadecimal number. You
 can view pool numbers and their names from the output of ``ceph osd
-lspools``. For example, the default pool ``rbd`` corresponds to
-pool number ``0``. A fully qualified placement group ID has the
+lspools``. For example, the first pool created corresponds to
+pool number ``1``. A fully qualified placement group ID has the
 following form::

   {pool-num}.{pg-id}

 And it typically looks like this::

-  0.1f
+  1.1f

 To retrieve a list of placement groups, execute the following::
@@ -488,19 +488,19 @@ requests when it is ready.

 During the backfill operations, you may see one of several states:
 ``backfill_wait`` indicates that a backfill operation is pending, but is not
-underway yet; ``backfill`` indicates that a backfill operation is underway;
-and, ``backfill_too_full`` indicates that a backfill operation was requested,
+underway yet; ``backfilling`` indicates that a backfill operation is underway;
+and, ``backfill_toofull`` indicates that a backfill operation was requested,
 but couldn't be completed due to insufficient storage capacity. When a
 placement group cannot be backfilled, it may be considered ``incomplete``.

 Ceph provides a number of settings to manage the load spike associated with
 reassigning placement groups to an OSD (especially a new OSD). By default,
-``osd_max_backfills`` sets the maximum number of concurrent backfills to or from
-an OSD to 10. The ``backfill full ratio`` enables an OSD to refuse a
+``osd_max_backfills`` sets the maximum number of concurrent backfills to and from
+an OSD to 1. The ``backfill full ratio`` enables an OSD to refuse a
 backfill request if the OSD is approaching its full ratio (90%, by default) and
 change with the ``ceph osd set-backfillfull-ratio`` command.
 If an OSD refuses a backfill request, the ``osd backfill retry interval``
-enables an OSD to retry the request (after 10 seconds, by default). OSDs can
+enables an OSD to retry the request (after 30 seconds, by default). OSDs can
 also set ``osd backfill scan min`` and ``osd backfill scan max`` to manage scan
 intervals (64 and 512, by default).

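A sketch of the corresponding runtime knobs (the values shown are the defaults named in the text)::

    ceph tell osd.* injectargs '--osd_max_backfills 1'
    ceph osd set-backfillfull-ratio 0.90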
@@ -593,7 +593,7 @@ location, all you need is the object name and the pool name. For example::

 Ceph should output the object's location. For example::

-   osdmap e537 pool 'data' (0) object 'test-object-1' -> pg 0.d1743484 (0.4) -> up [1,0] acting [1,0]
+   osdmap e537 pool 'data' (1) object 'test-object-1' -> pg 1.d1743484 (1.4) -> up [1,0] acting [1,0]

 To remove the test object, simply delete it using the ``rados rm`` command.
 For example::
@@ -159,6 +159,114 @@ to a health state:
 2017-07-25 10:11:13.535493 mon.a mon.0 172.21.9.34:6789/0 110 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2 pgs unclean, 2 pgs degraded, 2 pgs undersized)
 2017-07-25 10:11:13.535577 mon.a mon.0 172.21.9.34:6789/0 111 : cluster [INF] Cluster is now healthy

+Network Performance Checks
+--------------------------
+
+Ceph OSDs send heartbeat ping messages amongst themselves to monitor daemon
+availability.  We also use the response times to monitor network performance.
+While it is possible that a busy OSD could delay a ping response, we can assume
+that if a network switch fails, multiple delays will be detected between
+distinct pairs of OSDs.
+
+By default we will warn about ping times which exceed 1 second (1000 milliseconds).
+
+::
+
+    HEALTH_WARN Long heartbeat ping times on back interface seen, longest is 1118.001 msec
+
+The health detail will add which combinations of OSDs are seeing the delays
+and by how much.  There is a limit of 10 detail line items.
+
+::
+
+    [WRN] OSD_SLOW_PING_TIME_BACK: Long heartbeat ping times on back interface seen, longest is 1118.001 msec
+        Slow heartbeat ping on back interface from osd.0 to osd.1 1118.001 msec
+        Slow heartbeat ping on back interface from osd.0 to osd.2 1030.123 msec
+        Slow heartbeat ping on back interface from osd.2 to osd.1 1015.321 msec
+        Slow heartbeat ping on back interface from osd.1 to osd.0 1010.456 msec
+
+To see even more detail and a complete dump of network performance information,
+the ``dump_osd_network`` command can be used.  Typically, this would be sent to
+a mgr, but it can be limited to a particular OSD's interactions by issuing it
+to any OSD.  The current threshold, which defaults to 1 second (1000
+milliseconds), can be overridden as an argument in milliseconds.
+
+The following command will show all gathered network performance data by
+specifying a threshold of 0 and sending to the mgr.
+
+::
+
+    $ ceph daemon /var/run/ceph/ceph-mgr.x.asok dump_osd_network 0
+    {
+        "threshold": 0,
+        "entries": [
+            {
+                "last update": "Wed Sep  4 17:04:49 2019",
+                "stale": false,
+                "from osd": 2,
+                "to osd": 0,
+                "interface": "front",
+                "average": {
+                    "1min": 1.023,
+                    "5min": 0.860,
+                    "15min": 0.883
+                },
+                "min": {
+                    "1min": 0.818,
+                    "5min": 0.607,
+                    "15min": 0.607
+                },
+                "max": {
+                    "1min": 1.164,
+                    "5min": 1.173,
+                    "15min": 1.544
+                },
+                "last": 0.924
+            },
+            {
+                "last update": "Wed Sep  4 17:04:49 2019",
+                "stale": false,
+                "from osd": 2,
+                "to osd": 0,
+                "interface": "back",
+                "average": {
+                    "1min": 0.968,
+                    "5min": 0.897,
+                    "15min": 0.830
+                },
+                "min": {
+                    "1min": 0.860,
+                    "5min": 0.563,
+                    "15min": 0.502
+                },
+                "max": {
+                    "1min": 1.171,
+                    "5min": 1.216,
+                    "15min": 1.456
+                },
+                "last": 0.845
+            },
+            {
+                "last update": "Wed Sep  4 17:04:48 2019",
+                "stale": false,
+                "from osd": 0,
+                "to osd": 1,
+                "interface": "front",
+                "average": {
+                    "1min": 0.965,
+                    "5min": 0.811,
+                    "15min": 0.850
+                },
+                "min": {
+                    "1min": 0.650,
+                    "5min": 0.488,
+                    "15min": 0.466
+                },
+                "max": {
+                    "1min": 1.252,
+                    "5min": 1.252,
+                    "15min": 1.362
+                },
+                "last": 0.791
+            },
+    ...
+
 Detecting configuration issues
 ==============================
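To focus on a single daemon's interactions instead of the mgr-wide view, the same command can be issued to one OSD, per the text above (a sketch; osd.0 and the 100 ms threshold are arbitrary)::

    ceph daemon osd.0 dump_osd_network 100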
@@ -350,7 +350,7 @@ You may set values for the following keys:
 ``crush_rule``

 :Description: The rule to use for mapping object placement in the cluster.
-:Type: Integer
+:Type: String

 .. _allow_ec_overwrites:

|
|||||||
:Default: ``0``
|
:Default: ``0``
|
||||||
|
|
||||||
|
|
||||||
|
.. _recovery_priority:
|
||||||
|
|
||||||
|
``recovery_priority``
|
||||||
|
|
||||||
|
:Description: When a value is set it will boost the computed reservation priority
|
||||||
|
by this amount. This value should be less than 30.
|
||||||
|
|
||||||
|
:Type: Integer
|
||||||
|
:Default: ``0``
|
||||||
|
|
||||||
|
|
||||||
|
.. _recovery_op_priority:
|
||||||
|
|
||||||
|
``recovery_op_priority``
|
||||||
|
|
||||||
|
:Description: Specify the recovery operation priority for this pool instead of ``osd_recovery_op_priority``.
|
||||||
|
|
||||||
|
:Type: Integer
|
||||||
|
:Default: ``0``
|
||||||
|
|
||||||
|
|
||||||
Get Pool Values
|
Get Pool Values
|
||||||
===============
|
===============
|
||||||
|
|
||||||
@ -757,6 +778,20 @@ You may get values for the following keys:
|
|||||||
:Type: Boolean
|
:Type: Boolean
|
||||||
|
|
||||||
|
|
||||||
|
``recovery_priority``
|
||||||
|
|
||||||
|
:Description: see recovery_priority_
|
||||||
|
|
||||||
|
:Type: Integer
|
||||||
|
|
||||||
|
|
||||||
|
``recovery_op_priority``
|
||||||
|
|
||||||
|
:Description: see recovery_op_priority_
|
||||||
|
|
||||||
|
:Type: Integer
|
||||||
|
|
||||||
|
|
||||||
Set the Number of Object Replicas
|
Set the Number of Object Replicas
|
||||||
=================================
|
=================================
|
||||||
|
|
||||||
|
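A sketch of setting and reading back the new keys on a pool (``mypool`` is a placeholder name)::

    ceph osd pool set mypool recovery_priority 10
    ceph osd pool get mypool recovery_priority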
@@ -23,14 +23,12 @@ use with::

   ceph features

-A word of caution
+Balancer module
 -----------------

-This is a new feature and not very user friendly. At the time of this
-writing we are working on a new `balancer` module for ceph-mgr that
-will eventually do all of this automatically.
-
-Until then,
+The new `balancer` module for ceph-mgr will automatically balance
+the number of PGs per OSD. See ``Balancer``

 Offline optimization
 --------------------
@@ -43,7 +41,9 @@ Upmap entries are updated with an offline optimizer built into ``osdmaptool``.

 #. Run the optimizer::

-     osdmaptool om --upmap out.txt [--upmap-pool <pool>] [--upmap-max <max-count>] [--upmap-deviation <max-deviation>]
+     osdmaptool om --upmap out.txt [--upmap-pool <pool>]
+       [--upmap-max <max-optimizations>] [--upmap-deviation <max-deviation>]
+       [--upmap-active]

    It is highly recommended that optimization be done for each pool
    individually, or for sets of similarly-utilized pools. You can
@@ -52,24 +52,34 @@ Upmap entries are updated with an offline optimizer built into ``osdmaptool``.
    kind of data (e.g., RBD image pools, yes; RGW index pool and RGW
    data pool, no).

-   The ``max-count`` value is the maximum number of upmap entries to
-   identify in the run. The default is 100, but you may want to make
-   this a smaller number so that the tool completes more quickly (but
-   does less work). If it cannot find any additional changes to make
-   it will stop early (i.e., when the pool distribution is perfect).
+   The ``max-optimizations`` value is the maximum number of upmap entries to
+   identify in the run. The default is `10` like the ceph-mgr balancer module,
+   but you should use a larger number if you are doing offline optimization.
+   If it cannot find any additional changes to make it will stop early
+   (i.e., when the pool distribution is perfect).

-   The ``max-deviation`` value defaults to `.01` (i.e., 1%). If an OSD
-   utilization varies from the average by less than this amount it
-   will be considered perfect.
+   The ``max-deviation`` value defaults to `5`. If an OSD PG count
+   varies from the computed target number by less than or equal
+   to this amount it will be considered perfect.

-#. The proposed changes are written to the output file ``out.txt`` in
-   the example above. These are normal ceph CLI commands that can be
-   run to apply the changes to the cluster. This can be done with::
+   The ``--upmap-active`` option simulates the behavior of the active
+   balancer in upmap mode. It keeps cycling until the OSDs are balanced
+   and reports how many rounds and how long each round is taking. The
+   elapsed time for rounds indicates the CPU load ceph-mgr will be
+   consuming when it tries to compute the next optimization plan.
+
+#. Apply the changes::

      source out.txt

+   The proposed changes are written to the output file ``out.txt`` in
+   the example above. These are normal ceph CLI commands that can be
+   run to apply the changes to the cluster.
+
 The above steps can be repeated as many times as necessary to achieve
 a perfect distribution of PGs for each set of pools.

 You can see some (gory) details about what the tool is doing by
-passing ``--debug-osd 10`` to ``osdmaptool``.
+passing ``--debug-osd 10`` and even more with ``--debug-crush 10``
+to ``osdmaptool``.
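An end-to-end sketch of one offline optimization round with the updated flags (file and pool names are placeholders)::

    ceph osd getmap -o om
    osdmaptool om --upmap out.txt --upmap-pool rbd \
        --upmap-max 50 --upmap-deviation 5
    source out.txt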
@@ -513,7 +513,7 @@ RADOS Gateway
 :Description: Enable logging of RGW's bandwidth usage.
 :Type: Boolean
 :Required: No
-:Default: ``true``
+:Default: ``false``


 ``rgw usage log flush threshold``
@@ -57,6 +57,19 @@ Options
 :Type: String
 :Default: None

+``tcp_nodelay``
+
+:Description: If set, the socket option will disable Nagle's algorithm on
+              the connection, which means that packets will be sent as soon
+              as possible instead of waiting for a full buffer or a timeout to occur.
+
+              ``1`` Disable Nagle's algorithm for all sockets.
+
+              ``0`` Keep the default: Nagle's algorithm enabled.
+
+:Type: Integer (0 or 1)
+:Default: 0
+
 Civetweb
 ========
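A sketch of enabling the new option in ``ceph.conf`` (the section name, frontend and port are illustrative; adjust to the frontend actually in use)::

    [client.rgw.gateway1]
    rgw frontends = civetweb port=7480 tcp_nodelay=1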
@@ -42,8 +42,7 @@ Some variables have been used in above commands, they are:
 - bucket: Holds a mapping between bucket name and bucket instance id
 - bucket.instance: Holds bucket instance information[2]

-Every metadata entry is kept on a single rados object.
-See below for implementation defails.
+Every metadata entry is kept on a single rados object. See below for implementation details.

 Note that the metadata is not indexed. When listing a metadata section we do a
 rados pgls operation on the containing pool.
|
@ -337,14 +337,17 @@ Pull the Realm
|
|||||||
--------------
|
--------------
|
||||||
|
|
||||||
Using the URL path, access key and secret of the master zone in the
|
Using the URL path, access key and secret of the master zone in the
|
||||||
master zone group, pull the realm to the host. To pull a non-default
|
master zone group, pull the realm configuration to the host. To pull a
|
||||||
realm, specify the realm using the ``--rgw-realm`` or ``--realm-id``
|
non-default realm, specify the realm using the ``--rgw-realm`` or
|
||||||
configuration options.
|
``--realm-id`` configuration options.
|
||||||
|
|
||||||
::
|
::
|
||||||
|
|
||||||
# radosgw-admin realm pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}
|
# radosgw-admin realm pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}
|
||||||
|
|
||||||
|
.. note:: Pulling the realm also retrieves the remote's current period
|
||||||
|
configuration, and makes it the current period on this host as well.
|
||||||
|
|
||||||
If this realm is the default realm or the only realm, make the realm the
|
If this realm is the default realm or the only realm, make the realm the
|
||||||
default realm.
|
default realm.
|
||||||
|
|
||||||
@ -352,22 +355,6 @@ default realm.
|
|||||||
|
|
||||||
# radosgw-admin realm default --rgw-realm={realm-name}
|
# radosgw-admin realm default --rgw-realm={realm-name}
|
||||||
|
|
||||||
Pull the Period
|
|
||||||
---------------
|
|
||||||
|
|
||||||
Using the URL path, access key and secret of the master zone in the
|
|
||||||
master zone group, pull the period to the host. To pull a period from a
|
|
||||||
non-default realm, specify the realm using the ``--rgw-realm`` or
|
|
||||||
``--realm-id`` configuration options.
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
# radosgw-admin period pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}
|
|
||||||
|
|
||||||
|
|
||||||
.. note:: Pulling the period retrieves the latest version of the zone group
|
|
||||||
and zone configurations for the realm.
|
|
||||||
|
|
||||||
Create a Secondary Zone
|
Create a Secondary Zone
|
||||||
-----------------------
|
-----------------------
|
||||||
|
|
||||||
@ -582,7 +569,7 @@ disaster recovery.
|
|||||||
::
|
::
|
||||||
|
|
||||||
# radosgw-admin zone modify --rgw-zone={zone-name} --master --default \
|
# radosgw-admin zone modify --rgw-zone={zone-name} --master --default \
|
||||||
--read-only=False
|
--read-only=false
|
||||||
|
|
||||||
2. Update the period to make the changes take effect.
|
2. Update the period to make the changes take effect.
|
||||||
|
|
||||||
@ -598,13 +585,13 @@ disaster recovery.
|
|||||||
|
|
||||||
If the former master zone recovers, revert the operation.
|
If the former master zone recovers, revert the operation.
|
||||||
|
|
||||||
1. From the recovered zone, pull the period from the current master
|
1. From the recovered zone, pull the latest realm configuration
|
||||||
zone.
|
from the current master zone.
|
||||||
|
|
||||||
::
|
::
|
||||||
|
|
||||||
# radosgw-admin period pull --url={url-to-master-zone-gateway} \
|
# radosgw-admin realm pull --url={url-to-master-zone-gateway} \
|
||||||
--access-key={access-key} --secret={secret}
|
--access-key={access-key} --secret={secret}
|
||||||
|
|
||||||
2. Make the recovered zone the master and default zone.
|
2. Make the recovered zone the master and default zone.
|
||||||
|
|
||||||
|
@ -147,6 +147,8 @@ format must be edited manually:
|
|||||||
$ vi user.json
|
$ vi user.json
|
||||||
$ radosgw-admin metadata put user:<user-id> < user.json
|
$ radosgw-admin metadata put user:<user-id> < user.json
|
||||||
|
|
||||||
|
.. _s3_bucket_placement:
|
||||||
|
|
||||||
S3 Bucket Placement
|
S3 Bucket Placement
|
||||||
-------------------
|
-------------------
|
||||||
|
|
||||||
|
@@ -22,6 +22,8 @@ placement groups for these pools. See
 `Pools <http://docs.ceph.com/docs/master/rados/operations/pools/#pools>`__
 for details on pool creation.

+.. _radosgw-pool-namespaces:
+
 Pool Namespaces
 ===============

@@ -7,8 +7,6 @@ PUT Bucket
 Creates a new bucket. To create a bucket, you must have a user ID and a valid AWS Access Key ID to authenticate requests. You may not
 create buckets as an anonymous user.

-.. note:: We do not support request entities for ``PUT /{bucket}`` in this release.
-
 Constraints
 ~~~~~~~~~~~
 In general, bucket names should follow domain name constraints.
@@ -37,6 +35,16 @@ Parameters
 | ``x-amz-acl`` | Canned ACLs. | ``private``, ``public-read``, ``public-read-write``, ``authenticated-read`` | No |
 +---------------+----------------------+-----------------------------------------------------------------------------+------------+

+Request Entities
+~~~~~~~~~~~~~~~~
+
++-------------------------------+-----------+----------------------------------------------------------------+
+| Name                          | Type      | Description                                                    |
++===============================+===========+================================================================+
+| ``CreateBucketConfiguration`` | Container | A container for the bucket configuration.                      |
++-------------------------------+-----------+----------------------------------------------------------------+
+| ``LocationConstraint``        | String    | A zonegroup api name, with optional :ref:`s3_bucket_placement` |
++-------------------------------+-----------+----------------------------------------------------------------+
+
 HTTP Response
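A sketch of exercising the now-documented request entity through the AWS CLI (endpoint, bucket and placement names are placeholders)::

    aws --endpoint-url http://rgw.example.com:7480 s3api create-bucket \
        --bucket mybucket \
        --create-bucket-configuration LocationConstraint=default-placement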
@@ -177,3 +177,34 @@ Also, check to ensure that the default site is disabled. ::

+
+Numerous objects in default.rgw.meta pool
+=========================================
+
+Clusters created prior to *jewel* have a metadata archival feature enabled by default, using the ``default.rgw.meta`` pool.
+This archive keeps all old versions of user and bucket metadata, resulting in large numbers of objects in the ``default.rgw.meta`` pool.
+
+Disabling the Metadata Heap
+---------------------------
+
+Users who want to disable this feature going forward should set the ``metadata_heap`` field to an empty string ``""``::
+
+  $ radosgw-admin zone get --rgw-zone=default > zone.json
+  [edit zone.json, setting "metadata_heap": ""]
+  $ radosgw-admin zone set --rgw-zone=default --infile=zone.json
+  $ radosgw-admin period update --commit
+
+This will stop new metadata from being written to the ``default.rgw.meta`` pool, but does not remove any existing objects or the pool itself.
+
+Cleaning the Metadata Heap Pool
+-------------------------------
+
+Clusters created prior to *jewel* normally use ``default.rgw.meta`` only for the metadata archival feature.
+
+However, from *luminous* onwards, radosgw uses :ref:`Pool Namespaces <radosgw-pool-namespaces>` within ``default.rgw.meta`` for an entirely different purpose, that is, to store ``user_keys`` and other critical metadata.
+
+Users should check the zone configuration before proceeding with any cleanup procedures::
+
+  $ radosgw-admin zone get --rgw-zone=default | grep default.rgw.meta
+  [should not match any strings]
+
+Having confirmed that the pool is not used for any purpose, users may safely delete all objects in the ``default.rgw.meta`` pool, or optionally, delete the entire pool itself.
@@ -171,7 +171,7 @@ edit`` to include the ``xmlns:qemu`` value. Then, add a ``qemu:commandline``
 block as a child of that domain. The following example shows how to set two
 devices with ``qemu id=`` to different ``discard_granularity`` values.

-.. code-block:: guess
+.. code-block:: xml

    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <qemu:commandline>
@@ -1,8 +1,8 @@
 git
 gcc
-python-dev
-python-pip
-python-virtualenv
+python3-dev
+python3-pip
+python3-virtualenv
 doxygen
 ditaa
 libxml2-dev
@@ -10,4 +10,4 @@ libxslt1-dev
 graphviz
 ant
 zlib1g-dev
-cython
+cython3
@@ -19,6 +19,8 @@ if test $(id -u) != 0 ; then
 fi
 export LC_ALL=C # the following is vulnerable to i18n

+ARCH=$(uname -m)
+
 function munge_ceph_spec_in {
     local OUTFILE=$1
     sed -e 's/@//g' -e 's/%bcond_with make_check/%bcond_without make_check/g' < ceph.spec.in > $OUTFILE
@@ -51,18 +53,49 @@ EOF
         --install /usr/bin/gcc gcc /usr/bin/gcc-${new} 20 \
         --slave   /usr/bin/g++ g++ /usr/bin/g++-${new}

-    $SUDO update-alternatives \
-        --install /usr/bin/gcc gcc /usr/bin/gcc-${old} 10 \
-        --slave   /usr/bin/g++ g++ /usr/bin/g++-${old}
+    if [ -f /usr/bin/g++-${old} ]; then
+        $SUDO update-alternatives \
+            --install /usr/bin/gcc gcc /usr/bin/gcc-${old} 10 \
+            --slave   /usr/bin/g++ g++ /usr/bin/g++-${old}
+    fi

     $SUDO update-alternatives --auto gcc

     # cmake uses the latter by default
-    $SUDO ln -nsf /usr/bin/gcc /usr/bin/x86_64-linux-gnu-gcc
-    $SUDO ln -nsf /usr/bin/g++ /usr/bin/x86_64-linux-gnu-g++
+    $SUDO ln -nsf /usr/bin/gcc /usr/bin/${ARCH}-linux-gnu-gcc
+    $SUDO ln -nsf /usr/bin/g++ /usr/bin/${ARCH}-linux-gnu-g++
 }

-if [ x`uname`x = xFreeBSDx ]; then
+function version_lt {
+    test $1 != $(echo -e "$1\n$2" | sort -rV | head -n 1)
+}
+
+function ensure_decent_gcc_on_rh {
+    local old=$(gcc -dumpversion)
+    local expected=5.1
+    local dts_ver=$1
+    if version_lt $old $expected; then
+        if test -t 1; then
+            # interactive shell
+            cat <<EOF
+Your GCC is too old. Please run following command to add DTS to your environment:
+
+scl enable devtoolset-7 bash
+
+Or add following line to the end of ~/.bashrc to add it permanently:
+
+source scl_source enable devtoolset-7
+
+see https://www.softwarecollections.org/en/scls/rhscl/devtoolset-7/ for more details.
+EOF
+        else
+            # non-interactive shell
+            source /opt/rh/devtoolset-$dts_ver/enable
+        fi
+    fi
+}
+
+if [ x$(uname)x = xFreeBSDx ]; then
     $SUDO pkg install -yq \
         devel/babeltrace \
         devel/git \
@@ -114,7 +147,7 @@ if [ x`uname`x = xFreeBSDx ]; then
     exit
 else
     source /etc/os-release
-    case $ID in
+    case "$ID" in
     debian|ubuntu|devuan)
         echo "Using apt-get to install dependencies"
         $SUDO apt-get install -y lsb-release devscripts equivs
@@ -135,11 +168,11 @@ else

         backports=""
         control="debian/control"
-        case $(lsb_release -sc) in
-        squeeze|wheezy)
+        case "$VERSION" in
+        *squeeze*|*wheezy*)
             control="/tmp/control.$$"
             grep -v babeltrace debian/control > $control
-            backports="-t $(lsb_release -sc)-backports"
+            backports="-t $codename-backports"
             ;;
         esac

@@ -152,47 +185,69 @@ else
         ;;
     centos|fedora|rhel|ol|virtuozzo)
         yumdnf="yum"
-        builddepcmd="yum-builddep -y"
+        builddepcmd="yum-builddep -y --setopt=*.skip_if_unavailable=true"
         if test "$(echo "$VERSION_ID >= 22" | bc)" -ne 0; then
             yumdnf="dnf"
             builddepcmd="dnf -y builddep --allowerasing"
         fi
         echo "Using $yumdnf to install dependencies"
-        $SUDO $yumdnf install -y redhat-lsb-core
-        case $(lsb_release -si) in
-        Fedora)
+        if [ "$ID" = "centos" -a "$ARCH" = "aarch64" ]; then
+            $SUDO yum-config-manager --disable centos-sclo-sclo || true
+            $SUDO yum-config-manager --disable centos-sclo-rh || true
+            $SUDO yum remove centos-release-scl || true
+        fi
+
+        case "$ID" in
+        fedora)
             if test $yumdnf = yum; then
                 $SUDO $yumdnf install -y yum-utils
             fi
             ;;
-        CentOS|RedHatEnterpriseServer|VirtuozzoLinux)
+        centos|rhel|ol|virtuozzo)
+            MAJOR_VERSION="$(echo $VERSION_ID | cut -d. -f1)"
             $SUDO yum install -y yum-utils
-            MAJOR_VERSION=$(lsb_release -rs | cut -f1 -d.)
-            if test $(lsb_release -si) = RedHatEnterpriseServer ; then
-                $SUDO yum install subscription-manager
-                $SUDO subscription-manager repos --enable=rhel-$MAJOR_VERSION-server-optional-rpms
+            if test $ID = rhel ; then
+                $SUDO yum-config-manager --enable rhel-$MAJOR_VERSION-server-optional-rpms
             fi
-            $SUDO yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/$MAJOR_VERSION/x86_64/
-            $SUDO yum install --nogpgcheck -y epel-release
+            rpm --quiet --query epel-release || \
+                $SUDO yum -y install --nogpgcheck https://dl.fedoraproject.org/pub/epel/epel-release-latest-$MAJOR_VERSION.noarch.rpm
             $SUDO rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-$MAJOR_VERSION
             $SUDO rm -f /etc/yum.repos.d/dl.fedoraproject.org*
-            if test $(lsb_release -si) = CentOS -a $MAJOR_VERSION = 7 ; then
-                $SUDO yum-config-manager --enable cr
-            fi
-            if test $(lsb_release -si) = VirtuozzoLinux -a $MAJOR_VERSION = 7 ; then
-                $SUDO yum-config-manager --enable cr
+            if test $ID = centos -a $MAJOR_VERSION = 7 ; then
+                $SUDO $yumdnf install -y python36-devel
+                case "$ARCH" in
+                x86_64)
+                    $SUDO yum -y install centos-release-scl
+                    dts_ver=7
+                    ;;
+                aarch64)
+                    $SUDO yum -y install centos-release-scl-rh
+                    $SUDO yum-config-manager --disable centos-sclo-rh
+                    $SUDO yum-config-manager --enable centos-sclo-rh-testing
+                    dts_ver=7
+                    ;;
+                esac
+            elif test $ID = rhel -a $MAJOR_VERSION = 7 ; then
+                $SUDO yum-config-manager --enable rhel-server-rhscl-7-rpms
+                dts_ver=7
             fi
             ;;
         esac
         munge_ceph_spec_in $DIR/ceph.spec
+        $SUDO $yumdnf install -y \*rpm-macros
         $SUDO $builddepcmd $DIR/ceph.spec 2>&1 | tee $DIR/yum-builddep.out
+        [ ${PIPESTATUS[0]} -ne 0 ] && exit 1
+        if [ -n "$dts_ver" ]; then
+            ensure_decent_gcc_on_rh $dts_ver
+        fi
         ! grep -q -i error: $DIR/yum-builddep.out || exit 1
         ;;
     opensuse*|suse|sles)
         echo "Using zypper to install dependencies"
-        $SUDO zypper --gpg-auto-import-keys --non-interactive install lsb-release systemd-rpm-macros
+        zypp_install="zypper --gpg-auto-import-keys --non-interactive install --no-recommends"
+        $SUDO $zypp_install systemd-rpm-macros
         munge_ceph_spec_in $DIR/ceph.spec
-        $SUDO zypper --non-interactive install $(rpmspec -q --buildrequires $DIR/ceph.spec) || exit 1
+        $SUDO $zypp_install $(rpmspec -q --buildrequires $DIR/ceph.spec) || exit 1
         ;;
     alpine)
         # for now we need the testing repo for leveldb
@@ -219,8 +274,7 @@ function populate_wheelhouse() {

     # although pip comes with virtualenv, having a recent version
     # of pip matters when it comes to using wheel packages
-    # workaround of https://github.com/pypa/setuptools/issues/1042
-    pip --timeout 300 $install 'setuptools >= 0.8,< 36' 'pip >= 7.0' 'wheel >= 0.24' || return 1
+    pip --timeout 300 $install 'setuptools >= 0.8' 'pip >= 7.0' 'wheel >= 0.24' || return 1
     if test $# != 0 ; then
         pip --timeout 300 $install $@ || return 1
     fi
@@ -236,6 +290,9 @@ function activate_virtualenv() {
         # because CentOS 7 has a buggy old version (v1.10.1)
         # https://github.com/pypa/virtualenv/issues/463
         virtualenv ${env_dir}_tmp
+        # install setuptools before upgrading virtualenv, as the latter needs
+        # a recent setuptools for setup commands like `extras_require`
+        ${env_dir}_tmp/bin/pip install --upgrade setuptools
         ${env_dir}_tmp/bin/pip install --upgrade virtualenv
         ${env_dir}_tmp/bin/virtualenv --python $interpreter $env_dir
         rm -rf ${env_dir}_tmp
@@ -264,6 +321,12 @@ find . -name tox.ini | while read ini ; do
     (
         cd $(dirname $ini)
         require=$(ls *requirements.txt 2>/dev/null | sed -e 's/^/-r /')
+        md5=wheelhouse/md5
+        if test "$require"; then
+            if ! test -f $md5 || ! md5sum -c $md5 ; then
+                rm -rf wheelhouse
+            fi
+        fi
         if test "$require" && ! test -d wheelhouse ; then
             for interpreter in python2.7 python3 ; do
                 type $interpreter > /dev/null 2>&1 || continue
@@ -271,6 +334,7 @@ find . -name tox.ini | while read ini ; do
                 populate_wheelhouse "wheel -w $wip_wheelhouse" $require || exit 1
             done
             mv $wip_wheelhouse wheelhouse
+            md5sum *requirements.txt > $md5
         fi
     )
 done
@@ -1,3 +1,9 @@
 tasks:
 - install:
+    extra_packages:
+      - python3-cephfs
+    # For kernel_untar_build workunit
+    extra_system_packages:
+      deb: ['bison', 'flex', 'libelf-dev', 'libssl-dev']
+      rpm: ['bison', 'flex', 'elfutils-libelf-devel', 'openssl-devel']
 - ceph:
ceph/qa/distros/all/centos_7.6.yaml (new file, 3 lines)
@@ -0,0 +1,3 @@
+os_type: centos
+os_version: "7.6"

ceph/qa/distros/all/rhel_7.6.yaml (new file, 3 lines)
@@ -0,0 +1,3 @@
+os_type: rhel
+os_version: "7.6"
@@ -30,8 +30,6 @@ ceph:
     - rbd-fuse-dbg
     - rbd-mirror-dbg
     - rbd-nbd-dbg
-    - python3-cephfs
-    - python3-rados
   rpm:
     - ceph-radosgw
     - ceph-test
@@ -45,5 +43,3 @@ ceph:
     - python-ceph
     - rbd-fuse
     - ceph-debuginfo
-    - python34-cephfs
-    - python34-rados
@@ -491,15 +491,15 @@ function test_run_mon() {
     setup $dir || return 1

     run_mon $dir a --mon-initial-members=a || return 1
-    create_rbd_pool || return 1
-    # rbd has not been deleted / created, hence it has pool id 0
-    ceph osd dump | grep "pool 1 'rbd'" || return 1
+    ceph mon dump | grep "mon.a" || return 1
     kill_daemons $dir || return 1

-    run_mon $dir a || return 1
+    run_mon $dir a --osd_pool_default_size=3 || return 1
+    run_osd $dir 0 || return 1
+    run_osd $dir 1 || return 1
+    run_osd $dir 2 || return 1
     create_rbd_pool || return 1
-    # rbd has been deleted / created, hence it does not have pool id 0
-    ! ceph osd dump | grep "pool 1 'rbd'" || return 1
+    ceph osd dump | grep "pool 1 'rbd'" || return 1
     local size=$(CEPH_ARGS='' ceph --format=json daemon $(get_asok_path mon.a) \
         config get osd_pool_default_size)
     test "$size" = '{"osd_pool_default_size":"3"}' || return 1
@@ -563,6 +563,7 @@ function run_mgr() {
         --admin-socket=$(get_asok_path) \
         --run-dir=$dir \
         --pid-file=$dir/\$name.pid \
+        --mgr-module-path=$(realpath ${CEPH_ROOT}/src/pybind/mgr) \
         "$@" || return 1
 }

@@ -1460,11 +1461,12 @@ function test_wait_for_clean() {
     local dir=$1

     setup $dir || return 1
-    run_mon $dir a --osd_pool_default_size=1 || return 1
+    run_mon $dir a --osd_pool_default_size=2 || return 1
+    run_osd $dir 0 || return 1
     run_mgr $dir x || return 1
     create_rbd_pool || return 1
     ! TIMEOUT=1 wait_for_clean || return 1
-    run_osd $dir 0 || return 1
+    run_osd $dir 1 || return 1
     wait_for_clean || return 1
     teardown $dir || return 1
 }
@@ -1507,12 +1509,20 @@ function test_wait_for_health_ok() {
     local dir=$1

     setup $dir || return 1
-    run_mon $dir a --osd_pool_default_size=1 --osd_failsafe_full_ratio=.99 --mon_pg_warn_min_per_osd=0 || return 1
+    run_mon $dir a --osd_failsafe_full_ratio=.99 --mon_pg_warn_min_per_osd=0 || return 1
     run_mgr $dir x --mon_pg_warn_min_per_osd=0 || return 1
+    # start osd_pool_default_size OSDs
     run_osd $dir 0 || return 1
+    run_osd $dir 1 || return 1
+    run_osd $dir 2 || return 1
     kill_daemons $dir TERM osd || return 1
+    ceph osd down 0 || return 1
+    # expect TOO_FEW_OSDS warning
     ! TIMEOUT=1 wait_for_health_ok || return 1
+    # resurrect all OSDs
     activate_osd $dir 0 || return 1
+    activate_osd $dir 1 || return 1
+    activate_osd $dir 2 || return 1
     wait_for_health_ok || return 1
     teardown $dir || return 1
 }
@@ -1878,7 +1888,7 @@ function test_flush_pg_stats()
     local jq_filter='.pools | .[] | select(.name == "rbd") | .stats'
     raw_bytes_used=`ceph df detail --format=json | jq "$jq_filter.raw_bytes_used"`
     bytes_used=`ceph df detail --format=json | jq "$jq_filter.bytes_used"`
-    test $raw_bytes_used > 0 || return 1
+    test $raw_bytes_used -gt 0 || return 1
     test $raw_bytes_used == $bytes_used || return 1
     teardown $dir
 }
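
The `-gt` fix above is worth spelling out: inside `test`, a bare `>` is not a numeric comparison but a shell redirection, so the old line ran `test $raw_bytes_used` with stdout redirected to a file named `0` and succeeded even for a zero value. A minimal standalone sketch of the difference (hypothetical value):

    raw_bytes_used=0
    test $raw_bytes_used > 0 && echo "wrongly passes, and creates a file named '0'"
    test $raw_bytes_used -gt 0 || echo "correctly fails for a zero value"
    rm -f 0  # remove the stray redirection target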
@@ -308,7 +308,7 @@ function TEST_chunk_mapping() {

     ceph osd erasure-code-profile set remap-profile \
         plugin=lrc \
-        layers='[ [ "_DD", "" ] ]' \
+        layers='[ [ "cDD", "" ] ]' \
         mapping='_DD' \
         crush-steps='[ [ "choose", "osd", 0 ] ]' || return 1
     ceph osd erasure-code-profile get remap-profile
@@ -26,13 +26,14 @@ function run() {
     export CEPH_ARGS
     CEPH_ARGS+="--fsid=$(uuidgen) --auth-supported=none "
     CEPH_ARGS+="--mon-host=$CEPH_MON "
+    CEPH_ARGS+="--osd-objectstore=filestore "

     local funcs=${@:-$(set | sed -n -e 's/^\(TEST_[0-9a-z_]*\) .*/\1/p')}
     for func in $funcs ; do
         setup $dir || return 1
         run_mon $dir a || return 1
         run_mgr $dir x || return 1
-        create_rbd_pool || return 1
+        create_pool rbd 4 || return 1

         # check that erasure code plugins are preloaded
         CEPH_ARGS='' ceph --admin-daemon $(get_asok_path mon.a) log flush || return 1
ceph/qa/standalone/mgr/balancer.sh (new executable file, 209 lines)
@@ -0,0 +1,209 @@
+#!/usr/bin/env bash
+#
+# Copyright (C) 2019 Red Hat <contact@redhat.com>
+#
+# Author: David Zafman <dzafman@redhat.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU Library Public License as published by
+# the Free Software Foundation; either version 2, or (at your option)
+# any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Library Public License for more details.
+#
+source $CEPH_ROOT/qa/standalone/ceph-helpers.sh
+
+function run() {
+    local dir=$1
+    shift
+
+    export CEPH_MON="127.0.0.1:7102" # git grep '\<7102\>' : there must be only one
+    export CEPH_ARGS
+    CEPH_ARGS+="--fsid=$(uuidgen) --auth-supported=none "
+    CEPH_ARGS+="--mon-host=$CEPH_MON "
+
+    local funcs=${@:-$(set | sed -n -e 's/^\(TEST_[0-9a-z_]*\) .*/\1/p')}
+    for func in $funcs ; do
+        $func $dir || return 1
+    done
+}
+
+TEST_POOL1=test1
+TEST_POOL2=test2
+
+function TEST_balancer() {
+    local dir=$1
+
+    setup $dir || return 1
+    run_mon $dir a || return 1
+    run_mgr $dir x || return 1
+    run_osd $dir 0 || return 1
+    run_osd $dir 1 || return 1
+    run_osd $dir 2 || return 1
+    create_pool $TEST_POOL1 8
+    create_pool $TEST_POOL2 8
+
+    wait_for_clean || return 1
+
+    ceph pg dump pgs
+    ceph osd set-require-min-compat-client luminous
+    ceph balancer status || return 1
+    eval MODE=$(ceph balancer status | jq '.mode')
+    test $MODE = "none" || return 1
+    ACTIVE=$(ceph balancer status | jq '.active')
+    test $ACTIVE = "false" || return 1
+
+    ceph balancer ls || return 1
+    PLANS=$(ceph balancer ls)
+    test "$PLANS" = "[]" || return 1
+    ceph balancer eval || return 1
+    EVAL="$(ceph balancer eval)"
+    test "$EVAL" = "current cluster score 0.000000 (lower is better)"
+    ceph balancer eval-verbose || return 1
+
+    ceph balancer mode crush-compat || return 1
+    ceph balancer status || return 1
+    eval MODE=$(ceph balancer status | jq '.mode')
+    test $MODE = "crush-compat" || return 1
+    ! ceph balancer optimize plan_crush $TEST_POOL1 || return 1
+    ceph balancer status || return 1
+    eval RESULT=$(ceph balancer status | jq '.optimize_result')
+    test "$RESULT" = "Distribution is already perfect" || return 1
+
+    ceph balancer on || return 1
+    ACTIVE=$(ceph balancer status | jq '.active')
+    test $ACTIVE = "true" || return 1
+    sleep 2
+    ceph balancer status || return 1
+    ceph balancer off || return 1
+    ACTIVE=$(ceph balancer status | jq '.active')
+    test $ACTIVE = "false" || return 1
+    sleep 2
+
+    ceph balancer reset || return 1
+
+    ceph balancer mode upmap || return 1
+    ceph balancer status || return 1
+    eval MODE=$(ceph balancer status | jq '.mode')
+    test $MODE = "upmap" || return 1
+    ! ceph balancer optimize plan_upmap $TEST_POOL || return 1
+    ceph balancer status || return 1
+    eval RESULT=$(ceph balancer status | jq '.optimize_result')
+    test "$RESULT" = "Unable to find further optimization, or distribution is already perfect" || return 1
+
+    ceph balancer on || return 1
+    ACTIVE=$(ceph balancer status | jq '.active')
+    test $ACTIVE = "true" || return 1
+    sleep 2
+    ceph balancer status || return 1
+    ceph balancer off || return 1
+    ACTIVE=$(ceph balancer status | jq '.active')
+    test $ACTIVE = "false" || return 1
+
+    teardown $dir || return 1
+}
+
+function TEST_balancer2() {
+    local dir=$1
+    TEST_PGS1=118
+    TEST_PGS2=132
+    TOTAL_PGS=$(expr $TEST_PGS1 + $TEST_PGS2)
+    OSDS=5
+    DEFAULT_REPLICAS=3
+    # Integer average of PGS per OSD (70.8), so each OSD >= this
+    FINAL_PER_OSD1=$(expr \( $TEST_PGS1 \* $DEFAULT_REPLICAS \) / $OSDS)
+    # Integer average of PGS per OSD (150)
+    FINAL_PER_OSD2=$(expr \( \( $TEST_PGS1 + $TEST_PGS2 \) \* $DEFAULT_REPLICAS \) / $OSDS)
+
+    CEPH_ARGS+="--debug_osd=20 "
+    setup $dir || return 1
+    run_mon $dir a || return 1
+    # Must do this before starting ceph-mgr
+    ceph config-key set mgr/balancer/upmap_max_deviation 1
+    run_mgr $dir x || return 1
+    for i in $(seq 0 $(expr $OSDS - 1))
+    do
+        run_osd $dir $i || return 1
+    done
+
+    ceph osd set-require-min-compat-client luminous
+    ceph balancer mode upmap || return 1
+    ceph balancer on || return 1
+    ceph balancer sleep 5
+
+    create_pool $TEST_POOL1 $TEST_PGS1
+
+    wait_for_clean || return 1
+
+    # Wait up to 2 minutes
+    OK=no
+    for i in $(seq 1 25)
+    do
+        sleep 5
+        if grep -q "Optimization plan is almost perfect" $dir/mgr.x.log
+        then
+            OK=yes
+            break
+        fi
+    done
+    test $OK = "yes" || return 1
+    # Plan is found, but PGs still need to move
+    sleep 30
+    ceph osd df
+
+    PGS=$(ceph osd df --format=json-pretty | jq '.nodes[0].pgs')
+    test $PGS -ge $FINAL_PER_OSD1 || return 1
+    PGS=$(ceph osd df --format=json-pretty | jq '.nodes[1].pgs')
+    test $PGS -ge $FINAL_PER_OSD1 || return 1
+    PGS=$(ceph osd df --format=json-pretty | jq '.nodes[2].pgs')
+    test $PGS -ge $FINAL_PER_OSD1 || return 1
+    PGS=$(ceph osd df --format=json-pretty | jq '.nodes[3].pgs')
+    test $PGS -ge $FINAL_PER_OSD1 || return 1
+    PGS=$(ceph osd df --format=json-pretty | jq '.nodes[4].pgs')
+    test $PGS -ge $FINAL_PER_OSD1 || return 1
+
+    create_pool $TEST_POOL2 $TEST_PGS2
+
+    # Wait up to 2 minutes
+    OK=no
+    for i in $(seq 1 25)
+    do
+        sleep 5
+        COUNT=$(grep "Optimization plan is almost perfect" $dir/mgr.x.log | wc -l)
+        if test $COUNT = "2"
+        then
+            OK=yes
+            break
+        fi
+    done
+    test $OK = "yes" || return 1
+    # Plan is found, but PGs still need to move
+    sleep 30
+    ceph osd df
+
+    # We should be within plus or minus 1 of FINAL_PER_OSD2
+    # This is because here each pool is balanced independently
+    MIN=$(expr $FINAL_PER_OSD2 - 1)
+    MAX=$(expr $FINAL_PER_OSD2 + 1)
+    PGS=$(ceph osd df --format=json-pretty | jq '.nodes[0].pgs')
+    test $PGS -ge $MIN -a $PGS -le $MAX || return 1
+    PGS=$(ceph osd df --format=json-pretty | jq '.nodes[1].pgs')
+    test $PGS -ge $MIN -a $PGS -le $MAX || return 1
+    PGS=$(ceph osd df --format=json-pretty | jq '.nodes[2].pgs')
+    test $PGS -ge $MIN -a $PGS -le $MAX || return 1
+    PGS=$(ceph osd df --format=json-pretty | jq '.nodes[3].pgs')
+    test $PGS -ge $MIN -a $PGS -le $MAX || return 1
+    PGS=$(ceph osd df --format=json-pretty | jq '.nodes[4].pgs')
+    test $PGS -ge $MIN -a $PGS -le $MAX || return 1
+
+    teardown $dir || return 1
+}
+
+main balancer "$@"
+
+# Local Variables:
+# compile-command: "make -j4 && ../qa/run-standalone.sh balancer.sh"
+# End:
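
For orientation, the commands this new test drives are the same ones an operator would use with the Luminous mgr balancer module; a minimal sketch of the workflow (jq filtering as in the test, and assuming a healthy cluster):

    ceph osd set-require-min-compat-client luminous  # upmap mode needs luminous+ clients
    ceph balancer mode upmap                         # or crush-compat for older clients
    ceph balancer on                                 # let the mgr module optimize continuously
    ceph balancer status | jq '.mode, .active, .optimize_result'
    ceph balancer off                                # stop automatic optimization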
ceph/qa/standalone/misc/network-ping.sh (new executable file, 145 lines)
@@ -0,0 +1,145 @@
+#!/usr/bin/env bash
+
+source $CEPH_ROOT/qa/standalone/ceph-helpers.sh
+
+function run() {
+    local dir=$1
+    shift
+
+    export CEPH_MON="127.0.0.1:7146" # git grep '\<7146\>' : there must be only one
+    export CEPH_ARGS
+    CEPH_ARGS+="--fsid=$(uuidgen) --auth-supported=none "
+    CEPH_ARGS+="--mon-host=$CEPH_MON "
+    CEPH_ARGS+="--debug_disable_randomized_ping=true "
+    CEPH_ARGS+="--debug_heartbeat_testing_span=5 "
+    CEPH_ARGS+="--osd_heartbeat_interval=1 "
+    local funcs=${@:-$(set | sed -n -e 's/^\(TEST_[0-9a-z_]*\) .*/\1/p')}
+    for func in $funcs ; do
+        setup $dir || return 1
+        $func $dir || return 1
+        teardown $dir || return 1
+    done
+}
+
+function TEST_network_ping_test1() {
+    local dir=$1
+
+    run_mon $dir a || return 1
+    run_mgr $dir x || return 1
+    run_osd $dir 0 || return 1
+    run_osd $dir 1 || return 1
+    run_osd $dir 2 || return 1
+
+    sleep 5
+
+    create_pool foo 16
+
+    # write some objects
+    timeout 20 rados bench -p foo 10 write -b 4096 --no-cleanup || return 1
+
+    # Get 1 cycle worth of ping data "1 minute"
+    sleep 10
+    flush_pg_stats
+
+    CEPH_ARGS='' ceph daemon $(get_asok_path osd.0) dump_osd_network | tee $dir/json
+    test "$(cat $dir/json | jq '.entries | length')" = "0" || return 1
+    test "$(cat $dir/json | jq '.threshold')" = "1000" || return 1
+
+    CEPH_ARGS='' ceph daemon $(get_asok_path mgr.x) dump_osd_network | tee $dir/json
+    test "$(cat $dir/json | jq '.entries | length')" = "0" || return 1
+    test "$(cat $dir/json | jq '.threshold')" = "1000" || return 1
+
+    CEPH_ARGS='' ceph daemon $(get_asok_path osd.0) dump_osd_network 0 | tee $dir/json
+    test "$(cat $dir/json | jq '.entries | length')" = "4" || return 1
+    test "$(cat $dir/json | jq '.threshold')" = "0" || return 1
+
+    CEPH_ARGS='' ceph daemon $(get_asok_path mgr.x) dump_osd_network 0 | tee $dir/json
+    test "$(cat $dir/json | jq '.entries | length')" = "12" || return 1
+    test "$(cat $dir/json | jq '.threshold')" = "0" || return 1
+
+    # Wait another 4 cycles to get "5 minute interval"
+    sleep 20
+    flush_pg_stats
+    CEPH_ARGS='' ceph daemon $(get_asok_path osd.0) dump_osd_network | tee $dir/json
+    test "$(cat $dir/json | jq '.entries | length')" = "0" || return 1
+    test "$(cat $dir/json | jq '.threshold')" = "1000" || return 1
+
+    CEPH_ARGS='' ceph daemon $(get_asok_path mgr.x) dump_osd_network | tee $dir/json
+    test "$(cat $dir/json | jq '.entries | length')" = "0" || return 1
+    test "$(cat $dir/json | jq '.threshold')" = "1000" || return 1
+
+    CEPH_ARGS='' ceph daemon $(get_asok_path osd.0) dump_osd_network 0 | tee $dir/json
+    test "$(cat $dir/json | jq '.entries | length')" = "4" || return 1
+    test "$(cat $dir/json | jq '.threshold')" = "0" || return 1
+
+    CEPH_ARGS='' ceph daemon $(get_asok_path mgr.x) dump_osd_network 0 | tee $dir/json
+    test "$(cat $dir/json | jq '.entries | length')" = "12" || return 1
+    test "$(cat $dir/json | jq '.threshold')" = "0" || return 1
+
+
+    # Wait another 10 cycles to get "15 minute interval"
+    sleep 50
+    flush_pg_stats
+    CEPH_ARGS='' ceph daemon $(get_asok_path osd.0) dump_osd_network | tee $dir/json
+    test "$(cat $dir/json | jq '.entries | length')" = "0" || return 1
+    test "$(cat $dir/json | jq '.threshold')" = "1000" || return 1
+
+    CEPH_ARGS='' ceph daemon $(get_asok_path mgr.x) dump_osd_network | tee $dir/json
+    test "$(cat $dir/json | jq '.entries | length')" = "0" || return 1
+    test "$(cat $dir/json | jq '.threshold')" = "1000" || return 1
+
+    CEPH_ARGS='' ceph daemon $(get_asok_path osd.0) dump_osd_network 0 | tee $dir/json
+    test "$(cat $dir/json | jq '.entries | length')" = "4" || return 1
+    test "$(cat $dir/json | jq '.threshold')" = "0" || return 1
+
+    CEPH_ARGS='' ceph daemon $(get_asok_path mgr.x) dump_osd_network 0 | tee $dir/json
+    test "$(cat $dir/json | jq '.entries | length')" = "12" || return 1
+    test "$(cat $dir/json | jq '.threshold')" = "0" || return 1
+
+    # Just check the threshold output matches the input
+    CEPH_ARGS='' ceph daemon $(get_asok_path mgr.x) dump_osd_network 99 | tee $dir/json
+    test "$(cat $dir/json | jq '.threshold')" = "99" || return 1
+    CEPH_ARGS='' ceph daemon $(get_asok_path osd.0) dump_osd_network 98 | tee $dir/json
+    test "$(cat $dir/json | jq '.threshold')" = "98" || return 1
+
+    rm -f $dir/json
+}
+
+# Test setting of mon_warn_on_slow_ping_time very low to
+# get health warning
+function TEST_network_ping_test2() {
+    local dir=$1
+
+    export CEPH_ARGS
+    export EXTRA_OPTS+=" --mon_warn_on_slow_ping_time=0.001"
+    run_mon $dir a || return 1
+    run_mgr $dir x || return 1
+    run_osd $dir 0 || return 1
+    run_osd $dir 1 || return 1
+    run_osd $dir 2 || return 1
+
+    sleep 5
+
+    create_pool foo 16
+
+    # write some objects
+    timeout 20 rados bench -p foo 10 write -b 4096 --no-cleanup || return 1
+
+    # Get at least 1 cycle of ping data (this test runs with 5 second cycles of 1 second pings)
+    sleep 10
+    flush_pg_stats
+
+    ceph health | tee $dir/health
+    grep -q "Long heartbeat" $dir/health || return 1
+
+    ceph health detail | tee $dir/health
+    grep -q "OSD_SLOW_PING_TIME_BACK" $dir/health || return 1
+    grep -q "OSD_SLOW_PING_TIME_FRONT" $dir/health || return 1
+    rm -f $dir/health
+}
+
+main network-ping "$@"
+
+# Local Variables:
+# compile-command: "cd ../.. ; make -j4 && ../qa/run-standalone.sh network-ping.sh"
+# End:
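
The `dump_osd_network` admin-socket command exercised above can also be queried by hand; a sketch, assuming default admin socket paths on the daemon's own host (the optional trailing argument overrides the default 1000 ms reporting threshold):

    # entries slower than the default 1000 ms threshold (usually none)
    ceph daemon osd.0 dump_osd_network | jq '.threshold, (.entries | length)'
    # a threshold of 0 dumps a ping-time entry for every heartbeat peer
    ceph daemon osd.0 dump_osd_network 0 | jq '.entries | length'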
@@ -39,7 +39,6 @@ function TEST_osd_pool_get_set() {

     setup $dir || return 1
     run_mon $dir a || return 1
-    create_rbd_pool || return 1
     create_pool $TEST_POOL 8

     local flag
@@ -210,7 +210,7 @@ function TEST_crush_rename_bucket() {

 function TEST_crush_reject_empty() {
     local dir=$1
-    run_mon $dir a || return 1
+    run_mon $dir a --osd_pool_default_size=1 || return 1
     # should have at least one OSD
     run_osd $dir 0 || return 1
     create_rbd_pool || return 1
@@ -213,6 +213,7 @@ function TEST_pool_create_rep_expected_num_objects() {
     setup $dir || return 1

     # disable pg dir merge
+    CEPH_ARGS+="--osd-objectstore=filestore"
     export CEPH_ARGS
     run_mon $dir a || return 1
     run_osd $dir 0 || return 1
ceph/qa/standalone/osd/osd-backfill-recovery-log.sh (new executable file, 135 lines)
@@ -0,0 +1,135 @@
+#!/usr/bin/env bash
+#
+# Copyright (C) 2019 Red Hat <contact@redhat.com>
+#
+# Author: David Zafman <dzafman@redhat.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU Library Public License as published by
+# the Free Software Foundation; either version 2, or (at your option)
+# any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Library Public License for more details.
+#
+
+source $CEPH_ROOT/qa/standalone/ceph-helpers.sh
+
+function run() {
+    local dir=$1
+    shift
+
+    # Fix port????
+    export CEPH_MON="127.0.0.1:7129" # git grep '\<7129\>' : there must be only one
+    export CEPH_ARGS
+    CEPH_ARGS+="--fsid=$(uuidgen) --auth-supported=none "
+    CEPH_ARGS+="--mon-host=$CEPH_MON --osd_max_backfills=1 --debug_reserver=20 "
+
+    local funcs=${@:-$(set | sed -n -e 's/^\(TEST_[0-9a-z_]*\) .*/\1/p')}
+    for func in $funcs ; do
+        setup $dir || return 1
+        $func $dir || return 1
+        teardown $dir || return 1
+    done
+}
+
+
+function _common_test() {
+    local dir=$1
+    local extra_opts="$2"
+    local loglen="$3"
+    local dupslen="$4"
+    local objects="$5"
+    local moreobjects=${6:-0}
+
+    local OSDS=6
+
+    run_mon $dir a || return 1
+    run_mgr $dir x || return 1
+    export CEPH_ARGS
+
+    for osd in $(seq 0 $(expr $OSDS - 1))
+    do
+        run_osd $dir $osd $extra_opts || return 1
+    done
+
+    create_pool test 1 1
+
+    for j in $(seq 1 $objects)
+    do
+        rados -p test put obj-${j} /etc/passwd
+    done
+
+    # Mark out all OSDs for this pool
+    ceph osd out $(ceph pg dump pgs --format=json | jq '.[0].up[]')
+    if [ "$moreobjects" != "0" ]; then
+        for j in $(seq 1 $moreobjects)
+        do
+            rados -p test put obj-more-${j} /etc/passwd
+        done
+    fi
+    sleep 1
+    wait_for_clean
+
+    newprimary=$(ceph pg dump pgs --format=json | jq '.[0].up_primary')
+    kill_daemons
+
+    ERRORS=0
+    _objectstore_tool_nodown $dir $newprimary --no-mon-config --pgid 1.0 --op log | tee $dir/result.log
+    LOGLEN=$(jq '.pg_log_t.log | length' $dir/result.log)
+    if [ $LOGLEN != "$loglen" ]; then
+        echo "FAILED: Wrong log length got $LOGLEN (expected $loglen)"
+        ERRORS=$(expr $ERRORS + 1)
+    fi
+    DUPSLEN=$(jq '.pg_log_t.dups | length' $dir/result.log)
+    if [ $DUPSLEN != "$dupslen" ]; then
+        echo "FAILED: Wrong dups length got $DUPSLEN (expected $dupslen)"
+        ERRORS=$(expr $ERRORS + 1)
+    fi
+    grep "copy_up_to\|copy_after" $dir/osd.*.log
+    rm -f $dir/result.log
+    if [ $ERRORS != "0" ]; then
+        echo TEST FAILED
+        return 1
+    fi
+}
+
+
+# Cause copy_up_to() to only partially copy logs, copy additional dups, and trim dups
+function TEST_backfill_log_1() {
+    local dir=$1
+
+    _common_test $dir "--osd_min_pg_log_entries=1 --osd_max_pg_log_entries=2 --osd_pg_log_dups_tracked=10" 1 9 150
+}
+
+
+# Cause copy_up_to() to only partially copy logs, copy additional dups
+function TEST_backfill_log_2() {
+    local dir=$1
+
+    _common_test $dir "--osd_min_pg_log_entries=1 --osd_max_pg_log_entries=2" 1 149 150
+}
+
+
+# Cause copy_after() to only copy logs, no dups
+function TEST_recovery_1() {
+    local dir=$1
+
+    _common_test $dir "--osd_min_pg_log_entries=50 --osd_max_pg_log_entries=50 --osd_pg_log_dups_tracked=60 --osd_pg_log_trim_min=10" 40 0 40
+}
+
+
+# Cause copy_after() to copy logs with dups
+function TEST_recovery_2() {
+    local dir=$1
+
+    _common_test $dir "--osd_min_pg_log_entries=150 --osd_max_pg_log_entries=150 --osd_pg_log_dups_tracked=3000 --osd_pg_log_trim_min=10" 151 10 141 20
+}
+
+main osd-backfill-recovery-log "$@"
+
+# Local Variables:
+# compile-command: "make -j4 && ../qa/run-standalone.sh osd-backfill-recovery-log.sh"
+# End:
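
The knobs `_common_test` passes through to `run_osd` are ordinary OSD options, and the resulting pg log can be inspected offline the same way the test does; a hedged sketch mirroring TEST_backfill_log_1's values (paths are the standalone test's own, and a filestore OSD may also need --journal-path):

    # start an OSD with a tiny pg log and a 10-entry dup history
    run_osd $dir 0 --osd_min_pg_log_entries=1 --osd_max_pg_log_entries=2 --osd_pg_log_dups_tracked=10
    # after the OSDs are stopped, count log and dup entries for pg 1.0
    ceph-objectstore-tool --data-path $dir/0 --pgid 1.0 --op log | jq '.pg_log_t.log | length'
    ceph-objectstore-tool --data-path $dir/0 --pgid 1.0 --op log | jq '.pg_log_t.dups | length'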
@@ -48,7 +48,7 @@ function markdown_N_impl() {
     # override any dup setting in the environment to ensure we do this
     # exactly once (modulo messenger failures, at least; we can't *actually*
     # provide exactly-once semantics for mon commands).
-    CEPH_CLI_TEST_DUP_COMMAND=0 ceph osd down 0
+    ( unset CEPH_CLI_TEST_DUP_COMMAND ; ceph osd down 0 )
     sleep $sleeptime
 done
 }
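
The subshell form matters because `VAR=0 cmd` still places the variable in `cmd`'s environment, and the CLI test hook appears to trigger on the variable being set at all rather than on its value; only `unset` inside a subshell guarantees it is absent while leaving the caller's environment untouched. A standalone illustration:

    check() { test -n "${CEPH_CLI_TEST_DUP_COMMAND+set}" && echo set || echo unset; }
    CEPH_CLI_TEST_DUP_COMMAND=0 check            # prints "set": 0 is still a value
    ( unset CEPH_CLI_TEST_DUP_COMMAND ; check )  # prints "unset"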
@@ -27,13 +27,14 @@ function run() {
     export CEPH_ARGS
     CEPH_ARGS+="--fsid=$(uuidgen) --auth-supported=none "
     CEPH_ARGS+="--mon-host=$CEPH_MON "
+    CEPH_ARGS+="--osd-objectstore=filestore "

     local funcs=${@:-$(set | sed -n -e 's/^\(TEST_[0-9a-z_]*\) .*/\1/p')}
     for func in $funcs ; do
         setup $dir || return 1
         run_mon $dir a || return 1
         run_mgr $dir x || return 1
-        create_rbd_pool || return 1
+        ceph osd pool create foo 8 || return 1

         $func $dir || return 1
         teardown $dir || return 1
@@ -57,6 +57,7 @@ function run() {
     CEPH_ARGS+="--fsid=$(uuidgen) --auth-supported=none "
     CEPH_ARGS+="--mon-host=$CEPH_MON "
     CEPH_ARGS+="--osd-skip-data-digest=false "
+    CEPH_ARGS+="--osd-objectstore=filestore "

     local funcs=${@:-$(set | sed -n -e 's/^\(TEST_[0-9a-z_]*\) .*/\1/p')}
     for func in $funcs ; do
@@ -490,7 +491,7 @@ function TEST_list_missing_erasure_coded_overwrites() {
 function TEST_corrupt_scrub_replicated() {
     local dir=$1
     local poolname=csr_pool
-    local total_objs=18
+    local total_objs=19

     setup $dir || return 1
     run_mon $dir a --osd_pool_default_size=2 || return 1
@@ -512,6 +513,11 @@ function TEST_corrupt_scrub_replicated() {
         rados --pool $poolname setomapval $objname key-$objname val-$objname || return 1
     done

+    # Increase file 1 MB + 1KB
+    dd if=/dev/zero of=$dir/new.ROBJ19 bs=1024 count=1025
+    rados --pool $poolname put $objname $dir/new.ROBJ19 || return 1
+    rm -f $dir/new.ROBJ19
+
     local pg=$(get_pg $poolname ROBJ0)
     local primary=$(get_primary $poolname ROBJ0)

@@ -631,12 +637,18 @@ function TEST_corrupt_scrub_replicated() {
         objectstore_tool $dir 1 $objname set-bytes $dir/new.ROBJ18 || return 1
         # Make one replica have a different object info, so a full repair must happen too
         objectstore_tool $dir $osd $objname corrupt-info || return 1
+        ;;
+
+    19)
+        # Set osd-max-object-size smaller than this object's size
+
     esac
     done

     local pg=$(get_pg $poolname ROBJ0)

+    ceph tell osd.\* injectargs -- --osd-max-object-size=1048576
+
     inject_eio rep data $poolname ROBJ11 $dir 0 || return 1 # shard 0 of [1, 0], osd.1
     inject_eio rep mdata $poolname ROBJ12 $dir 1 || return 1 # shard 1 of [1, 0], osd.0
     inject_eio rep mdata $poolname ROBJ13 $dir 1 || return 1 # shard 1 of [1, 0], osd.0
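
The arithmetic behind the new ROBJ19 case: `dd bs=1024 count=1025` writes 1025 KiB, just over the 1 MiB cap injected above, which is exactly what the new `size_too_large` error string checks for:

    echo $(( 1024 * 1025 ))   # 1049600 bytes written for ROBJ19
    echo $(( 1024 * 1024 ))   # 1048576 byte --osd-max-object-size cap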
@@ -664,9 +676,10 @@ function TEST_corrupt_scrub_replicated() {
     err_strings[15]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 shard 1 soid 3:ffdb2004:::ROBJ9:head : object info inconsistent "
     err_strings[16]="log_channel[(]cluster[)] log [[]ERR[]] : scrub [0-9]*[.]0 3:c0c86b1d:::ROBJ14:head : no '_' attr"
     err_strings[17]="log_channel[(]cluster[)] log [[]ERR[]] : scrub [0-9]*[.]0 3:5c7b2c47:::ROBJ16:head : can't decode 'snapset' attr buffer::malformed_input: .* no longer understand old encoding version 3 < 97"
-    err_strings[18]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 scrub : stat mismatch, got 18/18 objects, 0/0 clones, 17/18 dirty, 17/18 omap, 0/0 pinned, 0/0 hit_set_archive, 0/0 whiteouts, 113/120 bytes, 0/0 hit_set_archive bytes."
-    err_strings[19]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 scrub 1 missing, 7 inconsistent objects"
-    err_strings[20]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 scrub 17 errors"
+    err_strings[18]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 scrub : stat mismatch, got 19/19 objects, 0/0 clones, 18/19 dirty, 18/19 omap, 0/0 pinned, 0/0 hit_set_archive, 0/0 whiteouts, 1049713/1049720 bytes, 0/0 hit_set_archive bytes."
+    err_strings[19]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 scrub 1 missing, 8 inconsistent objects"
+    err_strings[20]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 scrub 18 errors"
+    err_strings[21]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 soid 3:123a5f55:::ROBJ19:head : size 1049600 > 1048576 is too large"

     for err_string in "${err_strings[@]}"
     do
@@ -1209,6 +1222,69 @@ function TEST_corrupt_scrub_replicated() {
         ],
         "union_shard_errors": []
       },
+      {
+        "object": {
+          "name": "ROBJ19",
+          "nspace": "",
+          "locator": "",
+          "snap": "head",
+          "version": 58
+        },
+        "errors": [
+          "size_too_large"
+        ],
+        "union_shard_errors": [],
+        "selected_object_info": {
+          "oid": {
+            "oid": "ROBJ19",
+            "key": "",
+            "snapid": -2,
+            "hash": 2868534344,
+            "max": 0,
+            "pool": 3,
+            "namespace": ""
+          },
+          "version": "63'59",
+          "prior_version": "63'58",
+          "last_reqid": "osd.1.0:58",
+          "user_version": 58,
+          "size": 1049600,
+          "mtime": "2019-08-09T23:33:58.340709+0000",
+          "local_mtime": "2019-08-09T23:33:58.345676+0000",
+          "lost": 0,
+          "flags": [
+            "dirty",
+            "omap",
+            "data_digest",
+            "omap_digest"
+          ],
+          "truncate_seq": 0,
+          "truncate_size": 0,
+          "data_digest": "0x3dde0ef3",
+          "omap_digest": "0xbffddd28",
+          "expected_object_size": 0,
+          "expected_write_size": 0,
+          "alloc_hint_flags": 0,
+          "manifest": {
+            "type": 0
+          },
+          "watchers": {}
+        },
+        "shards": [
+          {
+            "osd": 0,
+            "primary": false,
+            "errors": [],
+            "size": 1049600
+          },
+          {
+            "osd": 1,
+            "primary": true,
+            "errors": [],
+            "size": 1049600
+          }
+        ]
+      },
       {
         "shards": [
           {
@@ -1325,7 +1401,7 @@ function TEST_corrupt_scrub_replicated() {
       "version": "79'66",
       "prior_version": "79'65",
       "last_reqid": "client.4554.0:1",
-      "user_version": 74,
+      "user_version": 79,
       "size": 7,
       "mtime": "",
       "local_mtime": "",
@@ -1377,7 +1453,7 @@ function TEST_corrupt_scrub_replicated() {
       "version": "95'67",
       "prior_version": "51'64",
       "last_reqid": "client.4649.0:1",
-      "user_version": 75,
+      "user_version": 80,
       "size": 1,
       "mtime": "",
       "local_mtime": "",
@@ -1463,7 +1539,7 @@ function TEST_corrupt_scrub_replicated() {
       "version": "95'67",
       "prior_version": "51'64",
       "last_reqid": "client.4649.0:1",
-      "user_version": 75,
+      "user_version": 80,
       "size": 1,
       "mtime": "",
       "local_mtime": "",
@@ -1536,6 +1612,10 @@ EOF
     inject_eio rep mdata $poolname ROBJ12 $dir 1 || return 1 # shard 1 of [1, 0], osd.0
     inject_eio rep mdata $poolname ROBJ13 $dir 1 || return 1 # shard 1 of [1, 0], osd.0
     inject_eio rep data $poolname ROBJ13 $dir 0 || return 1 # shard 0 of [1, 0], osd.1
+
+    # ROBJ19 won't error this time
+    ceph tell osd.\* injectargs -- --osd-max-object-size=134217728
+
     pg_deep_scrub $pg

     err_strings=()
@@ -1562,7 +1642,7 @@ EOF
     err_strings[20]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 shard 0 soid 3:c0c86b1d:::ROBJ14:head : candidate had a corrupt info"
     err_strings[21]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 soid 3:c0c86b1d:::ROBJ14:head : failed to pick suitable object info"
     err_strings[22]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 shard 1 soid 3:ce3f1d6a:::ROBJ1:head : candidate size 9 info size 7 mismatch"
-    err_strings[23]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 shard 1 soid 3:ce3f1d6a:::ROBJ1:head : data_digest 0x2d4a11c2 != data_digest 0x2ddbf8f5 from shard 0, data_digest 0x2d4a11c2 != data_digest 0x2ddbf8f5 from auth oi 3:ce3f1d6a:::ROBJ1:head[(][0-9]*'[0-9]* osd.1.0:65 dirty|omap|data_digest|omap_digest s 7 uv 3 dd 2ddbf8f5 od f5fba2c6 alloc_hint [[]0 0 0[]][)], size 9 != size 7 from auth oi 3:ce3f1d6a:::ROBJ1:head[(][0-9]*'[0-9]* osd.1.0:[0-9]* dirty|omap|data_digest|omap_digest s 7 uv 3 dd 2ddbf8f5 od f5fba2c6 alloc_hint [[]0 0 0[]][)], size 9 != size 7 from shard 0"
+    err_strings[23]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 shard 1 soid 3:ce3f1d6a:::ROBJ1:head : data_digest 0x2d4a11c2 != data_digest 0x2ddbf8f5 from shard 0, data_digest 0x2d4a11c2 != data_digest 0x2ddbf8f5 from auth oi 3:ce3f1d6a:::ROBJ1:head[(][0-9]*'[0-9]* osd.1.0:[0-9]* dirty|omap|data_digest|omap_digest s 7 uv 3 dd 2ddbf8f5 od f5fba2c6 alloc_hint [[]0 0 0[]][)], size 9 != size 7 from auth oi 3:ce3f1d6a:::ROBJ1:head[(][0-9]*'[0-9]* osd.1.0:[0-9]* dirty|omap|data_digest|omap_digest s 7 uv 3 dd 2ddbf8f5 od f5fba2c6 alloc_hint [[]0 0 0[]][)], size 9 != size 7 from shard 0"
     err_strings[24]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 shard 1 soid 3:d60617f9:::ROBJ13:head : candidate had a read error"
     err_strings[25]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 shard 0 soid 3:d60617f9:::ROBJ13:head : candidate had a stat error"
     err_strings[26]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 soid 3:d60617f9:::ROBJ13:head : failed to pick suitable object info"
@@ -1575,7 +1655,7 @@ EOF
     err_strings[33]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 shard 0 soid 3:ffdb2004:::ROBJ9:head : object info inconsistent "
     err_strings[34]="log_channel[(]cluster[)] log [[]ERR[]] : deep-scrub [0-9]*[.]0 3:c0c86b1d:::ROBJ14:head : no '_' attr"
     err_strings[35]="log_channel[(]cluster[)] log [[]ERR[]] : deep-scrub [0-9]*[.]0 3:5c7b2c47:::ROBJ16:head : can't decode 'snapset' attr buffer::malformed_input: .* no longer understand old encoding version 3 < 97"
-    err_strings[36]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 deep-scrub : stat mismatch, got 18/18 objects, 0/0 clones, 17/18 dirty, 17/18 omap, 0/0 pinned, 0/0 hit_set_archive, 0/0 whiteouts, 115/116 bytes, 0/0 hit_set_archive bytes."
+    err_strings[36]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 deep-scrub : stat mismatch, got 19/19 objects, 0/0 clones, 18/19 dirty, 18/19 omap, 0/0 pinned, 0/0 hit_set_archive, 0/0 whiteouts, 1049715/1049716 bytes, 0/0 hit_set_archive bytes."
     err_strings[37]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 deep-scrub 1 missing, 11 inconsistent objects"
     err_strings[38]="log_channel[(]cluster[)] log [[]ERR[]] : [0-9]*[.]0 deep-scrub 35 errors"

@@ -2798,7 +2878,7 @@ EOF
       "version": "79'66",
       "prior_version": "79'65",
       "last_reqid": "client.4554.0:1",
-      "user_version": 74,
+      "user_version": 79,
       "size": 7,
       "mtime": "2018-04-05 14:34:05.598688",
       "local_mtime": "2018-04-05 14:34:05.599698",
@@ -2896,7 +2976,7 @@ EOF
       "version": "119'68",
       "prior_version": "51'64",
       "last_reqid": "client.4834.0:1",
-      "user_version": 76,
+      "user_version": 81,
       "size": 3,
       "mtime": "2018-04-05 14:35:01.500659",
       "local_mtime": "2018-04-05 14:35:01.502117",
@@ -2940,7 +3020,7 @@ EOF
       "version": "119'68",
       "prior_version": "51'64",
       "last_reqid": "client.4834.0:1",
-      "user_version": 76,
+      "user_version": 81,
       "size": 3,
       "mtime": "2018-04-05 14:35:01.500659",
       "local_mtime": "2018-04-05 14:35:01.502117",
@@ -30,7 +30,7 @@ function run() {
     export CEPH_MON="127.0.0.1:7121" # git grep '\<7121\>' : there must be only one
     export CEPH_ARGS
     CEPH_ARGS+="--fsid=$(uuidgen) --auth-supported=none "
-    CEPH_ARGS+="--mon-host=$CEPH_MON "
+    CEPH_ARGS+="--mon-host=$CEPH_MON --osd-objectstore=filestore"

     local funcs=${@:-$(set | sed -n -e 's/^\(TEST_[0-9a-z_]*\) .*/\1/p')}
     for func in $funcs ; do
@@ -1027,7 +1027,7 @@ def main(argv):

     # Specify a bad --op command
     cmd = (CFSD_PREFIX + "--op oops").format(osd=ONEOSD)
-    ERRORS += test_failure(cmd, "Must provide --op (info, log, remove, mkfs, fsck, repair, export, export-remove, import, list, fix-lost, list-pgs, rm-past-intervals, dump-journal, dump-super, meta-list, get-osdmap, set-osdmap, get-inc-osdmap, set-inc-osdmap, mark-complete, dump-import, trim-pg-log)")
+    ERRORS += test_failure(cmd, "Must provide --op (info, log, remove, mkfs, fsck, repair, export, export-remove, import, list, fix-lost, list-pgs, rm-past-intervals, dump-journal, dump-super, meta-list, get-osdmap, set-osdmap, get-inc-osdmap, set-inc-osdmap, mark-complete, dump-export, trim-pg-log)")

     # Provide just the object param not a command
     cmd = (CFSD_PREFIX + "object").format(osd=ONEOSD)
|
|||||||
|
|
||||||
ERRORS += EXP_ERRORS
|
ERRORS += EXP_ERRORS
|
||||||
|
|
||||||
|
print("Test clear-data-digest")
|
||||||
|
for nspace in db.keys():
|
||||||
|
for basename in db[nspace].keys():
|
||||||
|
JSON = db[nspace][basename]['json']
|
||||||
|
cmd = (CFSD_PREFIX + "'{json}' clear-data-digest").format(osd='osd0', json=JSON)
|
||||||
|
logging.debug(cmd)
|
||||||
|
ret = call(cmd, shell=True, stdout=nullfd, stderr=nullfd)
|
||||||
|
if ret != 0:
|
||||||
|
logging.error("Clearing data digest failed for {json}".format(json=JSON))
|
||||||
|
ERRORS += 1
|
||||||
|
break
|
||||||
|
cmd = (CFSD_PREFIX + "'{json}' dump | grep '\"data_digest\": \"0xff'").format(osd='osd0', json=JSON)
|
||||||
|
logging.debug(cmd)
|
||||||
|
ret = call(cmd, shell=True, stdout=nullfd, stderr=nullfd)
|
||||||
|
if ret != 0:
|
||||||
|
logging.error("Data digest not cleared for {json}".format(json=JSON))
|
||||||
|
ERRORS += 1
|
||||||
|
break
|
||||||
|
break
|
||||||
|
break
|
||||||
|
|
||||||
print("Test pg removal")
|
print("Test pg removal")
|
||||||
RM_ERRORS = 0
|
RM_ERRORS = 0
|
||||||
for pg in ALLREPPGS + ALLECPGS:
|
for pg in ALLREPPGS + ALLECPGS:
|
||||||
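
For reference, the `clear-data-digest` operation tested here can be run directly against a stopped OSD's data directory; a sketch with placeholder paths and object JSON (the object descriptor comes from a prior `--op list`):

    # reset the recorded data digest for one object, then confirm via dump
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 '<object-json>' clear-data-digest
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 '<object-json>' dump | grep data_digest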
@@ -1771,11 +1792,11 @@ def main(argv):
         for pg in PGS:
             file = os.path.join(dir, pg)
             # Make sure this doesn't crash
-            cmd = (CFSD_PREFIX + "--op dump-import --file {file}").format(osd=osd, file=file)
+            cmd = (CFSD_PREFIX + "--op dump-export --file {file}").format(osd=osd, file=file)
             logging.debug(cmd)
             ret = call(cmd, shell=True, stdout=nullfd)
             if ret != 0:
-                logging.error("Dump-import failed from {file} with {ret}".format(file=file, ret=ret))
+                logging.error("Dump-export failed from {file} with {ret}".format(file=file, ret=ret))
                 IMP_ERRORS += 1
             # This should do nothing
             cmd = (CFSD_PREFIX + "--op import --file {file} --dry-run").format(osd=osd, file=file)
@@ -1 +1 @@
-../.qa/
+../../../../.qa/
@@ -1 +1 @@
-../../../distros/supported
+.qa/distros/supported
@@ -1,6 +1,6 @@
 # make sure we get the same MPI version on all hosts
 os_type: ubuntu
-os_version: "14.04"
+os_version: "16.04"

 tasks:
 - pexec:
@@ -1,6 +1,6 @@
 # make sure we get the same MPI version on all hosts
 os_type: ubuntu
-os_version: "14.04"
+os_version: "16.04"

 tasks:
 - pexec:
@@ -6,3 +6,5 @@ overrides:
         ms inject delay type: osd mds
         ms inject delay probability: .005
         ms inject delay max: 1
+    log-whitelist:
+    - \(OSD_SLOW_PING_TIME
@@ -1,17 +0,0 @@
-os_type: ubuntu
-os_version: "14.04"
-
-overrides:
-  ceph:
-    conf:
-      client:
-        client permissions: false
-roles:
-- [mon.0, mds.0, osd.0, hadoop.master.0]
-- [mon.1, mgr.x, osd.1, hadoop.slave.0]
-- [mon.2, mgr.y, hadoop.slave.1, client.0]
-openstack:
-- volumes: # attached to each instance
-    count: 1
-    size: 10 # GB
@@ -1 +0,0 @@
-../../../objectstore/filestore-xfs.yaml
@@ -1,8 +0,0 @@
-tasks:
-- ssh_keys:
-- install:
-- ceph:
-- hadoop:
-- workunit:
-    clients:
-      client.0: [hadoop/repl.sh]
@@ -1,10 +0,0 @@
-tasks:
-- ssh_keys:
-- install:
-- ceph:
-- hadoop:
-- workunit:
-    clients:
-      client.0: [hadoop/terasort.sh]
-    env:
-      NUM_RECORDS: "10000000"
@@ -1,8 +0,0 @@
-tasks:
-- ssh_keys:
-- install:
-- ceph:
-- hadoop:
-- workunit:
-    clients:
-      client.0: [hadoop/wordcount.sh]
ceph/qa/suites/kcephfs/cephfs/begin.yaml (new symbolic link)
@@ -0,0 +1 @@
+.qa/cephfs/begin.yaml
@@ -1,3 +0,0 @@
-tasks:
-- install:
-- ceph:
@@ -1,6 +1,4 @@
 tasks:
-- install:
-- ceph:
 - exec:
     client.0:
     - sudo ceph mds set inline_data true --yes-i-really-mean-it
ceph/qa/suites/kcephfs/mixed-clients/begin.yaml (new symbolic link)
@@ -0,0 +1 @@
+.qa/cephfs/begin.yaml
@@ -1,6 +1,4 @@
 tasks:
-- install:
-- ceph:
 - parallel:
   - user-workload
   - kclient-workload
@@ -1,6 +1,4 @@
 tasks:
-- install:
-- ceph:
 - parallel:
   - user-workload
   - kclient-workload
ceph/qa/suites/kcephfs/recovery/begin.yaml (new symbolic link)
@@ -0,0 +1 @@
+.qa/cephfs/begin.yaml
@@ -1,4 +1,2 @@
 tasks:
-- install:
-- ceph:
 - kclient:
ceph/qa/suites/kcephfs/thrash/begin.yaml (new symbolic link)
@@ -0,0 +1 @@
+.qa/cephfs/begin.yaml
@@ -1,7 +1,7 @@
-tasks:
-- install:
-- ceph:
+overrides:
+  ceph:
     log-whitelist:
     - but it is still running
     - objects unfound and apparently lost
+tasks:
 - thrashosds:
@@ -1,9 +1,7 @@
-tasks:
-- install:
-- ceph:
-- mds_thrash:
-
 overrides:
   ceph:
     log-whitelist:
     - not responding, replacing
+
+tasks:
+- mds_thrash:
@@ -2,9 +2,8 @@ overrides:
   ceph:
     log-whitelist:
     - \(MON_DOWN\)

 tasks:
-- install:
-- ceph:
 - mon_thrash:
     revive_delay: 20
     thrash_delay: 1
@@ -3,3 +3,5 @@ overrides:
     conf:
       global:
         ms inject socket failures: 5000
+    log-whitelist:
+    - \(OSD_SLOW_PING_TIME
@@ -3,3 +3,5 @@ overrides:
     conf:
      global:
         ms inject socket failures: 500
+    log-whitelist:
+    - \(OSD_SLOW_PING_TIME
@@ -0,0 +1,5 @@
+tasks:
+- workunit:
+    clients:
+      all:
+      - rbd/krbd_udev_enumerate.sh
@@ -0,0 +1,10 @@
+overrides:
+  ceph:
+    log-whitelist:
+    - pauserd,pausewr flag\(s\) set
+
+tasks:
+- workunit:
+    clients:
+      all:
+      - rbd/krbd_udev_netlink_enobufs.sh
@@ -3,3 +3,5 @@ overrides:
     conf:
       global:
         ms inject socket failures: 5000
+    log-whitelist:
+    - \(OSD_SLOW_PING_TIME
@@ -3,3 +3,5 @@ overrides:
     conf:
       global:
         ms inject socket failures: 500
+    log-whitelist:
+    - \(OSD_SLOW_PING_TIME
@@ -3,3 +3,5 @@ overrides:
     conf:
      global:
         ms inject socket failures: 5000
+    log-whitelist:
+    - \(OSD_SLOW_PING_TIME
@@ -3,3 +3,5 @@ overrides:
     conf:
       global:
         ms inject socket failures: 500
+    log-whitelist:
+    - \(OSD_SLOW_PING_TIME
@@ -3,6 +3,8 @@ overrides:
     conf:
       global:
         ms inject socket failures: 500
+    log-whitelist:
+    - \(OSD_SLOW_PING_TIME
 tasks:
 - exec:
     client.0:
@@ -1,5 +1,8 @@
 tasks:
 - install:
+    extra_system_packages:
+      deb: ['bison', 'flex', 'libelf-dev', 'libssl-dev']
+      rpm: ['bison', 'flex', 'elfutils-libelf-devel', 'openssl-devel']
 - ceph:
 - thrashosds:
     chance_down: 1.0
@@ -3,3 +3,5 @@ overrides:
     conf:
       global:
         ms inject socket failures: 5000
+    log-whitelist:
+    - \(OSD_SLOW_PING_TIME
@@ -3,3 +3,5 @@ overrides:
     conf:
       global:
         ms inject socket failures: 1500
+    log-whitelist:
+    - \(OSD_SLOW_PING_TIME
@@ -26,6 +26,7 @@ tasks:
       - default.rgw.log
 - s3readwrite:
     client.0:
+      force-branch: ceph-luminous
       rgw_server: client.0
       readwrite:
         bucket: rwtest
@@ -3,3 +3,5 @@ overrides:
     conf:
       global:
         ms inject socket failures: 5000
+    log-whitelist:
+    - \(OSD_SLOW_PING_TIME
@@ -9,3 +9,5 @@ overrides:
         ms inject internal delays: .002
       mgr:
         debug monc: 10
+    log-whitelist:
+    - \(OSD_SLOW_PING_TIME
@@ -3,3 +3,5 @@ overrides:
     conf:
       global:
         ms inject socket failures: 5000
+    log-whitelist:
+    - \(OSD_SLOW_PING_TIME
@@ -4,3 +4,5 @@ overrides:
       global:
         ms inject socket failures: 500
         mon mgr beacon grace: 90
+    log-whitelist:
+    - \(OSD_SLOW_PING_TIME
@@ -11,6 +11,7 @@ overrides:
     conf:
       osd:
         filestore xfs extsize: true
+        osd objectstore: filestore

 tasks:
 - install:
Some files were not shown because too many files have changed in this diff.