Mirror of https://git.proxmox.com/git/pve-manager
Allow automatically creating multiple OSDs per physical device. The main use case is fast NVMe drives that would be bottlenecked by a single OSD service.

By using the 'ceph-volume lvm batch' command instead of 'ceph-volume lvm create' when creating multiple OSDs per device, we don't have to handle splitting the drive ourselves. This means, however, that the parameters for specifying a DB or WAL device won't work, as the 'batch' command does not use them. Dedicated DB and WAL devices don't make much sense anyway when the OSDs are placed on fast NVMe drives.

Some other changes to how the command is built were needed as well: the 'batch' command expects the path to the disk as a positional argument, not as '--data /dev/sdX'. We also drop the '--cluster-fsid' parameter, because the 'batch' command doesn't accept it; 'create' will fall back to reading it from the ceph.conf file.

Removal of OSDs works as expected without any code changes. As long as there are other OSDs on a disk, the VG & PV won't be removed, even if 'cleanup' is enabled.

The '--no-auto' parameter is used to avoid the following deprecation warning:

```
--> DEPRECATION NOTICE
--> You are using the legacy automatic disk sorting behavior
--> The Pacific release will change the default to --no-auto
--> passed data devices: 1 physical, 0 LVM
--> relative data size: 0.3333333333333333
```

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
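To illustrate the switch described above, here is a minimal Perl sketch, not the actual OSD.pm code, of how the ceph-volume invocation could be assembled depending on the number of OSDs per device. The helper name and its parameters are hypothetical; the 'batch'/'create' subcommands, the positional disk path, '--data' and '--no-auto' come from the description above, and '--osds-per-device' is a regular 'ceph-volume lvm batch' option, though how the Proxmox code passes it is assumed here.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch only: build the ceph-volume command as a list of arguments.
# With more than one OSD per device, use 'lvm batch' with the disk as a
# positional argument; otherwise keep the old 'lvm create --data ...' form.
sub build_ceph_volume_cmd {
    my ($devpath, $osds_per_device) = @_;

    my $cmd = ['ceph-volume', 'lvm'];

    if ($osds_per_device > 1) {
        # 'batch' splits the device itself; '--no-auto' silences the
        # deprecation warning quoted above. No '--cluster-fsid' here,
        # since the 'batch' subcommand does not accept that parameter.
        push @$cmd, 'batch', '--osds-per-device', $osds_per_device,
            '--no-auto', $devpath;
    } else {
        # Single OSD: the 'create' subcommand keeps the '--data' style;
        # the fsid can be read from ceph.conf if it is not passed.
        push @$cmd, 'create', '--data', $devpath;
    }

    return $cmd;
}

print join(' ', @{ build_ceph_volume_cmd('/dev/nvme0n1', 4) }), "\n";
print join(' ', @{ build_ceph_volume_cmd('/dev/sdb', 1) }), "\n";
```

Running the sketch prints the two resulting command lines, e.g. 'ceph-volume lvm batch --osds-per-device 4 --no-auto /dev/nvme0n1' for the multi-OSD NVMe case and 'ceph-volume lvm create --data /dev/sdb' for the single-OSD case.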
Directory listing:

- Cfg.pm
- FS.pm
- Makefile
- MDS.pm
- MGR.pm
- MON.pm
- OSD.pm
- Pool.pm