3 Deploying the Ceph Cluster

3.1 Recommended package mirrors

  1. https://mirrors.aliyun.com/ceph/  # Alibaba Cloud mirror
  2. http://mirrors.163.com/ceph/  # NetEase mirror
  3. https://mirrors.tuna.tsinghua.edu.cn/ceph/  # Tsinghua University (TUNA) mirror

3.2 Server planning

Host           IP               Role     Specs
ceph-deploy    172.16.10.156    deploy   4 cores / 4 GB
ceph-mon-01    172.16.10.148    mon      4 cores / 4 GB
ceph-mon-02    172.16.10.110    mon      4 cores / 4 GB
ceph-mon-03    172.16.10.182    mon      4 cores / 4 GB
ceph-mgr-01    172.16.10.225    mgr      4 cores / 4 GB
ceph-mgr-02    172.16.10.248    mgr      4 cores / 4 GB
ceph-node-01   172.16.10.126    osd      4 cores / 4 GB
ceph-node-02   172.16.10.76     osd      4 cores / 4 GB
ceph-node-03   172.16.10.44     osd      4 cores / 4 GB

3.3 Cluster planning

3.3.1 Network planning

  • The main reasons for running two separate networks are:
  1. Performance:
  • OSDs handle data replication on behalf of clients; with multiple replicas, the OSD-to-OSD replication traffic inevitably competes with client-to-cluster traffic, adding latency and hurting performance.
  • Recovery and rebalancing also add significant latency on the public network.
  2. Security:
  • Preventing DoS: if inter-OSD traffic runs out of control, placement groups may never reach the active+clean state, leaving data unreadable and unwritable. (A command sketch of the two-network layout follows this list.)
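
The two subnets are declared when the cluster is bootstrapped (see 3.6.5). A minimal sketch of what a genuinely split setup would look like, assuming a hypothetical 172.16.20.0/24 back-end subnet; this lab actually keeps both networks on 172.16.10.0/24:

# hypothetical: 172.16.20.0/24 as a dedicated replication (cluster) network
ceph-deploy new --public-network 172.16.10.0/24 --cluster-network 172.16.20.0/24 ceph-mon-01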

3.3.2 Data storage layout planning

  • Data and metadata are usually placed on the same disk; the layouts below show how they can be split across faster devices (see the command sketch after the three-disk layout).

3.3.2.1 Single disk

  • HDD or SSD
  • data: the object data stored by Ceph
  • block: the RocksDB data, i.e. the metadata
  • block-wal: the RocksDB write-ahead log (WAL)

3.3.2.2 Two disks

  • SSD
  • block: the RocksDB data, i.e. the metadata
  • block-wal: the RocksDB write-ahead log (WAL)
  • HDD
  • data: the object data stored by Ceph

3.3.2.3 Three disks

  • NVMe
  • block: the RocksDB data, i.e. the metadata
  • SSD
  • block-wal: the RocksDB write-ahead log (WAL)
  • HDD
  • data: the object data stored by Ceph
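
Expressed with the ceph-deploy osd create options documented later in 3.9.9.1, the three layouts map roughly to the sketch below. The device paths and the ssd_vg/nvme_vg logical volume names are illustrative assumptions (db/wal targets must be LVs or GPT partitions), not commands run in this lab:

# single disk: data, RocksDB and WAL all on one device
ceph-deploy osd create ceph-node-01 --data /dev/vdb
# two disks: object data on the HDD, RocksDB (and its WAL) on an SSD logical volume
ceph-deploy osd create ceph-node-01 --data /dev/vdb --block-db ssd_vg/db_lv
# three disks: data on the HDD, RocksDB on NVMe, WAL on the SSD
ceph-deploy osd create ceph-node-01 --data /dev/vdb --block-db nvme_vg/db_lv --block-wal ssd_vg/wal_lv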

3.4 System environment

3.4.1 OS version

root@ceph-deploy:~# cat /etc/issue
Ubuntu 18.04.3 LTS \n \l

3.4.2 Configure hostname resolution

root@ceph-deploy:~# vim /etc/hosts
172.16.10.156 ceph-deploy
172.16.10.148 ceph-mon-01
172.16.10.110 ceph-mon-02
172.16.10.182 ceph-mon-03
172.16.10.225 ceph-mgr-01
172.16.10.248 ceph-mgr-02
172.16.10.126 ceph-osd-01
172.16.10.76 ceph-osd-02
172.16.10.44 ceph-osd-03
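
An optional sanity check (hostnames taken from the hosts file above) to confirm every name resolves from the deploy node:

for h in ceph-deploy ceph-mon-0{1..3} ceph-mgr-0{1,2} ceph-osd-0{1..3}; do
    getent hosts "$h" || echo "unresolved: $h"    # prints the matching /etc/hosts entry, or flags a miss
done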

3.5 Deploy the ceph-deploy node

3.5.1 System time synchronization

root@ceph-deploy:~# apt -y install chrony
root@ceph-deploy:~# systemctl start chrony
root@ceph-deploy:~# systemctl enable chrony
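
Optionally confirm that chrony is actually synchronizing (output varies by environment):

chronyc sources -v    # lists the NTP sources and their reachability
timedatectl           # shows "System clock synchronized: yes" once chrony has synced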

3.5.2 Prepare the repository

3.5.2.1 Import the release key

root@ceph-deploy:~# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
OK

3.5.2.2 Add the repository

root@ceph-deploy:~# echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific $(lsb_release -cs) main" >> /etc/apt/sources.list
root@ceph-deploy:~# apt -y update && apt -y upgrade

3.5.3 Install the ceph-deploy tool

3.5.3.1 Check available ceph-deploy versions

root@ceph-deploy:~# apt-cache madison ceph-deploy
ceph-deploy | 2.0.1-0ubuntu1 | http://mirrors.ucloud.cn/ubuntu focal/universe amd64 Packages
ceph-deploy | 2.0.1 | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific focal/main amd64 Packages
ceph-deploy | 2.0.1-0ubuntu1 | http://mirrors.ucloud.cn/ubuntu focal/universe Sources

3.5.3.2 Install ceph-deploy

root@ceph-deploy:~# apt install  ceph-deploy -y

3.5.3.3 ceph-deploy usage

root@ceph-deploy:~# ceph-deploy --help
new: bootstrap a new Ceph storage cluster; generates the cluster configuration file (ceph.conf) and the keyring authentication files.
install: install the Ceph packages on remote hosts; --release selects the version to install.
rgw: manage RGW daemons (RADOSGW, the object storage gateway).
mgr: manage MGR daemons (ceph-mgr, the Ceph Manager daemon).
mds: manage MDS daemons (the Ceph Metadata Server).
mon: manage MON daemons (ceph-mon, the Ceph monitor).
gatherkeys: fetch authentication keys from the specified node; these keys are used when new MON/OSD/MDS nodes join the cluster.
disk: manage disks on remote hosts.
osd: prepare a data disk on a remote host, i.e. add the specified disk of a remote host to the cluster as an OSD.
repo: manage repositories on remote hosts.
admin: push the cluster configuration file and the client.admin keyring to remote hosts.
config: push ceph.conf to remote hosts, or copy it back from them.
uninstall: remove the Ceph packages from remote hosts.
purgedata: delete the Ceph data under /var/lib/ceph; also removes the contents of /etc/ceph.
purge: remove both the packages and all data from remote hosts.
forgetkeys: delete all authentication keyrings from the local host, including the client.admin, monitor and bootstrap keyrings.
pkg: manage packages on remote hosts.
calamari: install and configure a Calamari web node; Calamari is a web-based monitoring platform.
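
Put together, the workflow used in the rest of this chapter (run as the ceph user from ~/ceph-cluster; shown here only as a preview, each step is detailed in the sections below) is roughly:

ceph-deploy new --cluster-network 172.16.10.0/24 --public-network 172.16.10.0/24 ceph-mon-01   # generate ceph.conf and the mon keyring
ceph-deploy mon create-initial                        # deploy the initial monitor and gather keys
ceph-deploy admin ceph-deploy                         # push ceph.conf and the client.admin keyring
ceph-deploy mgr create ceph-mgr-01                    # deploy a manager daemon
ceph-deploy osd create ceph-node-01 --data /dev/vdb   # add one OSD per data disk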

3.5.4 Install the Ceph cluster management component

3.5.4.1 About the ceph-common package

  • The ceph-common package is used to manage the Ceph cluster
  • Installing ceph-common creates the ceph user

3.5.4.2 Check available ceph-common versions

root@ceph-deploy:~# apt-cache madison ceph-common
ceph-common | 16.2.5-1bionic | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 Packages
ceph-common | 12.2.13-0ubuntu0.18.04.8 | http://mirrors.ucloud.cn/ubuntu bionic-updates/main amd64 Packages
ceph-common | 12.2.13-0ubuntu0.18.04.4 | http://mirrors.ucloud.cn/ubuntu bionic-security/main amd64 Packages
ceph-common | 12.2.4-0ubuntu1 | http://mirrors.ucloud.cn/ubuntu bionic/main amd64 Packages
ceph | 12.2.4-0ubuntu1 | http://mirrors.ucloud.cn/ubuntu bionic/main Sources
ceph | 12.2.13-0ubuntu0.18.04.4 | http://mirrors.ucloud.cn/ubuntu bionic-security/main Sources
ceph | 12.2.13-0ubuntu0.18.04.8 | http://mirrors.ucloud.cn/ubuntu bionic-updates/main Sources

3.5.4.3 Install ceph-common

root@ceph-deploy:~# apt install ceph-common  -y

3.5.4.4 Verify the ceph-common version

root@ceph-deploy:~# ceph --version
ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)

3.5.5 Set up the ceph user

3.5.5.1 Check the ceph user

root@ceph-deploy:~# id ceph
uid=64045(ceph) gid=64045(ceph) groups=64045(ceph)

3.5.5.2 Set the ceph user's shell

root@ceph-deploy:~# usermod -s /bin/bash ceph

3.5.5.3 Set the ceph user's password

root@ceph-deploy:~# passwd ceph

3.5.5.4 Allow the ceph user to run privileged commands via sudo

root@ceph-deploy:~# echo "ceph ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
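
Appending to /etc/sudoers works, but a slightly safer equivalent (optional) is a drop-in file that can be syntax-checked:

echo "ceph ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/ceph
chmod 0440 /etc/sudoers.d/ceph
visudo -cf /etc/sudoers.d/ceph    # prints "parsed OK" if the file is valid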

3.6 Configure the MON node (ceph-mon-01)

3.6.1 System time synchronization

root@ceph-mon-01:~# apt -y install chrony
root@ceph-mon-01:~# systemctl start chrony
root@ceph-mon-01:~# systemctl enable chrony

3.6.2 Prepare the repository

3.6.2.1 Import the release key

root@ceph-mon-01:~# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
OK

3.6.2.2 Add the repository

root@ceph-mon-01:~# echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific $(lsb_release -cs) main" >> /etc/apt/sources.list
root@ceph-mon-01:~# apt -y update && apt -y upgrade

3.6.3 Install ceph-mon

3.6.3.1 Check available ceph-mon versions

root@ceph-mon-01:~# apt-cache madison ceph-mon
ceph-mon | 16.2.5-1bionic | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 Packages
ceph-mon | 12.2.13-0ubuntu0.18.04.8 | http://mirrors.ucloud.cn/ubuntu bionic-updates/main amd64 Packages
ceph-mon | 12.2.13-0ubuntu0.18.04.4 | http://mirrors.ucloud.cn/ubuntu bionic-security/main amd64 Packages
ceph-mon | 12.2.4-0ubuntu1 | http://mirrors.ucloud.cn/ubuntu bionic/main amd64 Packages
ceph | 12.2.4-0ubuntu1 | http://mirrors.ucloud.cn/ubuntu bionic/main Sources
ceph | 12.2.13-0ubuntu0.18.04.4 | http://mirrors.ucloud.cn/ubuntu bionic-security/main Sources
ceph | 12.2.13-0ubuntu0.18.04.8 | http://mirrors.ucloud.cn/ubuntu bionic-updates/main Sources

3.6.3.2 Install ceph-mon

root@ceph-mon-01:~# apt install ceph-mon -y

3.6.3.3 Verify the version

root@ceph-mon-01:~# ceph-mon --version
ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
root@ceph-mon-01:~# ceph --version
ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)

3.6.4 Set up the ceph user

3.6.4.1 Check the ceph user

root@ceph-mon-01:~# id ceph
uid=64045(ceph) gid=64045(ceph) groups=64045(ceph)

3.6.4.2 Set the ceph user's shell

root@ceph-mon-01:~# usermod -s /bin/bash ceph

3.6.4.3 Set the ceph user's password

root@ceph-mon-01:~# passwd ceph

3.6.4.4 Allow the ceph user to run privileged commands via sudo

root@ceph-mon-01:~# echo "ceph ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

3.6.5 Initialize the ceph-mon-01 node

root@ceph-deploy:~# su - ceph
ceph@ceph-deploy:~$ mkdir ceph-cluster
ceph@ceph-deploy:~$ cd ceph-cluster/
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy new --cluster-network 172.16.10.0/24 --public-network 172.16.10.0/24 ceph-mon-01
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy new --cluster-network 172.16.10.0/24 --public-network 172.16.10.0/24 ceph-mon-01
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f5d028d3dc0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['ceph-mon-01']
[ceph_deploy.cli][INFO ] func : <function new at 0x7f5cffccbc50>
[ceph_deploy.cli][INFO ] public_network : 172.16.10.0/24
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : 172.16.10.0/24
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph-mon-01][DEBUG ] connected to host: ceph-deploy
[ceph-mon-01][INFO ] Running command: ssh -CT -o BatchMode=yes ceph-mon-01
[ceph_deploy.new][WARNIN] could not connect via SSH
[ceph_deploy.new][INFO ] will connect again with password prompt
ceph@ceph-mon-01's password:
[ceph-mon-01][DEBUG ] connected to host: ceph-mon-01
[ceph-mon-01][DEBUG ] detect platform information from remote host
[ceph-mon-01][DEBUG ] detect machine type
[ceph-mon-01][WARNIN] .ssh/authorized_keys does not exist, will skip adding keys
ceph@ceph-mon-01's password:
[ceph-mon-01][DEBUG ] connection detected need for sudo
ceph@ceph-mon-01's password:
sudo: unable to resolve host ceph-mon-01
[ceph-mon-01][DEBUG ] connected to host: ceph-mon-01
[ceph-mon-01][DEBUG ] detect platform information from remote host
[ceph-mon-01][DEBUG ] detect machine type
[ceph-mon-01][DEBUG ] find the location of an executable
[ceph-mon-01][INFO ] Running command: sudo /bin/ip link show
[ceph-mon-01][INFO ] Running command: sudo /bin/ip addr show
[ceph-mon-01][DEBUG ] IP addresses found: [u'172.16.10.148']
[ceph_deploy.new][DEBUG ] Resolving host ceph-mon-01
[ceph_deploy.new][DEBUG ] Monitor ceph-mon-01 at 172.16.10.148
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-mon-01']
[ceph_deploy.new][DEBUG ] Monitor addrs are [u'172.16.10.148']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
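
The repeated ceph@ceph-mon-01's password: prompts above appear because the ceph user on the deploy node has not distributed an SSH key yet. A minimal sketch of key-based login (optional; run as the ceph user on ceph-deploy, hostnames from 3.2) that removes those prompts for later ceph-deploy runs:

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa             # generate a key pair without a passphrase
for h in ceph-mon-0{1..3} ceph-mgr-0{1,2} ceph-node-0{1..3}; do
    ssh-copy-id "ceph@$h"                            # asks for the ceph password once per host
done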

3.6.6 Verify the ceph-mon-01 initialization

ceph@ceph-deploy:~/ceph-cluster$ ls -l
total 28
-rw-rw-r-- 1 ceph ceph 267 Aug 27 16:55 ceph.conf #auto-generated cluster configuration file
-rw-rw-r-- 1 ceph ceph 18394 Aug 27 16:55 ceph-deploy-ceph.log #initialization log
-rw------- 1 ceph ceph 73 Aug 27 16:55 ceph.mon.keyring #keyring used for internal authentication between MON nodes
ceph@ceph-deploy:~/ceph-cluster$ cat ceph.conf

[global]
fsid = 6e521054-1532-4bc8-9971-7f8ae93e8430
public_network = 172.16.10.0/24
cluster_network = 172.16.10.0/24
mon_initial_members = ceph-mon-01
mon_host = 172.16.10.148
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

3.6.7 Configure the ceph-mon-01 node

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f47d95762d0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x7f47d9553a50>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-mon-01
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-mon-01 ...
ceph@ceph-mon-01's password:
[ceph-mon-01][DEBUG ] connection detected need for sudo
ceph@ceph-mon-01's password:
sudo: unable to resolve host ceph-mon-01
[ceph-mon-01][DEBUG ] connected to host: ceph-mon-01
[ceph-mon-01][DEBUG ] detect platform information from remote host
[ceph-mon-01][DEBUG ] detect machine type
[ceph-mon-01][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: Ubuntu 18.04 bionic
[ceph-mon-01][DEBUG ] determining if provided host has same hostname in remote
[ceph-mon-01][DEBUG ] get remote short hostname
[ceph-mon-01][DEBUG ] deploying mon to ceph-mon-01
[ceph-mon-01][DEBUG ] get remote short hostname
[ceph-mon-01][DEBUG ] remote hostname: ceph-mon-01
[ceph-mon-01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon-01][DEBUG ] create the mon path if it does not exist
[ceph-mon-01][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-mon-01/done
[ceph-mon-01][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph-mon-01/done
[ceph-mon-01][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph-mon-01.mon.keyring
[ceph-mon-01][DEBUG ] create the monitor keyring file
[ceph-mon-01][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i ceph-mon-01 --keyring /var/lib/ceph/tmp/ceph-ceph-mon-01.mon.keyring --setuser 64045 --setgroup 64045
[ceph-mon-01][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph-mon-01.mon.keyring
[ceph-mon-01][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-mon-01][DEBUG ] create the init path if it does not exist
[ceph-mon-01][INFO ] Running command: sudo systemctl enable ceph.target
[ceph-mon-01][INFO ] Running command: sudo systemctl enable ceph-mon@ceph-mon-01
[ceph-mon-01][INFO ] Running command: sudo systemctl start ceph-mon@ceph-mon-01
[ceph-mon-01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon-01.asok mon_status
[ceph-mon-01][DEBUG ] ********************************************************************************
[ceph-mon-01][DEBUG ] status for monitor: mon.ceph-mon-01
[ceph-mon-01][DEBUG ] {
[ceph-mon-01][DEBUG ] "election_epoch": 3,
[ceph-mon-01][DEBUG ] "extra_probe_peers": [],
[ceph-mon-01][DEBUG ] "feature_map": {
[ceph-mon-01][DEBUG ] "mon": [
[ceph-mon-01][DEBUG ] {
[ceph-mon-01][DEBUG ] "features": "0x3f01cfb9fffdffff",
[ceph-mon-01][DEBUG ] "num": 1,
[ceph-mon-01][DEBUG ] "release": "luminous"
[ceph-mon-01][DEBUG ] }
[ceph-mon-01][DEBUG ] ]
[ceph-mon-01][DEBUG ] },
[ceph-mon-01][DEBUG ] "features": {
[ceph-mon-01][DEBUG ] "quorum_con": "4540138297136906239",
[ceph-mon-01][DEBUG ] "quorum_mon": [
[ceph-mon-01][DEBUG ] "kraken",
[ceph-mon-01][DEBUG ] "luminous",
[ceph-mon-01][DEBUG ] "mimic",
[ceph-mon-01][DEBUG ] "osdmap-prune",
[ceph-mon-01][DEBUG ] "nautilus",
[ceph-mon-01][DEBUG ] "octopus",
[ceph-mon-01][DEBUG ] "pacific",
[ceph-mon-01][DEBUG ] "elector-pinging"
[ceph-mon-01][DEBUG ] ],
[ceph-mon-01][DEBUG ] "required_con": "2449958747317026820",
[ceph-mon-01][DEBUG ] "required_mon": [
[ceph-mon-01][DEBUG ] "kraken",
[ceph-mon-01][DEBUG ] "luminous",
[ceph-mon-01][DEBUG ] "mimic",
[ceph-mon-01][DEBUG ] "osdmap-prune",
[ceph-mon-01][DEBUG ] "nautilus",
[ceph-mon-01][DEBUG ] "octopus",
[ceph-mon-01][DEBUG ] "pacific",
[ceph-mon-01][DEBUG ] "elector-pinging"
[ceph-mon-01][DEBUG ] ]
[ceph-mon-01][DEBUG ] },
[ceph-mon-01][DEBUG ] "monmap": {
[ceph-mon-01][DEBUG ] "created": "2021-08-29T06:36:59.023456Z",
[ceph-mon-01][DEBUG ] "disallowed_leaders: ": "",
[ceph-mon-01][DEBUG ] "election_strategy": 1,
[ceph-mon-01][DEBUG ] "epoch": 1,
[ceph-mon-01][DEBUG ] "features": {
[ceph-mon-01][DEBUG ] "optional": [],
[ceph-mon-01][DEBUG ] "persistent": [
[ceph-mon-01][DEBUG ] "kraken",
[ceph-mon-01][DEBUG ] "luminous",
[ceph-mon-01][DEBUG ] "mimic",
[ceph-mon-01][DEBUG ] "osdmap-prune",
[ceph-mon-01][DEBUG ] "nautilus",
[ceph-mon-01][DEBUG ] "octopus",
[ceph-mon-01][DEBUG ] "pacific",
[ceph-mon-01][DEBUG ] "elector-pinging"
[ceph-mon-01][DEBUG ] ]
[ceph-mon-01][DEBUG ] },
[ceph-mon-01][DEBUG ] "fsid": "6e521054-1532-4bc8-9971-7f8ae93e8430",
[ceph-mon-01][DEBUG ] "min_mon_release": 16,
[ceph-mon-01][DEBUG ] "min_mon_release_name": "pacific",
[ceph-mon-01][DEBUG ] "modified": "2021-08-29T06:36:59.023456Z",
[ceph-mon-01][DEBUG ] "mons": [
[ceph-mon-01][DEBUG ] {
[ceph-mon-01][DEBUG ] "addr": "172.16.10.148:6789/0",
[ceph-mon-01][DEBUG ] "crush_location": "{}",
[ceph-mon-01][DEBUG ] "name": "ceph-mon-01",
[ceph-mon-01][DEBUG ] "priority": 0,
[ceph-mon-01][DEBUG ] "public_addr": "172.16.10.148:6789/0",
[ceph-mon-01][DEBUG ] "public_addrs": {
[ceph-mon-01][DEBUG ] "addrvec": [
[ceph-mon-01][DEBUG ] {
[ceph-mon-01][DEBUG ] "addr": "172.16.10.148:3300",
[ceph-mon-01][DEBUG ] "nonce": 0,
[ceph-mon-01][DEBUG ] "type": "v2"
[ceph-mon-01][DEBUG ] },
[ceph-mon-01][DEBUG ] {
[ceph-mon-01][DEBUG ] "addr": "172.16.10.148:6789",
[ceph-mon-01][DEBUG ] "nonce": 0,
[ceph-mon-01][DEBUG ] "type": "v1"
[ceph-mon-01][DEBUG ] }
[ceph-mon-01][DEBUG ] ]
[ceph-mon-01][DEBUG ] },
[ceph-mon-01][DEBUG ] "rank": 0,
[ceph-mon-01][DEBUG ] "weight": 0
[ceph-mon-01][DEBUG ] }
[ceph-mon-01][DEBUG ] ],
[ceph-mon-01][DEBUG ] "stretch_mode": false
[ceph-mon-01][DEBUG ] },
[ceph-mon-01][DEBUG ] "name": "ceph-mon-01",
[ceph-mon-01][DEBUG ] "outside_quorum": [],
[ceph-mon-01][DEBUG ] "quorum": [
[ceph-mon-01][DEBUG ] 0
[ceph-mon-01][DEBUG ] ],
[ceph-mon-01][DEBUG ] "quorum_age": 2,
[ceph-mon-01][DEBUG ] "rank": 0,
[ceph-mon-01][DEBUG ] "state": "leader",
[ceph-mon-01][DEBUG ] "stretch_mode": false,
[ceph-mon-01][DEBUG ] "sync_provider": []
[ceph-mon-01][DEBUG ] }
[ceph-mon-01][DEBUG ] ********************************************************************************
[ceph-mon-01][INFO ] monitor: mon.ceph-mon-01 is running
[ceph-mon-01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon-01.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.ceph-mon-01
ceph@ceph-mon-01's password:
[ceph-mon-01][DEBUG ] connection detected need for sudo
ceph@ceph-mon-01's password:
sudo: unable to resolve host ceph-mon-01
[ceph-mon-01][DEBUG ] connected to host: ceph-mon-01
[ceph-mon-01][DEBUG ] detect platform information from remote host
[ceph-mon-01][DEBUG ] detect machine type
[ceph-mon-01][DEBUG ] find the location of an executable
[ceph-mon-01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon-01.asok mon_status
[ceph_deploy.mon][INFO ] mon.ceph-mon-01 monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmptXvt5K
ceph@ceph-mon-01's password:
[ceph-mon-01][DEBUG ] connection detected need for sudo
ceph@ceph-mon-01's password:
sudo: unable to resolve host ceph-mon-01
[ceph-mon-01][DEBUG ] connected to host: ceph-mon-01
[ceph-mon-01][DEBUG ] detect platform information from remote host
[ceph-mon-01][DEBUG ] detect machine type
[ceph-mon-01][DEBUG ] get remote short hostname
[ceph-mon-01][DEBUG ] fetch remote file
[ceph-mon-01][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph-mon-01.asok mon_status
[ceph-mon-01][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon-01/keyring auth get client.admin
[ceph-mon-01][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon-01/keyring auth get client.bootstrap-mds
[ceph-mon-01][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon-01/keyring auth get client.bootstrap-mgr
[ceph-mon-01][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon-01/keyring auth get client.bootstrap-osd
[ceph-mon-01][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon-01/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmptXvt5K

3.6.8 Verify the ceph-mon-01 node

root@ceph-mon-01:/etc/ceph# ps -ef |grep ceph-mon
ceph 28546 1 0 14:36 ? 00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id ceph-mon-01 --setuser ceph --setgroup ceph

3.6.9 Manage the ceph-mon-01 service

3.6.9.1 Check the ceph-mon-01 service status

root@ceph-mon-01:/etc/ceph# systemctl status ceph-mon@ceph-mon-01
● ceph-mon@ceph-mon-01.service - Ceph cluster monitor daemon
Loaded: loaded (/lib/systemd/system/ceph-mon@.service; indirect; vendor preset: enabled)
Active: active (running) since Sun 2021-08-29 14:36:59 CST; 3min 58s ago
Main PID: 28546 (ceph-mon)
Tasks: 26
CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@ceph-mon-01.service
└─28546 /usr/bin/ceph-mon -f --cluster ceph --id ceph-mon-01 --setuser ceph --setgroup ceph
Aug 29 14:36:59 ceph-mon-01 systemd[1]: Started Ceph cluster monitor daemon.

3.6.9.2 Restart the ceph-mon-01 service

root@ceph-mon-01:~# systemctl restart ceph-mon@ceph-mon-01

3.7 Distribute the admin keyring to manage the cluster

3.7.1 Distribute the admin keyring to the ceph-deploy node

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy admin ceph-deploy
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy admin ceph-deploy
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f8b71aa7320>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['ceph-deploy']
[ceph_deploy.cli][INFO ] func : <function admin at 0x7f8b723ab9d0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-deploy
[ceph-deploy][DEBUG ] connection detected need for sudo
[ceph-deploy][DEBUG ] connected to host: ceph-deploy
[ceph-deploy][DEBUG ] detect platform information from remote host
[ceph-deploy][DEBUG ] detect machine type
[ceph-deploy][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

3.7.2 Grant the ceph user access to the admin keyring

root@ceph-deploy:~# setfacl -m u:ceph:rw /etc/ceph/ceph.client.admin.keyring
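
An optional check that the ACL took effect:

getfacl /etc/ceph/ceph.client.admin.keyring    # should now show a "user:ceph:rw-" entry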

3.7.3 Verify the cluster state

3.7.3.1 Verify the cluster versions

ceph@ceph-deploy:~/ceph-cluster$ ceph versions
{
"mon": {
"ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)": 1
},
"mgr": {},
"osd": {},
"mds": {},
"overall": {
"ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)": 1
}
}

3.7.3.2 Verify the cluster status

  • The ceph-mon-01 node has been added to the Ceph cluster

ceph@ceph-deploy:~/ceph-cluster$ ceph -s
  cluster:
    id:     6e521054-1532-4bc8-9971-7f8ae93e8430
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph-mon-01 (age 5m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

3.7.3.3 Resolve the HEALTH_WARN issues

3.7.3.3.1 mon is allowing insecure global_id reclaim

ceph@ceph-deploy:~/ceph-cluster$ ceph config set mon auth_allow_insecure_global_id_reclaim false
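
Optionally confirm the new value before re-checking cluster health:

ceph config get mon auth_allow_insecure_global_id_reclaim    # should print "false"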

3.7.3.3.2 OSD count 0 < osd_pool_default_size 3
  • The cluster has fewer OSDs than osd_pool_default_size (3); this warning clears once OSDs are added in 3.9

3.7.3.4 Re-check the health warnings

ceph@ceph-deploy:~/ceph-cluster$ ceph -s
  cluster:
    id:     6e521054-1532-4bc8-9971-7f8ae93e8430
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph-mon-01 (age 7m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

3.8 Deploy the manager node (ceph-mgr-01)

3.8.1 System time synchronization

root@ceph-mgr-01:~# apt -y install chrony
root@ceph-mgr-01:~# systemctl start chrony
root@ceph-mgr-01:~# systemctl enable chrony

3.8.2 Prepare the repository

3.8.2.1 Import the release key

root@ceph-mgr-01:~# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
OK

3.8.2.2 Add the repository

root@ceph-mgr-01:~# echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific $(lsb_release -cs) main" >> /etc/apt/sources.list
root@ceph-mgr-01:~# apt -y update && apt -y upgrade

3.8.3 Install ceph-mgr

3.8.3.1 Check available ceph-mgr versions

root@ceph-mgr-01:~# apt-cache madison ceph-mgr
ceph-mgr | 16.2.5-1bionic | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 Packages
ceph-mgr | 12.2.13-0ubuntu0.18.04.8 | http://mirrors.ucloud.cn/ubuntu bionic-updates/main amd64 Packages
ceph-mgr | 12.2.13-0ubuntu0.18.04.4 | http://mirrors.ucloud.cn/ubuntu bionic-security/main amd64 Packages
ceph-mgr | 12.2.4-0ubuntu1 | http://mirrors.ucloud.cn/ubuntu bionic/main amd64 Packages
ceph | 12.2.4-0ubuntu1 | http://mirrors.ucloud.cn/ubuntu bionic/main Sources
ceph | 12.2.13-0ubuntu0.18.04.4 | http://mirrors.ucloud.cn/ubuntu bionic-security/main Sources
ceph | 12.2.13-0ubuntu0.18.04.8 | http://mirrors.ucloud.cn/ubuntu bionic-updates/main Sources

3.8.3.2 Install ceph-mgr

root@ceph-mgr-01:~# apt -y install ceph-mgr

3.8.3.3 Verify the ceph-mgr version

root@ceph-mgr-01:~# ceph-mgr --version
ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)

3.8.4 Set up the ceph user

3.8.4.1 Check the ceph user

root@ceph-mgr-01:~# id ceph
uid=64045(ceph) gid=64045(ceph) groups=64045(ceph)

3.8.4.2 Set the ceph user's shell

root@ceph-mgr-01:~# usermod -s /bin/bash ceph

3.8.4.3 Set the ceph user's password

root@ceph-mgr-01:~# passwd ceph

3.8.4.4 Allow the ceph user to run privileged commands via sudo

root@ceph-mgr-01:~# echo "ceph ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

3.8.5 Initialize the ceph-mgr-01 node

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy mgr create ceph-mgr-01
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create ceph-mgr-01
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] mgr : [('ceph-mgr-01', 'ceph-mgr-01')]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fe20cd96e60>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mgr at 0x7fe20d1f70d0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph-mgr-01:ceph-mgr-01
The authenticity of host 'ceph-mgr-01 (172.16.10.225)' can't be established.
ECDSA key fingerprint is SHA256:FSg4Sx1V2Vrn4ZHy3xng6Cx0V0k8ctGe4Bh60m8IsAU.
Are you sure you want to continue connecting (yes/no)?
Warning: Permanently added 'ceph-mgr-01,172.16.10.225' (ECDSA) to the list of known hosts.
ceph@ceph-mgr-01's password:
[ceph-mgr-01][DEBUG ] connection detected need for sudo
ceph@ceph-mgr-01's password:
sudo: unable to resolve host ceph-mgr-01
[ceph-mgr-01][DEBUG ] connected to host: ceph-mgr-01
[ceph-mgr-01][DEBUG ] detect platform information from remote host
[ceph-mgr-01][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-mgr-01
[ceph-mgr-01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mgr-01][WARNIN] mgr keyring does not exist yet, creating one
[ceph-mgr-01][DEBUG ] create a keyring file
[ceph-mgr-01][DEBUG ] create path recursively if it doesn't exist
[ceph-mgr-01][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-mgr-01 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-mgr-01/keyring
[ceph-mgr-01][INFO ] Running command: sudo systemctl enable ceph-mgr@ceph-mgr-01
[ceph-mgr-01][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-mgr-01.service → /lib/systemd/system/ceph-mgr@.service.
[ceph-mgr-01][INFO ] Running command: sudo systemctl start ceph-mgr@ceph-mgr-01
[ceph-mgr-01][INFO ] Running command: sudo systemctl enable ceph.target

3.8.6 Verify the ceph-mgr-01 node

root@ceph-mgr-01:~# ps -ef |grep ceph-mgr
ceph 3585 1 4 16:00 ? 00:00:07 /usr/bin/ceph-mgr -f --cluster ceph --id ceph-mgr-01 --setuser ceph --setgroup ceph

3.8.7 Manage the ceph-mgr-01 service

3.8.7.1 Check the ceph-mgr-01 service status

root@ceph-mgr-01:~# systemctl status ceph-mgr@ceph-mgr-01
● ceph-mgr@ceph-mgr-01.service - Ceph cluster manager daemon
Loaded: loaded (/lib/systemd/system/ceph-mgr@.service; indirect; vendor preset: enabled)
Active: active (running) since Sun 2021-08-29 16:00:02 CST; 4min 45s ago
Main PID: 3585 (ceph-mgr)
Tasks: 62 (limit: 1032)
CGroup: /system.slice/system-ceph\x2dmgr.slice/ceph-mgr@ceph-mgr-01.service
└─3585 /usr/bin/ceph-mgr -f --cluster ceph --id ceph-mgr-01 --setuser ceph --setgroup ceph
Aug 29 16:00:02 ceph-mgr-01 systemd[1]: Started Ceph cluster manager daemon.
Aug 29 16:00:02 ceph-mgr-01 systemd[1]: /lib/systemd/system/ceph-mgr@.service:21: Unknown lvalue 'ProtectHostname' in section 'Service'
Aug 29 16:00:02 ceph-mgr-01 systemd[1]: /lib/systemd/system/ceph-mgr@.service:22: Unknown lvalue 'ProtectKernelLogs' in section 'Service'

3.8.7.2 Restart the ceph-mgr-01 service

root@ceph-mgr-01:~# systemctl restart ceph-mgr@ceph-mgr-01

3.8.8 Verify the cluster state

ceph@ceph-deploy:~/ceph-cluster$ ceph -s
  cluster:
    id:     6e521054-1532-4bc8-9971-7f8ae93e8430
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph-mon-01 (age 90m)
    mgr: ceph-mgr-01(active, since 68s)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

3.8.9 Verify the cluster versions

ceph@ceph-deploy:~/ceph-cluster$ ceph versions
{
"mon": {
"ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)": 1
},
"mgr": {
"ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)": 1
},
"osd": {},
"mds": {},
"overall": {
"ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)": 2
}
}

3.9 Deploy the OSD nodes

3.9.1 System time synchronization

3.9.1.1 ceph-node-01 node

root@ceph-node-01:~# apt -y install chrony
root@ceph-node-01:~# systemctl start chrony
root@ceph-node-01:~# systemctl enable chrony

3.9.1.2 ceph-node-02 node

root@ceph-node-02:~# apt -y install chrony
root@ceph-node-02:~# systemctl start chrony
root@ceph-node-02:~# systemctl enable chrony

3.9.1.3 ceph-node-03 node

root@ceph-node-03:~# apt -y install chrony
root@ceph-node-03:~# systemctl start chrony
root@ceph-node-03:~# systemctl enable chrony

3.9.2 Prepare the repositories

3.9.2.1 Import the release key

root@ceph-node-01:~# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
root@ceph-node-02:~# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
root@ceph-node-03:~# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -

3.9.2.2 Add the repository

root@ceph-node-01:~# echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific $(lsb_release -cs) main" >> /etc/apt/sources.list
root@ceph-node-02:~# echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific $(lsb_release -cs) main" >> /etc/apt/sources.list
root@ceph-node-03:~# echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific $(lsb_release -cs) main" >> /etc/apt/sources.list

3.9.2.3 Update the package index

root@ceph-node-01:~# apt -y update && apt -y upgrade
root@ceph-node-02:~# apt -y update && apt -y upgrade
root@ceph-node-03:~# apt -y update && apt -y upgrade

3.9.3 Install the common package ceph-common

root@ceph-node-01:~# apt -y install ceph-common
root@ceph-node-02:~# apt -y install ceph-common
root@ceph-node-03:~# apt -y install ceph-common

3.9.4 Set up the ceph user

3.9.4.1 Check the ceph user

root@ceph-node-01:~# id ceph
uid=64045(ceph) gid=64045(ceph) groups=64045(ceph)
root@ceph-node-02:~# id ceph
uid=64045(ceph) gid=64045(ceph) groups=64045(ceph)
root@ceph-node-03:~# id ceph
uid=64045(ceph) gid=64045(ceph) groups=64045(ceph)

3.9.4.2 Set the ceph user's shell

root@ceph-node-01:~# usermod -s /bin/bash ceph
root@ceph-node-02:~# usermod -s /bin/bash ceph
root@ceph-node-03:~# usermod -s /bin/bash ceph

3.9.4.3 Set the ceph user's password

root@ceph-node-01:~# passwd ceph
root@ceph-node-02:~# passwd ceph
root@ceph-node-03:~# passwd ceph

3.9.4.4 Allow the ceph user to run privileged commands via sudo

root@ceph-node-01:~# echo "ceph ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
root@ceph-node-02:~# echo "ceph ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
root@ceph-node-03:~# echo "ceph ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

3.9.5 Initialize the OSD nodes

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node-01 ceph-node-02 ceph-node-03

3.9.6 Install the OSD runtime environment

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy install --release pacific ceph-node-01
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy install --release pacific ceph-node-02
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy install --release pacific ceph-node-03

3.9.7 List the OSD node disks

3.9.7.1 ceph-node-01 disk list

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy disk list ceph-node-01
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy disk list ceph-node-01
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : list
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fdf5e1122d0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] host : ['ceph-node-01']
[ceph_deploy.cli][INFO ] func : <function disk at 0x7fdf5e0e8250>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
Warning: the ECDSA host key for 'ceph-node-01' differs from the key for the IP address '172.16.10.126'
Offending key for IP in /var/lib/ceph/.ssh/known_hosts:5
Matching host key in /var/lib/ceph/.ssh/known_hosts:6
Are you sure you want to continue connecting (yes/no)? yes
ceph@ceph-node-01's password:
[ceph-node-01][DEBUG ] connection detected need for sudo
Warning: the ECDSA host key for 'ceph-node-01' differs from the key for the IP address '172.16.10.126'
Offending key for IP in /var/lib/ceph/.ssh/known_hosts:5
Matching host key in /var/lib/ceph/.ssh/known_hosts:6
Are you sure you want to continue connecting (yes/no)? yes
ceph@ceph-node-01's password:
sudo: unable to resolve host ceph-osd-01
[ceph-node-01][DEBUG ] connected to host: ceph-node-01
[ceph-node-01][DEBUG ] detect platform information from remote host
[ceph-node-01][DEBUG ] detect machine type
[ceph-node-01][DEBUG ] find the location of an executable
[ceph-node-01][INFO ] Running command: sudo fdisk -l
[ceph-node-01][INFO ] Disk /dev/vda: 20 GiB, 21474836480 bytes, 41943040 sectors
[ceph-node-01][INFO ] Disk /dev/vdb: 20 GiB, 21474836480 bytes, 41943040 sectors
[ceph-node-01][INFO ] Disk /dev/vdc: 20 GiB, 21474836480 bytes, 41943040 sectors
[ceph-node-01][INFO ] Disk /dev/vdd: 20 GiB, 21474836480 bytes, 41943040 sectors

3.9.7.2 ceph-node-02 disk list

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy disk list ceph-node-02
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy disk list ceph-node-02
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : list
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa7eae632d0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] host : ['ceph-node-02']
[ceph_deploy.cli][INFO ] func : <function disk at 0x7fa7eae39250>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
ceph@ceph-node-02's password:
[ceph-node-02][DEBUG ] connection detected need for sudo
ceph@ceph-node-02's password:
sudo: unable to resolve host ceph-osd-02
[ceph-node-02][DEBUG ] connected to host: ceph-node-02
[ceph-node-02][DEBUG ] detect platform information from remote host
[ceph-node-02][DEBUG ] detect machine type
[ceph-node-02][DEBUG ] find the location of an executable
[ceph-node-02][INFO ] Running command: sudo fdisk -l
[ceph-node-02][INFO ] Disk /dev/vda: 20 GiB, 21474836480 bytes, 41943040 sectors
[ceph-node-02][INFO ] Disk /dev/vdb: 20 GiB, 21474836480 bytes, 41943040 sectors
[ceph-node-02][INFO ] Disk /dev/vdc: 20 GiB, 21474836480 bytes, 41943040 sectors
[ceph-node-02][INFO ] Disk /dev/vdd: 20 GiB, 21474836480 bytes, 41943040 sectors

3.9.7.3 ceph-node-03 disk list

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy disk list ceph-node-03
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy disk list ceph-node-03
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : list
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7febdfcf62d0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] host : ['ceph-node-03']
[ceph_deploy.cli][INFO ] func : <function disk at 0x7febdfccc250>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
ceph@ceph-node-03's password:
[ceph-node-03][DEBUG ] connection detected need for sudo
ceph@ceph-node-03's password:
sudo: unable to resolve host ceph-osd-03
[ceph-node-03][DEBUG ] connected to host: ceph-node-03
[ceph-node-03][DEBUG ] detect platform information from remote host
[ceph-node-03][DEBUG ] detect machine type
[ceph-node-03][DEBUG ] find the location of an executable
[ceph-node-03][INFO ] Running command: sudo fdisk -l
[ceph-node-03][INFO ] Disk /dev/vda: 20 GiB, 21474836480 bytes, 41943040 sectors
[ceph-node-03][INFO ] Disk /dev/vdb: 20 GiB, 21474836480 bytes, 41943040 sectors
[ceph-node-03][INFO ] Disk /dev/vdc: 20 GiB, 21474836480 bytes, 41943040 sectors
[ceph-node-03][INFO ] Disk /dev/vdd: 20 GiB, 21474836480 bytes, 41943040 sectors

3.9.8 Wipe the data disks on the OSD nodes

3.9.8.1 Wipe the ceph-node-01 data disks

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy disk zap ceph-node-01 /dev/vdb /dev/vdc  /dev/vdd
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap ceph-node-01 /dev/vdb /dev/vdc /dev/vdd
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : zap
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f84ae4f22d0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] host : ceph-node-01
[ceph_deploy.cli][INFO ] func : <function disk at 0x7f84ae4c8250>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : ['/dev/vdb', '/dev/vdc', '/dev/vdd']
[ceph_deploy.osd][DEBUG ] zapping /dev/vdb on ceph-node-01
Warning: the ECDSA host key for 'ceph-node-01' differs from the key for the IP address '172.16.10.126'
Offending key for IP in /var/lib/ceph/.ssh/known_hosts:5
Matching host key in /var/lib/ceph/.ssh/known_hosts:6
Are you sure you want to continue connecting (yes/no)? yes
ceph@ceph-node-01's password:
[ceph-node-01][DEBUG ] connection detected need for sudo
Warning: the ECDSA host key for 'ceph-node-01' differs from the key for the IP address '172.16.10.126'
Offending key for IP in /var/lib/ceph/.ssh/known_hosts:5
Matching host key in /var/lib/ceph/.ssh/known_hosts:6
Are you sure you want to continue connecting (yes/no)? yes
ceph@ceph-node-01's password:
sudo: unable to resolve host ceph-node-01
[ceph-node-01][DEBUG ] connected to host: ceph-node-01
[ceph-node-01][DEBUG ] detect platform information from remote host
[ceph-node-01][DEBUG ] detect machine type
[ceph-node-01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph-node-01][DEBUG ] zeroing last few blocks of device
[ceph-node-01][DEBUG ] find the location of an executable
[ceph-node-01][INFO ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/vdb
[ceph-node-01][WARNIN] --> Zapping: /dev/vdb
[ceph-node-01][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node-01][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/vdb bs=1M count=10 conv=fsync
[ceph-node-01][WARNIN] stderr: 10+0 records in
[ceph-node-01][WARNIN] 10+0 records out
[ceph-node-01][WARNIN] stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.203791 s, 51.5 MB/s
[ceph-node-01][WARNIN] --> Zapping successful for: <Raw Device: /dev/vdb>
[ceph_deploy.osd][DEBUG ] zapping /dev/vdc on ceph-node-01
Warning: the ECDSA host key for 'ceph-node-01' differs from the key for the IP address '172.16.10.126'
Offending key for IP in /var/lib/ceph/.ssh/known_hosts:5
Matching host key in /var/lib/ceph/.ssh/known_hosts:6
Are you sure you want to continue connecting (yes/no)? yes
ceph@ceph-node-01's password:
[ceph-node-01][DEBUG ] connection detected need for sudo
Warning: the ECDSA host key for 'ceph-node-01' differs from the key for the IP address '172.16.10.126'
Offending key for IP in /var/lib/ceph/.ssh/known_hosts:5
Matching host key in /var/lib/ceph/.ssh/known_hosts:6
Are you sure you want to continue connecting (yes/no)? yes
ceph@ceph-node-01's password:
sudo: unable to resolve host ceph-node-01
[ceph-node-01][DEBUG ] connected to host: ceph-node-01
[ceph-node-01][DEBUG ] detect platform information from remote host
[ceph-node-01][DEBUG ] detect machine type
[ceph-node-01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph-node-01][DEBUG ] zeroing last few blocks of device
[ceph-node-01][DEBUG ] find the location of an executable
[ceph-node-01][INFO ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/vdc
[ceph-node-01][WARNIN] --> Zapping: /dev/vdc
[ceph-node-01][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node-01][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/vdc bs=1M count=10 conv=fsync
[ceph-node-01][WARNIN] stderr: 10+0 records in
[ceph-node-01][WARNIN] 10+0 records out
[ceph-node-01][WARNIN] stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0703279 s, 149 MB/s
[ceph-node-01][WARNIN] --> Zapping successful for: <Raw Device: /dev/vdc>
[ceph_deploy.osd][DEBUG ] zapping /dev/vdd on ceph-node-01
Warning: the ECDSA host key for 'ceph-node-01' differs from the key for the IP address '172.16.10.126'
Offending key for IP in /var/lib/ceph/.ssh/known_hosts:5
Matching host key in /var/lib/ceph/.ssh/known_hosts:6
Are you sure you want to continue connecting (yes/no)? yes
ceph@ceph-node-01's password:
[ceph-node-01][DEBUG ] connection detected need for sudo
Warning: the ECDSA host key for 'ceph-node-01' differs from the key for the IP address '172.16.10.126'
Offending key for IP in /var/lib/ceph/.ssh/known_hosts:5
Matching host key in /var/lib/ceph/.ssh/known_hosts:6
Are you sure you want to continue connecting (yes/no)? yes
ceph@ceph-node-01's password:
sudo: unable to resolve host ceph-node-01
[ceph-node-01][DEBUG ] connected to host: ceph-node-01
[ceph-node-01][DEBUG ] detect platform information from remote host
[ceph-node-01][DEBUG ] detect machine type
[ceph-node-01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph-node-01][DEBUG ] zeroing last few blocks of device
[ceph-node-01][DEBUG ] find the location of an executable
[ceph-node-01][INFO ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/vdd
[ceph-node-01][WARNIN] --> Zapping: /dev/vdd
[ceph-node-01][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node-01][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/vdd bs=1M count=10 conv=fsync
[ceph-node-01][WARNIN] stderr: 10+0 records in
[ceph-node-01][WARNIN] 10+0 records out
[ceph-node-01][WARNIN] stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0935676 s, 112 MB/s
[ceph-node-01][WARNIN] --> Zapping successful for: <Raw Device: /dev/vdd>

3.9.8.2 Wipe the ceph-node-02 data disks

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy disk zap ceph-node-02 /dev/vdb /dev/vdc  /dev/vdd
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap ceph-node-02 /dev/vdb /dev/vdc /dev/vdd
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : zap
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fb883d762d0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] host : ceph-node-02
[ceph_deploy.cli][INFO ] func : <function disk at 0x7fb883d4c250>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : ['/dev/vdb', '/dev/vdc', '/dev/vdd']
[ceph_deploy.osd][DEBUG ] zapping /dev/vdb on ceph-node-02
ceph@ceph-node-02's password:
[ceph-node-02][DEBUG ] connection detected need for sudo
ceph@ceph-node-02's password:
sudo: unable to resolve host ceph-node-02
[ceph-node-02][DEBUG ] connected to host: ceph-node-02
[ceph-node-02][DEBUG ] detect platform information from remote host
[ceph-node-02][DEBUG ] detect machine type
[ceph-node-02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph-node-02][DEBUG ] zeroing last few blocks of device
[ceph-node-02][DEBUG ] find the location of an executable
[ceph-node-02][INFO ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/vdb
[ceph-node-02][WARNIN] --> Zapping: /dev/vdb
[ceph-node-02][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node-02][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/vdb bs=1M count=10 conv=fsync
[ceph-node-02][WARNIN] stderr: 10+0 records in
[ceph-node-02][WARNIN] 10+0 records out
[ceph-node-02][WARNIN] 10485760 bytes (10 MB, 10 MiB) copied, 0.0419579 s, 250 MB/s
[ceph-node-02][WARNIN] --> Zapping successful for: <Raw Device: /dev/vdb>
[ceph_deploy.osd][DEBUG ] zapping /dev/vdc on ceph-node-02
ceph@ceph-node-02's password:
[ceph-node-02][DEBUG ] connection detected need for sudo
ceph@ceph-node-02's password:
sudo: unable to resolve host ceph-node-02
[ceph-node-02][DEBUG ] connected to host: ceph-node-02
[ceph-node-02][DEBUG ] detect platform information from remote host
[ceph-node-02][DEBUG ] detect machine type
[ceph-node-02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph-node-02][DEBUG ] zeroing last few blocks of device
[ceph-node-02][DEBUG ] find the location of an executable
[ceph-node-02][INFO ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/vdc
[ceph-node-02][WARNIN] --> Zapping: /dev/vdc
[ceph-node-02][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node-02][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/vdc bs=1M count=10 conv=fsync
[ceph-node-02][WARNIN] --> Zapping successful for: <Raw Device: /dev/vdc>
[ceph_deploy.osd][DEBUG ] zapping /dev/vdd on ceph-node-02
ceph@ceph-node-02's password:
[ceph-node-02][DEBUG ] connection detected need for sudo
ceph@ceph-node-02's password:
sudo: unable to resolve host ceph-node-02
[ceph-node-02][DEBUG ] connected to host: ceph-node-02
[ceph-node-02][DEBUG ] detect platform information from remote host
[ceph-node-02][DEBUG ] detect machine type
[ceph-node-02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph-node-02][DEBUG ] zeroing last few blocks of device
[ceph-node-02][DEBUG ] find the location of an executable
[ceph-node-02][INFO ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/vdd
[ceph-node-02][WARNIN] --> Zapping: /dev/vdd
[ceph-node-02][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node-02][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/vdd bs=1M count=10 conv=fsync
[ceph-node-02][WARNIN] stderr: 10+0 records in
[ceph-node-02][WARNIN] 10+0 records out
[ceph-node-02][WARNIN] 10485760 bytes (10 MB, 10 MiB) copied, 0.547056 s, 19.2 MB/s
[ceph-node-02][WARNIN] --> Zapping successful for: <Raw Device: /dev/vdd>

3.9.8.3 Wipe the ceph-node-03 data disks

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy disk zap ceph-node-03 /dev/vdb /dev/vdc  /dev/vdd
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap ceph-node-03 /dev/vdb /dev/vdc /dev/vdd
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : zap
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fe6d2ba62d0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] host : ceph-node-03
[ceph_deploy.cli][INFO ] func : <function disk at 0x7fe6d2b7c250>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : ['/dev/vdb', '/dev/vdc', '/dev/vdd']
[ceph_deploy.osd][DEBUG ] zapping /dev/vdb on ceph-node-03
ceph@ceph-node-03's password:
[ceph-node-03][DEBUG ] connection detected need for sudo
ceph@ceph-node-03's password:
sudo: unable to resolve host ceph-node-03
[ceph-node-03][DEBUG ] connected to host: ceph-node-03
[ceph-node-03][DEBUG ] detect platform information from remote host
[ceph-node-03][DEBUG ] detect machine type
[ceph-node-03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph-node-03][DEBUG ] zeroing last few blocks of device
[ceph-node-03][DEBUG ] find the location of an executable
[ceph-node-03][INFO ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/vdb
[ceph-node-03][WARNIN] --> Zapping: /dev/vdb
[ceph-node-03][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node-03][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/vdb bs=1M count=10 conv=fsync
[ceph-node-03][WARNIN] --> Zapping successful for: <Raw Device: /dev/vdb>
[ceph_deploy.osd][DEBUG ] zapping /dev/vdc on ceph-node-03
ceph@ceph-node-03's password:
[ceph-node-03][DEBUG ] connection detected need for sudo
ceph@ceph-node-03's password:
sudo: unable to resolve host ceph-node-03
[ceph-node-03][DEBUG ] connected to host: ceph-node-03
[ceph-node-03][DEBUG ] detect platform information from remote host
[ceph-node-03][DEBUG ] detect machine type
[ceph-node-03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph-node-03][DEBUG ] zeroing last few blocks of device
[ceph-node-03][DEBUG ] find the location of an executable
[ceph-node-03][INFO ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/vdc
[ceph-node-03][WARNIN] --> Zapping: /dev/vdc
[ceph-node-03][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node-03][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/vdc bs=1M count=10 conv=fsync
[ceph-node-03][WARNIN] --> Zapping successful for: <Raw Device: /dev/vdc>
[ceph_deploy.osd][DEBUG ] zapping /dev/vdd on ceph-node-03
ceph@ceph-node-03's password:
[ceph-node-03][DEBUG ] connection detected need for sudo
ceph@ceph-node-03's password:
sudo: unable to resolve host ceph-node-03
[ceph-node-03][DEBUG ] connected to host: ceph-node-03
[ceph-node-03][DEBUG ] detect platform information from remote host
[ceph-node-03][DEBUG ] detect machine type
[ceph-node-03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph-node-03][DEBUG ] zeroing last few blocks of device
[ceph-node-03][DEBUG ] find the location of an executable
[ceph-node-03][INFO ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/vdd
[ceph-node-03][WARNIN] --> Zapping: /dev/vdd
[ceph-node-03][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node-03][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/vdd bs=1M count=10 conv=fsync
[ceph-node-03][WARNIN] --> Zapping successful for: <Raw Device: /dev/vdd>

3.9.9 Add the OSDs

3.9.9.1 How to add an OSD

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy osd -h
usage: ceph-deploy osd [-h] {list,create} ...

Create OSDs from a data disk on a remote host:

    ceph-deploy osd create {node} --data /path/to/device

For bluestore, optional devices can be used:

    ceph-deploy osd create {node} --data /path/to/data --block-db /path/to/db-device
    ceph-deploy osd create {node} --data /path/to/data --block-wal /path/to/wal-device
    ceph-deploy osd create {node} --data /path/to/data --block-db /path/to/db-device --block-wal /path/to/wal-device

For filestore, the journal must be specified, as well as the objectstore:

    ceph-deploy osd create {node} --filestore --data /path/to/data --journal /path/to/journal

For data devices, it can be an existing logical volume in the format of:
vg/lv, or a device. For other OSD components like wal, db, and journal, it
can be logical volume (in vg/lv format) or it must be a GPT partition.

positional arguments:
  {list,create}
    list         List OSD info from remote host(s)
    create       Create new Ceph OSD daemon by preparing and activating a
                 device

optional arguments:
  -h, --help     show this help message and exit
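
Since every node here has three identical data disks, the per-disk commands in the following subsections could equivalently be run as one loop from the deploy node (a sketch using the same osd create invocation, just iterated):

for node in ceph-node-01 ceph-node-02 ceph-node-03; do
    for dev in /dev/vdb /dev/vdc /dev/vdd; do
        ceph-deploy osd create "$node" --data "$dev"
    done
done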

3.9.9.2 Add OSDs on ceph-node-01

3.9.9.2.1 Add /dev/vdb on ceph-node-01

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy osd create ceph-node-01 --data  /dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-node-01 --data /dev/vdb
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f088b6145f0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] journal : None
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] host : ceph-node-01
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] func : <function osd at 0x7f088b6651d0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.cli][INFO ] data : /dev/vdb
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
Warning: the ECDSA host key for 'ceph-node-01' differs from the key for the IP address '172.16.10.126'
Offending key for IP in /var/lib/ceph/.ssh/known_hosts:5
Matching host key in /var/lib/ceph/.ssh/known_hosts:6
Are you sure you want to continue connecting (yes/no)? yes
ceph@ceph-node-01's password:
[ceph-node-01][DEBUG ] connection detected need for sudo
Warning: the ECDSA host key for 'ceph-node-01' differs from the key for the IP address '172.16.10.126'
Offending key for IP in /var/lib/ceph/.ssh/known_hosts:5
Matching host key in /var/lib/ceph/.ssh/known_hosts:6
Are you sure you want to continue connecting (yes/no)? yes
ceph@ceph-node-01's password:
sudo: unable to resolve host ceph-node-01
[ceph-node-01][DEBUG ] connected to host: ceph-node-01
[ceph-node-01][DEBUG ] detect platform information from remote host
[ceph-node-01][DEBUG ] detect machine type
[ceph-node-01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node-01
[ceph-node-01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node-01][WARNIN] osd keyring does not exist yet, creating one
[ceph-node-01][DEBUG ] create a keyring file
[ceph-node-01][DEBUG ] find the location of an executable
[ceph-node-01][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 43200fd2-1472-4b6c-b4ec-c384580fe631
[ceph-node-01][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-d6fc82f0-6bad-4b75-8cc5-73f86b33c9de /dev/vdb
[ceph-node-01][WARNIN] stdout: Physical volume "/dev/vdb" successfully created.
[ceph-node-01][WARNIN] stdout: Volume group "ceph-d6fc82f0-6bad-4b75-8cc5-73f86b33c9de" successfully created
[ceph-node-01][WARNIN] Running command: /sbin/lvcreate --yes -l 5119 -n osd-block-43200fd2-1472-4b6c-b4ec-c384580fe631 ceph-d6fc82f0-6bad-4b75-8cc5-73f86b33c9de
[ceph-node-01][WARNIN] stdout: Logical volume "osd-block-43200fd2-1472-4b6c-b4ec-c384580fe631" created.
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node-01][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ceph-node-01][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[ceph-node-01][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-d6fc82f0-6bad-4b75-8cc5-73f86b33c9de/osd-block-43200fd2-1472-4b6c-b4ec-c384580fe631
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[ceph-node-01][WARNIN] Running command: /bin/ln -s /dev/ceph-d6fc82f0-6bad-4b75-8cc5-73f86b33c9de/osd-block-43200fd2-1472-4b6c-b4ec-c384580fe631 /var/lib/ceph/osd/ceph-0/block
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ceph-node-01][WARNIN] stderr: 2021-09-06T14:09:11.118+0800 7f72337a7700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-node-01][WARNIN] 2021-09-06T14:09:11.118+0800 7f72337a7700 -1 AuthRegistry(0x7f722c05b408) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-node-01][WARNIN] stderr: got monmap epoch 1
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQCGsDVhhhXOAxAAD7GOy2/7BctI1wRwHK17OA==
[ceph-node-01][WARNIN] stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[ceph-node-01][WARNIN] added entity osd.0 auth(key=AQCGsDVhhhXOAxAAD7GOy2/7BctI1wRwHK17OA==)
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 43200fd2-1472-4b6c-b4ec-c384580fe631 --setuser ceph --setgroup ceph
[ceph-node-01][WARNIN] stderr: 2021-09-06T14:09:11.590+0800 7f8edfe14f00 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
[ceph-node-01][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-d6fc82f0-6bad-4b75-8cc5-73f86b33c9de/osd-block-43200fd2-1472-4b6c-b4ec-c384580fe631 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
[ceph-node-01][WARNIN] Running command: /bin/ln -snf /dev/ceph-d6fc82f0-6bad-4b75-8cc5-73f86b33c9de/osd-block-43200fd2-1472-4b6c-b4ec-c384580fe631 /var/lib/ceph/osd/ceph-0/block
[ceph-node-01][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-node-01][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-0-43200fd2-1472-4b6c-b4ec-c384580fe631
[ceph-node-01][WARNIN] stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-43200fd2-1472-4b6c-b4ec-c384580fe631.service → /lib/systemd/system/ceph-volume@.service.
[ceph-node-01][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@0
[ceph-node-01][WARNIN] stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /lib/systemd/system/ceph-osd@.service.
[ceph-node-01][WARNIN] Running command: /bin/systemctl start ceph-osd@0
[ceph-node-01][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[ceph-node-01][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[ceph-node-01][INFO ] checking OSD status...
[ceph-node-01][DEBUG ] find the location of an executable
[ceph-node-01][INFO ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node-01 is now ready for osd use.
Unhandled exception in thread started by
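
The run above shows two recoverable nuisances rather than real failures: a stale ECDSA host key in /var/lib/ceph/.ssh/known_hosts (hence the repeated confirmation and password prompts) and "sudo: unable to resolve host ceph-node-01" because the node cannot resolve its own hostname. A cleanup sketch, assuming the hostnames and IPs of this environment:

# On ceph-deploy, as the ceph user: drop the stale known_hosts entries
ssh-keygen -R ceph-node-01
ssh-keygen -R 172.16.10.126
# On ceph-node-01: let the node resolve its own hostname to silence the sudo warning
echo "127.0.1.1 ceph-node-01" | sudo tee -a /etc/hosts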

3.9.9.2.2 Add /dev/vdc on ceph-node-01


ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy osd create ceph-node-01 --data  /dev/vdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-node-01 --data /dev/vdc
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f2beaa6d5f0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] journal : None
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] host : ceph-node-01
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] func : <function osd at 0x7f2beaabe1d0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.cli][INFO ] data : /dev/vdc
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdc
Warning: the ECDSA host key for 'ceph-node-01' differs from the key for the IP address '172.16.10.126'
Offending key for IP in /var/lib/ceph/.ssh/known_hosts:5
Matching host key in /var/lib/ceph/.ssh/known_hosts:6
Are you sure you want to continue connecting (yes/no)? yes
ceph@ceph-node-01's password:
[ceph-node-01][DEBUG ] connection detected need for sudo
Warning: the ECDSA host key for 'ceph-node-01' differs from the key for the IP address '172.16.10.126'
Offending key for IP in /var/lib/ceph/.ssh/known_hosts:5
Matching host key in /var/lib/ceph/.ssh/known_hosts:6
Are you sure you want to continue connecting (yes/no)? yes
ceph@ceph-node-01's password:
sudo: unable to resolve host ceph-node-01
[ceph-node-01][DEBUG ] connected to host: ceph-node-01
[ceph-node-01][DEBUG ] detect platform information from remote host
[ceph-node-01][DEBUG ] detect machine type
[ceph-node-01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node-01
[ceph-node-01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node-01][DEBUG ] find the location of an executable
[ceph-node-01][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdc
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 000e303a-cafd-4a2b-96d8-17f556921569
[ceph-node-01][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-67d74ac1-ac6a-4712-80b8-742da52d1c1e /dev/vdc
[ceph-node-01][WARNIN] stdout: Physical volume "/dev/vdc" successfully created.
[ceph-node-01][WARNIN] stdout: Volume group "ceph-67d74ac1-ac6a-4712-80b8-742da52d1c1e" successfully created
[ceph-node-01][WARNIN] Running command: /sbin/lvcreate --yes -l 5119 -n osd-block-000e303a-cafd-4a2b-96d8-17f556921569 ceph-67d74ac1-ac6a-4712-80b8-742da52d1c1e
[ceph-node-01][WARNIN] stdout: Logical volume "osd-block-000e303a-cafd-4a2b-96d8-17f556921569" created.
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node-01][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
[ceph-node-01][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[ceph-node-01][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-67d74ac1-ac6a-4712-80b8-742da52d1c1e/osd-block-000e303a-cafd-4a2b-96d8-17f556921569
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[ceph-node-01][WARNIN] Running command: /bin/ln -s /dev/ceph-67d74ac1-ac6a-4712-80b8-742da52d1c1e/osd-block-000e303a-cafd-4a2b-96d8-17f556921569 /var/lib/ceph/osd/ceph-1/block
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
[ceph-node-01][WARNIN] stderr: 2021-09-06T14:11:34.199+0800 7f05fca57700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-node-01][WARNIN] stderr: 2021-09-06T14:11:34.199+0800 7f05fca57700 -1 AuthRegistry(0x7f05f805b408) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-node-01][WARNIN] stderr: got monmap epoch 1
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-1/keyring --create-keyring --name osd.1 --add-key AQAVsTVhp80wCxAA5nJA080MT3BjWtEzgjjwTQ==
[ceph-node-01][WARNIN] stdout: creating /var/lib/ceph/osd/ceph-1/keyring
[ceph-node-01][WARNIN] added entity osd.1 auth(key=AQAVsTVhp80wCxAA5nJA080MT3BjWtEzgjjwTQ==)
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 000e303a-cafd-4a2b-96d8-17f556921569 --setuser ceph --setgroup ceph
[ceph-node-01][WARNIN] stderr: 2021-09-06T14:11:34.675+0800 7f3cda5c7f00 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
[ceph-node-01][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdc
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-67d74ac1-ac6a-4712-80b8-742da52d1c1e/osd-block-000e303a-cafd-4a2b-96d8-17f556921569 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
[ceph-node-01][WARNIN] Running command: /bin/ln -snf /dev/ceph-67d74ac1-ac6a-4712-80b8-742da52d1c1e/osd-block-000e303a-cafd-4a2b-96d8-17f556921569 /var/lib/ceph/osd/ceph-1/block
[ceph-node-01][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
[ceph-node-01][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-1-000e303a-cafd-4a2b-96d8-17f556921569
[ceph-node-01][WARNIN] stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-1-000e303a-cafd-4a2b-96d8-17f556921569.service → /lib/systemd/system/ceph-volume@.service.
[ceph-node-01][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@1
[ceph-node-01][WARNIN] stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@1.service → /lib/systemd/system/ceph-osd@.service.
[ceph-node-01][WARNIN] Running command: /bin/systemctl start ceph-osd@1
[ceph-node-01][WARNIN] --> ceph-volume lvm activate successful for osd ID: 1
[ceph-node-01][WARNIN] --> ceph-volume lvm create successful for: /dev/vdc
[ceph-node-01][INFO ] checking OSD status...
[ceph-node-01][DEBUG ] find the location of an executable
[ceph-node-01][INFO ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node-01 is now ready for osd use.
Unhandled exception in thread started by

3.9.9.2.3 Add /dev/vdd on ceph-node-01


ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy osd create ceph-node-01 --data  /dev/vdd
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-node-01 --data /dev/vdd
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fcaa652d5f0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] journal : None
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] host : ceph-node-01
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] func : <function osd at 0x7fcaa657e1d0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.cli][INFO ] data : /dev/vdd
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdd
Warning: the ECDSA host key for 'ceph-node-01' differs from the key for the IP address '172.16.10.126'
Offending key for IP in /var/lib/ceph/.ssh/known_hosts:5
Matching host key in /var/lib/ceph/.ssh/known_hosts:6
Are you sure you want to continue connecting (yes/no)? yes
ceph@ceph-node-01's password:
[ceph-node-01][DEBUG ] connection detected need for sudo
Warning: the ECDSA host key for 'ceph-node-01' differs from the key for the IP address '172.16.10.126'
Offending key for IP in /var/lib/ceph/.ssh/known_hosts:5
Matching host key in /var/lib/ceph/.ssh/known_hosts:6
Are you sure you want to continue connecting (yes/no)? yes
ceph@ceph-node-01's password:
sudo: unable to resolve host ceph-node-01
[ceph-node-01][DEBUG ] connected to host: ceph-node-01
[ceph-node-01][DEBUG ] detect platform information from remote host
[ceph-node-01][DEBUG ] detect machine type
[ceph-node-01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node-01
[ceph-node-01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node-01][DEBUG ] find the location of an executable
[ceph-node-01][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdd
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e567608f-b6eb-4366-8626-020b6e9f8307
[ceph-node-01][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-7f4b4eda-1320-4642-a90f-4ac8d5c97815 /dev/vdd
[ceph-node-01][WARNIN] stdout: Physical volume "/dev/vdd" successfully created.
[ceph-node-01][WARNIN] stdout: Volume group "ceph-7f4b4eda-1320-4642-a90f-4ac8d5c97815" successfully created
[ceph-node-01][WARNIN] Running command: /sbin/lvcreate --yes -l 5119 -n osd-block-e567608f-b6eb-4366-8626-020b6e9f8307 ceph-7f4b4eda-1320-4642-a90f-4ac8d5c97815
[ceph-node-01][WARNIN] stdout: Logical volume "osd-block-e567608f-b6eb-4366-8626-020b6e9f8307" created.
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node-01][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
[ceph-node-01][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[ceph-node-01][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-7f4b4eda-1320-4642-a90f-4ac8d5c97815/osd-block-e567608f-b6eb-4366-8626-020b6e9f8307
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-2
[ceph-node-01][WARNIN] Running command: /bin/ln -s /dev/ceph-7f4b4eda-1320-4642-a90f-4ac8d5c97815/osd-block-e567608f-b6eb-4366-8626-020b6e9f8307 /var/lib/ceph/osd/ceph-2/block
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
[ceph-node-01][WARNIN] stderr: 2021-09-06T14:11:57.199+0800 7fdbc8835700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-node-01][WARNIN] stderr:
[ceph-node-01][WARNIN] stderr: 2021-09-06T14:11:57.199+0800 7fdbc8835700 -1 AuthRegistry(0x7fdbc405b408) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-node-01][WARNIN] stderr:
[ceph-node-01][WARNIN] stderr: got monmap epoch 1
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-2/keyring --create-keyring --name osd.2 --add-key AQAssTVhYU6MCBAArtQYqkR0H2yxD9qnc4CXog==
[ceph-node-01][WARNIN] stdout: creating /var/lib/ceph/osd/ceph-2/keyring
[ceph-node-01][WARNIN] added entity osd.2 auth(key=AQAssTVhYU6MCBAArtQYqkR0H2yxD9qnc4CXog==)
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid e567608f-b6eb-4366-8626-020b6e9f8307 --setuser ceph --setgroup ceph
[ceph-node-01][WARNIN] stderr: 2021-09-06T14:11:57.647+0800 7fecf00d6f00 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
[ceph-node-01][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdd
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-7f4b4eda-1320-4642-a90f-4ac8d5c97815/osd-block-e567608f-b6eb-4366-8626-020b6e9f8307 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
[ceph-node-01][WARNIN] Running command: /bin/ln -snf /dev/ceph-7f4b4eda-1320-4642-a90f-4ac8d5c97815/osd-block-e567608f-b6eb-4366-8626-020b6e9f8307 /var/lib/ceph/osd/ceph-2/block
[ceph-node-01][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-2
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[ceph-node-01][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-2-e567608f-b6eb-4366-8626-020b6e9f8307
[ceph-node-01][WARNIN] stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-2-e567608f-b6eb-4366-8626-020b6e9f8307.service → /lib/systemd/system/ceph-volume@.service.
[ceph-node-01][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@2
[ceph-node-01][WARNIN] stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@2.service → /lib/systemd/system/ceph-osd@.service.
[ceph-node-01][WARNIN] Running command: /bin/systemctl start ceph-osd@2
[ceph-node-01][WARNIN] --> ceph-volume lvm activate successful for osd ID: 2
[ceph-node-01][WARNIN] --> ceph-volume lvm create successful for: /dev/vdd
[ceph-node-01][INFO ] checking OSD status...
[ceph-node-01][DEBUG ] find the location of an executable
[ceph-node-01][INFO ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node-01 is now ready for osd use.

3.9.9.2.4 Verify the OSD services on ceph-node-01

root@ceph-node-01:~# ps -ef | grep ceph
root 1158 1 0 14:24 ? 00:00:00 /usr/bin/python3.6 /usr/bin/ceph-crash
ceph 1676 1 0 14:24 ? 00:00:06 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
ceph 1677 1 0 14:24 ? 00:00:06 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
ceph 1678 1 0 14:24 ? 00:00:06 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
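
Besides checking the daemon processes, the OSD-to-device mapping can be confirmed on the node itself, since ceph-volume records which volume group and logical volume back each OSD id. A quick check (sketch):

root@ceph-node-01:~# ceph-volume lvm list                # shows osd.0/1/2 with their backing LVs and block devices
root@ceph-node-01:~# lsblk /dev/vdb /dev/vdc /dev/vdd    # the LVM layout created by ceph-volume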

3.9.9.2.5 Manage the OSD services on ceph-node-01

root@ceph-node-01:~# systemctl status ceph-osd@0
root@ceph-node-01:~# systemctl status ceph-osd@1
root@ceph-node-01:~# systemctl status ceph-osd@2
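
systemctl status only reports state; the same ceph-osd@<id> units can be restarted or stopped individually, or all at once through ceph-osd.target. A few common operations (sketch):

root@ceph-node-01:~# systemctl restart ceph-osd@0        # restart a single OSD, e.g. after a config change
root@ceph-node-01:~# systemctl stop ceph-osd@1           # take one OSD down for maintenance
root@ceph-node-01:~# systemctl restart ceph-osd.target   # act on every OSD daemon on this node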

3.9.9.2.6 Verify the Ceph cluster


ceph@ceph-deploy:~/ceph-cluster$ ceph -s
  cluster:
    id:     6e521054-1532-4bc8-9971-7f8ae93e8430
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive
            Degraded data redundancy: 1 pg undersized

  services:
    mon: 1 daemons, quorum ceph-mon-01 (age 14m)
    mgr: ceph-mgr-01(active, since 7d)
    osd: 3 osds: 3 up (since 106s), 3 in (since 14m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   16 MiB used, 60 GiB / 60 GiB avail
    pgs:     100.000% pgs not active
             1 undersized+peered
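
The HEALTH_WARN at this point is expected rather than a fault: with Ceph's default replicated size of 3 and a host-level failure domain, a single OSD host cannot hold three copies on distinct hosts, so the lone PG stays undersized+peered until more nodes are added. The warning can be inspected with (sketch):

ceph@ceph-deploy:~/ceph-cluster$ ceph health detail        # expands the warning and names the affected PG
ceph@ceph-deploy:~/ceph-cluster$ ceph pg stat              # one-line summary of PG states
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool ls detail   # confirms the pool's size/min_size settings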

3.9.9.2.7 View the mapping between ceph-node-01 and its OSDs


ceph@ceph-deploy:~/ceph-cluster$ ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.17537 root default
-3 0.05846 host ceph-node-01
0 hdd 0.01949 osd.0 up 1.00000 1.00000
1 hdd 0.01949 osd.1 up 1.00000 1.00000
2 hdd 0.01949 osd.2 up 1.00000 1.00000

3.9.9.3 Add OSDs on the ceph-node-02 node

3.9.9.3.1 Add /dev/vdb on ceph-node-02


ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy osd create ceph-node-02 --data  /dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-node-02 --data /dev/vdb
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fbfa5f845f0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] journal : None
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] host : ceph-node-02
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] func : <function osd at 0x7fbfa5fd51d0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.cli][INFO ] data : /dev/vdb
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
ceph@ceph-node-02's password:
[ceph-node-02][DEBUG ] connection detected need for sudo
ceph@ceph-node-02's password:
sudo: unable to resolve host ceph-node-02
[ceph-node-02][DEBUG ] connected to host: ceph-node-02
[ceph-node-02][DEBUG ] detect platform information from remote host
[ceph-node-02][DEBUG ] detect machine type
[ceph-node-02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node-02
[ceph-node-02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node-02][WARNIN] osd keyring does not exist yet, creating one
[ceph-node-02][DEBUG ] create a keyring file
[ceph-node-02][DEBUG ] find the location of an executable
[ceph-node-02][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph-node-02][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node-02][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 888ad7db-c301-4fbc-8718-e6c2e03e409a
[ceph-node-02][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-59bf707f-e80a-4f7f-b45c-9a5be15a6d3e /dev/vdb
[ceph-node-02][WARNIN] stdout: Physical volume "/dev/vdb" successfully created.
[ceph-node-02][WARNIN] stdout: Volume group "ceph-59bf707f-e80a-4f7f-b45c-9a5be15a6d3e" successfully created
[ceph-node-02][WARNIN] Running command: /sbin/lvcreate --yes -l 5119 -n osd-block-888ad7db-c301-4fbc-8718-e6c2e03e409a ceph-59bf707f-e80a-4f7f-b45c-9a5be15a6d3e
[ceph-node-02][WARNIN] stdout: Logical volume "osd-block-888ad7db-c301-4fbc-8718-e6c2e03e409a" created.
[ceph-node-02][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node-02][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
[ceph-node-02][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[ceph-node-02][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-59bf707f-e80a-4f7f-b45c-9a5be15a6d3e/osd-block-888ad7db-c301-4fbc-8718-e6c2e03e409a
[ceph-node-02][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[ceph-node-02][WARNIN] Running command: /bin/ln -s /dev/ceph-59bf707f-e80a-4f7f-b45c-9a5be15a6d3e/osd-block-888ad7db-c301-4fbc-8718-e6c2e03e409a /var/lib/ceph/osd/ceph-3/block
[ceph-node-02][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap
[ceph-node-02][WARNIN] stderr: 2021-09-06T14:27:24.707+0800 7fea119a7700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-node-02][WARNIN] stderr:
[ceph-node-02][WARNIN] stderr: 2021-09-06T14:27:24.707+0800 7fea119a7700 -1 AuthRegistry(0x7fea0c05b408) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-node-02][WARNIN] stderr:
[ceph-node-02][WARNIN] stderr: got monmap epoch 1
[ceph-node-02][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-3/keyring --create-keyring --name osd.3 --add-key AQDLtDVhbgzOGhAAuxYy1eISS2jAgWNQMKp+Cg==
[ceph-node-02][WARNIN] stdout: creating /var/lib/ceph/osd/ceph-3/keyring
[ceph-node-02][WARNIN] stdout: added entity osd.3 auth(key=AQDLtDVhbgzOGhAAuxYy1eISS2jAgWNQMKp+Cg==)
[ceph-node-02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
[ceph-node-02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
[ceph-node-02][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid 888ad7db-c301-4fbc-8718-e6c2e03e409a --setuser ceph --setgroup ceph
[ceph-node-02][WARNIN] stderr: 2021-09-06T14:27:25.539+0800 7f9134f15f00 -1 bluestore(/var/lib/ceph/osd/ceph-3/) _read_fsid unparsable uuid
[ceph-node-02][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[ceph-node-02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
[ceph-node-02][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-59bf707f-e80a-4f7f-b45c-9a5be15a6d3e/osd-block-888ad7db-c301-4fbc-8718-e6c2e03e409a --path /var/lib/ceph/osd/ceph-3 --no-mon-config
[ceph-node-02][WARNIN] Running command: /bin/ln -snf /dev/ceph-59bf707f-e80a-4f7f-b45c-9a5be15a6d3e/osd-block-888ad7db-c301-4fbc-8718-e6c2e03e409a /var/lib/ceph/osd/ceph-3/block
[ceph-node-02][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
[ceph-node-02][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[ceph-node-02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
[ceph-node-02][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-3-888ad7db-c301-4fbc-8718-e6c2e03e409a
[ceph-node-02][WARNIN] stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-3-888ad7db-c301-4fbc-8718-e6c2e03e409a.service → /lib/systemd/system/ceph-volume@.service.
[ceph-node-02][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@3
[ceph-node-02][WARNIN] stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@3.service → /lib/systemd/system/ceph-osd@.service.
[ceph-node-02][WARNIN] Running command: /bin/systemctl start ceph-osd@3
[ceph-node-02][WARNIN] --> ceph-volume lvm activate successful for osd ID: 3
[ceph-node-02][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[ceph-node-02][INFO ] checking OSD status...
[ceph-node-02][DEBUG ] find the location of an executable
[ceph-node-02][INFO ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node-02 is now ready for osd use.

3.9.9.3.2 Add /dev/vdc on ceph-node-02


[2021-09-06 14:28:53,241][ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[2021-09-06 14:28:53,242][ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-node-02 --data /dev/vdc
[2021-09-06 14:28:53,242][ceph_deploy.cli][INFO ] ceph-deploy options:
[2021-09-06 14:28:53,242][ceph_deploy.cli][INFO ] verbose : False
[2021-09-06 14:28:53,243][ceph_deploy.cli][INFO ] bluestore : None
[2021-09-06 14:28:53,243][ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fca5b04c5f0>
[2021-09-06 14:28:53,243][ceph_deploy.cli][INFO ] cluster : ceph
[2021-09-06 14:28:53,243][ceph_deploy.cli][INFO ] fs_type : xfs
[2021-09-06 14:28:53,244][ceph_deploy.cli][INFO ] block_wal : None
[2021-09-06 14:28:53,244][ceph_deploy.cli][INFO ] default_release : False
[2021-09-06 14:28:53,244][ceph_deploy.cli][INFO ] username : None
[2021-09-06 14:28:53,245][ceph_deploy.cli][INFO ] journal : None
[2021-09-06 14:28:53,245][ceph_deploy.cli][INFO ] subcommand : create
[2021-09-06 14:28:53,245][ceph_deploy.cli][INFO ] host : ceph-node-02
[2021-09-06 14:28:53,245][ceph_deploy.cli][INFO ] filestore : None
[2021-09-06 14:28:53,245][ceph_deploy.cli][INFO ] func : <function osd at 0x7fca5b09d1d0>
[2021-09-06 14:28:53,246][ceph_deploy.cli][INFO ] ceph_conf : None
[2021-09-06 14:28:53,246][ceph_deploy.cli][INFO ] zap_disk : False
[2021-09-06 14:28:53,246][ceph_deploy.cli][INFO ] data : /dev/vdc
[2021-09-06 14:28:53,246][ceph_deploy.cli][INFO ] block_db : None
[2021-09-06 14:28:53,247][ceph_deploy.cli][INFO ] dmcrypt : False
[2021-09-06 14:28:53,247][ceph_deploy.cli][INFO ] overwrite_conf : False
[2021-09-06 14:28:53,247][ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[2021-09-06 14:28:53,247][ceph_deploy.cli][INFO ] quiet : False
[2021-09-06 14:28:53,248][ceph_deploy.cli][INFO ] debug : False
[2021-09-06 14:28:53,248][ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdc
[2021-09-06 14:28:56,253][ceph-node-02][DEBUG ] connection detected need for sudo
[2021-09-06 14:28:59,114][ceph-node-02][DEBUG ] connected to host: ceph-node-02
[2021-09-06 14:28:59,115][ceph-node-02][DEBUG ] detect platform information from remote host
[2021-09-06 14:28:59,132][ceph-node-02][DEBUG ] detect machine type
[2021-09-06 14:28:59,137][ceph-node-02][DEBUG ] find the location of an executable
[2021-09-06 14:28:59,138][ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[2021-09-06 14:28:59,138][ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node-02
[2021-09-06 14:28:59,139][ceph-node-02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[2021-09-06 14:28:59,142][ceph-node-02][DEBUG ] find the location of an executable
[2021-09-06 14:28:59,144][ceph-node-02][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdc
[2021-09-06 14:29:03,927][ceph-node-02][WARNING] Running command: /usr/bin/ceph-authtool --gen-print-key
[2021-09-06 14:29:03,928][ceph-node-02][WARNING] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 53615988-6019-4788-a016-cd1a9689dc2d
[2021-09-06 14:29:03,928][ceph-node-02][WARNING] Running command: /sbin/vgcreate --force --yes ceph-926c5d4c-9ce8-49bd-87d1-7fb156efa45f /dev/vdc
[2021-09-06 14:29:03,928][ceph-node-02][WARNING] stdout: Physical volume "/dev/vdc" successfully created.
[2021-09-06 14:29:03,928][ceph-node-02][WARNING] stdout: Volume group "ceph-926c5d4c-9ce8-49bd-87d1-7fb156efa45f" successfully created
[2021-09-06 14:29:03,929][ceph-node-02][WARNING] Running command: /sbin/lvcreate --yes -l 5119 -n osd-block-53615988-6019-4788-a016-cd1a9689dc2d ceph-926c5d4c-9ce8-49bd-87d1-7fb156efa45f
[2021-09-06 14:29:03,929][ceph-node-02][WARNING] stdout: Logical volume "osd-block-53615988-6019-4788-a016-cd1a9689dc2d" created.
[2021-09-06 14:29:03,929][ceph-node-02][WARNING] Running command: /usr/bin/ceph-authtool --gen-print-key
[2021-09-06 14:29:03,929][ceph-node-02][WARNING] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-4
[2021-09-06 14:29:03,930][ceph-node-02][WARNING] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[2021-09-06 14:29:03,930][ceph-node-02][WARNING] Running command: /bin/chown -h ceph:ceph /dev/ceph-926c5d4c-9ce8-49bd-87d1-7fb156efa45f/osd-block-53615988-6019-4788-a016-cd1a9689dc2d
[2021-09-06 14:29:03,930][ceph-node-02][WARNING] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[2021-09-06 14:29:03,930][ceph-node-02][WARNING] Running command: /bin/ln -s /dev/ceph-926c5d4c-9ce8-49bd-87d1-7fb156efa45f/osd-block-53615988-6019-4788-a016-cd1a9689dc2d /var/lib/ceph/osd/ceph-4/block
[2021-09-06 14:29:03,931][ceph-node-02][WARNING] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-4/activate.monmap
[2021-09-06 14:29:03,931][ceph-node-02][WARNING] stderr: 2021-09-06T14:29:00.591+0800 7fc40a1d8700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[2021-09-06 14:29:03,931][ceph-node-02][WARNING] stderr:
[2021-09-06 14:29:03,931][ceph-node-02][WARNING] stderr: 2021-09-06T14:29:00.591+0800 7fc40a1d8700 -1 AuthRegistry(0x7fc40405b408) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[2021-09-06 14:29:03,931][ceph-node-02][WARNING] stderr:
[2021-09-06 14:29:03,932][ceph-node-02][WARNING] stderr: got monmap epoch 1
[2021-09-06 14:29:03,932][ceph-node-02][WARNING] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-4/keyring --create-keyring --name osd.4 --add-key AQArtTVh67tKIxAA9fR3RaP9tHE2LQ0dHABKqw==
[2021-09-06 14:29:03,932][ceph-node-02][WARNING] stdout: creating /var/lib/ceph/osd/ceph-4/keyring
[2021-09-06 14:29:03,932][ceph-node-02][WARNING] stdout: added entity osd.4 auth(key=AQArtTVh67tKIxAA9fR3RaP9tHE2LQ0dHABKqw==)
[2021-09-06 14:29:03,933][ceph-node-02][WARNING] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/keyring
[2021-09-06 14:29:03,933][ceph-node-02][WARNING] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/
[2021-09-06 14:29:03,933][ceph-node-02][WARNING] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 4 --monmap /var/lib/ceph/osd/ceph-4/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-4/ --osd-uuid 53615988-6019-4788-a016-cd1a9689dc2d --setuser ceph --setgroup ceph
[2021-09-06 14:29:03,933][ceph-node-02][WARNING] stderr: 2021-09-06T14:29:01.039+0800 7f1426156f00 -1 bluestore(/var/lib/ceph/osd/ceph-4/) _read_fsid unparsable uuid
[2021-09-06 14:29:03,934][ceph-node-02][WARNING] --> ceph-volume lvm prepare successful for: /dev/vdc
[2021-09-06 14:29:03,934][ceph-node-02][WARNING] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
[2021-09-06 14:29:03,934][ceph-node-02][WARNING] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-926c5d4c-9ce8-49bd-87d1-7fb156efa45f/osd-block-53615988-6019-4788-a016-cd1a9689dc2d --path /var/lib/ceph/osd/ceph-4 --no-mon-config
[2021-09-06 14:29:03,934][ceph-node-02][WARNING] Running command: /bin/ln -snf /dev/ceph-926c5d4c-9ce8-49bd-87d1-7fb156efa45f/osd-block-53615988-6019-4788-a016-cd1a9689dc2d /var/lib/ceph/osd/ceph-4/block
[2021-09-06 14:29:03,934][ceph-node-02][WARNING] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block
[2021-09-06 14:29:03,935][ceph-node-02][WARNING] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[2021-09-06 14:29:03,935][ceph-node-02][WARNING] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
[2021-09-06 14:29:03,935][ceph-node-02][WARNING] Running command: /bin/systemctl enable ceph-volume@lvm-4-53615988-6019-4788-a016-cd1a9689dc2d
[2021-09-06 14:29:03,935][ceph-node-02][WARNING] stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-4-53615988-6019-4788-a016-cd1a9689dc2d.service → /lib/systemd/system/ceph-volume@.service.
[2021-09-06 14:29:03,935][ceph-node-02][WARNING] Running command: /bin/systemctl enable --runtime ceph-osd@4
[2021-09-06 14:29:03,936][ceph-node-02][WARNING] stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@4.service → /lib/systemd/system/ceph-osd@.service.
[2021-09-06 14:29:03,936][ceph-node-02][WARNING] Running command: /bin/systemctl start ceph-osd@4
[2021-09-06 14:29:03,936][ceph-node-02][WARNING] --> ceph-volume lvm activate successful for osd ID: 4
[2021-09-06 14:29:03,936][ceph-node-02][WARNING] --> ceph-volume lvm create successful for: /dev/vdc
[2021-09-06 14:29:08,973][ceph-node-02][INFO ] checking OSD status...
[2021-09-06 14:29:08,974][ceph-node-02][DEBUG ] find the location of an executable
[2021-09-06 14:29:08,977][ceph-node-02][INFO ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[2021-09-06 14:29:09,142][ceph_deploy.osd][DEBUG ] Host ceph-node-02 is now ready for osd use.

3.9.9.3.3 Add /dev/vdd on ceph-node-02


ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy osd create ceph-node-02 --data  /dev/vdd
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-node-02 --data /dev/vdd
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa47ec245f0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] journal : None
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] host : ceph-node-02
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] func : <function osd at 0x7fa47ec751d0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.cli][INFO ] data : /dev/vdd
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdd
ceph@ceph-node-02's password:
[ceph-node-02][DEBUG ] connection detected need for sudo
ceph@ceph-node-02's password:
sudo: unable to resolve host ceph-node-02
[ceph-node-02][DEBUG ] connected to host: ceph-node-02
[ceph-node-02][DEBUG ] detect platform information from remote host
[ceph-node-02][DEBUG ] detect machine type
[ceph-node-02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node-02
[ceph-node-02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node-02][DEBUG ] find the location of an executable
[ceph-node-02][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdd
[ceph-node-02][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node-02][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e945f7fe-ecee-4441-9594-c8449c3fa807
[ceph-node-02][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-1689d9a0-4990-4a40-840b-2be6da21b840 /dev/vdd
[ceph-node-02][WARNIN] stdout: Physical volume "/dev/vdd" successfully created.
[ceph-node-02][WARNIN] stdout: Volume group "ceph-1689d9a0-4990-4a40-840b-2be6da21b840" successfully created
[ceph-node-02][WARNIN] Running command: /sbin/lvcreate --yes -l 5119 -n osd-block-e945f7fe-ecee-4441-9594-c8449c3fa807 ceph-1689d9a0-4990-4a40-840b-2be6da21b840
[ceph-node-02][WARNIN] stdout: Logical volume "osd-block-e945f7fe-ecee-4441-9594-c8449c3fa807" created.
[ceph-node-02][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node-02][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-5
[ceph-node-02][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[ceph-node-02][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-1689d9a0-4990-4a40-840b-2be6da21b840/osd-block-e945f7fe-ecee-4441-9594-c8449c3fa807
[ceph-node-02][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-2
[ceph-node-02][WARNIN] Running command: /bin/ln -s /dev/ceph-1689d9a0-4990-4a40-840b-2be6da21b840/osd-block-e945f7fe-ecee-4441-9594-c8449c3fa807 /var/lib/ceph/osd/ceph-5/block
[ceph-node-02][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-5/activate.monmap
[ceph-node-02][WARNIN] stderr: 2021-09-06T14:30:56.179+0800 7f09324cc700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-node-02][WARNIN] stderr:
[ceph-node-02][WARNIN] stderr: 2021-09-06T14:30:56.179+0800 7f09324cc700 -1 AuthRegistry(0x7f092c05b408) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-node-02][WARNIN] stderr:
[ceph-node-02][WARNIN] stderr: got monmap epoch 1
[ceph-node-02][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-5/keyring --create-keyring --name osd.5 --add-key AQCftTVhcVw0ChAAH5oEXBvQ8VEl51tTBJ7gMQ==
[ceph-node-02][WARNIN] stdout: creating /var/lib/ceph/osd/ceph-5/keyring
[ceph-node-02][WARNIN] stdout: added entity osd.5 auth(key=AQCftTVhcVw0ChAAH5oEXBvQ8VEl51tTBJ7gMQ==)
[ceph-node-02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5/keyring
[ceph-node-02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5/
[ceph-node-02][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 5 --monmap /var/lib/ceph/osd/ceph-5/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-5/ --osd-uuid e945f7fe-ecee-4441-9594-c8449c3fa807 --setuser ceph --setgroup ceph
[ceph-node-02][WARNIN] stderr: 2021-09-06T14:30:56.651+0800 7f597d309f00 -1 bluestore(/var/lib/ceph/osd/ceph-5/) _read_fsid unparsable uuid
[ceph-node-02][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdd
[ceph-node-02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5
[ceph-node-02][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-1689d9a0-4990-4a40-840b-2be6da21b840/osd-block-e945f7fe-ecee-4441-9594-c8449c3fa807 --path /var/lib/ceph/osd/ceph-5 --no-mon-config
[ceph-node-02][WARNIN] Running command: /bin/ln -snf /dev/ceph-1689d9a0-4990-4a40-840b-2be6da21b840/osd-block-e945f7fe-ecee-4441-9594-c8449c3fa807 /var/lib/ceph/osd/ceph-5/block
[ceph-node-02][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-5/block
[ceph-node-02][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-2
[ceph-node-02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5
[ceph-node-02][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-5-e945f7fe-ecee-4441-9594-c8449c3fa807
[ceph-node-02][WARNIN] stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-5-e945f7fe-ecee-4441-9594-c8449c3fa807.service → /lib/systemd/system/ceph-volume@.service.
[ceph-node-02][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@5
[ceph-node-02][WARNIN] stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@5.service → /lib/systemd/system/ceph-osd@.service.
[ceph-node-02][WARNIN] Running command: /bin/systemctl start ceph-osd@5
[ceph-node-02][WARNIN] --> ceph-volume lvm activate successful for osd ID: 5
[ceph-node-02][WARNIN] --> ceph-volume lvm create successful for: /dev/vdd
[ceph-node-02][INFO ] checking OSD status...
[ceph-node-02][DEBUG ] find the location of an executable
[ceph-node-02][INFO ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node-02 is now ready for osd use.

3.9.9.3.4 Verify the OSD services on ceph-node-02

root@ceph-node-02:~# ps -ef | grep ceph
root 1027 1 0 Aug29 ? 00:00:00 /usr/bin/python3.6 /usr/bin/ceph-crash
ceph 23273 1 0 14:27 ? 00:00:08 /usr/bin/ceph-osd -f --cluster ceph --id 3 --setuser ceph --setgroup ceph
ceph 25079 1 0 14:29 ? 00:00:08 /usr/bin/ceph-osd -f --cluster ceph --id 4 --setuser ceph --setgroup ceph
ceph 27079 1 0 14:30 ? 00:00:08 /usr/bin/ceph-osd -f --cluster ceph --id 5 --setuser ceph --setgroup ceph

3.9.9.3.5 Manage the OSD services on ceph-node-02

root@ceph-node-02:~# systemctl status ceph-osd@3
root@ceph-node-02:~# systemctl status ceph-osd@4
root@ceph-node-02:~# systemctl status ceph-osd@5

3.9.9.3.6 Verify the Ceph cluster


ceph@ceph-deploy:~/ceph-cluster$ ceph -s
  cluster:
    id:     6e521054-1532-4bc8-9971-7f8ae93e8430
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-mon-01 (age 20m)
    mgr: ceph-mgr-01(active, since 7d)
    osd: 6 osds: 6 up (since 119s), 6 in (since 2m); 1 remapped pgs

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   34 MiB used, 120 GiB / 120 GiB avail
    pgs:     1 active+clean+remapped
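
The single active+clean+remapped PG just means CRUSH is re-placing its copies now that a second host exists; it normally clears on its own after a short time. It can be watched with (sketch):

ceph@ceph-deploy:~/ceph-cluster$ ceph -s              # re-run until the remapped state disappears
ceph@ceph-deploy:~/ceph-cluster$ ceph pg dump_stuck   # lists any PGs that remain in a non-clean state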

3.9.9.3.7 View the mapping between ceph-node-02 and its OSDs


ceph@ceph-deploy:~/ceph-cluster$ ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.17537 root default
-3 0.05846 host ceph-node-01
0 hdd 0.01949 osd.0 up 1.00000 1.00000
1 hdd 0.01949 osd.1 up 1.00000 1.00000
2 hdd 0.01949 osd.2 up 1.00000 1.00000
-5 0.05846 host ceph-node-02
3 hdd 0.01949 osd.3 up 1.00000 1.00000
4 hdd 0.01949 osd.4 up 1.00000 1.00000
5 hdd 0.01949 osd.5 up 1.00000 1.00000
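
ceph osd tree only shows topology and CRUSH weight (here roughly 0.019 per OSD, matching the ~20 GiB logical volumes created above). For capacity and utilization per OSD grouped by host, ceph osd df tree is a useful complement (sketch):

ceph@ceph-deploy:~/ceph-cluster$ ceph osd df tree     # weight, size, use% and PG count per OSD, grouped by host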

3.9.9.4 Add OSDs on the ceph-node-03 node

3.9.9.4.1 Add /dev/vdb on ceph-node-03


ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy osd create ceph-node-03 --data  /dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-node-03 --data /dev/vdb
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f96ca9e45f0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] journal : None
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] host : ceph-node-03
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] func : <function osd at 0x7f96caa351d0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.cli][INFO ] data : /dev/vdb
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
ceph@ceph-node-03's password:
[ceph-node-03][DEBUG ] connection detected need for sudo
ceph@ceph-node-03's password:
sudo: unable to resolve host ceph-node-03
[ceph-node-03][DEBUG ] connected to host: ceph-node-03
[ceph-node-03][DEBUG ] detect platform information from remote host
[ceph-node-03][DEBUG ] detect machine type
[ceph-node-03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node-03
[ceph-node-03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node-03][WARNIN] osd keyring does not exist yet, creating one
[ceph-node-03][DEBUG ] create a keyring file
[ceph-node-03][DEBUG ] find the location of an executable
[ceph-node-03][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 860c096c-fe10-4eec-b64c-d1ace0c7a99e
[ceph-node-03][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-bf514bfe-b342-4171-ac98-2d2b8b716329 /dev/vdb
[ceph-node-03][WARNIN] stdout: Physical volume "/dev/vdb" successfully created.
[ceph-node-03][WARNIN] stdout: Volume group "ceph-bf514bfe-b342-4171-ac98-2d2b8b716329" successfully created
[ceph-node-03][WARNIN] Running command: /sbin/lvcreate --yes -l 5119 -n osd-block-860c096c-fe10-4eec-b64c-d1ace0c7a99e ceph-bf514bfe-b342-4171-ac98-2d2b8b716329
[ceph-node-03][WARNIN] stdout: Logical volume "osd-block-860c096c-fe10-4eec-b64c-d1ace0c7a99e" created.
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node-03][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-6
[ceph-node-03][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[ceph-node-03][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-bf514bfe-b342-4171-ac98-2d2b8b716329/osd-block-860c096c-fe10-4eec-b64c-d1ace0c7a99e
[ceph-node-03][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[ceph-node-03][WARNIN] Running command: /bin/ln -s /dev/ceph-bf514bfe-b342-4171-ac98-2d2b8b716329/osd-block-860c096c-fe10-4eec-b64c-d1ace0c7a99e /var/lib/ceph/osd/ceph-6/block
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-6/activate.monmap
[ceph-node-03][WARNIN] stderr: 2021-09-06T14:34:02.683+0800 7f21109fd700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-node-03][WARNIN] stderr:
[ceph-node-03][WARNIN] stderr: 2021-09-06T14:34:02.683+0800 7f21109fd700 -1 AuthRegistry(0x7f210c05b408) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-node-03][WARNIN] stderr:
[ceph-node-03][WARNIN] stderr: got monmap epoch 1
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-6/keyring --create-keyring --name osd.6 --add-key AQBZtjVhSeWQGRAAKEt6qSw0JhaS1cXHsH7ZzQ==
[ceph-node-03][WARNIN] stdout: creating /var/lib/ceph/osd/ceph-6/keyring
[ceph-node-03][WARNIN] stdout: added entity osd.6 auth(key=AQBZtjVhSeWQGRAAKEt6qSw0JhaS1cXHsH7ZzQ==)
[ceph-node-03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-6/keyring
[ceph-node-03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-6/
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 6 --monmap /var/lib/ceph/osd/ceph-6/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-6/ --osd-uuid 860c096c-fe10-4eec-b64c-d1ace0c7a99e --setuser ceph --setgroup ceph
[ceph-node-03][WARNIN] stderr: 2021-09-06T14:34:03.527+0800 7ff3d6c17f00 -1 bluestore(/var/lib/ceph/osd/ceph-6/) _read_fsid unparsable uuid
[ceph-node-03][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[ceph-node-03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-6
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-bf514bfe-b342-4171-ac98-2d2b8b716329/osd-block-860c096c-fe10-4eec-b64c-d1ace0c7a99e --path /var/lib/ceph/osd/ceph-6 --no-mon-config
[ceph-node-03][WARNIN] Running command: /bin/ln -snf /dev/ceph-bf514bfe-b342-4171-ac98-2d2b8b716329/osd-block-860c096c-fe10-4eec-b64c-d1ace0c7a99e /var/lib/ceph/osd/ceph-6/block
[ceph-node-03][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-6/block
[ceph-node-03][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[ceph-node-03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-6
[ceph-node-03][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-6-860c096c-fe10-4eec-b64c-d1ace0c7a99e
[ceph-node-03][WARNIN] stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-6-860c096c-fe10-4eec-b64c-d1ace0c7a99e.service → /lib/systemd/system/ceph-volume@.service.
[ceph-node-03][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@6
[ceph-node-03][WARNIN] stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@6.service → /lib/systemd/system/ceph-osd@.service.
[ceph-node-03][WARNIN] Running command: /bin/systemctl start ceph-osd@6
[ceph-node-03][WARNIN] --> ceph-volume lvm activate successful for osd ID: 6
[ceph-node-03][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[ceph-node-03][INFO ] checking OSD status...
[ceph-node-03][DEBUG ] find the location of an executable
[ceph-node-03][INFO ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node-03 is now ready for osd use.
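The next two sub-sections add /dev/vdc and /dev/vdd the same way, one `ceph-deploy osd create` invocation per device. When a node has many data disks, the same result can be scripted; a minimal sketch, run as the ceph user from ~/ceph-cluster on the deploy node (the device list is an assumption, adjust it to the node's actual disks; without key-based SSH each iteration will prompt for the password):

# add each remaining data disk of ceph-node-03 as its own bluestore OSD
for dev in /dev/vdc /dev/vdd; do
    ceph-deploy osd create ceph-node-03 --data ${dev}
done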

3.9.9.4.2 添加ceph-node-03节点/dev/vdc

点击查看代码

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy osd create ceph-node-03 --data  /dev/vdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-node-03 --data /dev/vdc
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f7b2b2b45f0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] journal : None
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] host : ceph-node-03
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] func : <function osd at 0x7f7b2b3051d0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.cli][INFO ] data : /dev/vdc
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdc
ceph@ceph-node-03's password:
[ceph-node-03][DEBUG ] connection detected need for sudo
ceph@ceph-node-03's password:
sudo: unable to resolve host ceph-node-03
[ceph-node-03][DEBUG ] connected to host: ceph-node-03
[ceph-node-03][DEBUG ] detect platform information from remote host
[ceph-node-03][DEBUG ] detect machine type
[ceph-node-03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node-03
[ceph-node-03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node-03][DEBUG ] find the location of an executable
[ceph-node-03][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdc
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 4ff1203c-8e5c-4324-8e65-ded87c6cabb8
[ceph-node-03][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-bc614765-6d67-42d0-9d48-bb6f77af870e /dev/vdc
[ceph-node-03][WARNIN] stdout: Physical volume "/dev/vdc" successfully created.
[ceph-node-03][WARNIN] stdout: Volume group "ceph-bc614765-6d67-42d0-9d48-bb6f77af870e" successfully created
[ceph-node-03][WARNIN] Running command: /sbin/lvcreate --yes -l 5119 -n osd-block-4ff1203c-8e5c-4324-8e65-ded87c6cabb8 ceph-bc614765-6d67-42d0-9d48-bb6f77af870e
[ceph-node-03][WARNIN] stdout: Logical volume "osd-block-4ff1203c-8e5c-4324-8e65-ded87c6cabb8" created.
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node-03][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-7
[ceph-node-03][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[ceph-node-03][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-bc614765-6d67-42d0-9d48-bb6f77af870e/osd-block-4ff1203c-8e5c-4324-8e65-ded87c6cabb8
[ceph-node-03][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[ceph-node-03][WARNIN] Running command: /bin/ln -s /dev/ceph-bc614765-6d67-42d0-9d48-bb6f77af870e/osd-block-4ff1203c-8e5c-4324-8e65-ded87c6cabb8 /var/lib/ceph/osd/ceph-7/block
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-7/activate.monmap
[ceph-node-03][WARNIN] stderr: 2021-09-06T14:40:57.747+0800 7f6d89ea3700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-node-03][WARNIN] stderr:
[ceph-node-03][WARNIN] stderr: 2021-09-06T14:40:57.747+0800 7f6d89ea3700 -1 AuthRegistry(0x7f6d8405b408) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-node-03][WARNIN] stderr:
[ceph-node-03][WARNIN] stderr: got monmap epoch 1
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-7/keyring --create-keyring --name osd.7 --add-key AQD4tzVhpmG3JRAAWBQYh+BFXrN82qsQxlrUHw==
[ceph-node-03][WARNIN] stdout: creating /var/lib/ceph/osd/ceph-7/keyring
[ceph-node-03][WARNIN] stdout: added entity osd.7 auth(key=AQD4tzVhpmG3JRAAWBQYh+BFXrN82qsQxlrUHw==)
[ceph-node-03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-7/keyring
[ceph-node-03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-7/
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 7 --monmap /var/lib/ceph/osd/ceph-7/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-7/ --osd-uuid 4ff1203c-8e5c-4324-8e65-ded87c6cabb8 --setuser ceph --setgroup ceph
[ceph-node-03][WARNIN] stderr: 2021-09-06T14:40:58.215+0800 7fa64ffa3f00 -1 bluestore(/var/lib/ceph/osd/ceph-7/) _read_fsid unparsable uuid
[ceph-node-03][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdc
[ceph-node-03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-7
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-bc614765-6d67-42d0-9d48-bb6f77af870e/osd-block-4ff1203c-8e5c-4324-8e65-ded87c6cabb8 --path /var/lib/ceph/osd/ceph-7 --no-mon-config
[ceph-node-03][WARNIN] Running command: /bin/ln -snf /dev/ceph-bc614765-6d67-42d0-9d48-bb6f77af870e/osd-block-4ff1203c-8e5c-4324-8e65-ded87c6cabb8 /var/lib/ceph/osd/ceph-7/block
[ceph-node-03][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-7/block
[ceph-node-03][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[ceph-node-03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-7
[ceph-node-03][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-7-4ff1203c-8e5c-4324-8e65-ded87c6cabb8
[ceph-node-03][WARNIN] stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-7-4ff1203c-8e5c-4324-8e65-ded87c6cabb8.service → /lib/systemd/system/ceph-volume@.service.
[ceph-node-03][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@7
[ceph-node-03][WARNIN] stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@7.service → /lib/systemd/system/ceph-osd@.service.
[ceph-node-03][WARNIN] Running command: /bin/systemctl start ceph-osd@7
[ceph-node-03][WARNIN] --> ceph-volume lvm activate successful for osd ID: 7
[ceph-node-03][WARNIN] --> ceph-volume lvm create successful for: /dev/vdc
[ceph-node-03][INFO ] checking OSD status...
[ceph-node-03][DEBUG ] find the location of an executable
[ceph-node-03][INFO ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node-03 is now ready for osd use.

3.9.9.4.3 添加ceph-node-03节点/dev/vdd

点击查看代码

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy osd create ceph-node-03 --data  /dev/vdd
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-node-03 --data /dev/vdd
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc9b5f945f0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] journal : None
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] host : ceph-node-03
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] func : <function osd at 0x7fc9b5fe51d0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.cli][INFO ] data : /dev/vdd
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdd
ceph@ceph-node-03's password:
[ceph-node-03][DEBUG ] connection detected need for sudo
ceph@ceph-node-03's password:
sudo: unable to resolve host ceph-node-03
[ceph-node-03][DEBUG ] connected to host: ceph-node-03
[ceph-node-03][DEBUG ] detect platform information from remote host
[ceph-node-03][DEBUG ] detect machine type
[ceph-node-03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node-03
[ceph-node-03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node-03][DEBUG ] find the location of an executable
[ceph-node-03][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdd
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d03609ee-e099-4d67-b19d-4f01c9540e86
[ceph-node-03][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-feac784b-587d-41f7-8a4b-ab581714ee16 /dev/vdd
[ceph-node-03][WARNIN] stdout: Physical volume "/dev/vdd" successfully created.
[ceph-node-03][WARNIN] stdout: Volume group "ceph-feac784b-587d-41f7-8a4b-ab581714ee16" successfully created
[ceph-node-03][WARNIN] Running command: /sbin/lvcreate --yes -l 5119 -n osd-block-d03609ee-e099-4d67-b19d-4f01c9540e86 ceph-feac784b-587d-41f7-8a4b-ab581714ee16
[ceph-node-03][WARNIN] stdout: Logical volume "osd-block-d03609ee-e099-4d67-b19d-4f01c9540e86" created.
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node-03][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-8
[ceph-node-03][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[ceph-node-03][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-feac784b-587d-41f7-8a4b-ab581714ee16/osd-block-d03609ee-e099-4d67-b19d-4f01c9540e86
[ceph-node-03][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-2
[ceph-node-03][WARNIN] Running command: /bin/ln -s /dev/ceph-feac784b-587d-41f7-8a4b-ab581714ee16/osd-block-d03609ee-e099-4d67-b19d-4f01c9540e86 /var/lib/ceph/osd/ceph-8/block
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-8/activate.monmap
[ceph-node-03][WARNIN] stderr: 2021-09-06T14:42:02.047+0800 7f4f6f936700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-node-03][WARNIN] stderr:
[ceph-node-03][WARNIN] stderr: 2021-09-06T14:42:02.047+0800 7f4f6f936700 -1 AuthRegistry(0x7f4f6805b408) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-node-03][WARNIN] stderr:
[ceph-node-03][WARNIN] stderr: got monmap epoch 1
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-8/keyring --create-keyring --name osd.8 --add-key AQA5uDVhkEoKARAAa6Z4eEcme3yxVUN1hzBplQ==
[ceph-node-03][WARNIN] stdout: creating /var/lib/ceph/osd/ceph-8/keyring
[ceph-node-03][WARNIN] stdout: added entity osd.8 auth(key=AQA5uDVhkEoKARAAa6Z4eEcme3yxVUN1hzBplQ==)
[ceph-node-03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8/keyring
[ceph-node-03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8/
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 8 --monmap /var/lib/ceph/osd/ceph-8/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-8/ --osd-uuid d03609ee-e099-4d67-b19d-4f01c9540e86 --setuser ceph --setgroup ceph
[ceph-node-03][WARNIN] stderr: 2021-09-06T14:42:02.499+0800 7fc4821f3f00 -1 bluestore(/var/lib/ceph/osd/ceph-8/) _read_fsid unparsable uuid
[ceph-node-03][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdd
[ceph-node-03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8
[ceph-node-03][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-feac784b-587d-41f7-8a4b-ab581714ee16/osd-block-d03609ee-e099-4d67-b19d-4f01c9540e86 --path /var/lib/ceph/osd/ceph-8 --no-mon-config
[ceph-node-03][WARNIN] Running command: /bin/ln -snf /dev/ceph-feac784b-587d-41f7-8a4b-ab581714ee16/osd-block-d03609ee-e099-4d67-b19d-4f01c9540e86 /var/lib/ceph/osd/ceph-8/block
[ceph-node-03][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-8/block
[ceph-node-03][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-2
[ceph-node-03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8
[ceph-node-03][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-8-d03609ee-e099-4d67-b19d-4f01c9540e86
[ceph-node-03][WARNIN] stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-8-d03609ee-e099-4d67-b19d-4f01c9540e86.service → /lib/systemd/system/ceph-volume@.service.
[ceph-node-03][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@8
[ceph-node-03][WARNIN] stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@8.service → /lib/systemd/system/ceph-osd@.service.
[ceph-node-03][WARNIN] Running command: /bin/systemctl start ceph-osd@8
[ceph-node-03][WARNIN] --> ceph-volume lvm activate successful for osd ID: 8
[ceph-node-03][WARNIN] --> ceph-volume lvm create successful for: /dev/vdd
[ceph-node-03][INFO ] checking OSD status...
[ceph-node-03][DEBUG ] find the location of an executable
[ceph-node-03][INFO ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node-03 is now ready for osd use.

3.9.9.4.4 验证ceph-node-03节点OSD服务

root@ceph-node-03:~# ps -ef | grep ceph
root 1034 1 0 Aug29 ? 00:00:00 /usr/bin/python3.6 /usr/bin/ceph-crash
ceph 23444 1 0 14:34 ? 00:00:06 /usr/bin/ceph-osd -f --cluster ceph --id 6 --setuser ceph --setgroup ceph
ceph 25229 1 0 14:41 ? 00:00:05 /usr/bin/ceph-osd -f --cluster ceph --id 7 --setuser ceph --setgroup ceph
ceph 27027 1 0 14:42 ? 00:00:05 /usr/bin/ceph-osd -f --cluster ceph --id 8 --setuser ceph --setgroup ceph

3.9.9.4.5 管理ceph-node-03节点OSD服务

root@ceph-node-03:~# systemctl status ceph-osd@6
root@ceph-node-03:~# systemctl status ceph-osd@7
root@ceph-node-03:~# systemctl status ceph-osd@8
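A quick way to check all local OSD instances at once is to loop `systemctl is-active` over the ids; a small sketch for ceph-node-03 (ids 6-8 follow the layout built above):

# report active/inactive for every OSD unit on ceph-node-03
for id in 6 7 8; do
    echo -n "ceph-osd@${id}: "
    systemctl is-active ceph-osd@${id}
done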

3.9.9.4.6 验证ceph集群

点击查看代码

ceph@ceph-deploy:~/ceph-cluster$ ceph -s
  cluster:
    id:     6e521054-1532-4bc8-9971-7f8ae93e8430
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-mon-01 (age 30m)
    mgr: ceph-mgr-01(active, since 7d)
    osd: 9 osds: 9 up (since 36s), 9 in (since 45s)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   55 MiB used, 180 GiB / 180 GiB avail
    pgs:     1 active+clean

3.9.9.4.7 查看ceph-node-03节点和OSD的对应关系

点击查看代码

ceph@ceph-deploy:~/ceph-cluster$ ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.17537 root default
-3 0.05846 host ceph-node-01
0 hdd 0.01949 osd.0 up 1.00000 1.00000
1 hdd 0.01949 osd.1 up 1.00000 1.00000
2 hdd 0.01949 osd.2 up 1.00000 1.00000
-5 0.05846 host ceph-node-02
3 hdd 0.01949 osd.3 up 1.00000 1.00000
4 hdd 0.01949 osd.4 up 1.00000 1.00000
5 hdd 0.01949 osd.5 up 1.00000 1.00000
-7 0.05846 host ceph-node-03
6 hdd 0.01949 osd.6 up 1.00000 1.00000
7 hdd 0.01949 osd.7 up 1.00000 1.00000
8 hdd 0.01949 osd.8 up 1.00000 1.00000
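`ceph osd tree` only shows topology and up/down state; `ceph osd df tree` prints the same hierarchy together with per-OSD capacity, usage and PG counts, which makes it easier to confirm that the new OSDs are actually taking data:

# CRUSH hierarchy plus per-OSD utilization and PG count
ceph osd df tree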

3.9.10 从集群移除OSD

3.9.10.1 查看集群OSD

点击查看代码

ceph@ceph-deploy:~/ceph-cluster$ ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.17537 root default
-3 0.05846 host ceph-node-01
0 hdd 0.01949 osd.0 up 1.00000 1.00000
1 hdd 0.01949 osd.1 up 1.00000 1.00000
2 hdd 0.01949 osd.2 up 1.00000 1.00000
-5 0.05846 host ceph-node-02
3 hdd 0.01949 osd.3 up 1.00000 1.00000
4 hdd 0.01949 osd.4 up 1.00000 1.00000
5 hdd 0.01949 osd.5 up 1.00000 1.00000
-7 0.05846 host ceph-node-03
6 hdd 0.01949 osd.6 up 1.00000 1.00000
7 hdd 0.01949 osd.7 up 1.00000 1.00000
8 hdd 0.01949 osd.8 up 1.00000 1.00000

3.9.10.2 ceph集群移出OSD

ceph@ceph-deploy:~/ceph-cluster$ ceph osd out osd.2
marked out osd.2.
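Optionally, before the OSD is stopped and purged in the next steps, the cluster can be asked whether doing so is safe; a sketch using two built-in mon commands (the id 2 refers to the osd.2 being removed here):

# would stopping osd.2 leave any PG without enough active replicas?
ceph osd ok-to-stop 2
# once it is out and drained, is it safe to destroy/purge?
ceph osd safe-to-destroy 2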

3.9.10.3 观察数据迁移

点击查看代码

#看到归置组状态变为active+clean,迁移完成(Control-c 退出。)
ceph@ceph-deploy:~/ceph-cluster$ ceph -w
  cluster:
    id:     6e521054-1532-4bc8-9971-7f8ae93e8430
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-mon-01 (age 27m)
    mgr: ceph-mgr-01(active, since 26m)
    osd: 8 osds: 8 up (since 25m), 8 in (since 21m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   50 MiB used, 160 GiB / 160 GiB avail
    pgs:     1 active+clean
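Instead of streaming the whole cluster log with `ceph -w`, the placement-group summary can be polled directly; a lightweight alternative sketch:

# compact one-line view of PG states during rebalancing
ceph pg stat
# or refresh the full status every two seconds
watch -n 2 ceph -s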

3.9.10.4 停止OSD

root@ceph-node-01:~# systemctl stop ceph-osd@2
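Stopping the unit only affects the running instance; if osd.2 is being retired permanently, disabling its units as well keeps it from starting again at boot. A sketch, assuming osd.2 is the OSD being removed (the matching ceph-volume unit name contains the OSD's LV uuid, so it is looked up first):

# keep the retired OSD from starting again at boot
systemctl disable ceph-osd@2
# find the matching ceph-volume activation unit (its name ends with the OSD uuid), then disable it as well
systemctl list-units --all 'ceph-volume@lvm-2-*'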

3.9.10.5 从CRUSH映射移除OSD

ceph@ceph-deploy:~/ceph-cluster$ ceph osd purge 2 --yes-i-really-mean-it
purged osd.2
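`ceph osd purge` bundles the older three-step removal into a single command; the equivalent individual steps are shown below for reference, in case a finer-grained removal is preferred:

# what "ceph osd purge 2" does, step by step
ceph osd crush remove osd.2   # remove the OSD from the CRUSH map
ceph auth del osd.2           # delete its cephx key
ceph osd rm osd.2             # remove the OSD id from the cluster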

3.9.10.6 验证集群OSD

点击查看代码

ceph@ceph-deploy:~/ceph-cluster$ ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.15588 root default
-3 0.03897 host ceph-node-01
0 hdd 0.01949 osd.0 up 1.00000 1.00000
1 hdd 0.01949 osd.1 up 1.00000 1.00000
-5 0.05846 host ceph-node-02
3 hdd 0.01949 osd.3 up 1.00000 1.00000
4 hdd 0.01949 osd.4 up 1.00000 1.00000
5 hdd 0.01949 osd.5 up 1.00000 1.00000
-7 0.05846 host ceph-node-03
6 hdd 0.01949 osd.6 up 1.00000 1.00000
7 hdd 0.01949 osd.7 up 1.00000 1.00000
8 hdd 0.01949 osd.8 up 1.00000 1.00000

3.9.10.7 验证集群

点击查看代码

ceph@ceph-deploy:~/ceph-cluster$ ceph -s
  cluster:
    id:     6e521054-1532-4bc8-9971-7f8ae93e8430
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-mon-01 (age 2h)
    mgr: ceph-mgr-01(active, since 8d)
    osd: 8 osds: 8 up (since 6m), 8 in (since 7m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   50 MiB used, 160 GiB / 160 GiB avail
    pgs:     1 active+clean

3.9.10.8 验证ceph-node-01节点OSD

root@ceph-node-01:~# ps -ef | grep ceph
root 1205 1 0 16:36 ? 00:00:00 /usr/bin/python3.6 /usr/bin/ceph-crash
ceph 1792 1 0 16:36 ? 00:00:09 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
ceph 1799 1 0 16:36 ? 00:00:09 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph

3.9.11 移除OSD(osd.2)后重新加到集群

3.9.11.1 查询PV(/dev/vdd)

root@ceph-node-01:~# pvscan
PV /dev/vdb VG ceph-d6fc82f0-6bad-4b75-8cc5-73f86b33c9de lvm2 [<20.00 GiB / 0 free]
PV /dev/vdd VG ceph-7f4b4eda-1320-4642-a90f-4ac8d5c97815 lvm2 [<20.00 GiB / 0 free]
PV /dev/vdc VG ceph-67d74ac1-ac6a-4712-80b8-742da52d1c1e lvm2 [<20.00 GiB / 0 free]
Total: 3 [<59.99 GiB] / in use: 3 [<59.99 GiB] / in no VG: 0 [0 ]

3.9.11.2 删除对应的PV(/dev/vdd)

root@ceph-node-01:~# vgremove ceph-7f4b4eda-1320-4642-a90f-4ac8d5c97815
Do you really want to remove volume group "ceph-7f4b4eda-1320-4642-a90f-4ac8d5c97815" containing 1 logical volumes? [y/n]: y
Do you really want to remove and DISCARD active logical volume ceph-7f4b4eda-1320-4642-a90f-4ac8d5c97815/osd-block-e567608f-b6eb-4366-8626-020b6e9f8307? [y/n]: y
Logical volume "osd-block-e567608f-b6eb-4366-8626-020b6e9f8307" successfully removed
Volume group "ceph-7f4b4eda-1320-4642-a90f-4ac8d5c97815" successfully removed
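The manual vgremove above can also be left to ceph-volume: running the zap with --destroy on the OSD node wipes the LV/VG/PV together with the data header, so the disk is immediately reusable. A sketch of that alternative, run on ceph-node-01:

# one-step cleanup of the old OSD volumes on /dev/vdd
ceph-volume lvm zap --destroy /dev/vdd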

3.9.11.3 擦除数据磁盘(/dev/vdd)

点击查看代码

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy disk zap ceph-node-01 /dev/vdd
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap ceph-node-01 /dev/vdd
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : zap
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f46f4e8b2d0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] host : ceph-node-01
[ceph_deploy.cli][INFO ] func : <function disk at 0x7f46f4e61250>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : ['/dev/vdd']
[ceph_deploy.osd][DEBUG ] zapping /dev/vdd on ceph-node-01
Warning: the ECDSA host key for 'ceph-node-01' differs from the key for the IP address '172.16.10.126'
Offending key for IP in /var/lib/ceph/.ssh/known_hosts:5
Matching host key in /var/lib/ceph/.ssh/known_hosts:6
Are you sure you want to continue connecting (yes/no)? yes
ceph@ceph-node-01's password:
[ceph-node-01][DEBUG ] connection detected need for sudo
Warning: the ECDSA host key for 'ceph-node-01' differs from the key for the IP address '172.16.10.126'
Offending key for IP in /var/lib/ceph/.ssh/known_hosts:5
Matching host key in /var/lib/ceph/.ssh/known_hosts:6
Are you sure you want to continue connecting (yes/no)? yes
ceph@ceph-node-01's password:
sudo: unable to resolve host ceph-node-01
[ceph-node-01][DEBUG ] connected to host: ceph-node-01
[ceph-node-01][DEBUG ] detect platform information from remote host
[ceph-node-01][DEBUG ] detect machine type
[ceph-node-01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph-node-01][DEBUG ] zeroing last few blocks of device
[ceph-node-01][DEBUG ] find the location of an executable
[ceph-node-01][INFO ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/vdd
[ceph-node-01][WARNIN] --> Zapping: /dev/vdd
[ceph-node-01][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node-01][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/vdd bs=1M count=10 conv=fsync
[ceph-node-01][WARNIN] stderr: 10+0 records in
[ceph-node-01][WARNIN] 10+0 records out
[ceph-node-01][WARNIN] stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0696078 s, 151 MB/s
[ceph-node-01][WARNIN] --> Zapping successful for: <Raw Device: /dev/vdd>

3.9.11.4 添加OSD(/dev/vdd)

点击查看代码

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy osd create ceph-node-01 --data /dev/vdd
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-node-01 --data /dev/vdd
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7efe3c44c5f0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] journal : None
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] host : ceph-node-01
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] func : <function osd at 0x7efe3c49d1d0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.cli][INFO ] data : /dev/vdd
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdd
Warning: the ECDSA host key for 'ceph-node-01' differs from the key for the IP address '172.16.10.126'
Offending key for IP in /var/lib/ceph/.ssh/known_hosts:5
Matching host key in /var/lib/ceph/.ssh/known_hosts:6
Are you sure you want to continue connecting (yes/no)? yes
ceph@ceph-node-01's password:
[ceph-node-01][DEBUG ] connection detected need for sudo
Warning: the ECDSA host key for 'ceph-node-01' differs from the key for the IP address '172.16.10.126'
Offending key for IP in /var/lib/ceph/.ssh/known_hosts:5
Matching host key in /var/lib/ceph/.ssh/known_hosts:6
Are you sure you want to continue connecting (yes/no)? yes
ceph@ceph-node-01's password:
sudo: unable to resolve host ceph-node-01
[ceph-node-01][DEBUG ] connected to host: ceph-node-01
[ceph-node-01][DEBUG ] detect platform information from remote host
[ceph-node-01][DEBUG ] detect machine type
[ceph-node-01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node-01
[ceph-node-01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node-01][DEBUG ] find the location of an executable
[ceph-node-01][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdd
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 99545fcc-4b11-45fe-9454-4bde5c72a7ed
[ceph-node-01][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-981e3c57-f25a-4bdf-9cd7-5a3a290bd34c /dev/vdd
[ceph-node-01][WARNIN] stdout: Physical volume "/dev/vdd" successfully created.
[ceph-node-01][WARNIN] stdout: Volume group "ceph-981e3c57-f25a-4bdf-9cd7-5a3a290bd34c" successfully created
[ceph-node-01][WARNIN] Running command: /sbin/lvcreate --yes -l 5119 -n osd-block-99545fcc-4b11-45fe-9454-4bde5c72a7ed ceph-981e3c57-f25a-4bdf-9cd7-5a3a290bd34c
[ceph-node-01][WARNIN] stdout: Logical volume "osd-block-99545fcc-4b11-45fe-9454-4bde5c72a7ed" created.
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node-01][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
[ceph-node-01][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[ceph-node-01][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-981e3c57-f25a-4bdf-9cd7-5a3a290bd34c/osd-block-99545fcc-4b11-45fe-9454-4bde5c72a7ed
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[ceph-node-01][WARNIN] Running command: /bin/ln -s /dev/ceph-981e3c57-f25a-4bdf-9cd7-5a3a290bd34c/osd-block-99545fcc-4b11-45fe-9454-4bde5c72a7ed /var/lib/ceph/osd/ceph-2/block
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
[ceph-node-01][WARNIN] stderr: 2021-09-06T17:50:57.403+0800 7fc54e09e700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-node-01][WARNIN] stderr: 2021-09-06T17:50:57.403+0800 7fc54e09e700 -1 AuthRegistry(0x7fc54805b408) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-node-01][WARNIN] stderr: got monmap epoch 1
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-2/keyring --create-keyring --name osd.2 --add-key AQCA5DVhV53BFRAA/HbGphI7pVsUGIlliF1GYw==
[ceph-node-01][WARNIN] stdout: creating /var/lib/ceph/osd/ceph-2/keyring
[ceph-node-01][WARNIN] added entity osd.2 auth(key=AQCA5DVhV53BFRAA/HbGphI7pVsUGIlliF1GYw==)
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 99545fcc-4b11-45fe-9454-4bde5c72a7ed --setuser ceph --setgroup ceph
[ceph-node-01][WARNIN] stderr: 2021-09-06T17:50:57.839+0800 7f6d0c992f00 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
[ceph-node-01][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdd
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[ceph-node-01][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-981e3c57-f25a-4bdf-9cd7-5a3a290bd34c/osd-block-99545fcc-4b11-45fe-9454-4bde5c72a7ed --path /var/lib/ceph/osd/ceph-2 --no-mon-config
[ceph-node-01][WARNIN] Running command: /bin/ln -snf /dev/ceph-981e3c57-f25a-4bdf-9cd7-5a3a290bd34c/osd-block-99545fcc-4b11-45fe-9454-4bde5c72a7ed /var/lib/ceph/osd/ceph-2/block
[ceph-node-01][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[ceph-node-01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[ceph-node-01][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-2-99545fcc-4b11-45fe-9454-4bde5c72a7ed
[ceph-node-01][WARNIN] stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-2-99545fcc-4b11-45fe-9454-4bde5c72a7ed.service → /lib/systemd/system/ceph-volume@.service.
[ceph-node-01][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@2
[ceph-node-01][WARNIN] Running command: /bin/systemctl start ceph-osd@2
[ceph-node-01][WARNIN] --> ceph-volume lvm activate successful for osd ID: 2
[ceph-node-01][WARNIN] --> ceph-volume lvm create successful for: /dev/vdd
[ceph-node-01][INFO ] checking OSD status...
[ceph-node-01][DEBUG ] find the location of an executable
[ceph-node-01][INFO ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node-01 is now ready for osd use.

3.9.11.5 验证集群OSD

点击查看代码

ceph@ceph-deploy:~/ceph-cluster$ ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.17537 root default
-3 0.05846 host ceph-node-01
0 hdd 0.01949 osd.0 up 1.00000 1.00000
1 hdd 0.01949 osd.1 up 1.00000 1.00000
2 hdd 0.01949 osd.2 up 1.00000 1.00000
-5 0.05846 host ceph-node-02
3 hdd 0.01949 osd.3 up 1.00000 1.00000
4 hdd 0.01949 osd.4 up 1.00000 1.00000
5 hdd 0.01949 osd.5 up 1.00000 1.00000
-7 0.05846 host ceph-node-03
6 hdd 0.01949 osd.6 up 1.00000 1.00000
7 hdd 0.01949 osd.7 up 1.00000 1.00000
8 hdd 0.01949 osd.8 up 1.00000 1.00000

3.9.11.6 验证集群

点击查看代码

ceph@ceph-deploy:~/ceph-cluster$ ceph -s
  cluster:
    id:     6e521054-1532-4bc8-9971-7f8ae93e8430
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-mon-01 (age 79m)
    mgr: ceph-mgr-01(active, since 79m)
    osd: 9 osds: 9 up (since 5m), 9 in (since 74m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   59 MiB used, 180 GiB / 180 GiB avail
    pgs:     1 active+clean

3.9.11.7 验证节点OSD

root@ceph-node-01:~# ps -ef | grep ceph
root 1205 1 0 16:36 ? 00:00:00 /usr/bin/python3.6 /usr/bin/ceph-crash
ceph 1792 1 0 16:36 ? 00:00:09 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
ceph 1799 1 0 16:36 ? 00:00:09 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
ceph 6644 1 0 17:50 ? 00:00:00 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph

3.9.12 设置OSD服务自启动

3.9.12.1 设置ceph-node-01节点OSD服务开机自启动

root@ceph-node-01:~# systemctl enable ceph-osd@0 ceph-osd@1 ceph-osd@2
Created symlink /etc/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /lib/systemd/system/ceph-osd@.service.
Created symlink /etc/systemd/system/ceph-osd.target.wants/ceph-osd@1.service → /lib/systemd/system/ceph-osd@.service.
Created symlink /etc/systemd/system/ceph-osd.target.wants/ceph-osd@2.service → /lib/systemd/system/ceph-osd@.service.

3.9.12.2 设置ceph-node-02节点OSD服务开机自启动

root@ceph-node-02:~# systemctl enable ceph-osd@3 ceph-osd@4 ceph-osd@5
Created symlink /etc/systemd/system/ceph-osd.target.wants/ceph-osd@3.service → /lib/systemd/system/ceph-osd@.service.
Created symlink /etc/systemd/system/ceph-osd.target.wants/ceph-osd@4.service → /lib/systemd/system/ceph-osd@.service.
Created symlink /etc/systemd/system/ceph-osd.target.wants/ceph-osd@5.service → /lib/systemd/system/ceph-osd@.service.

3.9.12.3 设置ceph-node-03节点OSD服务开机自启动

root@ceph-node-03:~# systemctl enable ceph-osd@6 ceph-osd@7 ceph-osd@8
Created symlink /etc/systemd/system/ceph-osd.target.wants/ceph-osd@6.service → /lib/systemd/system/ceph-osd@.service.
Created symlink /etc/systemd/system/ceph-osd.target.wants/ceph-osd@7.service → /lib/systemd/system/ceph-osd@.service.
Created symlink /etc/systemd/system/ceph-osd.target.wants/ceph-osd@8.service → /lib/systemd/system/ceph-osd@.service.
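After enabling the units it is worth confirming the boot-time state on each node; a small check sketch for ceph-node-03 (adjust the id list per node):

# verify that the local OSD units will start at boot
for id in 6 7 8; do
    echo -n "ceph-osd@${id}: "
    systemctl is-enabled ceph-osd@${id}
done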

3.9.12.4 重启集群验证

点击查看代码

ceph@ceph-deploy:~/ceph-cluster$ ceph -s
  cluster:
    id:     6e521054-1532-4bc8-9971-7f8ae93e8430
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-mon-01 (age 98m)
    mgr: ceph-mgr-01(active, since 98m)
    osd: 9 osds: 9 up (since 58s), 9 in (since 93m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   58 MiB used, 180 GiB / 180 GiB avail
    pgs:     1 active+clean

3.10 扩展ceph集群实现高可用

3.10.1 扩展ceph-mon-02节点

3.10.1.1 配置系统时间同步

root@ceph-mon-02:~# apt -y install chrony
root@ceph-mon-02:~# systemctl start chrony
root@ceph-mon-02:~# systemctl enable chrony

3.10.1.2 仓库准备

3.10.1.2.1 导入key

root@ceph-mon-02:~# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
OK

3.10.1.2.2 添加仓库

root@ceph-mon-02:~# echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific $(lsb_release -cs) main" >> /etc/apt/sources.list
root@ceph-mon-02:~# apt -y update && apt -y upgrade

3.10.1.3 安装ceph-mon

3.10.1.3.1 查看ceph-mon版本

root@ceph-mon-02:~# apt-cache madison ceph-mon
ceph-mon | 16.2.5-1bionic | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 Packages
ceph-mon | 12.2.13-0ubuntu0.18.04.8 | http://mirrors.ucloud.cn/ubuntu bionic-updates/main amd64 Packages
ceph-mon | 12.2.13-0ubuntu0.18.04.4 | http://mirrors.ucloud.cn/ubuntu bionic-security/main amd64 Packages
ceph-mon | 12.2.4-0ubuntu1 | http://mirrors.ucloud.cn/ubuntu bionic/main amd64 Packages
ceph | 12.2.4-0ubuntu1 | http://mirrors.ucloud.cn/ubuntu bionic/main Sources
ceph | 12.2.13-0ubuntu0.18.04.4 | http://mirrors.ucloud.cn/ubuntu bionic-security/main Sources
ceph | 12.2.13-0ubuntu0.18.04.8 | http://mirrors.ucloud.cn/ubuntu bionic-updates/main Sources

3.10.1.3.2 安装ceph-mon

root@ceph-mon-02:~# apt -y install ceph-common ceph-mon

3.10.1.3.3 验证ceph-mon版本

root@ceph-mon-02:~# ceph-mon --version
ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
root@ceph-mon-02:~# ceph --version
ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)

3.10.1.4 设置ceph用户

3.10.1.4.1 查看用户

root@ceph-mon-02:~# id ceph
uid=64045(ceph) gid=64045(ceph) groups=64045(ceph)

3.10.1.4.2 设置用户bash

root@ceph-mon-02:~# usermod -s /bin/bash ceph

3.10.1.4.3 设置用户密码

root@ceph-mon-02:~# passwd ceph

3.10.1.4.4 允许 ceph 用户以 sudo 执行特权命令

root@ceph-mon-02:~# echo "ceph ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
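Appending straight to /etc/sudoers works, but a syntax error there can lock sudo out completely. A slightly safer variant, sketched below, puts the rule in a drop-in file and validates it:

# grant the ceph user passwordless sudo via /etc/sudoers.d and check the syntax
echo "ceph ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/ceph
chmod 0440 /etc/sudoers.d/ceph
visudo -c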

3.10.1.5 添加ceph-mon-02节点

点击查看代码

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy mon add ceph-mon-02
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mon add ceph-mon-02
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : add
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f98042962d0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] mon : ['ceph-mon-02']
[ceph_deploy.cli][INFO ] func : <function mon at 0x7f9804273a50>
[ceph_deploy.cli][INFO ] address : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mon][INFO ] ensuring configuration of new mon host: ceph-mon-02
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-mon-02
The authenticity of host 'ceph-mon-02 (172.16.10.110)' can't be established.
ECDSA key fingerprint is SHA256:RIsmHau9Yc1UXqEhAxSHKHMuJgz2iIm0fk9BHUb7Es4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph-mon-02,172.16.10.110' (ECDSA) to the list of known hosts.
ceph@ceph-mon-02's password:
[ceph-mon-02][DEBUG ] connection detected need for sudo
ceph@ceph-mon-02's password:
sudo: unable to resolve host ceph-mon-02
[ceph-mon-02][DEBUG ] connected to host: ceph-mon-02
[ceph-mon-02][DEBUG ] detect platform information from remote host
[ceph-mon-02][DEBUG ] detect machine type
[ceph-mon-02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.mon][DEBUG ] Adding mon to cluster ceph, host ceph-mon-02
[ceph_deploy.mon][DEBUG ] using mon address by resolving host: 172.16.10.110
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-mon-02 ...
ceph@ceph-mon-02's password:
[ceph-mon-02][DEBUG ] connection detected need for sudo
ceph@ceph-mon-02's password:
sudo: unable to resolve host ceph-mon-02
[ceph-mon-02][DEBUG ] connected to host: ceph-mon-02
[ceph-mon-02][DEBUG ] detect platform information from remote host
[ceph-mon-02][DEBUG ] detect machine type
[ceph-mon-02][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: Ubuntu 18.04 bionic
[ceph-mon-02][DEBUG ] determining if provided host has same hostname in remote
[ceph-mon-02][DEBUG ] get remote short hostname
[ceph-mon-02][DEBUG ] adding mon to ceph-mon-02
[ceph-mon-02][DEBUG ] get remote short hostname
[ceph-mon-02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon-02][DEBUG ] create the mon path if it does not exist
[ceph-mon-02][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-mon-02/done
[ceph-mon-02][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph-mon-02/done
[ceph-mon-02][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph-mon-02.mon.keyring
[ceph-mon-02][DEBUG ] create the monitor keyring file
[ceph-mon-02][INFO ] Running command: sudo ceph --cluster ceph mon getmap -o /var/lib/ceph/tmp/ceph.ceph-mon-02.monmap
[ceph-mon-02][WARNIN] got monmap epoch 1
[ceph-mon-02][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i ceph-mon-02 --monmap /var/lib/ceph/tmp/ceph.ceph-mon-02.monmap --keyring /var/lib/ceph/tmp/ceph-ceph-mon-02.mon.keyring --setuser 64045 --setgroup 64045
[ceph-mon-02][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph-mon-02.mon.keyring
[ceph-mon-02][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-mon-02][DEBUG ] create the init path if it does not exist
[ceph-mon-02][INFO ] Running command: sudo systemctl enable ceph.target
[ceph-mon-02][INFO ] Running command: sudo systemctl enable ceph-mon@ceph-mon-02
[ceph-mon-02][WARNIN] Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph-mon-02.service → /lib/systemd/system/ceph-mon@.service.
[ceph-mon-02][INFO ] Running command: sudo systemctl start ceph-mon@ceph-mon-02
[ceph-mon-02][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon-02.asok mon_status
[ceph-mon-02][WARNIN] ceph-mon-02 is not defined in `mon initial members`
[ceph-mon-02][WARNIN] monitor ceph-mon-02 does not exist in monmap
[ceph-mon-02][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon-02.asok mon_status
[ceph-mon-02][DEBUG ] ********************************************************************************
[ceph-mon-02][DEBUG ] status for monitor: mon.ceph-mon-02
[ceph-mon-02][DEBUG ] {
[ceph-mon-02][DEBUG ] "election_epoch": 0,
[ceph-mon-02][DEBUG ] "extra_probe_peers": [],
[ceph-mon-02][DEBUG ] "feature_map": {
[ceph-mon-02][DEBUG ] "mon": [
[ceph-mon-02][DEBUG ] {
[ceph-mon-02][DEBUG ] "features": "0x3f01cfb9fffdffff",
[ceph-mon-02][DEBUG ] "num": 1,
[ceph-mon-02][DEBUG ] "release": "luminous"
[ceph-mon-02][DEBUG ] }
[ceph-mon-02][DEBUG ] ]
[ceph-mon-02][DEBUG ] },
[ceph-mon-02][DEBUG ] "features": {
[ceph-mon-02][DEBUG ] "quorum_con": "0",
[ceph-mon-02][DEBUG ] "quorum_mon": [],
[ceph-mon-02][DEBUG ] "required_con": "2449958197560098820",
[ceph-mon-02][DEBUG ] "required_mon": [
[ceph-mon-02][DEBUG ] "kraken",
[ceph-mon-02][DEBUG ] "luminous",
[ceph-mon-02][DEBUG ] "mimic",
[ceph-mon-02][DEBUG ] "osdmap-prune",
[ceph-mon-02][DEBUG ] "nautilus",
[ceph-mon-02][DEBUG ] "octopus",
[ceph-mon-02][DEBUG ] "pacific",
[ceph-mon-02][DEBUG ] "elector-pinging"
[ceph-mon-02][DEBUG ] ]
[ceph-mon-02][DEBUG ] },
[ceph-mon-02][DEBUG ] "monmap": {
[ceph-mon-02][DEBUG ] "created": "2021-08-29T06:36:59.023456Z",
[ceph-mon-02][DEBUG ] "disallowed_leaders: ": "",
[ceph-mon-02][DEBUG ] "election_strategy": 1,
[ceph-mon-02][DEBUG ] "epoch": 1,
[ceph-mon-02][DEBUG ] "features": {
[ceph-mon-02][DEBUG ] "optional": [],
[ceph-mon-02][DEBUG ] "persistent": [
[ceph-mon-02][DEBUG ] "kraken",
[ceph-mon-02][DEBUG ] "luminous",
[ceph-mon-02][DEBUG ] "mimic",
[ceph-mon-02][DEBUG ] "osdmap-prune",
[ceph-mon-02][DEBUG ] "nautilus",
[ceph-mon-02][DEBUG ] "octopus",
[ceph-mon-02][DEBUG ] "pacific",
[ceph-mon-02][DEBUG ] "elector-pinging"
[ceph-mon-02][DEBUG ] ]
[ceph-mon-02][DEBUG ] },
[ceph-mon-02][DEBUG ] "fsid": "6e521054-1532-4bc8-9971-7f8ae93e8430",
[ceph-mon-02][DEBUG ] "min_mon_release": 16,
[ceph-mon-02][DEBUG ] "min_mon_release_name": "pacific",
[ceph-mon-02][DEBUG ] "modified": "2021-08-29T06:36:59.023456Z",
[ceph-mon-02][DEBUG ] "mons": [
[ceph-mon-02][DEBUG ] {
[ceph-mon-02][DEBUG ] "addr": "172.16.10.148:6789/0",
[ceph-mon-02][DEBUG ] "crush_location": "{}",
[ceph-mon-02][DEBUG ] "name": "ceph-mon-01",
[ceph-mon-02][DEBUG ] "priority": 0,
[ceph-mon-02][DEBUG ] "public_addr": "172.16.10.148:6789/0",
[ceph-mon-02][DEBUG ] "public_addrs": {
[ceph-mon-02][DEBUG ] "addrvec": [
[ceph-mon-02][DEBUG ] {
[ceph-mon-02][DEBUG ] "addr": "172.16.10.148:3300",
[ceph-mon-02][DEBUG ] "nonce": 0,
[ceph-mon-02][DEBUG ] "type": "v2"
[ceph-mon-02][DEBUG ] },
[ceph-mon-02][DEBUG ] {
[ceph-mon-02][DEBUG ] "addr": "172.16.10.148:6789",
[ceph-mon-02][DEBUG ] "nonce": 0,
[ceph-mon-02][DEBUG ] "type": "v1"
[ceph-mon-02][DEBUG ] }
[ceph-mon-02][DEBUG ] ]
[ceph-mon-02][DEBUG ] },
[ceph-mon-02][DEBUG ] "rank": 0,
[ceph-mon-02][DEBUG ] "weight": 0
[ceph-mon-02][DEBUG ] }
[ceph-mon-02][DEBUG ] ],
[ceph-mon-02][DEBUG ] "stretch_mode": false
[ceph-mon-02][DEBUG ] },
[ceph-mon-02][DEBUG ] "name": "ceph-mon-02",
[ceph-mon-02][DEBUG ] "outside_quorum": [],
[ceph-mon-02][DEBUG ] "quorum": [],
[ceph-mon-02][DEBUG ] "rank": -1,
[ceph-mon-02][DEBUG ] "state": "probing",
[ceph-mon-02][DEBUG ] "stretch_mode": false,
[ceph-mon-02][DEBUG ] "sync_provider": []
[ceph-mon-02][DEBUG ] }
[ceph-mon-02][DEBUG ] ********************************************************************************
[ceph-mon-02][INFO ] monitor: mon.ceph-mon-02 is currently at the state of probing

3.10.1.6 验证ceph-mon-02状态

点击查看代码

ceph@ceph-deploy:~/ceph-cluster$ ceph quorum_status --format json-pretty
{
    "election_epoch": 20,
    "quorum": [
        0,
        1
    ],
    "quorum_names": [
        "ceph-mon-01",
        "ceph-mon-02"
    ],
    "quorum_leader_name": "ceph-mon-01",
    "quorum_age": 204,
    "features": {
        "quorum_con": "4540138297136906239",
        "quorum_mon": [
            "kraken",
            "luminous",
            "mimic",
            "osdmap-prune",
            "nautilus",
            "octopus",
            "pacific",
            "elector-pinging"
        ]
    },
    "monmap": {
        "epoch": 2,
        "fsid": "6e521054-1532-4bc8-9971-7f8ae93e8430",
        "modified": "2021-09-07T08:03:35.011024Z",
        "created": "2021-08-29T06:36:59.023456Z",
        "min_mon_release": 16,
        "min_mon_release_name": "pacific",
        "election_strategy": 1,
        "disallowed_leaders: ": "",
        "stretch_mode": false,
        "features": {
            "persistent": [
                "kraken",
                "luminous",
                "mimic",
                "osdmap-prune",
                "nautilus",
                "octopus",
                "pacific",
                "elector-pinging"
            ],
            "optional": []
        },
        "mons": [
            {
                "rank": 0,
                "name": "ceph-mon-01",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "172.16.10.148:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "172.16.10.148:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "172.16.10.148:6789/0",
                "public_addr": "172.16.10.148:6789/0",
                "priority": 0,
                "weight": 0,
                "crush_location": "{}"
            },
            {
                "rank": 1,
                "name": "ceph-mon-02",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "172.16.10.110:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "172.16.10.110:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "172.16.10.110:6789/0",
                "public_addr": "172.16.10.110:6789/0",
                "priority": 0,
                "weight": 0,
                "crush_location": "{}"
            }
        ]
    }
}

3.10.1.7 验证集群状态

点击查看代码

ceph@ceph-deploy:~/ceph-cluster$ ceph -s
  cluster:
    id:     6e521054-1532-4bc8-9971-7f8ae93e8430
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum ceph-mon-01,ceph-mon-02 (age 2m)
    mgr: ceph-mgr-01(active, since 23h)
    osd: 9 osds: 9 up (since 21h), 9 in (since 23h)

  data:
    pools:   1 pools, 1 pgs
    objects: 3 objects, 0 B
    usage:   61 MiB used, 180 GiB / 180 GiB avail
    pgs:     1 active+clean
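The `mon initial members` warning printed during `ceph-deploy mon add` is harmless, but once all planned monitors have been added (ceph-mon-03 follows in 3.10.2) it is good practice to record them in ceph.conf so every client and daemon knows the full monitor list. A sketch of that follow-up on the deploy node; the host names and addresses are the ones used in this cluster and should be adapted to your own environment:

# in ~/ceph-cluster/ceph.conf, extend the monitor entries, e.g.
#   mon_initial_members = ceph-mon-01, ceph-mon-02, ceph-mon-03
#   mon_host = 172.16.10.148,172.16.10.110,172.16.10.182
# then push the updated configuration to the cluster hosts
ceph-deploy --overwrite-conf config push ceph-mon-0{1..3} ceph-node-0{1..3}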

3.10.1.8 ceph-mon-02节点服务管理

3.10.1.8.1 查看ceph-mon-02节点服务进程

点击查看代码

root@ceph-mon-02:~# ps -ef | grep ceph
root 15251 1 0 15:52 ? 00:00:00 /usr/bin/python3.6 /usr/bin/ceph-crash
ceph 16939 1 0 16:03 ? 00:00:04 /usr/bin/ceph-mon -f --cluster ceph --id ceph-mon-02 --setuser ceph --setgroup ceph

3.10.1.8.2 查看ceph-mon-02节点服务状态

点击查看代码

root@ceph-mon-02:~# systemctl status ceph-mon@ceph-mon-02
● ceph-mon@ceph-mon-02.service - Ceph cluster monitor daemon
Loaded: loaded (/lib/systemd/system/ceph-mon@.service; indirect; vendor preset: enabled)
Active: active (running) since Tue 2021-09-07 16:03:34 CST; 15min ago
Main PID: 16939 (ceph-mon)
Tasks: 27
CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@ceph-mon-02.service
└─16939 /usr/bin/ceph-mon -f --cluster ceph --id ceph-mon-02 --setuser ceph --setgroup ceph
Sep 07 16:03:34 ceph-mon-02 systemd[1]: Started Ceph cluster monitor daemon.

3.10.1.8.3 重启ceph-mon-02节点服务

点击查看代码

root@ceph-mon-02:~# systemctl restart ceph-mon@ceph-mon-02

3.10.2 扩展ceph-mon-03节点

3.10.2.1 配置系统时间同步

点击查看代码

root@ceph-mon-03:~# apt -y install chrony
root@ceph-mon-03:~# systemctl start chrony
root@ceph-mon-03:~# systemctl enable chrony

3.10.2.2 仓库准备

3.10.2.2.1 导入key

点击查看代码

root@ceph-mon-03:~# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
OK

3.10.2.2.2 添加仓库

点击查看代码

root@ceph-mon-03:~# echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific $(lsb_release -cs) main" >> /etc/apt/sources.list
root@ceph-mon-03:~# apt -y update && apt -y upgrade

3.10.2.3 安装ceph-mon

3.10.2.3.1 查看ceph-mon版本

点击查看代码

root@ceph-mon-03:~# apt-cache madison ceph-mon
ceph-mon | 16.2.5-1bionic | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 Packages
ceph-mon | 12.2.13-0ubuntu0.18.04.8 | http://mirrors.ucloud.cn/ubuntu bionic-updates/main amd64 Packages
ceph-mon | 12.2.13-0ubuntu0.18.04.4 | http://mirrors.ucloud.cn/ubuntu bionic-security/main amd64 Packages
ceph-mon | 12.2.4-0ubuntu1 | http://mirrors.ucloud.cn/ubuntu bionic/main amd64 Packages
ceph | 12.2.4-0ubuntu1 | http://mirrors.ucloud.cn/ubuntu bionic/main Sources
ceph | 12.2.13-0ubuntu0.18.04.4 | http://mirrors.ucloud.cn/ubuntu bionic-security/main Sources
ceph | 12.2.13-0ubuntu0.18.04.8 | http://mirrors.ucloud.cn/ubuntu bionic-updates/main Sources

3.10.2.3.2 安装ceph-mon

点击查看代码

root@ceph-mon-03:~# apt -y install ceph-common ceph-mon

3.10.2.3.3 验证ceph-mon版本

点击查看代码

root@ceph-mon-03:~# ceph-mon --version
ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
root@ceph-mon-03:~# ceph --version
ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)

3.10.2.4 设置ceph用户

3.10.2.4.1 查看用户

点击查看代码

root@ceph-mon-03:~# id ceph
uid=64045(ceph) gid=64045(ceph) groups=64045(ceph)

3.10.2.4.2 设置用户bash

点击查看代码

root@ceph-mon-03:~# usermod -s /bin/bash ceph

3.10.2.4.3 设置用户密码

点击查看代码

root@ceph-mon-03:~# passwd ceph

3.10.2.4.4 允许 ceph 用户以 sudo 执行特权命令

点击查看代码

root@ceph-mon-03:~# echo "ceph ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

3.10.2.5 添加ceph-mon-03节点

点击查看代码

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy mon add ceph-mon-03
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mon add ceph-mon-03
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : add
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f0cac4c72d0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] mon : ['ceph-mon-03']
[ceph_deploy.cli][INFO ] func : <function mon at 0x7f0cac4a4a50>
[ceph_deploy.cli][INFO ] address : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mon][INFO ] ensuring configuration of new mon host: ceph-mon-03
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-mon-03
The authenticity of host 'ceph-mon-03 (172.16.10.182)' can't be established.
ECDSA key fingerprint is SHA256:XSg7vDxMjrdhyaTMmuUd+mX0l12+rzinnXzKeobysy0.
Are you sure you want to continue connecting (yes/no)?yes
Warning: Permanently added 'ceph-mon-03,172.16.10.182' (ECDSA) to the list of known hosts.
ceph@ceph-mon-03's password:
[ceph-mon-03][DEBUG ] connection detected need for sudo
ceph@ceph-mon-03's password:
sudo: unable to resolve host ceph-mon-03
[ceph-mon-03][DEBUG ] connected to host: ceph-mon-03
[ceph-mon-03][DEBUG ] detect platform information from remote host
[ceph-mon-03][DEBUG ] detect machine type
[ceph-mon-03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.mon][DEBUG ] Adding mon to cluster ceph, host ceph-mon-03
[ceph_deploy.mon][DEBUG ] using mon address by resolving host: 172.16.10.182
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-mon-03 ...
ceph@ceph-mon-03's password:
[ceph-mon-03][DEBUG ] connection detected need for sudo
ceph@ceph-mon-03's password:
sudo: unable to resolve host ceph-mon-03
[ceph-mon-03][DEBUG ] connected to host: ceph-mon-03
[ceph-mon-03][DEBUG ] detect platform information from remote host
[ceph-mon-03][DEBUG ] detect machine type
[ceph-mon-03][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: Ubuntu 18.04 bionic
[ceph-mon-03][DEBUG ] determining if provided host has same hostname in remote
[ceph-mon-03][DEBUG ] get remote short hostname
[ceph-mon-03][DEBUG ] adding mon to ceph-mon-03
[ceph-mon-03][DEBUG ] get remote short hostname
[ceph-mon-03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon-03][DEBUG ] create the mon path if it does not exist
[ceph-mon-03][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-mon-03/done
[ceph-mon-03][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph-mon-03/done
[ceph-mon-03][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph-mon-03.mon.keyring
[ceph-mon-03][DEBUG ] create the monitor keyring file
[ceph-mon-03][INFO ] Running command: sudo ceph --cluster ceph mon getmap -o /var/lib/ceph/tmp/ceph.ceph-mon-03.monmap
[ceph-mon-03][WARNIN] got monmap epoch 2
[ceph-mon-03][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i ceph-mon-03 --monmap /var/lib/ceph/tmp/ceph.ceph-mon-03.monmap --keyring /var/lib/ceph/tmp/ceph-ceph-mon-03.mon.keyring --setuser 64045 --setgroup 64045
[ceph-mon-03][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph-mon-03.mon.keyring
[ceph-mon-03][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-mon-03][DEBUG ] create the init path if it does not exist
[ceph-mon-03][INFO ] Running command: sudo systemctl enable ceph.target
[ceph-mon-03][INFO ] Running command: sudo systemctl enable ceph-mon@ceph-mon-03
[ceph-mon-03][WARNIN] Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph-mon-03.service → /lib/systemd/system/ceph-mon@.service.
[ceph-mon-03][INFO ] Running command: sudo systemctl start ceph-mon@ceph-mon-03
[ceph-mon-03][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon-03.asok mon_status
[ceph-mon-03][WARNIN] ceph-mon-03 is not defined in `mon initial members`
[ceph-mon-03][WARNIN] monitor ceph-mon-03 does not exist in monmap
[ceph-mon-03][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon-03.asok mon_status
[ceph-mon-03][DEBUG ] ********************************************************************************
[ceph-mon-03][DEBUG ] status for monitor: mon.ceph-mon-03
[ceph-mon-03][DEBUG ] {
[ceph-mon-03][DEBUG ] "election_epoch": 0,
[ceph-mon-03][DEBUG ] "extra_probe_peers": [
[ceph-mon-03][DEBUG ] {
[ceph-mon-03][DEBUG ] "addrvec": [
[ceph-mon-03][DEBUG ] {
[ceph-mon-03][DEBUG ] "addr": "172.16.10.110:3300",
[ceph-mon-03][DEBUG ] "nonce": 0,
[ceph-mon-03][DEBUG ] "type": "v2"
[ceph-mon-03][DEBUG ] },
[ceph-mon-03][DEBUG ] {
[ceph-mon-03][DEBUG ] "addr": "172.16.10.110:6789",
[ceph-mon-03][DEBUG ] "nonce": 0,
[ceph-mon-03][DEBUG ] "type": "v1"
[ceph-mon-03][DEBUG ] }
[ceph-mon-03][DEBUG ] ]
[ceph-mon-03][DEBUG ] }
[ceph-mon-03][DEBUG ] ],
[ceph-mon-03][DEBUG ] "feature_map": {
[ceph-mon-03][DEBUG ] "mon": [
[ceph-mon-03][DEBUG ] {
[ceph-mon-03][DEBUG ] "features": "0x3f01cfb9fffdffff",
[ceph-mon-03][DEBUG ] "num": 1,
[ceph-mon-03][DEBUG ] "release": "luminous"
[ceph-mon-03][DEBUG ] }
[ceph-mon-03][DEBUG ] ]
[ceph-mon-03][DEBUG ] },
[ceph-mon-03][DEBUG ] "features": {
[ceph-mon-03][DEBUG ] "quorum_con": "0",
[ceph-mon-03][DEBUG ] "quorum_mon": [],
[ceph-mon-03][DEBUG ] "required_con": "2449958197560098820",
[ceph-mon-03][DEBUG ] "required_mon": [
[ceph-mon-03][DEBUG ] "kraken",
[ceph-mon-03][DEBUG ] "luminous",
[ceph-mon-03][DEBUG ] "mimic",
[ceph-mon-03][DEBUG ] "osdmap-prune",
[ceph-mon-03][DEBUG ] "nautilus",
[ceph-mon-03][DEBUG ] "octopus",
[ceph-mon-03][DEBUG ] "pacific",
[ceph-mon-03][DEBUG ] "elector-pinging"
[ceph-mon-03][DEBUG ] ]
[ceph-mon-03][DEBUG ] },
[ceph-mon-03][DEBUG ] "monmap": {
[ceph-mon-03][DEBUG ] "created": "2021-08-29T06:36:59.023456Z",
[ceph-mon-03][DEBUG ] "disallowed_leaders: ": "",
[ceph-mon-03][DEBUG ] "election_strategy": 1,
[ceph-mon-03][DEBUG ] "epoch": 2,
[ceph-mon-03][DEBUG ] "features": {
[ceph-mon-03][DEBUG ] "optional": [],
[ceph-mon-03][DEBUG ] "persistent": [
[ceph-mon-03][DEBUG ] "kraken",
[ceph-mon-03][DEBUG ] "luminous",
[ceph-mon-03][DEBUG ] "mimic",
[ceph-mon-03][DEBUG ] "osdmap-prune",
[ceph-mon-03][DEBUG ] "nautilus",
[ceph-mon-03][DEBUG ] "octopus",
[ceph-mon-03][DEBUG ] "pacific",
[ceph-mon-03][DEBUG ] "elector-pinging"
[ceph-mon-03][DEBUG ] ]
[ceph-mon-03][DEBUG ] },
[ceph-mon-03][DEBUG ] "fsid": "6e521054-1532-4bc8-9971-7f8ae93e8430",
[ceph-mon-03][DEBUG ] "min_mon_release": 16,
[ceph-mon-03][DEBUG ] "min_mon_release_name": "pacific",
[ceph-mon-03][DEBUG ] "modified": "2021-09-07T08:03:35.011024Z",
[ceph-mon-03][DEBUG ] "mons": [
[ceph-mon-03][DEBUG ] {
[ceph-mon-03][DEBUG ] "addr": "172.16.10.148:6789/0",
[ceph-mon-03][DEBUG ] "crush_location": "{}",
[ceph-mon-03][DEBUG ] "name": "ceph-mon-01",
[ceph-mon-03][DEBUG ] "priority": 0,
[ceph-mon-03][DEBUG ] "public_addr": "172.16.10.148:6789/0",
[ceph-mon-03][DEBUG ] "public_addrs": {
[ceph-mon-03][DEBUG ] "addrvec": [
[ceph-mon-03][DEBUG ] {
[ceph-mon-03][DEBUG ] "addr": "172.16.10.148:3300",
[ceph-mon-03][DEBUG ] "nonce": 0,
[ceph-mon-03][DEBUG ] "type": "v2"
[ceph-mon-03][DEBUG ] },
[ceph-mon-03][DEBUG ] {
[ceph-mon-03][DEBUG ] "addr": "172.16.10.148:6789",
[ceph-mon-03][DEBUG ] "nonce": 0,
[ceph-mon-03][DEBUG ] "type": "v1"
[ceph-mon-03][DEBUG ] }
[ceph-mon-03][DEBUG ] ]
[ceph-mon-03][DEBUG ] },
[ceph-mon-03][DEBUG ] "rank": 0,
[ceph-mon-03][DEBUG ] "weight": 0
[ceph-mon-03][DEBUG ] },
[ceph-mon-03][DEBUG ] {
[ceph-mon-03][DEBUG ] "addr": "172.16.10.110:6789/0",
[ceph-mon-03][DEBUG ] "crush_location": "{}",
[ceph-mon-03][DEBUG ] "name": "ceph-mon-02",
[ceph-mon-03][DEBUG ] "priority": 0,
[ceph-mon-03][DEBUG ] "public_addr": "172.16.10.110:6789/0",
[ceph-mon-03][DEBUG ] "public_addrs": {
[ceph-mon-03][DEBUG ] "addrvec": [
[ceph-mon-03][DEBUG ] {
[ceph-mon-03][DEBUG ] "addr": "172.16.10.110:3300",
[ceph-mon-03][DEBUG ] "nonce": 0,
[ceph-mon-03][DEBUG ] "type": "v2"
[ceph-mon-03][DEBUG ] },
[ceph-mon-03][DEBUG ] {
[ceph-mon-03][DEBUG ] "addr": "172.16.10.110:6789",
[ceph-mon-03][DEBUG ] "nonce": 0,
[ceph-mon-03][DEBUG ] "type": "v1"
[ceph-mon-03][DEBUG ] }
[ceph-mon-03][DEBUG ] ]
[ceph-mon-03][DEBUG ] },
[ceph-mon-03][DEBUG ] "rank": 1,
[ceph-mon-03][DEBUG ] "weight": 0
[ceph-mon-03][DEBUG ] }
[ceph-mon-03][DEBUG ] ],
[ceph-mon-03][DEBUG ] "stretch_mode": false
[ceph-mon-03][DEBUG ] },
[ceph-mon-03][DEBUG ] "name": "ceph-mon-03",
[ceph-mon-03][DEBUG ] "outside_quorum": [],
[ceph-mon-03][DEBUG ] "quorum": [],
[ceph-mon-03][DEBUG ] "rank": -1,
[ceph-mon-03][DEBUG ] "state": "probing",
[ceph-mon-03][DEBUG ] "stretch_mode": false,
[ceph-mon-03][DEBUG ] "sync_provider": []
[ceph-mon-03][DEBUG ] }
[ceph-mon-03][DEBUG ] ********************************************************************************
[ceph-mon-03][INFO ] monitor: mon.ceph-mon-03 is currently at the state of probing

3.10.2.6 验证ceph-mon-03状态


ceph@ceph-deploy:~/ceph-cluster$ ceph quorum_status --format json-pretty
{
    "election_epoch": 24,
    "quorum": [
        0,
        1,
        2
    ],
    "quorum_names": [
        "ceph-mon-01",
        "ceph-mon-02",
        "ceph-mon-03"
    ],
    "quorum_leader_name": "ceph-mon-01",
    "quorum_age": 125,
    "features": {
        "quorum_con": "4540138297136906239",
        "quorum_mon": [
            "kraken",
            "luminous",
            "mimic",
            "osdmap-prune",
            "nautilus",
            "octopus",
            "pacific",
            "elector-pinging"
        ]
    },
    "monmap": {
        "epoch": 3,
        "fsid": "6e521054-1532-4bc8-9971-7f8ae93e8430",
        "modified": "2021-09-07T08:10:34.206861Z",
        "created": "2021-08-29T06:36:59.023456Z",
        "min_mon_release": 16,
        "min_mon_release_name": "pacific",
        "election_strategy": 1,
        "disallowed_leaders: ": "",
        "stretch_mode": false,
        "features": {
            "persistent": [
                "kraken",
                "luminous",
                "mimic",
                "osdmap-prune",
                "nautilus",
                "octopus",
                "pacific",
                "elector-pinging"
            ],
            "optional": []
        },
        "mons": [
            {
                "rank": 0,
                "name": "ceph-mon-01",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "172.16.10.148:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "172.16.10.148:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "172.16.10.148:6789/0",
                "public_addr": "172.16.10.148:6789/0",
                "priority": 0,
                "weight": 0,
                "crush_location": "{}"
            },
            {
                "rank": 1,
                "name": "ceph-mon-02",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "172.16.10.110:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "172.16.10.110:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "172.16.10.110:6789/0",
                "public_addr": "172.16.10.110:6789/0",
                "priority": 0,
                "weight": 0,
                "crush_location": "{}"
            },
            {
                "rank": 2,
                "name": "ceph-mon-03",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "172.16.10.182:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "172.16.10.182:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "172.16.10.182:6789/0",
                "public_addr": "172.16.10.182:6789/0",
                "priority": 0,
                "weight": 0,
                "crush_location": "{}"
            }
        ]
    }
}
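
The fields that matter most in this output are quorum_names and quorum_leader_name: all three monitors are members and ceph-mon-01 is the leader. As a quicker check, the standard `ceph mon stat` subcommand gives a one-line summary (a minimal sketch; it assumes the admin keyring is available on ceph-deploy, as configured earlier):

# One-line summary of the monitor map, quorum membership and leader
ceph mon stat

# Or pull just the relevant fields out of the JSON shown above
ceph quorum_status --format json-pretty | grep '"quorum_leader_name"'
ceph quorum_status --format json-pretty | grep -A 4 '"quorum_names"'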

3.10.2.7 验证集群状态


ceph@ceph-deploy:~/ceph-cluster$ ceph -s
  cluster:
    id:     6e521054-1532-4bc8-9971-7f8ae93e8430
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 3m)
    mgr: ceph-mgr-01(active, since 23h)
    osd: 9 osds: 9 up (since 21h), 9 in (since 23h)

  data:
    pools:   1 pools, 1 pgs
    objects: 3 objects, 0 B
    usage:   61 MiB used, 180 GiB / 180 GiB avail
    pgs:     1 active+clean
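
If the health line ever reports something other than HEALTH_OK while nodes are being added, two standard subcommands help narrow it down (a hedged sketch, not captured output):

# Explain the current health state in detail
ceph health detail

# Follow cluster events live while mons/OSDs join (Ctrl+C to stop)
ceph -w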

3.10.2.8 ceph-mon-03节点服务管理

3.10.2.8.1 查看ceph-mon-03节点服务进程


root@ceph-mon-03:~# ps -ef | grep ceph
root 15132 1 0 15:52 ? 00:00:00 /usr/bin/python3.6 /usr/bin/ceph-crash
ceph 16821 1 0 16:10 ? 00:00:04 /usr/bin/ceph-mon -f --cluster ceph --id ceph-mon-03 --setuser ceph --setgroup ceph

3.10.2.8.2 查看ceph-mon-03节点服务状态


root@ceph-mon-03:~# systemctl status ceph-mon@ceph-mon-03
● ceph-mon@ceph-mon-03.service - Ceph cluster monitor daemon
Loaded: loaded (/lib/systemd/system/ceph-mon@.service; indirect; vendor preset: enabled)
Active: active (running) since Tue 2021-09-07 16:10:33 CST; 11min ago
Main PID: 16821 (ceph-mon)
Tasks: 27
CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@ceph-mon-03.service
└─16821 /usr/bin/ceph-mon -f --cluster ceph --id ceph-mon-03 --setuser ceph --setgroup ceph
Sep 07 16:10:33 ceph-mon-03 systemd[1]: Started Ceph cluster monitor daemon.

3.10.2.8.3 重启ceph-mon-03节点服务


root@ceph-mon-03:~# systemctl restart ceph-mon@ceph-mon-03
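
After a restart it is worth confirming that the daemon came back on every monitor and that quorum is still complete. The loop below is a sketch; it assumes the ceph user on ceph-deploy can SSH to the mon hosts, as ceph-deploy itself does:

# Check the ceph-mon unit on each monitor node, then re-check quorum membership
for host in ceph-mon-01 ceph-mon-02 ceph-mon-03; do
    echo -n "${host}: "
    ssh "${host}" "systemctl is-active ceph-mon@${host}"
done
ceph quorum_status --format json-pretty | grep -A 4 '"quorum_names"'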

3.10.3 扩展ceph-mgr-02节点

3.10.3.1 配置系统时间同步


root@ceph-mgr-02:~# apt -y install chrony
root@ceph-mgr-02:~# systemctl start chrony
root@ceph-mgr-02:~# systemctl enable chrony

3.10.3.2 仓库准备

3.10.3.2.1 导入key


root@ceph-mgr-02:~# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
OK

3.10.3.2.2 添加仓库


root@ceph-mgr-02:~# echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific $(lsb_release -cs) main" >> /etc/apt/sources.list
root@ceph-mgr-02:~# apt -y update && apt -y upgrade
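
Every new node repeats the same preparation (time sync, repository key, repository entry), so these commands can be bundled into a small helper script. The sketch below only repeats the steps already shown in this document and assumes it is run as root on the new node:

#!/bin/bash
# prepare-ceph-node.sh - one-shot node preparation: time sync + pacific repository
set -e
apt -y install chrony
systemctl enable --now chrony
wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | apt-key add -
echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific $(lsb_release -cs) main" >> /etc/apt/sources.list
apt -y update && apt -y upgrade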

3.10.3.3 安装ceph-mgr

3.10.3.3.1 查看ceph-mgr版本


root@ceph-mgr-02:~# apt-cache madison ceph-mgr
ceph-mgr | 16.2.5-1bionic | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 Packages
ceph-mgr | 12.2.13-0ubuntu0.18.04.8 | http://mirrors.ucloud.cn/ubuntu bionic-updates/main amd64 Packages
ceph-mgr | 12.2.13-0ubuntu0.18.04.4 | http://mirrors.ucloud.cn/ubuntu bionic-security/main amd64 Packages
ceph-mgr | 12.2.4-0ubuntu1 | http://mirrors.ucloud.cn/ubuntu bionic/main amd64 Packages
ceph | 12.2.4-0ubuntu1 | http://mirrors.ucloud.cn/ubuntu bionic/main Sources
ceph | 12.2.13-0ubuntu0.18.04.4 | http://mirrors.ucloud.cn/ubuntu bionic-security/main Sources
ceph | 12.2.13-0ubuntu0.18.04.8 | http://mirrors.ucloud.cn/ubuntu bionic-updates/main Sources

3.10.3.3.2 安装ceph-mgr


root@ceph-mgr-02:~# apt -y install ceph-common ceph-mgr

3.10.3.3.3 验证ceph-mgr版本


root@ceph-mgr-02:~# ceph-mgr --version
ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
root@ceph-mgr-02:~# ceph --version
ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)

3.10.3.4 设置ceph用户

3.10.3.4.1 查看用户


root@ceph-mgr-02:~# id ceph
uid=64045(ceph) gid=64045(ceph) groups=64045(ceph)

3.10.3.4.2 设置用户bash


root@ceph-mgr-02:~# usermod -s /bin/bash ceph

3.10.3.4.3 设置用户密码


root@ceph-mgr-02:~# passwd ceph
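
When many nodes are being prepared, the password can also be set non-interactively; a sketch using chpasswd with a placeholder password that must be replaced:

# Non-interactive alternative to `passwd ceph` (replace CHANGE_ME with a real password)
echo 'ceph:CHANGE_ME' | chpasswd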

3.10.3.4.4 允许 ceph 用户以 sudo 执行特权命令


root@ceph-mgr-02:~# echo "ceph ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
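
Appending directly to /etc/sudoers works, but a slightly safer pattern is a drop-in file under /etc/sudoers.d that is syntax-checked with visudo before it takes effect; a sketch:

# Safer alternative: dedicated sudoers drop-in, validated before use
echo "ceph ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/ceph
chmod 0440 /etc/sudoers.d/ceph
visudo -cf /etc/sudoers.d/ceph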

3.10.3.5 添加ceph-mgr-02节点


ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy mgr create ceph-mgr-02
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create ceph-mgr-02
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] mgr : [('ceph-mgr-02', 'ceph-mgr-02')]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f87476cee60>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mgr at 0x7f8747b2f0d0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph-mgr-02:ceph-mgr-02
The authenticity of host 'ceph-mgr-02 (172.16.10.248)' can't be established.
ECDSA key fingerprint is SHA256:eNbRt8LVzq8+CudIlXlXe9z5pHolyzZ098I98MUT4uE.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph-mgr-02,172.16.10.248' (ECDSA) to the list of known hosts.
ceph@ceph-mgr-02's password:
[ceph-mgr-02][DEBUG ] connection detected need for sudo
ceph@ceph-mgr-02's password:
sudo: unable to resolve host ceph-mgr-02
[ceph-mgr-02][DEBUG ] connected to host: ceph-mgr-02
[ceph-mgr-02][DEBUG ] detect platform information from remote host
[ceph-mgr-02][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-mgr-02
[ceph-mgr-02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mgr-02][WARNIN] mgr keyring does not exist yet, creating one
[ceph-mgr-02][DEBUG ] create a keyring file
[ceph-mgr-02][DEBUG ] create path recursively if it doesn't exist
[ceph-mgr-02][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-mgr-02 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-mgr-02/keyring
[ceph-mgr-02][INFO ] Running command: sudo systemctl enable ceph-mgr@ceph-mgr-02
[ceph-mgr-02][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-mgr-02.service → /lib/systemd/system/ceph-mgr@.service.
[ceph-mgr-02][INFO ] Running command: sudo systemctl start ceph-mgr@ceph-mgr-02
[ceph-mgr-02][INFO ] Running command: sudo systemctl enable ceph.target
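
The `sudo: unable to resolve host ceph-mgr-02` warning in the log above only means that ceph-mgr-02 cannot resolve its own hostname locally; it is harmless here, but it can be silenced by adding the node to its own /etc/hosts (a sketch, to be run on ceph-mgr-02):

# Run on ceph-mgr-02: let the node resolve its own hostname
echo "172.16.10.248 ceph-mgr-02" >> /etc/hosts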

3.10.3.6 验证集群状态


ceph@ceph-deploy:~/ceph-cluster$ ceph -s
  cluster:
    id:     6e521054-1532-4bc8-9971-7f8ae93e8430
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 7m)
    mgr: ceph-mgr-01(active, since 23h), standbys: ceph-mgr-02
    osd: 9 osds: 9 up (since 22h), 9 in (since 23h)

  data:
    pools:   1 pools, 1 pgs
    objects: 3 objects, 0 B
    usage:   61 MiB used, 180 GiB / 180 GiB avail
    pgs:     1 active+clean
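
With ceph-mgr-02 registered as a standby, the failover path can be exercised by failing the active manager; the standby should take over within a few seconds. `ceph mgr fail` is a standard subcommand, but this sketch is best run only on a test cluster:

# Force a manager failover, then confirm ceph-mgr-02 became active
ceph mgr fail ceph-mgr-01
ceph -s | grep mgr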

3.10.3.7 ceph-mgr-02节点服务管理

3.10.3.7.1 查看ceph-mgr-02节点服务进程


root@ceph-mgr-02:~# ps -ef | grep ceph
root 16456 1 0 15:52 ? 00:00:00 /usr/bin/python3.6 /usr/bin/ceph-crash
ceph 18043 1 1 16:22 ? 00:00:06 /usr/bin/ceph-mgr -f --cluster ceph --id ceph-mgr-02 --setuser ceph --setgroup ceph

3.10.3.7.2 查看ceph-mgr-02节点服务状态


root@ceph-mgr-02:~# systemctl status ceph-mgr@ceph-mgr-02
● ceph-mgr@ceph-mgr-02.service - Ceph cluster manager daemon
Loaded: loaded (/lib/systemd/system/ceph-mgr@.service; indirect; vendor preset: enabled)
Active: active (running) since Tue 2021-09-07 16:22:51 CST; 8min ago
Main PID: 18043 (ceph-mgr)
Tasks: 21 (limit: 1105)
CGroup: /system.slice/system-ceph\x2dmgr.slice/ceph-mgr@ceph-mgr-02.service
└─18043 /usr/bin/ceph-mgr -f --cluster ceph --id ceph-mgr-02 --setuser ceph --setgroup ceph
Sep 07 16:22:51 ceph-mgr-02 systemd[1]: Started Ceph cluster manager daemon.
Sep 07 16:22:51 ceph-mgr-02 systemd[1]: /lib/systemd/system/ceph-mgr@.service:21: Unknown lvalue 'ProtectHostname' in section 'Service'
Sep 07 16:22:51 ceph-mgr-02 systemd[1]: /lib/systemd/system/ceph-mgr@.service:22: Unknown lvalue 'ProtectKernelLogs' in section 'Service'

3.10.3.7.3 重启ceph-mgr-02节点服务


root@ceph-mgr-02:~# systemctl restart ceph-mgr@ceph-mgr-02

3.11 测试集群上传和下载

3.11.1 存取数据流程

  • 存取数据时,客户端必须先连接至RADOS集群上某存储池,然后根据对象名称由相关的CRUSH规则完成数据对象寻址。

3.11.2 创建pool


# Create the pool with explicit pg_num and pgp_num; pgp_num controls how PGs are actually placed onto OSDs and is normally set equal to pg_num
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool create mypool 32 32
pool 'mypool' created
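
The new pool's key parameters can be inspected, and adjusted later if needed, with the standard `ceph osd pool get/set` subcommands (a sketch, not captured output):

# Inspect replica count and PG settings of mypool
ceph osd pool get mypool size
ceph osd pool get mypool pg_num
ceph osd pool get mypool pgp_num

# pg_num can be raised later if the pool grows; pgp_num is normally kept equal to it
# ceph osd pool set mypool pg_num 64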

3.11.3 查看pool

3.11.3.1 方法一


ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool ls
device_health_metrics
mypool

3.11.3.2 方法二


ceph@ceph-deploy:~/ceph-cluster$ rados lspools
device_health_metrics
mypool

3.11.4 查看PG


ceph@ceph-deploy:~/ceph-cluster$ ceph pg ls-by-pool mypool
PG OBJECTS DEGRADED MISPLACED UNFOUND BYTES OMAP_BYTES* OMAP_KEYS* LOG STATE SINCE VERSION REPORTED UP ACTING SCRUB_STAMP DEEP_SCRUB_STAMP
2.0 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [3,6,0]p3 [3,6,0]p3 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.1 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [2,6,3]p2 [2,6,3]p2 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.2 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [5,1,8]p5 [5,1,8]p5 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.3 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [5,2,8]p5 [5,2,8]p5 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.4 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [1,7,3]p1 [1,7,3]p1 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.5 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [8,0,4]p8 [8,0,4]p8 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.6 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [1,6,3]p1 [1,6,3]p1 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.7 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [3,7,2]p3 [3,7,2]p3 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.8 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [3,7,0]p3 [3,7,0]p3 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.9 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [1,4,8]p1 [1,4,8]p1 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.a 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [6,1,3]p6 [6,1,3]p6 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.b 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [8,5,2]p8 [8,5,2]p8 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.c 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [6,0,5]p6 [6,0,5]p6 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.d 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [6,3,2]p6 [6,3,2]p6 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.e 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [2,8,3]p2 [2,8,3]p2 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.f 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [8,4,0]p8 [8,4,0]p8 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.10 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [8,1,5]p8 [8,1,5]p8 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.11 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [4,1,8]p4 [4,1,8]p4 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.12 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [7,1,3]p7 [7,1,3]p7 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.13 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [7,4,2]p7 [7,4,2]p7 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.14 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [3,7,0]p3 [3,7,0]p3 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.15 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [7,1,3]p7 [7,1,3]p7 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.16 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [5,7,1]p5 [5,7,1]p5 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.17 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [5,6,2]p5 [5,6,2]p5 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.18 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [8,4,2]p8 [8,4,2]p8 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.19 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [0,4,7]p0 [0,4,7]p0 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.1a 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [3,8,2]p3 [3,8,2]p3 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.1b 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [6,5,2]p6 [6,5,2]p6 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.1c 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [8,4,1]p8 [8,4,1]p8 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.1d 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [7,3,0]p7 [7,3,0]p7 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.1e 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [2,7,5]p2 [2,7,5]p2 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800
2.1f 0 0 0 0 0 0 0 0 active+clean 4m 0'0 123:10 [0,3,8]p0 [0,3,8]p0 2021-09-07T18:09:47.823875+0800 2021-09-07T18:09:47.823875+0800


  • NOTE: Omap statistics are gathered during deep scrub and may be inaccurate soon afterwards depending on utilization. See http://docs.ceph.com/en/latest/dev/placement-group/#omap-statistics
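
To see how PGs and data are spread per OSD rather than per PG, the standard `ceph osd df tree` subcommand gives a per-OSD summary grouped by the CRUSH hierarchy (shown as a suggestion, not captured output):

# Per-OSD utilisation and PG count, grouped by CRUSH hierarchy
ceph osd df tree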

3.11.5 上传文件

3.11.5.1 上传文件


ceph@ceph-deploy:~/ceph-cluster$ sudo rados put msg1 /var/log/syslog --pool=mypool

3.11.5.2 列出文件


ceph@ceph-deploy:~/ceph-cluster$ rados ls --pool=mypool
msg1

3.11.5.3 查看文件信息


ceph@ceph-deploy:~/ceph-cluster$ ceph osd map mypool msg1
osdmap e124 pool 'mypool' (2) object 'msg1' -> pg 2.c833d430 (2.10) -> up ([8,1,5], p8) acting ([8,1,5], p8)
# Reading the output: object msg1 hashes to 0xc833d430, which falls into PG 2.10, i.e. PG 10 of pool id 2 (mypool).
# Both the up set and the acting set are OSDs [8,1,5], with OSD 8 as the primary; three OSDs in the set means the object is stored as three replicas, and CRUSH computed which OSDs hold them.
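
Because CRUSH computes placement purely from the object name, the pool and the CRUSH map, the same mapping can be checked for any name, even before an object is written. A small sketch; msg2 and msg3 are made-up example names:

# Placement is deterministic per object name (msg2/msg3 are hypothetical)
for obj in msg1 msg2 msg3; do
    ceph osd map mypool "${obj}"
done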

3.11.6 下载文件


ceph@ceph-deploy:~/ceph-cluster$ sudo rados get msg1 --pool=mypool /opt/1.txt
ceph@ceph-deploy:~/ceph-cluster$ ls -l /opt/
total 8
-rw-r--r-- 1 root root 4292 Sep 10 15:07 1.txt

3.11.7 删除文件


ceph@ceph-deploy:~/ceph-cluster$ sudo rados rm msg1 --pool=mypool
ceph@ceph-deploy:~/ceph-cluster$ rados ls --pool=mypool
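
After the removal the listing comes back empty. The whole upload/download path can also be exercised in one go with a small round-trip test that uploads a file, downloads it again, compares checksums and cleans up. The sketch reuses only the rados subcommands shown above; the /tmp paths are arbitrary choices, and the source file is snapshotted first because /var/log/syslog keeps growing:

#!/bin/bash
# Round-trip test against mypool: put -> get -> verify -> rm
set -e
src=/tmp/msg1.src
dst=/tmp/msg1.check

cp /var/log/syslog "${src}"

sudo rados put msg1 "${src}" --pool=mypool
sudo rados get msg1 --pool=mypool "${dst}"

# The snapshot and the downloaded copy should have identical checksums
md5sum "${src}" "${dst}"

sudo rados rm msg1 --pool=mypool
rados ls --pool=mypool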