5 Using Ceph RBD
5.1 RBD architecture
- RBD, short for RADOS Block Device, is one of the most commonly used storage types. An RBD block device behaves like a disk and can be mapped and mounted; it supports snapshots, multiple replicas, cloning, and consistency, and its data is striped across multiple OSDs in the Ceph cluster.
- Striping is a technique that automatically balances I/O load across multiple physical disks: a contiguous block of data is split into many small pieces that are stored on different disks. This lets multiple processes access different parts of the data concurrently without disk contention, and it provides maximal I/O parallelism for sequential access patterns, yielding very good performance.
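The same chunking idea underlies RBD images: each image is split into fixed-size RADOS objects that CRUSH distributes across OSDs. As a rough sketch of the arithmetic (the 3 GiB / 4 MiB numbers match the image created later in this section):

```shell
# Number of RADOS objects backing an image = image size / object size.
# With the default 4 MiB object size, a 3 GiB image is backed by:
image_size_mib=$((3 * 1024))   # 3 GiB expressed in MiB
object_size_mib=4              # default RBD object size
echo "$((image_size_mib / object_size_mib)) objects"   # prints "768 objects"
```

This matches the `size 3 GiB in 768 objects` line that `rbd info` reports later in this section.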
5.2 rbd command help
ceph@ceph-deploy:~/ceph-cluster$ rbd -h
usage: rbd <command> ...
Command-line interface for managing Ceph RBD images.
Positional arguments:
<command>
bench Simple benchmark.
children Display children of an image or its snapshot.
clone Clone a snapshot into a CoW child image.
config global get Get a global-level configuration override.
config global list (... ls) List global-level configuration overrides.
config global remove (... rm) Remove a global-level configuration
override.
config global set Set a global-level configuration override.
config image get Get an image-level configuration override.
config image list (... ls) List image-level configuration overrides.
config image remove (... rm) Remove an image-level configuration
override.
config image set Set an image-level configuration override.
config pool get Get a pool-level configuration override.
config pool list (... ls) List pool-level configuration overrides.
config pool remove (... rm) Remove a pool-level configuration
override.
config pool set Set a pool-level configuration override.
copy (cp) Copy src image to dest.
create Create an empty image.
deep copy (deep cp) Deep copy src image to dest.
device list (showmapped) List mapped rbd images.
device map (map) Map an image to a block device.
device unmap (unmap) Unmap a rbd device.
diff Print extents that differ since a
previous snap, or image creation.
disk-usage (du) Show disk usage stats for pool, image or
snapshot.
encryption format Format image to an encrypted format.
export Export image to file.
export-diff Export incremental diff to file.
feature disable Disable the specified image feature.
feature enable Enable the specified image feature.
flatten Fill clone with parent data (make it
independent).
group create Create a group.
group image add Add an image to a group.
group image list (... ls) List images in a group.
group image remove (... rm) Remove an image from a group.
group list (group ls) List rbd groups.
group remove (group rm) Delete a group.
group rename Rename a group within pool.
group snap create Make a snapshot of a group.
group snap list (... ls) List snapshots of a group.
group snap remove (... rm) Remove a snapshot from a group.
group snap rename Rename group's snapshot.
group snap rollback Rollback group to snapshot.
image-cache invalidate Discard existing / dirty image cache
image-meta get Image metadata get the value associated
with the key.
image-meta list (image-meta ls) Image metadata list keys with values.
image-meta remove (image-meta rm) Image metadata remove the key and value
associated.
image-meta set Image metadata set key with value.
import Import image from file.
import-diff Import an incremental diff.
info Show information about image size,
striping, etc.
journal client disconnect Flag image journal client as disconnected.
journal export Export image journal.
journal import Import image journal.
journal info Show information about image journal.
journal inspect Inspect image journal for structural
errors.
journal reset Reset image journal.
journal status Show status of image journal.
list (ls) List rbd images.
lock add Take a lock on an image.
lock list (lock ls) Show locks held on an image.
lock remove (lock rm) Release a lock on an image.
merge-diff Merge two diff exports together.
migration abort Cancel interrupted image migration.
migration commit Commit image migration.
migration execute Execute image migration.
migration prepare Prepare image migration.
mirror image demote Demote an image to non-primary for RBD
mirroring.
mirror image disable Disable RBD mirroring for an image.
mirror image enable Enable RBD mirroring for an image.
mirror image promote Promote an image to primary for RBD
mirroring.
mirror image resync Force resync to primary image for RBD
mirroring.
mirror image snapshot Create RBD mirroring image snapshot.
mirror image status Show RBD mirroring status for an image.
mirror pool demote Demote all primary images in the pool.
mirror pool disable Disable RBD mirroring by default within a
pool.
mirror pool enable Enable RBD mirroring by default within a
pool.
mirror pool info Show information about the pool mirroring
configuration.
mirror pool peer add Add a mirroring peer to a pool.
mirror pool peer bootstrap create Create a peer bootstrap token to import
in a remote cluster
mirror pool peer bootstrap import Import a peer bootstrap token created
from a remote cluster
mirror pool peer remove Remove a mirroring peer from a pool.
mirror pool peer set Update mirroring peer settings.
mirror pool promote Promote all non-primary images in the
pool.
mirror pool status Show status for all mirrored images in
the pool.
mirror snapshot schedule add Add mirror snapshot schedule.
mirror snapshot schedule list (... ls)
List mirror snapshot schedule.
mirror snapshot schedule remove (... rm)
Remove mirror snapshot schedule.
mirror snapshot schedule status Show mirror snapshot schedule status.
namespace create Create an RBD image namespace.
namespace list (namespace ls) List RBD image namespaces.
namespace remove (namespace rm) Remove an RBD image namespace.
object-map check Verify the object map is correct.
object-map rebuild Rebuild an invalid object map.
perf image iostat Display image IO statistics.
perf image iotop Display a top-like IO monitor.
pool init Initialize pool for use by RBD.
pool stats Show pool statistics.
remove (rm) Delete an image.
rename (mv) Rename image within pool.
resize Resize (expand or shrink) image.
snap create (snap add) Create a snapshot.
snap limit clear Remove snapshot limit.
snap limit set Limit the number of snapshots.
snap list (snap ls) Dump list of image snapshots.
snap protect Prevent a snapshot from being deleted.
snap purge Delete all unprotected snapshots.
snap remove (snap rm) Delete a snapshot.
snap rename Rename a snapshot.
snap rollback (snap revert) Rollback image to snapshot.
snap unprotect Allow a snapshot to be deleted.
sparsify Reclaim space for zeroed image extents.
status Show the status of this image.
trash list (trash ls) List trash images.
trash move (trash mv) Move an image to the trash.
trash purge Remove all expired images from trash.
trash purge schedule add Add trash purge schedule.
trash purge schedule list (... ls)
List trash purge schedule.
trash purge schedule remove (... rm)
Remove trash purge schedule.
trash purge schedule status Show trash purge schedule status.
trash remove (trash rm) Remove an image from trash.
trash restore Restore an image from trash.
watch Watch events on image.
Optional arguments:
-c [ --conf ] arg path to cluster configuration
--cluster arg cluster name
--id arg client id (without 'client.' prefix)
-n [ --name ] arg client name
-m [ --mon_host ] arg monitor host
-K [ --keyfile ] arg path to secret key
-k [ --keyring ] arg path to keyring
See 'rbd help <command>' for help on a specific command.
5.3 Creating a storage pool
5.3.1 Create the storage pool
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool create wgsrbd 64 64
pool 'wgsrbd' created
5.3.2 Verify the storage pool
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool ls
device_health_metrics
wgsrbd
5.3.3 Enable rbd on the storage pool
5.3.3.1 Command format for enabling rbd on a pool
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool application -h
General usage:
usage: ceph [-h] [-c CEPHCONF] [-i INPUT_FILE] [-o OUTPUT_FILE]
[--setuser SETUSER] [--setgroup SETGROUP] [--id CLIENT_ID]
[--name CLIENT_NAME] [--cluster CLUSTER]
[--admin-daemon ADMIN_SOCKET] [-s] [-w] [--watch-debug]
[--watch-info] [--watch-sec] [--watch-warn] [--watch-error]
[-W WATCH_CHANNEL] [--version] [--verbose] [--concise]
[-f {json,json-pretty,xml,xml-pretty,plain,yaml}]
[--connect-timeout CLUSTER_TIMEOUT] [--block] [--period PERIOD]
Ceph administration tool
optional arguments:
-h, --help request mon help
-c CEPHCONF, --conf CEPHCONF
ceph configuration file
-i INPUT_FILE, --in-file INPUT_FILE
input file, or "-" for stdin
-o OUTPUT_FILE, --out-file OUTPUT_FILE
output file, or "-" for stdout
--setuser SETUSER set user file permission
--setgroup SETGROUP set group file permission
--id CLIENT_ID, --user CLIENT_ID
client id for authentication
--name CLIENT_NAME, -n CLIENT_NAME
client name for authentication
--cluster CLUSTER cluster name
--admin-daemon ADMIN_SOCKET
submit admin-socket commands ("help" for help)
-s, --status show cluster status
-w, --watch watch live cluster changes
--watch-debug watch debug events
--watch-info watch info events
--watch-sec watch security events
--watch-warn watch warn events
--watch-error watch error events
-W WATCH_CHANNEL, --watch-channel WATCH_CHANNEL
watch live cluster changes on a specific channel
(e.g., cluster, audit, cephadm, or '*' for all)
--version, -v display version
--verbose make verbose
--concise make less verbose
-f {json,json-pretty,xml,xml-pretty,plain,yaml}, --format {json,json-pretty,xml,xml-pretty,plain,yaml}
--connect-timeout CLUSTER_TIMEOUT
set a timeout for connecting to the cluster
--block block until completion (scrub and deep-scrub only)
--period PERIOD, -p PERIOD
polling period, default 1.0 second (for polling
commands only)
Local commands:
ping <mon.id> Send simple presence/life test to a mon
<mon.id> may be 'mon.*' for all mons
daemon {type.id|path} <cmd>
Same as --admin-daemon, but auto-find admin socket
daemonperf {type.id | path} [stat-pats] [priority] [<interval>] [<count>]
daemonperf {type.id | path} list|ls [stat-pats] [priority]
Get selected perf stats from daemon/admin socket
Optional shell-glob comma-delim match string stat-pats
Optional selection priority (can abbreviate name):
critical, interesting, useful, noninteresting, debug
List shows a table of all available stats
Run <count> times (default forever),
once per <interval> seconds (default 1)
Monitor commands:
osd pool application disable <pool> <app> [--yes-i-really-mean-it] disables use of an application <app> on pool <poolname>
osd pool application enable <pool> <app> [--yes-i-really-mean-it] enable use of an application <app> [cephfs,rbd,rgw] on pool <poolname>
osd pool application get [<pool>] [<app>] [<key>] get value of key <key> of application <app> on pool <poolname>
osd pool application rm <pool> <app> <key> removes application <app> metadata key <key> on pool <poolname>
osd pool application set <pool> <app> <key> <value> sets application <app> metadata key <key> to <value> on pool
<poolname>
5.3.3.2 Enable rbd on the pool
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool application enable wgsrbd rbd
enabled application 'rbd' on pool 'wgsrbd'
5.3.4 Initialize rbd
ceph@ceph-deploy:~/ceph-cluster$ rbd pool init -p wgsrbd
5.4 Creating images
5.4.1 Image overview
- An rbd pool cannot be used as a block device directly; you must first create images in it as needed, and the image file is what gets used as the block device. The rbd command creates, lists, and deletes block-device images, and also handles management operations such as cloning images, creating snapshots, rolling an image back to a snapshot, and viewing snapshots.
5.4.2 Image features
- layering: layered snapshot support, used for copy-on-write snapshots. An image can be snapshotted and the snapshot protected, then new images cloned from it; parent and child images use COW and share object data.
- striping: striping v2 support, similar to RAID 0 except that in Ceph the data is spread across different objects; it can improve performance in workloads with heavy sequential reads and writes.
- exclusive-lock: exclusive lock support, restricting an image to a single client at a time.
- object-map: object map support (requires exclusive-lock). When enabled, a bitmap of all the image's objects records whether each object actually exists, which speeds up import/export, used-space accounting, and some I/O paths.
- fast-diff: fast computation of data differences between an image and its snapshots (requires object-map).
- deep-flatten: snapshot flattening support, used to break snapshot dependency relationships during snapshot management.
- journaling: records data modifications in a journal so data can be recovered by replaying it (requires exclusive-lock); enabling this feature increases disk I/O.
- Features enabled by default: layering, exclusive-lock, object-map, fast-diff, deep-flatten.
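The default set is governed by the `rbd_default_features` option, which is a bitmask. A sketch of restricting new images to layering plus exclusive-lock (the bit values below are the standard RBD feature bits; the cluster-wide command is commented out because it needs admin access and should be verified against your release first):

```shell
# Feature bit values: layering=1, striping=2, exclusive-lock=4,
# object-map=8, fast-diff=16, deep-flatten=32, journaling=64.
# layering + exclusive-lock:
features=$((1 + 4))
echo "$features"   # prints 5
# Apply cluster-wide (run on a node with admin privileges):
# ceph config set global rbd_default_features "$features"
```

With this override in place, `rbd create` without an explicit `--image-feature` would produce images with only those two features enabled.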
5.4.3 Command format for creating images
ceph@ceph-deploy:~/ceph-cluster$ rbd help create
usage: rbd create [--pool <pool>] [--namespace <namespace>] [--image <image>]
[--image-format <image-format>] [--new-format]
[--order <order>] [--object-size <object-size>]
[--image-feature <image-feature>] [--image-shared]
[--stripe-unit <stripe-unit>]
[--stripe-count <stripe-count>] [--data-pool <data-pool>]
[--mirror-image-mode <mirror-image-mode>]
[--journal-splay-width <journal-splay-width>]
[--journal-object-size <journal-object-size>]
[--journal-pool <journal-pool>]
[--thick-provision] --size <size> [--no-progress]
<image-spec>
Create an empty image.
Positional arguments
<image-spec> image specification
(example: [<pool-name>/[<namespace>/]]<image-name>)
Optional arguments
-p [ --pool ] arg pool name
--namespace arg namespace name
--image arg image name
--image-format arg image format [default: 2]
--object-size arg object size in B/K/M [4K <= object size <= 32M]
--image-feature arg image features
[layering(+), exclusive-lock(+), object-map(+),
deep-flatten(+-), journaling(*)]
--image-shared shared image
--stripe-unit arg stripe unit in B/K/M
--stripe-count arg stripe count
--data-pool arg data pool
--mirror-image-mode arg mirror image mode [journal or snapshot]
--journal-splay-width arg number of active journal objects
--journal-object-size arg size of journal objects [4K <= size <= 64M]
--journal-pool arg pool for journal objects
--thick-provision fully allocate storage and zero image
-s [ --size ] arg image size (in M/G/T) [default: M]
--no-progress disable progress output
Image Features:
(*) supports enabling/disabling on existing images
(-) supports disabling-only on existing images
(+) enabled by default for new images if features not specified
5.4.4 Create images
ceph@ceph-deploy:~/ceph-cluster$ rbd create wgs-img1 --size 3G --pool wgsrbd --image-format 2 --image-feature layering
ceph@ceph-deploy:~/ceph-cluster$ rbd create wgs-img2 --size 3G --pool wgsrbd --image-format 2 --image-feature layering
5.4.5 Verify images
5.4.5.1 Command format for listing images
ceph@ceph-deploy:~/ceph-cluster$ rbd help ls
usage: rbd ls [--long] [--pool <pool>] [--namespace <namespace>]
[--format <format>] [--pretty-format]
<pool-spec>
List rbd images.
Positional arguments
<pool-spec> pool specification
(example: <pool-name>[/<namespace>]
Optional arguments
-l [ --long ] long listing format
-p [ --pool ] arg pool name
--namespace arg namespace name
--format arg output format (plain, json, or xml) [default: plain]
--pretty-format pretty formatting (json and xml)
5.4.5.2 List images
ceph@ceph-deploy:~/ceph-cluster$ rbd ls --pool wgsrbd
wgs-img1
5.4.5.3 View image information
ceph@ceph-deploy:~/ceph-cluster$ rbd ls --pool wgsrbd -l
NAME SIZE PARENT FMT PROT LOCK
wgs-img1 3 GiB 2
5.4.5.4 Show image information in JSON format
ceph@ceph-deploy:~/ceph-cluster$ rbd ls -p wgsrbd -l --format json --pretty-format
[
{
"image": "wgs-img1",
"id": "a4ac9cb9fcb03",
"size": 3221225472,
"format": 2
}
]
5.4.6 Enabling image features
5.4.6.1 Command format for enabling features
ceph@ceph-deploy:~/ceph-cluster$ rbd help feature enable
usage: rbd feature enable [--pool <pool>] [--namespace <namespace>]
[--image <image>]
[--journal-splay-width <journal-splay-width>]
[--journal-object-size <journal-object-size>]
[--journal-pool <journal-pool>]
<image-spec> <features> [<features> ...]
Enable the specified image feature.
Positional arguments
<image-spec> image specification
(example: [<pool-name>/[<namespace>/]]<image-name>)
<features> image features
[exclusive-lock, object-map, journaling]
Optional arguments
-p [ --pool ] arg pool name
--namespace arg namespace name
--image arg image name
--journal-splay-width arg number of active journal objects
--journal-object-size arg size of journal objects [4K <= size <= 64M]
--journal-pool arg pool for journal objects
5.4.6.2 Enable a specific feature
ceph@ceph-deploy:~/ceph-cluster$ rbd feature enable exclusive-lock -p wgsrbd --image wgs-img1
5.4.6.3 Verify image features
ceph@ceph-deploy:~/ceph-cluster$ rbd --image wgs-img1 -p wgsrbd info
rbd image 'wgs-img1':
size 3 GiB in 768 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: a4ac9cb9fcb03
block_name_prefix: rbd_data.a4ac9cb9fcb03
format: 2
features: layering, exclusive-lock
op_features:
flags:
create_timestamp: Sun Sep 19 18:02:40 2021
access_timestamp: Sun Sep 19 18:02:40 2021
modify_timestamp: Sun Sep 19 18:02:40 2021
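In the info output above, `order 22 (4 MiB objects)` relates the two numbers: the object size is 2^order bytes. A quick check:

```shell
# object size in bytes = 2^order; order 22 gives 4 MiB
order=22
object_bytes=$((1 << order))
echo "$((object_bytes / 1024 / 1024)) MiB"   # prints "4 MiB"
```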
5.4.7 Disabling image features
5.4.7.1 Command format for disabling features
ceph@ceph-deploy:~/ceph-cluster$ rbd help feature disable
usage: rbd feature disable [--pool <pool>] [--namespace <namespace>]
[--image <image>]
<image-spec> <features> [<features> ...]
Disable the specified image feature.
Positional arguments
<image-spec> image specification
(example: [<pool-name>/[<namespace>/]]<image-name>)
<features> image features
[exclusive-lock, object-map, journaling]
Optional arguments
-p [ --pool ] arg pool name
--namespace arg namespace name
--image arg image name
5.4.7.2 Disable a specific feature
ceph@ceph-deploy:~/ceph-cluster$ rbd feature disable exclusive-lock -p wgsrbd --image wgs-img1
5.4.7.3 Verify image features
ceph@ceph-deploy:~/ceph-cluster$ rbd --image wgs-img1 -p wgsrbd info
rbd image 'wgs-img1':
size 3 GiB in 768 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: a4ac9cb9fcb03
block_name_prefix: rbd_data.a4ac9cb9fcb03
format: 2
features: layering
op_features:
flags:
create_timestamp: Sun Sep 19 18:02:40 2021
access_timestamp: Sun Sep 19 18:02:40 2021
modify_timestamp: Sun Sep 19 18:02:40 2021
5.5 Configuring clients to use rbd
5.5.1 Command format for mapping an image on a client
ceph@ceph-deploy:~/ceph-cluster$ rbd help map
usage: rbd map [--device-type <device-type>] [--pool <pool>]
[--namespace <namespace>] [--image <image>] [--snap <snap>]
[--read-only] [--exclusive] [--quiesce]
[--quiesce-hook <quiesce-hook>] [--options <options>]
<image-or-snap-spec>
Map an image to a block device.
Positional arguments
<image-or-snap-spec> image or snapshot specification
(example:
[<pool-name>/[<namespace>/]]<image-name>[@<snap-name>
])
Optional arguments
-t [ --device-type ] arg device type [ggate, krbd (default), nbd]
-p [ --pool ] arg pool name
--namespace arg namespace name
--image arg image name
--snap arg snapshot name
--read-only map read-only
--exclusive disable automatic exclusive lock transitions
--quiesce use quiesce hooks
--quiesce-hook arg quiesce hook path
-o [ --options ] arg device specific options
5.5.2 Configure package repositories on the client
5.5.2.1 Ubuntu 20.04
root@ceph-client-ubuntu20.04-01:~# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
OK
root@ceph-client-ubuntu20.04-01:~# echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific $(lsb_release -cs) main" >> /etc/apt/sources.list
root@ceph-client-ubuntu20.04-01:~# apt -y update && apt -y upgrade
5.5.2.2 CentOS 7
[root@ceph-client-centos7-01 ~]# yum -y install epel-release
[root@ceph-client-centos7-01 ~]# yum -y install https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm
5.5.3 Install ceph-common on the client
5.5.3.1 Ubuntu 20.04
root@ceph-client-ubuntu20.04-01:~# apt -y install ceph-common
5.5.3.2 CentOS 7
[root@ceph-client-centos7-01 ~]# yum -y install ceph-common
5.5.4 Mount RBD on the client with the admin account
5.5.4.1 Copy the admin credentials to the client
ceph@ceph-deploy:~/ceph-cluster$ scp ceph.conf ceph.client.admin.keyring root@ceph-client-ubuntu20.04-01:/etc/ceph
5.5.4.2 Map the image on the client
root@ceph-client-ubuntu20.04-01:~# rbd -p wgsrbd map wgs-img1
/dev/rbd0
5.5.4.3 Verify the block device on the client
root@ceph-client-ubuntu20.04-01:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 32.3M 1 loop /snap/snapd/12883
rbd0 251:0 0 3G 0 disk
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1M 0 part
└─vda2 252:2 0 20G 0 part /
vdb 252:16 0 500G 0 disk /data
5.5.4.4 Format the disk on the client
# Format the device only once, on a single client
root@ceph-client-ubuntu20.04-01:~# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=8, agsize=98304 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=786432, imaxpct=25
= sunit=1024 swidth=1024 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
5.5.4.5 Mount the rbd on the client
root@ceph-client-ubuntu20.04-01:~# mkdir -pv /data/rbd_data
mkdir: created directory '/data/rbd_data'
root@ceph-client-ubuntu20.04-01:~# mount /dev/rbd0 /data/rbd_data/
5.5.4.6 Write data from the client
root@ceph-client-ubuntu20.04-01:~# cd /data/rbd_data/
root@ceph-client-ubuntu20.04-01:/data/rbd_data# dd if=/dev/zero of=/data/rbd_data/client-ubuntu20.04 bs=1MB count=10
10+0 records in
10+0 records out
10000000 bytes (10 MB, 9.5 MiB) copied, 0.00674456 s, 1.5 GB/s
5.5.4.7 Verify the rbd data
root@ceph-client-ubuntu20.04-01:/data/rbd_data# ls -lh
total 9.6M
-rw-r--r-- 1 root root 9.6M Sep 23 17:55 client-ubuntu20.04
5.5.4.8 Check pool space usage
ceph@ceph-deploy:~/ceph-cluster$ ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 180 GiB 179 GiB 1.1 GiB 1.1 GiB 0.62
TOTAL 180 GiB 179 GiB 1.1 GiB 1.1 GiB 0.62
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
device_health_metrics 1 1 16 KiB 3 48 KiB 0 56 GiB
wgsrbd 6 64 20 MiB 18 60 MiB 0.03 56 GiB
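In the POOLS table, USED is roughly STORED multiplied by the replication factor (a sketch assuming the pool uses the default replicated size of 3, which matches the output above):

```shell
stored_mib=20   # STORED column for pool wgsrbd
replicas=3      # default replicated pool size
echo "$((stored_mib * replicas)) MiB"   # prints "60 MiB", the USED column
```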
5.5.4.9 Check image status
ceph@ceph-deploy:~/ceph-cluster$ rbd ls -p wgsrbd -l
NAME SIZE PARENT FMT PROT LOCK
wgs-img1 3 GiB 2
wgs-img2 3 GiB 2
5.5.4.10 View mappings
root@ceph-client-ubuntu20.04-01:~# rbd showmapped
id pool namespace image snap device
0 wgsrbd wgs-img1 - /dev/rbd0
5.5.5 Mount RBD on the client with a regular account
5.5.5.1 Create a regular account and grant permissions
ceph@ceph-deploy:~/ceph-cluster$ ceph auth add client.wgs mon 'allow r' osd 'allow rwx pool=wgsrbd'
added key for client.wgs
5.5.5.2 Verify user information
ceph@ceph-deploy:~/ceph-cluster$ ceph auth get client.wgs
[client.wgs]
key = AQDkVkxhcLoxIxAAtUtgNQ5mfcyIeMl8Dnhy8w==
caps mon = "allow r"
caps osd = "allow rwx pool=wgsrbd"
exported keyring for client.wgs
5.5.5.3 Create the user's keyring file
ceph@ceph-deploy:~/ceph-cluster$ ceph-authtool --create-keyring ceph.client.wgs.keyring
creating ceph.client.wgs.keyring
5.5.5.4 Export the user's keyring
ceph@ceph-deploy:~/ceph-cluster$ ceph auth get client.wgs -o ceph.client.wgs.keyring
exported keyring for client.wgs
5.5.5.5 Verify the keyring file
ceph@ceph-deploy:~/ceph-cluster$ cat ceph.client.wgs.keyring
[client.wgs]
key = AQDkVkxhcLoxIxAAtUtgNQ5mfcyIeMl8Dnhy8w==
caps mon = "allow r"
caps osd = "allow rwx pool=wgsrbd"
5.5.5.6 Copy the regular user's credentials to the client
ceph@ceph-deploy:~/ceph-cluster$ scp ceph.conf ceph.client.wgs.keyring root@ceph-client-centos7-01:/etc/ceph
5.5.5.7 Verify permissions on the client
[root@ceph-client-centos7-01 ~]# ceph --id wgs -s
cluster:
id: 6e521054-1532-4bc8-9971-7f8ae93e8430
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 72m)
mgr: ceph-mgr-01(active, since 22h), standbys: ceph-mgr-02
osd: 9 osds: 9 up (since 22h), 9 in (since 22h)
data:
pools: 2 pools, 65 pgs
objects: 21 objects, 24 MiB
usage: 1.1 GiB used, 179 GiB / 180 GiB avail
pgs: 65 active+clean
5.5.5.8 Map the rbd on the client
[root@ceph-client-centos7-01 ~]# rbd --id wgs -p wgsrbd map wgs-img2
/dev/rbd0
5.5.5.9 Verify the rbd on the client
[root@ceph-client-centos7-01 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
rbd0 251:0 0 3G 0 disk
vdb 253:16 0 200G 0 disk /data
vda 253:0 0 20G 0 disk
└─vda1 253:1 0 20G 0 part /
5.5.5.10 Format the rbd on the client
# Format the device only once, on a single client
[root@ceph-client-centos7-01 ~]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=8, agsize=98304 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=786432, imaxpct=25
= sunit=1024 swidth=1024 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
5.5.5.11 Mount the rbd on the client
[root@ceph-client-centos7-01 ~]# mkdir -pv /data/rbd_data
mkdir: created directory '/data/rbd_data'
[root@ceph-client-centos7-01 ~]# mount /dev/rbd0 /data/rbd_data/
5.5.5.12 Write data from the client
[root@ceph-client-centos7-01 ~]# cd /data/rbd_data/
[root@ceph-client-centos7-01 rbd_data]# dd if=/dev/zero of=/data/rbd_data/client-centos7 bs=1MB count=10
10+0 records in
10+0 records out
10000000 bytes (10 MB, 9.5 MiB) copied, 0.00674456 s, 1.5 GB/s
5.5.5.13 Verify the data on the client
[root@ceph-client-centos7-01 rbd_data]# ls -l
total 9768
-rw-r--r-- 1 root root 10000000 Sep 23 20:51 client-centos7
5.5.5.14 Check pool space usage
ceph@ceph-deploy:~$ cd -
/var/lib/ceph/ceph-cluster
ceph@ceph-deploy:~/ceph-cluster$ ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 180 GiB 179 GiB 1.2 GiB 1.2 GiB 0.69
TOTAL 180 GiB 179 GiB 1.2 GiB 1.2 GiB 0.69
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
device_health_metrics 1 1 16 KiB 3 48 KiB 0 56 GiB
wgsrbd 6 64 40 MiB 35 120 MiB 0.07 56 GiB
5.5.5.15 Verify image status
ceph@ceph-deploy:~/ceph-cluster$ rbd ls -p wgsrbd -l
NAME SIZE PARENT FMT PROT LOCK
wgs-img1 3 GiB 2
wgs-img2 3 GiB 2
5.5.5.16 View mappings
[root@ceph-client-centos7-01 ~]# rbd showmapped
id pool namespace image snap device
0 wgsrbd wgs-img2 - /dev/rbd0
5.6 Verify the ceph kernel modules
5.6.1 CentOS 7
[root@ceph-client-centos7-01 rbd_data]# lsmod |grep ceph
libceph 323584 1 rbd
dns_resolver 16384 1 libceph
libcrc32c 16384 2 xfs,libceph
[root@ceph-client-centos7-01 rbd_data]# modinfo libceph
filename: /lib/modules/4.19.0-1.el7.ucloud.x86_64/kernel/net/ceph/libceph.ko
license: GPL
description: Ceph core library
author: Patience Warnick <patience@newdream.net>
author: Yehuda Sadeh <yehuda@hq.newdream.net>
author: Sage Weil <sage@newdream.net>
srcversion: 0448A6BE71DB5EC276F6CCA
depends: libcrc32c,dns_resolver
retpoline: Y
intree: Y
name: libceph
vermagic: 4.19.0-1.el7.ucloud.x86_64 SMP mod_unload modversions
5.6.2 Ubuntu
root@ceph-client-ubuntu20.04-01:~# lsmod |grep ceph
libceph 327680 1 rbd
libcrc32c 16384 6 nf_conntrack,nf_nat,btrfs,xfs,raid456,libceph
root@ceph-client-ubuntu20.04-01:~# modinfo libceph
filename: /lib/modules/5.4.0-48-generic/kernel/net/ceph/libceph.ko
license: GPL
description: Ceph core library
author: Patience Warnick <patience@newdream.net>
author: Yehuda Sadeh <yehuda@hq.newdream.net>
author: Sage Weil <sage@newdream.net>
srcversion: A98DFA58A074ADA2D5F3483
depends: libcrc32c
retpoline: Y
intree: Y
name: libceph
vermagic: 5.4.0-48-generic SMP mod_unload
sig_id: PKCS#7
signer: Build time autogenerated kernel key
sig_key: 69:0F:B2:8C:24:82:6C:28:AB:28:F7:D2:E5:B8:D0:0B:2C:EF:1F:87
sig_hashalgo: sha512
signature: 14:D1:CA:51:B6:BC:6D:2C:BC:27:88:C3:8B:3C:70:6A:9C:AA:65:9C:
C7:07:E8:F1:2F:6D:5B:82:92:F5:DE:7B:68:3B:03:8C:B3:5C:1A:82:
09:7B:48:D1:1E:1C:89:D1:BE:C1:D3:B3:0F:3F:9F:4C:CF:8C:47:9A:
D1:C3:3B:BE:DD:DF:B7:6C:B5:85:8D:D4:6C:F5:03:0D:5E:73:D9:80:
16:AE:15:E1:64:E4:05:F6:B8:55:A8:AA:FA:A1:FB:46:A8:5D:13:84:
8A:35:1B:05:56:0A:19:46:83:72:D4:54:09:D4:04:1D:04:1B:7A:78:
92:C5:CB:0B:54:67:D3:3C:54:98:48:10:F9:F5:8B:14:A4:D7:20:B8:
F5:8D:CE:16:45:D7:4F:70:94:DE:0F:42:24:A0:32:AE:E2:80:7C:2A:
D8:FD:D5:9D:07:AC:D7:F5:B9:76:51:60:57:FD:18:8C:31:9F:3E:41:
2F:74:92:D8:E5:25:9F:2F:9B:C0:05:8F:59:F2:6F:9E:84:41:CF:AC:
92:A6:CA:50:AC:2A:34:58:CF:AB:58:A4:52:F0:F5:F5:F6:EA:63:B4:
92:C5:E7:94:B0:6A:68:00:6A:11:22:74:94:E7:49:1D:84:43:24:06:
26:EA:B3:70:A2:7F:1C:49:5A:F3:31:4A:26:46:4F:8A:32:3B:B3:EE:
CA:79:B9:DE:90:36:5F:3A:D8:99:11:95:36:1A:6D:8D:DE:A2:40:DA:
14:E0:5F:B1:0A:48:7F:29:B0:5B:94:97:70:DD:7B:CF:CB:C8:0A:14:
63:68:F6:77:D2:AE:55:00:9B:17:A1:A3:AF:5B:8B:3F:D6:54:7E:99:
47:7C:A5:40:2B:F1:7A:A3:68:38:86:E3:D6:7E:74:E3:5A:EA:F5:60:
79:BB:BE:15:1A:B0:3B:65:F9:87:09:51:B4:44:A0:84:94:1E:0E:B5:
BD:D9:62:AA:E4:AA:AA:31:39:3F:66:2F:EA:20:4F:A4:CF:18:E3:02:
4E:DC:48:3E:0A:3C:7B:92:5C:85:F9:0F:8D:DA:2C:12:EE:78:4D:D3:
09:51:C3:F3:60:B0:62:BD:51:2D:F2:68:64:51:BA:BA:98:56:6F:5E:
C1:06:C1:17:7E:56:84:83:08:46:A2:C8:42:76:2F:4C:CB:C1:B7:67:
45:83:A9:36:44:00:B8:09:13:00:04:C8:1D:DD:10:4B:FB:DF:93:D5:
64:EA:27:6D:50:F1:E8:F8:37:75:C0:4E:D0:D8:F7:06:F1:E5:D1:71:
98:FA:57:27:FC:EE:86:87:35:3B:5A:8C:8D:F8:31:51:F3:7A:F1:50:
0A:23:50:90:52:28:04:EE:D7:43:6A:E1
5.7 Resizing rbd images
5.7.1 Command format for resizing
ceph@ceph-deploy:~/ceph-cluster$ rbd help resize
usage: rbd resize [--pool <pool>] [--namespace <namespace>]
[--image <image>] --size <size> [--allow-shrink]
[--no-progress]
<image-spec>
Resize (expand or shrink) image.
Positional arguments
<image-spec> image specification
(example: [<pool-name>/[<namespace>/]]<image-name>)
Optional arguments
-p [ --pool ] arg pool name
--namespace arg namespace name
--image arg image name
-s [ --size ] arg image size (in M/G/T) [default: M]
--allow-shrink permit shrinking
--no-progress disable progress output
5.7.2 View current image size
ceph@ceph-deploy:~/ceph-cluster$ rbd ls -p wgsrbd -l
NAME SIZE PARENT FMT PROT LOCK
wgs-img1 3 GiB 2
wgs-img2 3 GiB 2
5.7.3 Expand the rbd image
ceph@ceph-deploy:~/ceph-cluster$ rbd resize -p wgsrbd --image wgs-img2 --size 5G
Resizing image: 100% complete...done.
5.7.4 Verify the new image size
ceph@ceph-deploy:~/ceph-cluster$ rbd ls -p wgsrbd -l
NAME SIZE PARENT FMT PROT LOCK
wgs-img1 3 GiB 2
wgs-img2 5 GiB 2
5.7.5 Verify the image size on the client
[root@ceph-client-centos7-01 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
rbd0 251:0 0 5G 0 disk /data/rbd_data
vdb 253:16 0 200G 0 disk /data
vda 253:0 0 20G 0 disk
└─vda1 253:1 0 20G 0 part /
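lsblk now shows the block device at 5G, but an already-created XFS filesystem does not grow by itself. A sketch of growing it to fill the resized device (xfs_growfs ships with xfsprogs and operates on the mount point; the command is commented out because it must run as root on the client where the rbd is mounted):

```shell
# Grow the mounted XFS filesystem to the new device size:
# xfs_growfs /data/rbd_data
# Sanity check: 5 GiB is this many 4 KiB filesystem blocks:
echo $((5 * 1024 * 1024 / 4))   # prints 1310720
```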
5.8 Map and mount the rbd image automatically at boot
[root@ceph-client-centos7-01 ~]# cat /etc/rc.d/rc.local
rbd --user wgs -p wgsrbd map wgs-img2
mount /dev/rbd0 /data/rbd_data
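Besides rc.local, ceph-common ships an `rbdmap` systemd service that maps entries listed in /etc/ceph/rbdmap at boot. A sketch assuming the client.wgs user and keyring created above (the privileged commands are commented out and shown for reference):

```shell
# One entry per line in /etc/ceph/rbdmap: pool/image followed by map options.
entry='wgsrbd/wgs-img2 id=wgs,keyring=/etc/ceph/ceph.client.wgs.keyring'
echo "$entry"
# Append the entry and enable the service (as root on the client):
# echo "$entry" >> /etc/ceph/rbdmap
# systemctl enable rbdmap
# Mount via /etc/fstab with noauto so rbdmap can mount it after mapping, e.g.:
# /dev/rbd/wgsrbd/wgs-img2  /data/rbd_data  xfs  noauto  0 0
```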
5.9 Unmap the rbd image
[root@ceph-client-centos7-01 ~]# umount /data/rbd_data/
[root@ceph-client-centos7-01 ~]# rbd --user wgs -p wgsrbd unmap wgs-img2
5.10 Deleting rbd images
5.10.1 Command format for deleting
ceph@ceph-deploy:~/ceph-cluster$ rbd help rm
usage: rbd rm [--pool <pool>] [--namespace <namespace>] [--image <image>]
[--no-progress]
<image-spec>
Delete an image.
Positional arguments
<image-spec> image specification
(example: [<pool-name>/[<namespace>/]]<image-name>)
Optional arguments
-p [ --pool ] arg pool name
--namespace arg namespace name
--image arg image name
--no-progress disable progress output
5.10.2 Delete an rbd image
# The data is deleted permanently and cannot be recovered
ceph@ceph-deploy:~/ceph-cluster$ rbd rm -p wgsrbd --image wgs-img2
Removing image: 100% complete...done.
5.10.3 Verify the deletion
ceph@ceph-deploy:~/ceph-cluster$ rbd ls -p wgsrbd -l
NAME SIZE PARENT FMT PROT LOCK
wgs-img1 3 GiB 2
5.11 The rbd image trash mechanism
5.11.1 Trash command format
ceph@ceph-deploy:~/ceph-cluster$ rbd help trash
status Show the status of this image.
trash list (trash ls) List trash images.
trash move (trash mv) Move an image to the trash.
trash purge Remove all expired images from trash.
trash purge schedule add Add trash purge schedule.
trash purge schedule list (... ls)
List trash purge schedule.
trash purge schedule remove (... rm)
Remove trash purge schedule.
trash purge schedule status Show trash purge schedule status.
trash remove (trash rm) Remove an image from trash.
trash restore Restore an image from trash.
5.11.2 Check image status
ceph@ceph-deploy:~/ceph-cluster$ rbd status -p wgsrbd --image wgs-img1
Watchers:
watcher=192.168.1.248:0/3309072908 client.934198 cookie=18446462598732840961
5.11.3 Move an image to the trash
ceph@ceph-deploy:~/ceph-cluster$ rbd trash move -p wgsrbd --image wgs-img1
5.11.4 List trashed images
ceph@ceph-deploy:~/ceph-cluster$ rbd trash list -p wgsrbd
e68a48c1b2ee4 wgs-img1
5.11.5 Confirm image status
ceph@ceph-deploy:~/ceph-cluster$ rbd status -p wgsrbd --image wgs-img1
rbd: error opening image wgs-img1: (2) No such file or directory
ceph@ceph-deploy:~/ceph-cluster$ rbd ls -p wgsrbd -l
5.11.6 Client-side state of a mapped image after it is moved to the trash
5.11.6.1 The client mount remains normal
root@ceph-client-ubuntu20.04-01:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 32.3M 1 loop /snap/snapd/12883
rbd0 251:0 0 8G 0 disk /data/rbd_data
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1M 0 part
└─vda2 252:2 0 20G 0 part /
vdb 252:16 0 500G 0 disk /data
5.11.6.2 Client writes still succeed
root@ceph-client-ubuntu20.04-01:~# cd /data/rbd_data/
root@ceph-client-ubuntu20.04-01:/data/rbd_data# cp client-ubuntu20.04 client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:/data/rbd_data# ls -l
total 19536
-rw-r--r-- 1 root root 10000000 Sep 23 17:55 client-ubuntu20.04
-rw-r--r-- 1 root root 10000000 Sep 23 21:44 client-ubuntu20.04-01
5.11.7 Restore an image from the trash
ceph@ceph-deploy:~/ceph-cluster$ rbd trash restore -p wgsrbd --image wgs-img1 --image-id e68a48c1b2ee4
5.11.8 Verify the image
ceph@ceph-deploy:~/ceph-cluster$ rbd ls -p wgsrbd -l
NAME SIZE PARENT FMT PROT LOCK
wgs-img1 8 GiB 2
ceph@ceph-deploy:~/ceph-cluster$ rbd status -p wgsrbd --image wgs-img1
Watchers:
watcher=192.168.1.248:0/3309072908 client.934198 cookie=18446462598732840961
5.11.9 Delete an image from the trash
5.11.9.1 Unmap the rbd on the client
root@ceph-client-ubuntu20.04-01:~# umount /data/rbd_data/
root@ceph-client-ubuntu20.04-01:~# rbd -p wgsrbd unmap wgs-img1
5.11.9.2 Delete the image from the trash
ceph@ceph-deploy:~/ceph-cluster$ rbd trash remove -p wgsrbd --image-id e68a48c1b2ee4
Removing image: 100% complete...done.
5.11.9.3 Confirm the trash is empty
ceph@ceph-deploy:~/ceph-cluster$ rbd trash list -p wgsrbd
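The trash steps above (move, list, restore, remove) can be strung together as one sketch. The pool (`wgsrbd`) and image (`wgs-img1`) names are the ones used in this document; the `--expires-at` deferment window is an extra option not shown in the transcripts above, so treat its use here as illustrative. The script no-ops when no `rbd` CLI is present.

```shell
#!/bin/sh
# Sketch of the RBD trash lifecycle, assuming the wgsrbd/wgs-img1 image
# from this document. Falls through harmlessly when no rbd CLI/cluster
# is available.
if command -v rbd >/dev/null 2>&1; then
    POOL=wgsrbd
    IMG=wgs-img1

    # Move the image to the trash; --expires-at sets a deferment window
    # during which "rbd trash remove" refuses to delete it without --force.
    rbd trash move "$POOL/$IMG" --expires-at "next week"

    # Look up the id the trash assigned (first column of the listing).
    IMG_ID=$(rbd trash list -p "$POOL" | awk -v img="$IMG" '$2 == img {print $1}')

    # Either restore it ...
    rbd trash restore -p "$POOL" --image "$IMG" --image-id "$IMG_ID"
    # ... or, once unmapped everywhere, delete it for good:
    # rbd trash remove -p "$POOL" --image-id "$IMG_ID"
else
    echo "rbd CLI not found; skipping"
fi
```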
5.12 RBD image snapshots
- If I/O is still in flight when a snapshot is taken, the snapshot may not capture an accurate or up-to-date view of the image, and it may have to be cloned into a new, mountable image before it is usable. Stop (quiesce) I/O before taking a snapshot.
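One way to quiesce I/O before the snapshot (a sketch, not the only method): freeze the mounted filesystem with `fsfreeze`, take the snapshot, then unfreeze. This assumes the `/data/rbd_data` mountpoint and `wgsrbd/wgs-img1` image used in this document, and runs on the client that has the image mapped.

```shell
#!/bin/sh
# Quiesce-then-snapshot sketch; skips when rbd or fsfreeze is unavailable.
if command -v rbd >/dev/null 2>&1 && command -v fsfreeze >/dev/null 2>&1; then
    fsfreeze --freeze /data/rbd_data       # block new writes, flush dirty pages
    rbd snap create wgsrbd/wgs-img1@consistent-snap
    fsfreeze --unfreeze /data/rbd_data     # resume I/O
else
    echo "rbd or fsfreeze not available; skipping"
fi
```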
5.12.1 Current data on the client
root@ceph-client-ubuntu20.04-01:/data/rbd_data# ls -l
total 9768
-rw-r--r-- 1 root root 10000000 Sep 24 16:34 client-ubuntu20.04
5.12.2 Create a snapshot
5.12.2.1 Command usage for creating a snapshot
ceph@ceph-deploy:~/ceph-cluster$ rbd help snap create
usage: rbd snap create [--pool <pool>] [--namespace <namespace>]
[--image <image>] [--snap <snap>] [--skip-quiesce]
[--ignore-quiesce-error] [--no-progress]
<snap-spec>
Create a snapshot.
Positional arguments
<snap-spec> snapshot specification
(example:
[<pool-name>/[<namespace>/]]<image-name>@<snap-name>)
Optional arguments
-p [ --pool ] arg pool name
--namespace arg namespace name
--image arg image name
--snap arg snapshot name
--skip-quiesce do not run quiesce hooks
--ignore-quiesce-error ignore quiesce hook error
--no-progress disable progress output
5.12.2.2 Create the snapshot
ceph@ceph-deploy:~/ceph-cluster$ rbd snap create -p wgsrbd --image wgs-img1 --snap wgs-img1-snap-20210914
Creating snap: 100% complete...done.
5.12.3 Verify the snapshot
5.12.3.1 Command usage for listing snapshots
ceph@ceph-deploy:~/ceph-cluster$ rbd help snap list
usage: rbd snap list [--pool <pool>] [--namespace <namespace>]
[--image <image>] [--image-id <image-id>]
[--format <format>] [--pretty-format] [--all]
<image-spec>
Dump list of image snapshots.
Positional arguments
<image-spec> image specification
(example: [<pool-name>/[<namespace>/]]<image-name>)
Optional arguments
-p [ --pool ] arg pool name
--namespace arg namespace name
--image arg image name
--image-id arg image id
--format arg output format (plain, json, or xml) [default: plain]
--pretty-format pretty formatting (json and xml)
-a [ --all ] list snapshots from all namespaces
5.12.3.2 View snapshot details
ceph@ceph-deploy:~/ceph-cluster$ rbd snap list -p wgsrbd --image wgs-img1
SNAPID NAME SIZE PROTECTED TIMESTAMP
4 wgs-img1-snap-20210914 3 GiB Fri Sep 24 16:36:29 2021
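For scripting, the plain-text table above is awkward to parse; `--format json` gives a machine-readable listing instead. A small sketch, assuming the `wgsrbd/wgs-img1` image from this document and that the `jq` utility is installed:

```shell
#!/bin/sh
# Print one snapshot name per line from the JSON listing; skips when
# rbd or jq is unavailable.
if command -v rbd >/dev/null 2>&1 && command -v jq >/dev/null 2>&1; then
    rbd snap list wgsrbd/wgs-img1 --format json | jq -r '.[].name'
else
    echo "rbd or jq not available; skipping"
fi
```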
5.12.4 Delete the client data and unmount the RBD device
5.12.4.1 Delete the data on the client
root@ceph-client-ubuntu20.04-01:/data/rbd_data# rm -rf client-ubuntu20.04
root@ceph-client-ubuntu20.04-01:/data/rbd_data# ls -l
total 0
5.12.4.2 Unmount on the client
root@ceph-client-ubuntu20.04-01:~# umount /data/rbd_data/
5.12.4.3 Unmap the image on the client
root@ceph-client-ubuntu20.04-01:~# rbd --id wgs -p wgsrbd unmap wgs-img1
5.12.5 Roll back to the snapshot
5.12.5.1 Command usage for snapshot rollback
ceph@ceph-deploy:~/ceph-cluster$ rbd help snap rollback
usage: rbd snap rollback [--pool <pool>] [--namespace <namespace>]
[--image <image>] [--snap <snap>] [--no-progress]
<snap-spec>
Rollback image to snapshot.
Positional arguments
<snap-spec> snapshot specification
(example:
[<pool-name>/[<namespace>/]]<image-name>@<snap-name>)
Optional arguments
-p [ --pool ] arg pool name
--namespace arg namespace name
--image arg image name
--snap arg snapshot name
--no-progress disable progress output
5.12.5.2 Roll back
ceph@ceph-deploy:~/ceph-cluster$ rbd snap rollback -p wgsrbd --image wgs-img1 --snap wgs-img1-snap-20210914
Rolling back to snapshot: 100% complete...done.
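The full rollback sequence (5.12.4 through 5.12.6) as one sketch: a rollback overwrites the image with the snapshot contents, so the image should be unmounted and unmapped before rolling back, then remapped and remounted afterwards. User (`wgs`), pool, image, and mountpoint are the ones from this document; in practice the client and admin steps run on different hosts.

```shell
#!/bin/sh
# Unmap -> rollback -> remap sketch; skips when no rbd CLI is available.
if command -v rbd >/dev/null 2>&1; then
    # On the client: stop using the image first.
    umount /data/rbd_data
    rbd --id wgs -p wgsrbd unmap wgs-img1

    # On an admin node: roll the image back to the snapshot.
    rbd snap rollback wgsrbd/wgs-img1@wgs-img1-snap-20210914

    # On the client: remap and remount to see the restored data.
    rbd --id wgs -p wgsrbd map wgs-img1
    mount /dev/rbd0 /data/rbd_data
else
    echo "rbd CLI not found; skipping"
fi
```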
5.12.6 Verify the data on the client
5.12.6.1 Map the image on the client
root@ceph-client-ubuntu20.04-01:~# rbd --id wgs -p wgsrbd map wgs-img1
/dev/rbd0
5.12.6.2 Mount on the client
root@ceph-client-ubuntu20.04-01:~# mount /dev/rbd0 /data/rbd_data/
5.12.6.3 Verify the restored data
root@ceph-client-ubuntu20.04-01:~# ls -l /data/rbd_data/
total 9768
-rw-r--r-- 1 root root 10000000 Sep 24 16:34 client-ubuntu20.04
5.12.7 Delete a snapshot
5.12.7.1 Command usage for deleting a snapshot
ceph@ceph-deploy:~/ceph-cluster$ rbd help snap remove
usage: rbd snap remove [--pool <pool>] [--namespace <namespace>]
[--image <image>] [--snap <snap>]
[--image-id <image-id>] [--snap-id <snap-id>]
[--no-progress] [--force]
<snap-spec>
Delete a snapshot.
Positional arguments
<snap-spec> snapshot specification
(example:
[<pool-name>/[<namespace>/]]<image-name>@<snap-name>)
Optional arguments
-p [ --pool ] arg pool name
--namespace arg namespace name
--image arg image name
--snap arg snapshot name
--image-id arg image id
--snap-id arg snapshot id
--no-progress disable progress output
--force flatten children and unprotect snapshot if needed.
5.12.7.2 Delete the snapshot
ceph@ceph-deploy:~/ceph-cluster$ rbd snap remove -p wgsrbd --image wgs-img1 --snap wgs-img1-snap-20210914
Removing snap: 100% complete...done.
5.12.7.3 Verify the snapshot was deleted
ceph@ceph-deploy:~/ceph-cluster$ rbd snap list -p wgsrbd --image wgs-img1
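`snap remove` deletes one snapshot at a time; to drop every snapshot of an image at once, `rbd` also provides `snap purge` (note it will not remove protected snapshots). A sketch against the image used in this document:

```shell
#!/bin/sh
# Delete all (unprotected) snapshots of an image; skips without the rbd CLI.
if command -v rbd >/dev/null 2>&1; then
    rbd snap purge wgsrbd/wgs-img1
    rbd snap list wgsrbd/wgs-img1   # expect empty output afterwards
else
    echo "rbd CLI not found; skipping"
fi
```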
5.12.8 Set a snapshot count limit
5.12.8.1 Command usage for setting the limit
ceph@ceph-deploy:~/ceph-cluster$ rbd help snap limit set
usage: rbd snap limit set [--pool <pool>] [--namespace <namespace>]
[--image <image>] [--limit <limit>]
<image-spec>
Limit the number of snapshots.
Positional arguments
<image-spec> image specification
(example: [<pool-name>/[<namespace>/]]<image-name>)
Optional arguments
-p [ --pool ] arg pool name
--namespace arg namespace name
--image arg image name
--limit arg maximum allowed snapshot count
5.12.8.2 Set the snapshot count limit
ceph@ceph-deploy:~/ceph-cluster$ rbd snap limit set -p wgsrbd --image wgs-img1 --limit 20
5.12.9 Clear the snapshot count limit
5.12.9.1 Command usage for clearing the limit
ceph@ceph-deploy:~/ceph-cluster$ rbd help snap limit clear
usage: rbd snap limit clear [--pool <pool>] [--namespace <namespace>]
[--image <image>]
<image-spec>
Remove snapshot limit.
Positional arguments
<image-spec> image specification
(example: [<pool-name>/[<namespace>/]]<image-name>)
Optional arguments
-p [ --pool ] arg pool name
--namespace arg namespace name
--image arg image name
5.12.9.2 Clear the snapshot count limit
ceph@ceph-deploy:~/ceph-cluster$ rbd snap limit clear -p wgsrbd --image wgs-img1