This article, from 丸趣 TV, walks through example operations on Ceph Block Devices. Readers interested in RBD may find it a useful reference.
Using a Ceph block device involves three steps:
1. Create a Block Device image in a pool on the Ceph cluster.
2. The Ceph client maps an RBD device to that Block Device image.
3. The client's user space can then mount the RBD device.
Step 1: Create a Block Device Image
First, create a new pool. If you prefer not to create one, you can use the default pool, rbd.
Command: ceph osd pool create creating_pool_name pg_num
Parameters: creating_pool_name: name of the pool to create
pg_num: number of Placement Groups
# ceph osd pool create testpool 512
pool testpool created
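The pg_num of 512 above is just an example value. A common rule of thumb is (number of OSDs × 100) / replica count, rounded up to the next power of two. A minimal sketch of that heuristic, assuming a hypothetical 12-OSD, 3-replica cluster (the counts below are made-up inputs, not read from a live cluster):

```shell
# Heuristic: pg_num ~= (osds * 100) / replicas, rounded up to a power of two.
# osds=12 and replicas=3 are example values, not queried from a cluster.
osds=12
replicas=3
target=$(( (osds * 100 + replicas - 1) / replicas ))   # ceiling division
pg_num=1
while [ "$pg_num" -lt "$target" ]; do
    pg_num=$(( pg_num * 2 ))
done
echo "suggested pg_num: $pg_num"
# With 12 OSDs and 3 replicas the target is 400, so this prints 512.
```

With those inputs the suggestion matches the 512 used in the example above; rerun the arithmetic for your own cluster size.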
Next, create a Block Device image in the Ceph cluster. (For the full rbd command reference, run man rbd.)
Command: rbd create --size {MegaBytes} {pool-name}/{image-name}
For example, create an image named "bar" in the pool "testpool" with a size of 1024 MB:
# rbd create --size 1024 testpool/bar
List the Block Device images:
# rbd ls testpool
bar
And view the details of a single Block Device image:
# rbd info testpool/bar
rbd image 'bar':
size 1024 MB in 256 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.5e3b248a65f6
format: 2
features: layering
flags:
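The numbers in this output are consistent with each other: order 22 means each backing object is 2^22 bytes = 4 MiB, so the 1024 MB image is striped over 1024 / 4 = 256 objects. A quick arithmetic check:

```shell
# Verify the figures reported by `rbd info testpool/bar`:
#   order 22       -> object size = 2^22 bytes = 4 MiB
#   1024 MiB image -> 1024 / 4 = 256 objects
order=22
object_bytes=$(( 1 << order ))
object_mib=$(( object_bytes / 1024 / 1024 ))
image_mib=1024
objects=$(( image_mib / object_mib ))
echo "object size: ${object_mib} MiB, objects: ${objects}"
# Prints: object size: 4 MiB, objects: 256
```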
Step 2: Map the RBD device on the Ceph client to the Block Device image in the cluster
Command: sudo rbd map rbd/myimage --id admin --keyring /path/to/keyring
For example:
# sudo rbd map testpool/bar --id admin --keyring /etc/ceph/ceph.client.admin.keyring
/dev/rbd0
View the Block Devices that are already mapped:
# rbd showmapped
id pool image snap device
0 testpool bar – /dev/rbd0
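In scripts it can be handy to recover the device path for a given pool/image pair from that table. A small awk sketch of the idea; to stay runnable without a cluster, it reads the sample output captured above from a string rather than invoking rbd:

```shell
# Extract the device for pool "testpool", image "bar". On a live system
# you would pipe `rbd showmapped` in; here the sample output from above
# is embedded so the sketch runs without a cluster.
showmapped_output='id pool image snap device
0 testpool bar - /dev/rbd0'
device=$(printf '%s\n' "$showmapped_output" |
    awk '$2 == "testpool" && $3 == "bar" { print $5 }')
echo "mapped device: $device"
# Prints: mapped device: /dev/rbd0
```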
Step 3: Mount the RBD device from the Ceph client's user space
First, create a filesystem on the block device on the client node.
# sudo mkfs.ext4 -m0 /dev/rbd/testpool/bar
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
65536 inodes, 262144 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
Next, mount the filesystem:
# sudo mkdir /mnt/ceph-block-device
# sudo mount /dev/rbd/testpool/bar /mnt/ceph-block-device
Check the mount information:
# mount
…
/dev/rbd0 on /mnt/ceph-block-device type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
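Note that the map and mount above do not survive a reboot. Recent Ceph packages ship an rbdmap service that re-maps the images listed in /etc/ceph/rbdmap at boot. A sketch of the two config entries this example would need; the fstab options shown are an assumption to verify against your distribution's rbdmap documentation:

```
# /etc/ceph/rbdmap: one image per line, pool/image followed by map options
testpool/bar id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# /etc/fstab: _netdev delays the mount until the network (and rbdmap) are up
/dev/rbd/testpool/bar /mnt/ceph-block-device ext4 defaults,noatime,_netdev 0 0
```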
* Other Ceph Block Device operations
To create a new rbd image that is 100 GB:
rbd create mypool/myimage --size 102400
To use a non-default object size (8 MB):
rbd create mypool/myimage --size 102400
--object-size 8M
To delete an rbd image (be careful!):
rbd rm mypool/myimage
To create a new snapshot:
rbd snap create mypool/myimage@mysnap
To create a copy-on-write clone of a protected snapshot:
rbd clone mypool/myimage@mysnap otherpool/cloneimage
To see which clones of a snapshot exist:
rbd children mypool/myimage@mysnap
To delete a snapshot:
rbd snap rm mypool/myimage@mysnap
To map an image via the kernel with cephx enabled:
rbd map mypool/myimage --id admin --keyfile secretfile
To map an image via the kernel with a cluster name other than the default ceph:
rbd map mypool/myimage --cluster cluster-name
To unmap an image:
rbd unmap /dev/rbd0
To create an image and a clone from it:
rbd import --image-format 2 image mypool/parent
rbd snap create mypool/parent@snap
rbd snap protect mypool/parent@snap
rbd clone mypool/parent@snap otherpool/child
To create an image with a smaller stripe_unit (to better distribute small writes in some workloads):
rbd create mypool/myimage --size 102400
--stripe-unit 65536B --stripe-count 16
To change an image from one image format to another, export it and then import it as the desired image format:
rbd export mypool/myimage@snap /tmp/img
rbd import --image-format 2 /tmp/img mypool/myimage2
To lock an image for exclusive use:
rbd lock add mypool/myimage mylockid
To release a lock:
rbd lock remove mypool/myimage mylockid client.2485
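Several of the commands above take an image spec of the form pool/image@snap. How that spec decomposes can be sketched with plain shell string handling (no cluster needed; the spec string reuses this page's example names):

```shell
# Split a pool/image@snap spec into its parts with parameter expansion.
spec="mypool/myimage@mysnap"
pool=${spec%%/*}     # text before the first "/"  -> mypool
rest=${spec#*/}      # text after the first "/"   -> myimage@mysnap
image=${rest%%@*}    # text before "@"            -> myimage
snap=${rest#*@}      # text after "@"             -> mysnap
# (Handling of specs without a "@snap" part is omitted for brevity.)
echo "pool=$pool image=$image snap=$snap"
```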