ceph-deploy Source Code Analysis: the osd Module
The osd.py module of ceph-deploy manages the OSD daemons; its main job is to create and activate OSDs.
The osd subcommand has the following format:
ceph-deploy osd [-h] {list,create,prepare,activate} ...
list: show information about the OSDs on remote hosts
create: create an OSD (prepare plus activate)
prepare: prepare an OSD by formatting/partitioning the disk
activate: activate a prepared OSD
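The HOST:DISK[:JOURNAL] arguments are parsed by the colon_separated type referenced in the argparse definitions below. A minimal sketch of that parsing, assuming behavior like ceph-deploy's helper (the exact error handling may differ):

import argparse

def colon_separated(s):
    # 'host'              -> (host, None, None)
    # 'host:disk'         -> (host, disk, None)
    # 'host:disk:journal' -> (host, disk, journal)
    host, disk, journal = s, None, None
    if s.count(':') == 2:
        host, disk, journal = s.split(':')
    elif s.count(':') == 1:
        host, disk = s.split(':')
    elif s.count(':') > 2:
        raise argparse.ArgumentTypeError('must be in form HOST:DISK[:JOURNAL]')
    return host, disk, journal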
OSD Management
The make function
its priority is 50 (a sketch of the priority decorator follows below)
the default handler function for the osd subcommand is osd
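For reference, the @priority decorator used below simply tags make with an ordering hint that the ceph-deploy CLI uses when sorting subcommands; a minimal sketch (the real helper lives elsewhere in the ceph-deploy codebase):

def priority(num):
    # attach a `priority` attribute so the CLI can order subcommands
    def add_priority(fn):
        fn.priority = num
        return fn
    return add_priority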
@priority(50)
def make(parser):
    """
    Prepare a data disk on remote host.
    """
    sub_command_help = dedent("""
    Manage OSDs by preparing a data disk on remote host.

    For paths, first prepare and then activate:

        ceph-deploy osd prepare {osd-node-name}:/path/to/osd
        ceph-deploy osd activate {osd-node-name}:/path/to/osd

    For disks or journals the `create` command will do prepare and activate
    for you.
    """)
    parser.formatter_class = argparse.RawDescriptionHelpFormatter
    parser.description = sub_command_help

    osd_parser = parser.add_subparsers(dest='subcommand')
    osd_parser.required = True

    osd_list = osd_parser.add_parser(
        'list',
        help='List OSD info from remote host(s)'
    )
    osd_list.add_argument(
        'disk',
        nargs='+',
        metavar='HOST:DISK[:JOURNAL]',
        type=colon_separated,
        help='remote host to list OSDs from'
    )

    osd_create = osd_parser.add_parser(
        'create',
        help='Create new Ceph OSD daemon by preparing and activating disk'
    )
    osd_create.add_argument(
        '--zap-disk',
        action='store_true',
        help='destroy existing partition table and content for DISK',
    )
    osd_create.add_argument(
        '--fs-type',
        metavar='FS_TYPE',
        choices=['xfs',
                 'btrfs'
                 ],
        default='xfs',
        help='filesystem to use to format DISK (xfs, btrfs)',
    )
    osd_create.add_argument(
        '--dmcrypt',
        action='store_true',
        help='use dm-crypt on DISK',
    )
    osd_create.add_argument(
        '--dmcrypt-key-dir',
        metavar='KEYDIR',
        default='/etc/ceph/dmcrypt-keys',
        help='directory where dm-crypt keys are stored',
    )
    osd_create.add_argument(
        '--bluestore',
        action='store_true', default=None,
        help='bluestore objectstore',
    )
    osd_create.add_argument(
        'disk',
        nargs='+',
        metavar='HOST:DISK[:JOURNAL]',
        type=colon_separated,
        help='host and disk to prepare',
    )

    osd_prepare = osd_parser.add_parser(
        'prepare',
        help='Prepare a disk for use as Ceph OSD by formatting/partitioning disk'
    )
    osd_prepare.add_argument(
        '--zap-disk',
        action='store_true',
        help='destroy existing partition table and content for DISK',
    )
    osd_prepare.add_argument(
        '--fs-type',
        metavar='FS_TYPE',
        choices=['xfs',
                 'btrfs'
                 ],
        default='xfs',
        help='filesystem to use to format DISK (xfs, btrfs)',
    )
    osd_prepare.add_argument(
        '--dmcrypt',
        action='store_true',
        help='use dm-crypt on DISK',
    )
    osd_prepare.add_argument(
        '--dmcrypt-key-dir',
        metavar='KEYDIR',
        default='/etc/ceph/dmcrypt-keys',
        help='directory where dm-crypt keys are stored',
    )
    osd_prepare.add_argument(
        '--bluestore',
        action='store_true', default=None,
        help='bluestore objectstore',
    )
    osd_prepare.add_argument(
        'disk',
        nargs='+',
        metavar='HOST:DISK[:JOURNAL]',
        type=colon_separated,
        help='host and disk to prepare',
    )

    osd_activate = osd_parser.add_parser(
        'activate',
        help='Start (activate) Ceph OSD from disk that was previously prepared'
    )
    osd_activate.add_argument(
        'disk',
        nargs='+',
        metavar='HOST:DISK[:JOURNAL]',
        type=colon_separated,
        help='host and disk to activate',
    )

    parser.set_defaults(
        func=osd,
    )
The osd function dispatches: the subcommands list, create, prepare, and activate map to the functions osd_list, prepare, prepare, and activate respectively.
def osd(args):
    cfg = conf.ceph.load(args)

    if args.subcommand == 'list':
        osd_list(args, cfg)
    elif args.subcommand == 'prepare':
        prepare(args, cfg, activate_prepared_disk=False)
    elif args.subcommand == 'create':
        prepare(args, cfg, activate_prepared_disk=True)
    elif args.subcommand == 'activate':
        activate(args, cfg)
    else:
        LOG.error('subcommand %s not implemented', args.subcommand)
        sys.exit(1)
Listing OSDs
The command format is: ceph-deploy osd list [-h] HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]
The osd_list function:
runs `ceph --cluster=ceph osd tree --format=json` to obtain OSD information
runs `ceph-disk list` to obtain disk and partition information
assembles the OSD list output from the results of those two commands plus the files under the OSD directory (a sketch of the osd_tree helper follows)
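The osd_tree helper called below wraps that `ceph osd tree` invocation and returns the decoded JSON; a simplified sketch (error handling trimmed):

import json

def osd_tree(conn, cluster):
    # run `ceph --cluster=<cluster> osd tree --format=json` on the
    # monitor host and return the decoded JSON (empty dict on failure)
    ceph_executable = system.executable_path(conn, 'ceph')
    out, err, code = remoto.process.check(
        conn,
        [
            ceph_executable,
            '--cluster={cluster}'.format(cluster=cluster),
            'osd', 'tree', '--format=json',
        ]
    )
    try:
        return json.loads(b''.join(out).decode('utf-8'))
    except ValueError:
        return {}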
def osd_list(args, cfg):
    monitors = mon.get_mon_initial_members(args, error_on_empty=True, _cfg=cfg)

    # get the osd tree from a monitor host
    mon_host = monitors[0]
    distro = hosts.get(
        mon_host,
        username=args.username,
        callbacks=[packages.ceph_is_installed]
    )
    # run `ceph --cluster=ceph osd tree --format=json` to fetch OSD info
    tree = osd_tree(distro.conn, args.cluster)
    distro.conn.exit()

    interesting_files = ['active', 'magic', 'whoami', 'journal_uuid']

    for hostname, disk, journal in args.disk:
        distro = hosts.get(hostname, username=args.username)
        remote_module = distro.conn.remote_module
        # list the OSD names under the OSD directory /var/lib/ceph/osd
        osds = distro.conn.remote_module.listdir(constants.osd_path)

        # run `ceph-disk list` to fetch disk and partition info
        ceph_disk_executable = system.executable_path(distro.conn, 'ceph-disk')
        output, err, exit_code = remoto.process.check(
            distro.conn,
            [
                ceph_disk_executable,
                'list',
            ]
        )

        # iterate over the OSDs
        for _osd in osds:
            # OSD path, e.g. /var/lib/ceph/osd/ceph-0
            osd_path = os.path.join(constants.osd_path, _osd)
            # journal path
            journal_path = os.path.join(osd_path, 'journal')
            # the OSD id
            _id = int(_osd.split('-')[-1])  # split on dash, get the id
            osd_name = 'osd.%s' % _id
            metadata = {}
            json_blob = {}

            # piggy back from ceph-disk and get the mount point
            # by matching the `ceph-disk list` output against the osd name
            device = get_osd_mount_point(output, osd_name)
            if device:
                metadata['device'] = device

            # read interesting metadata from files
            # i.e. the active, magic, whoami, journal_uuid files of the OSD
            for f in interesting_files:
                osd_f_path = os.path.join(osd_path, f)
                if remote_module.path_exists(osd_f_path):
                    metadata[f] = remote_module.readline(osd_f_path)

            # do we have a journal path?
            if remote_module.path_exists(journal_path):
                metadata['journal path'] = remote_module.get_realpath(journal_path)

            # is this OSD in osd tree?
            for blob in tree['nodes']:
                if blob.get('id') == _id:  # matches our OSD
                    json_blob = blob

            # print the OSD info
            print_osd(
                distro.conn.logger,
                hostname,
                osd_path,
                json_blob,
                metadata,
            )

        distro.conn.exit()
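get_osd_mount_point, used above, matches the OSD name against `ceph-disk list` output lines such as `/dev/sdb1 ceph data, active, cluster ceph, osd.0, journal /dev/sdb2`; a simplified sketch of that matching (field handling is approximate):

import re

def get_osd_mount_point(output, osd_name):
    # `output` is the list of lines from `ceph-disk list`; return the
    # device of the first line that mentions the OSD name, e.g. `osd.0`
    for line in output:
        line_parts = re.split(r'[,\s]+', line.strip())
        if osd_name in line_parts:
            return line_parts[0]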
Creating and Preparing OSDs
The command format for creating an OSD is: ceph-deploy osd create [-h] [--zap-disk] [--fs-type FS_TYPE] [--dmcrypt] [--dmcrypt-key-dir KEYDIR] [--bluestore] HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]
The command format for preparing an OSD is: ceph-deploy osd prepare [-h] [--zap-disk] [--fs-type FS_TYPE] [--dmcrypt] [--dmcrypt-key-dir KEYDIR] [--bluestore] HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]
The prepare function handles both: with activate_prepared_disk=True it creates the OSD, with False it only prepares it. It:
calls exceeds_max_osds: more than 20 OSDs on a single host triggers a warning (see the sketch after this list)
calls get_bootstrap_osd_key to read ceph.bootstrap-osd.keyring from the current directory
loops over each disk:
writes the configuration to /etc/ceph/ceph.conf
creates and writes /var/lib/ceph/bootstrap-osd/ceph.keyring
calls prepare_disk to prepare the OSD
checks the OSD status and logs any abnormal state as a warning
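The exceeds_max_osds check counts how many entries in the argument list target each host; a minimal sketch of the idea (the threshold of 20 matches the warning described above):

def exceeds_max_osds(args, max_osds=20):
    # return {hostname: count} for every host with more than
    # `max_osds` HOST:DISK entries in the argument list
    hostnames = [host for host, disk, journal in args.disk]
    return dict(
        (host, hostnames.count(host))
        for host in set(hostnames)
        if hostnames.count(host) > max_osds
    )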
def prepare(args, cfg, activate_prepared_disk):
    LOG.debug(
        'Preparing cluster %s disks %s',
        args.cluster,
        ' '.join(':'.join(x or '' for x in t) for t in args.disk),
    )

    # warn if a single host gets more than 20 OSDs
    hosts_in_danger = exceeds_max_osds(args)

    if hosts_in_danger:
        LOG.warning('if ``kernel.pid_max`` is not increased to a high enough value')
        LOG.warning('the following hosts will encounter issues:')
        for host, count in hosts_in_danger.items():
            LOG.warning('Host: %8s, OSDs: %s' % (host, count))

    # read ceph.bootstrap-osd.keyring from the current directory
    key = get_bootstrap_osd_key(cluster=args.cluster)

    bootstrapped = set()
    errors = 0
    for hostname, disk, journal in args.disk:
        try:
            if disk is None:
                raise exc.NeedDiskError(hostname)

            distro = hosts.get(
                hostname,
                username=args.username,
                callbacks=[packages.ceph_is_installed]
            )
            LOG.info(
                'Distro info: %s %s %s',
                distro.name,
                distro.release,
                distro.codename
            )

            if hostname not in bootstrapped:
                bootstrapped.add(hostname)
                LOG.debug('Deploying osd to %s', hostname)

                conf_data = conf.ceph.load_raw(args)
                # write the config to /etc/ceph/ceph.conf
                distro.conn.remote_module.write_conf(
                    args.cluster,
                    conf_data,
                    args.overwrite_conf
                )

                # create and write /var/lib/ceph/bootstrap-osd/ceph.keyring
                create_osd_keyring(distro.conn, args.cluster, key)

            LOG.debug('Preparing host %s disk %s journal %s activate %s',
                      hostname, disk, journal, activate_prepared_disk)

            storetype = None
            if args.bluestore:
                storetype = 'bluestore'

            # prepare the OSD
            prepare_disk(
                distro.conn,
                cluster=args.cluster,
                disk=disk,
                journal=journal,
                activate_prepared_disk=activate_prepared_disk,
                init=distro.init,
                zap=args.zap_disk,
                fs_type=args.fs_type,
                dmcrypt=args.dmcrypt,
                dmcrypt_dir=args.dmcrypt_key_dir,
                storetype=storetype,
            )

            # give the OSD a few seconds to start
            time.sleep(5)
            # check the OSD status and log abnormal states as warnings
            catch_osd_errors(distro.conn, distro.conn.logger, args)
            LOG.debug('Host %s is now ready for osd use.', hostname)
            distro.conn.exit()

        except RuntimeError as e:
            LOG.error(e)
            errors += 1

    if errors:
        raise exc.GenericError('Failed to create %d OSDs' % errors)
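create_osd_keyring, called above, writes the bootstrap key on the remote host only if it is not already present; a simplified sketch:

def create_osd_keyring(conn, cluster, key):
    # write /var/lib/ceph/bootstrap-osd/<cluster>.keyring if missing
    path = '/var/lib/ceph/bootstrap-osd/{cluster}.keyring'.format(
        cluster=cluster,
    )
    if not conn.remote_module.path_exists(path):
        conn.logger.warning('osd keyring does not exist yet, creating one')
        conn.remote_module.write_keyring(path, key)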
The prepare_disk function
runs `ceph-disk -v prepare` to prepare the OSD
if activate_prepared_disk is True, enables the ceph service at boot
def prepare_disk(
        conn,
        cluster,
        disk,
        journal,
        activate_prepared_disk,
        init,
        zap,
        fs_type,
        dmcrypt,
        dmcrypt_dir,
        storetype):
    """
    Run on osd node, prepares a data disk for use.
    """
    ceph_disk_executable = system.executable_path(conn, 'ceph-disk')
    args = [
        ceph_disk_executable,
        '-v',
        'prepare',
    ]
    if zap:
        args.append('--zap-disk')
    if dmcrypt:
        args.append('--dmcrypt')
        if dmcrypt_dir is not None:
            args.append('--dmcrypt-key-dir')
            args.append(dmcrypt_dir)
    if storetype:
        args.append('--' + storetype)
    args.extend([
        '--cluster',
        cluster,
        '--fs-type',
        fs_type,
        '--',
        disk,
    ])
    if journal is not None:
        args.append(journal)
    # run the `ceph-disk -v prepare` command
    remoto.process.run(
        conn,
        args
    )

    # "activating" here means enabling the ceph service at boot
    if activate_prepared_disk:
        # we don't simply run activate here because we don't know
        # which partition ceph-disk prepare created as the data
        # volume. instead, we rely on udev to do the activation and
        # just give it a kick to ensure it wakes up. we also enable
        # ceph.target, the other key piece of activate.
        if init == 'systemd':
            system.enable_service(conn, 'ceph.target')
        elif init == 'sysvinit':
            system.enable_service(conn, 'ceph')
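To make the argument assembly concrete: for the manual example later in this article (/dev/sdb, default cluster ceph, xfs, --zap-disk, no dm-crypt or bluestore), the argv that prepare_disk builds would look roughly like this (the executable path may differ per distro):

[
    '/usr/sbin/ceph-disk', '-v', 'prepare',
    '--zap-disk',
    '--cluster', 'ceph',
    '--fs-type', 'xfs',
    '--', '/dev/sdb',
]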
Activating OSDs
The command format is: ceph-deploy osd activate [-h] HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]
The activate function:
runs `ceph-disk -v activate` to activate the OSD
checks the OSD status and logs any abnormal state as a warning (a sketch of catch_osd_errors follows the code)
enables the ceph service at boot
def activate(args, cfg):
    LOG.debug(
        'Activating cluster %s disks %s',
        args.cluster,
        # join elements of t with ':', t's with ' '
        # allow None in elements of t; print as empty
        ' '.join(':'.join((s or '') for s in t) for t in args.disk),
    )

    for hostname, disk, journal in args.disk:
        distro = hosts.get(
            hostname,
            username=args.username,
            callbacks=[packages.ceph_is_installed]
        )
        LOG.info(
            'Distro info: %s %s %s',
            distro.name,
            distro.release,
            distro.codename
        )

        LOG.debug('activating host %s disk %s', hostname, disk)
        LOG.debug('will use init type: %s', distro.init)

        ceph_disk_executable = system.executable_path(distro.conn, 'ceph-disk')
        # run `ceph-disk -v activate` to activate the OSD
        remoto.process.run(
            distro.conn,
            [
                ceph_disk_executable,
                '-v',
                'activate',
                '--mark-init',
                distro.init,
                '--mount',
                disk,
            ],
        )
        # give the OSD a few seconds to start
        time.sleep(5)
        # check the OSD status and log abnormal states as warnings
        catch_osd_errors(distro.conn, distro.conn.logger, args)

        # enable the ceph service at boot
        if distro.init == 'systemd':
            system.enable_service(distro.conn, 'ceph.target')
        elif distro.init == 'sysvinit':
            system.enable_service(distro.conn, 'ceph')

        distro.conn.exit()
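catch_osd_errors, used by both prepare and activate, compares the cluster's OSD counts and fullness flags and logs warnings for anything abnormal; a rough sketch, assuming a companion status helper (here called osd_status_check) that returns fields like num_osds and num_up_osds:

def catch_osd_errors(conn, logger, args):
    # query OSD status from the cluster and warn on anything abnormal
    checker = osd_status_check(conn, args.cluster)
    osds = int(checker.get('num_osds', 0))
    up_osds = int(checker.get('num_up_osds', 0))
    in_osds = int(checker.get('num_in_osds', 0))

    if checker.get('full', False):
        logger.warning('OSDs are full!')
    if checker.get('nearfull', False):
        logger.warning('OSDs are near full!')
    if up_osds < osds:
        logger.warning('%d OSDs are not `up` yet' % (osds - up_osds))
    if in_osds < osds:
        logger.warning('%d OSDs are not `in` the cluster yet' % (osds - in_osds))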
Managing OSDs Manually
Using disk sdb on host ceph-231 as an example, create an OSD.
Creating / Preparing the OSD
Prepare the OSD:
[root@ceph-231 ~]# ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb
Creating an OSD has one extra step: enable the ceph service at boot:
[root@ceph-231 ~]# systemctl enable ceph.target
Activating the OSD
Check the init system:
[root@ceph-231 ~]# cat /proc/1/comm
systemd
Activate the OSD:
[root@ceph-231 ~]# ceph-disk -v activate --mark-init systemd --mount /dev/sdb1
Enable the ceph service at boot:
[root@ceph-231 ~]# systemctl enable ceph.target