This post looks at what to do when repeatedly adding and removing the same OSD leaves the OSD unable to reach the `up` state.
### Environment
- Pre-production cluster; the crushmap was built by hand, and the location of osd.139 was already set in it.
- `noout` was set on the cluster (`ceph osd set noout`).
- Ceph version: 0.94.5
- OSDs run with `osd crush update on start = false`, so that an OSD starting up does not modify the crushmap.
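The two cluster-side settings above can be sketched as follows (a minimal sketch; the config-file path and section layout are the standard ones, adjust to your deployment):

```shell
# Stop the monitors from marking OSDs "out" during the maintenance exercise:
ceph osd set noout

# In ceph.conf, keep OSD startup from rewriting the hand-built crushmap:
#   [osd]
#   osd crush update on start = false
```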
### Symptom

While simulating a single-node failure, the same OSD was added and removed by hand several times (only the data and keyring were deleted; the crushmap was left untouched). In the end, the newly added OSD process started, and its startup log showed no errors, but the OSD never reached the `up` state.
```
2016-04-01 11:19:16.868837 7fee3654b900 0 ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43), process ceph-osd, pid 104255
.....
2016-04-01 11:19:19.295992 7fee3654b900 0 osd.139 12789 crush map has features 2200130813952, adjusting msgr requires for clients
2016-04-01 11:19:19.296008 7fee3654b900 0 osd.139 12789 crush map has features 2200130813952 was 8705, adjusting msgr requires for mons
2016-04-01 11:19:19.296016 7fee3654b900 0 osd.139 12789 crush map has features 2200130813952, adjusting msgr requires for osds
2016-04-01 11:19:19.296052 7fee3654b900 0 osd.139 12789 load_pgs
2016-04-01 11:19:19.296094 7fee3654b900 0 osd.139 12789 load_pgs opened 0 pgs
2016-04-01 11:19:19.296878 7fee3654b900 -1 osd.139 12789 log_to_monitors {default=true}
2016-04-01 11:19:19.305091 7fee246f1700 0 osd.139 12789 ignoring osdmap until we have initialized
2016-04-01 11:19:19.305239 7fee246f1700 0 osd.139 12789 ignoring osdmap until we have initialized
2016-04-01 11:19:19.305425 7fee3654b900 0 osd.139 12789 done with init, starting boot process
```
With `debug osd = 20` enabled, the log shows the OSD looping on the following forever:
```
2016-04-01 11:46:23.300790 7f9219d15700 20 osd.139 12813 update_osd_stat osd_stat(538 MB used, 3723 GB avail, 3724 GB total, peers []/[] op hist [])
2016-04-01 11:46:23.300821 7f9219d15700 5 osd.139 12813 heartbeat: osd_stat(538 MB used, 3723 GB avail, 3724 GB total, peers []/[] op hist [])
2016-04-01 11:46:25.200613 7f9231e86700 5 osd.139 12813 tick
2016-04-01 11:46:25.200644 7f9231e86700 10 osd.139 12813 do_waiters -- start
2016-04-01 11:46:25.200648 7f9231e86700 10 osd.139 12813 do_waiters -- finish
2016-04-01 11:46:25.600974 7f9219d15700 20 osd.139 12813 update_osd_stat osd_stat(538 MB used, 3723 GB avail, 3724 GB total, peers []/[] op hist [])
2016-04-01 11:46:25.601002 7f9219d15700 5 osd.139 12813 heartbeat: osd_stat(538 MB used, 3723 GB avail, 3724 GB total, peers []/[] op hist [])
2016-04-01 11:46:26.200759 7f9231e86700 5 osd.139 12813 tick
2016-04-01 11:46:26.200784 7f9231e86700 10 osd.139 12813 do_waiters -- start
2016-04-01 11:46:26.200788 7f9231e86700 10 osd.139 12813 do_waiters -- finish
2016-04-01 11:46:27.200867 7f9231e86700 5 osd.139 12813 tick
2016-04-01 11:46:27.200892 7f9231e86700 10 osd.139 12813 do_waiters -- start
2016-04-01 11:46:27.200895 7f9231e86700 10 osd.139 12813 do_waiters -- finish
2016-04-01 11:46:28.201002 7f9231e86700 5 osd.139 12813 tick
2016-04-01 11:46:28.201022 7f9231e86700 10 osd.139 12813 do_waiters -- start
2016-04-01 11:46:28.201030 7f9231e86700 10 osd.139 12813 do_waiters -- finish
2016-04-01 11:46:29.101147 7f9219d15700 20 osd.139 12813 update_osd_stat osd_stat(538 MB used, 3723 GB avail, 3724 GB total, peers []/[] op hist [])
2016-04-01 11:46:29.101180 7f9219d15700 5 osd.139 12813 heartbeat: osd_stat(538 MB used, 3723 GB avail, 3724 GB total, peers []/[] op hist [])
2016-04-01 11:46:29.201115 7f9231e86700 5 osd.139 12813 tick
2016-04-01 11:46:29.201128 7f9231e86700 10 osd.139 12813 do_waiters -- start
2016-04-01 11:46:29.201132 7f9231e86700 10 osd.139 12813 do_waiters -- finish
2016-04-01 11:46:30.201237 7f9231e86700 5 osd.139 12813 tick
2016-04-01 11:46:30.201267 7f9231e86700 10 osd.139 12813 do_waiters -- start
2016-04-01 11:46:30.201271 7f9231e86700 10 osd.139 12813 do_waiters -- finish
```
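The stuck state can also be confirmed from the monitor side before touching anything; a sketch, using the OSD id from the logs above:

```shell
# The process is running, yet the cluster still reports the OSD as down:
ceph osd tree | grep osd.139
ceph osd dump  | grep osd.139

# Note the empty "peers []/[]" in the heartbeat lines above: the OSD has no
# heartbeat peers, consistent with a boot process that never completes.
```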
### Fix

1. Remove the corresponding OSD entry from the crushmap:

```
ceph osd crush remove osd.139   # note: this may trigger data migration
```

2. Start the OSD service, then add the OSD back into the crushmap:

```
ceph osd crush add 139 1.0 host=xxx
```
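After the OSD is back in the crushmap, it is worth verifying that it actually boots, and, once the exercise is over, clearing the flag set at the start; a sketch:

```shell
# The OSD should now reach the "up" state:
ceph osd tree | grep osd.139

# Watch overall cluster health (and any recovery/backfill if data moved):
ceph -s

# When the maintenance is finished, clear the flag set earlier:
ceph osd unset noout
```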