This article walks through recovering from a Ceph monitor failure caused by a server IP address change, and the OSD disk crash that followed.
When the company moved offices, every server's IP address changed. After reconfiguring the IPs on the Ceph servers and starting them up, the monitor processes failed to start: each monitor kept trying to bind to its old IP address, which of course could no longer succeed. At first this looked like a misconfigured server IP, but after changing the hostname, ceph.conf, and so on with no effect, closer inspection showed that the monmap still contained the old IP addresses. Ceph reads the monmap to start the monitor processes, so the monmap itself had to be modified. The procedure:
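The stale addresses can be confirmed before changing anything by printing the map the monitor actually reads at startup. A minimal check, assuming the default monitor store path and that the monitor daemon is stopped (the output path /tmp/monmap-old is an arbitrary choice):

```shell
# Dump the monmap from the stopped monitor's local store and print it;
# it will still list the pre-move IP addresses.
ceph-mon -i mon0 --extract-monmap /tmp/monmap-old
monmaptool --print /tmp/monmap-old
```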
# Retrieve the current monitor map and check its contents
# (it still shows the old IP addresses)
ceph mon getmap -o monmap.bin
monmaptool --print monmap.bin

# Build a new monmap with the new monitor addresses
monmaptool --create --add mon0 192.168.32.2:6789 --add osd1 192.168.32.3:6789 \
    --add osd2 192.168.32.4:6789 --fsid 61a520db-317b-41f1-9752-30cedc5ffb9a \
    --clobber monmap

# Inject the rebuilt monmap into each monitor
ceph-mon -i mon0 --inject-monmap monmap
ceph-mon -i osd1 --inject-monmap monmap
ceph-mon -i osd2 --inject-monmap monmap
After restarting the monitors, everything worked again.
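For reference, the same repair can also be done by editing the extracted monmap in place rather than rebuilding it from scratch with --create. A sketch only, shown for one monitor and assuming the daemon is stopped first; the file path is an arbitrary choice:

```shell
# Stop the monitor, pull out its current map, swap the stale address
# for the new one, inject the fixed map, and restart.
service ceph stop mon.mon0
ceph-mon -i mon0 --extract-monmap /tmp/monmap
monmaptool --rm mon0 /tmp/monmap
monmaptool --add mon0 192.168.32.2:6789 /tmp/monmap
ceph-mon -i mon0 --inject-monmap /tmp/monmap
service ceph start mon.mon0
```

Repeat for each monitor; this avoids having to re-specify the fsid by hand.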
But then the problem described in the previous article appeared again: one OSD disk went down. After searching around, the only lead was a report on the Ceph site suggesting it was a Ceph bug. With no fix available, the OSD was removed and reinstalled:
service ceph stop osd.4
# ceph osd crush remove osd.4    # not necessary in this case
ceph auth del osd.4
ceph osd rm 4
umount /cephmp1
mkfs.xfs -f /dev/sdc
mount /dev/sdc /cephmp1
# "ceph-deploy osd create" failed to install the OSD here,
# so prepare and activate were run separately:
ceph-deploy osd prepare osd2:/cephmp1:/dev/sdf1
ceph-deploy osd activate osd2:/cephmp1:/dev/sdf1
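After the reinstall, the new OSD's state can be checked before trusting the rebalance. A quick sanity check, assuming the daemon is managed by the same init script used above:

```shell
service ceph start osd.4    # start the rebuilt OSD
ceph osd tree               # confirm osd.4 is listed as up
ceph -s                     # watch the cluster state while data rebalances
```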
After this, restart the OSD and it runs normally. Ceph rebalances the data automatically; the state afterwards is:
[root@osd2 ~]# ceph -s
    cluster 61a520db-317b-41f1-9752-30cedc5ffb9a
     health HEALTH_WARN 9 pgs incomplete; 9 pgs stuck inactive; 9 pgs stuck unclean; 3 requests are blocked > 32 sec
     monmap e3: 3 mons at {mon0=192.168.32.2:6789/0,osd1=192.168.32.3:6789/0,osd2=192.168.32.4:6789/0}, election epoch 76, quorum 0,1,2 mon0,osd1,osd2
     osdmap e689: 6 osds: 6 up, 6 in
      pgmap v189608: 704 pgs, 5 pools, 34983 MB data, 8966 objects
            69349 MB used, 11104 GB / 11172 GB avail
                 695 active+clean
                   9 incomplete
Nine PGs were left in the incomplete state.
[root@osd2 ~]# ceph health detail
HEALTH_WARN 9 pgs incomplete; 9 pgs stuck inactive; 9 pgs stuck unclean; 3 requests are blocked > 32 sec; 1 osds have slow requests
pg 5.95 is stuck inactive for 838842.634721, current state incomplete, last acting [1,4]
pg 5.66 is stuck inactive since forever, current state incomplete, last acting [4,0]
pg 5.de is stuck inactive for 808270.105968, current state incomplete, last acting [0,4]
pg 5.f5 is stuck inactive for 496137.708887, current state incomplete, last acting [0,4]
pg 5.11 is stuck inactive since forever, current state incomplete, last acting [4,1]
pg 5.30 is stuck inactive for 507062.828403, current state incomplete, last acting [0,4]
pg 5.bc is stuck inactive since forever, current state incomplete, last acting [4,1]
pg 5.a7 is stuck inactive for 499713.993372, current state incomplete, last acting [1,4]
pg 5.22 is stuck inactive for 496125.831204, current state incomplete, last acting [0,4]
pg 5.95 is stuck unclean for 838842.634796, current state incomplete, last acting [1,4]
pg 5.66 is stuck unclean since forever, current state incomplete, last acting [4,0]
pg 5.de is stuck unclean for 808270.106039, current state incomplete, last acting [0,4]
pg 5.f5 is stuck unclean for 496137.708958, current state incomplete, last acting [0,4]
pg 5.11 is stuck unclean since forever, current state incomplete, last acting [4,1]
pg 5.30 is stuck unclean for 507062.828475, current state incomplete, last acting [0,4]
pg 5.bc is stuck unclean since forever, current state incomplete, last acting [4,1]
pg 5.a7 is stuck unclean for 499713.993443, current state incomplete, last acting [1,4]
pg 5.22 is stuck unclean for 496125.831274, current state incomplete, last acting [0,4]
pg 5.de is incomplete, acting [0,4]
pg 5.bc is incomplete, acting [4,1]
pg 5.a7 is incomplete, acting [1,4]
pg 5.95 is incomplete, acting [1,4]
pg 5.66 is incomplete, acting [4,0]
pg 5.30 is incomplete, acting [0,4]
pg 5.22 is incomplete, acting [0,4]
pg 5.11 is incomplete, acting [4,1]
pg 5.f5 is incomplete, acting [0,4]
2 ops are blocked > 8388.61 sec
1 ops are blocked > 4194.3 sec
2 ops are blocked > 8388.61 sec on osd.0
1 ops are blocked > 4194.3 sec on osd.0
1 osds have slow requests
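One pattern worth noticing in this output: osd.4, the rebuilt OSD, appears in the acting set of every single incomplete PG. A quick way to tally that from the `ceph health detail` text; this sketch embeds the nine "is incomplete" lines above as sample input, but in practice you would pipe the live command output instead:

```shell
# Sample input: the incomplete-PG lines from "ceph health detail" above.
sample='pg 5.de is incomplete, acting [0,4]
pg 5.bc is incomplete, acting [4,1]
pg 5.a7 is incomplete, acting [1,4]
pg 5.95 is incomplete, acting [1,4]
pg 5.66 is incomplete, acting [4,0]
pg 5.30 is incomplete, acting [0,4]
pg 5.22 is incomplete, acting [0,4]
pg 5.11 is incomplete, acting [4,1]
pg 5.f5 is incomplete, acting [0,4]'

# Pull out each acting set, split it into individual OSD ids,
# and count how often each OSD appears, most frequent first.
# osd.4 shows up in all 9 acting sets.
echo "$sample" | grep -o 'acting \[[0-9,]*\]' \
  | tr -d 'acting []' | tr ',' '\n' \
  | sort | uniq -c | sort -rn
```

Seeing one OSD implicated in every incomplete PG points the investigation at that OSD rather than at the pool as a whole.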
Searching around turned up nothing. From someone who hit the same problem:
I already tried ceph pg repair 4.77 , stop/start OSDs, ceph osd lost , ceph pg force_create_pg 4.77 . Most scary thing is force_create_pg does not work. At least it should be a way to wipe out a incomplete PG without destroying a whole pool.
I tried all of the above as well; none of it worked. For now the problem remains unsolved, which is rather frustrating.
PS: commonly used PG commands
[root@osd2 ~]# ceph pg map 5.de
osdmap e689 pg 5.de (5.de) -> up [0,4] acting [0,4]
[root@osd2 ~]# ceph pg 5.de query
[root@osd2 ~]# ceph pg scrub 5.de
instructing pg 5.de on osd.0 to scrub
[root@osd2 ~]# ceph pg 5.de mark_unfound_lost revert
pg has no unfound objects
# ceph pg dump_stuck stale
# ceph pg dump_stuck inactive
# ceph pg dump_stuck unclean
[root@osd2 ~]# ceph osd lost 1
Error EPERM: are you SURE? this might mean real, permanent data loss. pass --yes-i-really-mean-it if you really do.
[root@osd2 ~]# ceph osd lost 4 --yes-i-really-mean-it
osd.4 is not down or doesn't exist
[root@osd2 ~]# service ceph stop osd.4
=== osd.4 ===
Stopping Ceph osd.4 on osd2...kill 22287...kill 22287...done
[root@osd2 ~]# ceph osd lost 4 --yes-i-really-mean-it
marked osd lost in epoch 690
[root@osd1 mnt]# ceph pg repair 5.de
instructing pg 5.de on osd.0 to repair