This article walks through how to resolve the HEALTH_WARN too few PGs per OSD warning in Ceph, step by step; if you have hit the same warning, the procedure below should serve as a useful reference.
health HEALTH_WARN too few PGs per OSD (16 < min 30)
Running ceph -s shows that the cluster status is not OK; the details are as follows:
$ sudo ceph -s
    cluster 257faba1-f259-4164-a0f9-1726bd70b05a
     health HEALTH_WARN
            too few PGs per OSD (16 < min 30)
     monmap e1: 1 mons at {bdc217=192.168.13.217:6789/0}
            election epoch 2, quorum 0 bdc217
     osdmap e50: 8 osds: 8 up, 8 in
            flags sortbitwise
      pgmap v119: 64 pgs, 1 pools, 0 bytes data, 0 objects
            715 MB used, 27550 GB / 29025 GB avail
                  64 active+clean
Since this is a freshly configured cluster, there is only one pool:
$ sudo ceph osd lspools
0 rbd,
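For a more detailed one-line view of that pool, including its replicated size, pg_num, and pgp_num, you can grep the OSD map dump (this assumes pool lines in the plain `ceph osd dump` output start with the word pool, as they do in this release):
$ sudo ceph osd dump | grep ^pool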
Check the pg_num of the rbd pool:
$ sudo ceph osd pool get rbd pg_num
pg_num: 64
pg_num is 64. With a two-replica configuration (size = 2) and 8 OSDs, each OSD carries on average 64 × 2 / 8 = 16 PGs, which is below the minimum of 30 and triggers the warning above.
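Before changing anything, you can verify the replica count that enters this calculation; the warning threshold itself comes from the mon_pg_warn_min_per_osd option (30 by default in Jewel-era releases). A quick check, with expected output shown for illustration, assuming the mon id bdc217 from the monmap above:
$ sudo ceph osd pool get rbd size
size: 2
$ sudo ceph daemon mon.bdc217 config get mon_pg_warn_min_per_osd
{
    "mon_pg_warn_min_per_osd": "30"
}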
Solution: raise pg_num on the default rbd pool. 128 is the smallest power of two that lifts every OSD above the threshold, since 128 × 2 / 8 = 32 > 30:
$ sudo ceph osd pool set rbd pg_num 128
set pool 0 pg_num to 128
$ sudo ceph -s
    cluster 257faba1-f259-4164-a0f9-1726bd70b05a
     health HEALTH_WARN
            64 pgs stuck inactive
            64 pgs stuck unclean
            pool rbd pg_num 128 > pgp_num 64
     monmap e1: 1 mons at {bdc217=192.168.13.217:6789/0}
            election epoch 2, quorum 0 bdc217
     osdmap e52: 8 osds: 8 up, 8 in
            flags sortbitwise
      pgmap v121: 128 pgs, 1 pools, 0 bytes data, 0 objects
            715 MB used, 27550 GB / 29025 GB avail
                  64 active+clean
                  64 creating
The health output shows that pgp_num has to be raised along with pg_num: the two default to the same value (both 64 here), and the newly created PGs stay stuck until pgp_num catches up. Set it to 128 as well:
$ sudo ceph osd pool set rbd pgp_num 128
set pool 0 pgp_num to 128
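To confirm the two values now match, query pgp_num the same way pg_num was queried earlier (the output shown is what you would expect after the set command above succeeds):
$ sudo ceph osd pool get rbd pgp_num
pgp_num: 128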
Finally, check the cluster status again; it now reports OK and the warning is resolved:
$ sudo ceph -s
    cluster 257faba1-f259-4164-a0f9-1726bd70b05a
     health HEALTH_OK
     monmap e1: 1 mons at {bdc217=192.168.13.217:6789/0}
            election epoch 2, quorum 0 bdc217
     osdmap e54: 8 osds: 8 up, 8 in
            flags sortbitwise
      pgmap v125: 128 pgs, 1 pools, 0 bytes data, 0 objects
            718 MB used, 27550 GB / 29025 GB avail
                  128 active+clean
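To avoid hitting the same warning on pools created later, you can raise the pool defaults on the admin node. This is a minimal sketch, assuming Jewel-era option names (osd pool default pg num / osd pool default pgp num) and that new pools should start at 128 PGs like rbd above; adjust the values to your own OSD count and replica size:
# /etc/ceph/ceph.conf
[global]
osd pool default pg num = 128
osd pool default pgp num = 128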
That is everything on resolving the HEALTH_WARN too few PGs per OSD warning in Ceph. Thanks for reading, and I hope the walkthrough is helpful.