This article walks through what to do when the secondary nodes of a MongoDB replica set report error 10061 on the console. The troubleshooting steps below are detailed and should serve as a useful reference.
First, examine the console logs of the three cluster nodes.
1. Console logs of the three cluster servers
192.168.72.33
2018-01-05T09:46:24.281+0800 I STORAGE [initandlisten] Placing a marker at optime Jan 05 05:16:28:3e9
2018-01-05T09:46:24.432+0800 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2018-01-05T09:46:24.432+0800 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory 'd:/mongodata/rs0-2/diagnostic.data'
2018-01-05T09:46:24.443+0800 I NETWORK [initandlisten] waiting for connections on port 27013
2018-01-05T09:46:25.485+0800 W NETWORK [ReplicationExecutor] Failed to connect to 192.168.72.31:27011, reason: errno:10061 No connection could be made because the target machine actively refused it.
2018-01-05T09:46:25.533+0800 I REPL [ReplicationExecutor] New replica set config in use: { _id: "rs0", version: 8, protocolVersion: 1, members: [ { _id: 0, host: "mongodb-rs0-0:27011", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 100.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "mongodb-rs0-1:27012", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "mongodb-rs0-2:27013", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('59365592734d0747ee26e2a6') } }
2018-01-05T09:46:25.534+0800 I REPL [ReplicationExecutor] This node is mongodb-rs0-2:27013 in the config
192.168.72.32
2018-01-05T09:46:17.064+0800 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2018-01-05T09:46:17.064+0800 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory 'd:/mongodata/rs0-1/diagnostic.data'
2018-01-05T09:46:17.076+0800 I NETWORK [initandlisten] waiting for connections on port 27012
2018-01-05T09:46:18.102+0800 W NETWORK [ReplicationExecutor] Failed to connect to 192.168.72.31:27011, reason: errno:10061 No connection could be made because the target machine actively refused it.
2018-01-05T09:46:19.149+0800 W NETWORK [ReplicationExecutor] Failed to connect to 192.168.72.33:27013, reason: errno:10061 No connection could be made because the target machine actively refused it.
2018-01-05T09:46:19.150+0800 I REPL [ReplicationExecutor] New replica set config in use: { _id: "rs0", version: 8, protocolVersion: 1, members: [ { _id: 0, host: "mongodb-rs0-0:27011", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 100.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "mongodb-rs0-1:27012", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "mongodb-rs0-2:27013", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('59365592734d0747ee26e2a6') } }
2018-01-05T09:46:19.150+0800 I REPL [ReplicationExecutor] This node is mongodb-rs0-1:27012 in the config
192.168.72.31
2018-01-05T15:56:42.999+0800 I STORAGE [initandlisten] Placing a marker at optime Jan 05 05:12:59:b4a
2018-01-05T15:56:43.000+0800 I STORAGE [initandlisten] Placing a marker at optime Jan 05 05:13:08:8df
2018-01-05T15:56:43.000+0800 I STORAGE [initandlisten] Placing a marker at optime Jan 05 05:14:05:329
2018-01-05T15:56:43.001+0800 I STORAGE [initandlisten] Placing a marker at optime Jan 05 05:15:30:25f
2018-01-05T15:56:43.002+0800 I STORAGE [initandlisten] Placing a marker at optime Jan 05 05:15:39:4b1
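Rather than reading raw console logs, each member's view of the replica set can also be queried directly. Below is a minimal sketch in Python with pymongo, not part of the original troubleshooting session; it assumes pymongo 3.11+ for the directConnection flag, and the host and port are those of the secondary 192.168.72.33 from the config above.

    # Minimal sketch: ask one member for its view of the replica set.
    from pymongo import MongoClient
    from pymongo.errors import PyMongoError

    client = MongoClient("192.168.72.33", 27013,
                         directConnection=True,        # talk to this node only
                         serverSelectionTimeoutMS=5000)
    try:
        status = client.admin.command("replSetGetStatus")
        for m in status["members"]:
            # health is 1 for reachable members and 0 for unreachable ones
            print(m["name"], m["stateStr"], "health =", m["health"])
    except PyMongoError as exc:
        print("could not get replica set status:", exc)

A member the node cannot reach is reported with health 0, which corresponds to the "Failed to connect" entries in the logs above.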
From the logs above we can infer that a storage-related wait event on the primary node 192.168.72.31 caused it to refuse the TCP connections from the two secondaries, 192.168.72.32 and 192.168.72.33.
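The refusal itself can be confirmed independently of MongoDB with a plain TCP probe; on Windows a refused connection raises WinError 10061, the same errno shown in the logs. A minimal sketch, with the member addresses taken from the logs above:

    # Minimal TCP reachability probe; a refused connection raises
    # ConnectionRefusedError (WinError 10061 on Windows).
    import socket

    members = [("192.168.72.31", 27011),
               ("192.168.72.32", 27012),
               ("192.168.72.33", 27013)]

    for host, port in members:
        try:
            with socket.create_connection((host, port), timeout=3):
                print(f"{host}:{port} is accepting connections")
        except ConnectionRefusedError:
            print(f"{host}:{port} actively refused the connection (errno 10061)")
        except OSError as exc:
            print(f"{host}:{port} unreachable: {exc}")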
2. Following the hint from step 1, check the operating-system logs for the mongod service. The OS log had been warning since 04:59:25 on 2018-01-05 that drive D was full.
3. Check the storage on 192.168.72.31. Just as the OS log indicated, drive D had only 58 MB of free space left.
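The same check can be scripted with the Python standard library; a minimal sketch, assuming it runs on 192.168.72.31 itself and that D:\ is the drive holding the mongod data directory:

    # Minimal free-space check for the drive holding the mongod data files.
    import shutil

    usage = shutil.disk_usage("D:\\")
    free_mb = usage.free / 1024 / 1024
    print(f"D: free space: {free_mb:.0f} MB of {usage.total / 1024 ** 3:.1f} GB total")
    if free_mb < 1024:  # illustrative threshold, not from the incident
        print("warning: less than 1 GB free; mongod writes may start failing")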
4. From the above we can conclude that the primary node 192.168.72.31 ran out of disk space, so its mongod process could no longer complete writes and refused connections from the two secondaries, taking down the whole MongoDB cluster. It emerged that the local operations team had backed up the primary's data onto the same machine without noticing how little space was left on drive D.
Afterwards, the team immediately deleted the redundant data backup on 192.168.72.31 to free space on drive D. Because the scheduler program was hung, they decided to restart all three cluster servers, 192.168.72.31/32/33.
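One preventive takeaway is to guard backup jobs with a free-space check before they write to the data drive. The following is a hypothetical sketch; the destination drive and threshold are illustrative, not taken from the incident:

    # Hypothetical pre-backup guard: abort unless the destination drive has
    # enough headroom. Drive letter and threshold are illustrative only.
    import shutil
    import sys

    TARGET_DRIVE = "D:\\"            # assumed backup destination
    MIN_FREE_BYTES = 10 * 1024 ** 3  # e.g. require 10 GB free

    free = shutil.disk_usage(TARGET_DRIVE).free
    if free < MIN_FREE_BYTES:
        print(f"aborting backup: only {free / 1024 ** 2:.0f} MB free on {TARGET_DRIVE}")
        sys.exit(1)
    print("enough free space; safe to run the backup")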
5. After the restart, the MongoDB cluster returned to normal, and the console on the primary node 192.168.72.31 showed the scheduler program bmi being accepted for connections to the cluster's admin database.
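Before declaring recovery complete, it is also worth confirming each member's role from the outside. A minimal sketch using the isMaster command, which is the appropriate call for MongoDB servers of this vintage; the addresses are taken from the config above:

    # Minimal post-recovery check: ask each member for its role via isMaster.
    from pymongo import MongoClient
    from pymongo.errors import PyMongoError

    for host, port in [("192.168.72.31", 27011),
                       ("192.168.72.32", 27012),
                       ("192.168.72.33", 27013)]:
        try:
            client = MongoClient(host, port, directConnection=True,
                                 serverSelectionTimeoutMS=5000)
            info = client.admin.command("isMaster")
            role = "PRIMARY" if info.get("ismaster") else "SECONDARY or other"
            print(f"{host}:{port} -> {role}")
        except PyMongoError as exc:
            print(f"{host}:{port} -> unreachable: {exc}")

With 192.168.72.31 back online, it should report PRIMARY again, matching its priority of 100.0 in the replica set config.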