
MongoDB doesn't move chunks to a new shard in a sharded cluster


I am running MongoDB to store data. The cluster has 3 shards, each shard being a 3-server replica set, plus 2 mongos and 3 config servers. Each server has 1 TB of storage. Two of the three shards are now at roughly 90% of capacity. When I added a new shard, MongoDB did not move any chunks from the old shards to the new one. I checked the sharding status from mongos, and it shows that the MongoDB balancer is running:

mongos> db.printShardingStatus()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("59c0ef31619ac70cb8ac5f5c")
}
shards:
{ "_id" : "rs0", "host" : "rs0/10.5.36.88:27017,10.5.36.92:27017,10.5.36.93:27017", "state" : 1, "maxSize" : 990000 }
{ "_id" : "rs1", "host" : "rs1/10.5.36.101:27017,10.5.36.103:27017,10.5.36.97:27017", "state" : 1, "maxSize" : 990000 }
{ "_id" : "rs2", "host" : "rs2/10.5.36.100:27017,10.5.36.117:27017,10.5.36.126:27017", "state" : 1, "maxSize" : 990000 }
{ "_id" : "rs3", "host" : "rs3/10.5.36.152:27017,10.5.36.156:27017,10.5.36.164:27017", "state" : 1, "maxSize" : 990000 }
active mongoses:
"3.4.9" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: yes
Balancer lock taken at Wed Sep 20 2017 09:21:43 GMT+0700 by ConfigServer:Balancer
Collections with active migrations:
fbgroups.comments started at Wed Nov 22 2017 22:36:15 GMT+0700
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "fbpages", "primary" : "rs0", "partitioned" : true }
fbpages.comments
shard key: { "CommentFbId" : 1 }
unique: true
balancing: true
chunks:
rs0 6263
rs1 6652
rs2 6175
too many chunks to print, use verbose if you want to force print
fbpages.links
shard key: { "PageFbId" : 1 }
unique: true
balancing: true
chunks:
rs0 23
rs1 23
rs2 23
too many chunks to print, use verbose if you want to force print
fbpages.posts
shard key: { "PostFbId" : 1 }
unique: true
balancing: true
chunks:
rs0 11931
rs1 11847
rs2 5043
too many chunks to print, use verbose if you want to force print
{ "_id" : "fbgroups", "primary" : "rs0", "partitioned" : true }
fbgroups.comments
shard key: { "CommentFbId" : 1 }
unique: true
balancing: true
chunks:
rs0 6451
rs1 6451
rs2 4742
too many chunks to print, use verbose if you want to force print
fbgroups.links
shard key: { "GroupId" : 1 }
unique: true
balancing: true
chunks:
rs0 3
rs1 3
rs2 3
{ "GroupId" : { "$minKey" : 1 } } -->> { "GroupId" : "1391082767860588" } on : rs2 Timestamp(7, 0)
{ "GroupId" : "1391082767860588" } -->> { "GroupId" : "1564129037230139" } on : rs0 Timestamp(7, 1)
{ "GroupId" : "1564129037230139" } -->> { "GroupId" : "172020656162023" } on : rs0 Timestamp(4, 0)
{ "GroupId" : "172020656162023" } -->> { "GroupId" : "244621675585655" } on : rs0 Timestamp(5, 0)
{ "GroupId" : "244621675585655" } -->> { "GroupId" : "375231932588613" } on : rs2 Timestamp(6, 0)
{ "GroupId" : "375231932588613" } -->> { "GroupId" : "506856652708047" } on : rs2 Timestamp(8, 0)
{ "GroupId" : "506856652708047" } -->> { "GroupId" : "67046218160" } on : rs1 Timestamp(8, 1)
{ "GroupId" : "67046218160" } -->> { "GroupId" : "878610618830881" } on : rs1 Timestamp(1, 7)
{ "GroupId" : "878610618830881" } -->> { "GroupId" : { "$maxKey" : 1 } } on : rs1 Timestamp(1, 8)
fbgroups.postdata
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
rs0 91
rs1 482
rs2 91
too many chunks to print, use verbose if you want to force print
fbgroups.posts
shard key: { "PostFbId" : 1 }
unique: true
balancing: true
chunks:
rs0 26015
rs1 26092
rs2 6526
too many chunks to print, use verbose if you want to force print
{ "_id" : "test", "primary" : "rs1", "partitioned" : true }
{ "_id" : "intership", "primary" : "rs1", "partitioned" : false }
{ "_id" : "fbhashtags", "primary" : "rs2", "partitioned" : true }
fbhashtags.postdata
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
rs0 2
rs1 2
rs2 2
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("58f122ff7fe5fb4520b4185a") } on : rs0 Timestamp(2, 0)
{ "_id" : ObjectId("58f122ff7fe5fb4520b4185a") } -->> { "_id" : ObjectId("58fac0537fe5fb051d0749de") } on : rs1 Timestamp(3, 0)
{ "_id" : ObjectId("58fac0537fe5fb051d0749de") } -->> { "_id" : ObjectId("5906119e7fe5fb2c7d9d41e9") } on : rs0 Timestamp(4, 0)
{ "_id" : ObjectId("5906119e7fe5fb2c7d9d41e9") } -->> { "_id" : ObjectId("591012257fe5fb70dc9e49bf") } on : rs1 Timestamp(5, 0)
{ "_id" : ObjectId("591012257fe5fb70dc9e49bf") } -->> { "_id" : ObjectId("5918b5d77fe5fb2feb06338a") } on : rs2 Timestamp(5, 1)
{ "_id" : ObjectId("5918b5d77fe5fb2feb06338a") } -->> { "_id" : { "$maxKey" : 1 } } on : rs2 Timestamp(1, 5)
fbhashtags.posts
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
rs2 1
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : rs2 Timestamp(1, 0)
{ "_id" : "fbprofiles", "primary" : "rs2", "partitioned" : true }
fbprofiles.fbcomments
shard key: { "commentFbId" : 1 }
unique: true
balancing: true
chunks:
rs0 18
rs1 18
rs2 19
too many chunks to print, use verbose if you want to force print
fbprofiles.fbposts
shard key: { "postFbId" : 1 }
unique: true
balancing: true
chunks:
rs0 7
rs1 7
rs2 3144
too many chunks to print, use verbose if you want to force print
fbprofiles.fbprofiles
shard key: { "baseUrl" : 1 }
unique: true
balancing: true
chunks:
rs0 2
rs1 2
rs2 141
too many chunks to print, use verbose if you want to force print
{ "_id" : "testnewfb", "primary" : "rs2", "partitioned" : false }
{ "_id" : "news_images", "primary" : "rs2", "partitioned" : false }
{ "_id" : "social_index", "primary" : "rs2", "partitioned" : false }
{ "_id" : "twitter", "primary" : "rs2", "partitioned" : true }
{ "_id" : "techmeme", "primary" : "rs2", "partitioned" : false }

Why doesn't MongoDB move data to the new shard (rs3)? Thanks!
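For context, a few shell checks, run through a mongos, can help narrow down why the balancer reports as running yet moves nothing to rs3. This is a hedged diagnostic sketch rather than anything the poster ran: the balancer skips a shard that has reached its maxSize (990000 MB in the output above), and balancer rounds and moveChunk errors are recorded in the config database.

// Confirm the balancer is enabled and able to complete a round
sh.getBalancerState()
sh.isBalancerRunning()
db.adminCommand({ balancerStatus: 1 })

// Inspect per-shard settings; a shard at or over its maxSize is not a valid migration target
db.getSiblingDB("config").shards.find().pretty()

// Look for recent balancer rounds and moveChunk errors recorded by the config servers
db.getSiblingDB("config").actionlog.find({ what: "balancer.round" }).sort({ time: -1 }).limit(5).pretty()
db.getSiblingDB("config").changelog.find({ what: /moveChunk/ }).sort({ time: -1 }).limit(5).pretty()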

Best Answer

I found the problem in my case: one of the three Mongo config servers did not have all of the hosts configured correctly. I fixed /etc/hosts and ran stepDown() on the config server primary, and after that everything worked fine. It cost a lot of time. My mistake: when the server was powered off, it did not recover its previous state.
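A minimal sketch of that check-and-fix sequence, assuming the config servers run as a replica set (which MongoDB 3.4 requires); only stepDown() is confirmed by the answer above, the rest is ordinary replica-set inspection:

// On a config server member, verify that every member resolves and reports healthy
rs.status().members.forEach(function (m) {
    print(m.name + "  " + m.stateStr + "  health=" + m.health);
});

// After correcting /etc/hosts on the misconfigured host, step down the current
// config server primary so a correctly configured member takes over
rs.stepDown()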

Regarding MongoDB not moving chunks to a new shard in a sharded cluster, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/47605534/
