
elasticsearch - ElasticSearch unassigned shards with two nodes (different machines), 1 master, both new instances


I started two clean Elasticsearch instances (nodes) with no data on two different machines (one Windows, one OS X). They discovered each other successfully. One has node.master: false; both have node.data: true. I started Kibana (which created the .kibana index) and created a test index (test) with number_of_replicas = 1. Each index, and the cluster as a whole, is in yellow status, which I believe is due to unassigned shards. I'm at a loss as to how to resolve the unassigned shards.
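For context, here is a minimal sketch of the setup described above. The exact elasticsearch.yml contents and the index-creation request are assumptions reconstructed from the description; only node.master, node.data and number_of_replicas are stated in the question, and five primary shards is simply the 2.x default.

# elasticsearch.yml on the non-master node (the other node keeps node.master: true)
node.master: false
node.data: true

# create the test index with one replica per primary shard
PUT /test
{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  }
}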

When I try to force the shard allocation, I get the following error:

shard cannot be allocated on same node [tNUHIE6cTHO6h37P_s3m7w] it already exists on

Some details:

_cat/nodes?v:

host         ip           heap.percent ram.percent  load node.role master name
192.168.1.99 192.168.1.99            2          81  1.95 d         *      node1
192.168.1.2  192.168.1.2            13          46 -1.00 d         -      node2

Node 1: _cluster/health
{
  "cluster_name": "elasticsearch",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 2,
  "number_of_data_nodes": 2,
  "active_primary_shards": 6,
  "active_shards": 9,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 3,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 75
}

There are no errors in the logs, but if I run:

_cluster/reroute?pretty
{
  "commands": [
    {
      "allocate": {
        "index": "test",
        "shard": 1,
        "node": "node2"
      }
    }
  ]
}

I get the following response:
{ "error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "[allocate] allocation of [test][1] on node { node2}{tNUHIE6cTHO6h37P_s3m7w}{192.168.1.2}{192.168.1.2:9300}{master=false} is not allowed,
reason: [YES(target node version [2.1.1] is same or newer than source node version [2.1.1])]
[YES(enough disk for shard on node, free: [111.6gb])]
[YES(shard not primary or relocation disabled)]
[YES(primary is already active)][YES(node passes include/exclude/require filters)]
[YES(allocation disabling is ignored)]
[NO(shard cannot be allocated on same node [tNUHIE6cTHO6h37P_s3m7w] it already exists on)]
[YES(total shard limit disabled: [index: -1, cluster: -1] <= 0)][YES(below shard recovery limit of [2])][YES(no allocation awareness enabled)][YES(allocation disabling is ignored)]"
}
],
...
"status": 400
}

_cat/shards?v

index   shard prirep state      docs store ip           node
test    3     p      STARTED       0  130b 192.168.1.2  node2
test    3     r      UNASSIGNED
test    4     r      STARTED       0  130b 192.168.1.2  node2
test    4     p      STARTED       0  130b 192.168.1.99 node1
test    1     p      STARTED       0  130b 192.168.1.2  node2
test    1     r      UNASSIGNED
test    2     r      STARTED       0  130b 192.168.1.2  node2
test    2     p      STARTED       0  130b 192.168.1.99 node1
test    0     r      STARTED       0  130b 192.168.1.2  node2
test    0     p      STARTED       0  130b 192.168.1.99 node1
.kibana 0     p      STARTED       1 3.1kb 192.168.1.2  node2
.kibana 0     r      UNASSIGNED

Any help for a newbie on how to resolve this would be greatly appreciated.

Best Answer

You can only safely reroute replica shards. The GET _cat/shards?v output clearly shows that shard 1 of the test index (the primary) is already allocated on node2. You cannot allocate a shard onto a node that already holds a copy of it, which is exactly what the output of the _cluster/reroute command is telling you. Instead of allocating it on node2, allocate it on node1. Try the following command:

POST _cluster/reroute?explain
{
  "commands": [
    {
      "allocate": {
        "index": "test",
        "shard": 1,
        "node": "node1"
      }
    },
    {
      "allocate": {
        "index": "test",
        "shard": 3,
        "node": "node1"
      }
    }
  ]
}

This will attempt to allocate the two unassigned replica shards of the test index (shards 1 and 3, whose primaries sit on node2). Also note the explain option: the response will detail why each command succeeded or failed, which is very handy for debugging when a command does fail.
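As a quick follow-up, here is a sketch of how to verify the result using the same endpoints shown above; the extra allocate command for the .kibana replica is an assumption based on the unassigned .kibana shard visible in the _cat/shards output, not something stated in the original answer.

GET _cluster/health?pretty
GET _cat/shards?v

POST _cluster/reroute?explain
{
  "commands": [
    {
      "allocate": {
        "index": ".kibana",
        "shard": 0,
        "node": "node1"
      }
    }
  ]
}

Once all three replicas (test shards 1 and 3, and .kibana shard 0) show STARTED on node1, unassigned_shards should drop to 0 and the cluster status should turn green.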

Regarding elasticsearch - ElasticSearch unassigned shards with two nodes (different machines), 1 master, both new instances, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/34619265/
