
elasticsearch - Interpreting ElasticSearch out-of-memory errors

Reposted · Author: 行者123 · Updated: 2023-12-02 22:32:51

First, this is a two-node cluster, and each node is started with "-Xms256m -Xmx1g -Xss256k" (which is really bad, considering the machines have 8G of RAM).
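As a side note, the standard advice for Elasticsearch of that era (1.x) was to give the JVM heap roughly half of physical RAM, keep `-Xms` equal to `-Xmx`, and stay below ~31 GB so the JVM can keep using compressed object pointers; in 1.x this was typically done via the `ES_HEAP_SIZE` environment variable. A minimal sketch of that rule of thumb, using the 8 GB machines from the question (the helper function is mine, not an Elasticsearch API):

```python
def recommended_heap_mb(ram_mb: int, ceiling_mb: int = 31 * 1024) -> int:
    """Rule of thumb: give the JVM heap ~half of physical RAM,
    capped around 31 GB to preserve compressed object pointers."""
    return min(ram_mb // 2, ceiling_mb)

# The machines in the question have 8 GB of RAM but were started with -Xmx1g.
print(recommended_heap_mb(8 * 1024))  # 4096 -> e.g. ES_HEAP_SIZE=4g
```

With 8 GB machines, that works out to about a 4 GB heap per node instead of the 1 GB ceiling the cluster was running with.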

[2015-04-07 16:19:58,235][INFO ][monitor.jvm              ] [NODE1] [gc][ParNew][3246454][64605] duration [822ms], collections [1]/[4.3s], total [822ms]/[21m], memory [966.1mb]->[766.9mb]/[990.7mb], all_pools {[Code Cache] [13.1mb]->[13.1mb]/[48mb]}{[Par Eden Space] [266.2mb]->[75.6mb]/[266.2mb]}{[Par Survivor Space] [8.9mb]->[0b]/[33.2mb]}{[CMS Old Gen] [690.8mb]->[691.2mb]/[691.2mb]}{[CMS Perm Gen] [33.6mb]->[33.6mb]/[82mb]}
[2015-04-07 16:28:02,550][WARN ][transport.netty ] [NODE1] exception caught on netty layer [[id: 0x03d14f1c, /10.0.6.100:36055 => /10.0.6.105:9300]]
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.PriorityQueue.initialize(PriorityQueue.java:108)
at org.elasticsearch.search.controller.ScoreDocQueue.<init>(ScoreDocQueue.java:32)
....
[2015-04-07 21:55:54,743][WARN ][transport.netty ] [NODE1] exception caught on netty layer [[id: 0xeea0018c, /10.0.6.100:36059 => /10.0.6.105:9300]]
java.lang.OutOfMemoryError: Java heap space
[2015-04-07 21:59:26,774][WARN ][transport.netty ] [NODE1] exception caught on netty layer [[id: 0x576557fa, /10.0.6.100:36054 => /10.0.6.105:9300]]
...
[2015-04-07 22:51:05,890][WARN ][transport.netty ] [NODE1] exception caught on netty layer [[id: 0x67f11ffe, /10.0.6.100:36052 => /10.0.6.105:9300]]
org.elasticsearch.common.netty.handler.codec.frame.TooLongFrameException: transport content length received [1.5gb] exceeded [891.6mb]
[2015-04-07 22:51:05,973][WARN ][cluster.action.shard ] [NODE1] sending failed shard for [test_index][15], node[xvpLmlJkRSmZNj-pa_xUNA], [P], s[STARTED], reason [engine failure, message [OutOfMemoryError[Java heap space]]]

Then, after it rejoined (I restarted node 105):
[2015-04-07 22:59:11,095][INFO ][cluster.service          ] [NODE1] removed {[NODE2][GMBDo5K7RMGSgiIwZE7H8w][inet[/10.0.6.105:9300]],}, reason: zen-disco-node_failed([NODE7][GMBDo5K7RMGSgiIwZE7H8w][inet[/10.0.6.105:9300]]), reason transport disconnected (with verified connect)
[2015-04-07 22:59:30,954][INFO ][cluster.service ] [NODE1] added {[NODE2][mMWcFGhVQY-aBR2r9DO3_A][inet[/10.0.6.105:9300]],}, reason: zen-disco-receive(join from node[[NODE7][mMWcFGhVQY-aBR2r9DO3_A][inet[/10.0.6.105:9300]]])
[2015-04-07 23:11:39,717][WARN ][transport.netty ] [NODE1] exception caught on netty layer [[id: 0x14a605ce, /10.0.6.100:36201 => /10.0.6.105:9300]]
java.lang.OutOfMemoryError: Java heap space
[2015-04-07 23:16:04,963][WARN ][transport.netty ] [NODE1] exception caught on netty layer [[id: 0x5a6d934d, /10.0.6.100:36196 => /10.0.6.105:9300]]
java.lang.OutOfMemoryError: Java heap space

So I don't know how to interpret the "=>" part. Who actually ran out of memory? NODE1 (10.0.6.100)? And why port 9300? My API initially talks to NODE1, so in this case does that mean NODE1 sent a bulk data request to NODE2? Here is what happened the next day.

From the NODE1 log:
[2015-04-08 09:02:46,410][INFO ][cluster.service          ] [NODE1] removed {[NODE2][mMWcFGhVQY-aBR2r9DO3_A][inet[/10.0.6.105:9300]],}, reason: zen-disco-node_failed([NODE2][mMWcFGhVQY-aBR2r9DO3_A][inet[/10.0.6.105:9300]]), reason failed to ping, tried [3] times, each with maximum [30s] timeout
[2015-04-08 09:03:27,554][WARN ][search.action ] [NODE1] Failed to send release search context
org.elasticsearch.transport.NodeDisconnectedException: [NODE2][inet[/10.0.6.105:9300]][search/freeContext] disconnected
....
Caused by: org.elasticsearch.transport.NodeNotConnectedException: [NODE2][inet[/10.0.6.105:9300]] Node not connected

But in the NODE2 log there are only a few lines from 04-08, like this:
[2015-04-08 09:09:13,797][INFO ][discovery.zen            ] [NODE2] master_left [[NDOE1][xvpLmlJkRSmZNj-pa_xUNA][inet[/10.0.6.100:9300]]], reason [do not exists on master, act as master failure]

So who actually failed? I'm confused here :| Sorry. Any help is appreciated. I do know NODE1's GC runs were very long (MarkSweep over 3 hours, until my two-node cluster was fully restarted last night).

Best Answer

The first part of the log is Elasticsearch's garbage-collection logging format:

[2015-04-07 16:19:58,235][INFO][monitor.jvm][NODE1]
  • A garbage-collection run
    [gc] 
  • The parallel new-generation collector
    [ParNew]
  • The GC took 822ms
    duration [822ms], 
  • One collection ran, within a 4.3-second window
    collections [1]/[4.3s]
  • Usage figures for the "memory" pool: previously 966.1mb, now 766.9mb, out of a total pool size of 990.7mb
    memory [966.1mb]->[766.9mb]/[990.7mb], 
  • Usage figures for the "Code Cache" pool
    [Code Cache] [13.1mb]->[13.1mb]/[48mb]
  • Usage figures for the "Par Eden Space" pool
    [Par Eden Space] [266.2mb]->[75.6mb]/[266.2mb]
  • Usage figures for the "Par Survivor Space" pool
    [Par Survivor Space] [8.9mb]->[0b]/[33.2mb]
  • Usage figures for the "CMS Old Gen" pool
    [CMS Old Gen] [690.8mb]->[691.2mb]/[691.2mb]
  • Usage figures for the "CMS Perm Gen" pool
    [CMS Perm Gen] [33.6mb]->[33.6mb]/[82mb]

  • As you may have noticed, your memory pool is just under 1G. I hope that gives you the hint!
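To see why `[CMS Old Gen] [690.8mb]->[691.2mb]/[691.2mb]` is the smoking gun, you can pull the `before->after/max` triplets out of that log line: a pool whose post-GC value equals its capacity has nothing left to reclaim, which is exactly the state that precedes `OutOfMemoryError: Java heap space`. A small sketch of that check against the line from the question (the regex and function names are mine, not part of Elasticsearch):

```python
import re

# The "all_pools" portion of the GC line from the NODE1 log above.
GC_LINE = ("memory [966.1mb]->[766.9mb]/[990.7mb], all_pools "
           "{[Code Cache] [13.1mb]->[13.1mb]/[48mb]}"
           "{[Par Eden Space] [266.2mb]->[75.6mb]/[266.2mb]}"
           "{[Par Survivor Space] [8.9mb]->[0b]/[33.2mb]}"
           "{[CMS Old Gen] [690.8mb]->[691.2mb]/[691.2mb]}"
           "{[CMS Perm Gen] [33.6mb]->[33.6mb]/[82mb]}")

# Matches: [pool name] [before][unit]->[after][unit]/[capacity][unit]
POOL_RE = re.compile(
    r"\[([^\]]+)\] \[([\d.]+)(mb|b)\]->\[([\d.]+)(mb|b)\]/\[([\d.]+)(mb|b)\]")

def full_pools(line: str, threshold: float = 0.99) -> list:
    """Return pool names whose post-GC usage is at/above `threshold` of capacity."""
    names = []
    for name, _, _, after, after_unit, cap, cap_unit in POOL_RE.findall(line):
        after_mb = float(after) if after_unit == "mb" else float(after) / (1024 * 1024)
        cap_mb = float(cap) if cap_unit == "mb" else float(cap) / (1024 * 1024)
        if cap_mb and after_mb / cap_mb >= threshold:
            names.append(name)
    return names

print(full_pools(GC_LINE))  # ['CMS Old Gen'] -- the old generation is completely full
```

With a 1 GB heap ceiling, the old generation tops out at ~691 MB and stays pinned there even after collection, so the next sizable allocation (here, building a search `PriorityQueue`) fails with the heap-space error seen on the netty layer.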

    Regarding "elasticsearch - Interpreting ElasticSearch out-of-memory errors", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/29564425/
