
memory - Elasticsearch memory issue - ES process consuming all RAM


We are having a problem with our production Elasticsearch cluster where, over time, Elasticsearch appears to be consuming all of the RAM on every server. Each box has 128 GB of RAM, so we run two instances per machine and allocate 30 GB to each JVM heap, leaving the remaining 68 GB for the OS and Lucene. We restarted every server last week, and each Elasticsearch process started out using 24% of RAM. Nearly a week later, memory consumption has climbed to around 40% per Elasticsearch instance. I have attached our configuration files in the hope that someone can help us figure out why Elasticsearch is exceeding the limits we set for memory utilization.
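To see whether the growth is actually JVM heap or something outside it, a minimal sketch against the nodes stats API could look like the following; it assumes the two local instances answer HTTP on ports 9200 and 9201 (only instance 0's port appears in the configs, so the second port is a guess).

# Minimal sketch (ports are assumptions): heap_used_in_bytes should stay
# well under heap_max_in_bytes (~30g) even while %MEM in top keeps climbing.
for port in 9200 9201; do
  curl -s "localhost:$port/_nodes/_local/stats/jvm?pretty" \
    | grep -E '"heap_used_in_bytes"|"heap_max_in_bytes"'
done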

We are currently running ES 1.3.2, but will be upgrading to 1.4.2 with our next release next week.

Here is the top output immediately after a restart (other fields removed for clarity):

  PID USER      %MEM    TIME+
 2178 elastics  24.1  1:03.49
 2197 elastics  24.3  1:07.32


And here is one from today:

  PID USER      %MEM    TIME+
 2178 elastics  40.5  2927:50
 2197 elastics  40.1  3000:44


elasticsearch-0.yml:

cluster.name: PROD  
node.name: "PROD6-0"
node.master: true
node.data: true
node.rack: PROD6
cluster.routing.allocation.awareness.force.rack.values: PROD4,PROD5,PROD6,PROD7,PROD8,PROD9,PROD10,PROD11,PROD12
cluster.routing.allocation.awareness.attributes: rack
node.max_local_storage_nodes: 2
path.data: /es_data1
path.logs: /var/log/elasticsearch
bootstrap.mlockall: true
transport.tcp.port: 9300
http.port: 9200
http.max_content_length: 400mb
gateway.recover_after_nodes: 17
gateway.recover_after_time: 1m
gateway.expected_nodes: 18
cluster.routing.allocation.node_concurrent_recoveries: 20
indices.recovery.max_bytes_per_sec: 200mb
discovery.zen.minimum_master_nodes: 10
discovery.zen.ping.timeout: 3s
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: XXX
index.search.slowlog.threshold.query.warn: 10s
index.search.slowlog.threshold.query.info: 5s
index.search.slowlog.threshold.query.debug: 2s
index.search.slowlog.threshold.fetch.warn: 1s
index.search.slowlog.threshold.fetch.info: 800ms
index.search.slowlog.threshold.fetch.debug: 500ms
index.indexing.slowlog.threshold.index.warn: 10s
index.indexing.slowlog.threshold.index.info: 5s
index.indexing.slowlog.threshold.index.debug: 2s
monitor.jvm.gc.young.warn: 1000ms
monitor.jvm.gc.young.info: 700ms
monitor.jvm.gc.young.debug: 400ms
monitor.jvm.gc.old.warn: 10s
monitor.jvm.gc.old.info: 5s
monitor.jvm.gc.old.debug: 2s
action.auto_create_index: .marvel-*
action.disable_delete_all_indices: true
indices.cache.filter.size: 10%
index.refresh_interval: -1
threadpool.search.type: fixed
threadpool.search.size: 48
threadpool.search.queue_size: 10000000
cluster.routing.allocation.cluster_concurrent_rebalance: 6
indices.store.throttle.type: none
index.reclaim_deletes_weight: 4.0
index.merge.policy.max_merge_at_once: 5
index.merge.policy.segments_per_tier: 5
marvel.agent.exporter.es.hosts: ["1.1.1.1:9200","1.1.1.1:9200"]
marvel.agent.enabled: true
marvel.agent.interval: 30s
script.disable_dynamic: false
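
Since the memory behaviour depends on bootstrap.mlockall and the 30g heap actually taking effect, a hedged sanity check against the nodes info API (same port assumption as above) could be:

# Sketch: "mlockall" should report true and heap_max_in_bytes should be close
# to the configured 30g; 9200 is instance 0, the second instance's port is assumed.
curl -s "localhost:9200/_nodes/_local/process,jvm?pretty" \
  | grep -E '"mlockall"|"heap_max_in_bytes"'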


Here is /etc/sysconfig/elasticsearch-0:

# Directory where the Elasticsearch binary distribution resides
ES_HOME=/usr/share/elasticsearch
# Heap Size (defaults to 256m min, 1g max)
ES_HEAP_SIZE=30g
# Heap new generation
#ES_HEAP_NEWSIZE=
# max direct memory
#ES_DIRECT_SIZE=
# Additional Java OPTS
#ES_JAVA_OPTS=
# Maximum number of open files
MAX_OPEN_FILES=65535
# Maximum amount of locked memory
MAX_LOCKED_MEMORY=unlimited
# Maximum number of VMA (Virtual Memory Areas) a process can own
MAX_MAP_COUNT=262144
# Elasticsearch log directory
LOG_DIR=/var/log/elasticsearch
# Elasticsearch data directory
DATA_DIR=/es_data1
# Elasticsearch work directory
WORK_DIR=/tmp/elasticsearch
# Elasticsearch conf directory
CONF_DIR=/etc/elasticsearch
# Elasticsearch configuration file (elasticsearch.yml)
CONF_FILE=/etc/elasticsearch/elasticsearch-0.yml
# User to run as, change this to a specific elasticsearch user if possible
# Also make sure, this user can write into the log directories in case you change them
# This setting only works for the init script, but has to be configured separately for systemd startup
ES_USER=elasticsearch
# Configure restart on package upgrade (true, every other setting will lead to not restarting)
#RESTART_ON_UPGRADE=true


Please let me know if I can provide any other data. Thanks in advance for your help.

             total       used       free     shared    buffers     cached
Mem:        129022     119372       9650          0        219      46819
-/+ buffers/cache:      72333      56689
Swap:        28603          0      28603

Best Answer

What you are seeing is not the heap blowing out; the heap will always be bounded by what you set in the configuration. free -m and top report OS-level usage, so what is being consumed there is most likely the OS caching filesystem calls.
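Reading the free -m output above in that light (simple arithmetic on the numbers already shown):

119372 MB "used"  ≈  72333 MB for processes + 219 MB buffers + 46819 MB page cache
 56689 MB on the "-/+ buffers/cache" line is what remains effectively available

So roughly 46 GB of what appears as "used" memory is reclaimable filesystem cache rather than heap.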

This will not cause a Java OOM.

If you are hitting Java OOMs that are directly related to running out of Java heap space, then something else is at play. Your logs may provide some information around that.
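If it ever does come to that, a simple first pass over the logs (file name assumed from the LOG_DIR and cluster.name shown above; with two instances per box the per-instance log layout may differ) would be:

grep -i 'OutOfMemoryError' /var/log/elasticsearch/PROD.log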

Regarding memory - Elasticsearch memory issue - ES process consuming all RAM, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/28370821/
