
Elasticsearch: too many open files

Reposted. Author: 行者123. Updated: 2023-12-02 23:00:03

Question: I have 5 nodes (1x master, 1x client, 3x data), all running in the same cluster. After uploading a large dataset, I hit the following exception:

[2016-04-18 09:00:24,907][INFO ][node                     ] [Human Torch II] version[2.2.0], pid[68278], build[8ff36d1/2016-01-27T13:32:39Z]
[2016-04-18 09:00:24,908][INFO ][node ] [Human Torch II] initializing ...
[2016-04-18 09:00:25,483][INFO ][plugins ] [Human Torch II] modules [lang-expression, lang-groovy], plugins [], sites []
[2016-04-18 09:00:25,530][INFO ][env ] [Human Torch II] using [1] data paths, mounts [[/ (/dev/disk1)]], net usable_space [352.6gb], net total_space [464.8gb], spins? [unknown], types [hfs]
[2016-04-18 09:00:25,530][INFO ][env ] [Human Torch II] heap size [1.9gb], compressed ordinary object pointers [true]
[2016-04-18 09:00:28,200][INFO ][node ] [Human Torch II] initialized
[2016-04-18 09:00:28,200][INFO ][node ] [Human Torch II] starting ...
[2016-04-18 09:00:28,322][INFO ][transport ] [Human Torch II] publish_address {127.0.0.1:9300}, bound_addresses {[fe80::1]:9300}, {[::1]:9300}, {127.0.0.1:9300}
[2016-04-18 09:00:28,329][INFO ][discovery ] [Human Torch II] TEST/xSxhxmpYQ9SPk4Ux8SufpQ
[2016-04-18 09:00:31,357][INFO ][cluster.service ] [Human Torch II] new_master {Human Torch II}{xSxhxmpYQ9SPk4Ux8SufpQ}{127.0.0.1}{127.0.0.1:9300}{master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-04-18 09:00:31,371][INFO ][http ] [Human Torch II] publish_address {127.0.0.1:9200}, bound_addresses {[fe80::1]:9200}, {[::1]:9200}, {127.0.0.1:9200}
[2016-04-18 09:00:31,371][INFO ][node ] [Human Torch II] started
[2016-04-18 09:00:31,740][INFO ][gateway ] [Human Torch II] recovered [128] indices into cluster_state
[2016-04-18 09:00:50,810][INFO ][cluster.service ] [Human Torch II] added {{Xi'an Chi Xan}{OQjiTz-sR0Wcg8yIYnbSBA}{127.0.0.1}{127.0.0.1:9301}{data=false, master=false},}, reason: zen-disco-join(join from node[{Xi'an Chi Xan}{OQjiTz-sR0Wcg8yIYnbSBA}{127.0.0.1}{127.0.0.1:9301}{data=false, master=false}])
[2016-04-18 09:00:56,049][INFO ][cluster.service ] [Human Torch II] added {{Riot}{VZQyBWSxS_W3H33_Xpx7kw}{127.0.0.1}{127.0.0.1:9302}{master=false},}, reason: zen-disco-join(join from node[{Riot}{VZQyBWSxS_W3H33_Xpx7kw}{127.0.0.1}{127.0.0.1:9302}{master=false}])
[2016-04-18 09:01:01,727][INFO ][cluster.service ] [Human Torch II] added {{Topaz}{SShnnKN7SHKaxBGmn3TCig}{127.0.0.1}{127.0.0.1:9303}{master=false},}, reason: zen-disco-join(join from node[{Topaz}{SShnnKN7SHKaxBGmn3TCig}{127.0.0.1}{127.0.0.1:9303}{master=false}])
[2016-04-18 09:01:15,400][INFO ][cluster.service ] [Human Torch II] added {{Moondark}{j9oCYfm_TbW0cdEciwyBhQ}{127.0.0.1}{127.0.0.1:9304}{master=false},}, reason: zen-disco-join(join from node[{Moondark}{j9oCYfm_TbW0cdEciwyBhQ}{127.0.0.1}{127.0.0.1:9304}{master=false}])
[2016-04-18 09:01:30,174][WARN ][cluster.action.shard ] [Human Torch II] [logstash-2015.09.26][0] received shard failed for [logstash-2015.09.26][0], node[j9oCYfm_TbW0cdEciwyBhQ], [P], v[17], s[INITIALIZING], a[id=p6bW6TXYS9yJGiWpUbDkrg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-04-18T07:00:31.474Z]], indexUUID [xgsq0ZPVQ5OIdadydVB9rA], message [failed recovery], failure [IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to open reader on writer]; nested: NotSerializableExceptionWrapper[/Users/Desktop/elasticsearch-2.2.0Data3/data/TEST/nodes/0/indices/logstash-2015.09.26/0/index/_0.si: Too many open files in system]; ]
[logstash-2015.09.26][[logstash-2015.09.26][0]] IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to open reader on writer]; nested: NotSerializableExceptionWrapper[/Users/Desktop/elasticsearch-2.2.0Data3/data/TEST/nodes/0/indices/logstash-2015.09.26/0/index/_0.si: Too many open files in system];
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:254)
at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)
at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: [logstash-2015.09.26][[logstash-2015.09.26][0]] EngineCreationFailureException[failed to open reader on writer]; nested: NotSerializableExceptionWrapper[/Users/Desktop/elasticsearch-2.2.0Data3/data/TEST/nodes/0/indices/logstash-2015.09.26/0/index/_0.si: Too many open files in system];
at org.elasticsearch.index.engine.InternalEngine.createSearcherManager(InternalEngine.java:308)
at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:167)
at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)
at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1450)
at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1434)
at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:925)
at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:897)
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:245)
... 5 more
Caused by: NotSerializableExceptionWrapper[/Users/Desktop/elasticsearch-2.2.0Data3/data/TEST/nodes/0/indices/logstash-2015.09.26/0/index/_0.si: Too many open files in system]
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
at java.nio.channels.FileChannel.open(FileChannel.java:287)
at java.nio.channels.FileChannel.open(FileChannel.java:335)
at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:82)
at org.apache.lucene.store.FileSwitchDirectory.openInput(FileSwitchDirectory.java:186)
at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:109)
at org.apache.lucene.codecs.lucene50.Lucene50SegmentInfoFormat.read(Lucene50SegmentInfoFormat.java:82)
at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:362)
at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:493)
at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:490)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:731)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:683)
at org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.java:490)
at org.elasticsearch.common.lucene.Lucene.readSegmentInfos(Lucene.java:95)
at org.elasticsearch.index.store.Store.readSegmentsInfo(Store.java:163)
at org.elasticsearch.index.store.Store.readLastCommittedSegmentsInfo(Store.java:148)
at org.elasticsearch.index.engine.Engine.readLastCommittedSegmentInfos(Engine.java:349)
at org.elasticsearch.index.engine.InternalEngine.createSearcherManager(InternalEngine.java:298)
... 12 more
Suppressed: NotSerializableExceptionWrapper[/Users/Desktop/elasticsearch-2.2.0Data3/data/TEST/nodes/0/indices/logstash-2015.09.26/0/index/_0.si: Too many open files in system]
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
at java.nio.channels.FileChannel.open(FileChannel.java:287)
at java.nio.channels.FileChannel.open(FileChannel.java:335)
at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:82)
at org.apache.lucene.store.FileSwitchDirectory.openInput(FileSwitchDirectory.java:186)
at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:109)
at org.apache.lucene.codecs.lucene50.Lucene50SegmentInfoFormat.read(Lucene50SegmentInfoFormat.java:82)
at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:362)
at org.elasticsearch.common.lucene.Lucene.readSegmentInfos(Lucene.java:128)
at org.elasticsearch.index.engine.Engine.readLastCommittedSegmentInfos(Engine.java:345)
... 13 more

I can no longer start Elasticsearch. So, my questions:

  1. Is there a limit on how much data can be uploaded?
  2. I tried to increase the maximum number of open files with sudo ulimit -n 65535, but it did not work. Is this the actual problem?
  3. What is the best way to handle very large data sets?
  4. Could the heap size be the cause of the exception?

Update: curl -s -XGET 'localhost:9200/_cat/nodes?v&h=ip,fdc,fdm'

ip         fdc  fdm
127.0.0.1  2588 9000
127.0.0.1  1942 9000
127.0.0.1  1896 9000
127.0.0.1  2823 9000
127.0.0.1   338 9000

Thanks for your help :)

Best answer

OK, you have 5 nodes on the same host, and each can open at most 9000 files. If you sum the fdc (file descriptors currently open) column, you get 2588 + 1942 + 1896 + 2823 + 338 = 9587, which is above that number, hence the error.

To see how many maximum open files your ES process is configured with, you can start it with -Des.max-open-files=true, and the startup log will report how many open files the process is allowed.
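A minimal sketch of that startup command for an Elasticsearch 2.x tarball installation (the installation path below is an assumption; adjust it to your layout):

```shell
# Start Elasticsearch 2.x with the diagnostic flag; at startup the log
# prints the maximum number of open files the process may use.
# The path ./elasticsearch-2.2.0 is an assumed install location.
./elasticsearch-2.2.0/bin/elasticsearch -Des.max-open-files=true
```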

Check here and here (depending on which Linux distribution you have) for how to configure that setting on your distribution, but you will probably also need to tweak /etc/security/limits.conf.
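A minimal sketch of checking and raising the limit (the 65535 value and the "elasticsearch" user name are assumptions). Note that the asker's sudo ulimit -n 65535 attempt fails because ulimit is a shell builtin, not an external program, so it must be run in the same shell that launches Elasticsearch:

```shell
# Show the current per-process limit on open file descriptors
ulimit -n

# Raise it for this shell session only; going above the hard limit
# requires root, so this may fail for a regular user
ulimit -n 65535 2>/dev/null || echo "raising beyond the hard limit needs root"

# To make the limit permanent, add lines like these to
# /etc/security/limits.conf (user name "elasticsearch" is an assumption):
#   elasticsearch  soft  nofile  65535
#   elasticsearch  hard  nofile  65535

# Verify what each node actually sees (requires a running cluster)
curl -s 'localhost:9200/_cat/nodes?v&h=ip,fdc,fdm' || echo "cluster not reachable"
```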

Regarding "Elasticsearch: too many open files", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/36688798/
