apache-kafka - Kafka: too many open files

Reposted · Author: 行者123 · Updated: 2023-12-05 08:26:41

Have you ever run into a similar problem with Kafka? I'm getting this error: Too many open files, and I don't know why. Here are some of the logs:

[2018-08-27 10:07:26,268] ERROR Error while deleting the clean shutdown file in dir /home/weihu/kafka/kafka/logs (kafka.server.LogD)
java.nio.file.FileSystemException: /home/weihu/kafka/kafka/logs/BC_20180821_1_LOCATION-87/leader-epoch-checkpoint: Too many open files
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.createFile(Files.java:632)
at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:45)
at kafka.server.checkpoints.LeaderEpochCheckpointFile.<init>(LeaderEpochCheckpointFile.scala:62)
at kafka.log.Log.initializeLeaderEpochCache(Log.scala:278)
at kafka.log.Log.<init>(Log.scala:211)
at kafka.log.Log$.apply(Log.scala:1748)
at kafka.log.LogManager.loadLog(LogManager.scala:265)
at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:335)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2018-08-27 10:07:26,268] ERROR Error while deleting the clean shutdown file in dir /home/weihu/kafka/kafka/logs (kafka.server.LogD)
java.nio.file.FileSystemException: /home/weihu/kafka/kafka/logs/BC_20180822_PARSE-136/leader-epoch-checkpoint: Too many open files
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.createFile(Files.java:632)
at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:45)
at kafka.server.checkpoints.LeaderEpochCheckpointFile.<init>(LeaderEpochCheckpointFile.scala:62)
at kafka.log.Log.initializeLeaderEpochCache(Log.scala:278)
at kafka.log.Log.<init>(Log.scala:211)
at kafka.log.Log$.apply(Log.scala:1748)
at kafka.log.LogManager.loadLog(LogManager.scala:265)
at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:335)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2018-08-27 10:07:26,269] ERROR Error while deleting the clean shutdown file in dir /home/weihu/kafka/kafka/logs (kafka.server.LogD)
java.nio.file.FileSystemException: /home/weihu/kafka/kafka/logs/BC_20180813_1_STATISTICS-402/leader-epoch-checkpoint: Too many open files
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.createFile(Files.java:632)
at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:45)
at kafka.server.checkpoints.LeaderEpochCheckpointFile.<init>(LeaderEpochCheckpointFile.scala:62)
at kafka.log.Log.initializeLeaderEpochCache(Log.scala:278)
at kafka.log.Log.<init>(Log.scala:211)
at kafka.log.Log$.apply(Log.scala:1748)
at kafka.log.LogManager.loadLog(LogManager.scala:265)
at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:335)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Best Answer

In Kafka, every topic is (optionally) split into many partitions. For each partition, the broker maintains a number of files (for the indexes and for the actual data).
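
For illustration, here is roughly what the broker keeps on disk for a single partition (a hypothetical listing; the path and topic name are taken from the stack traces above, and the exact file set depends on the broker version and segment settings):

ls /home/weihu/kafka/kafka/logs/BC_20180822_PARSE-136/
# 00000000000000000000.log        message data for one segment
# 00000000000000000000.index      offset index for that segment
# 00000000000000000000.timeindex  timestamp index for that segment
# leader-epoch-checkpoint         the checkpoint file from the stack traces

A long-lived partition accumulates one .log/.index/.timeindex triple per segment, so the file count grows with retention.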

kafka-topics --zookeeper localhost:2181 --describe --topic topic_name

will give you the number of partitions for topic topic_name. The default number of partitions per topic, num.partitions, is defined in /etc/kafka/server.properties.
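
For reference, the setting looks like this in that file (1 is the default value Kafka ships with):

# /etc/kafka/server.properties (excerpt)
# Default partition count for automatically created topics.
num.partitions=1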

If a broker hosts many partitions and a given partition has many log segment files, the total number of open files can be very large.
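
As a rough sanity check (using the log directory from the stack traces above), you can count how many files live under the broker's data directory, since the broker opens most of them at startup:

find /home/weihu/kafka/kafka/logs -type f | wc -l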

You can check the current file descriptor limit by running:

ulimit -n

You can also use lsof to check the number of open files:

lsof | wc -l
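
Note that lsof | wc -l counts open files across all processes, so it overstates what the broker itself holds. To look at the broker alone, restrict lsof to its PID; a sketch, assuming the broker was started with the standard scripts so that its JVM main class is kafka.Kafka:

# Find the broker's PID via its main class, then count its descriptors.
lsof -p "$(pgrep -f kafka.Kafka)" | wc -l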

To fix this, you need to either raise the limit on open file descriptors:

ulimit -n <noOfFiles>

or reduce the number of open files somehow (for example, by reducing the number of partitions per topic).
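
If you raise the limit, keep in mind that ulimit -n only affects the current shell session. To make the change permanent, raise the limit for the user that runs the broker; a sketch, assuming that user is named kafka (adjust the name and value to your setup):

# /etc/security/limits.conf (excerpt)
# Raise the soft and hard open-file limits for the broker's user.
kafka  soft  nofile  100000
kafka  hard  nofile  100000

If the broker runs as a systemd service, set LimitNOFILE=100000 under [Service] in its unit file instead, since systemd services do not read limits.conf.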

Regarding apache-kafka - Kafka: too many open files, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52032237/
