
apache-kafka - Impact of reducing max.poll.records in the Kafka consumer configuration

Reposted — author: 行者123, updated: 2023-12-05 01:33:48

I am writing a consumer application that picks records from a Kafka stream and processes them with spring-kafka. My processing steps are:

Get records from the stream --> dump them into a table --> fetch the records and call an API --> the API updates the records in the table --> call an async commit

It seems that in some cases, because more records are being fetched, the API processing takes longer and we run into the following errors:

Member consumer-prov-em-1-399ede46-9e12-4388-b5b8-f198a4e6a5bc sending LeaveGroup request to coordinator apslt2555.uhc.com:9095 (id: 2147483577 rack: null) due to consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.

org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.

I know this can be resolved by reducing max.poll.records or by increasing max.poll.interval.ms. What I want to understand is the poll() behavior if I set max.poll.records to 10: does it fetch 10 records from the stream and wait for those records to be committed before fetching the next 10? When does the next poll happen? And does reducing max.poll.records from the default 500 down to 10 also hurt performance?

Do I also have to increase max.poll.interval.ms — to roughly 10 minutes, say? Are there any downsides I should watch out for when changing these values? And besides these parameters, is there any other way to handle these errors?

Best Answer

max.poll.records enables a batch-consumption model in which records are collected in memory before being flushed to another system. The idea is to fetch a batch of records from Kafka in a single poll and then process them in memory inside the poll loop.
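To make the semantics concrete, here is a toy sketch in plain Java collections (not the real client API) of what max.poll.records controls: the consumer prefetches records into an internal buffer in the background, and each poll() simply hands back up to max.poll.records of them. Crucially, poll() does not wait for the previous batch to be committed before returning the next one.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Toy model of the consumer's fetch buffer: poll() drains up to
// maxPollRecords already-fetched records; commits do not gate the next poll.
class ToyConsumer {
    private final Queue<String> fetchBuffer = new ArrayDeque<>();
    private final int maxPollRecords;

    ToyConsumer(int maxPollRecords) {
        this.maxPollRecords = maxPollRecords;
    }

    // Simulates records arriving from the broker's background fetch responses.
    void receiveFromBroker(List<String> records) {
        fetchBuffer.addAll(records);
    }

    // Returns at most maxPollRecords records from the in-memory buffer.
    List<String> poll() {
        List<String> batch = new ArrayList<>();
        while (batch.size() < maxPollRecords && !fetchBuffer.isEmpty()) {
            batch.add(fetchBuffer.poll());
        }
        return batch;
    }
}
```

With max.poll.records set to 10 and 25 records already buffered, three consecutive polls would return 10, 10, and 5 records — no commit in between is required.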

If you reduce this number, the consumer will poll Kafka more frequently, which means more network round-trips. That can reduce the throughput of your Kafka stream processing.
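As a rough illustration (assuming each poll returns a full batch), the minimum number of poll round-trips needed to drain a backlog grows inversely with max.poll.records — it is just a ceiling division:

```java
// Minimum number of poll() calls needed to consume `backlog` records
// when each poll returns at most `maxPollRecords` (ceiling division).
class PollMath {
    static long pollsNeeded(long backlog, long maxPollRecords) {
        return (backlog + maxPollRecords - 1) / maxPollRecords;
    }
}
```

Draining a backlog of 5,000 records takes 10 polls at the default max.poll.records of 500, but 500 polls at max.poll.records of 10 — 50 times as many round-trips, each with its own overhead.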

max.poll.interval.ms controls the maximum time allowed between poll calls before the consumer proactively leaves the group. If you increase it, Kafka will take longer to detect a genuinely failed consumer. On the other hand, if it is too low, Kafka may wrongly mark live consumers as failed and trigger rebalances more often.
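For example, a consumer whose per-batch API processing can take several minutes might be configured with smaller batches and a longer poll interval. The property keys below are the standard Kafka consumer config names; the broker address, group id, and values are illustrative placeholders, not a recommendation:

```java
import java.util.Properties;

// Illustrative consumer configuration: smaller batches per poll plus a
// longer allowed poll interval, so slow downstream API calls do not
// push the consumer past max.poll.interval.ms and cause a rebalance.
class ConsumerConfigExample {
    static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("group.id", "my-consumer-group");         // placeholder group id
        props.put("max.poll.records", "10");                // fewer records per poll()
        props.put("max.poll.interval.ms", "600000");        // allow up to 10 min between polls
        props.put("enable.auto.commit", "false");           // commit manually after processing
        return props;
    }
}
```

Another option worth noting (beyond tuning these two parameters): hand the slow API work to a separate thread pool and keep the poll loop fast, pausing and resuming partitions while processing is in flight, so poll() keeps getting called well within the interval.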

Regarding apache-kafka - the impact of reducing max.poll.records in the Kafka consumer configuration, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/64364910/
