apache-kafka - Kafka Connect enters a rebalance loop


I just deployed my Kafka Connect application (I only use a source connector for MQTT) on a two-instance cluster (2 containers on 2 machines), and it now seems to be stuck in a rebalance loop: I got some data at the start, but no new data is showing up. This is what I get in the logs.

[2017-08-11 07:27:35,810] INFO Joined group and got assignment: Assignment{error=0, leader='connect-1-592bcc91-9d99-4c54-b707-3f52d0f8af50', leaderUrl='http://10.120.233.78:9040/', offset=2, connectorIds=[SourceConnector1], taskIds=[]} (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1009)
[2017-08-11 07:27:35,810] WARN Catching up to assignment's config offset. (org.apache.kafka.connect.runtime.distributed.DistributedHerder:679)
[2017-08-11 07:27:35,810] INFO Current config state offset 1 is behind group assignment 2, reading to end of config log (org.apache.kafka.connect.runtime.distributed.DistributedHerder:723)
[2017-08-11 07:27:36,310] INFO Finished reading to end of log and updated config snapshot, new config log offset: 1 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:727)
[2017-08-11 07:27:36,310] INFO Current config state offset 1 does not match group assignment 2. Forcing rebalance. (org.apache.kafka.connect.runtime.distributed.DistributedHerder:703)
[2017-08-11 07:27:36,311] INFO Rebalance started (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1030)
[2017-08-11 07:27:36,311] INFO Wasn't unable to resume work after last rebalance, can skip stopping connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1056)
[2017-08-11 07:27:36,311] INFO (Re-)joining group source-connector11234 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:381)
[2017-08-11 07:27:36,315] INFO Successfully joined group source-connector11234 with generation 28 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:349)
[2017-08-11 07:27:36,317] INFO Joined group and got assignment: Assignment{error=0, leader='connect-1-592bcc91-9d99-4c54-b707-3f52d0f8af50', leaderUrl='http://10.120.233.78:9040/', offset=2, connectorIds=[SourceConnector1], taskIds=[]} (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1009)
[2017-08-11 07:27:36,317] WARN Catching up to assignment's config offset. (org.apache.kafka.connect.runtime.distributed.DistributedHerder:679)
[2017-08-11 07:27:36,317] INFO Current config state offset 1 is behind group assignment 2, reading to end of config log (org.apache.kafka.connect.runtime.distributed.DistributedHerder:723)
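
For context, all workers that are meant to form one distributed Connect cluster must share the same group.id and the same internal config/offset/status storage topics. A minimal connect-distributed.properties for such a setup might look roughly like this (the broker address, topic names and port are illustrative, not taken from the question):

bootstrap.servers=kafka-broker:9092
# every worker in the same Connect cluster must use the same group.id
group.id=mqtt-connect-cluster
# internal topics shared by all workers; the config topic must be a single-partition, compacted topic
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# port for this worker's REST API
rest.port=9040

Each worker is then started with bin/connect-distributed.sh connect-distributed.properties, and the connector itself is created through the REST API of any worker.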

Best answer

I ran into a similar problem running two separate containers on a Mesos cluster. The fix I ended up with is an ugly one that doesn't seem to be documented anywhere:

Use an odd number of containers!

Some distributed systems rely on their workers to elect a leader. With only two, each one votes for the other and they get stuck in a loop, which seems to be what is happening here as well.
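
The usual quorum arithmetic behind that advice: with two members, a majority needs 2 of 2 votes, so a 1-1 split can never resolve; with three members, a majority needs only 2 of 3, so any split still elects a leader. (This is the general leader-election argument, not a statement about Kafka Connect's own group protocol.)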

Regarding apache-kafka - Kafka Connect enters a rebalance loop, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/45635755/
