apache-kafka - Getting error: Batch containing 3 record(s) expired due to timeout while requesting metadata from brokers for test2R2P2-1


I am getting the following error while running a producer client that takes its messages from the input file kafka_message.log. The log file is filled at a rate of 100000 records per second, each message 4096 bytes long.

The error:

[2017-01-09 14:45:24,813] ERROR Error when sending message to topic test2R2P2 with key: null, value: 4096 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Batch containing 3 record(s) expired due to timeout while requesting metadata from brokers for test2R2P2-0
[2017-01-09 14:45:24,816] ERROR Error when sending message to topic test2R2P2 with key: null, value: 4096 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Batch containing 3 record(s) expired due to timeout while requesting metadata from brokers for test2R2P2-0
[2017-01-09 14:45:24,816] ERROR Error when sending message to topic test2R2P2 with key: null, value: 4096 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Batch containing 3 record(s) expired due to timeout while requesting metadata from brokers for test2R2P2-0
[2017-01-09 14:45:24,816] ERROR Error when sending message to topic test2R2P2 with key: null, value: 4096 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Batch containing 3 record(s) expired due to timeout while requesting metadata from brokers for test2R2P2-0
[2017-01-09 14:45:24,816] ERROR Error when sending message to topic test2R2P2 with key: null, value: 4096 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Batch containing 3 record(s) expired due to timeout while requesting metadata from brokers for test2R2P2-0

The command I ran:

$ bin/kafka-console-producer.sh --broker-list x.x.x.x:xxxx,x.x.x.x:xxxx --batch-size 1000 --message-send-max-retries 10 --request-required-acks 1 --topic test2R2P2 <~/kafka_message.log

There are 2 brokers running, and the topic has partitions = 2 and replication factor = 2.
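
For reference, a topic with this layout would typically have been created along these lines (a sketch only; the ZooKeeper address and chroot are assumptions mirroring the describe command further down):

$ bin/kafka-topics.sh --zookeeper x.x.x.x:2181/kafka-framework --create --topic test2R2P2 --partitions 2 --replication-factor 2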

Can someone help me understand what this error means? I am also seeing messages being lost, i.e. not all the messages from the input file make it into the topic.

On a separate note: when I run kafka-producer-perf-test.sh and kill one of the brokers (in a 3-node cluster) while the test is running, I see data loss. Is this expected behavior? I see the same result across multiple tests.

The commands I ran:

Describe the topic:

$ bin/kafka-topics.sh --zookeeper x.x.x.x:2181/kafka-framework --describe | grep test4
Topic:test4R2P2 PartitionCount:2 ReplicationFactor:2 Configs:
Topic: test4R2P2 Partition: 0 Leader: 0 Replicas: 1,0 Isr: 0,1
Topic: test4R2P2 Partition: 1 Leader: 0 Replicas: 0,1 Isr: 0,1

Run the performance test:

$ bin/kafka-producer-perf-test.sh --num-records 100000 --record-size 4096  --throughput 1000  --topic test4R2P2 --producer-props bootstrap.servers=x.x.x.x:xxxx,x.x.x.x:xxxx

Consumer command:

$ bin/kafka-console-consumer.sh --zookeeper x.x.x.x:2181/kafka-framework --topic test4R2P2 1>~/kafka_message.log

Check the message count:

$ wc -l ~/kafka_message.log
399418 /home/montana/kafka_message.log

I see only 399418 messages in topic test4R2P2, even though I put in a total of 400000 messages by running the perf test 4 times (4 × 100000 records), so roughly 582 messages appear to have been lost.
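
To cross-check that count independently of the console consumer, the end offsets can also be queried directly from the brokers (a sketch, using the same masked broker addresses):

$ bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list x.x.x.x:xxxx,x.x.x.x:xxxx --topic test4R2P2 --time -1

Summing the per-partition end offsets it prints gives the total number of messages actually stored in the topic.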

Exceptions thrown by the perf command:

org.apache.kafka.common.errors.NetworkException: The server disconnected before a response was received.
org.apache.kafka.common.errors.NetworkException: The server disconnected before a response was received.

Exceptions thrown by the consumer command:

[2017-01-10 07:40:07,246] WARN [ConsumerFetcherThread-console-consumer-46599_node-44a8422fe1a0-1484033822261-f07d33d7-0-1], Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest@695be565 (kafka.consumer.ConsumerFetcherThread)
[2017-01-10 07:40:07,472] WARN Fetching topic metadata with correlation id 1 for topics [Set(test4R2P2)] from broker [BrokerEndPoint(1,10.105.26.1,31052)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
[2017-01-10 07:42:23,073] WARN [ConsumerFetcherThread-console-consumer-46599_node-44a8422fe1a0-1484033822261-f07d33d7-0-0], Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest@7bd94073 (kafka.consumer.ConsumerFetcherThread)
[2017-01-10 07:44:58,195] WARN [ConsumerFetcherThread-console-consumer-46599_node-44a8422fe1a0-1484033822261-f07d33d7-0-1], Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest@2855ee73 (kafka.consumer.ConsumerFetcherThread)
[2017-01-10 07:44:58,404] WARN Fetching topic metadata with correlation id 3 for topics [Set(test4R2P2)] from broker [BrokerEndPoint(1,10.105.26.1,31052)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
[2017-01-10 07:45:47,127] WARN [ConsumerFetcherThread-console-consumer-46599_node-44a8422fe1a0-1484033822261-f07d33d7-0-0], Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest@f8887da (kafka.consumer.ConsumerFetcherThread)
[2017-01-10 07:50:56,291] ERROR [ConsumerFetcherThread-console-consumer-46599_node-44a8422fe1a0-1484033822261-f07d33d7-0-1], Error for partition [test4R2P2,1] to broker 1:kafka.common.NotLeaderForPartitionException (kafka.consumer.ConsumerFetcherThread)

Best answer

Based on the comments, the suggestion from @amethystic seems to have done the trick:

...you could increase the value for "request.timeout.ms" ...
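
A minimal sketch of how the timeout could be raised for the two producers used above, assuming the same masked broker addresses and an arbitrary 60-second value (the exact flags available depend on the Kafka version):

$ bin/kafka-console-producer.sh --broker-list x.x.x.x:xxxx,x.x.x.x:xxxx --batch-size 1000 --message-send-max-retries 10 --request-required-acks 1 --producer-property request.timeout.ms=60000 --topic test2R2P2 <~/kafka_message.log

$ bin/kafka-producer-perf-test.sh --num-records 100000 --record-size 4096 --throughput 1000 --topic test4R2P2 --producer-props bootstrap.servers=x.x.x.x:xxxx,x.x.x.x:xxxx request.timeout.ms=60000

Batches sitting in the producer's buffer are expired once request.timeout.ms elapses without the metadata/send request completing, so a larger value gives the producer more time to fetch metadata from the brokers before giving up on the buffered records.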

For "apache-kafka - Getting error: Batch containing 3 record(s) expired due to timeout while requesting metadata from brokers for test2R2P2-1", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/41564051/
