
docker - Kafka producer throws 'TimeoutException: Batch Expired'

Reposted · Author: 行者123 · Updated: 2023-12-02 18:31:45

I am testing the Spring Cloud Stream App for Twitter, starting the Docker container with the following Kafka-related environment properties:

KAFKA_ADVERTISED_HOST_NAME=<ip>
advertised.host.name=<ip>:9092
spring.cloud.stream.bindings.output.destination=twitter-source-test
spring.cloud.stream.kafka.binder.brokers=<ip>:9092
spring.cloud.stream.kafka.binder.zkNodes=<ip>:2181

My Kafka producerConfig values are as follows:
 2017-01-12 14:47:09.979  INFO 1 --- [itterSource-1-1] o.a.k.clients.producer.ProducerConfig    : ProducerConfig values: 
compression.type = none
metric.reporters = []
metadata.max.age.ms = 300000
metadata.fetch.timeout.ms = 60000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [192.168.127.188:9092]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
max.block.ms = 60000
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.truststore.password = null
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
client.id =
ssl.endpoint.identification.algorithm = null
ssl.protocol = TLS
request.timeout.ms = 30000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
acks = 1
batch.size = 16384
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
retries = 0
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
send.buffer.bytes = 131072
linger.ms = 0

2017-01-12 14:47:09.985  INFO 1 --- [itterSource-1-1] o.a.kafka.common.utils.AppInfoParser     : Kafka version : 0.9.0.1

But the producer keeps throwing the following exception:
2017-01-12 14:47:42.196 ERROR 1 --- [ad | producer-3] o.s.k.support.LoggingProducerListener    : Exception thrown when sending a message with key='null' and payload='{-1, 1, 11, 99, 111, 110, 116, 101, 110, 116, 84, 121, 112, 101, 0, 0, 0, 12, 34, 116, 101, 120, 116...' to topic twitter-source-test:
org.apache.kafka.common.errors.TimeoutException: Batch Expired

I can telnet from the Docker container to the broker at 192.168.127.188 on ports 9092 and 2181. Also, my Kafka server is not a Docker container.
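The telnet check above can also be scripted; a minimal sketch in Python (the broker address and ports are the ones from the question and are assumptions for your environment):

```python
import socket

def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Broker and ZooKeeper ports from the question; adjust to your setup.
    for port in (9092, 2181):
        print(port, check_port("192.168.127.188", port))
```

Note that a successful connect only proves the port is reachable. After the initial connection, the producer talks to whatever host the broker advertises in its metadata response, and this check does not verify that address.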

I have seen solutions such as adding "advertised.host.name", but it did not work. Or is the way I am passing the env property above even the correct one?

Any help?

Best Answer

Sharing the fix.

Setting listeners in server.properties resolves this issue. For example: listeners=PLAINTEXT://your.host.name:9092
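For reference, a minimal server.properties sketch of that fix. "your.host.name" is a placeholder, and advertised.listeners is shown only as a common companion setting when the address clients should use differs from the bind address:

```properties
# server.properties on the (non-Docker) Kafka broker.
# Bind and advertise a host name that is resolvable and reachable
# from inside the Docker container running the producer.
listeners=PLAINTEXT://your.host.name:9092

# Optional: advertise a different address than the one the broker binds to
# (clients use this address after the initial metadata request).
advertised.listeners=PLAINTEXT://your.host.name:9092
```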

Regarding docker - Kafka producer throws 'TimeoutException: Batch Expired', we found a similar question on Stack Overflow: https://stackoverflow.com/questions/41609877/
