
java - Kafka failed to update metadata


I am using Kafka v0.10.1.1 with Spring Boot.

I am trying to produce a message to the Kafka topic mobile-user using the following producer code:

The topic mobile-user has 5 partitions and a replication factor of 2. I have attached my Kafka settings at the end of the question.

package com.service;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Service;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

import com.shephertz.karma.constant.Constants;
import com.shephertz.karma.exception.KarmaException;
import com.shephertz.karma.util.Utils;

/**
 * @author Prakash Pandey
 */
@Service
public class NotificationSender {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    private static Logger LOGGER = LoggerFactory.getLogger(NotificationSender.class);

    // Send Message
    public void sendMessage(String topicName, String message) throws KarmaException {
        LOGGER.debug("========topic Name===== " + topicName + "=========message=======" + message);
        ListenableFuture<SendResult<String, String>> result = kafkaTemplate.send(topicName, message);
        result.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onSuccess(SendResult<String, String> result) {
                LOGGER.info("sent message='{}'" + " to partition={}" + " with offset={}", message,
                        result.getRecordMetadata().partition(), result.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(Throwable ex) {
                LOGGER.error(Constants.PRODUCER_MESSAGE_EXCEPTION.getValue() + Utils.getStackTrace(ex));
            }
        });

        LOGGER.debug("Payload sent to kafka");
        LOGGER.debug("topic: " + topicName + ", payload: " + message);
    }
}
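
For reference, the sender above is injected and called like any other Spring bean. The caller below is a hypothetical sketch (the class name and payload are assumptions, not from the question); only the mobile-user topic name comes from the question:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import com.shephertz.karma.exception.KarmaException;

/**
 * Hypothetical caller, for illustration only.
 */
@Component
public class MobileUserNotifier {

    @Autowired
    private NotificationSender notificationSender;

    public void notifyUser(String payload) throws KarmaException {
        // Topic name is from the question; the payload content is assumed.
        notificationSender.sendMessage("mobile-user", payload);
    }
}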


Problem:

I can send messages to Kafka successfully, but sometimes I get this error:

org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 5000 ms.
2017-10-25 06:21:48, [ERROR] [karma-unified-notification-dispatcher - NotificationDispatcherSender - onFailure:43] Exception in sending message to kafka for query
org.springframework.kafka.core.KafkaProducerException: Failed to send; nested exception is org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 5000 ms.
at org.springframework.kafka.core.KafkaTemplate$1.onCompletion(KafkaTemplate.java:255)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:486)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:436)
at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer.send(DefaultKafkaProducerFactory.java:156)
at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:241)
at org.springframework.kafka.core.KafkaTemplate.send(KafkaTemplate.java:151)
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 5000 ms.


Kafka properties:

spring.kafka.producer.retries=5
spring.kafka.producer.batch-size=1000
spring.kafka.producer.request.timeout.ms=60000
spring.kafka.producer.linger.ms=10
spring.kafka.producer.acks=1
spring.kafka.producer.buffer-memory=33554432
spring.kafka.producer.max.block.ms=5000
spring.kafka.topic.retention=86400000

spring.zookeeper.hosts=192.20.1.19:2181,10.20.1.20:2181,10.20.1.26:2181
spring.kafka.session.timeout=30000
spring.kafka.connection.timeout=10000
spring.kafka.topic.partition=5
spring.kafka.message.replication=2

spring.kafka.listener.concurrency=1
spring.kafka.listener.poll-timeout=3000
spring.kafka.consumer.auto-commit-interval=1000
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.max-poll-records=200
spring.kafka.consumer.max-poll-interval-ms=300000


It would be very helpful if you could help me resolve this issue.
Thanks.

Please note: I do not get the above error every time. I can successfully produce a message to the kafka-topic and consume it on the consumer. The error above only appears after roughly 1000 messages have been produced successfully.

Best Answer

Change the default bootstrap-servers property:

private List<String> bootstrapServers = new ArrayList<String>(
Collections.singletonList("localhost:9092"));


to your own:

spring.kafka.bootstrap-servers: ${kafka.binder.broker}:${kafka.binder.defaultBrokerPort}
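
Equivalently, the bootstrap servers can be set programmatically on the producer factory instead of through application.properties. The sketch below is illustrative only and not part of the accepted answer: the broker host:port values and the raised max.block.ms are assumptions (the hosts listed in the question's config are ZooKeeper addresses, so the actual broker list may differ).

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        // Assumed broker addresses for illustration; point these at the real Kafka brokers,
        // not the ZooKeeper hosts from the question.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "192.20.1.19:9092,10.20.1.20:9092,10.20.1.26:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // max.block.ms bounds how long send() waits for metadata; the question's setting of
        // 5000 ms matches the timeout in the error. Raising it here is an assumption.
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 60000);
        props.put(ProducerConfig.RETRIES_CONFIG, 5);
        props.put(ProducerConfig.ACKS_CONFIG, "1");
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}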

Regarding java - Kafka failed to update metadata, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46932127/
