
spring-integration-kafka: configuring a consumer to receive messages from a specific partition


I have started using spring-integration-kafka in my project, and I can produce and consume messages from Kafka. But now I want to produce messages to a specific partition and consume messages from a specific partition.

For example, I want to produce messages to partition 3, and the consumer should receive messages only from partition 3.

So far my topic has 8 partitions. I can produce messages to a specific partition, but I have not found a way to configure the consumer to receive messages only from a specific partition.

So, any suggestion on how I should configure the consumer with spring-integration-kafka, or any other change needed in the KafkaConsumer.java class, so that it receives messages only from a specific partition, would be appreciated.

Thanks.

Here is my code:

kafka-producer-context.xml

<int:publish-subscribe-channel id="inputToKafka" />

<int-kafka:outbound-channel-adapter
    id="kafkaOutboundChannelAdapter"
    kafka-producer-context-ref="kafkaProducerContext"
    auto-startup="true" order="1" channel="inputToKafka" />

<int-kafka:producer-context id="kafkaProducerContext"
    producer-properties="producerProps">
    <int-kafka:producer-configurations>
        <int-kafka:producer-configuration
            broker-list="127.0.0.1:9092,127.0.0.1:9093,127.0.0.1:9094"
            async="true" topic="testTopic"
            key-class-type="java.lang.String"
            key-encoder="encoder"
            value-class-type="java.lang.String"
            value-encoder="encoder"
            partitioner="partitioner"
            compression-codec="default" />
    </int-kafka:producer-configurations>
</int-kafka:producer-context>

<util:properties id="producerProps">
    <prop key="queue.buffering.max.ms">500</prop>
    <prop key="topic.metadata.refresh.interval.ms">3600000</prop>
    <prop key="queue.buffering.max.messages">10000</prop>
    <prop key="retry.backoff.ms">100</prop>
    <prop key="message.send.max.retries">2</prop>
    <prop key="send.buffer.bytes">5242880</prop>
    <prop key="socket.request.max.bytes">104857600</prop>
    <prop key="socket.receive.buffer.bytes">1048576</prop>
    <prop key="socket.send.buffer.bytes">1048576</prop>
    <prop key="request.required.acks">1</prop>
</util:properties>

<bean id="encoder"
    class="org.springframework.integration.kafka.serializer.common.StringEncoder" />

<bean id="partitioner"
    class="org.springframework.integration.kafka.support.DefaultPartitioner" />

<task:executor id="taskExecutor" pool-size="5"
    keep-alive="120" queue-capacity="500" />

KafkaProducer.java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.kafka.support.KafkaHeaders;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.MessageChannel;

public class KafkaProducer {

    private static final Logger logger = LoggerFactory.getLogger(KafkaProducer.class);

    private static final String TOPIC = "testTopic";

    @Autowired
    private MessageChannel inputToKafka;

    public void sendMessage(String message) {
        try {
            // Route the message to partition 3 of testTopic via Kafka headers
            inputToKafka.send(MessageBuilder.withPayload(message)
                    .setHeader(KafkaHeaders.TOPIC, TOPIC)
                    .setHeader(KafkaHeaders.PARTITION_ID, 3)
                    .build());
        } catch (Exception e) {
            logger.error(String.format("Failed to send [ %s ] to topic %s", message, TOPIC), e);
        }
    }
}

kafka-consumer-context.xml
<int:channel id="inputFromKafka">
<int:dispatcher task-executor="kafkaMessageExecutor" />
</int:channel>

<int-kafka:zookeeper-connect id="zookeeperConnect"
zk-connect="127.0.0.1:2181" zk-connection-timeout="6000"
zk-session-timeout="6000" zk-sync-time="2000" />

<int-kafka:inbound-channel-adapter
id="kafkaInboundChannelAdapter" kafka-consumer-context-ref="consumerContext"
auto-startup="true" channel="inputFromKafka">
<int:poller fixed-delay="10" time-unit="MILLISECONDS"
max-messages-per-poll="5" />
</int-kafka:inbound-channel-adapter>


<bean id="consumerProperties"
class="org.springframework.beans.factory.config.PropertiesFactoryBean">
<property name="properties">
<props>
<prop key="auto.offset.reset">smallest</prop>
<prop key="socket.receive.buffer.bytes">1048576</prop>
<prop key="fetch.message.max.bytes">5242880</prop>
<prop key="auto.commit.interval.ms">1000</prop>
</props>
</property>
</bean>

<int-kafka:consumer-context id="consumerContext"
consumer-timeout="1000" zookeeper-connect="zookeeperConnect"
consumer-properties="consumerProperties">
<int-kafka:consumer-configurations>
<int-kafka:consumer-configuration
group-id="defaultGrp" max-messages="20000">
<int-kafka:topic id="testTopic" streams="3" />
</int-kafka:consumer-configuration>
</int-kafka:consumer-configurations>
</int-kafka:consumer-context>

<task:executor id="kafkaMessageExecutor" pool-size="0-10"
keep-alive="120" queue-capacity="500" />

<int:outbound-channel-adapter channel="inputFromKafka"
ref="kafkaConsumer" method="processMessage" />

KafkaConsumer.java
import java.util.List;
import java.util.Map;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;

public class KafkaConsumer {

    private static final Logger log = LoggerFactory.getLogger(KafkaConsumer.class);

    @Autowired
    KafkaService kafkaService;

    // Payload layout: topic -> (partition id -> list of raw message bytes)
    public void processMessage(Map<String, Map<Integer, List<byte[]>>> msgs) {
        for (Map.Entry<String, Map<Integer, List<byte[]>>> entry : msgs.entrySet()) {
            log.debug("Topic: " + entry.getKey());
            Map<Integer, List<byte[]>> messages = entry.getValue();
            log.debug("\n**** Partitions: \n");
            for (Integer partition : messages.keySet()) {
                log.debug("p:" + partition);
            }
            log.debug("\n**************\n");
            for (List<byte[]> list : messages.values()) {
                for (byte[] bytes : list) {
                    String message = new String(bytes);
                    log.debug("Message: " + message);
                    try {
                        kafkaService.receiveMessage(message);
                    } catch (Exception e) {
                        log.error(String.format("Failed to process message %s", message), e);
                    }
                }
            }
        }
    }
}

So here is my problem: whether I produce a message to partition 3 or to any other partition, KafkaConsumer always receives it. What I want is for KafkaConsumer to receive messages only from partition 3 and not from the other partitions.

Thanks again.

Best answer

You need to use the message-driven-channel-adapter.

As a variant, the KafkaMessageListenerContainer can accept an org.springframework.integration.kafka.core.Partition array argument to specify topics and their partition pairs.



You need to wire up a listener container using this constructor and provide it to the adapter via the listener-container attribute.

We will update the README with an example.
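In the meantime, here is a minimal sketch of that wiring, added as an illustration rather than taken from the original answer. The message-driven-channel-adapter and its listener-container attribute come straight from the answer above; the bean names and the DefaultConnectionFactory/ZookeeperConfiguration wiring are assumptions based on the spring-integration-kafka 1.x API, so verify them against the release you are using:

<!-- Sketch only: a listener container restricted to partition 3 of testTopic.
     Class names and constructor order assume spring-integration-kafka 1.x. -->
<bean id="connectionFactory"
    class="org.springframework.integration.kafka.core.DefaultConnectionFactory">
    <constructor-arg>
        <bean class="org.springframework.integration.kafka.core.ZookeeperConfiguration">
            <constructor-arg ref="zookeeperConnect" />
        </bean>
    </constructor-arg>
</bean>

<!-- KafkaMessageListenerContainer(ConnectionFactory, Partition...) -- the
     constructor mentioned in the answer; each Partition names a topic and an id. -->
<bean id="messageListenerContainer"
    class="org.springframework.integration.kafka.listener.KafkaMessageListenerContainer">
    <constructor-arg ref="connectionFactory" />
    <constructor-arg>
        <array>
            <bean class="org.springframework.integration.kafka.core.Partition">
                <constructor-arg value="testTopic" />
                <constructor-arg value="3" />
            </bean>
        </array>
    </constructor-arg>
</bean>

<!-- Replaces the polling inbound-channel-adapter from the question. -->
<int-kafka:message-driven-channel-adapter
    id="kafkaMessageDrivenAdapter"
    channel="inputFromKafka"
    listener-container="messageListenerContainer"
    auto-startup="true" />

Note that the message-driven adapter emits one Spring Integration message per Kafka record, so a downstream handler would receive individual payloads (raw byte[] unless a decoder is configured) rather than the batched Map<String, Map<Integer, List<byte[]>>> shown in KafkaConsumer.java above.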

A similar question about configuring a spring-integration-kafka consumer to receive messages from a specific partition can be found on Stack Overflow: https://stackoverflow.com/questions/31109586/
