This article collects Java code examples of org.apache.flink.streaming.connectors.kafka.internals.ZookeeperOffsetHandler.setOffsetInZooKeeper(), showing how the method is used in practice. The examples are extracted from selected open-source projects found on platforms such as GitHub, Stack Overflow, and Maven, and should be a useful reference. Details of the ZookeeperOffsetHandler.setOffsetInZooKeeper() method:
Package path: org.apache.flink.streaming.connectors.kafka.internals.ZookeeperOffsetHandler
Class name: ZookeeperOffsetHandler
Method name: setOffsetInZooKeeper
Description: none
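Since no description is provided, the sketch below illustrates what a ZooKeeper offset write of this kind typically looks like when done with Curator, assuming the conventional Kafka 0.8 consumer path layout /consumers/<groupId>/offsets/<topic>/<partition>. The helper name writeOffsetToZooKeeper and the path layout are assumptions for illustration; this is not the actual Flink implementation.

import java.nio.charset.StandardCharsets;
import org.apache.curator.framework.CuratorFramework;

// Illustrative sketch only: writes a single offset to ZooKeeper via Curator,
// using the conventional Kafka 0.8 consumer path layout (an assumption here).
static void writeOffsetToZooKeeper(
        CuratorFramework curatorClient, String groupId, String topic, int partition, long offset)
        throws Exception {
    String path = "/consumers/" + groupId + "/offsets/" + topic + "/" + partition;
    byte[] data = Long.toString(offset).getBytes(StandardCharsets.UTF_8);
    if (curatorClient.checkExists().forPath(path) == null) {
        // First commit for this partition: create the node and any missing parents.
        curatorClient.create().creatingParentsIfNeeded().forPath(path, data);
    } else {
        // Subsequent commits overwrite the previously stored offset.
        curatorClient.setData().forPath(path, data);
    }
}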
Code example source: origin: apache/flink
/**
 * Commits offsets for Kafka partitions to ZooKeeper. The given offsets to this method should be the offsets of
 * the last processed records; this method will take care of incrementing the offsets by 1 before committing them so
 * that the committed offsets to Zookeeper represent the next record to process.
 *
 * @param internalOffsets The internal offsets (representing last processed records) for the partitions to commit.
 * @throws Exception The method forwards exceptions.
 */
public void prepareAndCommitOffsets(Map<KafkaTopicPartition, Long> internalOffsets) throws Exception {
    for (Map.Entry<KafkaTopicPartition, Long> entry : internalOffsets.entrySet()) {
        KafkaTopicPartition tp = entry.getKey();
        Long lastProcessedOffset = entry.getValue();
        if (lastProcessedOffset != null && lastProcessedOffset >= 0) {
            setOffsetInZooKeeper(curatorClient, groupId, tp.getTopic(), tp.getPartition(), lastProcessedOffset + 1);
        }
    }
}
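As a usage sketch, the snippet below shows how a caller might hand last-processed offsets to prepareAndCommitOffsets. The offsetHandler instance, the topic name "orders", and the offset values are hypothetical; the method itself adds 1 before writing, so ZooKeeper ends up holding the next offset to consume.

import java.util.HashMap;
import java.util.Map;
import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition;

// Hypothetical caller: map each partition to the offset of its last processed record.
Map<KafkaTopicPartition, Long> lastProcessedOffsets = new HashMap<>();
lastProcessedOffsets.put(new KafkaTopicPartition("orders", 0), 41L);
lastProcessedOffsets.put(new KafkaTopicPartition("orders", 1), 17L);

// prepareAndCommitOffsets increments each value by 1 before calling setOffsetInZooKeeper,
// so ZooKeeper will store 42 and 18, the next records to process.
offsetHandler.prepareAndCommitOffsets(lastProcessedOffsets);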
Code example source: origin: org.apache.flink/flink-connector-kafka-0.8_2.11, org.apache.flink/flink-connector-kafka-0.8_2.10, and org.apache.flink/flink-connector-kafka-0.8 — the code in these artifacts is identical to the apache/flink example above.