
apache-kafka - Why can't the Kafka offset be set to the beginning of the topic?


I want to set the offset of the topic mytopic (which has 1 partition) to 0 for the group id testgroup1. But this does not always work. When I try to set the offset to 0, I get the following message:

bash-4.4# kafka-consumer-groups.sh --bootstrap-server localhost:9092 --topic mytopic --group testgroup1 --reset-offsets --to-offset 0 --execute
[2021-06-04 09:23:30,854] WARN New offset (0) is lower than earliest offset for topic partition mytopic-0. Value will be set to 1365671 (kafka.admin.ConsumerGroupCommand$)
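
For comparison, a minimal sketch of resetting to the earliest offset that still exists in the log, rather than to an absolute 0, using the --to-earliest option of the same tool:

# Resets the group's committed offset to the current log start offset (1365671 in this case)
kafka-consumer-groups.sh --bootstrap-server localhost:9092 --topic mytopic --group testgroup1 --reset-offsets --to-earliest --execute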

bash-4.4# kafka-topics.sh --bootstrap-server localhost:9092 --topic mytopic --describe
Topic: mytopic PartitionCount: 1 ReplicationFactor: 1 Configs: segment.bytes=1073741824
Topic: mytopic Partition: 0 Leader: 1001 Replicas: 1001 Isr: 1001

bash-4.4# kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-name mytopic --entity-type topics
Dynamic configs for topic mytopic are:
bash-4.4#

In the Kafka logs I can see the following after the whole topic has been consumed; I am not sure whether it is actually relevant:

[2021-06-04 10:18:36,130] INFO [Log partition=__consumer_offsets-19, dir=/kafka/logs] Deleting segment files LogSegment(baseOffset=0, size=634, lastModifiedTime=1598954190000, largestRecordTimestamp=Some(1585909899136)) (kafka.log.Log)
[2021-06-04 10:18:36,131] INFO Deleted log /kafka/logs/__consumer_offsets-19/00000000000000000000.log.deleted. (kafka.log.LogSegment)
[2021-06-04 10:18:36,132] INFO Deleted offset index /kafka/logs/__consumer_offsets-19/00000000000000000000.index.deleted. (kafka.log.LogSegment)
[2021-06-04 10:18:36,132] INFO Deleted time index /kafka/logs/__consumer_offsets-19/00000000000000000000.timeindex.deleted. (kafka.log.LogSegment)

It is not even possible to consume the topic again with this command:

kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --group testgroup1 --topic mytopic
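
Note: --from-beginning only takes effect when the group has no committed offsets; an existing group resumes from its committed position. A minimal sketch, assuming a fresh, previously unused group id (testgroup1-fresh is a made-up name), which does start at the current log start offset:

# testgroup1-fresh is a hypothetical new group with no committed offsets
kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --group testgroup1-fresh --topic mytopic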

I have read other questions, such as:

But I have not yet found the reason why Kafka does this, i.e. why the earliest offset is set to a different value and it is no longer possible to go back to offset 0. Maybe it is related to data retention, but I have already tried setting the log retention period to 3 years:

log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.max.compaction.lag.ms = 9223372036854775807
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /kafka/logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.7-IV2
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 26280
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 6000

Best Answer

A Kafka topic with cleanup policy DELETE, the default "type" of topic, prunes its data based on the configured size/time retention. In your case the data at the old offsets simply no longer exists in the topic: the offset counter only ever moves forward, so an old offset such as 0 has no data behind it anymore. Hope that clears it up.
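
To verify this, the earliest and latest offsets that actually exist on the partition can be checked. A minimal sketch using the GetOffsetShell tool (invoked via kafka-run-class.sh on 2.x brokers; --time -2 queries the earliest offset, -1 the latest):

# Earliest offset still present per partition; per the warning above this should report 1365671
kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic mytopic --time -2
# Latest offset (log end offset)
kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic mytopic --time -1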

Check your topic's configuration to see whether a different retention is configured at the topic level.
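
For example, a minimal sketch of inspecting and, if needed, overriding retention at the topic level (the retention.ms value shown is illustrative; a topic-level setting takes precedence over the broker's log.retention.hours):

# List topic-level overrides; an empty "Dynamic configs" section means broker defaults apply
kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type topics --entity-name mytopic
# Set an explicit topic-level retention of ~3 years (94608000000 ms)
kafka-configs.sh --bootstrap-server localhost:9092 --alter --entity-type topics --entity-name mytopic --add-config retention.ms=94608000000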

Regarding "apache-kafka - Why can't the Kafka offset be set to the beginning of the topic?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/67834029/
