
apache-kafka - Invalid state store content after an aborted transaction with exactly_once and standby replicas, following task migration


We have a simple command-id deduplication mechanism based on Kafka Streams state stores. It keeps the command ids from the last hour in a persistentWindowStore. If an exception is thrown later in the topology, we run into problems with the store. We run 3 nodes using docker, each with multiple threads for this particular Streams application.
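For context, the relevant Streams configuration is roughly the following (a minimal sketch; the application id and broker address are placeholders, and only the last three properties matter here):

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "XXXXXXProcessor");                // placeholder, matches the thread names in the logs
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");                  // placeholder
props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE); // processing.guarantee = exactly_once
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);                            // multiple threads per node
props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);                          // standby replicas enabled (see the first EDIT below)

The flow is as follows: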

  • A message is consumed on node 1, Thread-1 (A):
2020-04-16 07:26:19.144 DEBUG 1 --- [-StreamThread-1] c.g.f.c.s.validation.CommandIdValidator  : CommandId: mnl_cmd_1cd3a7c9-56ef-4c08-a312-d9b69681726a is not a duplicate.
2020-04-16 07:26:19.144 INFO 1 --- [-StreamThread-1] c.g.f.c.s.v.CommandIdValidationUtils : Putting commandId in store: mnl_cmd_1cd3a7c9-56ef-4c08-a312-d9b69681726a 1587013200000
2020-04-16 07:26:19.153 INFO 1 --- [-StreamThread-1] g.f.l.s.v.XXXXValidationHandler :
Exception in thread "XXXXXXProcessor-53816dfa-6ebf-47ad-864c-0fbcb61dc51c-StreamThread-1" java.lang.RuntimeException: Unexpected exponent value: -5
Failed to process stream task 1_2 due to the following error
Shutting down
Unclean shutdown of all active tasks
Flushing all stores registered in the state manager

The first node in the topology checks whether the command is a duplicate and, if not, puts its id into the state store. The next topology node throws an exception. As a result, the transaction is aborted and the offsets are not committed. I double-checked the changelog topic: the relevant messages were not committed (a sketch of the check is shown below).
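Roughly how we inspected the changelog (a sketch, not our exact tooling; the broker address and group id are placeholders, the topic name is taken from the logs in EDIT 2):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ChangelogInspector {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");      // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "changelog-inspector");      // throwaway group
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");    // skip records of aborted transactions
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("XXXXProcessor-COMMAND_ID_STORE-changelog"));
            // Window-store changelog keys carry a binary timestamp suffix, so the
            // printed keys are only approximate; we only care whether the aborted
            // command ids show up at all under read_committed.
            for (int i = 0; i < 5; i++) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(2))) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}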

  • The task is migrated to another instance, the state store is recreated, and the message is consumed on node 2, Thread-2 (B):
2020-04-16 07:26:43.399 DEBUG 1 --- [-StreamThread-2] c.g.f.c.s.validation.CommandIdValidator  : CommandId: mnl_cmd_1cd3a7c9-56ef-4c08-a312-d9b69681726a is not a duplicate.
2020-04-16 07:26:43.400 INFO 1 --- [-StreamThread-2] c.g.f.c.s.v.CommandIdValidationUtils : Putting commandId in store: mnl_cmd_1cd3a7c9-56ef-4c08-a312-d9b69681726a 1587013200000
2020-04-16 07:26:43.425 INFO 1 --- [-StreamThread-2] g.f.l.s.v.XXXXValidationHandler :
Exception in thread "XXXXXXProcessor-d069f877-90ff-4ecf-bfbf-3b2bd0e20eac-StreamThread-2" java.lang.RuntimeException: XXXX

Same story: the transaction is aborted.

  • The task is migrated back to node 1, Thread-2 (C); Thread-1 has already died:
2020-04-16 07:26:55.655  INFO 1 --- [-StreamThread-2] c.g.f.c.s.v.CommandIdValidationUtils     : next: KeyValue(1587013200000, mnl_cmd_1cd3a7c9-56ef-4c08-a312-d9b69681726a)
2020-04-16 07:26:55.655 WARN 1 --- [-StreamThread-2] c.g.f.c.s.validation.CommandIdValidator : Command duplicate detected. Command id mnl_cmd_1cd3a7c9-56ef-4c08-a312-d9b69681726a

We analyzed the task restoration logs and everything seemed fine; the aborted transactions were skipped ("skipping aborted record batch from partition").

However, the RocksDB instance for this task contains the command ids:

docker cp 134d4f6f82cf:/tmp/kafka-streams/XXXXXXProcessor/1_0/COMMAND_ID_STORE/COMMAND_ID_STORE.1587013200000 .
ldb --db=COMMAND_ID_STORE.1587013200000 scan
mnl_cmd_1cd3a7c9-56ef-4c08-a312-d9b69681726a : mnl_cmd_1cd3a7c9-56ef-4c08-a312-d9b69681726a
mnl_cmd_c51cbb71-0274-44c6-b016-976add743fe6 : mnl_cmd_c51cbb71-0274-44c6-b016-976add743fe6

Our understanding is that even though the transaction was aborted on node 1 (A), the RocksDB entries were never cleaned up afterwards, and during (C) another thread picked the data up from the already existing RocksDB instance (after all, the on-disk path is tied to the task, not to the thread). We are not sure what is supposed to happen to the persistent store during task restoration or after a task failure. In our view this scenario breaks exactly-once semantics, because the state store is out of sync with the changelog topic.

Is this a misconfiguration or a wrong assumption on our side? A bug? We tried changing the store to inMemoryWindowStore and that seems to solve the problem (the change is sketched below), but we would like to understand this behaviour, since we rely heavily on state stores.
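For reference, the workaround only swaps the supplier in the "Adding the state store" snippet shown further down (a sketch, assuming RETENTION and WINDOW_SIZE are Duration constants):

// Same retention, window size and retainDuplicates flag; only the backing store changes.
WindowBytesStoreSupplier storeSupplier = Stores.inMemoryWindowStore(storeName,
        RETENTION,
        WINDOW_SIZE,
        true);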

We use Java 11, Kafka clients 4.1 and spring-kafka 2.4.5. We rolled back to Kafka clients 2.3.1 and the problem persists.

EDIT We ran some more tests with changed configuration: after switching num.standby.replicas = 1 to num.standby.replicas = 0, the problem disappeared (the property change is sketched after this list). Judging from the logs, the standby state is created correctly, i.e. the aborted messages are not taken into account, yet the state on disk still ends up corrupted. This is easily reproducible. One more trace:
  • node A Thread-2 processes the message, crashes
  • node B Thread-2 is the standby replica, takes over, crashes
  • node B Thread-1 takes over, detects the duplicate
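The property change between the failing and the passing runs (a sketch, assuming the Properties object from the configuration above):

// Fails: standby replicas enabled
props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);
// Works: with standbys disabled, the on-disk state no longer ends up corrupted
props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 0);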

EDIT 2 Detailed logs related to the checkpoint files. First a valid command was sent, then an invalid one (so the changelog topic is not empty).

NODE_3 2020-04-15 21:06:14.470 TRACE 1 --- [-StreamThread-2] o.a.k.s.p.i.ProcessorStateManager        : stream-thread [XXXXProcessor-94f7be8e-beec-411f-b4ec-9031527bccdf-StreamThread-2] task [1_2] Checkpointable offsets read from checkpoint: {}
NODE_3 2020-04-15 21:06:19.413 TRACE 1 --- [-StreamThread-2] o.a.k.s.p.i.ProcessorStateManager : stream-thread [XXXXProcessor-94f7be8e-beec-411f-b4ec-9031527bccdf-StreamThread-2] task [1_2] Restoring state store COMMAND_ID_STORE from changelog topic XXXXProcessor-COMMAND_ID_STORE-changelog at checkpoint null
NODE_3 2020-04-15 21:06:28.470 TRACE 1 --- [-StreamThread-1] o.a.k.s.p.i.ProcessorStateManager : stream-thread [XXXXProcessor-94f7be8e-beec-411f-b4ec-9031527bccdf-StreamThread-1] standby-task [1_2] Checkpointable offsets read from checkpoint: {}
NODE_3 2020-04-15 21:06:29.634 TRACE 1 --- [-StreamThread-2] o.a.k.s.s.internals.OffsetCheckpoint : Writing tmp checkpoint file /tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint.tmp
NODE_3 2020-04-15 21:06:29.640 TRACE 1 --- [-StreamThread-2] o.a.k.s.s.internals.OffsetCheckpoint : Swapping tmp checkpoint file /tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint.tmp /tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint
NODE_3 2020-04-15 21:11:15.909 TRACE 1 --- [-StreamThread-1] o.a.k.s.s.internals.OffsetCheckpoint : Writing tmp checkpoint file /tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint.tmp
NODE_3 2020-04-15 21:11:15.912 TRACE 1 --- [-StreamThread-1] o.a.k.s.s.internals.OffsetCheckpoint : Swapping tmp checkpoint file /tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint.tmp /tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint

NODE_1 log1:2020-04-15 21:11:33.942 DEBUG 1 --- [-StreamThread-2] c.g.f.c.s.validation.CommandIdValidator : CommandId: mnl_cmd_9f1752da-45b7-4ef7-9ef8-209d826530bc is not a duplicate.

NODE_3 2020-04-15 21:11:47.195 TRACE 1 --- [-StreamThread-1] o.a.k.s.s.internals.OffsetCheckpoint : Writing tmp checkpoint file /tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint.tmp
NODE_3 2020-04-15 21:11:47.233 TRACE 1 --- [-StreamThread-1] o.a.k.s.s.internals.OffsetCheckpoint : Swapping tmp checkpoint file /tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint.tmp /tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint
NODE_3 2020-04-15 21:11:49.075 TRACE 1 --- [-StreamThread-2] o.a.k.s.p.i.ProcessorStateManager : stream-thread [XXXXProcessor-94f7be8e-beec-411f-b4ec-9031527bccdf-StreamThread-2] task [1_2] Restoring state store COMMAND_ID_STORE from changelog topic XXXXProcessor-COMMAND_ID_STORE-changelog at checkpoint 1
NODE_3 2020-04-15 21:11:49.436 TRACE 1 --- [-StreamThread-2] o.a.k.s.p.i.StoreChangelogReader : stream-thread [XXXXProcessor-94f7be8e-beec-411f-b4ec-9031527bccdf-StreamThread-2] Found checkpoint 1 from changelog XXXXProcessor-COMMAND_ID_STORE-changelog-2 for store COMMAND_ID_STORE.

NODE_3 2020-04-15 21:11:52.023 DEBUG 1 --- [-StreamThread-2] c.g.f.c.s.validation.CommandIdValidator : CommandId: mnl_cmd_9f1752da-45b7-4ef7-9ef8-209d826530bc is not a duplicate.
NODE_3 2020-04-15 21:11:53.683 ERROR 1 --- [-StreamThread-2] o.a.k.s.p.i.AssignedStreamsTasks : stream-thread [XXXXProcessor-94f7be8e-beec-411f-b4ec-9031527bccdf-StreamThread-2] Failed to process stream task 1_2 due to the following error: java.lang.RuntimeException

NODE_3 2020-04-15 21:12:05.346 TRACE 1 --- [-StreamThread-1] o.a.k.s.p.i.ProcessorStateManager : stream-thread [XXXXProcessor-94f7be8e-beec-411f-b4ec-9031527bccdf-StreamThread-1] task [1_2] Restoring state store COMMAND_ID_STORE from changelog topic XXXXProcessor-COMMAND_ID_STORE-changelog at checkpoint 1
NODE_3 2020-04-15 21:12:05.562 TRACE 1 --- [-StreamThread-1] o.a.k.s.p.i.StoreChangelogReader : stream-thread [XXXXProcessor-94f7be8e-beec-411f-b4ec-9031527bccdf-StreamThread-1] Found checkpoint 1 from changelog XXXXProcessor-COMMAND_ID_STORE-changelog-2 for store COMMAND_ID_STORE.

NODE_3 2020-04-15 21:12:06.424 WARN 1 --- [-StreamThread-1] c.g.f.c.s.validation.CommandIdValidator : Command duplicate detected. Command id mnl_cmd_9f1752da-45b7-4ef7-9ef8-209d826530bc

Thanks for your help.

The code operating on COMMAND_ID_STORE (similar to the examples in Kafka Streams):

Adding the state store

// RETENTION and WINDOW_SIZE are Duration constants; the final flag enables retainDuplicates.
WindowBytesStoreSupplier storeSupplier = Stores.persistentWindowStore(storeName,
        RETENTION,
        WINDOW_SIZE,
        true);
StoreBuilder<WindowStore<String, String>> storeBuilder = Stores
        .windowStoreBuilder(storeSupplier, Serdes.String(), Serdes.String());
streamsBuilder.addStateStore(storeBuilder);

Fetch

public static boolean isCommandIdDuplicate(String commandId, WindowStore<String, String> commandIdStore) {
    long previousHourStart = DateTimeUtils.previousHourStart();

    final WindowStoreIterator<String> storeIterator = commandIdStore
            .fetch(commandId, previousHourStart, DateTimeUtils.currentTimeMillis());
    final boolean hasNext = storeIterator.hasNext();
    log.info("hasNext: {}", hasNext);
    while (storeIterator.hasNext()) {
        log.info("next: {}", storeIterator.next());
    }
    storeIterator.close();
    return hasNext;
}

Put

public static void putCommandIdInStore(String commandId, WindowStore<String, String> commandIdStore) {
    final long windowStartTimestamp = DateTimeUtils.thisHourStart();
    log.info("Putting commandId in store: {} {}", commandId, windowStartTimestamp);
    commandIdStore.put(commandId, commandId, windowStartTimestamp);
}

Transform

.transformValues((ValueTransformerSupplier<YYY, ? extends XXX>) CommandIdValidator::new,
        COMMAND_ID_STORE);

Transformer

@Slf4j
public class CommandIdValidator<T extends GeneratedMessageV3> implements
        ValueTransformer<T, XXX<T>> {

    WindowStore<String, String> commandIdStore;

    @Override
    public void init(ProcessorContext context) {
        commandIdStore = (WindowStore<String, String>) context.getStateStore(COMMAND_ID_STORE);
    }

    @Override
    public XXX<T> transform(T command) {

        log.debug("Validating commandId.");

        String commandId = getCommandId(command);

        if (CommandIdValidationUtils.isCommandIdDuplicate(commandId, commandIdStore)) {
            log.warn("Command duplicate detected. Command id " + commandId);
            return new XXX(command,
                    XXX.duplicatedCommandId(commandId, command.getClass().getName()));
        } else {
            log.debug("CommandId: {} is not a duplicate.", commandId);
            CommandIdValidationUtils.putCommandIdInStore(commandId, commandIdStore);
            return XXX.successful(command);
        }
    }

    @Override
    public void close() {
    }
}

Best Answer

What you describe does indeed sound like a bug in Kafka Streams.

We rolled back to kafka clients 2.3.1 and the problem persists.

From which version did you roll back? 2.4.0, 2.4.1, and 2.5.0 were all released recently; could you try whether the problem still exists with one of them? (Some EOS bugs have been fixed; I could not find a ticket matching the problem you describe, so I am not sure whether the bug might still exist.)

How EOS is supposed to work:

For the local RocksDB instance, Kafka Streams also writes a "checkpoint" file with some metadata to the local disk. If the checkpoint file exists, we know that RocksDB is "clean", i.e. in sync with the changelog topic. If the checkpoint file does not exist, we know the state is corrupted; the whole RocksDB instance is then wiped on startup and the state is rebuilt from the changelog. Hence, the checkpoint file should only be written on a clean task shutdown (i.e. during a rebalance), and it should be deleted after the task is assigned and before processing starts. While processing there should be no checkpoint file, so if we crash we know the store is corrupted and we rebuild it from the changelog. The intended startup logic is sketched below.
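In pseudocode, that logic looks roughly like this (an illustrative sketch only; the helpers readCheckpointedOffsets, restoreFromChangelog, wipeLocalStateDirectory, and beginningOffsets are made-up names, not the actual ProcessorStateManager code):

// Illustrative sketch of the intended EOS startup decision described above.
File checkpointFile = new File(taskDir, ".checkpoint");
if (checkpointFile.exists()) {
    // Clean shutdown: RocksDB is in sync with the changelog up to the
    // checkpointed offsets, so only the delta needs to be restored.
    restoreFromChangelog(readCheckpointedOffsets(checkpointFile));
    // The file is deleted before processing starts, so a later crash
    // leaves no checkpoint behind.
    checkpointFile.delete();
} else {
    // No checkpoint: the local state may be dirty (e.g. writes of an
    // aborted transaction already hit RocksDB), so wipe and rebuild.
    wipeLocalStateDirectory(taskDir);
    restoreFromChangelog(beginningOffsets());
}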

A theory for what the bug might be: when the first thread crashes (on instance A), no checkpoint file is written, which is correct. After the first rebalance, a StandbyTask is assigned to instance A; this StandbyTask should wipe the local state store (since there is no checkpoint file), and the bug would be that this deletion does not happen (maybe you can verify this via the logs, since the wipe should be logged, or via some "file created" timestamps on the file system). When the second thread crashes on instance B, another rebalance is triggered. The StandbyTask incorrectly believes it is in a clean state and writes a checkpoint file. When the active task is started on the other thread of instance A, it simply takes over the corrupted state, because the (incorrectly) existing checkpoint file indicates the state is consistent.

  • When you switch to an in-memory store, the problem naturally goes away: there is no local state on disk and no checkpoint file, so the state is always rebuilt from the changelog topic.
  • When you disable StandbyTasks, no checkpoint file is written during the second rebalance, so when the active task starts on the second thread of instance A, it now correctly wipes the store and rebuilds it from the changelog.

Maybe you can try to verify this (for example with a check like the one sketched below) and also open a bug report?
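For example, inspecting the task directory on the affected container between the crashes could show whether the wipe happened (a sketch; the container id follows the docker cp command in the question, and the path follows the checkpoint logs in EDIT 2):

# List the task directory while the StandbyTask should have recreated it;
# an old mtime on the store directory or a stale .checkpoint file would
# indicate that the expected wipe did not happen.
docker exec 134d4f6f82cf ls -la --time-style=full-iso /tmp/kafka-streams/XXXXProcessor/1_2/
docker exec 134d4f6f82cf stat /tmp/kafka-streams/XXXXProcessor/1_2/.checkpoint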

Regarding "apache-kafka - Invalid state store content after an aborted transaction with exactly_once and standby replicas, following task migration", see the original question on Stack Overflow: https://stackoverflow.com/questions/61247789/
