
java - Commit to Kafka consumer after MongoSink response - alpakka mongo connector

Reposted. Author: 行者123. Updated: 2023-12-02 06:04:37

I am using the alpakka connector to consume packets from Kafka and insert them into MongoDB. I want to commit the offset only after receiving the response from MongoDB, but I cannot find anything relevant. How can I make sure an offset is committed only after its packet has been successfully inserted into MongoDB?

I use Consumer.committableSource as the source, MongoSink as the sink, and run the stream with a RunnableGraph. See the code below for details.

Source:

    public Source<ConsumerMessage.CommittableMessage<String, String>, Consumer.Control> source() {
        return Consumer.committableSource(consumerSettings, subscription);
    }

Flow:

    public Flow<ConsumerMessage.CommittableMessage, String, NotUsed> transformation() {
        return Flow.of(ConsumerMessage.CommittableMessage.class).map(i -> i.record().value().toString());
    }

Sink:

    public Sink<String, CompletionStage<Done>> sink() {
        return MongoSink.insertOne(mongoCollection);
    }

Graph:

    RunnableGraph graph = RunnableGraph.fromGraph(GraphDSL.create(sink(), (builder, s) -> {
        builder.from(builder.add(source()).out()).via(builder.add(transformation())).to(s);
        return ClosedShape.getInstance();
    }));
    graph.run(ActorMaterializer.create(actorSystem));

Edit:

Using PassThroughFlow, the insert into Mongo works and no exception or error is thrown, but the packets are still not committed. The transformationCommit() function is never called.

Updated flows:

    public Flow<String, String, NotUsed> transformationMongo() {
        LOGGER.info("Insert into Mongo");
        return MongoFlow.insertOne(connection.getDbConnection());
    }

    public Flow<ConsumerMessage.CommittableMessage, ConsumerMessage.CommittableOffset, NotUsed> transformationCommit() {
        return Flow.of(ConsumerMessage.CommittableMessage.class).map(i -> i.committableOffset());
    }

Sink:

    public Sink<ConsumerMessage.CommittableOffset, CompletionStage<Done>> sinkCommit() {
        CommitterSettings committerSettings = CommitterSettings.create(actorSystem);
        return Committer.sink(committerSettings);
    }
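Committer.sink batches offsets according to its CommitterSettings (maximum batch size / maximum interval) and commits only the aggregated, highest offset per partition. Stripped of the Kafka client, the aggregation step amounts to the following plain-Java sketch (the PartitionOffset record is a hypothetical stand-in, not the alpakka type):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class OffsetBatcher {
    // Hypothetical stand-in for a committable offset: partition + offset.
    record PartitionOffset(int partition, long offset) {}

    // Reduce a batch of offsets to the highest offset seen per partition --
    // committing that one offset marks everything before it as consumed.
    static Map<Integer, Long> aggregate(List<PartitionOffset> batch) {
        Map<Integer, Long> highest = new HashMap<>();
        for (PartitionOffset po : batch) {
            highest.merge(po.partition(), po.offset(), Math::max);
        }
        return highest;
    }

    public static void main(String[] args) {
        List<PartitionOffset> batch = List.of(
                new PartitionOffset(0, 41), new PartitionOffset(0, 42),
                new PartitionOffset(1, 7));
        System.out.println(aggregate(batch)); // one entry per partition
    }
}
```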

PassThroughFlow:

    public class PassThroughFlow {
        public Graph<FlowShape<ConsumerMessage.CommittableMessage, ConsumerMessage.CommittableMessage>, NotUsed> apply(Flow<ConsumerMessage.CommittableMessage, String, NotUsed> flow) {
            return Flow.fromGraph(GraphDSL.create(builder -> {
                UniformFanOutShape broadcast = builder.add(Broadcast.create(2));
                // Zip must emit the right (pass-through) element itself,
                // not the Keep.right() combiner function.
                FanInShape2 zip = builder.add(ZipWith.create((left, right) -> right));
                builder.from(broadcast.out(0)).via(builder.add(flow)).toInlet(zip.in0());
                builder.from(broadcast.out(1)).toInlet(zip.in1());
                return FlowShape.apply(broadcast.in(), zip.out());
            }));
        }
    }
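Stripped of the Akka graph DSL, the broadcast-then-zip shape above pairs each processed result with the original element and keeps the original, so the committable message survives the Mongo stage. A minimal sketch of that keep-right semantics in plain Java (no Akka types, sequential rather than streaming):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class PassThrough {
    // Run `flow` on each element for its result/side effect, but emit the
    // original element downstream -- mirroring Broadcast + ZipWith(keep right).
    static <A, B> List<A> passThrough(List<A> in, Function<A, B> flow) {
        List<A> out = new ArrayList<>();
        for (A element : in) {
            B ignored = flow.apply(element); // e.g. the Mongo insert result
            out.add(element);                // the original (committable) element
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> messages = List.of("m1", "m2");
        List<String> emitted = passThrough(messages, String::toUpperCase);
        System.out.println(emitted); // the originals, not the mapped results
    }
}
```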

Graph:


    RunnableGraph graph = RunnableGraph.fromGraph(GraphDSL.create(sinkCommit(), (builder, s) -> {
        builder.from(builder.add(source()).out())
               .via(builder.add(passThroughFlow.apply(transformation().via(transformationMongo()))))
               .via(builder.add(transformationCommit()))
               .to(s);
        return ClosedShape.getInstance();
    }));
    graph.run(ActorMaterializer.create(actorSystem));

    // Insertion into Mongo works, but the packet is still not committed;
    // transformationCommit() is never called.

Best answer

Use MongoFlow instead of MongoSink. This lets you continue the stream after the insert, where you will be able to commit the offset.
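The ordering this relies on is that the offset commit is chained onto the insert's completion signal, so it can only run after the document is in Mongo. A sketch of that dependency with plain CompletableFuture (insertOne and commitOffset here are hypothetical stand-ins for the Mongo insert and the Kafka commit, not alpakka APIs):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class CommitAfterInsert {
    static final List<String> log = new ArrayList<>();

    // Hypothetical stand-in for the Mongo insert; completes asynchronously.
    static CompletableFuture<Void> insertOne(String doc) {
        return CompletableFuture.runAsync(() -> log.add("inserted:" + doc));
    }

    // Hypothetical stand-in for the Kafka offset commit.
    static void commitOffset(long offset) {
        log.add("committed:" + offset);
    }

    public static void main(String[] args) {
        // Chain the commit onto the insert so it can only run afterwards --
        // the same guarantee MongoFlow + Committer.sink gives in the stream.
        insertOne("doc-42")
                .thenRun(() -> commitOffset(42))
                .join();
        System.out.println(log); // insert is logged before the commit
    }
}
```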

Regarding "java - Commit to Kafka consumer after MongoSink response - alpakka mongo connector", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55950058/
