
google-cloud-platform - How to insert data after clearing the cache in Cloud Memorystore using Google Cloud Dataflow?


I am working on a task that clears the Memorystore cache if the input file to be processed by Dataflow has data. That is, if the input file has no records, the Memorystore should not be flushed; but if the input file has even a single record, the Memorystore should be flushed and then the input file should be processed.
My Dataflow application is a multi-pipeline application that reads, processes, and then stores the data in Memorystore. The pipeline executes successfully. The flushing of the Memorystore works, but after the flush, the inserts do not happen.
I wrote a function that flushes the Memorystore after checking whether the input file has records.
FlushingMemorystore.java

package com.click.example.functions;

import org.checkerframework.checker.nullness.qual.Nullable;
import com.google.auto.value.AutoValue;
import org.apache.beam.sdk.io.redis.RedisConnectionConfiguration;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PDone;
import org.apache.beam.vendor.grpc.v1p26p0.com.google.common.base.Preconditions;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

public class FlushingMemorystore {

    private static final Logger LOGGER = LoggerFactory.getLogger(FlushingMemorystore.class);

    public static FlushingMemorystore.Read read() {
        return (new AutoValue_FlushingMemorystore_Read.Builder())
                .setConnectionConfiguration(RedisConnectionConfiguration.create())
                .build();
    }

    @AutoValue
    public abstract static class Read extends PTransform<PCollection<Long>, PDone> {

        public Read() {
        }

        @Nullable
        abstract RedisConnectionConfiguration connectionConfiguration();

        @Nullable
        abstract Long expireTime();

        abstract FlushingMemorystore.Read.Builder toBuilder();

        public FlushingMemorystore.Read withEndpoint(String host, int port) {
            Preconditions.checkArgument(host != null, "host cannot be null");
            Preconditions.checkArgument(port > 0, "port cannot be negative or 0");
            return this.toBuilder()
                    .setConnectionConfiguration(this.connectionConfiguration().withHost(host).withPort(port))
                    .build();
        }

        public FlushingMemorystore.Read withAuth(String auth) {
            Preconditions.checkArgument(auth != null, "auth cannot be null");
            return this.toBuilder()
                    .setConnectionConfiguration(this.connectionConfiguration().withAuth(auth))
                    .build();
        }

        public FlushingMemorystore.Read withTimeout(int timeout) {
            Preconditions.checkArgument(timeout >= 0, "timeout cannot be negative");
            return this.toBuilder()
                    .setConnectionConfiguration(this.connectionConfiguration().withTimeout(timeout))
                    .build();
        }

        public FlushingMemorystore.Read withConnectionConfiguration(RedisConnectionConfiguration connectionConfiguration) {
            Preconditions.checkArgument(connectionConfiguration != null, "connection cannot be null");
            return this.toBuilder().setConnectionConfiguration(connectionConfiguration).build();
        }

        public FlushingMemorystore.Read withExpireTime(Long expireTimeMillis) {
            Preconditions.checkArgument(expireTimeMillis != null, "expireTimeMillis cannot be null");
            Preconditions.checkArgument(expireTimeMillis > 0L, "expireTimeMillis cannot be negative or 0");
            return this.toBuilder().setExpireTime(expireTimeMillis).build();
        }

        public PDone expand(PCollection<Long> input) {
            Preconditions.checkArgument(this.connectionConfiguration() != null,
                    "withConnectionConfiguration() is required");
            input.apply(ParDo.of(new FlushingMemorystore.Read.ReadFn(this)));
            return PDone.in(input.getPipeline());
        }

        private static class ReadFn extends DoFn<Long, String> {
            private static final int DEFAULT_BATCH_SIZE = 1000;

            private final FlushingMemorystore.Read spec;
            private transient Jedis jedis;
            private transient Pipeline pipeline;
            private int batchCount;

            public ReadFn(FlushingMemorystore.Read spec) {
                this.spec = spec;
            }

            @Setup
            public void setup() {
                this.jedis = this.spec.connectionConfiguration().connect();
            }

            @StartBundle
            public void startBundle() {
                this.pipeline = this.jedis.pipelined();
                this.pipeline.multi();
                this.batchCount = 0;
            }

            @ProcessElement
            public void processElement(DoFn<Long, String>.ProcessContext c) {
                Long count = c.element();
                batchCount++;

                // Flush only when the input file actually has records.
                if (count == null || count == 0) {
                    LOGGER.info("No Records are there in the input file");
                } else {
                    if (pipeline.isInMulti()) {
                        pipeline.exec();
                        pipeline.sync();
                        jedis.flushDB();
                    }
                    LOGGER.info("*****The memorystore is flushed*****");
                }
            }

            @FinishBundle
            public void finishBundle() {
                if (this.pipeline.isInMulti()) {
                    this.pipeline.exec();
                    this.pipeline.sync();
                }
                this.batchCount = 0;
            }

            @Teardown
            public void teardown() {
                this.jedis.close();
            }
        }

        @AutoValue.Builder
        abstract static class Builder {

            Builder() {
            }

            abstract FlushingMemorystore.Read.Builder setExpireTime(Long expireTimeMillis);

            abstract FlushingMemorystore.Read build();

            abstract FlushingMemorystore.Read.Builder setConnectionConfiguration(RedisConnectionConfiguration connectionConfiguration);
        }
    }
}
I use this function in the starter pipeline code.
Snippet of the starter pipeline code where the function is used:
StorageToRedisOptions options = PipelineOptionsFactory.fromArgs(args)
        .withValidation()
        .as(StorageToRedisOptions.class);

Pipeline p = Pipeline.create(options);

PCollection<String> lines = p.apply(
        "ReadLines", TextIO.read().from(options.getInputFile()));

/**
 * Flushing the Memorystore if there are records in the input file
 */
lines.apply("Checking Data in input file", Count.globally())
        .apply("Flushing the data store", FlushingMemorystore.read()
                .withConnectionConfiguration(RedisConnectionConfiguration
                        .create(options.getRedisHost(), options.getRedisPort())));
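The StorageToRedisOptions interface is not shown in the question. A minimal sketch of what it could look like, inferred only from the getters used above (getInputFile, getRedisHost, getRedisPort), is:

import org.apache.beam.sdk.options.Default;
import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.PipelineOptions;

// Hypothetical reconstruction of the options interface used in the snippets.
public interface StorageToRedisOptions extends PipelineOptions {

    @Description("Path of the input file to read from")
    String getInputFile();
    void setInputFile(String value);

    @Description("Host of the Memorystore (Redis) instance")
    String getRedisHost();
    void setRedisHost(String value);

    @Description("Port of the Memorystore (Redis) instance")
    @Default.Integer(6379)
    int getRedisPort();
    void setRedisPort(int value);
}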
Code snippet for inserting the processed data after the cache is cleared:
dataset.apply(SOME_DATASET_TRANSFORMATION, RedisIO.write()
        .withMethod(RedisIO.Write.Method.SADD)
        .withConnectionConfiguration(RedisConnectionConfiguration
                .create(options.getRedisHost(), options.getRedisPort())));
The Dataflow job executes fine, and it flushes the Memorystore as well, but the inserts do not happen after that. Could you point out where I am going wrong?
Any solution for resolving the issue is much appreciated. Thanks in advance!
Edit:
Providing the additional information as requested in the comments.
The runtime used is Java 11, with the Apache Beam SDK for Java 2.24.0.
If the input file has records, the data is processed with some logic. For example, if the input file has data such as:
abcabc|Bruce|Wayne|2000
abbabb|Tony|Stark|3423
In that case, Dataflow will count the number of records (2 here), process the id, first name, etc. according to the logic, and then store them in Memorystore. This input file arrives every day, so the Memorystore should be cleared (or flushed) whenever the input file has records. A parsing sketch follows below.
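For illustration only (the actual transformation logic is not shown in the question), a DoFn that turns such pipe-delimited records into the KV<String, String> elements expected by RedisIO.write() could look like this sketch; the field layout is an assumption based on the sample rows above:

import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;

// Hypothetical parser: "abcabc|Bruce|Wayne|2000" -> key "abcabc",
// member "Bruce|Wayne|2000", suitable for RedisIO.Write.Method.SADD.
public class ParseRecordFn extends DoFn<String, KV<String, String>> {

    @ProcessElement
    public void processElement(@Element String line, OutputReceiver<KV<String, String>> out) {
        // Split on the first pipe only: the id vs. the rest of the record.
        String[] fields = line.split("\\|", 2);
        if (fields.length == 2) {
            out.output(KV.of(fields[0], fields[1]));
        }
    }
}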
Although the pipeline is not breaking, I think I am missing something.

Best Answer

I suspect the issue here is that you need to ensure the "flushing" step runs (and finishes) before the RedisIO.write step happens. Beam has the Wait.on transform that you can use for this.
To accomplish this, we can use the output of the flushing PTransform as a signal that we've flushed the database - and we only write to the database after we are done flushing. The process call of your flushing DoFn would look like this:

@ProcessElement
public void processElement(DoFn<Long, String>.ProcessContext c) {
    Long count = c.element();

    // Flush only when the input file actually has records.
    if (count == null || count == 0) {
        LOGGER.info("No Records are there in the input file");
    } else {
        if (pipeline.isInMulti()) {
            pipeline.exec();
            pipeline.sync();
            jedis.flushDB();
        }
        LOGGER.info("*****The memorystore is flushed*****");
    }
    // Emit a signal element once the flush decision for this bundle is done.
    c.output("READY");
}
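Note that FlushingMemorystore.read() as posted returns PDone, so the flushedSignal assignment below would not compile as-is. For the signal to be usable with Wait.on, the Read transform has to return the ParDo's output instead. A minimal sketch of that adjustment (the builder methods and ReadFn stay the same):

// Change the output type of the transform from PDone to PCollection<String> ...
public abstract static class Read extends PTransform<PCollection<Long>, PCollection<String>> {

    // ... builder methods and ReadFn unchanged ...

    // ... and return the ParDo output so downstream steps can Wait.on it.
    @Override
    public PCollection<String> expand(PCollection<Long> input) {
        Preconditions.checkArgument(this.connectionConfiguration() != null,
                "withConnectionConfiguration() is required");
        return input.apply(ParDo.of(new ReadFn(this)));
    }
}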
Once we have a signal indicating that the database has been flushed, we can use it to wait before writing the new data to it:
Pipeline p = Pipeline.create(options);

PCollection<String> lines = p.apply(
        "ReadLines", TextIO.read().from(options.getInputFile()));

/**
 * Flushing the Memorystore if there are records in the input file
 */
PCollection<String> flushedSignal = lines
        .apply("Checking Data in input file", Count.globally())
        .apply("Flushing the data store", FlushingMemorystore.read()
                .withConnectionConfiguration(RedisConnectionConfiguration
                        .create(options.getRedisHost(), options.getRedisPort())));

// Then we use the flushing signal to start writing to Redis:

dataset
        .apply(Wait.on(flushedSignal))
        .apply(SOME_DATASET_TRANSFORMATION, RedisIO.write()
                .withMethod(RedisIO.Write.Method.SADD)
                .withConnectionConfiguration(RedisConnectionConfiguration
                        .create(options.getRedisHost(), options.getRedisPort())));

Regarding google-cloud-platform - How to insert data after clearing the cache in Cloud Memorystore using Google Cloud Dataflow?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/64653112/
