google-cloud-dataflow - Google Dataflow DATA_LOSS exception


I am getting the DATA_LOSS exception below from Google Dataflow. I have 10-15 JSON files (each around 2-3 MB). I parse the files with Jackson 2, apply some transforms with ParDo(), and finally use a GroupByKey to remove duplicates. Could you help me figure out whether I am doing something wrong?

Everything works fine with DirectPipelineRunner.

2016-05-11T13:06:31.277Z: Detail:  (eb15ba3070c2acbc): Checking required Cloud APIs are enabled.
2016-05-11T13:06:31.637Z: Detail: (eb15ba3070c2abc7): Expanding GroupByKey operations into optimizable parts.
2016-05-11T13:06:31.640Z: Detail: (eb15ba3070c2a6b5): Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
2016-05-11T13:06:31.646Z: Detail: (eb15ba3070c2a77f): Annotating graph with Autotuner information.
2016-05-11T13:06:31.732Z: Detail: (eb15ba3070c2a5c0): Fusing adjacent ParDo, Read, Write, and Flatten operations
2016-05-11T13:06:31.735Z: Detail: (eb15ba3070c2a0ae): Fusing consumer ParDo(ParserEdition) into ReadEditions4GCS
2016-05-11T13:06:31.737Z: Detail: (eb15ba3070c2ab9c): Fusing consumer ParDo(GetRelatedArticles) into ParDo(FlattenArticles)
2016-05-11T13:06:31.739Z: Detail: (eb15ba3070c2a68a): Fusing consumer GroupByKey/GroupByWindow into GroupByKey/Read
2016-05-11T13:06:31.741Z: Detail: (eb15ba3070c2a178): Fusing consumer Write2Gcs/FileBasedSink.ReshardForWrite/Ungroup into Write2Gcs/FileBasedSink.ReshardForWrite/GroupByKey/GroupByWindow
2016-05-11T13:06:31.743Z: Detail: (eb15ba3070c2ac66): Fusing consumer Write2Gcs/FileBasedSink.ReshardForWrite/GroupByKey/GroupByWindow into Write2Gcs/FileBasedSink.ReshardForWrite/GroupByKey/Read
2016-05-11T13:06:31.745Z: Detail: (eb15ba3070c2a754): Fusing consumer Write2Gcs/Write2Gcs into Write2Gcs/FileBasedSink.ReshardForWrite/Ungroup
2016-05-11T13:06:31.747Z: Detail: (eb15ba3070c2a242): Fusing consumer Write2Gcs/FileBasedSink.ReshardForWrite/GroupByKey/Write into Write2Gcs/FileBasedSink.ReshardForWrite/GroupByKey/Reify
2016-05-11T13:06:31.750Z: Detail: (eb15ba3070c2ad30): Fusing consumer Write2Gcs/FileBasedSink.ReshardForWrite/GroupByKey/Reify into Write2Gcs/FileBasedSink.ReshardForWrite/RandomKey
2016-05-11T13:06:31.752Z: Detail: (eb15ba3070c2a81e): Fusing consumer Write2Gcs/FileBasedSink.ReshardForWrite/RandomKey into Write2Gcs/FileBasedSink.ReshardForWrite/Window.Into()
2016-05-11T13:06:31.754Z: Detail: (eb15ba3070c2a30c): Fusing consumer Write2Gcs/FileBasedSink.ReshardForWrite/Window.Into() into ParDo(Article2CSV)
2016-05-11T13:06:31.757Z: Detail: (eb15ba3070c2adfa): Fusing consumer ParDo(Article2CSV) into AnonymousParDo
2016-05-11T13:06:31.759Z: Detail: (eb15ba3070c2a8e8): Fusing consumer GroupByKey/Write into GroupByKey/Reify
2016-05-11T13:06:31.761Z: Detail: (eb15ba3070c2a3d6): Fusing consumer AnonymousParDo into GroupByKey/GroupByWindow
2016-05-11T13:06:31.763Z: Detail: (eb15ba3070c2aec4): Fusing consumer GroupByKey/Reify into ParDo(Article2KV)
2016-05-11T13:06:31.765Z: Detail: (eb15ba3070c2a9b2): Fusing consumer ParDo(FlattenArticles) into ParDo(ParserEdition)
2016-05-11T13:06:31.768Z: Detail: (eb15ba3070c2a4a0): Fusing consumer ParDo(Article2KV) into ParDo(GetRelatedArticles)
2016-05-11T13:06:31.815Z: Basic: (eb15ba3070c2aa26): Worker configuration: n1-standard-1 in us-central1-f.
2016-05-11T13:06:32.154Z: Detail: (eb15ba3070c2a931): Adding StepResource setup and teardown to workflow graph.
2016-05-11T13:06:32.262Z: Basic: (120e40c18a94ee3a): Starting 3 workers...
2016-05-11T13:06:32.272Z: Basic: S01: (b31e9392dace1359): Executing operation GroupByKey/Create
2016-05-11T13:06:32.504Z: Basic: S02: (27044e90035e1dd6): Executing operation ReadEditions4GCS+ParDo(ParserEdition)+ParDo(FlattenArticles)+ParDo(GetRelatedArticles)+ParDo(Article2KV)+GroupByKey/Reify+GroupByKey/Write
2016-05-11T13:07:11.352Z: Detail: (e26d7dfd74bb5700): Workers have started successfully.
2016-05-11T13:07:23.464Z: Error: (91724060ab73dbcb): java.io.IOException: DATA_LOSS: Inconsistent number of records, parsed 108, expected 109 when dataflow-articlemetadatapipeline-g-05110606-31f5-harness-cmwd talking to tcp://localhost:12345
at com.google.cloud.dataflow.sdk.runners.worker.ApplianceShuffleWriter.write(Native Method)
at com.google.cloud.dataflow.sdk.runners.worker.ChunkingShuffleEntryWriter.writeChunk(ChunkingShuffleEntryWriter.java:72)
at com.google.cloud.dataflow.sdk.runners.worker.ChunkingShuffleEntryWriter.close(ChunkingShuffleEntryWriter.java:66)
at com.google.cloud.dataflow.sdk.runners.worker.ShuffleSink$ShuffleSinkWriter.close(ShuffleSink.java:272)
at com.google.cloud.dataflow.sdk.util.common.worker.WriteOperation.finish(WriteOperation.java:100)
at com.google.cloud.dataflow.sdk.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:77)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.executeWork(DataflowWorker.java:254)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.doWork(DataflowWorker.java:191)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:144)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.doWork(DataflowWorkerHarness.java:180)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:161)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:148)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

If I run the same code multiple times, I also get a slightly different exception:

2016-05-11T13:00:27.649Z: Detail:  (7ad6fdbb36cc3e7a): Checking required Cloud APIs are enabled.
2016-05-11T13:00:27.994Z: Detail: (7ad6fdbb36cc3ed9): Expanding GroupByKey operations into optimizable parts.
2016-05-11T13:00:27.998Z: Detail: (7ad6fdbb36cc350f): Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
2016-05-11T13:00:28.009Z: Detail: (7ad6fdbb36cc37b1): Annotating graph with Autotuner information.
2016-05-11T13:00:28.106Z: Detail: (7ad6fdbb36cc356e): Fusing adjacent ParDo, Read, Write, and Flatten operations
2016-05-11T13:00:28.110Z: Detail: (7ad6fdbb36cc3ba4): Fusing consumer ParDo(ParserEdition) into ReadEditions4GCS
2016-05-11T13:00:28.112Z: Detail: (7ad6fdbb36cc31da): Fusing consumer ParDo(GetRelatedArticles) into ParDo(FlattenArticles)
2016-05-11T13:00:28.114Z: Detail: (7ad6fdbb36cc3810): Fusing consumer GroupByKey/GroupByWindow into GroupByKey/Read
2016-05-11T13:00:28.117Z: Detail: (7ad6fdbb36cc3e46): Fusing consumer Write2Gcs/FileBasedSink.ReshardForWrite/Ungroup into Write2Gcs/FileBasedSink.ReshardForWrite/GroupByKey/GroupByWindow
2016-05-11T13:00:28.120Z: Detail: (7ad6fdbb36cc347c): Fusing consumer Write2Gcs/FileBasedSink.ReshardForWrite/GroupByKey/GroupByWindow into Write2Gcs/FileBasedSink.ReshardForWrite/GroupByKey/Read
2016-05-11T13:00:28.124Z: Detail: (7ad6fdbb36cc3ab2): Fusing consumer Write2Gcs/Write2Gcs into Write2Gcs/FileBasedSink.ReshardForWrite/Ungroup
2016-05-11T13:00:28.127Z: Detail: (7ad6fdbb36cc30e8): Fusing consumer Write2Gcs/FileBasedSink.ReshardForWrite/GroupByKey/Write into Write2Gcs/FileBasedSink.ReshardForWrite/GroupByKey/Reify
2016-05-11T13:00:28.129Z: Detail: (7ad6fdbb36cc371e): Fusing consumer Write2Gcs/FileBasedSink.ReshardForWrite/GroupByKey/Reify into Write2Gcs/FileBasedSink.ReshardForWrite/RandomKey
2016-05-11T13:00:28.132Z: Detail: (7ad6fdbb36cc3d54): Fusing consumer Write2Gcs/FileBasedSink.ReshardForWrite/RandomKey into Write2Gcs/FileBasedSink.ReshardForWrite/Window.Into()
2016-05-11T13:00:28.135Z: Detail: (7ad6fdbb36cc338a): Fusing consumer Write2Gcs/FileBasedSink.ReshardForWrite/Window.Into() into ParDo(Article2CSV)
2016-05-11T13:00:28.137Z: Detail: (7ad6fdbb36cc39c0): Fusing consumer ParDo(Article2CSV) into AnonymousParDo
2016-05-11T13:00:28.139Z: Detail: (7ad6fdbb36cc3ff6): Fusing consumer GroupByKey/Write into GroupByKey/Reify
2016-05-11T13:00:28.141Z: Detail: (7ad6fdbb36cc362c): Fusing consumer AnonymousParDo into GroupByKey/GroupByWindow
2016-05-11T13:00:28.144Z: Detail: (7ad6fdbb36cc3c62): Fusing consumer GroupByKey/Reify into ParDo(Article2KV)
2016-05-11T13:00:28.146Z: Detail: (7ad6fdbb36cc3298): Fusing consumer ParDo(FlattenArticles) into ParDo(ParserEdition)
2016-05-11T13:00:28.148Z: Detail: (7ad6fdbb36cc38ce): Fusing consumer ParDo(Article2KV) into ParDo(GetRelatedArticles)
2016-05-11T13:00:28.196Z: Basic: (7ad6fdbb36cc3b3c): Worker configuration: n1-standard-1 in us-central1-f.
2016-05-11T13:00:28.459Z: Detail: (7ad6fdbb36cc3b9b): Adding StepResource setup and teardown to workflow graph.
2016-05-11T13:00:28.639Z: Basic: (cea9ab4bd124bf89): Starting 3 workers...
2016-05-11T13:00:28.658Z: Basic: S01: (e5a53851aa035056): Executing operation GroupByKey/Create
2016-05-11T13:00:28.896Z: Basic: S02: (5803a8f4cae47397): Executing operation ReadEditions4GCS+ParDo(ParserEdition)+ParDo(FlattenArticles)+ParDo(GetRelatedArticles)+ParDo(Article2KV)+GroupByKey/Reify+GroupByKey/Write
2016-05-11T13:01:12.228Z: Detail: (5d4a90d7ea1437dd): Workers have started successfully.
2016-05-11T13:01:22.911Z: Error: (f5a249985c78da4a): com.google.cloud.dataflow.sdk.util.UserCodeException: java.lang.RuntimeException: java.io.IOException: INVALID_ARGUMENT: unable to parse secondary key
at com.google.cloud.dataflow.sdk.util.DoFnRunner.invokeProcessElement(DoFnRunner.java:193)
at com.google.cloud.dataflow.sdk.util.DoFnRunner.processElement(DoFnRunner.java:171)
at com.google.cloud.dataflow.sdk.runners.worker.ParDoFnBase.processElement(ParDoFnBase.java:213)
at com.google.cloud.dataflow.sdk.util.common.worker.ParDoOperation.process(ParDoOperation.java:53)
at com.google.cloud.dataflow.sdk.util.common.worker.OutputReceiver.process(OutputReceiver.java:52)
at com.google.cloud.dataflow.sdk.runners.worker.ParDoFnBase$1.output(ParDoFnBase.java:174)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnContext.outputWindowedValue(DoFnRunner.java:333)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnProcessContext.output(DoFnRunner.java:487)
at uk.news.pipeline.api.ArticleMetaDataPipeline$Article2KV.processElement(ArticleMetaDataPipeline.java:147)
at com.google.cloud.dataflow.sdk.util.DoFnRunner.invokeProcessElement(DoFnRunner.java:189)
at com.google.cloud.dataflow.sdk.util.DoFnRunner.processElement(DoFnRunner.java:171)
at com.google.cloud.dataflow.sdk.runners.worker.ParDoFnBase.processElement(ParDoFnBase.java:213)
at com.google.cloud.dataflow.sdk.util.common.worker.ParDoOperation.process(ParDoOperation.java:53)
at com.google.cloud.dataflow.sdk.util.common.worker.OutputReceiver.process(OutputReceiver.java:52)
at com.google.cloud.dataflow.sdk.runners.worker.ParDoFnBase$1.output(ParDoFnBase.java:174)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnContext.outputWindowedValue(DoFnRunner.java:333)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnProcessContext.output(DoFnRunner.java:487)
at uk.news.pipeline.api.ArticleMetaDataPipeline$GetRelatedArticles.lambda$processElement$0(ArticleMetaDataPipeline.java:71)
at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:512)
at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1689)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
Caused by: java.lang.RuntimeException: java.io.IOException: INVALID_ARGUMENT: unable to parse secondary key
at com.google.cloud.dataflow.sdk.repackaged.com.google.common.base.Throwables.propagate(Throwables.java:160)
at com.google.cloud.dataflow.sdk.runners.worker.ParDoFnBase$1.output(ParDoFnBase.java:176)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnContext.outputWindowedValue(DoFnRunner.java:333)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnProcessContext.output(DoFnRunner.java:487)
at com.google.cloud.dataflow.sdk.util.ReifyTimestampAndWindowsDoFn.processElement(ReifyTimestampAndWindowsDoFn.java:38)
Caused by: java.io.IOException: INVALID_ARGUMENT: unable to parse secondary key
at com.google.cloud.dataflow.sdk.runners.worker.ApplianceShuffleWriter.write(Native Method)
at com.google.cloud.dataflow.sdk.runners.worker.ChunkingShuffleEntryWriter.writeChunk(ChunkingShuffleEntryWriter.java:72)
at com.google.cloud.dataflow.sdk.runners.worker.ChunkingShuffleEntryWriter.put(ChunkingShuffleEntryWriter.java:56)
at com.google.cloud.dataflow.sdk.runners.worker.ShuffleSink$ShuffleSinkWriter.add(ShuffleSink.java:263)
at com.google.cloud.dataflow.sdk.runners.worker.ShuffleSink$ShuffleSinkWriter.add(ShuffleSink.java:169)
at com.google.cloud.dataflow.sdk.util.common.worker.WriteOperation.process(WriteOperation.java:90)
at com.google.cloud.dataflow.sdk.util.common.worker.OutputReceiver.process(OutputReceiver.java:52)
at com.google.cloud.dataflow.sdk.runners.worker.ParDoFnBase$1.output(ParDoFnBase.java:174)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnContext.outputWindowedValue(DoFnRunner.java:333)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnProcessContext.output(DoFnRunner.java:487)
at com.google.cloud.dataflow.sdk.util.ReifyTimestampAndWindowsDoFn.processElement(ReifyTimestampAndWindowsDoFn.java:38)
at com.google.cloud.dataflow.sdk.util.DoFnRunner.invokeProcessElement(DoFnRunner.java:189)
at com.google.cloud.dataflow.sdk.util.DoFnRunner.processElement(DoFnRunner.java:171)
at com.google.cloud.dataflow.sdk.runners.worker.ParDoFnBase.processElement(ParDoFnBase.java:213)
at com.google.cloud.dataflow.sdk.util.common.worker.ParDoOperation.process(ParDoOperation.java:53)
at com.google.cloud.dataflow.sdk.util.common.worker.OutputReceiver.process(OutputReceiver.java:52)
at com.google.cloud.dataflow.sdk.runners.worker.ParDoFnBase$1.output(ParDoFnBase.java:174)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnContext.outputWindowedValue(DoFnRunner.java:333)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnProcessContext.output(DoFnRunner.java:487)
at uk.news.pipeline.api.ArticleMetaDataPipeline$Article2KV.processElement(ArticleMetaDataPipeline.java:147)
at com.google.cloud.dataflow.sdk.util.DoFnRunner.invokeProcessElement(DoFnRunner.java:189)
at com.google.cloud.dataflow.sdk.util.DoFnRunner.processElement(DoFnRunner.java:171)
at com.google.cloud.dataflow.sdk.runners.worker.ParDoFnBase.processElement(ParDoFnBase.java:213)
at com.google.cloud.dataflow.sdk.util.common.worker.ParDoOperation.process(ParDoOperation.java:53)
at com.google.cloud.dataflow.sdk.util.common.worker.OutputReceiver.process(OutputReceiver.java:52)
at com.google.cloud.dataflow.sdk.runners.worker.ParDoFnBase$1.output(ParDoFnBase.java:174)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnContext.outputWindowedValue(DoFnRunner.java:333)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnProcessContext.output(DoFnRunner.java:487)
at uk.news.pipeline.api.ArticleMetaDataPipeline$GetRelatedArticles.lambda$processElement$0(ArticleMetaDataPipeline.java:71)
at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:512)
at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1689)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)

2016-05-11T13:01:25.776Z: Error: (e9a78cb2969ddea0): java.lang.RuntimeException: java.io.IOException: DATA_LOSS: Inconsistent number of records, parsed 97, expected 98 when dataflow-articlemetadatapipeline-g-05110600-a71d-harness-p0a2 talking to tcp://localhost:12345
at com.google.cloud.dataflow.sdk.repackaged.com.google.common.base.Throwables.propagate(Throwables.java:160)
at com.google.cloud.dataflow.sdk.runners.worker.ParDoFnBase$1.output(ParDoFnBase.java:176)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnContext.outputWindowedValue(DoFnRunner.java:333)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnProcessContext.output(DoFnRunner.java:487)
at com.google.cloud.dataflow.sdk.util.ReifyTimestampAndWindowsDoFn.processElement(ReifyTimestampAndWindowsDoFn.java:38)
Caused by: java.io.IOException: DATA_LOSS: Inconsistent number of records, parsed 97, expected 98 when dataflow-articlemetadatapipeline-g-05110600-a71d-harness-p0a2 talking to tcp://localhost:12345
at com.google.cloud.dataflow.sdk.runners.worker.ApplianceShuffleWriter.write(Native Method)
at com.google.cloud.dataflow.sdk.runners.worker.ChunkingShuffleEntryWriter.writeChunk(ChunkingShuffleEntryWriter.java:72)
at com.google.cloud.dataflow.sdk.runners.worker.ChunkingShuffleEntryWriter.put(ChunkingShuffleEntryWriter.java:56)
at com.google.cloud.dataflow.sdk.runners.worker.ShuffleSink$ShuffleSinkWriter.add(ShuffleSink.java:263)
at com.google.cloud.dataflow.sdk.runners.worker.ShuffleSink$ShuffleSinkWriter.add(ShuffleSink.java:169)
at com.google.cloud.dataflow.sdk.util.common.worker.WriteOperation.process(WriteOperation.java:90)
at com.google.cloud.dataflow.sdk.util.common.worker.OutputReceiver.process(OutputReceiver.java:52)
at com.google.cloud.dataflow.sdk.runners.worker.ParDoFnBase$1.output(ParDoFnBase.java:174)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnContext.outputWindowedValue(DoFnRunner.java:333)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnProcessContext.output(DoFnRunner.java:487)
at com.google.cloud.dataflow.sdk.util.ReifyTimestampAndWindowsDoFn.processElement(ReifyTimestampAndWindowsDoFn.java:38)
at com.google.cloud.dataflow.sdk.util.DoFnRunner.invokeProcessElement(DoFnRunner.java:189)
at com.google.cloud.dataflow.sdk.util.DoFnRunner.processElement(DoFnRunner.java:171)
at com.google.cloud.dataflow.sdk.runners.worker.ParDoFnBase.processElement(ParDoFnBase.java:213)
at com.google.cloud.dataflow.sdk.util.common.worker.ParDoOperation.process(ParDoOperation.java:53)
at com.google.cloud.dataflow.sdk.util.common.worker.OutputReceiver.process(OutputReceiver.java:52)
at com.google.cloud.dataflow.sdk.runners.worker.ParDoFnBase$1.output(ParDoFnBase.java:174)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnContext.outputWindowedValue(DoFnRunner.java:333)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnProcessContext.output(DoFnRunner.java:487)
at uk.news.pipeline.api.ArticleMetaDataPipeline$Article2KV.processElement(ArticleMetaDataPipeline.java:147)
at com.google.cloud.dataflow.sdk.util.DoFnRunner.invokeProcessElement(DoFnRunner.java:189)
at com.google.cloud.dataflow.sdk.util.DoFnRunner.processElement(DoFnRunner.java:171)
at com.google.cloud.dataflow.sdk.runners.worker.ParDoFnBase.processElement(ParDoFnBase.java:213)
at com.google.cloud.dataflow.sdk.util.common.worker.ParDoOperation.process(ParDoOperation.java:53)
at com.google.cloud.dataflow.sdk.util.common.worker.OutputReceiver.process(OutputReceiver.java:52)
at com.google.cloud.dataflow.sdk.runners.worker.ParDoFnBase$1.output(ParDoFnBase.java:174)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnContext.outputWindowedValue(DoFnRunner.java:333)
at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnProcessContext.output(DoFnRunner.java:487)
at uk.news.pipeline.api.ArticleMetaDataPipeline$GetRelatedArticles.processElement(ArticleMetaDataPipeline.java:69)
at com.google.cloud.dataflow.sdk.util.DoFnRunner.invokeProcessElement(DoFnRunner.java:189)

....

Code:

static class ParserEdition extends DoFn<String, Edition> {
    @Override
    public void processElement(ProcessContext c) throws Exception {
        final String editionStr = c.element();
        ObjectMapper mapper = new ObjectMapper();
        ObjectReader reader = mapper.reader(Edition.class);
        final Object editionObj = reader.readValue(editionStr);
        c.output((Edition) editionObj);
    }
}

static class FlattenArticles extends DoFn<Edition, Article> {

    @Override
    public void processElement(ProcessContext c) throws Exception {
        final List<Article> articleList = c.element().getArticleList();
        for (Article a : articleList) {
            c.output(a);
        }
    }
}

static class GetRelatedArticles extends DoFn<Article, Article> {

    @Override
    public void processElement(ProcessContext c) throws Exception {
        final Article tArticle = c.element();
        if (tArticle.getCategory().equals("article")) {
            Article cloneArticle = (Article) SerializationUtils.clone(tArticle);
            cloneArticle.setImage(getRelatedImage(tArticle));
            c.output(cloneArticle);
            final List<Article> relateArticle = getRelateArticle(tArticle, 5);
            relateArticle.parallelStream().forEach(a -> c.output(a));
        }
    }

    // Recursively collects related articles, up to a maximum nesting depth of i.
    public List<Article> getRelateArticle(Article art, int i) {
        List<Article> list = new ArrayList<>();
        if (i <= 0 || art.getArticleList() == null) {
            return null;
        } else {
            for (Article a : art.getArticleList()) {
                if (a.getCategory().equals("article")) {
                    Article cloneArticle = (Article) SerializationUtils.clone(a);
                    cloneArticle.setImage(getRelatedImage(a));
                    list.add(cloneArticle);
                    final List<Article> relateArticle = getRelateArticle(a, i - 1);
                    if (relateArticle != null) {
                        list.addAll(relateArticle);
                    }
                }
            }
        }
        return list;
    }

    // Looks up the lead image for an article among its child assets.
    public Image getRelatedImage(Article art) {
        Image image = new Image();
        try {
            final Article article = art.getArticleList().parallelStream().filter(
                    a -> (a.getCategory().equals("image") && a.getIdentifier().equals(art.getLeadAssetId())))
                    .findFirst().get();
            if (article != null) {
                image.setId(article.getIdentifier());
                image.setImageUrl(URLEncoder.encode(article.getCrops().get(0).getImageId(), Charset.defaultCharset().name()));
            }
        } catch (Exception e) { }
        return image;
    }
}

static class Article2CSV extends DoFn<Article, String> {

    private String delimiter;

    Article2CSV(String delimiter) {
        this.delimiter = delimiter;
    }

    @Override
    public void processElement(ProcessContext c) throws Exception {
        final Article a = c.element();
        String str = a.getIdentifier() + delimiter + a.getTitle() + delimiter + getTeaserText(a) +
                delimiter + a.getPublished() + delimiter + a.getLeadAssetId() +
                delimiter + a.getImage().getImageUrl();
        c.output(str);
    }

    private String getTeaserText(Article a) {
        String teaser = "";
        if (!a.getContent().isEmpty()) {
            for (Content c : a.getContent()) {
                if (teaser.length() <= 100) {
                    teaser = teaser + c.getData().getText();
                }
            }
        }
        return teaser;
    }
}


static class Article2KV extends DoFn<Article, KV<String, Article>> {
    @Override
    public void processElement(ProcessContext c) throws Exception {
        final Article art = c.element();
        if (art != null && !StringUtils.isBlank(art.getIdentifier()))
            c.output(KV.of(art.getIdentifier(), art));
    }
}

........


PipelineOptionsFactory.register(ArticleMetaDataOptions.class);
DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args).as(ArticleMetaDataOptions.class);

final ArticleMetaDataOptions opts = (ArticleMetaDataOptions) options;
if (!opts.isTestMode())
    options.setRunner(BlockingDataflowPipelineRunner.class);

options.setDefaultWorkerLogLevel(DataflowWorkerLoggingOptions.Level.DEBUG);
Pipeline p = Pipeline.create(options);

final PCollection<String> edition4GCS = p.apply(TextIO.Read.named("ReadEditions4GCS")
        .from("gs://editions-newsuk/*"));

// get articles from all the editions
final PCollection<Article> articlePCollection = edition4GCS.apply(ParDo.of(new ParserEdition()))
        .apply(ParDo.of(new FlattenArticles()));

// get related articles
final PCollection<Article> articles = articlePCollection.apply(ParDo.of(new GetRelatedArticles()));

// convert into KV
final PCollection<KV<String, Article>> articlesKV = articles.apply(ParDo.of(new Article2KV()));

// GroupByKey *** if the code below this line is commented out, the pipeline always works...
final PCollection<KV<String, Iterable<Article>>> groupByCollection = articlesKV.apply(GroupByKey.<String, Article>create());

// filter the duplicate/partial articles
PCollection<Article> filterArticles = groupByCollection.apply(ParDo.of(new DoFn<KV<String, Iterable<Article>>, Article>() {
    public void processElement(ProcessContext c) {
        String articleId = c.element().getKey();
        Iterable<Article> arts = c.element().getValue();
        boolean found = false;
        Article article = null;
        if (arts != null) {
            for (Article at : arts) {
                article = at;
                if (at != null && !StringUtils.isBlank(at.getImage().getImageUrl())) {
                    found = true;
                    c.output(at);
                    break;
                }
            }
            if (!found) {
                c.output(article);
            }
        }
    }
}));

// transform into CSV and persist to GCS
filterArticles.apply(ParDo.of(new Article2CSV(opts.getDelimiter())))
        .apply(TextIO.Write.named("Write2Gcs").withoutSharding().to(opts.getOutputLocation()));

Best Answer

Calls to DoFn.Context#output must be synchronized and completed before returning from the associated startBundle, processElement, or finishBundle method.

In the code you shared, you appear to be outputting elements with someList.parallelStream().forEach(e -> c.output(e)). This use of parallelStream violates that requirement.

Using a regular (non-parallel) forEach should prevent these issues.
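
For example, here is a minimal sketch of the fix applied to GetRelatedArticles.processElement from the question. Only the output loop changes; a null check is also added, since the question's getRelateArticle helper can return null:

// Inside GetRelatedArticles.processElement(ProcessContext c):
final List<Article> relateArticle = getRelateArticle(tArticle, 5);
if (relateArticle != null) {
    // A plain sequential forEach keeps every c.output() call on the thread
    // that invoked processElement, as the Dataflow SDK requires.
    relateArticle.forEach(c::output);
}

If the recursive collection itself benefits from parallelism, you can still build the list on multiple threads and then emit it with a sequential loop afterwards; the requirement is only that the output calls themselves complete on the calling thread before processElement returns.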

Regarding "google-cloud-dataflow - Google Dataflow DATA_LOSS exception", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/37163200/
