scala - org.apache.spark.SparkException: Job aborted due to stage failure: Task 98 in stage 11.0 failed 4 times


I am using Google Cloud Dataproc for my job, and my editor is Zeppelin. I am trying to write JSON data to a GCP bucket. It succeeded when I tried a 10 MB file before, but it failed with a 10 GB file. My Dataproc cluster has 1 master with 4 CPUs, 26 GB of memory, and a 500 GB disk, plus 5 workers with the same configuration. I would think it should be able to handle 10 GB of data.

My command is: toDatabase.repartition(10).write.json("gs://mypath")
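
For context, a minimal sketch of the notebook flow (the source read is hypothetical; toDatabase stands for whatever DataFrame was built earlier in the Zeppelin session):

    import org.apache.spark.sql.DataFrame

    // Hypothetical upstream step; in Zeppelin, `spark` is the predefined SparkSession.
    val toDatabase: DataFrame = spark.read.json("gs://my-source-bucket/input/")

    // Shuffle into 10 partitions and write JSON to the bucket:
    // succeeds for the 10 MB input, fails for the 10 GB input.
    toDatabase.repartition(10).write.json("gs://mypath")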
The error is:

org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:224)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:154)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:656)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:656)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:656)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
at org.apache.spark.sql.DataFrameWriter.json(DataFrameWriter.scala:528)
... 54 elided
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 98 in stage 11.0 failed 4 times, most recent failure: Lost task 98.3 in stage 11.0 (TID 3895, etl-w-2.us-east1-b.c.team-etl-234919.internal, executor 294): ExecutorLostFailure (executor 294 exited caused by one of the running tasks) Reason: Container marked as failed: container_1554684028327_0001_01_000307 on host: etl-w-2.us-east1-b.c.team-etl-234919.internal. Exit status: 143. Diagnostics: [2019-04-08 01:50:14.153]Container killed on request. Exit code is 143
[2019-04-08 01:50:14.153]Container exited with a non-zero exit code 143.
[2019-04-08 01:50:14.154]Killed by external signal

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1651)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1639)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1638)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1638)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1872)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1821)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1810)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:194)
... 74 more

Any idea why?

Best Answer

If your Spark workers run out of memory on the larger dataset but not the smaller one, the problem is most likely per-worker memory limits. Per-worker memory pressure is more a function of your partitioning and per-executor settings than of the total memory available cluster-wide (so creating a bigger cluster will not help this type of problem).
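
As a rough back-of-the-envelope check (an approximation that ignores Spark's internal memory fractions and overheads):

    memory per concurrent task ≈ spark.executor.memory / spark.executor.cores

With repartition(10), each of the 10 write tasks has to process on the order of 1 GB of rows while serializing JSON, so a single task can exceed its share of executor memory even though the cluster as a whole has far more.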

You can try any combination of the following:

  • Repartition into a larger number of partitions on output instead of 10 (see the sketch after this list)
  • Create the cluster with highmem machine types instead of standard ones
  • Create the cluster with Spark memory settings that change the ratio of memory to CPUs: e.g. gcloud dataproc clusters create --properties spark:spark.executor.cores=1 will change each executor to run only one task at a time with the same amount of memory, whereas Dataproc normally runs 2 executors per machine and divides the CPUs accordingly. On a 4-core machine you would normally get 2 executors, each allowed 2 cores; this setting instead gives each of those two executors only 1 core while still using half of the machine's memory.
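
For the first suggestion, a minimal Scala sketch (the count of 100 is a placeholder; for a 10 GB dataset it works out to roughly 100 MB per partition):

    // More, smaller output partitions mean each write task serializes a
    // smaller slice of the data, lowering per-task memory pressure.
    toDatabase
      .repartition(100)    // placeholder count; aim for ~100-200 MB per partition
      .write
      .mode("overwrite")   // optional: clear partial output left by the failed run
      .json("gs://mypath")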
A similar question can be found on Stack Overflow: https://stackoverflow.com/questions/55565364/
