
apache-spark - 3600 seconds timeout for Spark worker communicating with Spark driver in heartbeater

Reposted · Author: 行者123 · Updated: 2023-12-03 09:28:58

I have not configured any timeout values; I am using the default settings.
Where is this 3600-second timeout configured, and how can I fix it?

The error message:

18/01/10 13:51:44 WARN Executor: Issue communicating with driver in heartbeater
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [3600 seconds]. This timeout is controlled by spark.executor.heartbeatInterval
at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:47)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:62)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:58)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:76)
at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:92)
at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$reportHeartBeat(Executor.scala:738)
at org.apache.spark.executor.Executor$$anon$2$$anonfun$run$1.apply$mcV$sp(Executor.scala:767)
at org.apache.spark.executor.Executor$$anon$2$$anonfun$run$1.apply(Executor.scala:767)
at org.apache.spark.executor.Executor$$anon$2$$anonfun$run$1.apply(Executor.scala:767)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1948)
at org.apache.spark.executor.Executor$$anon$2.run(Executor.scala:767)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [3600 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:201)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
... 14 more

Best Answer

The error message says:

This timeout is controlled by spark.executor.heartbeatInterval

So the first thing to try is increasing this value. That can be done in several ways, for example raising it to 10000 seconds:
  • When using spark-submit, just add the flag:
    --conf spark.executor.heartbeatInterval=10000s
  • You can add a line to spark-defaults.conf:
    spark.executor.heartbeatInterval 10000s
  • When creating a new SparkSession in your program, add a config parameter (Scala):
    val spark = SparkSession.builder
      .config("spark.executor.heartbeatInterval", "10000s")
      .getOrCreate()

  • If that does not help, it may be a good idea to also try increasing the value of spark.network.timeout. It is another common source of problems related to these kinds of timeouts.
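Putting the options above together, a spark-submit invocation that raises both settings might look like the sketch below. The application jar name and the exact values are illustrative assumptions, not taken from the question:

```shell
# Illustrative sketch: raise both timeouts in one spark-submit call.
# The Spark configuration docs note that spark.executor.heartbeatInterval
# should be significantly less than spark.network.timeout, so the network
# timeout here is kept larger than the heartbeat interval.
spark-submit \
  --conf spark.executor.heartbeatInterval=10000s \
  --conf spark.network.timeout=12000s \
  your-app.jar  # hypothetical application jar
```

The same two keys can equally go into spark-defaults.conf or the SparkSession builder shown above; spark-submit flags take precedence over spark-defaults.conf.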

    Regarding "apache-spark - 3600 seconds timeout for Spark worker communicating with Spark driver in heartbeater", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48219169/
