scala - Why does launching my Spark Streaming application give "Container exited with a non-zero exit code 50"?
When I run my Spark code on the cluster, the line
info.foreachRDD(rdd => if (!rdd.isEmpty()) rdd.foreach(a => println(a._1)))
produces an error message (see below).
What am I doing wrong, and what does the error message mean?
// Map each record to (key, (min candidate, max candidate, count = 1, items))
val info = dstreamdata.map[(String, (Long, Long, Long, List[String]))](a => {
  (a._1, (a._2._1, a._2._2, 1, a._2._3))
}).reduceByKey((a, b) => {
  // Merge per key: running min, running max, summed count, concatenated lists
  (Math.min(a._1, b._1), Math.max(a._2, b._2), a._3 + b._3, a._4 ++ b._4)
}).updateStateByKey(Utils.updateState) // carry state across batches

// Print the key of every record in each non-empty batch RDD
info.foreachRDD(rdd => if (!rdd.isEmpty()) rdd.foreach(a => println(a._1)))
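(Utils.updateState is not included in the question. Purely for reference, a state-update function passed to updateStateByKey for this value type would have roughly the following shape; this is a hypothetical sketch that reuses the merge logic of the reduceByKey step above, not the asker's actual code.)

// Hypothetical sketch only; the real Utils.updateState is not shown.
def updateState(
    newValues: Seq[(Long, Long, Long, List[String])],
    state: Option[(Long, Long, Long, List[String])])
  : Option[(Long, Long, Long, List[String])] = {
  val all = state.toSeq ++ newValues
  if (all.isEmpty) None // no new data and no prior state: drop the key
  else Some(all.reduce((a, b) =>
    // same merge as reduceByKey: min, max, summed count, joined lists
    (Math.min(a._1, b._1), Math.max(a._2, b._2), a._3 + b._3, a._4 ++ b._4)))
}

The reported error output follows.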
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1328)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.take(RDD.scala:1302)
at org.apache.spark.rdd.RDD$$anonfun$isEmpty$1.apply$mcZ$sp(RDD.scala:1430)
at org.apache.spark.rdd.RDD$$anonfun$isEmpty$1.apply(RDD.scala:1430)
at org.apache.spark.rdd.RDD$$anonfun$isEmpty$1.apply(RDD.scala:1430)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.isEmpty(RDD.scala:1429)
at org.consumer.kafka.KafkaTestConsumer$$anonfun$run$2.apply(KafkaTestConsumer.scala:155)
at org.consumer.kafka.KafkaTestConsumer$$anonfun$run$2.apply(KafkaTestConsumer.scala:155)
at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
at scala.util.Try$.apply(Try.scala:161)
at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
16/11/23 19:17:46 ERROR ApplicationMaster: User class threw exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 51.0 failed 4 times, most recent failure: Lost task 1.3 in stage 51.0 (TID 476, ip-175-33-333-6.eu-east-1.compute.internal): ExecutorLostFailure (executor 2 exited caused by one of the running tasks) Reason: Container marked as failed: container_1479883845484_0065_01_000003 on host: ip-175-33-333-6.eu-east-1.compute.internal. Exit status: 50. Diagnostics: Exception from container-launch.
Container id: container_1479883845484_0065_01_000003
Exit code: 50
Stack trace: ExitCodeException exitCode=50:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 50
Driver stacktrace:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 51.0 failed 4 times, most recent failure: Lost task 1.3 in stage 51.0 (TID 476, ip-175-33-333-6.eu-east-1.compute.internal): ExecutorLostFailure (executor 2 exited caused by one of the running tasks) Reason: Container marked as failed: container_1479883845484_0065_01_000003 on host: ip-172-20-235-6.eu-west-1.compute.internal. Exit status: 50. Diagnostics: Exception from container-launch.
Container id: container_1479883845484_0065_01_000003
Exit code: 50
Stack trace: ExitCodeException exitCode=50:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 50
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1328)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.take(RDD.scala:1302)
at org.apache.spark.rdd.RDD$$anonfun$isEmpty$1.apply$mcZ$sp(RDD.scala:1430)
at org.apache.spark.rdd.RDD$$anonfun$isEmpty$1.apply(RDD.scala:1430)
at org.apache.spark.rdd.RDD$$anonfun$isEmpty$1.apply(RDD.scala:1430)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.isEmpty(RDD.scala:1429)
at org.consumer.kafka.KafkaTestConsumer$$anonfun$run$2.apply(KafkaTestConsumer.scala:155)
at org.consumer.kafka.KafkaTestConsumer$$anonfun$run$2.apply(KafkaTestConsumer.scala:155)
at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
at scala.util.Try$.apply(Try.scala:161)
at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Best answer
I found my solution here: Spark when union a lot of RDD throws stack overflow error
Basically, I had a List[RDD[...]] and was calling list.reduce(_ union _). This caused a java.lang.StackOverflowError (I saw a WARN in the logs), after which I saw a series of
java.io.IOException: Connection reset by peer
...
Container id: container_
Exit code: 50
Stack trace: ExitCodeException exitCode=50
The solution was to replace list.reduce(_ union _) with sparkContext.union(list).
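To make the difference concrete, here is a minimal sketch (the helper name combine and the element type (String, Long) are hypothetical, since the answer does not show the real List[RDD[...]] type):

import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

def combine(sc: SparkContext, list: List[RDD[(String, Long)]]): RDD[(String, Long)] = {
  // Problematic: each union wraps a new UnionRDD around the previous
  // result, so the lineage depth grows linearly with list.size and the
  // recursive DAG traversal can throw java.lang.StackOverflowError.
  // val combined = list.reduce(_ union _)

  // Fix: one flat UnionRDD over all parent RDDs; lineage depth stays constant.
  sc.union(list)
}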
Regarding "scala - Why does launching my Spark Streaming application give 'Container exited with a non-zero exit code 50'?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/40771894/