I am trying to use a Spark cluster with an application that depends only on Scala 2.11 (the code is written in Scala), Spark 2.1.0, and Java 8.
My cluster consists of one master and two worker nodes. Every node has all the dependencies (jars, project files) in the same location, under an account with the same name (spark) and the same operating system (Ubuntu 16.04.2 LTS).
The code I try to run from IntelliJ IDEA:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx._

object Main extends App {
  val sparkConf = new SparkConf()
    .setAppName("Application")
    .setMaster("spark://<IP-Address-Master>:7077")
  val sparkContext = new SparkContext(sparkConf)
  sparkContext.setLogLevel("ERROR")

  val NB_VERTICES = 50 // vertex count (TO ADAPT)
  val DENSITY = 50     // graph density (TO ADAPT)

  // Graph generation based on the vertex count and density
  var graph = generateGraph(NB_VERTICES, sparkContext, DENSITY)
  var previousGraph = graph // declared before the loop so it is in scope
  var hasChanged = true     // loop condition

  while (hasChanged) {
    previousGraph = graph                              // save the previous graph
    graph = execute(graph, 1)                          // execute 1 iteration of our algorithm
    hasChanged = hasGraphChanged(previousGraph, graph) // if nothing changed, break out of the loop
  }
}
Note: I did not include functions such as "generateGraph" because I felt they would make the post too long. What is important to know: this code runs perfectly when executed locally, but not on the cluster. It depends only on Spark GraphX, Scala, and Java.
So, with my cluster up and running (every worker is registered and visible in the web UI), I try to run this application and get the following error:
17/06/08 16:05:00 INFO SparkContext: Running Spark version 2.1.0
17/06/08 16:05:01 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/06/08 16:05:01 WARN Utils: Your hostname, workstation resolves to a loopback address: 127.0.1.1; using 172.16.24.203 instead (on interface enx28f10e4fec2a)
17/06/08 16:05:01 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
17/06/08 16:05:01 INFO SecurityManager: Changing view acls to: spark
17/06/08 16:05:01 INFO SecurityManager: Changing modify acls to: spark
17/06/08 16:05:01 INFO SecurityManager: Changing view acls groups to:
17/06/08 16:05:01 INFO SecurityManager: Changing modify acls groups to:
17/06/08 16:05:01 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); groups with view permissions: Set(); users with modify permissions: Set(spark); groups with modify permissions: Set()
17/06/08 16:05:02 INFO Utils: Successfully started service 'sparkDriver' on port 42652.
17/06/08 16:05:02 INFO SparkEnv: Registering MapOutputTracker
17/06/08 16:05:02 INFO SparkEnv: Registering BlockManagerMaster
17/06/08 16:05:02 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/06/08 16:05:02 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/06/08 16:05:02 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-fe269631-606f-4e03-a75a-82809f4dce2d
17/06/08 16:05:02 INFO MemoryStore: MemoryStore started with capacity 869.7 MB
17/06/08 16:05:02 INFO SparkEnv: Registering OutputCommitCoordinator
17/06/08 16:05:02 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/06/08 16:05:02 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://172.16.24.203:4040
17/06/08 16:05:02 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://172.16.24.203:7077...
17/06/08 16:05:03 INFO TransportClientFactory: Successfully created connection to /172.16.24.203:7077 after 50 ms (0 ms spent in bootstraps)
17/06/08 16:05:03 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20170608160503-0000
17/06/08 16:05:03 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 42106.
17/06/08 16:05:03 INFO NettyBlockTransferService: Server created on 172.16.24.203:42106
17/06/08 16:05:03 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/06/08 16:05:03 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 172.16.24.203, 42106, None)
17/06/08 16:05:03 INFO BlockManagerMasterEndpoint: Registering block manager 172.16.24.203:42106 with 869.7 MB RAM, BlockManagerId(driver, 172.16.24.203, 42106, None)
17/06/08 16:05:03 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 172.16.24.203, 42106, None)
17/06/08 16:05:03 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20170608160503-0000/0 on worker-20170608145510-172.16.24.196-41159 (172.16.24.196:41159) with 8 cores
17/06/08 16:05:03 INFO StandaloneSchedulerBackend: Granted executor ID app-20170608160503-0000/0 on hostPort 172.16.24.196:41159 with 8 cores, 1024.0 MB RAM
17/06/08 16:05:03 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 172.16.24.203, 42106, None)
17/06/08 16:05:03 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20170608160503-0000/1 on worker-20170608185509-172.16.24.210-42227 (172.16.24.210:42227) with 4 cores
17/06/08 16:05:03 INFO StandaloneSchedulerBackend: Granted executor ID app-20170608160503-0000/1 on hostPort 172.16.24.210:42227 with 4 cores, 1024.0 MB RAM
17/06/08 16:05:03 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20170608160503-0000/0 is now RUNNING
17/06/08 16:05:03 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20170608160503-0000/1 is now RUNNING
17/06/08 16:05:03 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
17/06/08 16:05:10 ERROR TaskSetManager: Task 1 in stage 6.0 failed 4 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 6.0 failed 4 times, most recent failure: Lost task 1.3 in stage 6.0 (TID 14, 172.16.24.196, executor 0): java.lang.ClassNotFoundException: Main$$anonfun$3
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1819)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1986)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:85)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:935)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.collect(RDD.scala:934)
at Main$.hasGraphChanged(Main.scala:168)
at Main$.main(Main.scala:401)
at Main.main(Main.scala)
Caused by: java.lang.ClassNotFoundException: Main$$anonfun$3
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1819)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1986)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:85)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
It looks like the executors all register and are granted access to run this application, yet they fail anyway.
I tried running a simple Pi approximation in spark-shell; it worked fine and was distributed across the cluster. I have no idea what the cause could be, and I have tried many of the suggestions found elsewhere (setting an env variable for the JARS on every node, adding them manually with SparkConf.addJars, etc.), but I still get this error.
Does anyone know what it might be?
Thanks a lot.
Best Answer
Is the code shown the complete code?! If so, that is the problem.
Wrap the code inside an object with a main entry method, e.g. object SparkApp, and start over.
object SparkApp {
  def main(args: Array[String]): Unit = {
    // ...your code here
  }
}
You could also use object SparkApp extends App, but that has been known to cause failures at times.
I strongly recommend using the latest and greatest Spark, 2.1.1.
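Beyond restructuring the entry point, a ClassNotFoundException for a closure class such as Main$$anonfun$3 typically means the application jar was never shipped to the executors, which only happens automatically with spark-submit, not when launching from the IDE. The question already mentions adding jars manually via SparkConf; the sketch below shows how SparkConf.setJars is commonly used for that. The jar path here is hypothetical (an assumed sbt-assembly output) and must match your actual build:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SparkApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("Application")
      .setMaster("spark://<IP-Address-Master>:7077")
      // Ship the packaged application to the executors so its classes
      // (including anonymous closure classes) can be loaded remotely.
      // NOTE: this path is an assumption -- point it at your real jar.
      .setJars(Seq("target/scala-2.11/application-assembly-0.1.jar"))
    val sc = new SparkContext(conf)
    // ...your job here
    sc.stop()
  }
}
```

With this in place, running from IntelliJ IDEA distributes the jar the same way spark-submit's --jars/application-jar arguments would.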
Regarding java.lang.ClassNotFoundException when submitting a Scala application to a standalone Spark cluster, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/44445207/