So I've been trying to get Spark-Scala up and running. I wrote a simple test program that just extends the SparkPi example slightly:
import org.apache.spark.{SparkConf, SparkContext}

import scala.math.random

// Enclosing object added so the snippet compiles on its own; the name is
// taken from the "BotDetector.scala" reference in the application logs below.
object BotDetector {
  def main(args: Array[String]): Unit = {
    test()
  }

  // Monte Carlo estimate of Pi, lightly extended from the SparkPi example.
  def calcPi(spark: SparkContext, args: Array[String], numSlices: Long): Array[Double] = {
    val start = System.nanoTime()
    val slices = if (args.length > 0) args(0).toInt else 2
    val n = math.min(numSlices * slices, Int.MaxValue).toInt // avoid overflow
    val count = spark.parallelize(1 until n, slices).map { i =>
      // sample a point in the unit square; count it if it lands in the unit circle
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)
    val piVal = 4.0 * count / n
    println("Pi is roughly " + piVal)
    spark.stop()
    val end = System.nanoTime()
    Array(piVal, end - start, (piVal - Math.PI) / Math.PI)
  }

  def test(): Unit = {
    val conf = new SparkConf().setAppName("Pi Test")
    conf.setSparkHome("/usr/local/spark")
    conf.setMaster("spark://<URL_OF_SPARK_CLUSTER>:7077")
    conf.set("spark.executor.memory", "512m")
    conf.set("spark.cores.max", "1")
    // pin Spark's otherwise-random ports so the firewall can allow them
    conf.set("spark.blockManager.port", "33291")
    conf.set("spark.executor.port", "33292")
    conf.set("spark.broadcast.port", "33293")
    conf.set("spark.fileserver.port", "33294")
    conf.set("spark.driver.port", "33296")
    conf.set("spark.replClassServer.port", "33297")
    val sc = new SparkContext(conf)
    val pi = calcPi(sc, Array(), 1000)
    for (item <- pi) {
      println(item)
    }
  }
}
I then made sure that ports 33291-33300 were open on my machine.
When I run the program, it successfully hits the Spark cluster and appears to allocate cores.
But when the program gets to the point where it actually runs the job, the application logs show:
15/12/07 11:50:21 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at BotDetector.scala:49), which has no missing parents
15/12/07 11:50:21 INFO MemoryStore: ensureFreeSpace(1840) called with curMem=0, maxMem=2061647216
15/12/07 11:50:21 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1840.0 B, free 1966.1 MB)
15/12/07 11:50:21 INFO MemoryStore: ensureFreeSpace(1194) called with curMem=1840, maxMem=2061647216
15/12/07 11:50:21 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1194.0 B, free 1966.1 MB)
15/12/07 11:50:21 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.5.106:33291 (size: 1194.0 B, free: 1966.1 MB)
15/12/07 11:50:21 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:874
15/12/07 11:50:21 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at BotDetector.scala:49)
15/12/07 11:50:21 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
15/12/07 11:50:36 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/12/07 11:50:51 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/12/07 11:51:06 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/12/07 11:51:21 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/12/07 11:51:36 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/12/07 11:51:51 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/12/07 11:52:06 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/12/07 11:52:21 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/12/07 11:52:22 INFO AppClient$ClientActor: Executor updated: app-20151207175020-0003/0 is now EXITED (Command exited with code 1)
15/12/07 11:52:22 INFO SparkDeploySchedulerBackend: Executor app-20151207175020-0003/0 removed: Command exited with code 1
15/12/07 11:52:22 ERROR SparkDeploySchedulerBackend: Asked to remove non-existent executor 0
15/12/07 11:52:22 INFO AppClient$ClientActor: Executor added: app-20151207175020-0003/1 on worker-20151207173821-10.240.0.7-33295 (10.240.0.7:33295) with 5 cores
15/12/07 11:52:22 INFO SparkDeploySchedulerBackend: Granted executor ID app-20151207175020-0003/1 on hostPort 10.240.0.7:33295 with 5 cores, 512.0 MB RAM
15/12/07 11:52:22 INFO AppClient$ClientActor: Executor updated: app-20151207175020-0003/1 is now LOADING
15/12/07 11:52:23 INFO AppClient$ClientActor: Executor updated: app-20151207175020-0003/1 is now RUNNING
15/12/07 11:52:36 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
And when I go onto the remote server and look at the worker logs, they say:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hduser/apache-tez-0.7.0-src/tez-dist/target/tez-0.7.0/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/12/07 17:50:21 INFO executor.CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
15/12/07 17:50:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/12/07 17:50:21 INFO spark.SecurityManager: Changing view acls to: hduser,jschirmer
15/12/07 17:50:21 INFO spark.SecurityManager: Changing modify acls to: hduser,jschirmer
15/12/07 17:50:21 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hduser, jschirmer); users with modify permissions: Set(hduser, jschirmer)
15/12/07 17:50:22 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/12/07 17:50:22 INFO Remoting: Starting remoting
15/12/07 17:50:22 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://driverPropsFetcher@10.240.0.7:33292]
15/12/07 17:50:22 INFO util.Utils: Successfully started service 'driverPropsFetcher' on port 33292.
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1672)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:65)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:146)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:245)
at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:107)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:97)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:159)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:66)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:65)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
... 4 more
15/12/07 17:52:22 INFO util.Utils: Shutdown hook called
I've tried setting the driver and executor ports to explicitly open ports, with the same result. It's unclear what the problem is. Does anyone have any suggestions?
Also, note that if I compile this exact same code into a fat jar, copy it to the remote server, and run it through spark-submit, it runs successfully. I do have a YARN configuration on my server, and I'd be willing to run spark-yarn, but my understanding is that this can't be done from a remote machine, since you specify the master as yarn-cluster and there's nowhere to put the master host in the configuration.
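For reference, the server-side invocation that works is roughly the following (a sketch only: the class name comes from the logs above, and the assembly jar name is a placeholder for whatever the build produces):

spark-submit \
  --class BotDetector \
  --master spark://<URL_OF_SPARK_CLUSTER>:7077 \
  --executor-memory 512m \
  --total-executor-cores 1 \
  botdetector-assembly.jar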
Best Answer
It looks like you have a firewall problem. First check that all the required ports are open in your cluster. Beyond that, Spark uses some random ports by default, so you need to pin those ports to fixed values for your cluster before you can use Spark remotely.
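A minimal driver-side sketch of what that pinning can look like (the property names are the Spark 1.x ones the question already uses; spark.driver.host and the placeholder addresses are assumptions to adapt to your network, since the worker-side "Futures timed out" while fetching driver properties usually means the executor cannot connect back to the driver):

import org.apache.spark.SparkConf

// Sketch only: pin every port Spark would otherwise pick at random,
// then open exactly these ports in the firewalls on both sides.
val conf = new SparkConf()
  .setAppName("Pi Test")
  .setMaster("spark://<URL_OF_SPARK_CLUSTER>:7077")
  // assumption: advertise an address the workers can actually reach
  .set("spark.driver.host", "<REACHABLE_DRIVER_IP>")
  .set("spark.driver.port", "33296")          // driver RPC endpoint
  .set("spark.blockManager.port", "33291")    // block manager on driver and executors
  .set("spark.executor.port", "33292")        // executor Akka port (Spark 1.x only)
  .set("spark.broadcast.port", "33293")       // HTTP broadcast server
  .set("spark.fileserver.port", "33294")      // driver's HTTP file server
  .set("spark.replClassServer.port", "33297") // only used by spark-shell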
Regarding scala - Spark cluster unable to allocate resources from a remote Scala application, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34095200/