I have a cluster of 2 machines, and I am trying to submit a Spark job using the YARN cluster manager.
I can run map-reduce jobs and Spark jobs successfully with the standalone cluster manager, but when I run the job with YARN I get an error.
Here is the output:
mike@mp-desktop ~/opt/hadoop $ spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster ~/prg/scala/spark-examples_2.11-1.0.jar 10
16/07/09 08:59:00 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/09 08:59:01 INFO client.RMProxy: Connecting to ResourceManager at mp-desktop/192.168.1.60:8050
16/07/09 08:59:01 INFO yarn.Client: Requesting a new application from cluster with 2 NodeManagers
16/07/09 08:59:01 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
16/07/09 08:59:01 INFO yarn.Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead
16/07/09 08:59:01 INFO yarn.Client: Setting up container launch context for our AM
16/07/09 08:59:01 INFO yarn.Client: Setting up the launch environment for our AM container
16/07/09 08:59:01 INFO yarn.Client: Preparing resources for our AM container
16/07/09 08:59:02 INFO yarn.Client: Uploading resource file:/home/mike/opt/spark-1.6.2-bin-hadoop2.6/lib/spark-assembly-1.6.2-hadoop2.6.0.jar -> hdfs://mp-desktop:9000/user/mike/.sparkStaging/application_1468043888852_0001/spark-assembly-1.6.2-hadoop2.6.0.jar
16/07/09 08:59:06 INFO yarn.Client: Uploading resource file:/home/mike/prg/scala/spark-examples_2.11-1.0.jar -> hdfs://mp-desktop:9000/user/mike/.sparkStaging/application_1468043888852_0001/spark-examples_2.11-1.0.jar
16/07/09 08:59:06 INFO yarn.Client: Uploading resource file:/tmp/spark-2ee6dfd6-e9d3-4ca4-9e98-5ce9e75dc757/__spark_conf__7114661171911035574.zip -> hdfs://mp-desktop:9000/user/mike/.sparkStaging/application_1468043888852_0001/__spark_conf__7114661171911035574.zip
16/07/09 08:59:06 INFO spark.SecurityManager: Changing view acls to: mike
16/07/09 08:59:06 INFO spark.SecurityManager: Changing modify acls to: mike
16/07/09 08:59:06 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(mike); users with modify permissions: Set(mike)
16/07/09 08:59:07 INFO yarn.Client: Submitting application 1 to ResourceManager
16/07/09 08:59:07 INFO impl.YarnClientImpl: Submitted application application_1468043888852_0001
16/07/09 08:59:08 INFO yarn.Client: Application report for application_1468043888852_0001 (state: ACCEPTED)
16/07/09 08:59:08 INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1468043947113
final status: UNDEFINED
tracking URL: http://mp-desktop:8088/proxy/application_1468043888852_0001/
user: mike
16/07/09 08:59:09 INFO yarn.Client: Application report for application_1468043888852_0001 (state: ACCEPTED)
16/07/09 08:59:10 INFO yarn.Client: Application report for application_1468043888852_0001 (state: ACCEPTED)
16/07/09 08:59:11 INFO yarn.Client: Application report for application_1468043888852_0001 (state: ACCEPTED)
16/07/09 08:59:12 INFO yarn.Client: Application report for application_1468043888852_0001 (state: ACCEPTED)
16/07/09 08:59:13 INFO yarn.Client: Application report for application_1468043888852_0001 (state: ACCEPTED)
16/07/09 08:59:14 INFO yarn.Client: Application report for application_1468043888852_0001 (state: ACCEPTED)
16/07/09 08:59:15 INFO yarn.Client: Application report for application_1468043888852_0001 (state: ACCEPTED)
16/07/09 08:59:16 INFO yarn.Client: Application report for application_1468043888852_0001 (state: ACCEPTED)
16/07/09 08:59:17 INFO yarn.Client: Application report for application_1468043888852_0001 (state: ACCEPTED)
16/07/09 08:59:18 INFO yarn.Client: Application report for application_1468043888852_0001 (state: ACCEPTED)
16/07/09 08:59:19 INFO yarn.Client: Application report for application_1468043888852_0001 (state: ACCEPTED)
16/07/09 08:59:20 INFO yarn.Client: Application report for application_1468043888852_0001 (state: ACCEPTED)
16/07/09 08:59:21 INFO yarn.Client: Application report for application_1468043888852_0001 (state: FAILED)
16/07/09 08:59:21 INFO yarn.Client:
client token: N/A
diagnostics: Application application_1468043888852_0001 failed 2 times due to AM Container for appattempt_1468043888852_0001_000002 exited with exitCode: -1
For more detailed output, check application tracking page:http://mp-desktop:8088/cluster/app/application_1468043888852_0001Then, click on links to logs of each attempt.
Diagnostics: File /home/mike/hadoopstorage/nm-local-dir/usercache/mike/appcache/application_1468043888852_0001/container_1468043888852_0001_02_000001 does not exist
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1468043947113
final status: FAILED
tracking URL: http://mp-desktop:8088/cluster/app/application_1468043888852_0001
user: mike
16/07/09 08:59:21 INFO yarn.Client: Deleting staging directory .sparkStaging/application_1468043888852_0001
Exception in thread "main" org.apache.spark.SparkException: Application application_1468043888852_0001 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1034)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1081)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/07/09 08:59:21 INFO util.ShutdownHookManager: Shutdown hook called
16/07/09 08:59:21 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-2ee6dfd6-e9d3-4ca4-9e98-5ce9e75dc757
Thanks!
Best Answer
The error message I got was similar:
16/07/15 13:55:53 INFO Client: Application report for application_1468583505911_0002 (state: ACCEPTED)
16/07/15 13:55:54 INFO Client: Application report for application_1468583505911_0002 (state: ACCEPTED)
16/07/15 13:55:55 INFO Client: Application report for application_1468583505911_0002 (state: ACCEPTED)
16/07/15 13:55:56 INFO Client: Application report for application_1468583505911_0002 (state: FAILED)
16/07/15 13:55:56 INFO Client:
client token: N/A
diagnostics: Application application_1468583505911_0002 failed 2 times due to AM Container for appattempt_1468583505911_0002_000002 exited with exitCode: -1000
For more detailed output, check application tracking page:http://<redacted>:8088/cluster/app/application_1468583505911_0002Then, click on links to logs of each attempt.
Diagnostics: File does not exist: hdfs://<redacted>:8020/user/root/.sparkStaging/application_1468583505911_0002/__spark_conf__4995486282135454270.zip
java.io.FileNotFoundException: File does not exist: hdfs://<redacted>:8020/user/root/.sparkStaging/application_1468583505911_0002/__spark_conf__4995486282135454270.zip
at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1367)
at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1359)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1359)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
Try running on YARN in client mode instead of cluster mode; client mode prints the driver's logs to your shell:
spark-submit --class myClass --master yarn /path/to/myClass.jar
The log output showed that myClass failed immediately because it was invoked with the wrong number of arguments (the class expects more than one). The class exited with my custom exit code (42) and printed its usage message to the logs, which let me fix the actual problem.
When I ran with --master yarn-cluster, none of this output was visible to me, and I could not see the usage message either. Instead, all I got was the vague "File does not exist" failure shown above.
Passing the correct number of arguments to myClass resolved the problem.
At this point, my assumption is that my Spark job failed so quickly that it began cleaning up the .sparkStaging files it had copied before YARN could inspect them.
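As a rough sketch of the failure mode described above (the class name, the argument count, and the usage text are assumptions, since the real myClass is not shown; only the custom exit code 42 comes from the answer), an early argument check in the driver's main class might look like this. In client mode the usage line and exit code surface directly in your shell; in cluster mode YARN only reports a generic container failure:

```java
// Hypothetical sketch, not the actual myClass from the answer.
// Only the custom exit code (42) is taken from the text above;
// the "at least two arguments" rule and usage string are assumptions.
public class MyClass {

    // Returns 0 when the argument list is acceptable, 42 otherwise.
    static int validateArgs(String[] args) {
        if (args.length < 2) {
            System.err.println("Usage: MyClass <input> <output>");
            return 42; // custom exit code, visible in client-mode driver logs
        }
        return 0;
    }

    public static void main(String[] args) {
        int code = validateArgs(args);
        if (code != 0) {
            // In cluster mode this exit happens inside the AM container,
            // so YARN only shows a vague container failure; in client
            // mode the usage line above is printed straight to the shell.
            System.exit(code);
        }
        // ... the actual Spark job would start here ...
    }
}
```

Because the process dies before the job really starts, a failure at this point can leave YARN chasing staging files that are already being cleaned up, which would match the "File does not exist" diagnostics above.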
Regarding "hadoop - Spark 1.6.2 & yarn: diagnostics: Application failed 2 times due to AM Container for exited with exitCode: -1", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/38279054/