I have Hadoop 2.6.0.2.2.0.0-2041
and Hive 0.14.0.2.2.0.0-2041.
After building Spark with the command:
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -DskipTests package
I am trying to run the Pi example on YARN with the following commands:
export HADOOP_CONF_DIR=/etc/hadoop/conf
/var/home2/test/spark/bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn-cluster \
--executor-memory 3G \
--num-executors 50 \
hdfs:///user/test/jars/spark-examples-1.3.0-hadoop2.4.0.jar \
1000
I get the exception: application_1427875242006_0029 failed 2 times due to AM Container for appattempt_1427875242006_0029_000002 exited with exitCode: 1,
which in fact is Diagnostics: Exception from container-launch.
(please see the log below).
The application tracking URL shows the following message:
java.lang.Exception: Unknown container. Container either has not started or has already completed or doesn't belong to this node at all
and also:
Error: Could not find or load main class org.apache.spark.deploy.yarn.ApplicationMaster
I have Hadoop working fine on 4 nodes and am completely at a loss as to how to make Spark work on YARN.
Should I set the spark.yarn.access.namenodes
Spark configuration property? My application does not need direct access to any name nodes, but maybe this would fix the problem?
Please advise where to look; any ideas would be of great help, thanks!
Spark assembly has been built with Hive, including Datanucleus jars on classpath
15/04/06 10:53:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/04/06 10:53:42 INFO impl.TimelineClientImpl: Timeline service address: http://etl-hdp-yarn.foo.bar.com:8188/ws/v1/timeline/
15/04/06 10:53:42 INFO client.RMProxy: Connecting to ResourceManager at etl-hdp-yarn.foo.bar.com/192.168.0.16:8050
15/04/06 10:53:42 INFO yarn.Client: Requesting a new application from cluster with 4 NodeManagers
15/04/06 10:53:42 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (4096 MB per container)
15/04/06 10:53:42 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/04/06 10:53:42 INFO yarn.Client: Setting up container launch context for our AM
15/04/06 10:53:42 INFO yarn.Client: Preparing resources for our AM container
15/04/06 10:53:43 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
15/04/06 10:53:43 INFO yarn.Client: Uploading resource file:/var/home2/test/spark-1.3.0/assembly/target/scala-2.10/spark-assembly-1.3.0-hadoop2.6.0.jar -> hdfs://etl-hdp-nn1.foo.bar.com:8020/user/test/.sparkStaging/application_1427875242006_0029/spark-assembly-1.3.0-hadoop2.6.0.jar
15/04/06 10:53:44 INFO yarn.Client: Source and destination file systems are the same. Not copying hdfs:/user/test/jars/spark-examples-1.3.0-hadoop2.4.0.jar
15/04/06 10:53:44 INFO yarn.Client: Setting up the launch environment for our AM container
15/04/06 10:53:44 INFO spark.SecurityManager: Changing view acls to: test
15/04/06 10:53:44 INFO spark.SecurityManager: Changing modify acls to: test
15/04/06 10:53:44 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(test); users with modify permissions: Set(test)
15/04/06 10:53:44 INFO yarn.Client: Submitting application 29 to ResourceManager
15/04/06 10:53:44 INFO impl.YarnClientImpl: Submitted application application_1427875242006_0029
15/04/06 10:53:45 INFO yarn.Client: Application report for application_1427875242006_0029 (state: ACCEPTED)
15/04/06 10:53:45 INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1428317623905
final status: UNDEFINED
tracking URL: http://etl-hdp-yarn.foo.bar.com:8088/proxy/application_1427875242006_0029/
user: test
15/04/06 10:53:46 INFO yarn.Client: Application report for application_1427875242006_0029 (state: ACCEPTED)
15/04/06 10:53:47 INFO yarn.Client: Application report for application_1427875242006_0029 (state: ACCEPTED)
15/04/06 10:53:48 INFO yarn.Client: Application report for application_1427875242006_0029 (state: ACCEPTED)
15/04/06 10:53:49 INFO yarn.Client: Application report for application_1427875242006_0029 (state: FAILED)
15/04/06 10:53:49 INFO yarn.Client:
client token: N/A
diagnostics: Application application_1427875242006_0029 failed 2 times due to AM Container for appattempt_1427875242006_0029_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://etl-hdp-yarn.foo.bar.com:8088/proxy/application_1427875242006_0029/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1427875242006_0029_02_000001
Exit code: 1
Exception message: /mnt/hdfs01/hadoop/yarn/local/usercache/test/appcache/application_1427875242006_0029/container_1427875242006_0029_02_000001/launch_container.sh: line 27: $PWD:$PWD/__spark__.jar:$HADOOP_CONF_DIR:/usr/hdp/current/hadoop-client/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-hdfs-client/*:/usr/hdp/current/hadoop-hdfs-client/lib/*:/usr/hdp/current/hadoop-yarn-client/*:/usr/hdp/current/hadoop-yarn-client/lib/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure: bad substitution
Stack trace: ExitCodeException exitCode=1: /mnt/hdfs01/hadoop/yarn/local/usercache/test/appcache/application_1427875242006_0029/container_1427875242006_0029_02_000001/launch_container.sh: line 27: $PWD:$PWD/__spark__.jar:$HADOOP_CONF_DIR:/usr/hdp/current/hadoop-client/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-hdfs-client/*:/usr/hdp/current/hadoop-hdfs-client/lib/*:/usr/hdp/current/hadoop-yarn-client/*:/usr/hdp/current/hadoop-yarn-client/lib/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure: bad substitution
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1428317623905
final status: FAILED
tracking URL: http://etl-hdp-yarn.foo.bar.com:8088/cluster/app/application_1427875242006_0029
user: test
Exception in thread "main" org.apache.spark.SparkException: Application finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:622)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:647)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Best Answer
If you are using Spark with HDP, you need to do the following.
Add these entries to your $SPARK_HOME/conf/spark-defaults.conf:
spark.driver.extraJavaOptions -Dhdp.version=2.2.0.0-2041 (your installed HDP version)
spark.yarn.am.extraJavaOptions -Dhdp.version=2.2.0.0-2041 (your installed HDP version)
Create a java-opts file in $SPARK_HOME/conf and add the installed HDP version to it, e.g.:
-Dhdp.version=2.2.0.0-2041 (your installed HDP version)
To find out the HDP version, run the command hdp-select status hadoop-client on the cluster.
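For reference, a minimal sketch of the whole fix as shell commands, assuming the HDP version is 2.2.0.0-2041 as in the answer above (substitute whatever hdp-select reports on your cluster):

# Find the installed HDP version (run on a cluster node)
hdp-select status hadoop-client

# Append the hdp.version property for both the driver and the YARN AM
echo "spark.driver.extraJavaOptions -Dhdp.version=2.2.0.0-2041" >> "$SPARK_HOME/conf/spark-defaults.conf"
echo "spark.yarn.am.extraJavaOptions -Dhdp.version=2.2.0.0-2041" >> "$SPARK_HOME/conf/spark-defaults.conf"

# Create the java-opts file with the same version
echo "-Dhdp.version=2.2.0.0-2041" > "$SPARK_HOME/conf/java-opts"

With hdp.version defined, the ${hdp.version} placeholders in the container classpath can be resolved, which is what the "bad substitution" error in launch_container.sh above was complaining about.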
Regarding hadoop - Spark 1.3.0: Running Pi example on YARN fails, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/29470542/