hadoop - Error parsing the Spark driver host in Hadoop

Reposted. Author: 行者123. Updated: 2023-12-02 21:46:05

I am trying to run Spark 1.0.1 against an Apache Hadoop 2.2.0 YARN cluster. Both are deployed on my Windows 7 machine. When I try to run the JavaSparkPi example, I get a parsing exception on the Hadoop side. On the Spark side all the parameters look fine, and there are no extra characters after the five digits of the port. Can anyone help?

Exception in thread "main" java.lang.NumberFormatException: For input string: "57831'"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:492)
at java.lang.Integer.parseInt(Integer.java:527)
at scala.collection.immutable.StringLike$class.toInt(StringLike.scala:229)
at scala.collection.immutable.StringOps.toInt(StringOps.scala:31)
at org.apache.spark.util.Utils$.parseHostPort(Utils.scala:544)
at org.apache.spark.deploy.yarn.ExecutorLauncher.waitForSparkMaster(ExecutorLauncher.scala:163)
at org.apache.spark.deploy.yarn.ExecutorLauncher.run(ExecutorLauncher.scala:101)
at org.apache.spark.deploy.yarn.ExecutorLauncher$$anonfun$main$1.apply$mcV$sp(ExecutorLauncher.scala:263)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:53)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:52)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:52)
at org.apache.spark.deploy.yarn.ExecutorLauncher$.main(ExecutorLauncher.scala:262)
at org.apache.spark.deploy.yarn.ExecutorLauncher.main(ExecutorLauncher.scala)


14/08/11 09:00:38 INFO yarn.Client: Command for starting the Spark ApplicationMaster:
List(%JAVA_HOME%/bin/java, -server, -Xmx512m, -Djava.io.tmpdir=%PWD%/tmp,
-Dspark.tachyonStore.folderName=\"spark-80c61976-f671-41b9-96a0-0c7c5c317fdb\",
-Dspark.yarn.secondary.jars=\"\",
-Dspark.driver.host=\"W01B62GR.UBSPROD.MSAD.UBS.NET\",
-Dspark.app.name=\"JavaSparkPi\",
-Dspark.jars=\"file:/N:/Nick/Spark/spark-1.0.1-bin-hadoop2/bin/../lib/spark-examples-1.0.1-hadoop2.2.0.jar\",
-Dspark.fileserver.uri=\"http://139.149.169.172:57836\",
-Dspark.executor.extraClassPath=\"N:\Nick\Spark\spark-1.0.1-bin-hadoop2\lib\spark-examples-1.0.1-hadoop2.2.0.jar\",
-Dspark.master=\"yarn-client\", -Dspark.driver.port=\"57831\",
-Dspark.driver.extraClassPath=\"N:\Nick\Spark\spark-1.0.1-bin-hadoop2\lib\spark-examples-1.0.1-hadoop2.2.0.jar\",
-Dspark.httpBroadcast.uri=\"http://139.149.169.172:57835\",
-Dlog4j.configuration=log4j-spark-container.properties,
org.apache.spark.deploy.yarn.ExecutorLauncher, --class, notused, --jar , null,
--args 'W01B62GR.UBSPROD.MSAD.UBS.NET:57831' ,
--executor-memory, 1024, --executor-cores, 1,
--num-executors , 2, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)

Best Answer

The error seems clear: 57831' is not a number; 57831 is. Look at your argument:
'W01B62GR.UBSPROD.MSAD.UBS.NET:57831'' — that trailing quote shouldn't be there. If that is not your original argument, please show the command line. I'm also not sure this works on Windows without Cygwin.
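To see why the stray quote is fatal: Spark's Utils.parseHostPort takes everything after the last ':' and converts it to an integer, so any non-digit character riding along with the port blows up with exactly the NumberFormatException in the trace. A minimal Java sketch of that parsing (an illustration, not Spark's actual code) — on Windows, cmd.exe does not strip single quotes the way a POSIX shell does, so the quoted value survives intact into the YARN command:

```java
public class HostPortParse {
    // Mimics the shape of Spark's Utils.parseHostPort: take everything
    // after the last ':' and parse it as the port number.
    static int parsePort(String hostPort) {
        int idx = hostPort.lastIndexOf(':');
        return Integer.parseInt(hostPort.substring(idx + 1));
    }

    public static void main(String[] args) {
        // The single quotes a POSIX shell would consume are passed through
        // literally by cmd.exe, so the port substring becomes "57831'":
        String raw = "'W01B62GR.UBSPROD.MSAD.UBS.NET:57831'";
        try {
            parsePort(raw);
        } catch (NumberFormatException e) {
            // Reproduces the reported error:
            // For input string: "57831'"
            System.out.println("fails: " + e.getMessage());
        }
        // With the stray quotes removed, the same string parses cleanly:
        String cleaned = raw.replace("'", "");
        System.out.println("port = " + parsePort(cleaned));
    }
}
```

This is why the fix is to remove the quoting around the host:port argument (or run under Cygwin, where the shell strips the quotes before Spark ever sees them).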

Regarding "hadoop - Error parsing the Spark driver host in Hadoop", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/25238908/
