
bash - Spark installation on Hadoop YARN


Can someone please help me? I am trying to run Spark on Hadoop YARN, but I am getting this error:

org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:113)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:59)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:379)
java.lang.NullPointerException
at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:141)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:49)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

The Hadoop daemons (jps output) are:

4064 SecondaryNameNode
3478 NameNode
4224 ResourceManager
4480 NodeManager
3727 DataNode
6279 Jps

and the bash environment file:

export JAVA_HOME=/home/user/hadoop-two/jdk1.7.0_71
export HADOOP_INSTALL=/home/user/hadoop-two/hadoop-2.6.0
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
export HADOOP_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
export YARN_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
export SPARK_HOME=/home/user/hadoop-two/spark-1.4.0

Best Answer

Install Spark and keep the environment variables above. In the conf/spark-env.sh file, set JAVA_HOME and HADOOP_CONF_DIR:

export HADOOP_CONF_DIR=/home/user/hadoop-2.7.1/etc/hadoop
export JAVA_HOME=/home/user/jdk1.8.0_60
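
If you are using the directory layout from the question rather than the paths above, the corresponding entries would presumably look like this (a sketch based on the .bashrc shown earlier; adjust to your actual install locations):

# conf/spark-env.sh - point Spark at the question's Hadoop 2.6.0 and JDK 1.7 paths
export HADOOP_CONF_DIR=/home/user/hadoop-two/hadoop-2.6.0/etc/hadoop
export JAVA_HOME=/home/user/hadoop-two/jdk1.7.0_71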

Then define the slaves in the Spark conf directory (put the DNS names of the slave nodes there):

conf/slaves
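
The slaves file is simply a list of worker hostnames, one per line. A minimal sketch for a single-node setup (replace with your workers' DNS names on a real cluster):

# conf/slaves - one worker hostname or IP per line
localhost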

Then launch Spark on YARN with:

bin/spark-shell --master yarn-client
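
To verify the YARN integration end to end, a common smoke test is to submit the bundled SparkPi example with spark-submit in yarn-client mode (a sketch; the exact name of the examples jar under lib/ depends on the Spark build you downloaded):

# run the SparkPi example on YARN with 10 partitions
bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn-client \
  lib/spark-examples*.jar 10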

That's it!

Regarding bash - Spark installation on Hadoop YARN, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/32358999/
