
hadoop - All nodes failing to start


I have set up my configuration files and formatted my filesystem, but whenever I try to run the startup shell scripts I get the error below.

I've included my hstart alias further down, in the .bash_profile section.

Error:

computer:~ seanplowman$ hstart
18/04/14 23:34:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 69: [: Mac.out: integer expression expected
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-seanplowman-namenode-Seans
localhost: Error: Could not find or load main class Mac.log

localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 69: [: Mac.out: integer expression expected
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-seanplowman-datanode-Seans
localhost: Error: Could not find or load main class Mac.log

Starting secondary namenodes [0.0.0.0]
0.0.0.0: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 69: [: Mac.out: integer expression expected
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-seanplowman-secondarynamenode-Seans
0.0.0.0: Error: Could not find or load main class Mac.log

18/04/14 23:35:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
/usr/local/hadoop/sbin/yarn-daemon.sh: line 60: [: Mac.out: integer expression expected
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-seanplowman-resourcemanager-Seans
Error: Could not find or load main class Mac.log

localhost: /usr/local/hadoop/sbin/yarn-daemon.sh: line 60: [: Mac.out: integer expression expected
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-seanplowman-nodemanager-Seans
localhost: Error: Could not find or load main class Mac.log

jps also shows that none of the daemons are running after the start scripts finish. From my research it seems something is wrong with my hostname, but trying to change the hostnames hasn't fixed anything.

I'll include my other configuration files below to show how they are set up, for context.

/usr/local/hadoop/etc/hadoop/core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
    <description>The name of the default file system. A URI whose scheme and
    authority determine the FileSystem implementation. The uri's scheme determines
    the config property (fs.SCHEME.impl) naming the FileSystem implementation
    class. The uri's authority is used to determine the host, port, etc. for a filesystem.
    </description>
  </property>
</configuration>
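A side note, not the cause of the failure here: fs.default.name is the deprecated Hadoop 1.x name for this key. Hadoop 2.x still honors it, but the current equivalent is fs.defaultFS:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>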

/usr/local/hadoop/etc/hadoop/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication.
    The actual number of replications can be specified when the file is created.
    The default is used if replication is not specified in create time.
    </description>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
  </property>
</configuration>

/usr/local/hadoop/etc/hadoop/mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9010</value>
    <description>The host and port that the MapReduce job tracker runs
    at. If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
  </property>
</configuration>
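Another side note: mapred.job.tracker configures the old MRv1 JobTracker and is ignored when jobs run on YARN. Since start-yarn.sh is used here, the Hadoop 2.x way to route MapReduce jobs to YARN would be:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>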

I also made a few changes to hadoop-env.sh. I'll put them below.

/usr/local/hadoop/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home

export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
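A side note on the JAVA_HOME line: hardcoding jdk1.8.0_60 breaks whenever the JDK is updated. On macOS it can be derived instead; a sketch, assuming a 1.8 JDK is installed:

export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)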

.bashrc

#Hadoop variables
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib/native"
###end of paste

.bash_profile

alias hstart="/usr/local/hadoop/sbin/start-dfs.sh;/usr/local/hadoop/sbin/start-yarn.sh"
alias hstop="/usr/local/hadoop/sbin/stop-yarn.sh;/usr/local/hadoop/sbin/stop-dfs.sh"

Having looked at just about every file involved, I'm not sure what the next step is from here.

Best Answer

I think your Mac's hostname has a space in it. For example, Seans Mac.

The default log files are named like this:

HDFS: log=$HADOOP_LOG_DIR/hadoop-$HADOOP_IDENT_STRING-$command-$HOSTNAME.out
YARN: log=$YARN_LOG_DIR/yarn-$YARN_IDENT_STRING-$command-$HOSTNAME.out

$HOSTNAME is the problem; the scripts don't expect a space there.
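To see why the space breaks things: the daemon scripts expand $log without quotes when they pass it to their log-rotation helper, so bash splits the filename at the space. A minimal sketch of that failure (the helper below paraphrases the rotation logic in hadoop-daemon.sh; it is not a verbatim copy):

#!/usr/bin/env bash
# Hostname with a space, as in the question.
HOSTNAME="Seans Mac"
log="/tmp/hadoop-seanplowman-namenode-$HOSTNAME.out"

rotate_log () {
    log=$1                    # receives only ".../hadoop-seanplowman-namenode-Seans"
    num=5
    if [ -n "$2" ]; then
        num=$2                # receives the stray second word: "Mac.out"
    fi
    if [ $num -gt 1 ]; then   # fails: "[: Mac.out: integer expression expected"
        echo "would rotate $log"
    fi
}

rotate_log $log               # unquoted on purpose: bash splits it into two arguments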

If you look at the output, you'll notice hadoop-seanplowman-namenode-Seans, so I suspect

HADOOP_IDENT_STRING = the user running the scripts = seanplowman
command = namenode (likewise datanode, and so on)
HOSTNAME = Seans Mac

See whether fixing the hostname so it contains no spaces changes anything.
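On macOS the hostname is managed with scutil; one way to set a space-free name (seans-mac is only an example) is:

sudo scutil --set HostName seans-mac
sudo scutil --set LocalHostName seans-mac
sudo scutil --set ComputerName seans-mac
hostname    # verify: should now print seans-mac

Open a new terminal (or reboot) afterwards so the daemons pick up the new name.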

If not, edit the yarn-daemon.sh and hadoop-daemon.sh scripts so they start with

#!/usr/bin/env bash
set -xv

then edit the question to include the output. (set -xv makes bash print each line and its expansion as it runs, which should show exactly where the filename gets split.)

Regarding "hadoop - All nodes failing to start", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49838686/
