Hadoop : Starting Datanode doesn't seem to respond


My test environment

I am trying to deploy a 3-node Hadoop cluster in my test environment:

  • 1 namenode (master: 172.30.10.64)
  • 2 datanodes (slave1: 172.30.10.72 and slave2: 172.30.10.62)

I put the files carrying the master-side properties on my namenode and the files carrying the slave-side properties on my datanodes.

Master files

hosts:

127.0.0.1       localhost
172.30.10.64 master
172.30.10.62 slave2
172.30.10.72 slave1

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
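
As a quick sanity check (my suggestion, not part of the original post), it is worth confirming that every node resolves these names to the same addresses:

# Run on the master and on each slave; all three nodes should
# print the same IPs for master, slave1 and slave2
$ getent hosts master slave1 slave2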

hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop_tmp/hdfs/namenode</value>
  </property>
</configuration>

core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
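
The datanodes use this fs.default.name value (fs.defaultFS is its canonical name in Hadoop 2.x) to find the namenode, so it is worth confirming which URI the daemons actually pick up. A minimal check, my suggestion rather than something from the original post:

# Print the namenode URI as Hadoop resolves it from the loaded configuration
$ hdfs getconf -confKey fs.defaultFS
hdfs://master:9000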

yarn-site.xml:

<configuration>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8025</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8050</value>
  </property>
</configuration>

mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
</configuration>

My slaves file:

slave1
slave2

My masters file:

master

Slave files:

I have only included the file that differs from the master's configuration.

hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop_tmp/hdfs/datanode</value>
  </property>
</configuration>
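
One thing the post does not show is whether this directory exists and is writable by the Hadoop user. A minimal sketch, assuming the daemons run as hduser (the user shown in the logs; the hduser:hduser ownership is my assumption):

# On each slave: create the datanode directory and hand it to the Hadoop user
$ sudo mkdir -p /usr/local/hadoop_tmp/hdfs/datanode
$ sudo chown -R hduser:hduser /usr/local/hadoop_tmp/hdfs/datanode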

My problem

I start the daemons from /usr/local/hadoop/sbin:

./start-dfs.sh && ./start-yarn.sh

Here is what I get:

hduser@master:/usr/local/hadoop/sbin$ ./start-dfs.sh && ./start-yarn.sh 
18/03/14 10:45:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
hduser@master's password:
master: starting namenode, logging to /usr/local/hadoop-2.7.5/logs/hadoop-hduser-namenode-master.out
hduser@slave2's password: hduser@slave1's password:
slave2: starting datanode, logging to /usr/local/hadoop-2.7.5/logs/hadoop-hduser-datanode-slave2.out

So I opened the log file on slave2:

2018-03-14 10:46:05,494 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/172.30.10.64:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECOND$
2018-03-14 10:46:06,495 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/172.30.10.64:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECOND$
2018-03-14 10:46:07,496 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/172.30.10.64:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECOND$
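
These retries mean the datanode process itself is up but cannot open a TCP connection to 172.30.10.64:9000. A quick way to test that from the slave, independent of Hadoop (my suggestion, not part of the original post):

# From slave2: check whether the namenode port accepts connections at all
$ nc -zv master 9000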

What I have tried

I have tried several things, but nothing has worked so far:

  • ping works from the master to the slaves and between the slaves
  • ssh works from the master to the slaves and between the slaves
  • hdfs namenode -format on my master node
  • recreated the namenode and datanode folders
  • opened port 9000 in my master VM (see the listener check after this list)
  • firewall is disabled: sudo ufw status --> disabled
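
For the port-9000 check above, a common pitfall is that the namenode binds only to 127.0.0.1 and is therefore unreachable from the slaves even though the port is "open". A hedged check, run on the master after start-dfs.sh:

# The local address should be 172.30.10.64:9000 or 0.0.0.0:9000,
# not 127.0.0.1:9000 (which only accepts local connections)
$ netstat -tlnp | grep 9000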

I am a bit lost because everything seems fine, and I cannot see why I am unable to get my Hadoop cluster started.

Best answer

I may have found the answer:

I regenerated the SSH key on the master node and copied it to the slave nodes. It seems to work now.

# Generate an SSH key for hduser
$ ssh-keygen -t rsa -P ""

# Authorize the key to enable passwordless SSH
$ cat /home/hduser/.ssh/id_rsa.pub >> /home/hduser/.ssh/authorized_keys
$ chmod 600 /home/hduser/.ssh/authorized_keys

# Copy this key to slave1 and slave2 to enable passwordless SSH
$ ssh-copy-id -i ~/.ssh/id_rsa.pub slave1
$ ssh-copy-id -i ~/.ssh/id_rsa.pub slave2
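
To confirm the fix, a quick check (my addition, not part of the original answer) is that each slave now accepts key-based logins without prompting:

# Should print the slave hostnames without asking for a password
$ ssh slave1 hostname
$ ssh slave2 hostname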

Regarding "Hadoop : Starting Datanode doesn't seem to respond", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/49274487/
