
hadoop - File could only be written to 0 of the 1 minReplication nodes; there are 0 datanode(s) running and 0 node(s) are excluded in this operation

Reposted · Author: 行者123 · Updated: 2023-12-02 20:20:16

I set up Hadoop across two machines, and when I try to put a file from the master node with hadoop fs -put test.txt /mydata/ I get the following error:

put: File /mydata/test.txt._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.

When I run hdfs dfsadmin -report, it gives me the following:
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: 0.00%
Replicated Blocks:
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Low redundancy blocks with highest priority to recover: 0
Pending deletion blocks: 0
Erasure Coded Block Groups:
Low redundancy block groups: 0
Block groups with corrupt internal blocks: 0
Missing block groups: 0
Low redundancy blocks with highest priority to recover: 0
Pending deletion blocks: 0

Then, when I try to access HDFS from the datanode with hadoop fs -ls /, I get the following:
INFO ipc.Client: Retrying connect to server: master/172.31.81.91:10001. Already tried 0 time(s); maxRetries=45
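The retry loop above means the datanode's HDFS client cannot reach the namenode's RPC port at all. Note also that the client is retrying 172.31.81.91:10001, while core-site.xml below configures port 9000; that mismatch is itself worth checking. A minimal reachability check from the datanode, assuming netcat is installed (the address is the one from this question's setup):

```shell
# From the datanode: can we open a TCP connection to the namenode's RPC port?
nc -vz 172.31.81.91 9000

# If nc is unavailable, bash's /dev/tcp works too (exit status 0 = reachable):
timeout 3 bash -c 'cat < /dev/null > /dev/tcp/172.31.81.91/9000' \
  && echo reachable || echo unreachable
```

If the port is unreachable, the problem is networking (security groups, wrong address, service not bound on that interface) rather than HDFS configuration.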

I set this up on two aws-ubuntu instances and opened all TCP/IPv4 ports. I have the following configuration:

On both nodes:

core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://172.31.81.91:9000</value>
</property>
</configuration>
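On EC2 it can matter whether fs.defaultFS uses an address that resolves the same way from every node; the answer below found that switching from the private IP to the instance's public DNS name fixed connectivity. A hypothetical variant of the file above (the DNS name is a placeholder for your own master instance's):

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- placeholder: replace with your master's actual public DNS name -->
    <value>hdfs://ec2-xx-xx-xx-xx.compute-1.amazonaws.com:9000</value>
  </property>
</configuration>
```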

hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>

<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///home/hadoop/hadoopinfra/hdfs/namenode</value>
</property>

<property>
<name>dfs.data.dir</name>
<value>file:///home/hadoop/hadoopinfra/hdfs/datanode</value>
</property>
</configuration>

mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
</configuration>

/etc/hosts
127.0.0.1 localhost
172.31.81.91 master
172.31.45.232 slave-1

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

workers
172.31.45.232

When I run jps I get:

Master:
12532 NameNode
12847 SecondaryNameNode
13599 Jps

Datanode:
5172 Jps
4810 DataNode

When I run sudo netstat -ntlp I get:

Master:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:9870 0.0.0.0:* LISTEN 12532/java
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 696/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1106/sshd
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 809/cupsd
tcp 0 0 172.31.81.91:9000 0.0.0.0:* LISTEN 12532/java
tcp 0 0 0.0.0.0:9868 0.0.0.0:* LISTEN 12847/java
tcp6 0 0 :::80 :::* LISTEN 1176/apache2
tcp6 0 0 :::22 :::* LISTEN 1106/sshd
tcp6 0 0 ::1:631 :::* LISTEN 809/cupsd

Datanode:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:9864 0.0.0.0:* LISTEN 4810/java
tcp 0 0 0.0.0.0:9866 0.0.0.0:* LISTEN 4810/java
tcp 0 0 0.0.0.0:9867 0.0.0.0:* LISTEN 4810/java
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 691/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1142/sshd
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 854/cupsd
tcp 0 0 127.0.0.1:45029 0.0.0.0:* LISTEN 4810/java
tcp6 0 0 :::22 :::* LISTEN 1142/sshd
tcp6 0 0 ::1:631 :::* LISTEN 854/cupsd

I am using Hadoop 3.1.3; any help would be appreciated. Thanks!

Best Answer

My own quick answer... after many attempts:

  • If using AWS, all the IPs should be public IPs.
  • In core-site.xml, use the public DNS name instead of the IP.
  • Delete the datanode directory after formatting the namenode. Not sure why... but this really solved my problem.

  • Thanks for the help!
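The third bullet can be sketched as a shell sequence. This is a hedged outline, not a drop-in script: the directory path comes from the hdfs-site.xml above, and hdfs namenode -format destroys all existing HDFS metadata, so only do this on a cluster with no data you care about:

```shell
# 1. Stop HDFS (run on the master)
stop-dfs.sh

# 2. On every datanode, remove the old datanode directory so it does not
#    keep the cluster ID from a previous format (path from hdfs-site.xml)
rm -rf /home/hadoop/hadoopinfra/hdfs/datanode

# 3. Re-format the namenode (DESTROYS existing HDFS metadata)
hdfs namenode -format

# 4. Restart HDFS and confirm the datanode registered
start-dfs.sh
hdfs dfsadmin -report
```

A likely explanation for the "not sure why": re-formatting the namenode generates a new clusterID, and a datanode whose storage directory still holds the old clusterID refuses to register, which is consistent with the "0 datanode(s) running" report above.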

    Regarding this hadoop error ("could only be written to 0 of the 1 minReplication nodes; there are 0 datanode(s) running and 0 node(s) are excluded in this operation"), a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/60793650/
