
hadoop - Some datanodes of an HDFS cluster suddenly disconnect while reducers are running


I have 8 slave computers and 1 master computer running Hadoop (ver 0.21).

When I run my MapReduce code on 10 GB of data, some datanodes of the cluster suddenly disconnect: after all the mappers have finished and about 80% of the reducers have been processed, one or more datanodes drop off the network at random. Other datanodes then start disappearing as well, even if I kill the MapReduce job as soon as I notice the first disconnections.

I have tried setting dfs.datanode.max.xcievers to 4096, turning off the firewall on all compute nodes, disabling SELinux, and raising the open-file limit to 20000, but none of it has helped at all...
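
One detail worth double-checking (not from the original post): the open-file limit only applies to processes started after it is raised, so the datanode JVMs must be restarted under the new limit for the 20000 value to actually take effect. With five dfs.data.dir disks and up to 4096 xceiver threads, each holding block and metadata files open, a datanode can consume file descriptors quickly during a heavy shuffle.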

Does anyone have an idea how to solve this problem?

Here is the MapReduce error log:

12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED
java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427)
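
"Bad connect ack with firstBadLink" means the writer (here a reduce task saving its output to HDFS) asked for a block write pipeline and never got a connect acknowledgement from the datanode named in the message (***.***.***.148:20010), the first broken link in the pipeline. The reducers are therefore not causing the disconnects; they are simply the first writers to notice that a datanode has stopped answering.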

Here is the datanode log:

2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010
2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010
2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: Connection timed out
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257)
at java.lang.Thread.run(Thread.java:722)

2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453
2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010]
at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164)
at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284)
at java.lang.Thread.run(Thread.java:722)
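
Both failures above are timeouts on datanode-to-datanode block transfers: a plain TCP connect timeout, and a write that hit 480000 ms, which is the 8-minute default of dfs.datanode.socket.write.timeout. If the peer nodes are actually alive but saturated during the shuffle, raising the HDFS socket timeouts can help confirm that theory; a minimal sketch with assumed values, not a verified fix for this cluster:

<property>
<name>dfs.socket.timeout</name>
<value>180000</value> <!-- read timeout; assumed value, default is 60000 ms (key renamed dfs.client.socket-timeout in later releases) -->
</property>
<property>
<name>dfs.datanode.socket.write.timeout</name>
<value>960000</value> <!-- write timeout; assumed value, default is 480000 ms -->
</property>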

hdfs-site.xml

<configuration>
<property>
<name>dfs.name.dir</name>
<value>/home/hadoop/data/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>

<property>
<name>dfs.datanode.max.xcievers</name>
<value>4096</value>
</property>

<property>
<name>dfs.http.address</name>
<value>0.0.0.0:20070</value>
<description>50070
The address and the base port where the dfs namenode web ui will listen on.
If the port is 0 then the server will start on a free port.
</description>
</property>

<property>
<name>dfs.datanode.http.address</name>
<value>0.0.0.0:20075</value>
<description>50075
The datanode http server address and port.
If the port is 0 then the server will start on a free port.
</description>
</property>

<property>
<name>dfs.secondary.http.address</name>
<value>0.0.0.0:20090</value>
<description>50090
The secondary namenode http server address and port.
If the port is 0 then the server will start on a free port.
</description>
</property>

<property>
<name>dfs.datanode.address</name>
<value>0.0.0.0:20010</value>
<description>50010
The address where the datanode server will listen to.
If the port is 0 then the server will start on a free port.
</description>
</property>

<property>
<name>dfs.datanode.ipc.address</name>
<value>0.0.0.0:20020</value>
<description>50020
The datanode ipc server address and port.
If the port is 0 then the server will start on a free port.
</description>
</property>

<property>
<name>dfs.datanode.https.address</name>
<value>0.0.0.0:20475</value>
</property>

<property>
<name>dfs.https.address</name>
<value>0.0.0.0:20470</value>
</property>
</configuration>
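
The remapped ports above are internally consistent (each description records the stock 500xx default). One knob not set here that sometimes helps a datanode that is slow to answer while under heavy transfer load is its IPC handler count; a sketch with an assumed value (the default is 3):

<property>
<name>dfs.datanode.handler.count</name>
<value>10</value> <!-- assumed value; default is 3 IPC server threads -->
</property>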

mapred-site.xml

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>masternode:29001</value>
</property>
<property>
<name>mapred.system.dir</name>
<value>/home/hadoop/data/mapreduce/system</value>
</property>
<property>
<name>mapred.local.dir</name>
<value>/home/hadoop/data/mapreduce/local</value>
</property>
<property>
<name>mapred.map.tasks</name>
<value>32</value>
<description> default number of map tasks per job.</description>
</property>
<property>
<name>mapred.tasktracker.map.tasks.maximum</name>
<value>4</value>
</property>
<property>
<name>mapred.reduce.tasks</name>
<value>8</value>
<description> default number of reduce tasks per job.</description>
</property>
<property>
<name>mapred.map.child.java.opts</name>
<value>-Xmx2048M</value>
</property>
<property>
<name>io.sort.mb</name>
<value>500</value>
</property>
<property>
<name>mapred.task.timeout</name>
<value>1800000</value> <!-- 30 minutes -->
</property>


<property>
<name>mapred.job.tracker.http.address</name>
<value>0.0.0.0:20030</value>
<description> 50030
The job tracker http server address and port the server will listen on.
If the port is 0 then the server will start on a free port.
</description>
</property>

<property>
<name>mapred.task.tracker.http.address</name>
<value>0.0.0.0:20060</value>
<description> 50060
The task tracker http server address and port.
If the port is 0 then the server will start on a free port.
</description>
</property>

</configuration>
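
One scenario worth ruling out (an assumption, since the slaves' RAM is not stated): with 4 map slots at -Xmx2048M each, map tasks alone can claim 8 GB of heap per node, on top of the tasktracker, the datanode, and any reduce slots. If that exceeds physical memory the node swaps, datanode heartbeats stall, and the namenode eventually marks the node dead, which would look exactly like datanodes "disappearing" late in the job. Bounding the reduce side explicitly makes the memory budget visible; the values below are assumptions to adapt:

<property>
<name>mapred.tasktracker.reduce.tasks.maximum</name>
<value>2</value> <!-- assumed slot count; the default is 2 -->
</property>
<property>
<name>mapred.reduce.child.java.opts</name>
<value>-Xmx1024M</value> <!-- assumed heap; only the map-side heap is set above -->
</property>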

Best Answer

Try configuring max.xcievers in conf/hdfs-site.xml (see http://hbase.apache.org/book.html#dfs.datanode.max.xcievers ):

<property>
<name>dfs.datanode.max.xcievers</name>
<value>4096</value>
</property>
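
Note that the property must be present in hdfs-site.xml on every datanode, and the datanodes must be restarted before it takes effect. The misspelled "xcievers" is the historical property name and has to be kept verbatim (later Hadoop releases renamed it dfs.datanode.max.transfer.threads). Since the question states the cluster is already running with 4096, the socket timeouts and memory headroom discussed above are the next things to check.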

Regarding "hadoop - Some datanodes of an HDFS cluster suddenly disconnect while reducers are running", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/10844486/
