
hadoop - Apache Hadoop 2.7.3, Socket Timeout Error

Reposted. Author: 行者123. Updated: 2023-12-02 20:49:45

I have the same problem as in the following link:

Hadoop, Socket Timeout Error

Can you help me resolve it? I ran into the same problem while installing Apache Hadoop 2.7.3 on EC2. Do the properties mentioned in the link need to be added to both the NameNode and DataNode configuration files? If so, which .xml files are they? Thanks in advance.

Also, as the error below shows, the application is trying to reach an internal EC2 IP address. Do I need to open any ports? The Web UI shows port 8042.

On all nodes, both the NodeManager and the ResourceManager (RM) show as running in jps.

The error from the NameNode when I try to run a MapReduce example is as follows:

Job job_1506038808044_0002 failed with state FAILED due to: Application application_1506038808044_0002 failed 2 times due to Error launching appattempt_1506038808044_0002_000002. Got exception: org.apache.hadoop.net.ConnectTimeoutException: Call From ip-172-31-1-10/172.31.1.10 to ip-172-31-5-59.ec2.internal:43555 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=ip-172-31-5-59.ec2.internal/172.31.5.59:43555]
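For context, the ConnectTimeoutException above simply means a TCP connect to the NodeManager's port never completed within 20 seconds, which on EC2 usually points at a security group blocking traffic between instances. A minimal sketch of the same check outside Hadoop (the host and port below are placeholders standing in for a blocked NodeManager):

```python
import socket

def check_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers connect timeouts and connection refusals
        return False

# 10.255.255.1 is a typically unroutable address, standing in for a
# NodeManager whose ephemeral port (43555 in the error) is filtered.
print(check_reachable("10.255.255.1", 43555, timeout=1.0))
```

Running this from the NameNode host against the real DataNode/NodeManager addresses is a quick way to tell a DNS problem apart from a firewall problem before touching Hadoop configuration.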

Finally, while the job is running, the RM Web UI always shows the following message:

State: waiting for AM container to be allocated, launched and register with RM.

Thanks,
Asha

Best Answer

After trying the solution from Hadoop, Socket Timeout Error (the link in my question) and adding the properties below to hdfs-site.xml, the problem was resolved by allowing all ICMP and UDP traffic between the EC2 instances in their security group rules so that they can ping each other.

<property>
  <name>dfs.namenode.name.dir</name>
  <value>/usr/local/hadoop/hadoop_work/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/usr/local/hadoop/hadoop_work/hdfs/datanode</value>
</property>
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <value>/usr/local/hadoop/hadoop_work/hdfs/namesecondary</value>
</property>
<property>
  <name>dfs.block.size</name>
  <value>134217728</value>
</property>
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>2000000</value>
</property>
<property>
  <name>dfs.socket.timeout</name>
  <value>2000000</value>
</property>

<property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>true</value>
  <description>Whether datanodes should use datanode hostnames when
  connecting to other datanodes for data transfer.
  </description>
</property>

<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
  <description>
  The actual address the RPC server will bind to. If this optional address is
  set, it overrides only the hostname portion of dfs.namenode.rpc-address.
  It can also be specified per name node or name service for HA/Federation.
  This is useful for making the name node listen on all interfaces by
  setting it to 0.0.0.0.
  </description>
</property>

<property>
  <name>dfs.namenode.servicerpc-bind-host</name>
  <value>0.0.0.0</value>
  <description>
  The actual address the service RPC server will bind to. If this optional address is
  set, it overrides only the hostname portion of dfs.namenode.servicerpc-address.
  It can also be specified per name node or name service for HA/Federation.
  This is useful for making the name node listen on all interfaces by
  setting it to 0.0.0.0.
  </description>
</property>

<property>
  <name>dfs.namenode.http-bind-host</name>
  <value>0.0.0.0</value>
  <description>
  The actual address the HTTP server will bind to. If this optional address
  is set, it overrides only the hostname portion of dfs.namenode.http-address.
  It can also be specified per name node or name service for HA/Federation.
  This is useful for making the name node HTTP server listen on all
  interfaces by setting it to 0.0.0.0.
  </description>
</property>

<property>
  <name>dfs.namenode.https-bind-host</name>
  <value>0.0.0.0</value>
  <description>
  The actual address the HTTPS server will bind to. If this optional address
  is set, it overrides only the hostname portion of dfs.namenode.https-address.
  It can also be specified per name node or name service for HA/Federation.
  This is useful for making the name node HTTPS server listen on all
  interfaces by setting it to 0.0.0.0.
  </description>
</property>
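The security-group change described above can be sketched with the AWS CLI. These are configuration commands, not something to run blindly: the group ID below is a placeholder, and in practice you would scope the rules to the cluster's own security group (as done here) rather than opening them to the world.

```shell
# Hypothetical security group ID shared by the Hadoop cluster's instances.
SG_ID=sg-0123456789abcdef0

# Allow all ICMP between cluster members (group referencing itself),
# so the nodes can ping each other.
aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" \
    --protocol icmp --port -1 \
    --source-group "$SG_ID"

# Allow all UDP between cluster members.
aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" \
    --protocol udp --port 0-65535 \
    --source-group "$SG_ID"

# TCP between members is also required for YARN's ephemeral ports
# (e.g. 43555 in the error), unless it is already open.
aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" \
    --protocol tcp --port 0-65535 \
    --source-group "$SG_ID"
```

Self-referencing rules like these let every instance in the group reach every other instance without exposing the ports to external addresses.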

Regarding "hadoop - Apache Hadoop 2.7.3, Socket Timeout Error", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46372689/
