
java - MapReduce job fails with an IO exception

Reposted. Author: 可可西里. Last updated: 2023-11-01 14:41:28

I am running a single-node Hadoop environment. I have a MapReduce job that computes averages of some monitoring data over specific time windows, such as hourly averages. The job writes its output to a path in HDFS, and it cleans up that path before each run. It worked fine for a month. Yesterday, while the job was running, I got an exception from the JobClient saying:

File /user/root/out1/_temporary/_attempt_201401141113_0007_r_000000_0/hi/130-r-00000 could only be replicated to 0 nodes, instead of 1

The full stack trace is below:



..........

14/01/17 12:00:09 INFO mapred.JobClient: map 100% reduce 32%
14/01/17 12:00:12 INFO mapred.JobClient: map 100% reduce 74%
14/01/17 12:00:17 INFO mapred.JobClient: Task Id : attempt_201401141113_0007_r_000000_0, Status : FAILED
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/out1/_temporary/_attempt_201401141113_0007_r_000000_0/hi/130-r-00000 could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

at org.apache.hadoop.ipc.Client.call(Client.java:1070)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy2.addBlock(Unknown Source)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy2.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3510)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3373)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2589)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2829)

An initial Google search suggests this is a storage space problem. But I don't think so, because my entire input data should be less than 600 MB, and the node has about 1.5 GB of free space available. I ran the hadoop dfsadmin -report command, and it returned the following:



$hadoop dfsadmin -report
Configured Capacity: 11353194496 (10.57 GB)
Present Capacity: 2354425856 (2.19 GB)
DFS Remaining: 1633726464 (1.52 GB)
DFS Used: 720699392 (687.31 MB)
DFS Used%: 30.61%
Under replicated blocks: 49
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Name: 192.168.1.149:50010
Decommission Status : Normal
Configured Capacity: 11353194496 (10.57 GB)
DFS Used: 720699392 (687.31 MB)
Non DFS Used: 8998768640 (8.38 GB)
DFS Remaining: 1633726464(1.52 GB)
DFS Used%: 6.35%
DFS Remaining%: 14.39%
Last contact: Fri Jan 17 04:36:55 GMT+05:30 2014
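For reference, the raw byte counts in the report above can be pulled out programmatically. A minimal sketch (the report excerpt is copied from the output above; the helper name is just illustrative):

```python
import re

# Excerpt copied from the `hadoop dfsadmin -report` output above.
report = """\
Configured Capacity: 11353194496 (10.57 GB)
Present Capacity: 2354425856 (2.19 GB)
DFS Remaining: 1633726464 (1.52 GB)
DFS Used: 720699392 (687.31 MB)
"""

def field_bytes(text, name):
    """Return the raw byte value of a line like 'DFS Remaining: 1633726464 (1.52 GB)'."""
    m = re.search(rf"^{re.escape(name)}:\s*(\d+)", text, re.MULTILINE)
    return int(m.group(1)) if m else None

print(field_bytes(report, "DFS Remaining"))  # 1633726464
```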


Please suggest a solution. It may be a configuration problem, but I don't know much about Hadoop configuration. Please help.

Best Answer

I think your problem may in fact be a space issue. Your data is replicated, so if your input is 600 MB, it occupies about 1.2 GB in the cluster.

That leaves you only about 300 MB of free space, which may not be enough headroom for writing data between nodes.
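The arithmetic above can be sketched as a quick back-of-the-envelope check (the 600 MB input size and a replication factor of 2 are the figures implied in this thread; the "DFS Remaining" number comes from the dfsadmin report):

```python
# Back-of-the-envelope check: does the job's data fit in the
# remaining DFS space once replication is accounted for?

MB = 1024 * 1024

def space_needed(input_bytes, replication):
    # Each HDFS block is stored `replication` times across the cluster.
    return input_bytes * replication

input_size = 600 * MB       # input size reported in the question
dfs_remaining = 1633726464  # "DFS Remaining" from dfsadmin -report

needed = space_needed(input_size, replication=2)
headroom = dfs_remaining - needed
print(headroom // MB)  # roughly 300 MB of headroom, as noted above
```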

My suggestion is to check whether this is the problem by running with a smaller dataset, around 300 MB or less. If that does not solve it, try lowering the replication factor in conf/hdfs-site.xml (note that HDFS requires a replication factor of at least 1, so on a single-node cluster set it to 1):

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

Regarding "java - MapReduce job fails with an IO exception", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/21181834/
