hadoop - Hive fails with java.io.IOException (Max block location exceeded for split .... splitsize: 45 maxsize: 10)

Reposted · Author: 可可西里 · Updated: 2023-11-01 15:39:43

Hive needs to process 45 files, each about 1 GB in size. After the mappers finish (100%), Hive fails with the error message below:

Driver returned: 1.  Errors: OK
Hive history file=/tmp/hue/hive_job_log_hue_201308221004_1738621649.txt
Total MapReduce jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1376898282169_0441, Tracking URL = http://SH02SVR2882.hadoop.sh2.ctripcorp.com:8088/proxy/application_1376898282169_0441/
Kill Command = //usr/lib/hadoop/bin/hadoop job -kill job_1376898282169_0441
Hadoop job information for Stage-1: number of mappers: 236; number of reducers: 0
2013-08-22 10:04:40,205 Stage-1 map = 0%, reduce = 0%
2013-08-22 10:05:07,486 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 121.28 sec
.......................
2013-08-22 10:09:18,625 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 7707.18 sec
MapReduce Total cumulative CPU time: 0 days 2 hours 8 minutes 27 seconds 180 msec
Ended Job = job_1376898282169_0441
Ended Job = -541447549, job is filtered out (removed at runtime).
Ended Job = -1652692814, job is filtered out (removed at runtime).
Launching Job 3 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Job Submission failed with exception
'java.io.IOException(Max block location exceeded for split: Paths:/tmp/hive-beeswax-logging/hive_2013-08-22_10-04-32_755_6427103839442439579/-ext-10001/000009_0:0+28909,....,/tmp/hive-beeswax-logging/hive_2013-08-22_10-04-32_755_6427103839442439579/-ext-10001/000218_0:0+45856
Locations:10.8.75.17:...:10.8.75.20:; InputFormatClass: org.apache.hadoop.mapred.TextInputFormat
splitsize: 45 maxsize: 10)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 236 Cumulative CPU: 7707.18 sec HDFS Read: 63319449229 HDFS Write: 8603165 SUCCESS
Total MapReduce CPU Time Spent: 0 days 2 hours 8 minutes 27 seconds 180 msec

But I never set maxsize. I ran the job many times and got the same error each time. I tried adding the mapreduce.jobtracker.split.metainfo.maxsize property for Hive, but in that case Hive failed without doing any map work at all.

Best Answer

Set mapreduce.job.max.split.locations > 45

In our case, this solved the problem.
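A minimal sketch of applying the fix from the Hive session, assuming the query is run from the Hive CLI or Beeswax; the value 60 is an arbitrary choice with headroom above the reported split size of 45, not a recommended default:

```sql
-- Raise the per-split block-location limit above the number of
-- locations the failing split reports (splitsize: 45).
-- 60 is an illustrative value; any number > 45 should let this job submit.
SET mapreduce.job.max.split.locations=60;

-- Then re-run the original query in the same session.
```

To make the change cluster-wide instead of per-session, the same property can be set in mapred-site.xml; the session-level SET only affects jobs launched from that session.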

Regarding "hadoop - Hive fails with java.io.IOException (Max block location exceeded for split .... splitsize: 45 maxsize: 10)", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/18370647/
