
hadoop - WARN mapreduce.LoadIncrementalHFiles: Skipping non-directory hdfs on EMR


I am trying to bulk load a text file into HBase using MapReduce.
Everything runs fine, but when I perform the bulk load in the final step I get a warning and my MapReduce job gets stuck.
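For reference, the final load step is the standard LoadIncrementalHFiles call. A minimal sketch of that step follows (the table name and output path are placeholders, not my exact code; HBase 1.x API):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadStep {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Directory of HFiles written by the MapReduce job (placeholder path).
        Path hfileDir = new Path("/user/hadoop/ESGTRF/outputdir/output0");
        TableName tableName = TableName.valueOf("my_table"); // placeholder

        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin();
             Table table = conn.getTable(tableName);
             RegionLocator locator = conn.getRegionLocator(tableName)) {
            // Moves the generated HFiles into the table's region directories.
            LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
            loader.doBulkLoad(hfileDir, admin, table, locator);
        }
    }
}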

17/06/15 10:22:43 INFO mapreduce.Job: Job job_1495181241247_0013 completed successfully
17/06/15 10:22:43 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=836391
FILE: Number of bytes written=1988049
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=73198
HDFS: Number of bytes written=12051358
HDFS: Number of read operations=8
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=1
Launched reduce tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=196200
Total time spent by all reduces in occupied slots (ms)=428490
Total time spent by all map tasks (ms)=4360
Total time spent by all reduce tasks (ms)=4761
Total vcore-milliseconds taken by all map tasks=4360
Total vcore-milliseconds taken by all reduce tasks=4761
Total megabyte-milliseconds taken by all map tasks=6278400
Total megabyte-milliseconds taken by all reduce tasks=13711680
Map-Reduce Framework
Map input records=5604
Map output records=5603
Map output bytes=8240332
Map output materialized bytes=836387
Input split bytes=240
Combine input records=0
Combine output records=0
Reduce input groups=5603
Reduce shuffle bytes=836387
Reduce input records=5603
Reduce output records=179296
Spilled Records=11206
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=137
CPU time spent (ms)=11240
Physical memory (bytes) snapshot=820736000
Virtual memory (bytes) snapshot=7694557184
Total committed heap usage (bytes)=724566016
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=72958
File Output Format Counters
Bytes Written=12051358
Incremental upload completed..........
job is successfull..........H file Loading Will start Now
17/06/15 10:22:43 WARN mapreduce.LoadIncrementalHFiles: Skipping non-directory hdfs://ip:8020/user/hadoop/ESGTRF/outputdir/output0/_SUCCESS

The same code works on Cloudera, but when I run it on AWS EMR I hit this problem.

I suspect there is a configuration problem.
I haven't explicitly set any configuration.

Best Answer

My problem was resolved after explicitly setting the permissions:

hfs.setPermission(new Path(outputPath + "/columnFamilyName"), FsPermission.valueOf("drwxrwxrwx"));
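In context, a minimal sketch of where that call goes (here hfs is the HDFS FileSystem handle, and outputPath and "columnFamilyName" are placeholders for your HFile output directory and actual column family name):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class FixHFilePermissions {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem hfs = FileSystem.get(conf);

        // HFile output directory written by the MapReduce job (placeholder).
        String outputPath = "/user/hadoop/ESGTRF/outputdir/output0";

        // Make the per-column-family directory world-accessible so that the
        // user HBase runs as can read and move the HFiles during doBulkLoad.
        hfs.setPermission(new Path(outputPath + "/columnFamilyName"),
                FsPermission.valueOf("drwxrwxrwx"));
    }
}

doBulkLoad moves the generated HFiles into HBase's own data directories, so the HBase service user needs access to them; on EMR the job typically runs as hadoop while HBase runs as a separate user, which is likely why the explicit permission was needed there.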

Regarding hadoop - WARN mapreduce.LoadIncrementalHFiles: Skipping non-directory hdfs on EMR, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/44565720/
