java - Hadoop MapReduce Error: Mkdirs failed to create file; job failed


I am trying to run the C4.5 algorithm on Hadoop, but I have run into a problem and am stuck on the error below. I have all the necessary permissions. Can anyone help me?

java.lang.Exception: java.io.IOException: Mkdirs failed to create file:/usr/local/hadoop/1/output10/_temporary/0/_temporary/attempt_local960306821_0001_r_000000_0 (exists=false, cwd=file:/home/brina/workspace/C4.5Hadoop)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.io.IOException: Mkdirs failed to create file:/usr/local/hadoop/1/output10/_temporary/0/_temporary/attempt_local960306821_0001_r_000000_0 (exists=false, cwd=file:/home/brina/workspace/C4.5Hadoop)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:442)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:428)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:801)
    at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:123)
    at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.<init>(ReduceTask.java:484)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:414)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
2016-03-12 19:08:04,332 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1386)) - Job job_local960306821_0001 failed with state FAILED due to: NA
2016-03-12 19:08:04,492 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1391)) - Counters: 33
    File System Counters
        FILE: Number of bytes read=523
        FILE: Number of bytes written=249822
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Map-Reduce Framework
        Map input records=14
        Map output records=56
        Map output bytes=863
        Map output materialized bytes=981
        Input split bytes=93
        Combine input records=0
        Combine output records=0
        Reduce input groups=0
        Reduce shuffle bytes=981
        Reduce input records=0
        Reduce output records=0
        Spilled Records=56
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=0
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
        Total committed heap usage (bytes)=188743680
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=374
    File Output Format Counters
        Bytes Written=0



Exception in thread "main" java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
    at C45.run(C45.java:192)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at C45.main(C45.java:53)

Best Answer

(Copied from the comments, in case anyone else runs into this problem.)

Based on the log line

Mkdirs failed to create file:/usr/local/hadoop/1/output10/_temporary/0/_temporary/attempt_local960306821_0001_r_000000_0 (exists=false, cwd=file:/home/brina/workspace/C4.5Hadoop)

the problem is in the local file system, not in HDFS. You therefore need to adjust the write permissions on the node(s) accordingly.
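One quick way to confirm this (a minimal sketch, not part of the original answer) is to check which file system the output path actually resolves to and whether a directory can be created next to it, before submitting the job. The class name OutputPathCheck, the probe directory name, and the hard-coded path (taken from the error message above) are illustrative; the calls are the standard Hadoop FileSystem API.

// Minimal sketch, assuming the output path from the error message above.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path;

public class OutputPathCheck {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Output directory taken from the error message; replace with your own.
        Path output = new Path("/usr/local/hadoop/1/output10");

        // Resolve the path against fs.defaultFS, exactly as the job would.
        FileSystem fs = output.getFileSystem(conf);
        System.out.println("Output resolves to: " + fs.getUri());

        if (fs instanceof LocalFileSystem) {
            // fs.defaultFS is file:///, so the job writes to the local disk
            // (the situation described in the answer above) and the user
            // running the job needs write access under the parent directory.
            System.out.println("Warning: output is on the local file system, not HDFS.");
        }

        // Probe write access by creating and removing a scratch directory
        // next to the intended output (the probe name is arbitrary).
        Path probe = new Path(output.getParent(), "output10_probe");
        if (fs.mkdirs(probe)) {
            System.out.println("Write access OK under " + output.getParent());
            fs.delete(probe, true);
        } else {
            System.out.println("Cannot create " + probe + " -- check permissions.");
        }
    }
}

If the path resolves to the local file system and the probe mkdirs fails, either give the user running the job write access to /usr/local/hadoop/1 (for example by changing the directory's ownership), or point the job at HDFS by setting fs.defaultFS and using an HDFS output path.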

The original question "java - Hadoop MapReduce Error: Mkdirs failed to create file; job failed" can be found on Stack Overflow: https://stackoverflow.com/questions/35958682/
