
java - Converting a text file to SequenceFileOutputFormat


I am trying to convert a text file to SequenceFileOutputFormat, but I am getting this error message:

java.io.IOException wrong key class /home/mmrao/test.txt is not class org.apache.hadoop.io.LogWritable


Mapper:
public class SequenceFileMapper extends Mapper<NullWritable, BytesWritable, Text, BytesWritable> {
private Text filenameKey;
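// Key written for every record: the path of the file backing this split, captured once in setup().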

@Override
protected void setup(Context context) throws IOException, InterruptedException {
InputSplit split = context.getInputSplit();

Path path = ((FileSplit) split).getPath();
// filenameKey = new LongWritable();
filenameKey = new Text(path.toString());
}

@Override
protected void map(NullWritable key, BytesWritable value, Context context) throws IOException, InterruptedException {
context.write(filenameKey, value);
}
}


WholeFileInputFormat:
public class WholeFileInputFormat extends FileInputFormat<NullWritable, BytesWritable> {
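// Return false from isSplitable so each file is handled as a single split by one mapper.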
@Override
protected boolean isSplitable(JobContext context, Path file) {
return false;
}

@Override
public RecordReader<NullWritable, BytesWritable> createRecordReader(InputSplit split, TaskAttemptContext context)
throws IOException, InterruptedException {
WholeFileRecordReader reader = new WholeFileRecordReader();
reader.initialize(split, context);
return reader;
}
}

WholeFileRecordReader:

public class WholeFileRecordReader extends RecordReader<NullWritable, BytesWritable> {
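// Reads a whole file as a single record: the key is NullWritable and the value holds the file's entire contents.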
private FileSplit fileSplit;
private Configuration conf;
private BytesWritable value = new BytesWritable();
private boolean processed = false;

@Override
public void initialize(InputSplit split, TaskAttemptContext context) throws IOException, InterruptedException {
this.fileSplit = (FileSplit) split;
this.conf = context.getConfiguration();
}

@Override
public boolean nextKeyValue() throws IOException, InterruptedException {
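// On the first call, read the entire file into the value buffer; later calls report that no records remain.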
if (!processed) {
byte[] contents = new byte[(int) fileSplit.getLength()];
Path file = fileSplit.getPath();
FileSystem fs = file.getFileSystem(conf);
FSDataInputStream in = null;
try {
in = fs.open(file);
IOUtils.readFully(in, contents, 0, contents.length);
value.set(contents, 0, contents.length);
} finally {
IOUtils.closeStream(in);
}
processed = true;
return true;
}
return false;
}

@Override
public NullWritable getCurrentKey() throws IOException, InterruptedException {
return NullWritable.get();
}

@Override
public BytesWritable getCurrentValue() throws IOException, InterruptedException {
return value;
}

@Override
public float getProgress() throws IOException {
return processed ? 1.0f : 0.0f;
}

@Override
public void close() throws IOException {
// do nothing
}
}

DriverClass:

public class SmallFilesToSequenceFileConverter extends Configured implements Tool {

/**
* @param args
* @throws Exception
*/
public static void main(String[] args) throws Exception {
System.exit(ToolRunner.run(new Configuration(), new SmallFilesToSequenceFileConverter(), args));
}

public int run(String[] args) throws Exception {
// TODO Auto-generated method stub
Configuration conf = getConf();
@SuppressWarnings("deprecation")
Job job = new Job(conf);
job.setJobName("SequenceFile ");
job.setJarByClass(SmallFilesToSequenceFileConverter.class);
FileInputFormat.setInputDirRecursive(job, true);
FileInputFormat.addInputPath(job, new Path(args[0]));
job.setInputFormatClass(WholeFileInputFormat.class);
// job.setOutputFormatClass(SequenceFileOutputFormat.class);
job.setMapperClass(SequenceFileMapper.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(BytesWritable.class);
// job.setReducerClass(IntSumReducer.class);
// job.setNumReduceTasks(0);
FileOutputFormat.setOutputPath(job, new Path(args[1]));
job.submit();
job.waitForCompletion(true);

return 0;
}
}

Note: the input files are on HDFS; the input and output paths are supplied on the command line.

Command:
hadoop jar seq.jar package.driverclass ip op

Error log:
mmrao@master:~$ yarn jar /home/mmrao/Downloads/seq.jar seq.SmallFilesToSequenceFileConverter /seq/files /seqout

16/06/25 10:08:43 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/06/25 10:08:45 INFO input.FileInputFormat: Total input paths to process : 2
16/06/25 10:08:45 INFO mapreduce.JobSubmitter: number of splits:2
16/06/25 10:08:46 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1466829146657_0001
16/06/25 10:08:47 INFO impl.YarnClientImpl: Submitted application application_1466829146657_0001
16/06/25 10:08:47 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1466829146657_0001/
16/06/25 10:08:47 INFO mapreduce.Job: Running job: job_1466829146657_0001
16/06/25 10:08:57 INFO mapreduce.Job: Job job_1466829146657_0001 running in uber mode : false
16/06/25 10:08:57 INFO mapreduce.Job: map 0% reduce 0%
16/06/25 10:09:09 INFO mapreduce.Job: map 50% reduce 0%
16/06/25 10:09:10 INFO mapreduce.Job: map 100% reduce 0%
16/06/25 10:09:17 INFO mapreduce.Job: Task Id : attempt_1466829146657_0001_r_000000_0, Status : FAILED
Error: java.io.IOException: wrong key class: org.apache.hadoop.io.Text is not class org.apache.hadoop.io.LongWritable
at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1308)
at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat$1.write(SequenceFileOutputFormat.java:83)
at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:558)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
at org.apache.hadoop.mapreduce.Reducer.reduce(Reducer.java:150)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

Best Answer

I think you made a mistake when writing the command. If you can post the exact command you ran, I can give you a more specific answer.

That said, judging from your error, it is most likely caused by an incorrect command.

Your command should look like this:
hadoop jar <jar location> <driver class> <input path> <output path>

Example -> hadoop jar /home/training/Desktop/file.jar DriverFile /user/file/abc.txt /user/file/output

Here,

DriverFile -> the Java class that contains the main method

/home/training/Desktop/file.jar -> the location of the jar

/user/file/abc.txt -> the full path of the file to be processed (this file must be in your HDFS)

/user/file/output -> the output directory (it must be unique, i.e. it should not already exist)
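For comparison, this is the invocation from the question's error log, which follows the same pattern (yarn jar submits the jar the same way as hadoop jar):

yarn jar /home/mmrao/Downloads/seq.jar seq.SmallFilesToSequenceFileConverter /seq/files /seqout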

If you still get the error after applying this, please take a screenshot of your logs and post it here.

Regarding java - Converting a text file to SequenceFileOutputFormat, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/37942086/
