
hadoop - How to extract data from a Hadoop sequence file?

Reposted. Author: 可可西里. Updated: 2023-11-01 14:42:40

Hadoop sequence files are really strange. I packed an image into a sequence file and cannot recover the image. I ran some simple tests and found that the byte size before and after going through the sequence file is not even the same.

Configuration confHadoop = new Configuration();
FileSystem fs = FileSystem.get(confHadoop);

String fileName = args[0];
Path file = new Path(fs.getUri().toString() + "/" + fileName);
Path seqFile = new Path("/temp.seq");
SequenceFile.Writer writer = null;
FSDataInputStream in = null;
try {
    writer = SequenceFile.createWriter(confHadoop, Writer.file(seqFile), Writer.keyClass(Text.class),
            Writer.valueClass(BytesWritable.class));

    in = fs.open(file);
    byte buffer[] = IOUtils.toByteArray(in);

    System.out.println("original size ----> " + String.valueOf(buffer.length));
    writer.append(new Text(fileName), new BytesWritable(buffer));
    System.out.println(calculateMd5(buffer));
    writer.close();

} finally {
    IOUtils.closeQuietly(in);
}

SequenceFile.Reader reader = new SequenceFile.Reader(confHadoop, Reader.file(seqFile));

Text key = new Text();
BytesWritable val = new BytesWritable();

while (reader.next(key, val)) {
    System.out.println("size get from sequence file --->" + String.valueOf(val.getLength()));
    String md5 = calculateMd5(val.getBytes());
    Path readSeq = new Path("/write back.png");
    FSDataOutputStream out = null;
    out = fs.create(readSeq);
    out.write(val.getBytes()); // YES! GOT THE ORIGINAL IMAGE
    out.close();
    System.out.println(md5);
    .............
}

The output shows that I get back the same number of bytes, and after writing the image back to the local disk I am sure it is the original image. So why are the MD5 values different?

What am I doing wrong here?

14/04/22 16:21:35 INFO compress.CodecPool: Got brand-new compressor [.deflate]
original size ----> 485709
c413e36fd864b27d4c8927956298edbb
14/04/22 16:21:35 INFO compress.CodecPool: Got brand-new decompressor [.deflate]
size get from sequence file --->485709
322cce20b732126bcb8876c4fcd925cb
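
calculateMd5() is not shown in the post. For reference, here is a minimal sketch of such a helper, assuming it simply returns the hex-encoded MD5 digest of a byte array (the method name and behavior are inferred from how the post uses it):

import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Assumed helper: hex-encoded MD5 of a byte array (not part of the original post).
public static String calculateMd5(byte[] data) throws NoSuchAlgorithmException {
    MessageDigest md = MessageDigest.getInstance("MD5");
    StringBuilder sb = new StringBuilder();
    for (byte b : md.digest(data)) {
        sb.append(String.format("%02x", b));
    }
    return sb.toString();
}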

Best Answer

Finally solved this strange problem, and I have to share it. First, let me show you the wrong way to get the bytes back out of the sequence file.

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
Path input = new Path(inPath);
Reader reader = new SequenceFile.Reader(conf, Reader.file(input));
Text key = new Text();

BytesWritable val = new BytesWritable();
while (reader.next(key, val)) {
    fileName = key.toString();
    byte[] data = val.getBytes(); // don't think you have got the data!
}

The reason is that getBytes() does not return exactly the size of the original data. I had written the data in using:

FSDataInputStream in = null;
in = fs.open(input);
byte[] buffer = IOUtils.toByteArray(in);

Writer writer = SequenceFile.createWriter(conf,
Writer.file(output), Writer.keyClass(Text.class),
Writer.valueClass(BytesWritable.class));

writer.append(new Text(inPath), new BytesWritable(buffer));
writer.close();

I checked the size of the output sequence file: it is the original size plus the header, so I am not sure why getBytes() gives me more bytes than the original. Anyway, let's see how to get the data back correctly.
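
For what it's worth, the extra bytes come from the backing buffer of BytesWritable: when the value is resized (for example while being deserialized from the sequence file), the buffer is grown past the valid data length, and getBytes() returns that whole buffer while getLength() reports the real size. A minimal sketch to observe this, reusing the 485709-byte image size from the post (the exact amount of padding is an implementation detail of Hadoop's BytesWritable):

import org.apache.hadoop.io.BytesWritable;

public class BytesWritablePadding {
    public static void main(String[] args) {
        // Simulates what readFields() does when a value is loaded from a sequence file:
        // the backing buffer is grown beyond the valid data length.
        byte[] original = new byte[485709];
        BytesWritable val = new BytesWritable();
        val.set(original, 0, original.length);

        System.out.println("getLength()       = " + val.getLength());       // 485709
        System.out.println("getBytes().length = " + val.getBytes().length); // larger than 485709
    }
}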

Option #1: copy only the number of bytes you actually need.

byte[] rawdata = val.getBytes();
length = val.getLength(); // exactly the size of the original data
byte[] data = Arrays.copyOfRange(rawdata, 0, length); // this is correct

Option #2

byte[] data = val.copyBytes();

This one is sweeter. :) Finally got it working.
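
Putting both halves together, here is a minimal end-to-end sketch of extracting every value from a sequence file with copyBytes(); the paths and the output naming scheme are assumptions, not part of the original post:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class ExtractFromSequenceFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path seqFile = new Path(args[0]);   // e.g. /temp.seq
        Path outputDir = new Path(args[1]); // directory to restore the files into

        SequenceFile.Reader reader =
                new SequenceFile.Reader(conf, SequenceFile.Reader.file(seqFile));
        try {
            Text key = new Text();
            BytesWritable val = new BytesWritable();
            while (reader.next(key, val)) {
                // copyBytes() returns exactly getLength() bytes, so the restored
                // file matches the original byte for byte.
                byte[] data = val.copyBytes();
                Path target = new Path(outputDir, new Path(key.toString()).getName());
                FSDataOutputStream out = fs.create(target, true);
                try {
                    out.write(data);
                } finally {
                    out.close();
                }
            }
        } finally {
            reader.close();
        }
    }
}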

Regarding "hadoop - How to extract data from a Hadoop sequence file?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/23211493/
