
Java - Empty ORC File


I'm trying to write an ORC file using orc-core, to be read later by Hive.

The file being written has the correct number of rows, but there is no content in the columns. I can see this both when trying to read the file with a SELECT query in Hive and when using hive --orcfiledump -d.

I tried the example provided in the guide, which writes two long-type columns, and the resulting file is read correctly by Hive. I suspect this is related to the fact that I'm writing string columns, but I still can't get it to work.
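For reference, this is roughly the shape of that working long-column example from the guide (the class name, output path, and row values here are my own illustrative choices):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
import org.apache.orc.OrcFile;
import org.apache.orc.TypeDescription;
import org.apache.orc.Writer;

public class OrcLongExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        TypeDescription schema = TypeDescription.fromString("struct<x:bigint,y:bigint>");
        Writer writer = OrcFile.createWriter(new Path("my-file.orc"),
                OrcFile.writerOptions(conf).setSchema(schema));
        VectorizedRowBatch batch = schema.createRowBatch();
        LongColumnVector x = (LongColumnVector) batch.cols[0];
        LongColumnVector y = (LongColumnVector) batch.cols[1];
        for (int r = 0; r < 10000; ++r) {
            int row = batch.size++;
            // For long columns, writing into the vector directly is enough
            x.vector[row] = r;
            y.vector[row] = r * 3;
            if (batch.size == batch.getMaxSize()) {
                writer.addRowBatch(batch);
                batch.reset();
            }
        }
        if (batch.size != 0) {
            writer.addRowBatch(batch);
            batch.reset();
        }
        writer.close();
    }
}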

This is how I'm currently writing the file:

// File schema: builds a string of the form
//   struct<field1:string,...,fieldN:string,lastRecordHash:string,currentHash:string>
String outputFormat = "struct<";
for (int i = 0; i < outputSchema.length; i++) {
    outputFormat += outputSchema[i] + ":string,";
}
outputFormat += "lastRecordHash:string,currentHash:string>";
TypeDescription orcSchema = TypeDescription.fromString(outputFormat);

// Initializes buffers
VectorizedRowBatch batch = orcSchema.createRowBatch();
ArrayList<BytesColumnVector> orcBuffers = new ArrayList<>(numFields + 2);
for (int i = 0; i < numFields + 2; i++) {
    BytesColumnVector bcv = (BytesColumnVector) batch.cols[i];
    orcBuffers.add(i, bcv);
}

...

// Initializes writer
Writer writer = null;
try {
    writer = OrcFile.createWriter(new Path(hdfsUri + outputPath),
            OrcFile.writerOptions(conf).setSchema(orcSchema));
    partitionCounter++;
} catch (IOException e) {
    log.error("Cannot open hdfs file. Reason: " + e.getMessage());
    session.transfer(flowfile, hdfsFailure);
    return;
}

// Writes content
String[] records = ...

for (int i = 0; i < records.length; i++) {
    fields = records[i].split(fieldSeparator);

    int row = batch.size++;

    // Filling the orc buffers
    for (int j = 0; j < numFields; j++) {
        orcBuffers.get(j).vector[row] = fields[j].getBytes();
        hashDigest.append(fields[j]);
    }
    if (batch.size == batch.getMaxSize()) {
        try {
            writer.addRowBatch(batch);
            batch.reset();
        } catch (IOException e) {
            log.error("Cannot write to hdfs. Reason: " + e.getMessage());
            return;
        }
    }
}
if (batch.size != 0) {
    try {
        writer.addRowBatch(batch);
        batch.reset();
    } catch (IOException e) {
        log.error("Cannot write to hdfs. Reason: " + e.getMessage());
        return;
    }
}
writer.close();

Any advice or useful reference is greatly appreciated.

Thanks, everyone.

Best Answer

It seems I needed an in-depth review of the API docs. What I was missing:

  • Calling initBuffer() on each BytesColumnVector during the initialization phase. A BytesColumnVector stores its values in a vector array plus parallel start and length arrays; assigning to vector[row] directly, as I was doing, leaves the lengths at zero, which is why the columns read back empty.
  • Assigning the columns' values by calling setVal(). This can also be done with setRef(), which is documented as the faster of the two, but I don't know whether it suits my specific case; I'll give it a try (see the sketch after this list).
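A quick sketch of the two calls, reusing the bcv, row, and fields names from the snippets here (the copy-versus-reference behavior is the point):

// setVal() copies the bytes into the vector's internal buffer,
// which is why initBuffer() must be called beforehand:
bcv.setVal(row, fields[j].getBytes());

// setRef() stores a reference to the caller's array instead of copying;
// the array must not be modified until the batch has been written:
byte[] value = fields[j].getBytes();
bcv.setRef(row, value, 0, value.length);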

Here is the updated code:

// File schema: same string-typed struct as before
String outputFormat = "struct<";
for (int i = 0; i < outputSchema.length; i++) {
    outputFormat += outputSchema[i] + ":string,";
}
outputFormat += "lastRecordHash:string,currentHash:string>";
TypeDescription orcSchema = TypeDescription.fromString(outputFormat);

// Initializes buffers
VectorizedRowBatch batch = orcSchema.createRowBatch();
ArrayList<BytesColumnVector> orcBuffers = new ArrayList<>(numFields + 2);
for (int i = 0; i < numFields + 2; i++) {
    BytesColumnVector bcv = (BytesColumnVector) batch.cols[i];
    bcv.initBuffer(); // allocates the internal buffer that setVal() copies into
    orcBuffers.add(i, bcv);
}

...

// Initializes writer
Writer writer = null;
try {
    writer = OrcFile.createWriter(new Path(hdfsUri + outputPath),
            OrcFile.writerOptions(conf).setSchema(orcSchema));
    partitionCounter++;
} catch (IOException e) {
    log.error("Cannot open hdfs file. Reason: " + e.getMessage());
    session.transfer(flowfile, hdfsFailure);
    return;
}

// Writes content
String[] records = ...

for (int i = 0; i < records.length; i++) {
    fields = records[i].split(fieldSeparator);

    int row = batch.size++;

    // Filling the orc buffers
    for (int j = 0; j < numFields; j++) {
        orcBuffers.get(j).setVal(row, fields[j].getBytes());
        hashDigest.append(fields[j]);
    }
    if (batch.size == batch.getMaxSize()) {
        try {
            writer.addRowBatch(batch);
            batch.reset();
        } catch (IOException e) {
            log.error("Cannot write to hdfs. Reason: " + e.getMessage());
            return;
        }
    }
}
if (batch.size != 0) {
    try {
        writer.addRowBatch(batch);
        batch.reset();
    } catch (IOException e) {
        log.error("Cannot write to hdfs. Reason: " + e.getMessage());
        return;
    }
}
writer.close();

Regarding this empty-ORC-file issue in Java, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/43101279/
