
hadoop - HBase with MapReduce

Reposted · Author: 可可西里 · Updated: 2023-11-01 14:59:33

I have set up an HBase cluster on a Hadoop cluster, with IPv6 disabled on all nodes.

Everything works fine; I can run Java clients that access HBase with the standard Put, Scan, Get, ...

I then wrote a map-reduce program to access HBase, but I get the following error:

Exception in thread "main" java.lang.NullPointerException
at org.apache.hadoop.net.DNS.reverseDns(DNS.java:72)
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.reverseDNS(TableInp...
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.reverseDNS(TableInp...
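For reference, the call that throws here is Hadoop's reverse-DNS lookup, which TableInputFormatBase performs for each region when computing input splits. Below is a minimal sketch that reproduces the lookup in isolation; the wrapper class is just for illustration, and the address is a placeholder for one of the region server IPs from the hosts file below:

import java.net.InetAddress;
import org.apache.hadoop.net.DNS;

public class ReverseDnsCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder address: substitute each region server in turn.
        InetAddress regionServer = InetAddress.getByName("192.168.0.252");
        try {
            // The same lookup TableInputFormatBase performs per region.
            System.out.println(DNS.reverseDns(regionServer, null));
        } catch (Exception e) {
            // A missing PTR record surfaces as the same NPE seen in the job.
            System.err.println("Reverse lookup failed: " + e);
        }
    }
}

If this fails or prints an unexpected name on any node, the MapReduce job will hit the same code path.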

Here is the /etc/hosts file on all of my nodes:

127.0.0.1       localhost.localdomain   localhost
192.168.0.252 master.hadoop.com master
192.168.0.251 slave.hadoop.com slave

And here is my map-reduce client program:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;

public class HBaseAndMapReduceExample {

    public static class MyMapper extends TableMapper<ImmutableBytesWritable, Put> {

        @Override
        public void map(ImmutableBytesWritable row, Result value, Context context)
                throws IOException, InterruptedException {
            // This example just copies the data from the source table.
            context.write(row, resultToPut(row, value));
        }

        private static Put resultToPut(ImmutableBytesWritable key, Result result)
                throws IOException {
            Put put = new Put(key.get());
            for (KeyValue kv : result.raw()) {
                put.add(kv);
            }
            return put;
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration config = HBaseConfiguration.create();
        Job job = new Job(config, "HBaseAndMapReduceExample");
        job.setJarByClass(HBaseAndMapReduceExample.class); // class that contains the mapper

        Scan scan = new Scan();
        scan.setCaching(500);       // the default of 1 is bad for MapReduce jobs
        scan.setCacheBlocks(false); // don't set to true for MR jobs
        // set other scan attributes here

        TableMapReduceUtil.initTableMapperJob(
                "testtable",    // input table
                scan,           // Scan instance to control CF and attribute selection
                MyMapper.class, // mapper class
                null,           // mapper output key
                null,           // mapper output value
                job);

        TableMapReduceUtil.initTableReducerJob(
                "testtable2",   // output table
                null,           // reducer class
                job);
        job.setNumReduceTasks(0);

        boolean b = job.waitForCompletion(true);
        if (!b) {
            throw new IOException("error with job!");
        }
    }
}

Best Answer

Try also adding your ZooKeeper quorum address to the configuration:

Configuration config = HBaseConfiguration.create();
config.setStrings("hbase.zookeeper.quorum",
        "your-zookeeper-addr");
Job job = new Job(config, "HBaseAndMapReduceExample");
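For completeness, a minimal sketch of the resulting setup, assuming a single ZooKeeper node on the master from the hosts file above and the stock client port (the wrapper class is just for illustration; adjust both values to your cluster):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.mapreduce.Job;

public class ConfiguredJobSetup {
    public static Job createJob() throws Exception {
        Configuration config = HBaseConfiguration.create();
        // Assumption: ZooKeeper runs on the master; list every quorum member here.
        config.setStrings("hbase.zookeeper.quorum", "master.hadoop.com");
        // 2181 is HBase's default client port; set it explicitly if yours differs.
        config.set("hbase.zookeeper.property.clientPort", "2181");
        return new Job(config, "HBaseAndMapReduceExample");
    }
}

With the quorum set explicitly, the client no longer depends on whichever hbase-site.xml happens to be on the classpath.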

Regarding hadoop - HBase with MapReduce, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/13209478/
