
hadoop - HBase "between" filter

Reprinted. Author: 可可西里. Updated: 2023-11-01 15:18:47

I am trying to retrieve rows within a range using a filter list, but without success. Below is my code snippet.

I want to retrieve data between 1000 and 2000.

HTable table = new HTable(conf, "TRAN_DATA");

List<Filter> filters = new ArrayList<Filter>();

SingleColumnValueFilter filter1 = new SingleColumnValueFilter(Bytes.toBytes("TRAN"),
        Bytes.toBytes("TRAN_ID"),
        CompareFilter.CompareOp.GREATER, new BinaryComparator(Bytes.toBytes("1000")));
filter1.setFilterIfMissing(true);
filters.add(filter1);

SingleColumnValueFilter filter2 = new SingleColumnValueFilter(Bytes.toBytes("TRAN"),
        Bytes.toBytes("TRAN_ID"),
        CompareFilter.CompareOp.LESS, new BinaryComparator(Bytes.toBytes("2000")));
filters.add(filter2);

FilterList filterList = new FilterList(filters);

Scan scan = new Scan();
scan.setFilter(filterList);
ResultScanner scanner1 = table.getScanner(scan);

System.out.println("Results of scan #1 - MUST_PASS_ALL:");
int n = 0;

for (Result result : scanner1) {
    for (KeyValue kv : result.raw()) {
        System.out.println("KV: " + kv + ", Value: "
                + Bytes.toString(kv.getValue()));
        n++;
    }
}
scanner1.close();

I have tried every variant I could think of:

1. SingleColumnValueFilter filter2 = new SingleColumnValueFilter(Bytes.toBytes("TRANSACTIONS"), Bytes.toBytes("TRANS_ID"), CompareFilter.CompareOp.LESS, new SubstringComparator("5000"));
2. SingleColumnValueFilter filter2 = new SingleColumnValueFilter(Bytes.toBytes("TRANSACTIONS"), Bytes.toBytes("TRANS_ID"), CompareFilter.CompareOp.LESS, Bytes.toBytes("5000"));

None of the above worked :(

Best Answer

One thing definitely worth fixing here: when creating the FilterList, you should also specify a FilterList.Operator; otherwise it is not clear how the FilterList will combine multiple filters. In your case it should look like this:

FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL, filters);

See if this helps.
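The effect of the operator can be illustrated without an HBase cluster. The sketch below is a standalone, hypothetical stand-in using plain java.util.function.Predicate: MUST_PASS_ALL behaves like a logical AND over the filters, MUST_PASS_ONE like a logical OR. The comparisons mirror what BinaryComparator does with the string-encoded values "1000" and "2000" (byte-wise lexicographic order, which coincides with numeric order here only because both operands have the same width).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Standalone illustration of FilterList.Operator semantics (not HBase API).
public class FilterListDemo {

    // MUST_PASS_ALL: a value passes only if every filter accepts it (AND)
    static boolean mustPassAll(List<Predicate<String>> filters, String value) {
        return filters.stream().allMatch(f -> f.test(value));
    }

    // MUST_PASS_ONE: a value passes if at least one filter accepts it (OR)
    static boolean mustPassOne(List<Predicate<String>> filters, String value) {
        return filters.stream().anyMatch(f -> f.test(value));
    }

    public static void main(String[] args) {
        List<Predicate<String>> filters = new ArrayList<>();
        // Lexicographic comparison, mirroring BinaryComparator on string bytes.
        filters.add(v -> v.compareTo("1000") > 0); // CompareOp.GREATER "1000"
        filters.add(v -> v.compareTo("2000") < 0); // CompareOp.LESS "2000"

        System.out.println(mustPassAll(filters, "1500")); // in range -> true
        System.out.println(mustPassAll(filters, "2500")); // out of range -> false
    }
}
```

With MUST_PASS_ONE instead, "2500" would still pass (it satisfies the GREATER filter alone), which is why an unintended OR makes a "between" scan appear to return everything.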

Regarding hadoop - HBase "between" filter, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/10429412/
