
java - BulkLoad into an HBase table with Snappy compression fails with UnsatisfiedLinkError


When trying to bulk load from an M/R job into a table with Snappy compression enabled, I get the following error:

ERROR mapreduce.LoadIncrementalHFiles: Unexpected execution exception during splitting
java.util.concurrent.ExecutionException: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
at java.util.concurrent.FutureTask.get(FutureTask.java:111)
at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplitPhase(LoadIncrementalHFiles.java:335)
at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:234)

The table description is:

{NAME => 'matrix_com', FAMILIES => [{NAME => 't', BLOOMFILTER => 'NONE',
 REPLICATION_SCOPE => '0', COMPRESSION => 'SNAPPY', VERSIONS => '12',
 TTL => '1555200000', MIN_VERSIONS => '0', BLOCKSIZE => '65536',
 IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}                          ENABLED: true

If Hadoop has the Snappy codec installed, and HBase raises no error when the table is created with Snappy, why do I get this error?
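One way to narrow this down is to check whether the native Snappy library is actually visible to the JVM that runs the bulk load. A minimal sketch, assuming a standard Hadoop/HBase installation on the PATH; the scratch file path is arbitrary:

```shell
# List the native codecs Hadoop can load; the "snappy" row should read "true"
hadoop checknative -a

# Ask HBase itself to round-trip a file through the Snappy codec
# (file:///tmp/snappy-probe is an assumed scratch path)
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-probe snappy
```

If CompressionTest throws the same UnsatisfiedLinkError, the problem is the environment of the client JVM rather than the table definition.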

Best answer

This appears to be a bug that was fixed by the Hadoop developers. See the following link:
https://issues.apache.org/jira/browse/MAPREDUCE-5799
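Until a build with that fix is in place, a common workaround is to put the Hadoop native library directory on the library path of the client JVM that runs LoadIncrementalHFiles. A sketch for hbase-env.sh (or the shell that launches the job); /usr/lib/hadoop/lib/native is an assumed install location:

```shell
# Make the native libhadoop/libsnappy visible to the bulk-load client JVM
# (adjust the directory to your actual Hadoop native-lib location)
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/lib/hadoop/lib/native"
export HBASE_LIBRARY_PATH=/usr/lib/hadoop/lib/native
```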

Regarding "java - BulkLoad into an HBase table with Snappy compression fails with UnsatisfiedLinkError", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/19492347/
