
java - Spark cannot access HBase, but plain Java code can


I am using Spark 1.3.0 and HBase 1.0. HBase works fine when accessed from plain Java code, and the HBase shell also works correctly, but accessing HBase from Spark fails with the errors below. The failure only appeared after the cluster had been running for about a week; before that, the same Spark jobs worked fine too.

I have already verified that both the Hadoop and HBase clusters are healthy.
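
For reference, the plain-Java access that works looks roughly like the following (a minimal sketch against the HBase 1.0 client API; the row key "row1", column family "cf", and qualifier "q" are illustrative placeholders, and only the table name my_sample_table comes from the traces below):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PlainJavaPut {
        public static void main(String[] args) throws Exception {
            // Picks up hbase-site.xml (ZooKeeper quorum etc.) from the classpath
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("my_sample_table"))) {
                // "row1", "cf", and "q" are placeholder names for illustration
                Put put = new Put(Bytes.toBytes("row1"));
                put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
                table.put(put);
            }
        }
    }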

In the Spark UI:

Caused by: java.io.IOException: Enable/Disable failed 
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.isTableOnlineState(ZooKeeperRegistry.java:110)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isTableDisabled(ConnectionManager.java:907)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.relocateRegion(ConnectionManager.java:1076)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.relocateRegion(ConnectionManager.java:1064)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getRegionLocation(ConnectionManager.java:885)
at org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:78)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
... 23 more
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/table/my_sample_table
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1151)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:360)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:685)
at org.apache.hadoop.hbase.zookeeper.ZKTableStateClientSideReader.getTableState(ZKTableStateClientSideReader.java:186)
at org.apache.hadoop.hbase.zookeeper.ZKTableStateClientSideReader.isDisabledTable(ZKTableStateClientSideReader.java:60)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.isTableOnlineState(ZooKeeperRegistry.java:108)
... 29 more

And in some cases:

   WARN  ZKUtil:484 - hconnection-0x792dd4040x0, quorum=Megatron:2222, baseZNode=/hbase Unable to set watcher on znode (/hbase/hbaseid)
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:222)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:481)
at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:86)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:833)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:623)
at sun.reflect.GeneratedConstructorAccessor530.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
at org.apache.hadoop.hbase.mapreduce.TableInputFormat.initialize(TableInputFormat.java:183)
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:230)
at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:237)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:95)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1511)
at org.apache.spark.rdd.RDD.count(RDD.scala:1006)
at org.apache.spark.api.java.JavaRDDLike$class.count(JavaRDDLike.scala:412)
at org.apache.spark.api.java.JavaRDD.count(JavaRDD.scala:32)

The first error occurs when trying to put something into HBase; the second occurs when trying to read from HBase.
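
For context, the second trace (TableInputFormat.getSplits called from JavaRDD.count) corresponds to a Spark read roughly like the minimal sketch below. The quorum Megatron:2222 and the table my_sample_table are taken from the logs above; everything else is an illustrative assumption, not the asker's actual code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class SparkHBaseRead {
        public static void main(String[] args) {
            JavaSparkContext jsc =
                new JavaSparkContext(new SparkConf().setAppName("hbase-read"));

            Configuration conf = HBaseConfiguration.create();
            // Quorum host and client port as reported in the ZKUtil warning
            conf.set("hbase.zookeeper.quorum", "Megatron");
            conf.set("hbase.zookeeper.property.clientPort", "2222");
            conf.set(TableInputFormat.INPUT_TABLE, "my_sample_table");

            // TableInputFormat.getSplits() runs when partitions are computed;
            // this is where the ConnectionLoss exception in the trace surfaces
            JavaPairRDD<ImmutableBytesWritable, Result> rows =
                jsc.newAPIHadoopRDD(conf, TableInputFormat.class,
                                    ImmutableBytesWritable.class, Result.class);

            System.out.println("row count: " + rows.count());
            jsc.stop();
        }
    }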

Best Answer

The problem was with HBase table locking: the table was stuck in a pending, in-between state (neither fully enabled nor fully disabled). There are a few suggested solutions; a programmatic check of the table state is sketched after the list below.

Try the following, in order:

  • hbase hbck -fix
  • Restart the HMaster

If that fails:

  • Restart the master host

If that also fails:

  • Restart ZooKeeper

If all else fails:

  • Restart the whole cluster
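
Before resorting to restarts, the stuck state can be confirmed programmatically. This is a minimal sketch with the HBase 1.0 Admin API, not part of the original answer; a table caught mid enable/disable can report neither state as true:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TableStateCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Admin admin = connection.getAdmin()) {
                TableName table = TableName.valueOf("my_sample_table");
                // A healthy table reports exactly one of these as true
                System.out.println("enabled:  " + admin.isTableEnabled(table));
                System.out.println("disabled: " + admin.isTableDisabled(table));
            }
        }
    }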

More details can be found at this URL: JIRA Ticket

Regarding "java - Spark cannot access HBase, but plain Java code can", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/30660128/
