
pyspark - Spark SQL with Python: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient


I want to test some basic things with Spark SQL. I want to load a .csv file saved on my laptop and run some SQL queries on it. But somehow I cannot load the data with sqlContext. I get the error:

Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient. 

However, I am not using Hive.

I am on Windows 10 and installed Python with Anaconda. I installed Spark 2.0.2, pre-built for Hadoop 2.6, and I use the IPython Notebook as the user interface.

My code is as follows:
file = "C:/Andra/spark-2.0.2-bin-hadoop2.6/zip.csv"
df = sqlContext \
    .read \
    .format("com.databricks.spark.csv") \
    .option("header", "true") \
    .option("inferschema", "true") \
    .option("mode", "DROPMALFORMED") \
    .load(file)
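
For what it's worth, Spark 2.0 already bundles a CSV data source, so the external com.databricks package is not strictly needed. A minimal sketch, assuming a working SparkSession named spark (the built-in reader documents the option as inferSchema):

# Spark 2.0+ bundles a CSV reader, so no external package is required
df = spark.read \
    .option("header", "true") \
    .option("inferSchema", "true") \
    .option("mode", "DROPMALFORMED") \
    .csv("C:/Andra/spark-2.0.2-bin-hadoop2.6/zip.csv")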

The problem seems to be specific to Spark SQL, because I can load the same file with:
textFile=sc.textFile("C:/Andra/spark-2.0.2-bin-hadoop2.6/zip.csv")

I also get the same error if I run the example from the Spark SQL documentation ( https://spark.apache.org/docs/latest/sql-programming-guide.html ):
from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("Python Spark SQL basic example") \
    .config("spark.some.config.option", "some-value") \
    .getOrCreate()
df = spark.read.json("C:/Andra/spark-2.0.2-bin-hadoop2.6/examples/src/main/resources/people.json")
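
If the read itself succeeded, running SQL over the result would be a one-liner via a temporary view. A minimal sketch, assuming the people.json load above works:

# Register the loaded DataFrame as a temp view and query it with SQL
df.createOrReplaceTempView("people")
spark.sql("SELECT name FROM people WHERE age > 20").show()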

My impression was that I could use Spark SQL without Hive, since the data I am using is saved locally on my laptop. Moreover, the same documentation implies as much:

"One use of Spark SQL is to execute SQL queries. Spark SQL can also be used to read data from an existing Hive installation. For more on how to configure this feature, please refer to the Hive Tables section."

There are also separate examples that create a Spark session with Hive support. So if using Hive were mandatory, the statement above would be pointless.
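
For reference, the documentation's Hive examples opt in to Hive explicitly via enableHiveSupport(), which is consistent with Hive being optional. A minimal sketch of that opt-in:

from pyspark.sql import SparkSession

# The docs request Hive support explicitly; a plain builder
# (as in the example above) should not require a Hive installation
spark = SparkSession.builder \
    .appName("Hive example") \
    .enableHiveSupport() \
    .getOrCreate()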

Nevertheless, I wanted to configure Hive to see whether that would solve the problem. The documentation ( https://spark.apache.org/docs/latest/sql-programming-guide.html#hive-tables ) states:

"Configuration of Hive is done by placing your hive-site.xml, core-site.xml (for security configuration), and hdfs-site.xml (for HDFS configuration) file in conf/."

However, I cannot find those files.
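
(A stock Spark download only ships *.template files in conf/; hive-site.xml and the others would come from an existing Hadoop/Hive installation.) Separately, one workaround often suggested for this exact error — sketched here as a suggestion, not a verified fix for this setup — is to force Spark's in-memory catalog so that no Hive metastore is touched at all:

from pyspark.sql import SparkSession

# Must be set before the first SparkSession/SparkContext exists in this JVM
spark = SparkSession.builder \
    .appName("no-hive") \
    .config("spark.sql.catalogImplementation", "in-memory") \
    .getOrCreate()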

So my questions are:
  • Is Hive required to use Spark SQL?
  • If not, what can I do to get Spark SQL working?
  • If yes, how do I configure it correctly, and where can I find the files it needs?

Any help is appreciated! Thanks!

Here is the full error statement:
    ---------------------------------------------------------------------------
    Py4JJavaError Traceback (most recent call last)
    <ipython-input-4-e50d7a8fb32b> in <module>()
    1 file = "C:/Andra/spark-2.0.2-bin-hadoop2.6/zip.csv"
    ----> 2 df = sqlContext .read .format("com.databricks.spark.csv") .option("header", "true") .option("inferschema", "true") .option("mode", "DROPMALFORMED") .load(file)

    C:\Andra\spark-2.0.2-bin-hadoop2.6\python\pyspark\sql\readwriter.pyc in load(self, path, format, schema, **options)
    145 self.options(**options)
    146 if isinstance(path, basestring):
    --> 147 return self._df(self._jreader.load(path))
    148 elif path is not None:
    149 if type(path) != list:

    C:\Andra\spark-2.0.2-bin-hadoop2.6\python\lib\py4j-0.10.3-src.zip\py4j\java_gateway.py in __call__(self, *args)
    1131 answer = self.gateway_client.send_command(command)
    1132 return_value = get_return_value(
    -> 1133 answer, self.gateway_client, self.target_id, self.name)
    1134
    1135 for temp_arg in temp_args:

    C:\Andra\spark-2.0.2-bin-hadoop2.6\python\pyspark\sql\utils.pyc in deco(*a, **kw)
    61 def deco(*a, **kw):
    62 try:
    ---> 63 return f(*a, **kw)
    64 except py4j.protocol.Py4JJavaError as e:
    65 s = e.java_exception.toString()

    C:\Andra\spark-2.0.2-bin-hadoop2.6\python\lib\py4j-0.10.3-src.zip\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
    317 raise Py4JJavaError(
    318 "An error occurred while calling {0}{1}{2}.\n".
    --> 319 format(target_id, ".", name), value)
    320 else:
    321 raise Py4JError(

    Py4JJavaError: An error occurred while calling o110.load.
    : java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
    at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:189)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:258)
    at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:359)
    at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:263)
    at org.apache.spark.sql.hive.HiveSharedState.metadataHive$lzycompute(HiveSharedState.scala:39)
    at org.apache.spark.sql.hive.HiveSharedState.metadataHive(HiveSharedState.scala:38)
    at org.apache.spark.sql.hive.HiveSharedState.externalCatalog$lzycompute(HiveSharedState.scala:46)
    at org.apache.spark.sql.hive.HiveSharedState.externalCatalog(HiveSharedState.scala:45)
    at org.apache.spark.sql.hive.HiveSessionState.catalog$lzycompute(HiveSessionState.scala:50)
    at org.apache.spark.sql.hive.HiveSessionState.catalog(HiveSessionState.scala:48)
    at org.apache.spark.sql.hive.HiveSessionState$$anon$1.<init>(HiveSessionState.scala:63)
    at org.apache.spark.sql.hive.HiveSessionState.analyzer$lzycompute(HiveSessionState.scala:63)
    at org.apache.spark.sql.hive.HiveSessionState.analyzer(HiveSessionState.scala:62)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
    at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:382)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:143)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:132)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:745)
    Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
    at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
    ... 33 more
    Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
    ... 39 more
    Caused by: java.lang.NullPointerException
    at org.apache.thrift.transport.TSocket.open(TSocket.java:170)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:420)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236)
    at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
    ... 44 more

Best Answer

I recently ran into the same problem. In my case, I was running two Python Jupyter notebooks on my local machine at the same time. The first notebook worked fine. The second one kept throwing the dreaded

    Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
I am not sure how the permissions work, but it seems the first notebook to run somehow locks the local metastore. It would make sense that the metastore cannot be shared between two different sessions.
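
One workaround that gets suggested for this Derby lock — untested here, and the paths below are placeholder examples, not anything from this thread — is to give the second session its own metastore location:

from pyspark.sql import SparkSession

# Embedded Derby accepts a single connection, so point this session
# at its own metastore database (paths are made-up examples)
spark = SparkSession.builder \
    .appName("notebook-2") \
    .config("spark.sql.warehouse.dir", "C:/tmp/warehouse-2") \
    .config("spark.hadoop.javax.jdo.option.ConnectionURL",
            "jdbc:derby:;databaseName=C:/tmp/metastore-2;create=true") \
    .getOrCreate()

A lower-tech alternative: embedded Derby creates its metastore_db folder in the current working directory, so starting each notebook server from a different directory should also avoid the collision.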

Maybe someone knows a cleaner way to enable multiple notebooks?

Andy

Regarding pyspark - Spark SQL with Python: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/41611078/
