
apache-spark - sqlContext HiveDriver error on SQLException: Method not supported


I have been trying to use sqlContext.read.format("jdbc").options(driver="org.apache.hive.jdbc.HiveDriver") to load a Hive table into Spark, without success. I have done my research and read the following:

How to connect to remote hive server from spark

Spark 1.5.1 not working with hive jdbc 1.2.0

http://belablotski.blogspot.in/2016/01/access-hive-tables-from-spark-using.html

I am using the latest Hortonworks Sandbox 2.6 and asked the same question to the community there:

https://community.hortonworks.com/questions/156828/pyspark-jdbc-py4jjavaerror-calling-o95load-javasql.html?childToView=156936#answer-156936

What I want to do via pyspark is very simple:

df = sqlContext.read.format("jdbc").options(driver="org.apache.hive.jdbc.HiveDriver", url="jdbc:hive2://localhost:10016/default", dbtable="sample_07", user="maria_dev", password="maria_dev").load()

That gave me this error:
17/12/30 19:55:14 INFO HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://localhost:10016/default
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/hdp/current/spark-client/python/pyspark/sql/readwriter.py", line 139, in load
return self._df(self._jreader.load())
File "/usr/hdp/current/spark-client/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
File "/usr/hdp/current/spark-client/python/pyspark/sql/utils.py", line 45, in deco
return f(*a, **kw)
File "/usr/hdp/current/spark-client/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o119.load.
: java.sql.SQLException: Method not supported
at org.apache.hive.jdbc.HiveResultSetMetaData.isSigned(HiveResultSetMetaData.java:143)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:136)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
at org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:748)

Using beeline, it works fine (the unsupported isSigned call in the stack trace above is made by Spark's JDBC data source while resolving the table schema, a code path beeline does not exercise):
beeline> !connect jdbc:hive2://localhost:10016/default maria_dev maria_dev
Connecting to jdbc:hive2://localhost:10016/default
Connected to: Spark SQL (version 2.1.1.2.6.1.0-129)
Driver: Hive JDBC (version 1.2.1000.2.6.1.0-129)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10016/default> select * from sample_07 limit 2;
+----------+-------------------------+------------+---------+--+
| code | description | total_emp | salary |
+----------+-------------------------+------------+---------+--+
| 00-0000 | All Occupations | 134354250 | 40690 |
| 11-0000 | Management occupations | 6003930 | 96150 |
+----------+-------------------------+------------+---------+--+

I can also do this:
spark = SparkSession.builder.appName("testapp").enableHiveSupport().getOrCreate()
spark.sql("select * from default.sample_07").collect()

But this reads Hive directly through the metastore. I want to go through JDBC to the Spark Thrift Server for fine-grained security.

I can do PostgreSQL like this:
sqlContext.read.format("jdbc").options(driver="org.postgresql.Driver")
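For comparison, a complete PostgreSQL read would look like the sketch below; the URL, table, and credentials are placeholders, not values from the original post:

df = sqlContext.read.format("jdbc").options(
    driver="org.postgresql.Driver",                # PostgreSQL JDBC driver must be on the classpath
    url="jdbc:postgresql://localhost:5432/mydb",   # placeholder host/port/database
    dbtable="public.mytable",                      # placeholder table
    user="myuser",                                 # placeholder credentials
    password="mypassword"
).load()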

I can also use Scala's java.sql.{DriverManager, Connection, Statement, ResultSet} to create a JDBC connection as a client to get to Spark. But that basically pulls all the data into memory and then re-creates the DataFrame by hand (sketched below).
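To illustrate why that approach is memory-bound, here is a minimal pyspark sketch of the same pattern; fetch_rows is a hypothetical stand-in for whatever client-side JDBC/Thrift fetch is used (the original attempt used Scala's java.sql directly):

# Hypothetical client-side fetch: every row is pulled into driver memory first.
rows = fetch_rows("select code, description, total_emp, salary from sample_07")
# Only afterwards is a DataFrame rebuilt from the in-memory rows.
df = sqlContext.createDataFrame(rows, ["code", "description", "total_emp", "salary"])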

So the question is: is there a way to create a Spark DataFrame from Hive table data over JDBC, without loading the data into the memory of a JDBC client as in the Scala approach, and without using SparkSession.Builder() as in the example above? My use case is that I need to deal with fine-grained security.

Best Answer

I am not sure I understand your question correctly, but from what I understand you need to get a Hive table into a DataFrame, and for that you do not need a JDBC connection; in your example links they are trying to connect to different databases (RDBMS), not to Hive.

Please see the approach below; using the Hive context you can get the table into a DataFrame.

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql.{DataFrame, SQLContext}

def main(args: Array[String]): Unit = {

  val sparkConf = new SparkConf().setAppName("APPName")
  val sc = new SparkContext(sparkConf)
  val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
  val sqlContext = new SQLContext(sc)

  // Run the query through the Hive context; the result is a DataFrame.
  val hive_df: DataFrame = hiveContext.sql("select * from schema.table")

  // Other way: reference the table directly.
  // val hive_df = hiveContext.table("SchemaName.TableName")

  // Below will print the first row.
  hive_df.first()
  // Count of rows in the DataFrame.
  hive_df.count()

}

If you really want to use a JDBC connection, I have the example below, which I used for an Oracle database; it might help you.
val oracle_data = sqlContext.load("jdbc", Map(
  "url" -> "jdbc:oracle:thin:username/password@//hostname:2134/databaseName",
  "dbtable" -> "(your query) tmp", // a subquery must be parenthesized and given an alias
  "driver" -> "oracle.jdbc.driver.OracleDriver"))
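As an aside, SQLContext.load was deprecated in Spark 1.4 in favor of the DataFrameReader API; the equivalent pyspark form, keeping the answer's placeholder values, would be a sketch like:

oracle_data = sqlContext.read.format("jdbc").options(
    url="jdbc:oracle:thin:username/password@//hostname:2134/databaseName",  # placeholder URL
    dbtable="(your query) tmp",                # parenthesized subquery with an alias
    driver="oracle.jdbc.driver.OracleDriver"
).load()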

Regarding apache-spark - sqlContext HiveDriver error on SQLException: Method not supported, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48087779/
