
scala - Spark connects to local Hive instead of the remote one


I am using the Spring Framework to create an API that queries some of my tables in Hadoop. The code I use:

  println("-----------------------------------------------------------------before )
val spark = SparkSession
.builder()
.appName("API")
.master("local[*])
.enableHiveSupport()
.getOrCreate()
println("--------------------------------------------------------------------Session was created")

I am using Scala 2.11.6 and Spark 2.2.0. When I use spark-shell, I connect to the remote cluster.
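One thing worth double-checking first is the build setup: enableHiveSupport() fails at session creation unless the spark-hive module is on the classpath. Judging by the DataNucleus and HiveClientImpl lines in the log further down, that part is evidently in place here, but for completeness, a minimal sketch of the sbt dependencies (versions assumed from the ones quoted above):

    // build.sbt -- assumed versions, matching Scala 2.11 / Spark 2.2.0 above
    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-sql"  % "2.2.0",
      "org.apache.spark" %% "spark-hive" % "2.2.0"  // required for enableHiveSupport()
    )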

I don't get any errors in the logs, but I can see that a local Hive repository is created:

     [           main] o.a.h.hive.metastore.MetaStoreDirectSql  : Using direct SQL, underlying DB is DERBY
[           main] o.a.hadoop.hive.ql.session.SessionState : Created local directory: C:/Users/..../.../Local/Temp/..._resources
2018-05-10 16:32:32.556 INFO 16148 --- [ main] o.a.hadoop.hive.ql.session.SessionState : Created HDFS directory: /tmp/hive/myuser/....

I am trying to connect to a remote Cloudera cluster. I copied the XML files (hive-site.xml, hdfs-site.xml, core-site.xml, yarn-site.xml) both into my project's conf directory and into the $SPARK_CONF directory. I added the SPARK_HOME path to the PATH variable and pointed the HADOOP_HOME variable at the winutils location.
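It is worth verifying that those XML files are actually visible at runtime: when a Spring application is launched from an IDE, the project's conf directory is often not on the runtime classpath, and Spark then silently falls back to an embedded Derby metastore. A minimal sanity check (just a sketch; it only assumes the files are named as above):

    // If either of these prints null, the conf directory is not on the
    // runtime classpath, and enableHiveSupport() will create a local
    // Derby metastore instead of talking to the cluster.
    println(getClass.getClassLoader.getResource("hive-site.xml"))
    println(getClass.getClassLoader.getResource("core-site.xml"))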

What else can I do?

The log is long, but here are a few messages that might give you a hint:

-----------------------------------------------------------------ENV=local[*]
2018-05-10 16:32:16.930 WARN 16148 --- [ main] org.apache.hadoop.util.NativeCodeLoader : Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[ main] org.apache.spark.util.Utils : Successfully started service 'SparkUI' on port 4040.
[ main] o.s.jetty.server.handler.ContextHandler : Started o.s.j.s.ServletContextHandler@13ee97af{/stages/pool/json,null,AVAILABLE,@Spark}
[ main] org.apache.spark.ui.SparkUI : Bound SparkUI to 0.0.0.0, and started at http://192.168.56.1:4040
[ main] o.apache.spark.sql.internal.SharedState : URL.setURLStreamHandlerFactory failed to set FsUrlStreamHandlerFactory
[ main] DataNucleus.Persistence : Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
[ main] DataNucleus.Datastore : The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
[ main] DataNucleus.Query : Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
[ main] o.a.h.hive.metastore.MetaStoreDirectSql : Using direct SQL, underlying DB is DERBY
[ main] o.a.hadoop.hive.metastore.ObjectStore : Failed to get database global_temp, returning NoSuchObjectException
[ main] o.a.hadoop.hive.ql.session.SessionState : Created local directory: C:/Users/myuser/AppData/Local/Temp/1fa7a82b-fe17-4795-8973-212010634cd1_resources
[ main] o.a.hadoop.hive.ql.session.SessionState : Created HDFS directory: /tmp/hive/myuser/1fa7a82b-fe17-4795-8973-212010634cd1
[ main] o.a.hadoop.hive.ql.session.SessionState : Created local directory: C:/Users/myuser/AppData/Local/Temp/myuser/fileasdasdsa
[ main] o.a.hadoop.hive.ql.session.SessionState : Created HDFS directory: /tmp/hive/myuser/asdsadsa/_tmp_space.db
[ main] o.a.s.sql.hive.client.HiveClientImpl : Warehouse location for Hive client (version 1.2.1) is file:/C:/Users/myuser/SpringScalaAPI/spark-warehouse
[ main] o.a.s.s.e.s.s.StateStoreCoordinatorRef : Registered StateStoreCoordinator endpoint
--------------------------------------------------------------------Session was created

Honestly, this isn't the first time I've dealt with this kind of error; last time I was using the Play framework. What are the concrete steps to take in this case? Which variables actually need to be configured, and which ones don't matter?

Best Answer

With Spark 2 you can try something like this:

    val ss = SparkSession
      .builder()
      .appName("Hive example")
      .config("hive.metastore.uris", "thrift://localhost:9083")
      .enableHiveSupport()
      .getOrCreate()

Note the hive.metastore.uris property: change localhost to point to your sandbox or cluster.

Once ss is initialized, you can read a table like this:

val df = ss.read.table("db_name.table_name")
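
To confirm that the session is really talking to the remote metastore rather than a local Derby one, a quick check (a sketch, nothing cluster-specific assumed) is to list the databases the catalog can see:

    // If this only lists "default", you are most likely still on the
    // embedded Derby metastore rather than the remote one.
    ss.catalog.listDatabases().show(truncate = false)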

Hope this helps. Cheers.

Regarding "scala - Spark connects to local Hive instead of the remote one", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50275223/
