
apache-spark - I altered a table via HiveQL, but then showing the table with spark-sql does not work. Error: Path does not exist

Reposted · Author: 行者123 · Updated: 2023-12-01 00:32:47

I altered my table via HiveQL:

"ALTER TABLE new_law_area_2 RENAME TO law_area"


Then I tried to display my table via spark-sql:

"SELECT * FROM law_area LIMIT 10"


However, it does not work... I get this error:

org.spark_project.guava.util.concurrent.UncheckedExecutionException: org.apache.spark.sql.AnalysisException

18/04/18 14:17:47 ERROR SparkSQLDriver: Failed in [select * from law_area limit 10]
org.spark_project.guava.util.concurrent.UncheckedExecutionException: org.apache.spark.sql.AnalysisException: Path does not exist: hdfs://dmlab/apps/hive/warehouse/dimension.db/new_law_area_2;
    at org.spark_project.guava.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4882)
    at org.spark_project.guava.cache.LocalCache$LocalLoadingCache.apply(LocalCache.java:4898)
    at org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:128)
    at org.apache.spark.sql.hive.HiveSessionCatalog.lookupRelation(HiveSessionCatalog.scala:70)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupTableFromCatalog(Analyzer.scala:457)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:479)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:464)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:61)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:61)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:60)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:58)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:58)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:307)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:188)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:305)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:58)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:58)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:58)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:307)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:188)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:305)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:58)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:58)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:58)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:307)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:188)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:305)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:58)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:464)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:454)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82)
    at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
    at scala.collection.immutable.List.foldLeft(List.scala:84)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:82)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:74)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:74)
    at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:69)
    at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:67)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:50)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:62)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:335)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:247)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:745)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
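The key line in the trace is the AnalysisException: Spark is still resolving `law_area` to the old directory `hdfs://dmlab/apps/hive/warehouse/dimension.db/new_law_area_2`. A likely explanation is that the rename updated the table's name in the Hive metastore but left its stored LOCATION pointing at the old path (this can happen for external tables, or with certain Hive/Spark version combinations). A toy sketch of the mismatch, using a plain Python dict as a stand-in for the metastore (all names here are hypothetical, not a real Hive API):

```python
# Toy model of the stale-location problem: the metastore maps a table
# name to metadata that includes an HDFS location string.
def rename_table(metastore, old_name, new_name):
    # RENAME changes the key (the table name) but, in this scenario,
    # the stored location is left untouched -- mirroring the failure mode
    # where the HDFS path in the metastore is not rewritten.
    metastore[new_name] = metastore.pop(old_name)

def resolve_path(metastore, name):
    # What Spark's catalog lookup effectively does before reading files.
    return metastore[name]["location"]

metastore = {
    "new_law_area_2": {
        "location": "hdfs://dmlab/apps/hive/warehouse/dimension.db/new_law_area_2"
    }
}
rename_table(metastore, "new_law_area_2", "law_area")

# The lookup by the NEW name still yields the OLD path, which no longer
# exists on HDFS -> "Path does not exist".
assert resolve_path(metastore, "law_area").endswith("new_law_area_2")

# The fix is to point the table at the directory that actually holds the data,
# which is what ALTER TABLE ... SET LOCATION does.
metastore["law_area"]["location"] = (
    "hdfs://dmlab/apps/hive/warehouse/dimension.db/law_area"
)
assert resolve_path(metastore, "law_area").endswith("/law_area")
```

This is only an illustration of why the query fails despite the rename succeeding; the actual repair happens in the metastore, as shown in the answer below.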


How can I solve this problem?

Please... none of my tables work because of this...

I want to use spark-sql.

Best Answer

Please try:

alter table law_area set location 'hdfs://dmlab/apps/hive/warehouse/dimension.db/law_area'
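To check whether this is really the problem before and after applying the fix, you can inspect the location the metastore has stored for the table (standard HiveQL/Spark SQL; the `dimension` database name is taken from the error message above):

```sql
-- Look at the "Location:" line in the output; if it still ends in
-- new_law_area_2, the metastore entry is stale and needs SET LOCATION.
DESCRIBE FORMATTED dimension.law_area;
```

If the location is stale, the `ALTER TABLE ... SET LOCATION` statement above rewrites it to the directory that actually exists on HDFS, after which the SELECT should resolve correctly.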

Regarding "apache-spark - I altered a table via HiveQL, but then showing the table with spark-sql does not work. Error: Path does not exist", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43464543/
