
hadoop - How to query data from a specific bucket in Hive

Reposted · Author: 行者123 · Updated: 2023-12-02 20:19:24

I created a bucketed table in Hive with the following schema:


CREATE TABLE Songs_data_bucket (
Song_id STRING,
artist_id STRING,
album_name STRING,
song_views INT,
song_rating FLOAT)
CLUSTERED BY(song_rating)
INTO 4 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
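
A side note (not from the original post): CLUSTERED BY only records the bucketing metadata; rows are actually distributed across the 4 bucket files when data is written with an INSERT ... SELECT (a plain LOAD DATA copies the input file as-is, and on Hive releases before 2.0 you may also need SET hive.enforce.bucketing=true). A sketch of such a load, assuming a hypothetical staging table songs_raw with the same columns:

-- songs_raw is an assumed staging table name, not part of the original post
INSERT OVERWRITE TABLE Songs_data_bucket
SELECT song_id, artist_id, album_name, song_views, song_rating
FROM songs_raw;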


Here the bucketing is done on the song_rating column and the whole data set is split into 4 buckets. Now, when I try to inspect the contents of just the first bucket with the command

SELECT * FROM Songs_data_bucket TABLESAMPLE(BUCKET 0 out of 4 on song_rating )


I get the following error:

14:40:46.835 [cf87ec7a-8910-453c-92ea-4aa98426a8f7 main] ERROR org.apache.hadoop.hive.ql.parse.CalcitePlanner - CBO failed, skipping CBO.
org.apache.hadoop.hive.ql.optimizer.calcite.CalciteSemanticException: Table Sample specified for songs_data_bucket. Currently we don't support Table Sample clauses in CBO, turn off cbo for queries on tableSamples.
at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genTableLogicalPlan(CalcitePlanner.java:1660) ~[hive-exec-2.1.0.jar:2.1.0]
at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genLogicalPlan(CalcitePlanner.java:3116) ~[hive-exec-2.1.0.jar:2.1.0]
at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:939) ~[hive-exec-2.1.0.jar:2.1.0]
at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:893) ~[hive-exec-2.1.0.jar:2.1.0]
at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:113) ~[calcite-core-1.6.0.jar:1.6.0]
at org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:969) ~[calcite-core-1.6.0.jar:1.6.0]
at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:149) ~[calcite-core-1.6.0.jar:1.6.0]
at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106) ~[calcite-core-1.6.0.jar:1.6.0]
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:712) ~[hive-exec-2.1.0.jar:2.1.0]
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:280) [hive-exec-2.1.0.jar:2.1.0]
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10755) [hive-exec-2.1.0.jar:2.1.0]
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:239) [hive-exec-2.1.0.jar:2.1.0]
at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:250) [hive-exec-2.1.0.jar:2.1.0]
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:437) [hive-exec-2.1.0.jar:2.1.0]
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:329) [hive-exec-2.1.0.jar:2.1.0]
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1158) [hive-exec-2.1.0.jar:2.1.0]
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1145) [hive-exec-2.1.0.jar:2.1.0]
at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:184) [hive-service-2.1.0.jar:2.1.0]
at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:269) [hive-service-2.1.0.jar:2.1.0]
at org.apache.hive.service.cli.operation.Operation.run(Operation.java:324) [hive-service-2.1.0.jar:2.1.0]
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:460) [hive-service-2.1.0.jar:2.1.0]
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:447) [hive-service-2.1.0.jar:2.1.0]
at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:294) [hive-service-2.1.0.jar:2.1.0]
at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:497) [hive-service-2.1.0.jar:2.1.0]
at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_231]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_231]
at org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1426) [hive-jdbc-2.1.0.jar:2.1.0]
at com.sun.proxy.$Proxy23.ExecuteStatement(Unknown Source) [?:?]
at org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:308) [hive-jdbc-2.1.0.jar:2.1.0]
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:250) [hive-jdbc-2.1.0.jar:2.1.0]
at org.apache.hive.beeline.Commands.executeInternal(Commands.java:977) [hive-beeline-2.1.0.jar:2.1.0]
at org.apache.hive.beeline.Commands.execute(Commands.java:1148) [hive-beeline-2.1.0.jar:2.1.0]
at org.apache.hive.beeline.Commands.sql(Commands.java:1063) [hive-beeline-2.1.0.jar:2.1.0]
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1137) [hive-beeline-2.1.0.jar:2.1.0]
at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:965) [hive-beeline-2.1.0.jar:2.1.0]
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:875) [hive-beeline-2.1.0.jar:2.1.0]
at org.apache.hive.beeline.cli.HiveCli.runWithArgs(HiveCli.java:35) [hive-beeline-2.1.0.jar:2.1.0]
at org.apache.hive.beeline.cli.HiveCli.main(HiveCli.java:29) [hive-beeline-2.1.0.jar:2.1.0]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_231]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_231]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_231]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_231]
at org.apache.hadoop.util.RunJar.run(RunJar.java:244) [hadoop-common-2.10.0.jar:?]
at org.apache.hadoop.util.RunJar.main(RunJar.java:158) [hadoop-common-2.10.0.jar:?]
OK
No rows selected (0.491 seconds)


From the log it looks like Hive no longer supports Table Sample clauses here. Is there another way to query the data of a specific bucket instead of using the command above, or am I missing something in the command?

Any help with this query would be appreciated...

Best Answer

After reading the log carefully, I found that setting the property hive.cbo.enable to false solved my problem. It looks like the Hive team has added some optimizations (the cost-based optimizer), but in any case this resolved my query.
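
A minimal sketch of the workaround, assuming the table from the question. Note that Hive's TABLESAMPLE numbers buckets starting at 1, so the first bucket is addressed as BUCKET 1 rather than BUCKET 0:

-- workaround from the answer: turn off the cost-based optimizer for this session
SET hive.cbo.enable=false;

-- read only the first of the 4 buckets, hashed on song_rating
SELECT *
FROM Songs_data_bucket
TABLESAMPLE(BUCKET 1 OUT OF 4 ON song_rating);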

For "hadoop - How to query data from a specific bucket in Hive", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/61383522/
