
hadoop - Hive DML transactions (UPDATE/DELETE) do not work with subqueries

Reposted. Author: 行者123. Updated: 2023-12-02 19:58:33

I know Hive/Hadoop is not designed for UPDATE/DELETE, but my requirement is to update table person20 based on the data in table person21. With the progress of Hive and ORC, ACID is supported, but it still looks immature.

$ hive --version 

Hive 1.1.0-cdh5.6.0

Below are the detailed steps I executed to test the update logic.
CREATE TABLE person20(
persid int,
lastname string,
firstname string)
CLUSTERED BY (
persid)
INTO 1 BUCKETS
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
'hdfs://hostname.com:8020/user/hive/warehouse/person20'
TBLPROPERTIES (
'COLUMN_STATS_ACCURATE'='true',
'numFiles'='3',
'numRows'='2',
'rawDataSize'='348',
'totalSize'='1730',
'transactional'='true',
'transient_lastDdlTime'='1489668385')

Insert statement:
INSERT INTO TABLE person20 VALUES (0,'PP','B'),(2,'X','Y');

Select statement:
set hive.cli.print.header=true;

select * from person20;

persid lastname firstname
2 X Y
0 PP B

I have another table that is a copy of person20, namely person21:
CREATE TABLE person21(
persid int,
lastname string,
firstname string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
'hdfs://hostname.com:8020/user/hive/warehouse/person21'
TBLPROPERTIES (
'COLUMN_STATS_ACCURATE'='true',
'numFiles'='1',
'numRows'='2',
'rawDataSize'='11',
'totalSize'='13',
'transient_lastDdlTime'='1489668344')

Insert statement (into person21; the values match the select output below):
INSERT INTO TABLE person21 VALUES (0,'SS','B'),(2,'X1','Y');

Select statement:
select * from person21;

persid lastname firstname
2 X1 Y
0 SS B

I want to implement MERGE logic:
Merge into  person20 p20 USING person21 p21
ON (p20.persid=p21.persid)
WHEN MATCHED THEN
UPDATE set p20.lastname=p21.lastname
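On Hive 2.2+ (where MERGE is supported and the target is a full ACID table), the statement above would take roughly this form; the WHEN NOT MATCHED branch is an assumption about how unmatched source rows should be handled:

```sql
MERGE INTO person20 p20
USING person21 p21
ON p20.persid = p21.persid
WHEN MATCHED THEN
  UPDATE SET lastname = p21.lastname   -- target columns are not alias-qualified in SET
WHEN NOT MATCHED THEN
  INSERT VALUES (p21.persid, p21.lastname, p21.firstname);
```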
  • However, MERGE does not work with my Hive version, 1.1.0-cdh5.6.0.
    This feature only becomes available starting with Hive 2.2.

  • The other option is an UPDATE with a correlated subquery:
    hive -e "set hive.auto.convert.join.noconditionaltask.size = 10000000;
    set hive.support.concurrency = true;
    set hive.enforce.bucketing = true;
    set hive.exec.dynamic.partition.mode = nonstrict;
    set hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
    set hive.compactor.initiator.on = true;
    set hive.compactor.worker.threads = 1;
    UPDATE person20 SET lastname = (select lastname from person21 where person21.lastname=person20.lastname);"
  • This statement gives the following error:

    Logging initialized using configuration in jar:file:/usr/lib/hive/lib/hive-common-1.1.0-cdh5.6.0.jar!/hive-log4j.properties
    NoViableAltException(224@[400:1: precedenceEqualExpression : ( ... )])
        at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)
        at org.antlr.runtime.DFA.predict(DFA.java:116)
        at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceEqualExpression(HiveParser_IdentifiersParser.java:8651)
        ...
        at org.apache.hadoop.hive.ql.parse.HiveParser.columnAssignmentClause(HiveParser.java:44206)
        at org.apache.hadoop.hive.ql.parse.HiveParser.setColumnsClause(HiveParser.java:44271)
        at org.apache.hadoop.hive.ql.parse.HiveParser.updateStatement(HiveParser.java:44417)
        at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1616)
        ...
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
    FAILED: ParseException line 1:33 cannot recognize input near 'select' 'lastname' 'from' in expression specification



    I think it does not support subqueries. The same statement works with a constant:
    hive -e "set hive.auto.convert.join.noconditionaltask.size = 10000000;
    set hive.support.concurrency = true;
    set hive.enforce.bucketing = true;
    set hive.exec.dynamic.partition.mode = nonstrict;
    set hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
    set hive.compactor.initiator.on = true;
    set hive.compactor.worker.threads = 1;
    UPDATE person20 SET lastname = 'PP' WHERE persid = 0;"

    - This statement updated the record successfully.

    Could you help me find the best strategy for performing DML/merge operations in Hive?

    Best Answer

    You can do it by brute force:

  • Re-create table person20, but as a non-ACID table, partitioned on a dummy column,
    with a single partition allocated for "dummy"
  • Populate person20 (and person21)
  • Create a work table tmpperson20 with exactly the same structure as person20, including the same "dummy" partition
  • INSERT INTO tmpperson20 PARTITION (dummy='dummy') SELECT p20.persid, p21.lastname, ... FROM person20 p20 JOIN person21 p21 ON p20.persid=p21.persid
  • INSERT INTO tmpperson20 PARTITION (dummy='dummy') SELECT * FROM person20 p20 WHERE NOT EXISTS (select p21.persid FROM person21 p21 WHERE p20.persid=p21.persid)
  • ALTER TABLE person20 DROP PARTITION (dummy='dummy')
  • ALTER TABLE person20 EXCHANGE PARTITION (dummy='dummy') WITH TABLE tmpperson20
  • Now you can drop tmpperson20

  • But it may be trickier with ACID tables because of the way they are stored.
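Put together, the brute-force steps above might be sketched as follows. This is only a sketch: the rebuilt table name person20_part and the dummy partition column are assumptions for illustration, and columns are listed explicitly so the partition column is not dragged along by SELECT *:

```sql
-- Non-ACID rebuild of person20, partitioned on a dummy column
CREATE TABLE person20_part (persid int, lastname string, firstname string)
PARTITIONED BY (dummy string) STORED AS ORC;

INSERT INTO person20_part PARTITION (dummy='dummy')
SELECT persid, lastname, firstname FROM person20;

-- Work table with exactly the same layout
CREATE TABLE tmpperson20 LIKE person20_part;

-- Matched rows take their lastname from person21
INSERT INTO tmpperson20 PARTITION (dummy='dummy')
SELECT p20.persid, p21.lastname, p20.firstname
FROM person20_part p20 JOIN person21 p21 ON p20.persid = p21.persid;

-- Unmatched rows are carried over unchanged
INSERT INTO tmpperson20 PARTITION (dummy='dummy')
SELECT p20.persid, p20.lastname, p20.firstname
FROM person20_part p20
WHERE NOT EXISTS (SELECT 1 FROM person21 p21 WHERE p20.persid = p21.persid);

-- Swap the rebuilt partition into place, then clean up
ALTER TABLE person20_part DROP PARTITION (dummy='dummy');
ALTER TABLE person20_part EXCHANGE PARTITION (dummy='dummy') WITH TABLE tmpperson20;
DROP TABLE tmpperson20;
```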

    You could also try a procedural language that iterates over a cursor and applies a single UPDATE per row in a loop. Very inefficient for bulk updates...

    Hive 2.x ships with the HPL/SQL utility, which can supposedly be installed on top of Hive 1.x, but I never had a chance to try it. Its Oracle dialect feels odd next to Hive...!
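If HPL/SQL were available, the cursor-loop idea might look roughly like this. This is an unverified sketch (the answer itself never tried it on Hive 1.x); the loop syntax follows the HPL/SQL documentation, and the quoting of the literal is an assumption:

```sql
-- HPL/SQL sketch: one single-row UPDATE per person21 row
FOR rec IN (SELECT persid, lastname FROM person21)
LOOP
  EXECUTE IMMEDIATE 'UPDATE person20 SET lastname = ''' || rec.lastname ||
                    ''' WHERE persid = ' || rec.persid;
END LOOP;
```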

    Alternatively, you could develop some custom Java code that uses a JDBC ResultSet and PreparedStatement in a loop.
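The JDBC-loop idea can be sketched as follows. Only the statement-building part is shown as runnable Java (so it works without a cluster); in real code each generated UPDATE would be executed through a Hive JDBC PreparedStatement with bound parameters rather than string concatenation. The table and column names follow the question; everything else is an assumption:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class HiveRowUpdater {

    // Build one single-row UPDATE per persid present in both tables whose
    // lastname differs, mirroring:
    //   UPDATE person20 SET lastname = ... WHERE persid = ...
    static List<String> buildUpdates(Map<Integer, String> targetLastnames,
                                     Map<Integer, String> sourceLastnames) {
        List<String> updates = new ArrayList<>();
        // TreeMap gives a deterministic (sorted-by-persid) order
        for (Map.Entry<Integer, String> e : new TreeMap<>(targetLastnames).entrySet()) {
            String newLastname = sourceLastnames.get(e.getKey());
            if (newLastname != null && !newLastname.equals(e.getValue())) {
                updates.add("UPDATE person20 SET lastname = '" + newLastname
                        + "' WHERE persid = " + e.getKey());
            }
        }
        return updates;
    }

    public static void main(String[] args) {
        // person20 (target) and person21 (source) rows from the question
        Map<Integer, String> person20 = Map.of(0, "PP", 2, "X");
        Map<Integer, String> person21 = Map.of(0, "SS", 2, "X1");
        for (String sql : buildUpdates(person20, person21)) {
            System.out.println(sql);
            // In real code: connection.prepareStatement(...).executeUpdate();
        }
    }
}
```

In production the loop body would bind `rec.lastname` and `rec.persid` as PreparedStatement parameters instead of concatenating them, both for safety and to let Hive reuse the compiled plan.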

    Regarding "hadoop - Hive DML transactions (UPDATE/DELETE) do not work with subqueries", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42844875/
