
sql - How to avoid a cross join in Hive?

Reposted. Author: 行者123. Updated: 2023-12-02 14:13:58

I have two tables. One contains 1 million records, the other contains 20 million records.

    table 1
    value
    (1, 1)
    (2, 2)
    (3, 3)
    (4, 4)
    (5, 4)
    ....

    table 2
    value
    (55, 11)
    (33, 22)
    (44, 66)
    (22, 11)
    (11, 33)
    ....

I need to multiply the values from table 1 with the values from table 2, rank the results, and take the top 5 for each value in table 1. The result would look like this:

    value from table 1, top 5 for each value in table 1
    (1, 1), 1*44 + 1*66 = 110
    (1, 1), 1*55 + 1*11 = 66
    (1, 1), 1*33 + 1*22 = 55
    (1, 1), 1*11 + 1*33 = 44
    (1, 1), 1*22 + 1*11 = 33
    .....

I tried using a cross join in Hive, but it always fails because the tables are too large.

Best answer

First select the top 5 rows from table 2, then cross join them with the first table. This produces the same result as cross joining the two full tables and taking the top 5 afterwards, but in the first case far fewer rows need to be joined. A cross join with a small 5-row dataset is converted to a map join and executes about as fast as a full scan of table1.

See the demo below. The cross join is converted to a map join. Note the "Map Join Operator" in the plan and this warning: "Warning: Map Join MAPJOIN[19][bigTable=?] in task 'Map 1' is a cross product":

hive> set hive.cbo.enable=true;
hive> set hive.compute.query.using.stats=true;
hive> set hive.execution.engine=tez;
hive> set hive.auto.convert.join.noconditionaltask=false;
hive> set hive.auto.convert.join=true;
hive> set hive.vectorized.execution.enabled=true;
hive> set hive.vectorized.execution.reduce.enabled=true;
hive> set hive.vectorized.execution.mapjoin.native.enabled=true;
hive> set hive.vectorized.execution.mapjoin.native.fast.hashtable.enabled=true;
hive>
> explain
> with table1 as (
> select stack(5,1,2,3,4,5) as id
> ),
> table2 as
> (select t2.id
> from (select t2.id, dense_rank() over(order by id desc) rnk
> from (select stack(11,55,33,44,22,11,1,2,3,4,5,6) as id) t2
> )t2
> where t2.rnk<6
> )
> select t1.id, t1.id*t2.id
> from table1 t1
> cross join table2 t2;
Warning: Map Join MAPJOIN[19][bigTable=?] in task 'Map 1' is a cross product
OK
Plan not optimized by CBO.

Vertex dependency in root stage
Map 1 <- Reducer 3 (BROADCAST_EDGE)
Reducer 3 <- Map 2 (SIMPLE_EDGE)

Stage-0
Fetch Operator
limit:-1
Stage-1
Map 1
File Output Operator [FS_17]
compressed:false
Statistics:Num rows: 1 Data size: 26 Basic stats: COMPLETE Column stats: NONE
table:{"serde:":"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe","input format:":"org.apache.hadoop.mapred.TextInputFormat","output format:":"org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"}
Select Operator [SEL_16]
outputColumnNames:["_col0","_col1"]
Statistics:Num rows: 1 Data size: 26 Basic stats: COMPLETE Column stats: NONE
Map Join Operator [MAPJOIN_19]
| condition map:[{"":"Inner Join 0 to 1"}]
| HybridGraceHashJoin:true
| keys:{}
| outputColumnNames:["_col0","_col1"]
| Statistics:Num rows: 1 Data size: 26 Basic stats: COMPLETE Column stats: NONE
|<-Reducer 3 [BROADCAST_EDGE]
| Reduce Output Operator [RS_14]
| sort order:
| Statistics:Num rows: 1 Data size: 0 Basic stats: PARTIAL Column stats: COMPLETE
| value expressions:_col0 (type: int)
| Select Operator [SEL_9]
| outputColumnNames:["_col0"]
| Statistics:Num rows: 1 Data size: 0 Basic stats: PARTIAL Column stats: COMPLETE
| Filter Operator [FIL_18]
| predicate:(dense_rank_window_0 < 6) (type: boolean)
| Statistics:Num rows: 1 Data size: 0 Basic stats: PARTIAL Column stats: COMPLETE
| PTF Operator [PTF_8]
| Function definitions:[{"Input definition":{"type:":"WINDOWING"}},{"partition by:":"0","name:":"windowingtablefunction","order by:":"_col0(DESC)"}]
| Statistics:Num rows: 1 Data size: 0 Basic stats: PARTIAL Column stats: COMPLETE
| Select Operator [SEL_7]
| | outputColumnNames:["_col0"]
| | Statistics:Num rows: 1 Data size: 0 Basic stats: PARTIAL Column stats: COMPLETE
| |<-Map 2 [SIMPLE_EDGE]
| Reduce Output Operator [RS_6]
| key expressions:0 (type: int), col0 (type: int)
| Map-reduce partition columns:0 (type: int)
| sort order:+-
| Statistics:Num rows: 1 Data size: 48 Basic stats: COMPLETE Column stats: COMPLETE
| UDTF Operator [UDTF_5]
| function name:stack
| Statistics:Num rows: 1 Data size: 48 Basic stats: COMPLETE Column stats: COMPLETE
| Select Operator [SEL_4]
| outputColumnNames:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11"]
| Statistics:Num rows: 1 Data size: 48 Basic stats: COMPLETE Column stats: COMPLETE
| TableScan [TS_3]
| alias:_dummy_table
| Statistics:Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: COMPLETE
|<-UDTF Operator [UDTF_2]
function name:stack
Statistics:Num rows: 1 Data size: 24 Basic stats: COMPLETE Column stats: COMPLETE
Select Operator [SEL_1]
outputColumnNames:["_col0","_col1","_col2","_col3","_col4","_col5"]
Statistics:Num rows: 1 Data size: 24 Basic stats: COMPLETE Column stats: COMPLETE
TableScan [TS_0]
alias:_dummy_table
Statistics:Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: COMPLETE

Time taken: 0.199 seconds, Fetched: 66 row(s)

Just replace the stack calls in my demo with your tables.
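Applied to the tables from the question, the query might look like the sketch below. It follows the answer's pattern: pre-filter the 20M-row table down to its top 5 rows in a subquery, then cross join that tiny result with table 1 so Hive can convert the join to a map join. The table names (t1_tab, t2_tab), column names (a, b, x, y), and the choice to rank table 2 by x are assumptions for illustration, not from the original post:

```sql
-- Sketch only: table/column names and the ranking criterion are assumed.
-- Step 1: shrink table 2 (20M rows) to its top 5 rows.
-- Step 2: cross join the 5-row result with table 1 (becomes a map join).
with top5 as (
  select x, y
  from (
    select x, y,
           dense_rank() over (order by x desc) as rnk
    from t2_tab
  ) t
  where rnk < 6
)
select t1.a, t1.b, t5.x, t5.y,
       t1.a * t5.x + t1.b * t5.y as score
from t1_tab t1
cross join top5 t5;
```

Note that pre-filtering table 2 is only equivalent to ranking after the full cross join when the top 5 of table 2 under your chosen ordering always yields the top 5 scores for every row of table 1, as in the demo; check that this holds for your actual scoring formula.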

Regarding "sql - How to avoid a cross join in Hive?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53184889/
