
apache-spark - Caching a DataFrame efficiently in Spark SQL


The use case is to self-join a table multiple times.

// Hive Table
val network_file = spark.sqlContext.sql("SELECT * FROM test.network_file")

// Cache
network_file.cache()

network_file.createOrReplaceTempView("network_design")

Now the following query performs multiple self-joins:

val res = spark.sqlContext.sql("""select 
one.sourcehub as source,
one.mappedhub as first_leg,
two.mappedhub as second_leg,
one.destinationhub as dest
from
(select * from network_design) one JOIN
(select * from network_design) two JOIN
(select * from network_design) three
ON (two.sourcehub = one.mappedhub )
AND (three.sourcehub = two.mappedhub)
AND (one.destinationhub = two.destinationhub )
AND (two.destinationhub = three.destinationhub)
group by source, first_leg, second_leg, dest
""")

The problem is that the physical plan of the query above suggests the table is read three times:

== Physical Plan ==
*HashAggregate(keys=[sourcehub#83, mappedhub#85, mappedhub#109, destinationhub#84], functions=[])
+- Exchange hashpartitioning(sourcehub#83, mappedhub#85, mappedhub#109, destinationhub#84, 200)
   +- *HashAggregate(keys=[sourcehub#83, mappedhub#85, mappedhub#109, destinationhub#84], functions=[])
      +- *Project [sourcehub#83, destinationhub#84, mappedhub#85, mappedhub#109]
         +- *BroadcastHashJoin [mappedhub#109, destinationhub#108], [sourcehub#110, destinationhub#111], Inner, BuildRight
            :- *Project [sourcehub#83, destinationhub#84, mappedhub#85, destinationhub#108, mappedhub#109]
            :  +- *BroadcastHashJoin [mappedhub#85, destinationhub#84], [sourcehub#107, destinationhub#108], Inner, BuildRight
            :     :- *Filter (isnotnull(destinationhub#84) && isnotnull(mappedhub#85))
            :     :  +- InMemoryTableScan [sourcehub#83, destinationhub#84, mappedhub#85], [isnotnull(destinationhub#84), isnotnull(mappedhub#85)]
            :     :        +- InMemoryRelation [sourcehub#83, destinationhub#84, mappedhub#85], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
            :     :              +- HiveTableScan [sourcehub#0, destinationhub#1, mappedhub#2], HiveTableRelation `test`.`network_file`, org.apache.hadoop.hive.ql.io.orc.OrcSerde, [sourcehub#0, destinationhub#1, mappedhub#2]
            :     +- BroadcastExchange HashedRelationBroadcastMode(List(input[0, string, false], input[1, string, false]))
            :        +- *Filter ((isnotnull(sourcehub#107) && isnotnull(destinationhub#108)) && isnotnull(mappedhub#109))
            :           +- InMemoryTableScan [sourcehub#107, destinationhub#108, mappedhub#109], [isnotnull(sourcehub#107), isnotnull(destinationhub#108), isnotnull(mappedhub#109)]
            :                 +- InMemoryRelation [sourcehub#107, destinationhub#108, mappedhub#109], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
            :                       +- HiveTableScan [sourcehub#0, destinationhub#1, mappedhub#2], HiveTableRelation `test`.`network_file`, org.apache.hadoop.hive.ql.io.orc.OrcSerde, [sourcehub#0, destinationhub#1, mappedhub#2]
            +- BroadcastExchange HashedRelationBroadcastMode(List(input[0, string, false], input[1, string, false]))
               +- *Filter (isnotnull(sourcehub#110) && isnotnull(destinationhub#111))
                  +- InMemoryTableScan [sourcehub#110, destinationhub#111], [isnotnull(sourcehub#110), isnotnull(destinationhub#111)]
                        +- InMemoryRelation [sourcehub#110, destinationhub#111, mappedhub#112], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
                              +- HiveTableScan [sourcehub#0, destinationhub#1, mappedhub#2], HiveTableRelation `test`.`network_file`, org.apache.hadoop.hive.ql.io.orc.OrcSerde, [sourcehub#0, destinationhub#1, mappedhub#2]

Shouldn't Spark cache the table once rather than reading it multiple times? How can we cache the table effectively in Spark for these self-join cases?

Spark version: 2.2. Hive ORC is the downstream store.
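
For reference, a minimal sketch of requesting the cache on the registered view itself (assuming Spark 2.2 and the view name used above; CACHE TABLE, spark.catalog.cacheTable, and spark.catalog.isCached are standard Spark APIs):

// Cache the temporary view through SQL / the catalog instead of DataFrame.cache().
// Assumes the view "network_design" has already been registered as above.
spark.sql("CACHE TABLE network_design") // eager: materializes the cache immediately
// spark.catalog.cacheTable("network_design") // equivalent, but lazy until the first action
println(spark.catalog.isCached("network_design")) // should print: true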

Best Answer

This sequence of statements ignores the DataFrame that should be cached:

network_file.cache() // the DataFrame returned here is not used at all
network_file.createOrReplaceTempView("network_design") // does not have the cached DataFrame in its lineage

You should either capture the DataFrame returned by cache() and register the view on it:

val cached_network_file = network_file.cache()
cached_network_file.createOrReplaceTempView("network_design")

Or:

network_file.cache().createOrReplaceTempView("network_design")
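
Either way, a quick illustrative check (reusing res and the view name from the question): once the view maps to the cached DataFrame, the physical plan of the self-join should contain InMemoryTableScan / InMemoryRelation nodes, and the catalog should report the view as cached.

// Illustrative verification, reusing names from the question.
println(spark.catalog.isCached("network_design")) // expected: true
res.explain() // look for InMemoryTableScan nodes in the printed plan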

Regarding apache-spark - Caching a DataFrame efficiently in Spark SQL, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/49893268/
