
java - How to cache a partitioned dataset and use it in multiple queries?


I have the following code:

dataset
.distinct()
.repartition(400)
.persist(StorageLevel.MEMORY_ONLY())
.createOrReplaceTempView("temp");
sqlContext.sql("select * from temp");

This is just an example; I need to run about 100 queries against the same entity, which is why I persist it. I assumed that when I query temp it would query the cached entity, but when I check the query details in the Spark UI I see the repartition being executed for every query against temp, so the dataset is read and the whole DAG is executed for each query.

---------------- EDIT ----------------

Here are the graphs and the logical plans of the queries; to me they look the same. My expectation was that the first query would perform all the required steps and that subsequent queries would then access the in-memory view directly.

I have also checked sqlContext.isCached("temp") and it prints true.

Query execution graph


First query plan

== Parsed Logical Plan ==
'Project [11 AS tenant_id#4958, cube_purchase_details AS cube_name#4959, purchase_activity AS field#4960, 'purchase_activity AS value#4961]
+- 'UnresolvedRelation `filter_temp`

== Analyzed Logical Plan ==
tenant_id: string, cube_name: string, field: string, value: string
Project [11 AS tenant_id#4958, cube_purchase_details AS cube_name#4959, purchase_activity AS field#4960, purchase_activity#4062 AS value#4961]
+- SubqueryAlias filter_temp, `filter_temp`
+- Aggregate [purchase_activity#4062], [purchase_activity#4062]
+- Project [purchase_activity#4062]
+- Repartition 400, true
+- GlobalLimit 10000
+- LocalLimit 10000
+- Project [purchase_activity#4062, top_shop_1#4069, top_brand_1#4072, top_brand_2#4073, top_brand_3#4074, top_brand_4#4075, top_brand_5#4076, top_manufacturer_1#4077, top_manufacturer_2#4078, top_manufacturer_3#4079, top_manufacturer_4#4080, top_manufacturer_5#4081, top_product_category_1#4082, top_product_category_2#4083, top_product_category_3#4084, top_product_category_4#4085, top_product_category_5#4086, top_salesperson_1#4093, top_salesperson_2#4094, top_salesperson_3#4095, age_category#4109, inactive#4115, activity_id#4144, activity_name#4145, ... 67 more fields]
+- Relation[purchase_detail_id#3918,tenant_id#3919,purchase_detail_date#3920,purchase_detail_type#3921,user_id#3922,user_domain#3923,purchase_id#3924,purchase_date#3925,is_purchase#3926,year#3927,quarter#3928,month#3929,week#3930,weekday#3931,day#3932,former_purchase_id#3933,pd_shop_id#3934,customer_id#3935,loyalty_id#3936,quantity#3937,unit_price#3938,total_price#3939,discount#3940,currency#3941,... 219 more fields] parquet

Other query plans

== Parsed Logical Plan ==
'Project [11 AS tenant_id#6816, cube_purchase_details AS cube_name#6817, top_brand_1 AS field#6818, 'top_brand_1 AS value#6819]
+- 'UnresolvedRelation `filter_temp`

== Analyzed Logical Plan ==
tenant_id: string, cube_name: string, field: string, value: string
Project [11 AS tenant_id#6816, cube_purchase_details AS cube_name#6817, top_brand_1 AS field#6818, top_brand_1#4072 AS value#6819]
+- SubqueryAlias filter_temp, `filter_temp`
+- Aggregate [top_brand_1#4072], [top_brand_1#4072]
+- Project [top_brand_1#4072]
+- Repartition 400, true
+- GlobalLimit 10000
+- LocalLimit 10000
+- Project [purchase_activity#4062, top_shop_1#4069, top_brand_1#4072, top_brand_2#4073, top_brand_3#4074, top_brand_4#4075, top_brand_5#4076, top_manufacturer_1#4077, top_manufacturer_2#4078, top_manufacturer_3#4079, top_manufacturer_4#4080, top_manufacturer_5#4081, top_product_category_1#4082, top_product_category_2#4083, top_product_category_3#4084, top_product_category_4#4085, top_product_category_5#4086, top_salesperson_1#4093, top_salesperson_2#4094, top_salesperson_3#4095, age_category#4109, inactive#4115, activity_id#4144, activity_name#4145, ... 67 more fields]
+- Relation[purchase_detail_id#3918,tenant_id#3919,purchase_detail_date#3920,purchase_detail_type#3921,user_id#3922,user_domain#3923,purchase_id#3924,purchase_date#3925,is_purchase#3926,year#3927,quarter#3928,month#3929,week#3930,weekday#3931,day#3932,former_purchase_id#3933,pd_shop_id#3934,customer_id#3935,loyalty_id#3936,quantity#3937,unit_price#3938,total_price#3939,discount#3940,currency#3941,... 219 more fields] parquet

Here is a screenshot of the Spark UI Storage page, in case it helps.


How can I access this persisted dataset from spark-sql?

Best Answer

How can I access this persisted dataset from spark-sql?

Spark SQL reuses cached queries for you as long as you reference them in other queries. Use the explain operator to confirm this, as well as the Web UI's "Details for Job" page (under the "Jobs" tab).


A Dataset's persist and cache operators are lazy (as opposed to SQL's CACHE TABLE statement), so after the following code runs, nothing is actually cached yet.

dataset
.distinct()
.repartition(400)
.persist(StorageLevel.MEMORY_ONLY())
.createOrReplaceTempView("temp");

persist(StorageLevel.MEMORY_ONLY()) merely hints to Spark SQL that the relation should be cached the next time an action is executed. This is why a common pattern is to run a head or count action right after persist to trigger the caching, as sketched below.
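A minimal Java sketch of that pattern, reusing the dataset and sqlContext variables from the question (the count() call and the deduped name are illustrative additions, not part of the original code):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.storage.StorageLevel;

// Same pipeline as above, but an action is run right after persist so the
// relation is materialized once, before the ~100 SQL queries are issued.
Dataset<Row> deduped = dataset
        .distinct()
        .repartition(400)
        .persist(StorageLevel.MEMORY_ONLY());

deduped.count();                         // action triggers the actual caching
deduped.createOrReplaceTempView("temp"); // register the now-cached relation

sqlContext.sql("select * from temp");    // should scan the InMemoryRelation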

Once the table has been persisted, you can use the Web UI's "Storage" tab to see its corresponding cached RDD entry.
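Alternatively, if you would rather stay in SQL, the eager CACHE TABLE statement mentioned above removes the need for a separate triggering action; a short sketch, again assuming the temp view from the question:

// CACHE TABLE is eager, so this statement itself materializes the view in memory.
sqlContext.sql("CACHE TABLE temp");

// Subsequent queries against temp then read from the in-memory relation.
sqlContext.sql("select * from temp");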

Also, executing cache on an already cached dataset gives you a warning.

scala> q.distinct.repartition(5).cache.head
17/05/16 10:57:49 WARN CacheManager: Asked to cache already cached data.
res4: org.apache.spark.sql.Row = [2]

scala> q.cache()
17/05/16 10:59:54 WARN CacheManager: Asked to cache already cached data.
res6: q.type = [key: bigint]

I think you expect that after caching the execution plan should somehow be shorter, i.e. the steps that have already been executed should somehow be removed from the plan. Correct?

If so, your understanding is partially correct. It is only partial because, although parts of the query plan still appear in the execution plan, once the query runs the cached table (and the corresponding stages) should be fetched from the cache rather than recomputed.

You can check this on the "Details for Job" page of the jobs involved in the query. The cached part is marked with a small green circle in the execution DAG.

Execution Plan with Cached Part

Quoting from Understanding your Apache Spark Application Through Visualization:

Second, one of the RDDs is cached in the first stage (denoted by the green highlight). Since the enclosing operation involves reading from HDFS, caching this RDD means future computations on this RDD can access at least a subset of the original file from memory instead of from HDFS.

You can also inspect the query, and see which data sources are cached, using the explain operator.

scala> q.explain
== Physical Plan ==
InMemoryTableScan [key#21L]
+- InMemoryRelation [key#21L], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
+- *Project [(id#18L % 5) AS key#21L]
+- *Range (0, 1000, step=1, splits=8)

Regarding "java - How to cache a partitioned dataset and use it in multiple queries?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43939226/
