arrays - Check that all elements of an array are present in another array


I have a Spark dataframe df1:

id     transactions
1      [1, 2, 3, 5]
2      [1, 2, 3, 6]
3      [1, 2, 9, 8]
4      [1, 2, 5, 6]

root
 |-- id: int (nullable = true)
 |-- transactions: array (nullable = false)
 |    |-- element: int (containsNull = true)

None

I have a Spark dataframe df2:

items   cost
[1]     1.0
[2]     1.0
[2, 1]  2.0
[6, 1]  2.0

root
 |-- items: array (nullable = false)
 |    |-- element: int (containsNull = true)
 |-- cost: double (nullable = true)

None

I want to check whether all the array elements of the items column are present in the transactions column.

The first transactions row ([1, 2, 3, 5]) contains [1], [2] and [2, 1] from the items column, so I need to sum up their corresponding costs: 1.0 + 1.0 + 2.0 = 4.0.
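In plain Python, the scoring I have in mind looks roughly like the sketch below (just an illustration of the intent on the toy data above, not a Spark solution):

# Plain-Python illustration of the desired scoring (not distributed)
items_cost = [([1], 1.0), ([2], 1.0), ([2, 1], 2.0), ([6, 1], 2.0)]

def score(transactions):
    # Sum the cost of every items list fully contained in transactions
    return sum(cost for items, cost in items_cost
               if set(items).issubset(set(transactions)))

print(score([1, 2, 3, 5]))  # 1.0 + 1.0 + 2.0 = 4.0
print(score([1, 2, 3, 6]))  # 1.0 + 1.0 + 2.0 + 2.0 = 6.0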

The output I want is:

id     transactions    score
1      [1, 2, 3, 5]    4.0
2      [1, 2, 3, 6]    6.0
3      [1, 2, 9, 8]    4.0
4      [1, 2, 5, 6]    6.0

I tried using a loop with collect()/toLocalIterator, but it does not seem efficient; I will have large data.
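(The driver-side loop I tried looked roughly like the hypothetical sketch below; collect() pulls all of df2 to the driver and the scoring runs row by row in Python, which is why it does not scale.)

# Hypothetical reconstruction of the driver-side loop (inefficient on big data)
items_cost = df2.select("items", "cost").collect()  # all of df2 on the driver

scores = []
for row in df1.select("id", "transactions").toLocalIterator():
    transactions = set(row["transactions"])
    row_score = sum(r["cost"] for r in items_cost
                    if set(r["items"]).issubset(transactions))
    scores.append((row["id"], row_score))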

I thought creating a UDF like the one below would solve the problem, but it throws an error.

from pyspark.sql.functions import udf

def containsAll(x, y):
    result = all(elem in x for elem in y)

    if result:
        print("Yes, transactions contains all items")
    else:
        print("No")

contains_udf = udf(containsAll)
dataFrame.withColumn("result", contains_udf(df2.items, df1.transactions)).show()

Is there any other way to do this?

Best Answer

A udf that works before 2.4 (note that it has to return something):

from pyspark.sql.functions import udf

@udf("boolean")
def contains_all(x, y):
    if x is not None and y is not None:
        return set(y).issubset(set(x))

In 2.4 or later no udf is required:

from pyspark.sql.functions import array_intersect, size

def contains_all(x, y):
    return size(array_intersect(x, y)) == size(y)
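A quick way to sanity check the expression (a sketch, assuming Spark 2.4+; the column aliases are only illustrative) is to evaluate it on literal arrays:

from pyspark.sql.functions import array, lit

# [2, 1] is a subset of [1, 2, 3, 5]; [6, 1] is not
spark.range(1).select(
    contains_all(array(lit(1), lit(2), lit(3), lit(5)),
                 array(lit(2), lit(1))).alias("subset_2_1"),
    contains_all(array(lit(1), lit(2), lit(3), lit(5)),
                 array(lit(6), lit(1))).alias("subset_6_1")
).show()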

Usage:

from pyspark.sql.functions import col, sum as sum_, when

df1 = spark.createDataFrame(
    [(1, [1, 2, 3, 5]), (2, [1, 2, 3, 6]), (3, [1, 2, 9, 8]), (4, [1, 2, 5, 6])],
    ("id", "transactions")
)

df2 = spark.createDataFrame(
    [([1], 1.0), ([2], 1.0), ([2, 1], 2.0), ([6, 1], 2.0)],
    ("items", "cost")
)


(df1
    .crossJoin(df2).groupBy("id", "transactions")
    .agg(sum_(when(
        contains_all("transactions", "items"), col("cost")
    )).alias("score"))
    .show())

Result:

+---+------------+-----+
| id|transactions|score|
+---+------------+-----+
|  1|[1, 2, 3, 5]|  4.0|
|  4|[1, 2, 5, 6]|  6.0|
|  2|[1, 2, 3, 6]|  6.0|
|  3|[1, 2, 9, 8]|  4.0|
+---+------------+-----+

If df2 is small, it may be preferable to use it as a local variable:

items = sc.broadcast([
    (set(items), cost) for items, cost in df2.select("items", "cost").collect()
])

def score(y):
    @udf("double")
    def _(x):
        if x is not None:
            transactions = set(x)
            return sum(
                cost for items, cost in y.value
                if items.issubset(transactions))
    return _


df1.withColumn("score", score(items)("transactions")).show()
+---+------------+-----+
| id|transactions|score|
+---+------------+-----+
|  1|[1, 2, 3, 5]|  4.0|
|  2|[1, 2, 3, 6]|  6.0|
|  3|[1, 2, 9, 8]|  4.0|
|  4|[1, 2, 5, 6]|  6.0|
+---+------------+-----+

Finally, you can explode and join:

from pyspark.sql.functions import explode, monotonically_increasing_id

costs = (df1
    # Explode transactions
    .select("id", explode("transactions").alias("item"))
    .join(
        df2
            # Add an id so we can later use it to identify the source row
            .withColumn("_id", monotonically_increasing_id())
            # Explode items
            .select(
                "_id", explode("items").alias("item"),
                # We'll need the size of the original items later
                size("items").alias("size"), "cost"),
        ["item"])
    # Count matches per (_id, id) group
    .groupBy("_id", "id", "size", "cost")
    .count()
    # Compute the score
    .groupBy("id")
    .agg(sum_(when(col("size") == col("count"), col("cost"))).alias("score")))

costs.show()
+---+-----+
| id|score|
+---+-----+
|  1|  4.0|
|  3|  4.0|
|  2|  6.0|
|  4|  6.0|
+---+-----+

and then join the result back with the original df1:

df1.join(costs, ["id"])

However, this is not exactly a straightforward solution and it requires multiple shuffles. It may still be preferable to a Cartesian product (crossJoin), but that depends on the actual data.

Regarding arrays - checking that all elements of an array are present in another array, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54894115/
