
python - Sparse vectors in PySpark

Reposted · Author: 太空狗 · Updated: 2023-10-30 00:27:28

I'd like to find an efficient way to build sparse vectors in PySpark using DataFrames.

Suppose the transactional input is:

df = spark.createDataFrame([
    (0, "a"),
    (1, "a"),
    (1, "b"),
    (1, "c"),
    (2, "a"),
    (2, "b"),
    (2, "b"),
    (2, "b"),
    (2, "c"),
    (0, "a"),
    (1, "b"),
    (1, "b"),
    (2, "cc"),
    (3, "a"),
    (4, "a"),
    (5, "c")
], ["id", "category"])

+---+--------+
| id|category|
+---+--------+
|  0|       a|
|  1|       a|
|  1|       b|
|  1|       c|
|  2|       a|
|  2|       b|
|  2|       b|
|  2|       b|
|  2|       c|
|  0|       a|
|  1|       b|
|  1|       b|
|  2|      cc|
|  3|       a|
|  4|       a|
|  5|       c|
+---+--------+

Summarizing the counts:

df.groupBy(df["id"], df["category"]).count().show()
+---+--------+-----+
| id|category|count|
+---+--------+-----+
|  1|       b|    3|
|  1|       a|    1|
|  1|       c|    1|
|  2|      cc|    1|
|  2|       c|    1|
|  2|       a|    1|
|  2|       b|    3|
|  0|       a|    2|
|  3|       a|    1|
|  4|       a|    1|
|  5|       c|    1|
+---+--------+-----+
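As a sanity check, the grouped counts can be reproduced without Spark with collections.Counter; `rows` below simply repeats the (id, category) pairs from the DataFrame, and the names are illustrative:

```python
from collections import Counter

# The same (id, category) pairs as in the DataFrame above.
rows = [
    (0, "a"), (1, "a"), (1, "b"), (1, "c"),
    (2, "a"), (2, "b"), (2, "b"), (2, "b"),
    (2, "c"), (0, "a"), (1, "b"), (1, "b"),
    (2, "cc"), (3, "a"), (4, "a"), (5, "c"),
]

# Counter over the pairs is the equivalent of
# df.groupBy("id", "category").count().
counts = Counter(rows)

print(counts[(1, "b")])  # 3
print(counts[(0, "a")])  # 2
```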

My goal is to get, per id, output like this:

+---+-----------------------------------------------+
|id |feature                                        |
+---+-----------------------------------------------+
|2  |SparseVector({a: 1.0, b: 3.0, c: 1.0, cc: 1.0})|
+---+-----------------------------------------------+

Can you point me in the right direction? Doing this with MapReduce in Java seems easier to me.

Best Answer

This can be done quite easily with pivot and VectorAssembler. Replace the aggregation with pivot:

pivoted = df.groupBy("id").pivot("category").count().na.fill(0)

and assemble:

from pyspark.ml.feature import VectorAssembler

# Keep every pivoted column except the grouping key.
input_cols = [x for x in pivoted.columns if x != "id"]

result = (VectorAssembler(inputCols=input_cols, outputCol="features")
          .transform(pivoted)
          .select("id", "features"))

The result looks as follows; for each row the assembler picks the more efficient representation (sparse or dense) depending on its sparsity:

+---+-----------------+
|id |features         |
+---+-----------------+
|0  |(4,[0],[2.0])    |
|5  |(4,[2],[1.0])    |
|1  |[1.0,3.0,1.0,0.0]|
|3  |(4,[0],[1.0])    |
|2  |[1.0,3.0,1.0,1.0]|
|4  |(4,[0],[1.0])    |
+---+-----------------+
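Setting Spark aside for a moment, the pivot-then-assemble step can be sketched in plain Python: one zero-filled count slot per category, in sorted column order, mirroring what pivot("category").count().na.fill(0) produces. All names here (`rows`, `vectors`, ...) are illustrative, not Spark API:

```python
from collections import Counter

# The same (id, category) pairs as in the DataFrame above.
rows = [
    (0, "a"), (1, "a"), (1, "b"), (1, "c"),
    (2, "a"), (2, "b"), (2, "b"), (2, "b"),
    (2, "c"), (0, "a"), (1, "b"), (1, "b"),
    (2, "cc"), (3, "a"), (4, "a"), (5, "c"),
]

counts = Counter(rows)
categories = sorted({c for _, c in rows})  # ['a', 'b', 'c', 'cc']

# One dense count vector per id, zero-filled like na.fill(0);
# this plays the role of the assembled "features" column.
vectors = {
    i: [float(counts[(i, c)]) for c in categories]
    for i in sorted({i for i, _ in rows})
}

print(vectors[2])  # [1.0, 3.0, 1.0, 1.0]
```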

But of course you can still convert it to a single (sparse) representation:

from pyspark.sql.functions import udf
from pyspark.ml.linalg import SparseVector, VectorUDT
import numpy as np

def to_sparse(c):
    """Column expression converting dense vectors to sparse ones."""
    def to_sparse_(v):
        if isinstance(v, SparseVector):
            return v
        vs = v.toArray()
        nonzero = np.nonzero(vs)[0]
        return SparseVector(v.size, nonzero, vs[nonzero])
    return udf(to_sparse_, VectorUDT())(c)

result.withColumn("features", to_sparse("features")).show(truncate=False)

+---+-------------------------------+
|id |features                       |
+---+-------------------------------+
|0  |(4,[0],[2.0])                  |
|5  |(4,[2],[1.0])                  |
|1  |(4,[0,1,2],[1.0,3.0,1.0])      |
|3  |(4,[0],[1.0])                  |
|2  |(4,[0,1,2,3],[1.0,3.0,1.0,1.0])|
|4  |(4,[0],[1.0])                  |
+---+-------------------------------+
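The core of to_sparse_ is a NumPy idiom: np.nonzero finds the indices of the non-zero entries, and indexing the array with them keeps only the matching values. Seen in isolation, with illustrative values:

```python
import numpy as np

# A dense vector like the one id 1 gets from the assembler.
dense = np.array([1.0, 3.0, 1.0, 0.0])

nonzero = np.nonzero(dense)[0]  # indices of non-zero entries
values = dense[nonzero]         # the corresponding values

print(nonzero.tolist())  # [0, 1, 2]
print(values.tolist())   # [1.0, 3.0, 1.0]
```

These (indices, values) pairs, together with the vector's length, are exactly what the SparseVector constructor expects.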

Regarding python - sparse vectors in PySpark, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43809587/
