
apache-spark - Spark Scala - How to group DataFrame rows and apply a complex function to each group?


I am trying to solve this seemingly simple problem and I am exhausted by it; I hope someone can help me out. I have a dataframe that looks like this:

+------------+------------+
|  Category  | Product_ID |
+------------+------------+
| a          | product 1  |
| a          | product 2  |
| a          | product 3  |
| a          | product 1  |
| a          | product 4  |
| b          | product 5  |
| b          | product 6  |
+------------+------------+

How do I group these rows by Category and apply a complicated function to each group in Scala? Maybe something like this:

val result = df.groupBy("Category").apply(myComplexFunction)

This myComplexFunction should produce a table like the following for each category and upload the pairwise similarities to a Hive table or save them to HDFS:
+------------+-----------+-----------+-----------+
|            | Product_1 | Product_2 | Product_3 |
+------------+-----------+-----------+-----------+
| Product_1  |    1.0    |    0.1    |    0.8    |
| Product_2  |    0.1    |    1.0    |    0.5    |
| Product_3  |    0.8    |    0.5    |    1.0    |
+------------+-----------+-----------+-----------+

Here is the function I want to apply (it just computes item-item cosine similarity within each category):

// imports assumed by the function below (Spark 1.x APIs: HiveContext, registerTempTable)
import org.apache.spark.ml.feature.{CountVectorizer, CountVectorizerModel}
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.mllib.linalg.distributed.{MatrixEntry, RowMatrix}
import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.functions.lit
import org.apache.spark.sql.types._

def myComplexFunction(context_data: DataFrame, country_name: String,
                      context_id: String, table_name_correlations: String,
                      context_layer: String, context_index: String): Boolean = {

  val unique_identifier       = country_name + "_" + context_layer + "_" + context_index
  val temp_table_vocabulary   = "temp_vocabulary_" + unique_identifier
  val temp_table_similarities = "temp_similarities_" + unique_identifier
  val temp_table_correlations = "temp_correlations_" + unique_identifier

  //context.count()
  // fit a CountVectorizerModel from the corpus
  //println("Creating sparse incidence matrix")
  val cvModel: CountVectorizerModel = new CountVectorizer().setInputCol("words").setOutputCol("features").fit(context_data)
  val incidence = cvModel.transform(context_data)

  // ========================================================================================
  // create dataframe of mapping from indices into the item id
  //println("Creating vocabulary")
  val vocabulary_rdd = sc.parallelize(cvModel.vocabulary)
  val rows_vocabulary_rdd = vocabulary_rdd.zipWithIndex.map { case (s, i) => Row(s, i) }
  val vocabulary_field1 = StructField("Product_ID", StringType, true)
  val vocabulary_field2 = StructField("Product_Index", LongType, true)
  val schema_vocabulary = StructType(Seq(vocabulary_field1, vocabulary_field2))
  val df_vocabulary = hiveContext.createDataFrame(rows_vocabulary_rdd, schema_vocabulary)

  // ========================================================================================
  //println("Computing similarity matrix")
  val myvectors = incidence.select("features").rdd.map(r => r(0).asInstanceOf[Vector])
  val mat: RowMatrix = new RowMatrix(myvectors)
  val sims = mat.columnSimilarities(0.0)

  // ========================================================================================
  // Convert records of the MatrixEntry RDD into Rows
  //println("Extracting paired similarities")
  val rowRdd = sims.entries.map { case MatrixEntry(i, j, v) => Row(i, j, v) }

  // ========================================================================================
  // create dataframe schema
  //println("Creating similarity dataframe")
  val field1 = StructField("Product_Index", LongType, true)
  val field2 = StructField("Neighbor_Index", LongType, true)
  val field3 = StructField("Similarity_Score", DoubleType, true)
  val schema_similarities = StructType(Seq(field1, field2, field3))

  // create the dataframe
  val df_similarities = hiveContext.createDataFrame(rowRdd, schema_similarities)

  // ========================================================================================
  //println("Register vocabulary and correlations as spark temp tables")
  df_vocabulary.registerTempTable(temp_table_vocabulary)
  df_similarities.registerTempTable(temp_table_similarities)

  // ========================================================================================
  //println("Extracting Product_ID")
  val temp_corrs = hiveContext.sql(
    s"SELECT T1.Product_ID, T2.Neighbor_ID, T1.Similarity_Score " +
    s"FROM " +
    s"(SELECT Product_ID, Neighbor_Index, Similarity_Score " +
    s"FROM $temp_table_similarities LEFT JOIN $temp_table_vocabulary " +
    s"WHERE $temp_table_similarities.Product_Index = $temp_table_vocabulary.Product_Index) AS T1 " +
    s"LEFT JOIN " +
    s"(SELECT Product_ID AS Neighbor_ID, Product_Index as Neighbor_Index FROM $temp_table_vocabulary) AS T2 " +
    s"ON " +
    s"T1.Neighbor_Index = T2.Neighbor_Index")

  // ========================================================================================
  val context_corrs = temp_corrs
    .withColumn("Context_Layer", lit(context_layer))
    .withColumn("Context_ID", lit(context_id))
    .withColumn("Country", lit(country_name))
  context_corrs.registerTempTable(temp_table_correlations)

  // ========================================================================================
  hiveContext.sql(s"INSERT INTO TABLE $table_name_correlations SELECT * FROM $temp_table_correlations")

  // ========================================================================================
  // clean up environment
  //println("Cleaning up temp tables")
  hiveContext.dropTempTable(temp_table_correlations)
  hiveContext.dropTempTable(temp_table_similarities)
  hiveContext.dropTempTable(temp_table_vocabulary)

  true
}

val partitioned = tokenized.repartition(tokenized("context_id"))
val context_counts = partitioned.mapPartitions()
//val context_counts = model_code_ids.zipWithIndex.map{case (model_code_id, context_index) => compute_similarity(tokenized.filter(tokenized("context_id") === model_code_id), country_name, model_code_id.asInstanceOf[String], table_name_correlations, context_layer, context_index.toString)}

I have already tried the following:

val category_ids = df.select("Category").distinct.collect()
val result = category_ids.map(category_id => myComplexFunction(df.filter(df("Category") <=> category_id)))

I do not know why, but this approach runs sequentially rather than in parallel.

Best Answer

Cosine similarity is not a complex function and can be expressed with standard SQL aggregations. Consider the following example:

val df = Seq(
  ("feat1", 1.0, "item1"),
  ("feat2", 1.0, "item1"),
  ("feat6", 1.0, "item1"),
  ("feat1", 1.0, "item2"),
  ("feat3", 1.0, "item2"),
  ("feat4", 1.0, "item3"),
  ("feat5", 1.0, "item3"),
  ("feat1", 1.0, "item4"),
  ("feat6", 1.0, "item4")
).toDF("feature", "value", "item")

where feature is a feature identifier, value is the corresponding value, item is an object identifier, and each (feature, item) pair has only one corresponding value.

Cosine similarity is defined as:

similarity(A, B) = (A · B) / (||A|| * ||B||) = Σi Ai*Bi / (sqrt(Σi Ai²) * sqrt(Σi Bi²))
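
As a quick hand check against the example data above (worked out by hand rather than taken from a Spark run): item1 has features {feat1, feat2, feat6} and item4 has {feat1, feat6}, so their dot product is 2.0, their norms are √3 and √2, and the cosine similarity is 2 / (√3 * √2) ≈ 0.8165, which matches the item1/item4 entry in the result shown further below.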

The numerator can be computed as:

val numer = df.as("this").withColumnRenamed("item", "this")
  .join(df.as("other").withColumnRenamed("item", "other"), Seq("feature"))
  .where($"this" < $"other")
  .groupBy($"this", $"other")
  .agg(sum($"this.value" * $"other.value").alias("dot"))

and the norms used in the denominator as:

import org.apache.spark.sql.functions.sqrt

val norms = df.groupBy($"item").agg(sqrt(sum($"value" * $"value")).alias("norm"))

and combined together:

val cosine = ($"dot" / ($"this_norm.norm" * $"other_norm.norm")).as("cosine") 

val similarities = numer
  .join(norms.alias("this_norm").withColumnRenamed("item", "this"), Seq("this"))
  .join(norms.alias("other_norm").withColumnRenamed("item", "other"), Seq("other"))
  .select($"this", $"other", cosine)

The result represents the non-zero entries of the upper triangular similarity matrix, ignoring the diagonal (which is trivial):

+-----+-----+-------------------+
| this|other|             cosine|
+-----+-----+-------------------+
|item1|item4| 0.8164965809277259|
|item1|item2|0.40824829046386296|
|item2|item4| 0.4999999999999999|
+-----+-----+-------------------+
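
If the full symmetric matrix is needed, for example to persist every pairwise similarity to a Hive table as in the question, the lower triangle and the diagonal can be added explicitly. A minimal sketch using the similarities and df names defined above (unionAll is the Spark 1.x API; this snippet is not part of the original answer):

import org.apache.spark.sql.functions.lit

// mirror the upper triangle to obtain the lower one
val bothTriangles = similarities
  .unionAll(similarities.select($"other".as("this"), $"this".as("other"), $"cosine"))

// add the trivial diagonal: every item has similarity 1.0 with itself
val fullMatrix = bothTriangles
  .unionAll(df.select($"item".as("this"), $"item".as("other")).distinct
    .withColumn("cosine", lit(1.0)))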

The aggregation-based result should be equivalent to the following mllib computation:

import org.apache.spark.sql.functions.{array, col}
import org.apache.spark.mllib.linalg.distributed.{IndexedRow, IndexedRowMatrix}
import org.apache.spark.mllib.linalg.Vectors

// one row per item, one column per feature, missing entries filled with 0.0
val pivoted = df.groupBy("item").pivot("feature").sum()
  .na.fill(0.0)
  .orderBy("item")

val mat = new IndexedRowMatrix(pivoted
  .select(array(pivoted.columns.tail.map(col): _*))
  .rdd
  .zipWithIndex
  .map {
    case (row, idx) =>
      new IndexedRow(idx, Vectors.dense(row.getSeq[Double](0).toArray))
  })

// columnSimilarities compares columns, so transpose first to put items in the columns
mat.toCoordinateMatrix.transpose
  .toIndexedRowMatrix.columnSimilarities
  .toBlockMatrix.toLocalMatrix

0.0  0.408248290463863  0.0  0.816496580927726
0.0  0.0                0.0  0.4999999999999999
0.0  0.0                0.0  0.0
0.0  0.0                0.0  0.0
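
Coming back to the grouping part of the original question: with this aggregation-based formulation there is no need to split the DataFrame and invoke a function once per category; the category can simply be carried along as an additional grouping and join key, and Spark parallelizes the whole computation at once. A rough sketch under the assumption that df also carries a Category column (hypothetical column name, not part of the original answer; imports as above):

// hypothetical input: df has columns (Category, feature, value, item)
val numerByCategory = df.as("this").withColumnRenamed("item", "this")
  .join(df.as("other").withColumnRenamed("item", "other"), Seq("Category", "feature"))
  .where($"this" < $"other")
  .groupBy($"Category", $"this", $"other")
  .agg(sum($"this.value" * $"other.value").alias("dot"))

val normsByCategory = df.groupBy($"Category", $"item")
  .agg(sqrt(sum($"value" * $"value")).alias("norm"))

// the final join then uses Seq("Category", "this") and Seq("Category", "other")
// as join keys instead of Seq("this") and Seq("other")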

Regarding your code:
  • Execution is sequential because your code operates on a local (collected) collection.
  • myComplexFunction cannot be distributed any further, because it itself uses distributed data structures and contexts.

A similar question can be found on Stack Overflow: https://stackoverflow.com/questions/40681794/
