
java - Hierarchical Data Manipulation in Apache Spark


I have a Dataset in Spark (v2.1.1) with 3 columns (shown below) containing hierarchical data.

  • My objective is to assign an incremental number to each row based on the parent-child hierarchy. Graphically, the hierarchical data is a collection of trees.
  • As shown in the table below, the rows are already grouped by 'Global_ID'. Now I would like to generate the 'Value' column in incremental order, based on the hierarchy of the data in the 'Parent' and 'Child' columns.

Tabular representation (Value is the desired output):

+-----------+--------+-------+        +-----------+--------+-------+-------+
|      Current Dataset       |        |      Desired Dataset (Output)      |
+-----------+--------+-------+        +-----------+--------+-------+-------+
| Global_ID | Parent | Child |        | Global_ID | Parent | Child | Value |
+-----------+--------+-------+        +-----------+--------+-------+-------+
|       111 |    111 |   123 |        |       111 |    111 |   111 |     1 |
|       111 |    135 |   246 |        |       111 |    111 |   123 |     2 |
|       111 |    123 |   456 |        |       111 |    123 |   789 |     3 |
|       111 |    123 |   789 |        |       111 |    123 |   456 |     4 |
|       111 |    111 |   111 |        |       111 |    111 |   135 |     5 |
|       111 |    135 |   468 |        |       111 |    135 |   246 |     6 |
|       111 |    135 |   268 |        |       111 |    135 |   468 |     7 |
|       111 |    268 |   321 |        |       111 |    135 |   268 |     8 |
|       111 |    138 |   139 |        |       111 |    268 |   321 |     9 |
|       111 |    111 |   135 |        |       111 |    111 |   138 |    10 |
|       111 |    111 |   138 |        |       111 |    138 |   139 |    11 |
|       222 |    222 |   654 |        |       222 |    222 |   222 |    12 |
|       222 |    654 |   721 |        |       222 |    222 |   987 |    13 |
|       222 |    222 |   222 |        |       222 |    222 |   654 |    14 |
|       222 |    721 |   127 |        |       222 |    654 |   721 |    15 |
|       222 |    222 |   987 |        |       222 |    721 |   127 |    16 |
|       333 |    333 |   398 |        |       333 |    333 |   333 |    17 |
|       333 |    333 |   498 |        |       333 |    333 |   398 |    18 |
|       333 |    333 |   333 |        |       333 |    333 |   498 |    19 |
|       333 |    333 |   598 |        |       333 |    333 |   598 |    20 |
+-----------+--------+-------+        +-----------+--------+-------+-------+

Tree representation (the desired Value is shown next to each node):

                            +-----+
                         1  | 111 |
                            +--+--+
                               |
          +--------------------+------------------+
          |                    |                  |
       +--v--+              +--v--+            +--v--+
    2  | 123 |           5  | 135 |        10  | 138 |
       +--+--+              +--+--+            +--+--+
          |                    |                  |
     +----+----+       +-------+-------+          |
     |         |       |       |       |          |
  +--v--+   +--v--+ +--v--+ +--v--+ +--v--+    +--v--+
  | 789 |   | 456 | | 246 | | 468 | | 268 |    | 139 |
  +-----+   +-----+ +-----+ +-----+ +--+--+    +-----+
     3         4       6       7       8 |         11
                                      +--v--+
                                      | 321 |
                                      +-----+
                                         9

           +-----+
       12  | 222 |
           +--+--+
              |
       +------+------+
       |             |
    +--v--+       +--v--+
13  | 987 |   14  | 654 |
    +-----+       +--+--+
                     |
                  +--v--+
              15  | 721 |
                  +--+--+
                     |
                  +--v--+
              16  | 127 |
                  +-----+

            +-----+
        17  | 333 |
            +--+--+
               |
      +--------+--------+
      |        |        |
   +--v--+  +--v--+  +--v--+
   | 398 |  | 498 |  | 598 |
   +-----+  +-----+  +-----+
     18       19       20
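Put differently, the desired Value is a preorder (depth-first) walk of each tree, with a counter that keeps running across Global_ID groups. Below is a minimal, single-machine Scala sketch of that numbering rule, for illustration only (it is not a distributed solution, and since the example does not fully pin down the order among siblings, input order is used here):

object PreorderNumbering {
  def main(args: Array[String]): Unit = {
    // (Global_ID, Parent, Child) rows, as in the table above
    val rows = Seq(
      (111, 111, 123), (111, 135, 246), (111, 123, 456), (111, 123, 789),
      (111, 111, 111), (111, 135, 468), (111, 135, 268), (111, 268, 321),
      (111, 138, 139), (111, 111, 135), (111, 111, 138),
      (222, 222, 654), (222, 654, 721), (222, 222, 222), (222, 721, 127),
      (222, 222, 987),
      (333, 333, 398), (333, 333, 498), (333, 333, 333), (333, 333, 598))

    var value = 0L
    for (gid <- rows.map(_._1).distinct) {
      val edges = rows.filter(_._1 == gid)
      // child lists per parent, excluding the (root, root) self-loop row
      val children: Map[Int, Seq[Int]] = edges
        .filter(e => e._2 != e._3)
        .groupBy(_._2)
        .map { case (p, es) => p -> es.map(_._3) }
      // preorder walk: number the node first, then recurse into its children
      def visit(node: Int): Unit = {
        value += 1
        println(s"Global_ID=$gid node=$node Value=$value")
        children.getOrElse(node, Nil).foreach(visit)
      }
      visit(gid) // the root node carries the same id as its Global_ID
    }
  }
}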

Code snippet:

Dataset<Row> myDataset = spark
    .sql("select Global_ID, Parent, Child from RECORDS");

JavaPairRDD<Row, Long> finalDataset = myDataset
    .groupBy(new Column("Global_ID"))
    .agg(functions.sort_array(functions.collect_list(new Column("Parent").as("parent_col"))),
         functions.sort_array(functions.collect_list(new Column("Child").as("child_col"))))
    .orderBy(new Column("Global_ID"))
    .withColumn("vars", functions.explode(<Spark UDF>))
    .select(new Column("vars"), new Column("parent_col"), new Column("child_col"))
    .javaRDD().zipWithIndex();

// Sample UDF (TODO: actual implementation)
spark.udf().register("computeValue",
    (<Column Names>) -> <functionality & implementation>,
    DataTypes.<xxx>);

After a lot of research and going through many suggestions from blog posts, I tried the approaches below, but could not achieve the desired result for my scenario.

Tech Stack:

  • Apache Spark (v2.1.1)

  • Java 8

  • AWS EMR Cluster (Spark App Deployment)


Data Volume:

  • Approximately 20 million rows in the Dataset

Approaches Tried:

  1. Spark GraphX + GraphFrames

  2. Spark GraphX Pregel API (an illustrative Pregel sketch follows this list)

Any suggestions for alternatives to (or modifications of) the current approach would be really helpful, as I am totally lost in figuring out a solution for this use case.

Any help is greatly appreciated. Thank you!

Best Answer

Note: The solution below is in Scala for Spark; you can easily convert it to Java.

Check this out. I tried to do it using Spark SQL, so you can get an idea. The basic idea is to sort the child, parent, and global columns while aggregating and grouping them. Once grouped and sorted by Global_ID, explode the rest. You will get an ordered result table, to which you can later add the rank (Value) with zipWithIndex.

import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions._

val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

// Sample data: (Global_ID, Parent, Child)
val t = Seq((111, 111, 123), (111, 111, 111), (111, 123, 789), (111, 268, 321),
            (222, 222, 654), (222, 222, 222), (222, 721, 127),
            (333, 333, 398), (333, 333, 333), (333, 333, 598))
val ddd = sc.parallelize(t).toDF

// UDF that zips two arrays into one array of (parent, child) pairs
val zip = udf((xs: Seq[Int], ys: Seq[Int]) => xs zip ys)

val dd1 = ddd
  .groupBy($"_1")                                // group by Global_ID
  .agg(sort_array(collect_list($"_2")).as("v"),  // sorted list of parents
       sort_array(collect_list($"_3")).as("w"))  // sorted list of children
  .orderBy(asc("_1"))
  .withColumn("vars", explode(zip($"v", $"w")))  // one row per (parent, child) pair
  .select($"_1", $"vars._1", $"vars._2")
  .rdd.zipWithIndex                              // attach the running index (Value)

dd1.collect

Output:

res24: Array[(org.apache.spark.sql.Row, Long)] =
  Array(([111,111,111],0), ([111,111,123],1), ([111,123,321],2), ([111,268,789],3),
        ([222,222,127],4), ([222,222,222],5), ([222,721,654],6),
        ([333,333,333],7), ([333,333,398],8), ([333,333,598],9))
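If the indexed RDD needs to go back to a DataFrame with a 1-based Value column, one possible follow-up step is sketched below (the schema, column names, and the 1-based offset are assumptions, not part of the original answer):

import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

// Assumed target schema matching the question's desired output
val schema = StructType(Seq(
  StructField("Global_ID", IntegerType),
  StructField("Parent", IntegerType),
  StructField("Child", IntegerType),
  StructField("Value", LongType)))

val withValue = sqlContext.createDataFrame(
  dd1.map { case (row, idx) =>
    Row(row.getInt(0), row.getInt(1), row.getInt(2), idx + 1) }, // shift index to 1-based
  schema)

withValue.show()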

Regarding java - Hierarchical Data Manipulation in Apache Spark, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/47996055/
