
json - Preserve the raw JSON as a column in a Spark DataFrame on read/load?

Reprinted · Author: 行者123 · Updated: 2023-12-04 15:17:47

I have been looking for a way to add the original (raw JSON) data as a column when reading data into a Spark DataFrame. I have one way to do it with a join, but I am hoping there is a way to do it in a single operation with Spark 2.2.x+.

Example data:

{"team":"Golden Knights","colors":"gold,red,black","origin":"Las Vegas"}
{"team":"Sharks","origin": "San Jose", "eliminated":"true"}
{"team":"Wild","colors":"red,green,gold","origin":"Minnesota"}

When executing:

val logs = sc.textFile("/Users/vgk/data/tiny.json") // example data file
spark.read.json(logs).show

Predictably, we get:

+--------------+----------+--------------------+--------------+
| colors|eliminated| origin| team|
+--------------+----------+--------------------+--------------+
|gold,red,black| null| Las Vegas|Golden Knights|
| null| true| San Jose| Sharks|
|red,green,gold| null| Minnesota| Wild|
|red,white,blue| false|District of Columbia| Capitals|
+--------------+----------+--------------------+--------------+

I would like to have the above on initial load, but with the raw JSON data as an additional column. For example (truncated raw values):

+--------------+----------+--------------------+--------------+--------------------+
| colors|eliminated| origin| team| value|
+--------------+----------+--------------------+--------------+--------------------+
|red,white,blue| false|District of Columbia| Capitals|{"colors":"red,wh...|
|gold,red,black| null| Las Vegas|Golden Knights|{"colors":"gold,r...|
| null| true| San Jose| Sharks|{"eliminated":"tr...|
|red,green,gold| null| Minnesota| Wild|{"colors":"red,gr...|
+--------------+----------+--------------------+--------------+--------------------+

A non-ideal solution involves a join:

import org.apache.spark.sql.functions.monotonically_increasing_id

val logs = sc.textFile("/Users/vgk/data/tiny.json")
val df = spark.read.json(logs).withColumn("uniqueID", monotonically_increasing_id())
val rawdf = df.toJSON.withColumn("uniqueID", monotonically_increasing_id())
df.join(rawdf, "uniqueID")

This results in the same DataFrame as above, but with an added uniqueID column. Also, the JSON is re-rendered from the DataFrame, so it is not necessarily the "raw" data. In practice the two are equal, but for my use case the actual raw data is preferable.
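As an aside, if regenerated (rather than truly raw) JSON is acceptable, a single-pass sketch using to_json over a struct of all columns avoids the join entirely; this is an assumption on my part, not from the original question:

```scala
import org.apache.spark.sql.functions.{col, struct, to_json}

// Single-pass alternative: serialize each row back to JSON as a new column.
// Caveat: this is *regenerated* JSON (normalized field order, nulls dropped),
// not the raw input text, so it may differ byte-for-byte from the source file.
val df = spark.read.json("/Users/vgk/data/tiny.json")
val withValue = df.withColumn("value", to_json(struct(df.columns.map(col): _*)))
withValue.show(false)
```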

Does anyone know of a solution that captures the raw JSON data as an additional column at load time?

Best Answer

If you have the schema of the incoming data, you can use from_json with that schema to extract all the fields while keeping the raw field as-is:

import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types.{StringType, StructField, StructType}
import spark.implicits._

val logs = spark.sparkContext.textFile(path) // example data file

val schema = StructType(
  StructField("team", StringType, true) ::
  StructField("colors", StringType, true) ::
  StructField("eliminated", StringType, true) ::
  StructField("origin", StringType, true) :: Nil
)

logs.toDF("values")
  .withColumn("json", from_json($"values", schema))
  .select("values", "json.*")
  .show(false)

Output:
+------------------------------------------------------------------------+--------------+--------------+----------+---------+
|values |team |colors |eliminated|origin |
+------------------------------------------------------------------------+--------------+--------------+----------+---------+
|{"team":"Golden Knights","colors":"gold,red,black","origin":"Las Vegas"}|Golden Knights|gold,red,black|null |Las Vegas|
|{"team":"Sharks","origin": "San Jose", "eliminated":"true"} |Sharks |null |true |San Jose |
|{"team":"Wild","colors":"red,green,gold","origin":"Minnesota"} |Wild |red,green,gold|null |Minnesota|
+------------------------------------------------------------------------+--------------+--------------+----------+---------+
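If hand-writing the StructType is a burden, a variant (my sketch, at the cost of an extra pass over the data) is to let Spark infer the schema from the file itself and then reuse it with from_json:

```scala
import org.apache.spark.sql.functions.from_json
import spark.implicits._

// Read the raw lines as a Dataset[String] (Spark 2.x),
// infer the schema once, then parse while keeping the raw text column.
val raw = spark.read.textFile("/Users/vgk/data/tiny.json")
val schema = spark.read.json(raw).schema // extra pass: schema inference

raw.toDF("values")
  .withColumn("json", from_json($"values", schema))
  .select("values", "json.*")
  .show(false)
```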

Hope this helps!

Regarding "json - Preserve the raw JSON as a column in a Spark DataFrame on read/load?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50217668/
