apache-spark - Filtering a column with two different schemas in Spark Scala


I have a dataframe with three columns: ID, CO_ID, and Data, where the Data column holds two different schemas, as shown below:

|ID  |CO_ID |Data
|130 |NA | [{"NUMBER":"AW9F","ADDRESS":"PLOT NO. 230, JAIPUR RJ","PHONE":999999999,"NAME":"SACHIN"}]
|536 |NA | [{"NUMBER":"AW9F","ADDRESS":"PLOT NO. 230, JAIPUR RJ","PHONE":999999999,"NAME":"SACHIN"}]
|518 |NA | null
|938 |611 | {"NUMBER":"AW9F","ADDRESS":"PLOT NO. 230, JAIPUR RJ","PHONE":999999999,"NAME":"SACHIN"}
|742 |NA | {"NUMBER":"AW9F","ADDRESS":"PLOT NO. 230, JAIPUR RJ","PHONE":999999999,"NAME":"SACHIN"}

Now I want to create a dataframe with the columns ID, CO_ID, NUMBER, ADDRESS, and NAME. Where no value is available, NUMBER, ADDRESS, and NAME should be filled with null.

First I need to filter the above dataframe by the two different schemas. How can I do that?
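For the splitting step by itself, here is a minimal sketch that separates the rows by the shape of the Data string (a sketch only, assuming the frame is called df and Data is a plain JSON string column; the null rows are kept with the object-shaped ones):

import org.apache.spark.sql.functions.col

// rows whose Data is array-shaped, i.e. looks like [{...}]
val arrayShaped = df.filter(col("Data").startsWith("["))

// everything else: object-shaped {...} rows, plus nulls
val objectShaped = df.filter(col("Data").isNull || !col("Data").startsWith("["))

The answer below sidesteps the split entirely by normalizing both shapes into one JSON object per row.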

Best Answer

Here is one approach:

import spark.implicits._ // required for toDF and the $"..." column syntax

val df = Seq(
  (130, "NA", """[{"NUMBER":"AW9F","ADDRESS":"PLOT NO. 231, JAIPUR RJ","PHONE":999999999,"NAME":"SACHIN"}]"""),
  (536, "NA", """[{"NUMBER":"AW9F","ADDRESS":"PLOT NO. 232, JAIPUR RJ","PHONE":999999999,"NAME":"SACHIN"}]"""),
  (518, "NA", null),
  (938, "611", """{"NUMBER":"AW9F","ADDRESS":"PLOT NO. 233, JAIPUR RJ","PHONE":999999999,"NAME":"SACHIN"}"""),
  (742, "NA", """{"NUMBER":"AW9F","ADDRESS":"PLOT NO. 234, JAIPUR RJ","PHONE":999999999,"NAME":"SACHIN"}"""))
  .toDF("ID", "CO_ID", "Data")


import org.apache.spark.sql.types.StructType
import org.apache.spark.sql.functions.{from_json, array, when, length, lit}

val schema = (new StructType)
.add("NUMBER", "string", true)
.add("ADDRESS", "string", true)
.add("PHONE", "string", true)
.add("NAME", "string", true)

val df_ar = df.withColumn("json",
    when($"Data".startsWith("[{") && $"Data".endsWith("}]"),
      $"Data".substr(lit(2), length($"Data") - 2)) // array-shaped: strip the surrounding [ and ]
    .otherwise($"Data")) // object-shaped (or null): leave as is
  .withColumn("json", from_json($"json", schema)) // convert the JSON string to a struct using the given schema
  .withColumn("number", $"json.NUMBER")
  .withColumn("address", $"json.ADDRESS")
  .withColumn("name", $"json.NAME")

df_ar.select("ID", "CO_ID", "number", "address", "name").show(false)

This solution first strips the enclosing [ ] from the array-shaped JSON strings, then applies the given schema via from_json to turn each string into a StructType column.

Output:

+---+-----+------+-----------------------+------+
|ID |CO_ID|number|address |name |
+---+-----+------+-----------------------+------+
|130|NA |AW9F |PLOT NO. 231, JAIPUR RJ|SACHIN|
|536|NA |AW9F |PLOT NO. 232, JAIPUR RJ|SACHIN|
|518|NA |null |null |null |
|938|611 |AW9F |PLOT NO. 233, JAIPUR RJ|SACHIN|
|742|NA |AW9F |PLOT NO. 234, JAIPUR RJ|SACHIN|
+---+-----+------+-----------------------+------+
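Alternatively, on Spark 2.4+ (needed for element_at), the string surgery can be skipped by parsing with an array-of-struct schema and falling back to the plain struct schema, taking whichever parse succeeds per row. A sketch only, reusing the schema and df defined above:

import org.apache.spark.sql.types.ArrayType
import org.apache.spark.sql.functions.{from_json, coalesce, element_at, col}

// First try the [{...}] shape, then fall back to the bare {...} shape;
// coalesce keeps whichever parse produced a non-null struct.
val parsed = coalesce(
  element_at(from_json(col("Data"), ArrayType(schema)), 1),
  from_json(col("Data"), schema))

df.withColumn("json", parsed)
  .select(col("ID"), col("CO_ID"),
    col("json.NUMBER").as("number"),
    col("json.ADDRESS").as("address"),
    col("json.NAME").as("name"))
  .show(false)

Null Data rows yield null from both parses, so the 518 row still comes out with null number, address, and name.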

Regarding apache-spark - filtering a column with two different schemas in Spark Scala, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55947431/
