scala - concat_ws removes null values from the output of a Spark data frame

Here is the output of my data frame:

finaldf.show(false)

+------------------+-------------------------+---------------------+---------------+-------------------------+--------------+----------+----------+---------+-------------------------+-------------------------+-----------------------+---------------------------+--------------------------+-------------------+-----------------------+--------------------+------------------------+------------+----------------------+-----------+
|DataPartition |TimeStamp |Source_organizationId|Source_sourceId|FilingDateTime |SourceTypeCode|DocumentId|Dcn |DocFormat|StatementDate |IsFilingDateTimeEstimated|ContainsPreliminaryData|CapitalChangeAdjustmentDate|CumulativeAdjustmentFactor|ContainsRestatement|FilingDateTimeUTCOffset|ThirdPartySourceCode|ThirdPartySourcePriority|SourceTypeId|ThirdPartySourceCodeId|FFAction|!||
+------------------+-------------------------+---------------------+---------------+-------------------------+--------------+----------+----------+---------+-------------------------+-------------------------+-----------------------+---------------------------+--------------------------+-------------------+-----------------------+--------------------+------------------------+------------+----------------------+-----------+
|SelfSourcedPrivate|2017-11-02T10:23:59+00:00|4298009288 |80 |2017-09-28T23:00:00+00:00|10K |null |171105584 |ASFILED |2017-07-31T00:00:00+00:00|false |false |2017-07-31T00:00:00+00:00 |1.0 |false |-300 |SS |1 |3011835 |1000716240 |I|!| |
|SelfSourcedPublic |2017-11-21T12:09:23+00:00|4295904170 |364 |2017-08-08T17:00:00+00:00|10Q |null |null |null |2017-07-30T00:00:00+00:00|false |false |2017-07-30T00:00:00+00:00 |1.0 |false |-300 |SS |1 |3011836 |1000716240 |I|!| |
|SelfSourcedPublic |2017-11-21T12:09:23+00:00|4295904170 |365 |2017-10-10T17:00:00+00:00|10K |null |null |null |2017-09-30T00:00:00+00:00|false |false |2017-09-30T00:00:00+00:00 |1.0 |false |-300 |SS |1 |3011835 |1000716240 |I|!| |
|SelfSourcedPublic |2017-11-21T12:17:49+00:00|4295904170 |365 |2017-10-10T17:00:00+00:00|10K |null |null |null |2017-09-30T00:00:00+00:00|false |false |2017-09-30T00:00:00+00:00 |1.0 |false |-300 |SS |1 |3011835 |1000716240 |I|!| |

When I apply concat_ws, the null values are removed from the rows:

import org.apache.spark.sql.functions.{col, concat_ws, lit}

// fill in the missing columns (diff) with the literal string "null"
val finaldf = diff.foldLeft(tempReorder){ (temp2df, colName) => temp2df.withColumn(colName, lit("null")) }
//finaldf.show(false)

// build the header: column names joined by the "|^|" delimiter
val headerColumn = data.columns.toSeq
val header = headerColumn.mkString("", "|^|", "|!|").dropRight(3)

// concatenate all columns into a single "|^|"-delimited column named after the header
val finaldfWithDelimiter = finaldf.select(concat_ws("|^|", finaldf.schema.fieldNames.map(col): _*).as("concatenated")).withColumnRenamed("concatenated", header)
finaldfWithDelimiter.show(false)

I get the output below:

+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|DataPartition|^|TimeStamp|^|Source_organizationId|^|Source_sourceId|^|FilingDateTime|^|SourceTypeCode|^|DocumentId|^|Dcn|^|DocFormat|^|StatementDate|^|IsFilingDateTimeEstimated|^|ContainsPreliminaryData|^|CapitalChangeAdjustmentDate|^|CumulativeAdjustmentFactor|^|ContainsRestatement|^|FilingDateTimeUTCOffset|^|ThirdPartySourceCode|^|ThirdPartySourcePriority|^|SourceTypeId|^|ThirdPartySourceCodeId|^|FFAction|!||
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|SelfSourcedPrivate|^|2017-11-02T10:23:59+00:00|^|4298009288|^|80|^|2017-09-28T23:00:00+00:00|^|10K|^|171105584|^|ASFILED|^|2017-07-31T00:00:00+00:00|^|false|^|false|^|2017-07-31T00:00:00+00:00|^|1.0|^|false|^|-300|^|SS|^|1|^|3011835|^|1000716240|^|I|!| |
|SelfSourcedPublic|^|2017-11-21T12:09:23+00:00|^|4295904170|^|364|^|2017-08-08T17:00:00+00:00|^|10Q|^|2017-07-30T00:00:00+00:00|^|false|^|false|^|2017-07-30T00:00:00+00:00|^|1.0|^|false|^|-300|^|SS|^|1|^|3011836|^|1000716240|^|I|!| |
|SelfSourcedPublic|^|2017-11-21T12:09:23+00:00|^|4295904170|^|365|^|2017-10-10T17:00:00+00:00|^|10K|^|2017-09-30T00:00:00+00:00|^|false|^|false|^|2017-09-30T00:00:00+00:00|^|1.0|^|false|^|-300|^|SS|^|1|^|3011835|^|1000716240|^|I|!| |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

In the output, the DocumentId value, which was null, has been removed entirely.

I can't figure out what I'm missing.

Best Answer

concat_ws does indeed remove null columns during concatenation. If you want to keep a placeholder for each null in the concatenated result, one approach is to create a type-dependent Map of colName -> nullValue and use na.fill() to transform the data frame before concatenating, as shown below:

import org.apache.spark.sql.functions.{col, concat_ws}
import spark.implicits._  // `spark` is the SparkSession (pre-imported in spark-shell)

val df = Seq(
  (new Integer(1), "a"),
  (new Integer(2), null),
  (null, "c")
).toDF("col1", "col2")

df.withColumn("concat", concat_ws("|", df.columns.map(col): _*)).
show
// +----+----+------+
// |col1|col2|concat|
// +----+----+------+
// | 1| a| 1|a|
// | 2|null| 2|
// |null| c| c|
// +----+----+------+
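Note that concat_ws drops both the null value and its separator: the second row concatenates to 2 rather than 2|. This is exactly why the DocumentId, Dcn and DocFormat slots vanish from the output in the question.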

val naMap = df.dtypes.map( t => t._2 match {
  case "StringType" => (t._1, "(n/a)")
  case "IntegerType" => (t._1, 0)
  case "LongType" => (t._1, 0L)
  // cases for other types ...
} ).toMap
// naMap: scala.collection.immutable.Map[String,Any] =
// Map(col1 -> 0, col2 -> (n/a))

df.na.fill(naMap).
withColumn("concat", concat_ws("|", df.columns.map(col): _*)).
show
// +----+-----+-------+
// |col1| col2| concat|
// +----+-----+-------+
// | 1| a| 1|a|
// | 2|(n/a)|2|(n/a)|
// | 0| c| 0|c|
// +----+-----+-------+
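
Applied to the data frame in the question, a minimal sketch of the same fix would fill the null-prone columns before the concat_ws select. This assumes DocumentId, Dcn and DocFormat are the string columns that hold the nulls:

// assumption: DocumentId, Dcn and DocFormat are the string columns with nulls
val finaldfFilled = finaldf.na.fill("null", Seq("DocumentId", "Dcn", "DocFormat"))

val finaldfWithDelimiter = finaldfFilled.
  select(concat_ws("|^|", finaldfFilled.columns.map(col): _*).as("concatenated")).
  withColumnRenamed("concatenated", header)
// every row now keeps a "null" placeholder in each "|^|" slot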

Regarding "scala - concat_ws removes null values from the output of a Spark data frame", this answer comes from a similar question on Stack Overflow: https://stackoverflow.com/questions/48734445/
