
scala - Is there a way to create multiple columns from existing columns of a dataframe in Scala?


I am trying to extract an RDBMS table into Hive. I obtained the dataframe as follows:

val yearDF = spark.read.format("jdbc").option("url", connectionUrl)
.option("dbtable", "(select * from schema.tablename where source_system_name='DB2' and period_year='2017') as year2017")
.option("user", devUserName)
.option("password", devPassword)
.option("numPartitions",15)
.load()

These are the columns of the dataframe:

geography:string
project:string
reference_code:string
product_line:string
book_type:string
cc_region:string
cc_channel:string
cc_function:string
pl_market:string
ptd_balance:double
qtd_balance:double
ytd_balance:double
xx_last_update_tms:timestamp
xx_last_update_log_id:int
xx_data_hash_code:string
xx_data_hash_id:bigint

The columns ptd_balance, qtd_balance and ytd_balance are of data type double. Our project wants to convert them from Double to String by creating new columns ptd_balance_text, qtd_balance_text and ytd_balance_text that hold the same data, in order to avoid any data truncation.

withColumn creates a new column in the dataframe, and withColumnRenamed renames an existing column.

The dataframe has close to 10 million records. Is there an efficient way to create multiple new columns that hold the same data as existing columns of the dataframe but with a different type?

Best Answer

If I were in your place, I would change the extraction query, or ask the BI team to put in some effort :P and add and cast the fields dynamically at extraction time. But in any case, what you are asking is possible.
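For illustration, that casting could be pushed into the JDBC query itself. Below is a minimal sketch of the idea, reusing the subquery, connection variables and column names from the question; the varchar(64) length is an assumption, not something from the original post:

// Sketch: materialise the *_text columns inside the JDBC pushdown query,
// so no extra transformation is needed on the Spark side.
// Assumes connectionUrl, devUserName and devPassword from the question.
val castedQuery =
  """(select t.*,
    |        cast(ptd_balance as varchar(64)) as ptd_balance_text,
    |        cast(qtd_balance as varchar(64)) as qtd_balance_text,
    |        cast(ytd_balance as varchar(64)) as ytd_balance_text
    |   from schema.tablename t
    |  where source_system_name='DB2' and period_year='2017') as year2017""".stripMargin

val yearDF = spark.read.format("jdbc")
  .option("url", connectionUrl)
  .option("dbtable", castedQuery)
  .option("user", devUserName)
  .option("password", devPassword)
  .option("numPartitions", 15)
  .load()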

You can add columns from existing columns as shown below. Check the addColsTosampleDF dataframe. I hope the comments below are enough to understand; if you have any questions, feel free to add them in the comments and I will edit my answer.

scala> import org.apache.spark.sql.functions._
import org.apache.spark.sql.functions._

scala> import org.apache.spark.sql.{DataFrame, Row, SparkSession}
import org.apache.spark.sql.{DataFrame, Row, SparkSession}

scala> val ss = SparkSession.builder().appName("TEST").getOrCreate()
18/08/07 15:51:42 WARN SparkSession$Builder: Using an existing SparkSession; some configuration may not take effect.
ss: org.apache.spark.sql.SparkSession = org.apache.spark.sql.SparkSession@6de4071b

//Sample dataframe with int, double and string fields
scala> val sampleDf = Seq((100, 1.0, "row1"),(1,10.12,"col_float")).toDF("col1", "col2", "col3")
sampleDf: org.apache.spark.sql.DataFrame = [col1: int, col2: double ... 1 more field]

scala> sampleDf.printSchema
root
|-- col1: integer (nullable = false)
|-- col2: double (nullable = false)
|-- col3: string (nullable = true)

//Adding columns col1_string from col1 and col2_doubletostring from col2 with casting and alias
scala> val addColsTosampleDF = sampleDf.
         select(sampleDf.col("col1"),
                sampleDf.col("col2"),
                sampleDf.col("col3"),
                sampleDf.col("col1").cast("string").alias("col1_string"),
                sampleDf.col("col2").cast("string").alias("col2_doubletostring"))
addColsTosampleDF: org.apache.spark.sql.DataFrame = [col1: int, col2: double ... 3 more fields]

//Schema with added columns
scala> addColsTosampleDF.printSchema
root
|-- col1: integer (nullable = false)
|-- col2: double (nullable = false)
|-- col3: string (nullable = true)
|-- col1_string: string (nullable = false)
|-- col2_doubletostring: string (nullable = false)

scala> addColsTosampleDF.show()
+----+-----+---------+-----------+-------------------+
|col1| col2|     col3|col1_string|col2_doubletostring|
+----+-----+---------+-----------+-------------------+
| 100|  1.0|     row1|        100|                1.0|
|   1|10.12|col_float|          1|              10.12|
+----+-----+---------+-----------+-------------------+
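If the extraction query cannot be changed, the same result can also be reached with withColumn driven by a list of source columns, which avoids listing every existing column in a select. A minimal sketch, assuming the yearDF and the balance column names from the question:

// Sketch: derive one *_text string column per balance column via foldLeft + withColumn,
// leaving all existing columns untouched.
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

val balanceCols = Seq("ptd_balance", "qtd_balance", "ytd_balance")

val withTextCols: DataFrame = balanceCols.foldLeft(yearDF) { (df, c) =>
  df.withColumn(s"${c}_text", col(c).cast("string"))
}

withTextCols.printSchema()  // the three new string columns appear at the end of the schema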

Regarding "scala - Is there a way to create multiple columns from existing columns of a dataframe in Scala?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51729222/
