
scala - Tuple to DataFrame in Spark Scala

Reposted · Author: 行者123 · Updated: 2023-12-04 17:07:40

I have an array named arraylist that looks like this:

arraylist: Array[(String, Any)] = Array((id,772914), (x4,2), (x5,24), (x6,1), (x7,77491.25), (x8,17911.77778), (x9,225711), (x10,17), (x12,6), (x14,5), (x16,5), (x18,5.0), (x19,8.0), (x20,7959.0), (x21,676.0), (x22,228.5068871), (x23,195.0), (x24,109.6015511), (x25,965.0), (x26,1017.79043), (x27,2.0), (Target,1), (x29,13), (x30,735255.5), (x31,332998.432), (x32,38168.75), (x33,107957.5278), (x34,13), (x35,13), (x36,13), (x37,13), (x38,13), (x39,13), (x40,13), (x41,7), (x42,13), (x43,13), (x44,13), (x45,13), (x46,13), (x47,13), (x48,13), (x49,14.0), (x50,2.588435821), (x51,617127.5), (x52,414663.9738), (x53,39900.0), (x54,16743.15781), (x55,105000.0), (x56,52842.29076), (x57,25750.46154), (x58,8532.045819), (x64,13), (x66,13), (x67,13), (x68,13), (x69,13), (x70,13), (x71,13), (x73,13), (...

I want to convert it to a DataFrame with two columns, one for the names and one for the values. The code I am using is:
val df = sc.parallelize(arraylist).toDF("Names","Values")

but I get this error:
java.lang.UnsupportedOperationException: Schema for type Any is not supported

How can I get around this?

Best Answer

The message tells you everything :) Any is not supported as a DataFrame column type. The Any type is likely caused by null values, or by mixed numeric types (here both Int and Double), appearing as the second element of the tuples.

Either change the type of arraylist to Array[(String, Int)] (if you can do it manually; if the type is inferred by Scala, check the second elements for null or invalid values), or create the schema manually:
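A minimal sketch of the first option, assuming a spark-shell session where `sc` and the implicit `toDF` conversion are in scope: normalizing every second element to Double lets Scala infer `Array[(String, Double)]` instead of `Array[(String, Any)]`, so `toDF` works directly.

```scala
// Assumes spark-shell (sc available, spark.implicits._ / sqlContext.implicits._ imported).
val arraylist: Array[(String, Any)] = Array(("id", 772914), ("x4", 2.0), ("x5", 24.0))

// Cast each value to Double so the array has a uniform, supported element type.
val typed: Array[(String, Double)] =
  arraylist.map { case (name, value) => (name, value.asInstanceOf[Number].doubleValue()) }

val df = sc.parallelize(typed).toDF("Names", "Values")
df.show()
```

This keeps the original `toDF` call from the question; only the element type changes.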

import org.apache.spark.sql.types._
import org.apache.spark.sql._

val arraylist: Array[(String, Any)] = Array(("id", 772914), ("x4", 2.0), ("x5", 24.0))

// Define the schema explicitly, since Any cannot be inferred
val schema = StructType(
  StructField("Names", StringType, false) ::
  StructField("Values", DoubleType, false) :: Nil)

// Map each tuple to a Row, converting the second element to Double
val rdd = sc.parallelize(arraylist)
  .map(x => Row(x._1, x._2.asInstanceOf[Number].doubleValue()))

val df = sqlContext.createDataFrame(rdd, schema)

df.show()

Note: createDataFrame expects an RDD[Row], so I convert the RDD of tuples into an RDD of Rows.
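As a variation on the same idea, a case class can supply the schema instead of a hand-built StructType; this is a sketch assuming a spark-shell session (the case class name `NameValue` is illustrative, not from the original answer):

```scala
// Assumes spark-shell with sc available and the toDF implicits imported.
case class NameValue(Names: String, Values: Double)

val arraylist: Array[(String, Any)] = Array(("id", 772914), ("x4", 2.0), ("x5", 24.0))

// Map each tuple into the case class; column names come from its fields.
val rows = arraylist.map { case (n, v) => NameValue(n, v.asInstanceOf[Number].doubleValue()) }

val df = sc.parallelize(rows).toDF()
df.printSchema()
```

Spark derives both the column names and types from the case class, so no explicit Row construction is needed.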

Regarding "scala - Tuple to DataFrame in Spark Scala", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/41042809/
