I'm trying to create a DataFrame with Spark, using the code below.
First, I import the following:
import org.apache.spark.sql.types._
import org.apache.spark.storage.StorageLevel
import scala.io.Source
import scala.collection.mutable.HashMap
import java.io.File
import org.apache.spark.sql.Row
import scala.collection.mutable.ListBuffer
import org.apache.spark.util._
import org.apache.spark.sql.types.IntegerType
Then I build the Rows and the schema for the DataFrame, as shown below.
val Employee = Seq(Row("Kim","Seoul","1000000"),Row("Lee","Busan","2000000"),Row("Park","Jeju","3000000"),Row("Jeong","Daejon","3400000"))
val EmployeeSchema = List(StructField("Name", StringType, true), StructField("City", StringType, true), StructField("Salary", IntegerType, true))
val EmpDF = spark.createDataFrame(spark.sparkContext.parallelize(Employee),StructType(EmployeeSchema))
Finally, I try to check that the DataFrame works with
EmpDF.show
and I get the following error:
ERROR Executor: Exception in task 2.0 in stage 1.0 (TID 3)
java.lang.RuntimeException: Error while encoding: java.lang.RuntimeException:
java.lang.String is not a valid external type for schema of int
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 0, Name), StringType), true, false) AS Name#0
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 1, City), StringType), true, false) AS City#1
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 2, Salary), IntegerType) AS Salary#2
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:292)
at org.apache.spark.sql.SparkSession$$anonfun$4.apply(SparkSession.scala:594)
at org.apache.spark.sql.SparkSession$$anonfun$4.apply(SparkSession.scala:594)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:255)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:858)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:858)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: java.lang.String is not a valid external type for schema of int
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.If_0$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.writeFields_0_1$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:289)
... 25 more
(the same exception and stack trace are then repeated for task 0.0 in stage 1.0 (TID 1))
How can I fix this problem?
I have already tried adding the following imports:
import org.apache.spark.serializer.KryoSerializer
import org.apache.spark.serializer.Serializer
but it still fails with an error:
ERROR Executor: Exception in task 2.0 in stage 5.0 (TID 13)
Best answer
The error
Error while encoding: java.lang.RuntimeException: java.lang.String is not a valid external type for schema of int
is caused by a type mismatch between the declared schema and the actual data: each row, e.g. "Jeong","Daejon","3400000", contains (string, string, string), but the schema declares (String, String, Integer) for its fields.
Updated code, option 1: keep Salary as an integer type (change the data to match the schema):
import org.apache.spark.sql.types._
import org.apache.spark.storage.StorageLevel
import scala.io.Source
import scala.collection.mutable.HashMap
import java.io.File
import org.apache.spark.sql.Row
import scala.collection.mutable.ListBuffer
import org.apache.spark.util._
import org.apache.spark.sql.types._
val Employee = Seq(Row("Kim","Seoul",1000000),Row("Lee","Busan",2000000),Row("Park","Jeju",3000000),Row("Jeong","Daejon",3400000))
val EmployeeSchema = List(StructField("Name", StringType, true), StructField("City", StringType, true), StructField("Salary", IntegerType, true))
val EmpDF = spark.createDataFrame(spark.sparkContext.parallelize(Employee),StructType(EmployeeSchema))
EmpDF.show()
/*+-----+------+-------+
| Name| City| Salary|
+-----+------+-------+
| Kim| Seoul|1000000|
| Lee| Busan|2000000|
| Park| Jeju|3000000|
|Jeong|Daejon|3400000|
+-----+------+-------+*/
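As a side note (my addition, not part of the original answer): if you build the data as a Seq of tuples rather than Rows, Spark infers the schema from the tuple element types via toDF, so this kind of schema/data mismatch cannot occur. A minimal sketch, assuming a spark-shell session where spark.implicits is available (the name empDF is arbitrary):
import spark.implicits._
// Spark infers (String, String, Int) from the tuple element types,
// so the schema always agrees with the data.
val empDF = Seq(
  ("Kim", "Seoul", 1000000),
  ("Lee", "Busan", 2000000),
  ("Park", "Jeju", 3000000),
  ("Jeong", "Daejon", 3400000)
).toDF("Name", "City", "Salary")
empDF.printSchema() // Salary: integer (nullable = false)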
Updated code, option 2: keep Salary as a string type (change the schema to match the data):
import org.apache.spark.sql.types._
import org.apache.spark.storage.StorageLevel
import scala.io.Source
import scala.collection.mutable.HashMap
import java.io.File
import org.apache.spark.sql.Row
import scala.collection.mutable.ListBuffer
import org.apache.spark.util._
import org.apache.spark.sql.types._
val Employee = Seq(Row("Kim","Seoul","1000000"),Row("Lee","Busan","2000000"),Row("Park","Jeju","3000000"),Row("Jeong","Daejon","3400000"))
val EmployeeSchema = List(StructField("Name", StringType, true), StructField("City", StringType, true), StructField("Salary", StringType, true))
val EmpDF = spark.createDataFrame(spark.sparkContext.parallelize(Employee),StructType(EmployeeSchema))
EmpDF.show()
/*+-----+------+-------+
| Name| City| Salary|
+-----+------+-------+
| Kim| Seoul|1000000|
| Lee| Busan|2000000|
| Park| Jeju|3000000|
|Jeong|Daejon|3400000|
+-----+------+-------+*/
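A third option (again my own sketch, not from the original answer) is to keep the string data and the StringType schema above, and then cast the Salary column to an integer afterwards. Assuming EmpDF from the string-typed code above, with IntegerType already in scope via org.apache.spark.sql.types._:
import org.apache.spark.sql.functions.col
// Cast the string column to int after the DataFrame is built;
// any value that cannot be parsed as an integer becomes null instead of throwing.
val empWithIntSalary = EmpDF.withColumn("Salary", col("Salary").cast(IntegerType))
empWithIntSalary.printSchema() // Salary: integer (nullable = true)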
Regarding "scala - java.lang.String is not a valid external type for schema of int error in creating spark dataframe", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/62858613/