Null-safe equal operator in a left outer join causes a NullPointerException

Versions:
Spark 2.2.0,
Scala 2.11.8
scala> var d1 = Seq((null, 1), ("a1", 2)).toDF("a", "b")
scala> d1.show
+----+---+
| a| b|
+----+---+
|null| 1|
| a1| 2|
+----+---+
scala> var d2 = Seq(("a2", 3)).toDF("a", "b")
scala> d2.show
+---+---+
| a| b|
+---+---+
| a2| 3|
+---+---+
scala> d1.joinWith(d2, d1("a") <=> d2("a"), "left_outer").show
17/10/10 09:44:39 ERROR Executor: Exception in task 0.0 in stage 6.0 (TID 8)
java.lang.NullPointerException
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:234)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:228)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
17/10/10 10:19:28 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.NullPointerException
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:234)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:228)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
17/10/10 10:19:28 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.NullPointerException
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:234)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:228)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
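The stack trace points into whole-stage-generated code (GeneratedClass.processNext via WholeStageCodegenExec), so one diagnostic step, not part of the original report, is to disable whole-stage code generation and re-run the join in the same spark-shell session; if the NPE disappears, the failure lies in the generated join code rather than in the data. spark.sql.codegen.wholeStage is an existing Spark SQL configuration (it defaults to true):

scala> // Diagnostic only: turn off whole-stage codegen, then retry the null-safe join.
scala> spark.conf.set("spark.sql.codegen.wholeStage", "false")
scala> d1.joinWith(d2, d1("a") <=> d2("a"), "left_outer").show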
Best answer
I can say that I could not reproduce the problem on Spark 2.2.0 with Scala 2.11.8: running the same example, I get no exception with either the null-safe operator <=> or the plain equality operator ===.
val d1 = sc.parallelize(Seq(
(null, 1), ("a1",2))
).toDF("a", "b")
d1.show
+----+---+
| a| b|
+----+---+
|null| 1|
| a1| 2|
+----+---+
val d2 = sc.parallelize(Seq(
("a2",3))
).toDF("a", "b")
d2.show
+---+---+
| a| b|
+---+---+
| a2| 3|
+---+---+
d1.joinWith(d2, d1("a") <=> d2("a"), "left_outer").show()
+--------+----+
| _1| _2|
+--------+----+
|[null,1]|null|
| [a1,2]|null|
+--------+----+
d1.joinWith(d2, d1("a") === d2("a"), "left_outer").show()
+--------+----+
| _1| _2|
+--------+----+
|[null,1]|null|
| [a1,2]|null|
+--------+----+
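The two operators only differ when null can appear on both sides of the comparison: <=> (the Column.eqNullSafe operator) evaluates to true when both operands are null and false when exactly one is, while === evaluates to null in either case, which a join condition treats as "no match". A minimal sketch of that distinction; the probe DataFrame and its column names are mine, assuming the usual spark-shell implicits are in scope:

// <=> is sugar for Column.eqNullSafe: null <=> null is true, null === null is null.
val probe = Seq((null: String, null: String), ("a1", null), ("a1", "a1")).toDF("l", "r")
probe.select(probe("l"), probe("r"),
  (probe("l") <=> probe("r")).as("null_safe"),
  (probe("l") === probe("r")).as("plain")).show()
+----+----+---------+-----+
|   l|   r|null_safe|plain|
+----+----+---------+-----+
|null|null|     true| null|
|  a1|null|    false| null|
|  a1|  a1|     true| true|
+----+----+---------+-----+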
val x = Seq(
  (100L, null), (102L, "17179869185L"), (101L, "17179869186L"), (200L, "17179869186L"),
  (401L, "1L"), (500L, "1L"), (600L, "8589934593L"), (700L, "8589934593L"),
  (800L, "8589934593L"), (900L, "8589934594L"), (1000L, "8589934594L"),
  (1200L, "2L"), (1300L, "2L"), (1301L, "2L"),
  (1400L, "17179869187L"), (1500L, "17179869188L"), (1600L, "8589934595L")
).toDF("u", "x1")
x.show()
+----+------------+
| u| x1|
+----+------------+
| 100| null|
| 102|17179869185L|
| 101|17179869186L|
| 200|17179869186L|
| 401| 1L|
| 500| 1L|
| 600| 8589934593L|
| 700| 8589934593L|
| 800| 8589934593L|
| 900| 8589934594L|
|1000| 8589934594L|
|1200| 2L|
|1300| 2L|
|1301| 2L|
|1400|17179869187L|
|1500|17179869188L|
|1600| 8589934595L|
+----+------------+
val y = Seq(("17179869187L", -8589934595L), ("17179869188L", -8589934595L),
  ("17179869185L", -8589934593L)).toDF("x2", "y")
y.show()
+------------+-----------+
| x2| y|
+------------+-----------+
|17179869187L|-8589934595|
|17179869188L|-8589934595|
|17179869185L|-8589934593|
+------------+-----------+
x.join(y,'x1 === 'x2, "left_outer").show()
+----+------------+------------+-----------+
| u| x1| x2| y|
+----+------------+------------+-----------+
| 100| null| null| null|
| 102|17179869185L|17179869185L|-8589934593|
| 101|17179869186L| null| null|
| 200|17179869186L| null| null|
| 401| 1L| null| null|
| 500| 1L| null| null|
| 600| 8589934593L| null| null|
| 700| 8589934593L| null| null|
| 800| 8589934593L| null| null|
| 900| 8589934594L| null| null|
|1000| 8589934594L| null| null|
|1200| 2L| null| null|
|1300| 2L| null| null|
|1301| 2L| null| null|
|1400|17179869187L|17179869187L|-8589934595|
|1500|17179869188L|17179869188L|-8589934595|
|1600| 8589934595L| null| null|
+----+------------+------------+-----------+
x: org.apache.spark.sql.DataFrame = [u: bigint, x1: string]
y: org.apache.spark.sql.DataFrame = [x2: string, y: bigint]
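For completeness, here is a self-contained version of the answer's first comparison that can be compiled and run outside a REPL; the object name, app name, and local master setting are mine, not from the post:

import org.apache.spark.sql.SparkSession

object NullSafeJoinRepro {
  def main(args: Array[String]): Unit = {
    // Local session for the repro; adjust master/appName as needed.
    val spark = SparkSession.builder()
      .appName("null-safe-join-repro")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val d1 = Seq((null: String, 1), ("a1", 2)).toDF("a", "b")
    val d2 = Seq(("a2", 3)).toDF("a", "b")

    // Null-safe equality: null <=> null is true, so a null key could match a null key.
    d1.joinWith(d2, d1("a") <=> d2("a"), "left_outer").show()
    // Plain equality: null === anything is null, so the null key never matches.
    d1.joinWith(d2, d1("a") === d2("a"), "left_outer").show()

    spark.stop()
  }
}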
Regarding "scala - Spark 2.2 null-safe left outer join NullPointerException", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46668984/