I am trying to sort an RDD in Spark. I know I can use the sortBy transformation to obtain a sorted RDD. What I want to measure is how sortBy performs compared to sorting the individual partitions with mapPartitions and then merging the sorted partitions with a reduce function to obtain one sorted list. When I use this approach, I run into a java.lang.reflect.InvocationTargetException.
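In outline, the approach described above (sort each partition locally, then merge sorted partitions pairwise) can be sketched without Spark as plain Java; the class and method names here are illustrative, not part of any Spark API:

```java
import java.util.*;

// Plain-Java sketch of the mapPartitions + reduce idea: each "partition"
// is sorted independently, then sorted lists are merged pairwise.
public class SortMergeSketch {
    // Merge two already-sorted lists into one sorted list.
    static List<Double> merge(List<Double> a, List<Double> b) {
        List<Double> out = new ArrayList<>();
        int i = 0, j = 0;
        while (i < a.size() && j < b.size()) {
            out.add(a.get(i) <= b.get(j) ? a.get(i++) : b.get(j++));
        }
        while (i < a.size()) out.add(a.get(i++));
        while (j < b.size()) out.add(b.get(j++));
        return out;
    }

    public static void main(String[] args) {
        List<Double> p1 = new ArrayList<>(Arrays.asList(3.0, 1.0));
        List<Double> p2 = new ArrayList<>(Arrays.asList(4.0, 2.0));
        Collections.sort(p1); // local sort, as mapPartitions would do per partition
        Collections.sort(p2);
        System.out.println(merge(p1, p2)); // [1.0, 2.0, 3.0, 4.0]
    }
}
```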
Here is my Spark implementation:
import java.util.*;
import org.apache.spark.api.java.*;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;

class Inc {
    String line;
    Double income;
}

public class SimpleApp {
    public static void main(String[] args) {
        String logFile = "YOUR_SPARK_HOME/README.md"; // Should be some file on your system
        SparkConf conf = new SparkConf().setAppName("Simple Application");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> rdd = sc.textFile("data.txt", 4);
        long start = System.currentTimeMillis();
        JavaRDD<LinkedList<Inc>> rdd3 = rdd.mapPartitions(new FlatMapFunction<Iterator<String>, LinkedList<Inc>>() {
            @Override
            public Iterable<LinkedList<Inc>> call(Iterator<String> t) throws Exception {
                LinkedList<Inc> lines = new LinkedList<Inc>();
                while (t.hasNext()) {
                    Inc i = new Inc();
                    String s = t.next();
                    i.line = s;
                    String[] arr1 = s.split(",");
                    i.income = Double.parseDouble(arr1[24]);
                    lines.add(i);
                }
                Collections.sort(lines, new IncomeComparator());
                LinkedList<LinkedList<Inc>> list = new LinkedList<LinkedList<Inc>>();
                list.add(lines);
                return list;
            }
        });
        rdd3.reduce(new Function2<LinkedList<Inc>, LinkedList<Inc>, LinkedList<Inc>>() {
            @Override
            public LinkedList<Inc> call(LinkedList<Inc> a, LinkedList<Inc> b) throws Exception {
                LinkedList<Inc> result = new LinkedList<Inc>();
                while (a.size() > 0 && b.size() > 0) {
                    if (a.getFirst().income.compareTo(b.getFirst().income) <= 0)
                        result.add(a.poll());
                    else
                        result.add(b.poll());
                }
                while (a.size() > 0)
                    result.add(a.poll());
                while (b.size() > 0)
                    result.add(b.poll());
                return result;
            }
        });
        long end = System.currentTimeMillis();
        System.out.println(end - start);
    }

    public static class IncomeComparator implements Comparator<Inc> {
        @Override
        public int compare(Inc a, Inc b) {
            return a.income.compareTo(b.income);
        }
    }
}
The error I get is:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/06/01 15:01:50 INFO SparkContext: Running Spark version 1.3.0
15/06/01 15:01:50 INFO SecurityManager: Changing view acls to: rshankar
15/06/01 15:01:50 INFO SecurityManager: Changing modify acls to: rshankar
15/06/01 15:01:50 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(rshankar); users with modify permissions: Set(rshankar)
15/06/01 15:01:50 INFO Slf4jLogger: Slf4jLogger started
15/06/01 15:01:51 INFO Remoting: Starting remoting
15/06/01 15:01:51 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@sslab01.cs.purdue.edu:40654]
15/06/01 15:01:51 INFO Utils: Successfully started service 'sparkDriver' on port 40654.
15/06/01 15:01:51 INFO SparkEnv: Registering MapOutputTracker
15/06/01 15:01:51 INFO SparkEnv: Registering BlockManagerMaster
15/06/01 15:01:51 INFO DiskBlockManager: Created local directory at /tmp/spark-4cafc660-84b5-4e51-9553-7ded22f179a9/blockmgr-fa2d7355-ba5a-4eec-9a4c-bebbf6b41b95
15/06/01 15:01:51 INFO MemoryStore: MemoryStore started with capacity 265.1 MB
15/06/01 15:01:51 INFO HttpFileServer: HTTP File server directory is /tmp/spark-6f487208-ecdb-4e82-ab1e-b55fc2d910b9/httpd-584c4b2b-47d7-4d6d-80e9-965b7721c8ae
15/06/01 15:01:51 INFO HttpServer: Starting HTTP Server
15/06/01 15:01:51 INFO Server: jetty-8.y.z-SNAPSHOT
15/06/01 15:01:51 INFO AbstractConnector: Started SocketConnector@0.0.0.0:37466
15/06/01 15:01:51 INFO Utils: Successfully started service 'HTTP file server' on port 37466.
15/06/01 15:01:51 INFO SparkEnv: Registering OutputCommitCoordinator
15/06/01 15:01:51 INFO Server: jetty-8.y.z-SNAPSHOT
15/06/01 15:01:51 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/06/01 15:01:51 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/06/01 15:01:51 INFO SparkUI: Started SparkUI at http://sslab01.cs.purdue.edu:4040
15/06/01 15:01:51 INFO SparkContext: Added JAR file:/homes/rshankar/spark-java/target/simple-project-1.0.jar at http://128.10.25.101:37466/jars/simple-project-1.0.jar with timestamp 1433170911502
15/06/01 15:01:51 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@sslab01.cs.purdue.edu:7077/user/Master...
15/06/01 15:01:51 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20150601150151-0003
15/06/01 15:01:51 INFO AppClient$ClientActor: Executor added: app-20150601150151-0003/0 on worker-20150601150057-sslab05.cs.purdue.edu-48984 (sslab05.cs.purdue.edu:48984) with 4 cores
15/06/01 15:01:51 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150601150151-0003/0 on hostPort sslab05.cs.purdue.edu:48984 with 4 cores, 512.0 MB RAM
15/06/01 15:01:51 INFO AppClient$ClientActor: Executor added: app-20150601150151-0003/1 on worker-20150601150013-sslab02.cs.purdue.edu-42836 (sslab02.cs.purdue.edu:42836) with 4 cores
15/06/01 15:01:51 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150601150151-0003/1 on hostPort sslab02.cs.purdue.edu:42836 with 4 cores, 512.0 MB RAM
15/06/01 15:01:51 INFO AppClient$ClientActor: Executor added: app-20150601150151-0003/2 on worker-20150601150046-sslab04.cs.purdue.edu-57866 (sslab04.cs.purdue.edu:57866) with 4 cores
15/06/01 15:01:51 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150601150151-0003/2 on hostPort sslab04.cs.purdue.edu:57866 with 4 cores, 512.0 MB RAM
15/06/01 15:01:51 INFO AppClient$ClientActor: Executor added: app-20150601150151-0003/3 on worker-20150601150032-sslab03.cs.purdue.edu-43239 (sslab03.cs.purdue.edu:43239) with 4 cores
15/06/01 15:01:51 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150601150151-0003/3 on hostPort sslab03.cs.purdue.edu:43239 with 4 cores, 512.0 MB RAM
15/06/01 15:01:51 INFO AppClient$ClientActor: Executor updated: app-20150601150151-0003/0 is now RUNNING
15/06/01 15:01:51 INFO AppClient$ClientActor: Executor updated: app-20150601150151-0003/1 is now RUNNING
15/06/01 15:01:51 INFO AppClient$ClientActor: Executor updated: app-20150601150151-0003/2 is now RUNNING
15/06/01 15:01:51 INFO AppClient$ClientActor: Executor updated: app-20150601150151-0003/3 is now RUNNING
15/06/01 15:01:51 INFO AppClient$ClientActor: Executor updated: app-20150601150151-0003/0 is now LOADING
15/06/01 15:01:51 INFO AppClient$ClientActor: Executor updated: app-20150601150151-0003/2 is now LOADING
15/06/01 15:01:51 INFO AppClient$ClientActor: Executor updated: app-20150601150151-0003/3 is now LOADING
15/06/01 15:01:51 INFO AppClient$ClientActor: Executor updated: app-20150601150151-0003/1 is now LOADING
15/06/01 15:01:51 INFO NettyBlockTransferService: Server created on 35703
15/06/01 15:01:51 INFO BlockManagerMaster: Trying to register BlockManager
15/06/01 15:01:51 INFO BlockManagerMasterActor: Registering block manager sslab01.cs.purdue.edu:35703 with 265.1 MB RAM, BlockManagerId(<driver>, sslab01.cs.purdue.edu, 35703)
15/06/01 15:01:51 INFO BlockManagerMaster: Registered BlockManager
15/06/01 15:01:52 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
15/06/01 15:01:52 INFO MemoryStore: ensureFreeSpace(32728) called with curMem=0, maxMem=278019440
15/06/01 15:01:52 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 32.0 KB, free 265.1 MB)
15/06/01 15:01:52 INFO MemoryStore: ensureFreeSpace(4959) called with curMem=32728, maxMem=278019440
15/06/01 15:01:52 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 4.8 KB, free 265.1 MB)
15/06/01 15:01:52 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on sslab01.cs.purdue.edu:35703 (size: 4.8 KB, free: 265.1 MB)
15/06/01 15:01:52 INFO BlockManagerMaster: Updated info of block broadcast_0_piece0
15/06/01 15:01:52 INFO SparkContext: Created broadcast 0 from textFile at SimpleApp.java:24
15/06/01 15:01:52 WARN LoadSnappy: Snappy native library is available
15/06/01 15:01:52 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/06/01 15:01:52 WARN LoadSnappy: Snappy native library not loaded
15/06/01 15:01:52 INFO FileInputFormat: Total input paths to process : 1
15/06/01 15:01:52 INFO SparkContext: Starting job: reduce at SimpleApp.java:51
15/06/01 15:01:52 INFO DAGScheduler: Got job 0 (reduce at SimpleApp.java:51) with 4 output partitions (allowLocal=false)
15/06/01 15:01:52 INFO DAGScheduler: Final stage: Stage 0(reduce at SimpleApp.java:51)
15/06/01 15:01:52 INFO DAGScheduler: Parents of final stage: List()
15/06/01 15:01:52 INFO DAGScheduler: Missing parents: List()
15/06/01 15:01:52 INFO DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[2] at mapPartitions at SimpleApp.java:28), which has no missing parents
15/06/01 15:01:52 INFO MemoryStore: ensureFreeSpace(3432) called with curMem=37687, maxMem=278019440
15/06/01 15:01:52 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.4 KB, free 265.1 MB)
15/06/01 15:01:52 INFO MemoryStore: ensureFreeSpace(2530) called with curMem=41119, maxMem=278019440
15/06/01 15:01:52 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.5 KB, free 265.1 MB)
15/06/01 15:01:52 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on sslab01.cs.purdue.edu:35703 (size: 2.5 KB, free: 265.1 MB)
15/06/01 15:01:52 INFO BlockManagerMaster: Updated info of block broadcast_1_piece0
15/06/01 15:01:52 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:839
15/06/01 15:01:52 INFO DAGScheduler: Submitting 4 missing tasks from Stage 0 (MapPartitionsRDD[2] at mapPartitions at SimpleApp.java:28)
15/06/01 15:01:52 INFO TaskSchedulerImpl: Adding task set 0.0 with 4 tasks
15/06/01 15:01:54 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@sslab04.cs.purdue.edu:55037/user/Executor#212129285] with ID 2
15/06/01 15:01:54 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, sslab04.cs.purdue.edu, PROCESS_LOCAL, 1358 bytes)
15/06/01 15:01:54 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, sslab04.cs.purdue.edu, PROCESS_LOCAL, 1358 bytes)
15/06/01 15:01:54 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, sslab04.cs.purdue.edu, PROCESS_LOCAL, 1358 bytes)
15/06/01 15:01:54 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, sslab04.cs.purdue.edu, PROCESS_LOCAL, 1358 bytes)
15/06/01 15:01:54 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@sslab05.cs.purdue.edu:36783/user/Executor#-1944847176] with ID 0
15/06/01 15:01:54 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@sslab02.cs.purdue.edu:37539/user/Executor#-1786204780] with ID 1
15/06/01 15:01:54 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@sslab03.cs.purdue.edu:48810/user/Executor#614047045] with ID 3
15/06/01 15:01:54 INFO BlockManagerMasterActor: Registering block manager sslab03.cs.purdue.edu:43948 with 265.1 MB RAM, BlockManagerId(3, sslab03.cs.purdue.edu, 43948)
15/06/01 15:01:54 INFO BlockManagerMasterActor: Registering block manager sslab05.cs.purdue.edu:57248 with 265.1 MB RAM, BlockManagerId(0, sslab05.cs.purdue.edu, 57248)
15/06/01 15:01:54 INFO BlockManagerMasterActor: Registering block manager sslab04.cs.purdue.edu:43152 with 265.1 MB RAM, BlockManagerId(2, sslab04.cs.purdue.edu, 43152)
15/06/01 15:01:54 INFO BlockManagerMasterActor: Registering block manager sslab02.cs.purdue.edu:55873 with 265.1 MB RAM, BlockManagerId(1, sslab02.cs.purdue.edu, 55873)
15/06/01 15:01:54 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on sslab04.cs.purdue.edu:43152 (size: 2.5 KB, free: 265.1 MB)
15/06/01 15:01:55 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on sslab04.cs.purdue.edu:43152 (size: 4.8 KB, free: 265.1 MB)
15/06/01 15:01:56 WARN TaskSetManager: Lost task 2.0 in stage 0.0 (TID 2, sslab04.cs.purdue.edu): java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.spark.serializer.SerializationDebugger$ObjectStreamClassMethods$.getObjFieldValues$extension(SerializationDebugger.scala:240)
at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:150)
at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:99)
at org.apache.spark.serializer.SerializationDebugger$.find(SerializationDebugger.scala:58)
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:39)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:80)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
at java.io.ObjectStreamClass$FieldReflector.getObjFieldValues(ObjectStreamClass.java:2050)
at java.io.ObjectStreamClass.getObjFieldValues(ObjectStreamClass.java:1252)
... 15 more
15/06/01 15:01:56 INFO TaskSetManager: Lost task 3.0 in stage 0.0 (TID 3) on executor sslab04.cs.purdue.edu: java.lang.reflect.InvocationTargetException (null) [duplicate 1]
15/06/01 15:01:56 INFO TaskSetManager: Starting task 3.1 in stage 0.0 (TID 4, sslab04.cs.purdue.edu, PROCESS_LOCAL, 1358 bytes)
15/06/01 15:01:56 INFO TaskSetManager: Starting task 2.1 in stage 0.0 (TID 5, sslab04.cs.purdue.edu, PROCESS_LOCAL, 1358 bytes)
15/06/01 15:01:56 INFO TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1) on executor sslab04.cs.purdue.edu: java.lang.reflect.InvocationTargetException (null) [duplicate 2]
15/06/01 15:01:56 INFO TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) on executor sslab04.cs.purdue.edu: java.lang.reflect.InvocationTargetException (null) [duplicate 3]
15/06/01 15:01:56 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 6, sslab05.cs.purdue.edu, PROCESS_LOCAL, 1358 bytes)
15/06/01 15:01:56 INFO TaskSetManager: Starting task 1.1 in stage 0.0 (TID 7, sslab02.cs.purdue.edu, PROCESS_LOCAL, 1358 bytes)
15/06/01 15:01:56 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on sslab02.cs.purdue.edu:55873 (size: 2.5 KB, free: 265.1 MB)
15/06/01 15:01:56 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on sslab05.cs.purdue.edu:57248 (size: 2.5 KB, free: 265.1 MB)
15/06/01 15:01:56 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on sslab02.cs.purdue.edu:55873 (size: 4.8 KB, free: 265.1 MB)
15/06/01 15:01:57 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on sslab05.cs.purdue.edu:57248 (size: 4.8 KB, free: 265.1 MB)
15/06/01 15:01:57 INFO TaskSetManager: Lost task 3.1 in stage 0.0 (TID 4) on executor sslab04.cs.purdue.edu: java.lang.reflect.InvocationTargetException (null) [duplicate 4]
15/06/01 15:01:57 INFO TaskSetManager: Starting task 3.2 in stage 0.0 (TID 8, sslab05.cs.purdue.edu, PROCESS_LOCAL, 1358 bytes)
15/06/01 15:01:57 INFO TaskSetManager: Lost task 2.1 in stage 0.0 (TID 5) on executor sslab04.cs.purdue.edu: java.lang.reflect.InvocationTargetException (null) [duplicate 5]
15/06/01 15:01:57 INFO TaskSetManager: Starting task 2.2 in stage 0.0 (TID 9, sslab03.cs.purdue.edu, PROCESS_LOCAL, 1358 bytes)
15/06/01 15:01:57 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on sslab03.cs.purdue.edu:43948 (size: 2.5 KB, free: 265.1 MB)
15/06/01 15:01:57 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on sslab03.cs.purdue.edu:43948 (size: 4.8 KB, free: 265.1 MB)
15/06/01 15:01:57 INFO TaskSetManager: Lost task 1.1 in stage 0.0 (TID 7) on executor sslab02.cs.purdue.edu: java.lang.reflect.InvocationTargetException (null) [duplicate 6]
15/06/01 15:01:57 INFO TaskSetManager: Starting task 1.2 in stage 0.0 (TID 10, sslab04.cs.purdue.edu, PROCESS_LOCAL, 1358 bytes)
15/06/01 15:01:57 INFO TaskSetManager: Lost task 3.2 in stage 0.0 (TID 8) on executor sslab05.cs.purdue.edu: java.lang.reflect.InvocationTargetException (null) [duplicate 7]
15/06/01 15:01:57 INFO TaskSetManager: Starting task 3.3 in stage 0.0 (TID 11, sslab04.cs.purdue.edu, PROCESS_LOCAL, 1358 bytes)
15/06/01 15:01:57 INFO TaskSetManager: Lost task 0.1 in stage 0.0 (TID 6) on executor sslab05.cs.purdue.edu: java.lang.reflect.InvocationTargetException (null) [duplicate 8]
15/06/01 15:01:57 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 12, sslab04.cs.purdue.edu, PROCESS_LOCAL, 1358 bytes)
15/06/01 15:01:58 INFO TaskSetManager: Lost task 1.2 in stage 0.0 (TID 10) on executor sslab04.cs.purdue.edu: java.lang.reflect.InvocationTargetException (null) [duplicate 9]
15/06/01 15:01:58 INFO TaskSetManager: Starting task 1.3 in stage 0.0 (TID 13, sslab03.cs.purdue.edu, PROCESS_LOCAL, 1358 bytes)
15/06/01 15:01:58 INFO TaskSetManager: Lost task 2.2 in stage 0.0 (TID 9) on executor sslab03.cs.purdue.edu: java.lang.reflect.InvocationTargetException (null) [duplicate 10]
15/06/01 15:01:58 INFO TaskSetManager: Starting task 2.3 in stage 0.0 (TID 14, sslab03.cs.purdue.edu, PROCESS_LOCAL, 1358 bytes)
15/06/01 15:01:58 INFO TaskSetManager: Lost task 3.3 in stage 0.0 (TID 11) on executor sslab04.cs.purdue.edu: java.lang.reflect.InvocationTargetException (null) [duplicate 11]
15/06/01 15:01:58 ERROR TaskSetManager: Task 3 in stage 0.0 failed 4 times; aborting job
15/06/01 15:01:58 INFO TaskSetManager: Lost task 0.2 in stage 0.0 (TID 12) on executor sslab04.cs.purdue.edu: java.lang.reflect.InvocationTargetException (null) [duplicate 12]
15/06/01 15:01:58 INFO TaskSchedulerImpl: Cancelling stage 0
15/06/01 15:01:58 INFO TaskSchedulerImpl: Stage 0 was cancelled
15/06/01 15:01:58 INFO DAGScheduler: Job 0 failed: reduce at SimpleApp.java:51, took 5.941191 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 0.0 failed 4 times, most recent failure: Lost task 3.3 in stage 0.0 (TID 11, sslab04.cs.purdue.edu): java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.spark.serializer.SerializationDebugger$ObjectStreamClassMethods$.getObjFieldValues$extension(SerializationDebugger.scala:240)
at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:150)
at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:99)
at org.apache.spark.serializer.SerializationDebugger$.find(SerializationDebugger.scala:58)
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:39)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:80)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
at java.io.ObjectStreamClass$FieldReflector.getObjFieldValues(ObjectStreamClass.java:2050)
at java.io.ObjectStreamClass.getObjFieldValues(ObjectStreamClass.java:1252)
... 15 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1203)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1191)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1191)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:693)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1393)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
I'm not sure what I'm doing wrong. Any help would be appreciated. Thanks!
Best answer
Your class Inc needs to be marked Serializable. The serialization debugger appears to be trying to help but fails, masking the underlying serialization error in the process.
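A minimal, Spark-free sketch of the fix (the enclosing class and helper method here are illustrative): once Inc implements java.io.Serializable, the ObjectOutputStream-based round trip that Spark's default JavaSerializer performs when shipping task results succeeds instead of throwing:

```java
import java.io.*;

// The fix: Inc implements java.io.Serializable so Java serialization
// (what Spark's JavaSerializer uses under the hood) can handle it.
class Inc implements Serializable {
    private static final long serialVersionUID = 1L;
    String line;
    Double income;
}

public class SerializableFix {
    // Round-trip an Inc through Java serialization, mimicking what happens
    // when Spark serializes a task result containing Inc objects.
    static Inc roundTrip(Inc in) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(in); // throws NotSerializableException without the interface
        }
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            return (Inc) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Inc i = new Inc();
        i.line = "a,b,c";
        i.income = 42.0;
        Inc copy = roundTrip(i);
        System.out.println(copy.income); // prints 42.0
    }
}
```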
Regarding "java - Sorting an RDD in Apache Spark using mapPartitions and reduce", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/30577149/