I can't get past a serialization problem triggered by the filtered.foreachPartition(iter => { ... }) call below. I thought foreachPartition would avoid the serialization issue, but apparently it doesn't, so how should I use redisPool here?
Edit (I've updated the code to make it clearer):
val redis_host = "localhost"
val redis_port = 6379

messages.foreachRDD(rdd => {
  rdd.foreachPartition(iter => {
    val redisPool = new Pool(new JedisPool(new JedisPoolConfig(), redis_host, redis_port, 2000))
    iter.foreach({ msg =>
      println(msg.mkString(","))
    })
  })
})
I assume the variables redis_host and redis_port are not serializable, but how do I serialize them properly so that the code can run on a cluster and not only locally?
The code shown above throws the following error:
org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2055)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:919)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:918)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:918)
    at org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer$$anonfun$run$1.apply(KafkaDecisionsConsumer.scala:135)
    at org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer$$anonfun$run$1.apply(KafkaDecisionsConsumer.scala:134)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
    at scala.util.Try$.apply(Try.scala:161)
    at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.NotSerializableException: org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer
Serialization stack:
    - object not serializable (class: org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer, value: org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer@3fba5c74)
    - field (class: org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer$$anonfun$run$1, name: $outer, type: class org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer)
    - object (class org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer$$anonfun$run$1, )
    - field (class: org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer$$anonfun$run$1$$anonfun$apply$1, name: $outer, type: class org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer$$anonfun$run$1)
    - object (class org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer$$anonfun$run$1$$anonfun$apply$1, )
    at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:101)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:301)
    ... 30 more
Exception in thread "main" org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2055)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:919)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:918)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:918)
    at org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer$$anonfun$run$1.apply(KafkaDecisionsConsumer.scala:135)
    at org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer$$anonfun$run$1.apply(KafkaDecisionsConsumer.scala:134)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
    at scala.util.Try$.apply(Try.scala:161)
    at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.NotSerializableException: org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer
Serialization stack:
    - object not serializable (class: org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer, value: org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer@3fba5c74)
    - field (class: org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer$$anonfun$run$1, name: $outer, type: class org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer)
    - object (class org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer$$anonfun$run$1, )
    - field (class: org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer$$anonfun$run$1$$anonfun$apply$1, name: $outer, type: class org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer$$anonfun$run$1)
    - object (class org.test.manager.service.consumer.kafka.KafkaDecisionsConsumer$$anonfun$run$1$$anonfun$apply$1, )
    at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:101)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:301)
Best Answer
The solution is to lazily initialize the pool inside the anonymous function. In Java you can do it like this:
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

messages.foreachRDD(new RedisFunction(redis_host, redis_port));
messages.count();

// "Pool" is the same wrapper class used in the question's code.
class RedisFunction implements Function<JavaRDD<String>, Void> {  // String element type assumed
    // transient: the pool itself is never serialized with this function;
    // the null check in call() rebuilds it wherever it is missing.
    private transient Pool pool = null;
    private final String redis_host;
    private final int redis_port;

    RedisFunction(String redis_host, int redis_port) {
        this.redis_host = redis_host;
        this.redis_port = redis_port;
        initPool();
    }

    private void initPool() {
        this.pool = new Pool(new JedisPool(new JedisPoolConfig(), redis_host, redis_port, 2000));
    }

    @Override
    public Void call(JavaRDD<String> rdd) {
        if (this.pool == null) {
            initPool();  // lazy initialization: rebuild the pool if it was lost during serialization
        }
        rdd = rdd.map(....); /* your rdd transformations go here */
        rdd.count();         // spark action
        return null;
    }
}
The Java example above should help you resolve the serialization problem.
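Since the question's own code is Scala, here is a minimal sketch of the same idea applied to the original foreachRDD/foreachPartition code, assuming the same Pool and JedisPool wrapper used in the question: copy the connection settings into local vals and build the pool inside foreachPartition, so the closure captures only plain values rather than the enclosing class (the $outer reference to KafkaDecisionsConsumer that the stack trace complains about).

// Sketch only, reusing Pool/JedisPool exactly as in the question's code.
// host and port are plain local values, so the closures below no longer
// capture the enclosing (non-serializable) class.
val host = redis_host
val port = redis_port

messages.foreachRDD { rdd =>
  rdd.foreachPartition { iter =>
    // Built once per partition on the executor; never serialized or shipped.
    val redisPool = new Pool(new JedisPool(new JedisPoolConfig(), host, port, 2000))
    iter.foreach(msg => println(msg.mkString(",")))
  }
}

Building one pool per partition keeps the connection count low; if even that is too expensive, the usual next step is a lazy val held in a singleton object so that each executor JVM reuses a single pool.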
Regarding "scala - Serialization issue when running a Spark Streaming job", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/40685123/