I want to write the transformed stream to an Elasticsearch index, like this:
transformed.foreachRDD(rdd => {
  if (!rdd.isEmpty()) {
    val messages = rdd.map(prepare)
    messages.saveAsNewAPIHadoopFile("-", classOf[NullWritable], classOf[MapWritable], classOf[EsOutputFormat], ec)
  }
})
The line

val messages = rdd.map(prepare)

throws an error (see below). I am stuck trying to work around it in other ways (for example, adding @transient next to val conf), but nothing seems to work.
6/06/28 19:23:00 ERROR JobScheduler: Error running job streaming job 1467134580000 ms.0 org.apache.spark.SparkException: Task not serializable at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304) at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294) at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122) at org.apache.spark.SparkContext.clean(SparkContext.scala:2055) at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:324) at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:323) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111) at org.apache.spark.rdd.RDD.withScope(RDD.scala:316) at org.apache.spark.rdd.RDD.map(RDD.scala:323) at de.kp.spark.elastic.stream.EsStream$$anonfun$run$1.apply(EsStream.scala:77) at de.kp.spark.elastic.stream.EsStream$$anonfun$run$1.apply(EsStream.scala:75) at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661) at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at scala.util.Try$.apply(Try.scala:161) at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.io.NotSerializableException: org.apache.hadoop.conf.Configuration Serialization stack: - object not serializable (class: org.apache.hadoop.conf.Configuration, value: Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml) - field (class: de.kp.spark.elastic.stream.EsStream, name: de$kp$spark$elastic$stream$EsStream$$conf, type: class org.apache.hadoop.conf.Configuration) - object (class de.kp.spark.elastic.stream.EsStream, de.kp.spark.elastic.stream.EsStream@6b156e9a) - field (class: de.kp.spark.elastic.stream.EsStream$$anonfun$run$1, name: $outer, type: class de.kp.spark.elastic.stream.EsStream) - object (class 
de.kp.spark.elastic.stream.EsStream$$anonfun$run$1, ) - field (class: de.kp.spark.elastic.stream.EsStream$$anonfun$run$1$$anonfun$2, name: $outer, type: class de.kp.spark.elastic.stream.EsStream$$anonfun$run$1) - object (class de.kp.spark.elastic.stream.EsStream$$anonfun$run$1$$anonfun$2, ) at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40) at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47) at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:101) at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:301) ... 30 more Exception in thread "main" org.apache.spark.SparkException: Task not serializable at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304) at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294) at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122) at org.apache.spark.SparkContext.clean(SparkContext.scala:2055) at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:324) at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:323) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111) at org.apache.spark.rdd.RDD.withScope(RDD.scala:316) at org.apache.spark.rdd.RDD.map(RDD.scala:323) at de.kp.spark.elastic.stream.EsStream$$anonfun$run$1.apply(EsStream.scala:77) at de.kp.spark.elastic.stream.EsStream$$anonfun$run$1.apply(EsStream.scala:75) at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661) at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) at scala.util.Try$.apply(Try.scala:161) at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.io.NotSerializableException: org.apache.hadoop.conf.Configuration Serialization stack: - object not serializable 
(class: org.apache.hadoop.conf.Configuration, value: Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml) - field (class: de.kp.spark.elastic.stream.EsStream, name: de$kp$spark$elastic$stream$EsStream$$conf, type: class org.apache.hadoop.conf.Configuration) - object (class de.kp.spark.elastic.stream.EsStream, de.kp.spark.elastic.stream.EsStream@6b156e9a) - field (class: de.kp.spark.elastic.stream.EsStream$$anonfun$run$1, name: $outer, type: class de.kp.spark.elastic.stream.EsStream) - object (class de.kp.spark.elastic.stream.EsStream$$anonfun$run$1, ) - field (class: de.kp.spark.elastic.stream.EsStream$$anonfun$run$1$$anonfun$2, name: $outer, type: class de.kp.spark.elastic.stream.EsStream$$anonfun$run$1) - object (class de.kp.spark.elastic.stream.EsStream$$anonfun$run$1$$anonfun$2, ) at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40) at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47) at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:101) at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:301) ... 30 more
The non-serializable object flagged in the trace is the Hadoop configuration held by EsStream (class: org.apache.hadoop.conf.Configuration, value: Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml). The complete class looks like this:
class EsStream(name:String,conf:HConf) extends SparkBase with Serializable {

  /* Elasticsearch configuration */
  val ec = getEsConf(conf)

  /* Kafka configuration */
  val (kc,topics) = getKafkaConf(conf)

  def run() {

    val ssc = createSSCLocal(name,conf)

    /*
     * The KafkaInputDStream returns a Tuple where only the second component
     * holds the respective message; we therefore reduce to a DStream[String]
     */
    val stream = KafkaUtils.createStream[String,String,StringDecoder,StringDecoder](ssc,kc,topics,StorageLevel.MEMORY_AND_DISK).map(_._2)

    /*
     * Inline transformation of the incoming stream by any function that maps
     * a DStream[String] onto a DStream[String]
     */
    val transformed = transform(stream)

    /*
     * Write transformed stream to Elasticsearch index
     */
    transformed.foreachRDD(rdd => {
      if (!rdd.isEmpty()) {
        val messages = rdd.map(prepare)
        messages.saveAsNewAPIHadoopFile("-", classOf[NullWritable], classOf[MapWritable], classOf[EsOutputFormat], ec)
      }
    })

    ssc.start()
    ssc.awaitTermination()

  }

  def transform(stream:DStream[String]) = stream

  private def getEsConf(config:HConf):HConf = {

    val _conf = new HConf()

    _conf.set("es.nodes", conf.get("es.nodes"))
    _conf.set("es.port", conf.get("es.port"))
    _conf.set("es.resource", conf.get("es.resource"))

    _conf

  }

  private def getKafkaConf(config:HConf):(Map[String,String],Map[String,Int]) = {

    val cfg = Map(
      "group.id" -> conf.get("kafka.group"),
      "zookeeper.connect" -> conf.get("kafka.zklist"),
      "zookeeper.connection.timeout.ms" -> conf.get("kafka.timeout")
    )

    val topics = conf.get("kafka.topics").split(",").map((_,conf.get("kafka.threads").toInt)).toMap

    (cfg,topics)

  }

  private def prepare(message:String):(Object,Object) = {

    val m = JSON.parseFull(message) match {
      case Some(map) => map.asInstanceOf[Map[String,String]]
      case None => Map.empty[String,String]
    }

    val kw = NullWritable.get
    val vw = new MapWritable

    for ((k, v) <- m) vw.put(new Text(k), new Text(v))

    (kw, vw)

  }

}
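The reason the rdd.map(prepare) line drags the Hadoop configuration into the task at all is that prepare is an instance method: rdd.map(prepare) expands to a closure over this, so Spark has to serialize the whole EsStream instance, and that instance holds the non-serializable conf field (the de$kp$spark$elastic$stream$EsStream$$conf entry in the serialization stack above). A minimal sketch of this capture pattern, with purely illustrative names that are not from the original project, looks like this:

import org.apache.hadoop.conf.{Configuration => HConf}
import org.apache.spark.rdd.RDD

// Illustrative only: a class shaped like EsStream above.
class CaptureExample(val conf: HConf) extends Serializable {

  // An instance method: using it inside an RDD operation captures `this`.
  def prepare(s: String): String = s.trim

  def run(rdd: RDD[String]): Unit = {
    // rdd.map(prepare) is shorthand for rdd.map(x => this.prepare(x)).
    // Serializing that closure means serializing `this`, and with it the
    // Hadoop Configuration field -> java.io.NotSerializableException.
    rdd.map(prepare).count()
  }
}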
Best Answer
Remove conf:HConf from the class constructor of EsStream and write it as class EsStream(name:String).

Next, create a method with the signature def init(conf:HConf). In this method, read the required configuration and update ec and (kc,topics) from it.

After that, call the run method.
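To make the suggestion concrete, here is one possible shape for the reworked class. It is only a sketch, not verified against the original project: the helper names (esSettings, driverConf) and the choice to rebuild the Elasticsearch output configuration locally inside foreachRDD are assumptions, and it assumes createSSCLocal and the Hadoop Configuration are only ever needed on the driver. The important point is that, after init, the EsStream instance only holds plain strings and maps, so the closure created by rdd.map(prepare) serializes cleanly.

// Imports are the same as in the original class (KafkaUtils, StorageLevel,
// StringDecoder, NullWritable, MapWritable, Text, EsOutputFormat, JSON,
// DStream), with HConf aliasing org.apache.hadoop.conf.Configuration.
class EsStream(name: String) extends SparkBase with Serializable {

  /* Plain, serializable copies of the settings, filled in by init() */
  private var esSettings = Map.empty[String, String]
  private var kc         = Map.empty[String, String]
  private var topics     = Map.empty[String, Int]

  /* Driver-only handle on the full configuration; @transient keeps it out of
   * serialized closures (it is never needed on the executors). */
  @transient private var driverConf: HConf = _

  def init(conf: HConf): Unit = {
    driverConf = conf
    esSettings = Map(
      "es.nodes"    -> conf.get("es.nodes"),
      "es.port"     -> conf.get("es.port"),
      "es.resource" -> conf.get("es.resource"))
    kc = Map(
      "group.id"                        -> conf.get("kafka.group"),
      "zookeeper.connect"               -> conf.get("kafka.zklist"),
      "zookeeper.connection.timeout.ms" -> conf.get("kafka.timeout"))
    topics = conf.get("kafka.topics").split(",")
      .map((_, conf.get("kafka.threads").toInt)).toMap
  }

  def run() {

    val ssc = createSSCLocal(name, driverConf)

    val stream = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
      ssc, kc, topics, StorageLevel.MEMORY_AND_DISK).map(_._2)

    val transformed = transform(stream)

    transformed.foreachRDD(rdd => {
      if (!rdd.isEmpty()) {
        /* Build the Elasticsearch output configuration locally, on the driver,
         * instead of keeping it as a field of EsStream. */
        val ec = new HConf()
        esSettings.foreach { case (k, v) => ec.set(k, v) }

        val messages = rdd.map(prepare)
        messages.saveAsNewAPIHadoopFile("-", classOf[NullWritable],
          classOf[MapWritable], classOf[EsOutputFormat], ec)
      }
    })

    ssc.start()
    ssc.awaitTermination()
  }

  /* transform() and prepare() stay exactly as in the original class. */
  def transform(stream: DStream[String]) = stream

  private def prepare(message: String): (Object, Object) = {
    val m = JSON.parseFull(message) match {
      case Some(map) => map.asInstanceOf[Map[String, String]]
      case None      => Map.empty[String, String]
    }
    val vw = new MapWritable
    for ((k, v) <- m) vw.put(new Text(k), new Text(v))
    (NullWritable.get, vw)
  }

}

Usage then follows the order described in the answer, for example:

val es = new EsStream("EsStream")
es.init(hconf)   // hconf: whatever HConf the caller already builds today
es.run()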
Regarding "scala - org.apache.spark.SparkException: Task not serializable (caused by org.apache.hadoop.conf.Configuration)", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/38082657/