I am using Spark Structured Streaming to process streams of data into and out of Apache Kafka with the Scala code below.
I can read the stream from the Kafka source successfully, but when I try to write the stream to a Kafka sink, I get the following error:
ERROR MicroBatchExecution:91 - Query [id = 234750ca-d416-4182-b3cc-4e2c1f922724, runId = 4c4b0931-9876-456f-8d56-752623803332] terminated with error java.lang.IllegalArgumentException: Expected e.g. {"topicA":{"0":23,"1":-1},"topicB":{"0":-2}}, got 1 {"path":"file:///path/to/file.csv","timestamp":1536564701000,"batchId":0}
at org.apache.spark.sql.kafka010.JsonUtils$.partitionOffsets(JsonUtils.scala:74)
at org.apache.spark.sql.kafka010.KafkaSourceOffset$.apply(KafkaSourceOffset.scala:64)
at org.apache.spark.sql.kafka010.KafkaSource$$anon$1.deserialize(KafkaSource.scala:124)
at org.apache.spark.sql.kafka010.KafkaSource$$anon$1.deserialize(KafkaSource.scala:99)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.get(HDFSMetadataLog.scala:198)
at org.apache.spark.sql.kafka010.KafkaSource.initialPartitionOffsets$lzycompute(KafkaSource.scala:129)
at org.apache.spark.sql.kafka010.KafkaSource.initialPartitionOffsets(KafkaSource.scala:97)
at org.apache.spark.sql.kafka010.KafkaSource.getBatch(KafkaSource.scala:207)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$populateStartOffsets$2.apply(MicroBatchExecution.scala:216)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$populateStartOffsets$2.apply(MicroBatchExecution.scala:213)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at org.apache.spark.sql.execution.streaming.StreamProgress.foreach(StreamProgress.scala:25)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.org$apache$spark$sql$execution$streaming$MicroBatchExecution$$populateStartOffsets(MicroBatchExecution.scala:213)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(MicroBatchExecution.scala:124)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:121)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:121)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:271)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1.apply$mcZ$sp(MicroBatchExecution.scala:121)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:117)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:279)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
Exception in thread "main" org.apache.spark.sql.streaming.StreamingQueryException: Expected e.g. {"topicA":{"0":23,"1":-1},"topicB":{"0":-2}}, got 1
{"path":""file:///path/to/file.csv"","timestamp":1536564701000,"batchId":0}
=== Streaming Query ===
Identifier: [id = 234750ca-d416-4182-b3cc-4e2c1f922724, runId = 851d0cd7-aabe-45c8-8a14-94227f90e174]
Current Committed Offsets: {KafkaSource[Subscribe[t]]: {"logOffset":2}}
Current Available Offsets: {KafkaSource[Subscribe[t]]: {"logOffset":3}}
The application code:

import java.sql.Timestamp

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object spark_kafka_attempt2 {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder
      .appName("spark_kafka_test")
      .getOrCreate()

    import spark.implicits._

    // Read the input stream from Kafka
    val input_lines = spark
      .readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092,localhost:9093,localhost:9094")
      .option("subscribe", "input_stream")
      .option("startingOffsets", "earliest")
      .load()

    // Schema of the JSON payload carried in the Kafka message value
    val inputStreamSchema = new StructType()
      .add("input_id", "long")
      .add("timestamp", "timestamp")
      .add("type", "string")

    // Parse the JSON value and keep the Kafka arrival timestamp
    val lines = input_lines
      .selectExpr("CAST(value AS STRING)", "CAST(timestamp AS TIMESTAMP)").as[(String, Timestamp)]
      .select(from_json($"value", inputStreamSchema).as("data"), $"timestamp".as("arrival_timestamp"))
      .select("data.*", "arrival_timestamp")

    // Write the processed stream back out to Kafka
    val query = lines
      .selectExpr("CAST(input_id AS STRING) AS key", "to_json(struct(*)) AS value")
      .writeStream
      .format("kafka")
      .outputMode("update")
      .option("kafka.bootstrap.servers", "localhost:9092,localhost:9093,localhost:9094")
      .option("topic", "processed_stream")
      .option("checkpointLocation", "/home/local/directory")
      .start()

    query.awaitTermination()
  }
}
name := "spark_kafka_test"
version := "0.1"
scalaVersion := "2.11.8"
val sparkVersion = "2.3.1"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-sql" % sparkVersion,
"org.apache.spark" %% "spark-sql-kafka-0-10" % sparkVersion
)
The job is submitted with:

./spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.1 --class spark_kafka_test --master local[4] /home/salman/Development/spark_kafka_attempt2/target/scala-2.11/spark_kafka_test_2.11-0.1.jar
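A note on the error itself: the offsets the Kafka source fails to parse ({"path":"file:///path/to/file.csv",...} and {"logOffset":...}) are in the format written by Spark's file stream source, not the Kafka source, which suggests the checkpoint directory at /home/local/directory still holds state from an earlier query that read from files. One minimal sketch to rule this out, reusing the same lines Dataset from the code above and assuming a fresh, previously unused checkpoint path (the directory name below is hypothetical):

val query = lines
  .selectExpr("CAST(input_id AS STRING) AS key", "to_json(struct(*)) AS value")
  .writeStream
  .format("kafka")
  .outputMode("update")
  .option("kafka.bootstrap.servers", "localhost:9092,localhost:9093,localhost:9094")
  .option("topic", "processed_stream")
  // a fresh, empty checkpoint directory (hypothetical path) so stale file-source offsets are not read back
  .option("checkpointLocation", "/home/local/directory_fresh")
  .start()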
Best answer
After a good deal of investigation and searching, I found the following approach for writing the processed stream to a Kafka sink.
Create the following KafkaSink class:
import java.util.Properties

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.spark.sql.ForeachWriter

// Writes each (key, value) row to Kafka with a plain Kafka producer
class KafkaSink(topic: String, servers: String) extends ForeachWriter[(String, String)] {

  val kafkaProperties = new Properties()
  kafkaProperties.put("bootstrap.servers", servers)
  kafkaProperties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  kafkaProperties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

  var producer: KafkaProducer[String, String] = _

  // Called once per partition and epoch; create the producer here
  def open(partitionId: Long, version: Long): Boolean = {
    producer = new KafkaProducer[String, String](kafkaProperties)
    true
  }

  // Each row is sent as "key:value" in the record value; no record key is set
  def process(value: (String, String)): Unit = {
    producer.send(new ProducerRecord(topic, value._1 + ":" + value._2))
  }

  def close(errorOrNull: Throwable): Unit = {
    producer.close()
  }
}
Then, in the streaming query, use this writer via foreach instead of the built-in Kafka sink:

val outputDf = lines
  .selectExpr("CAST(input_id AS STRING) AS key", "to_json(struct(*)) AS value")
  .as[(String, String)]

val topic = "processed_stream"
val brokers = "localhost:9092,localhost:9093,localhost:9094"
val writer = new KafkaSink(topic, brokers)

val query = outputDf
  .writeStream
  .foreach(writer)
  .outputMode("update")
  .start()
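To confirm that records are reaching the sink topic, a batch read of the topic can be run once the query has processed a few batches. This is a minimal sketch reusing the brokers and topic from above; note that the record key comes back null because KafkaSink above packs both fields into the record value:

val check = spark.read
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092,localhost:9093,localhost:9094")
  .option("subscribe", "processed_stream")
  .option("startingOffsets", "earliest")
  .load()
  // key is null here because KafkaSink writes "key:value" into the record value only
  .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

check.show(truncate = false)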
Regarding "scala - Spark Structured Streaming + Kafka Integration: MicroBatchExecution PartitionOffsets Error", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/53040640/