I am trying to follow this guide: https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html but I don't understand why, most of the time, no data is written to the console, and why the stream execution thread is spamming the log.
Do I need to configure something? This is my code:
SparkSession spark = SparkSession
        .builder()
        .appName("Testing")
        .config("spark.master", "local")
        .getOrCreate();

StructType recordSchema = new StructType()
        .add("description", "string")
        .add("location", "string")
        .add("id", "string")
        .add("title", "string")
        .add("company", "string")
        .add("place", "string")
        .add("date", "string")
        .add("senorityLevel", "string")
        .add("function", "string")
        .add("employmentType", "string")
        .add("industries", "string");

Dataset<Row> df = spark
        .readStream()
        .format("kafka")
        .option("kafka.bootstrap.servers", "127.0.0.1:9092")
        .option("subscribe", "linkedin-producer")
        .option("startingOffsets", "earliest")
        .option("kafka.group.id", "test")
        .load();

StreamingQuery query = df
        .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
        .select(from_json(col("value").cast("string"), recordSchema).as("data"))
        .select("data.*")
        .writeStream()
        .outputMode(OutputMode.Append())
        .format("console")
        .start();

try {
    query.awaitTermination(10000);
} catch (StreamingQueryException e) {
    e.printStackTrace();
}
Sometimes I do get the DataFrame printed in the console, but the console is flooded with this:
[Executor task launch worker for task 1] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.7.0
[Executor task launch worker for task 1] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 448719dc99a19793
[Executor task launch worker for task 1] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1613492229792
[Executor task launch worker for task 1] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-test-3, groupId=test] Subscribed to partition(s): linkedin-producer-0
[Executor task launch worker for task 1] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-test-3, groupId=test] Seeking to offset 0 for partition linkedin-producer-0
[Executor task launch worker for task 1] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-test-3, groupId=test] Cluster ID: N88wfukWTIS-ycMeSGhhng
[task-result-getter-0] INFO org.apache.spark.network.client.TransportClientFactory - Successfully created connection to /10.0.0.9:44237 after 76 ms (0 ms spent in bootstraps)
[Executor task launch worker for task 1] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-test-3, groupId=test] Seeking to offset 500 for partition linkedin-producer-0
[task-result-getter-0] INFO org.apache.spark.scheduler.TaskSetManager - Finished task 0.0 in stage 0.0 (TID 0) in 1069 ms on 10.0.0.9 (executor driver) (1/3)
[dispatcher-BlockManagerMaster] INFO org.apache.spark.storage.BlockManagerInfo - Removed taskresult_0 on 10.0.0.9:44237 in memory (size: 2.9 MiB, free: 848.4 MiB)
[Executor task launch worker for task 1] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-test-3, groupId=test] Seeking to offset 909 for partition linkedin-producer-0
[Executor task launch worker for task 1] INFO org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask - Commit authorized for partition 1 (task 1, attempt 0, stage 0.0)
[Executor task launch worker for task 1] INFO org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask - Committed partition 1 (task 1, attempt 0, stage 0.0)
[Executor task launch worker for task 1] INFO org.apache.spark.storage.memory.MemoryStore - Block taskresult_1 stored as bytes in memory (estimated size 2.9 MiB, free 845.5 MiB)
[dispatcher-BlockManagerMaster] INFO org.apache.spark.storage.BlockManagerInfo - Added taskresult_1 in memory on 10.0.0.9:44237 (size: 2.9 MiB, free: 845.5 MiB)
[Executor task launch worker for task 1] INFO org.apache.spark.executor.Executor - Finished task 1.0 in stage 0.0 (TID 1). 3003495 bytes result sent via BlockManager)
[dispatcher-event-loop-1] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 2.0 in stage 0.0 (TID 2, 10.0.0.9, executor driver, partition 2, PROCESS_LOCAL, 8103 bytes)
[Executor task launch worker for task 2] INFO org.apache.spark.executor.Executor - Running task 2.0 in stage 0.0 (TID 2)
[task-result-getter-1] INFO org.apache.spark.scheduler.TaskSetManager - Finished task 1.0 in stage 0.0 (TID 1) in 304 ms on 10.0.0.9 (executor driver) (2/3)
[dispatcher-BlockManagerMaster] INFO org.apache.spark.storage.BlockManagerInfo - Removed taskresult_1 on 10.0.0.9:44237 in memory (size: 2.9 MiB, free: 848.4 MiB)
[Executor task launch worker for task 2] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = none
bootstrap.servers = [127.0.0.1:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = consumer-test-4
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = test
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 127000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
[Executor task launch worker for task 2] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.7.0
[Executor task launch worker for task 2] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 448719dc99a19793
[Executor task launch worker for task 2] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1613492230087
[Executor task launch worker for task 2] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-test-4, groupId=test] Subscribed to partition(s): linkedin-producer-2
[Executor task launch worker for task 2] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-test-4, groupId=test] Seeking to offset 0 for partition linkedin-producer-2
[Executor task launch worker for task 2] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-test-4, groupId=test] Cluster ID: N88wfukWTIS-ycMeSGhhng
[Executor task launch worker for task 2] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-test-4, groupId=test] Seeking to offset 500 for partition linkedin-producer-2
[Executor task launch worker for task 2] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-test-4, groupId=test] Seeking to offset 905 for partition linkedin-producer-2
[Executor task launch worker for task 2] INFO org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask - Commit authorized for partition 2 (task 2, attempt 0, stage 0.0)
[Executor task launch worker for task 2] INFO org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask - Committed partition 2 (task 2, attempt 0, stage 0.0)
[Executor task launch worker for task 2] INFO org.apache.spark.storage.memory.MemoryStore - Block taskresult_2 stored as bytes in memory (estimated size 2.9 MiB, free 845.5 MiB)
[dispatcher-BlockManagerMaster] INFO org.apache.spark.storage.BlockManagerInfo - Added taskresult_2 in memory on 10.0.0.9:44237 (size: 2.9 MiB, free: 845.5 MiB)
[Executor task launch worker for task 2] INFO org.apache.spark.executor.Executor - Finished task 2.0 in stage 0.0 (TID 2). 3001144 bytes result sent via BlockManager)
[task-result-getter-2] INFO org.apache.spark.scheduler.TaskSetManager - Finished task 2.0 in stage 0.0 (TID 2) in 240 ms on 10.0.0.9 (executor driver) (3/3)
[dispatcher-BlockManagerMaster] INFO org.apache.spark.storage.BlockManagerInfo - Removed taskresult_2 on 10.0.0.9:44237 in memory (size: 2.9 MiB, free: 848.4 MiB)
[task-result-getter-2] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Removed TaskSet 0.0, whose tasks have all completed, from pool
[dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - ResultStage 0 (start at Spark.java:73) finished in 1.730 s
[dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
[dag-scheduler-event-loop] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Killing all running tasks in stage 0: Stage finished
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.spark.scheduler.DAGScheduler - Job 0 finished: start at Spark.java:73, took 1.768779 s
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec - Data source write support org.apache.spark.sql.execution.streaming.sources.MicroBatchWrite@52eea1c3 is committing.
-------------------------------------------
Batch: 0
-------------------------------------------
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator - Code generated in 29.841333 ms
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator - Code generated in 30.563541 ms
+--------------------+--------+----------+--------------------+--------------------+--------------------+----------+----------------+--------------------+--------------+--------------------+
| description|location| id| title| company| place| date| senorityLevel| function|employmentType| industries|
+--------------------+--------+----------+--------------------+--------------------+--------------------+----------+----------------+--------------------+--------------+--------------------+
|Job Summary We ar...| Israel|2406654159| Retail Intern|Disney Media & En...|Tel Aviv, Tel Avi...|2021-02-11|Mid-Senior level|General Business,...| Full-time|Marketing and Adv...|
|We're looking for...| Israel|2404180635| Personal Assistant| Lemonade|Tel Aviv, Tel Avi...|2021-01-07| Entry level| Administrative| Full-time|Marketing and Adv...|
|Job Summary We ar...| Israel|2398561147|Retail intern -12...|The Walt Disney C...| Tel Aviv, Israel|2021-02-10| Internship| Marketing| Full-time| Entertainment|
|We're looking for...| Israel|2404180635| Personal Assistant| Lemonade|Tel Aviv, Tel Avi...|2021-01-07| Entry level| Administrative| Full-time|Marketing and Adv...|
|We're looking for...| Israel|2404180635| Personal Assistant| Lemonade|Tel Aviv, Tel Avi...|2021-01-07| Entry level| Administrative| Full-time|Marketing and Adv...|
|Job Summary We ar...| Israel|2406654159| Retail Intern|Disney Media & En...|Tel Aviv, Tel Avi...|2021-02-11|Mid-Senior level|General Business,...| Full-time|Marketing and Adv...|
|Job Summary We ar...| Israel|2398561147|Retail intern -12...|The Walt Disney C...| Tel Aviv, Israel|2021-02-10| Internship| Marketing| Full-time| Entertainment|
|At CrowdStrike we...| Israel|2406653801| HR Generalist| CrowdStrike|Ramat Gan, Tel Av...|2021-02-11| Associate| Human Resources| Full-time|Information Techn...|
|Job Description W...| Israel|2406699205|HR Administrator ...| Akamai Technologies|Tel Aviv, Tel Avi...|2021-02-11| Not Applicable| Human Resources| Full-time|Computer Networki...|
|JOB PURPOSE To as...| Israel|2403563715|Research, Campaig...|Amnesty Internati...|Jerusalem Municip...|2021-02-09| Entry level| Research| Contract|Nonprofit Organiz...|
|Job Description A...| Israel|2383126490|Receptionist – Pa...| Ceragon Networks|Tel Aviv, Tel Avi...|2021-02-01| Not Applicable| Administrative| Full-time|Computer Networki...|
|Fiverr is looking...| Israel|2419715658| Data Analyst| About Fiverr| Tel Aviv, Israel|2021-02-11|Mid-Senior level|Information Techn...| Full-time| Internet|
|חברת AlfaCloud - ...| Israel|2400094107| Project Manager|AlfaCloud - ERP S...| Tel Aviv, Israel|2021-02-11| Entry level|Project Managemen...| Full-time| Computer Software|
|טדי הפקות מחפשת א...| Israel|2396568054| Booking Agent| Tedy Productions| Tel Aviv, Israel|2021-02-09| Entry level|Design, Art/Creat...| Full-time| |
|The Norman Tel Av...| Israel|2418149015| Front Desk Staff| The Norman Tel Aviv| Tel Aviv, Israel|2021-02-10| Entry level| Administrative| Full-time| Hospitality|
|Are you a stellar...| Israel|2405797088|Regional Operatio...| Wolt|Tel Aviv, Tel Avi...|2021-02-11| Director| Management| Full-time|Marketing and Adv...|
|About CXBuzz Inte...| Israel|2400078284| Journalism Intern| CXBuzz| Tel Aviv, Israel|2021-02-11| Internship| Education, Training| Internship| Publishing|
|Job Summary We ar...| Israel|2406654159| Retail Intern|Disney Media & En...|Tel Aviv, Tel Avi...|2021-02-11|Mid-Senior level|General Business,...| Full-time|Marketing and Adv...|
|Job Summary We ar...| Israel|2398561147|Retail intern -12...|The Walt Disney C...| Tel Aviv, Israel|2021-02-10| Internship| Marketing| Full-time| Entertainment|
|At CrowdStrike we...| Israel|2406653801| HR Generalist| CrowdStrike|Ramat Gan, Tel Av...|2021-02-11| Associate| Human Resources| Full-time|Information Techn...|
+--------------------+--------+----------+--------------------+--------------------+--------------------+----------+----------------+--------------------+--------------+--------------------+
only showing top 20 rows
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec - Data source write support org.apache.spark.sql.execution.streaming.sources.MicroBatchWrite@52eea1c3 committed.
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.spark.sql.execution.streaming.CheckpointFileManager - Writing atomically to file:/tmp/temporary-c9ccc957-c729-4f8f-8635-1a029de31511/commits/0 using temp file file:/tmp/temporary-c9ccc957-c729-4f8f-8635-1a029de31511/commits/.0.0cdf78cd-795c-4c3c-94d1-91341e38187f.tmp
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.spark.sql.execution.streaming.CheckpointFileManager - Renamed temp file file:/tmp/temporary-c9ccc957-c729-4f8f-8635-1a029de31511/commits/.0.0cdf78cd-795c-4c3c-94d1-91341e38187f.tmp to file:/tmp/temporary-c9ccc957-c729-4f8f-8635-1a029de31511/commits/0
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.spark.sql.execution.streaming.MicroBatchExecution - Streaming query made progress: {
"id" : "9d193cbf-379e-495e-87e3-18f9f09145ea",
"runId" : "2e9f6d84-23af-4b23-89cd-73ecef66d290",
"name" : null,
"timestamp" : "2021-02-16T16:17:06.949Z",
"batchId" : 0,
"numInputRows" : 3813,
"processedRowsPerSecond" : 1035.5784899511136,
"durationMs" : {
"addBatch" : 2786,
"getBatch" : 22,
"latestOffset" : 446,
"queryPlanning" : 363,
"triggerExecution" : 3681,
"walCommit" : 23
},
"stateOperators" : [ ],
"sources" : [ {
"description" : "KafkaV2[Subscribe[linkedin-producer]]",
"startOffset" : null,
"endOffset" : {
"linkedin-producer" : {
"2" : 1269,
"1" : 1272,
"0" : 1272
}
},
"numInputRows" : 3813,
"processedRowsPerSecond" : 1035.5784899511136
} ],
"sink" : {
"description" : "org.apache.spark.sql.execution.streaming.ConsoleTable$@793ec5d7",
"numOutputRows" : 3813
}
}
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-test-1, groupId=test] Seeking to LATEST offset of partition linkedin-producer-0
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-test-1, groupId=test] Seeking to LATEST offset of partition linkedin-producer-2
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-test-1, groupId=test] Seeking to LATEST offset of partition linkedin-producer-1
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-test-1, groupId=test] Resetting offset for partition linkedin-producer-0 to position FetchPosition{offset=1272, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[omri-Lenovo-YOGA-920-13IKB:9092 (id: 0 rack: null)], epoch=0}}.
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-test-1, groupId=test] Resetting offset for partition linkedin-producer-2 to position FetchPosition{offset=1269, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[omri-Lenovo-YOGA-920-13IKB:9092 (id: 0 rack: null)], epoch=0}}.
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-test-1, groupId=test] Resetting offset for partition linkedin-producer-1 to position FetchPosition{offset=1272, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[omri-Lenovo-YOGA-920-13IKB:9092 (id: 0 rack: null)], epoch=0}}.
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.spark.sql.execution.streaming.MicroBatchExecution - Streaming query made progress: {
"id" : "9d193cbf-379e-495e-87e3-18f9f09145ea",
"runId" : "2e9f6d84-23af-4b23-89cd-73ecef66d290",
"name" : null,
"timestamp" : "2021-02-16T16:17:10.664Z",
"batchId" : 1,
"numInputRows" : 0,
"inputRowsPerSecond" : 0.0,
"processedRowsPerSecond" : 0.0,
"durationMs" : {
"latestOffset" : 3,
"triggerExecution" : 4
},
"stateOperators" : [ ],
"sources" : [ {
"description" : "KafkaV2[Subscribe[linkedin-producer]]",
"startOffset" : {
"linkedin-producer" : {
"2" : 1269,
"1" : 1272,
"0" : 1272
}
},
"endOffset" : {
"linkedin-producer" : {
"2" : 1269,
"1" : 1272,
"0" : 1272
}
},
"numInputRows" : 0,
"inputRowsPerSecond" : 0.0,
"processedRowsPerSecond" : 0.0
} ],
"sink" : {
"description" : "org.apache.spark.sql.execution.streaming.ConsoleTable$@793ec5d7",
"numOutputRows" : 0
}
}
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-test-1, groupId=test] Seeking to LATEST offset of partition linkedin-producer-0
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-test-1, groupId=test] Seeking to LATEST offset of partition linkedin-producer-2
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-test-1, groupId=test] Seeking to LATEST offset of partition linkedin-producer-1
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-test-1, groupId=test] Resetting offset for partition linkedin-producer-0 to position FetchPosition{offset=1272, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[omri-Lenovo-YOGA-920-13IKB:9092 (id: 0 rack: null)], epoch=0}}.
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-test-1, groupId=test] Resetting offset for partition linkedin-producer-2 to position FetchPosition{offset=1269, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[omri-Lenovo-YOGA-920-13IKB:9092 (id: 0 rack: null)], epoch=0}}.
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-test-1, groupId=test] Resetting offset for partition linkedin-producer-1 to position FetchPosition{offset=1272, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[omri-Lenovo-YOGA-920-13IKB:9092 (id: 0 rack: null)], epoch=0}}.
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-test-1, groupId=test] Seeking to LATEST offset of partition linkedin-producer-0
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-test-1, groupId=test] Seeking to LATEST offset of partition linkedin-producer-2
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-test-1, groupId=test] Seeking to LATEST offset of partition linkedin-producer-1
[stream execution thread for [id = 9d193cbf-379e-495e-87e3-18f9f09145ea, runId = 2e9f6d84-23af-4b23-89cd-73ecef66d290]] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=consumer-test-1, groupId=test] Resetting offset for partition linkedin-producer-0 to position FetchPosition{offset=1272, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[omri-Lenovo-YOGA-920-13IKB:9092 (id: 0 rack: null)], epoch=0}}.
.
.
.
pom.xml:
<!--Spark-->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.12</artifactId>
    <version>3.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.12</artifactId>
    <version>3.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.12</artifactId>
    <version>3.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql-kafka-0-10_2.12</artifactId>
    <version>3.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.12</artifactId>
    <version>3.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-avro_2.12</artifactId>
    <version>3.0.0</version>
</dependency>
Best Answer
You are getting all of that logger output because you are running with the default logging level, which is INFO. Set the log level to WARN via spark.sparkContext.setLogLevel("WARN").
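In the Java API that call goes through the sparkContext() accessor. A minimal sketch of where it could sit in the code above (reusing the same SparkSession variable named spark from the question; placement right after session creation is an assumption, but the call must come before the noisy work starts):

import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession
        .builder()
        .appName("Testing")
        .config("spark.master", "local")
        .getOrCreate();

// Raise the log level from the default INFO to WARN right after the
// session is created and before any streaming query starts, so the
// executor/Kafka-consumer INFO lines shown above are suppressed.
spark.sparkContext().setLogLevel("WARN");

Note this only silences the noise: the repeated "Seeking to LATEST offset" / "Resetting offset" INFO lines are the Kafka source's normal per-trigger offset check, logged once per micro-batch even when no new data has arrived.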
Regarding "java - How to avoid the continuous 'Resetting offset' and 'Seeking to LATEST offset'?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/66228274/