I'm new to Kafka Streams and I'm developing a stream application, but when I start my application a huge number of log lines are written. How can I change the log level from DEBUG to INFO?
Thank you.
16:54:12.720 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending Heartbeat request to coordinator kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 2147483646 rack: null)
16:54:12.721 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v2 to send HEARTBEAT {group_id=97527H,generation_id=2,member_id=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer-9fefdaea-868a-4b22-a1fd-ad3564c3b019} with correlation id 36 to node 2147483646
16:54:12.725 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Received successful Heartbeat response
16:54:13.020 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending Heartbeat request to coordinator kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 2147483646 rack: null)
16:54:13.021 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v2 to send HEARTBEAT {group_id=97527H,generation_id=2,member_id=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer-9fefdaea-868a-4b22-a1fd-ad3564c3b019} with correlation id 37 to node 2147483646
16:54:13.023 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Received successful Heartbeat response
16:54:13.049 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Node 2 sent a full fetch response that created a new incremental fetch session 1486821637 with 1 response partition(s)
16:54:13.049 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Fetch READ_UNCOMMITTED at offset 0 for partition TOPIC-DEV-ACH-0 returned fetch data (error=NONE, highWaterMark=0, lastStableOffset = -1, logStartOffset = 0, preferredReadReplica = absent, abortedTransactions = null, recordsSizeInBytes=0)
16:54:13.050 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name TOPIC-DEV-ACH-0.records-lag
16:54:13.050 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name TOPIC-DEV-ACH-0.records-lead
16:54:13.050 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Added READ_UNCOMMITTED fetch request for partition TOPIC-DEV-ACH-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G), epoch=-1}} to node kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G)
16:54:13.051 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Built incremental fetch (sessionId=1486821637, epoch=1) for node 2. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
16:54:13.051 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(TOPIC-DEV-ACH-0)) to broker kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G)
16:54:13.051 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v8 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1486821637,session_epoch=1,topics=[],forgotten_topics_data=[]} with correlation id 38 to node 2
16:54:13.160 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Node 1 sent an incremental fetch response for session 1744320104 with 0 response partition(s), 1 implied partition(s)
16:54:13.161 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Added READ_UNCOMMITTED fetch request for partition TOPIC-DEV-ACH-1 at position FetchPosition{offset=1, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I), epoch=-1}} to node kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I)
16:54:13.161 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Built incremental fetch (sessionId=1744320104, epoch=2) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
16:54:13.161 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(TOPIC-DEV-ACH-1)) to broker kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I)
16:54:13.161 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v8 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1744320104,session_epoch=2,topics=[],forgotten_topics_data=[]} with correlation id 39 to node 1
16:54:13.320 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending Heartbeat request to coordinator kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 2147483646 rack: null)
16:54:13.320 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v2 to send HEARTBEAT {group_id=97527H,generation_id=2,member_id=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer-9fefdaea-868a-4b22-a1fd-ad3564c3b019} with correlation id 40 to node 2147483646
16:54:13.322 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Received successful Heartbeat response
16:54:13.552 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Node 2 sent an incremental fetch response for session 1486821637 with 0 response partition(s), 1 implied partition(s)
16:54:13.552 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Added READ_UNCOMMITTED fetch request for partition TOPIC-DEV-ACH-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G), epoch=-1}} to node kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G)
16:54:13.552 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Built incremental fetch (sessionId=1486821637, epoch=2) for node 2. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
16:54:13.552 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(TOPIC-DEV-ACH-0)) to broker kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G)
16:54:13.552 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v8 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1486821637,session_epoch=2,topics=[],forgotten_topics_data=[]} with correlation id 41 to node 2
16:54:13.620 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending Heartbeat request to coordinator kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 2147483646 rack: null)
16:54:13.620 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v2 to send HEARTBEAT {group_id=97527H,generation_id=2,member_id=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer-9fefdaea-868a-4b22-a1fd-ad3564c3b019} with correlation id 42 to node 2147483646
16:54:13.626 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Received successful Heartbeat response
16:54:13.664 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Node 1 sent an incremental fetch response for session 1744320104 with 0 response partition(s), 1 implied partition(s)
16:54:13.664 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Added READ_UNCOMMITTED fetch request for partition TOPIC-DEV-ACH-1 at position FetchPosition{offset=1, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I), epoch=-1}} to node kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I)
16:54:13.665 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Built incremental fetch (sessionId=1744320104, epoch=3) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
16:54:13.665 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(TOPIC-DEV-ACH-1)) to broker kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I)
16:54:13.665 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v8 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1744320104,session_epoch=3,topics=[],forgotten_topics_data=[]} with correlation id 43 to node 1
16:54:13.921 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending Heartbeat request to coordinator kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 2147483646 rack: null)
16:54:13.922 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v2 to send HEARTBEAT {group_id=97527H,generation_id=2,member_id=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer-9fefdaea-868a-4b22-a1fd-ad3564c3b019} with correlation id 44 to node 2147483646
16:54:13.925 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Received successful Heartbeat response
16:54:14.054 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Node 2 sent an incremental fetch response for session 1486821637 with 0 response partition(s), 1 implied partition(s)
16:54:14.055 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Added READ_UNCOMMITTED fetch request for partition TOPIC-DEV-ACH-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G), epoch=-1}} to node kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G)
16:54:14.056 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Built incremental fetch (sessionId=1486821637, epoch=3) for node 2. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
16:54:14.056 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(TOPIC-DEV-ACH-0)) to broker kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G)
16:54:14.056 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v8 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1486821637,session_epoch=3,topics=[],forgotten_topics_data=[]} with correlation id 45 to node 2
16:54:14.167 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Node 1 sent an incremental fetch response for session 1744320104 with 0 response partition(s), 1 implied partition(s)
16:54:14.168 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Added READ_UNCOMMITTED fetch request for partition TOPIC-DEV-ACH-1 at position FetchPosition{offset=1, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I), epoch=-1}} to node kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I)
16:54:14.168 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Built incremental fetch (sessionId=1744320104, epoch=4) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
16:54:14.168 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(TOPIC-DEV-ACH-1)) to broker kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I)
16:54:14.168 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v8 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1744320104,session_epoch=4,topics=[],forgotten_topics_data=[]} with correlation id 46 to node 1
Best answer
This is standard log4j configuration for a Java application; it is not specific to Kafka.
Add or edit a log4j.properties file in your application's src/main/resources folder, setting the root logger level to INFO instead of DEBUG:
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
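As a side note, if you want to keep DEBUG output for your own packages and silence only the chatty Kafka client internals, you can set a level on the org.apache.kafka logger instead of (or in addition to) the root logger. A minimal sketch, assuming the same log4j 1.x setup as above:

```properties
# Root logger stays at DEBUG so your own code still logs verbosely
log4j.rootLogger=DEBUG, stdout
# Raise only the Kafka client/streams internals (heartbeats, fetch sessions, etc.) to INFO
log4j.logger.org.apache.kafka=INFO

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
```

Because log4j loggers are hierarchical, the org.apache.kafka setting applies to every class under that package, which covers the AbstractCoordinator, Fetcher, and NetworkClient lines shown in the question.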
Regarding "java - How to change the log level of a Kafka Streams application", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58629398/