I am new to Kafka Streams and I am developing a stream application, but when I start it a huge amount of log output is produced.
How can I change the log level from DEBUG to INFO, for example?
Thank you.
16:54:12.720 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending Heartbeat request to coordinator kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 2147483646 rack: null)
16:54:12.721 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v2 to send HEARTBEAT {group_id=97527H,generation_id=2,member_id=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer-9fefdaea-868a-4b22-a1fd-ad3564c3b019} with correlation id 36 to node 2147483646
16:54:12.725 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Received successful Heartbeat response
16:54:13.020 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending Heartbeat request to coordinator kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 2147483646 rack: null)
16:54:13.021 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v2 to send HEARTBEAT {group_id=97527H,generation_id=2,member_id=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer-9fefdaea-868a-4b22-a1fd-ad3564c3b019} with correlation id 37 to node 2147483646
16:54:13.023 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Received successful Heartbeat response
16:54:13.049 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Node 2 sent a full fetch response that created a new incremental fetch session 1486821637 with 1 response partition(s)
16:54:13.049 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Fetch READ_UNCOMMITTED at offset 0 for partition TOPIC-DEV-ACH-0 returned fetch data (error=NONE, highWaterMark=0, lastStableOffset = -1, logStartOffset = 0, preferredReadReplica = absent, abortedTransactions = null, recordsSizeInBytes=0)
16:54:13.050 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name TOPIC-DEV-ACH-0.records-lag
16:54:13.050 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name TOPIC-DEV-ACH-0.records-lead
16:54:13.050 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Added READ_UNCOMMITTED fetch request for partition TOPIC-DEV-ACH-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G), epoch=-1}} to node kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G)
16:54:13.051 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Built incremental fetch (sessionId=1486821637, epoch=1) for node 2. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
16:54:13.051 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(TOPIC-DEV-ACH-0)) to broker kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G)
16:54:13.051 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v8 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1486821637,session_epoch=1,topics=[],forgotten_topics_data=[]} with correlation id 38 to node 2
16:54:13.160 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Node 1 sent an incremental fetch response for session 1744320104 with 0 response partition(s), 1 implied partition(s)
16:54:13.161 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Added READ_UNCOMMITTED fetch request for partition TOPIC-DEV-ACH-1 at position FetchPosition{offset=1, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I), epoch=-1}} to node kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I)
16:54:13.161 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Built incremental fetch (sessionId=1744320104, epoch=2) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
16:54:13.161 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(TOPIC-DEV-ACH-1)) to broker kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I)
16:54:13.161 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v8 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1744320104,session_epoch=2,topics=[],forgotten_topics_data=[]} with correlation id 39 to node 1
16:54:13.320 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending Heartbeat request to coordinator kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 2147483646 rack: null)
16:54:13.320 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v2 to send HEARTBEAT {group_id=97527H,generation_id=2,member_id=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer-9fefdaea-868a-4b22-a1fd-ad3564c3b019} with correlation id 40 to node 2147483646
16:54:13.322 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Received successful Heartbeat response
16:54:13.552 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Node 2 sent an incremental fetch response for session 1486821637 with 0 response partition(s), 1 implied partition(s)
16:54:13.552 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Added READ_UNCOMMITTED fetch request for partition TOPIC-DEV-ACH-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G), epoch=-1}} to node kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G)
16:54:13.552 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Built incremental fetch (sessionId=1486821637, epoch=2) for node 2. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
16:54:13.552 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(TOPIC-DEV-ACH-0)) to broker kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G)
16:54:13.552 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v8 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1486821637,session_epoch=2,topics=[],forgotten_topics_data=[]} with correlation id 41 to node 2
16:54:13.620 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending Heartbeat request to coordinator kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 2147483646 rack: null)
16:54:13.620 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v2 to send HEARTBEAT {group_id=97527H,generation_id=2,member_id=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer-9fefdaea-868a-4b22-a1fd-ad3564c3b019} with correlation id 42 to node 2147483646
16:54:13.626 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Received successful Heartbeat response
16:54:13.664 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Node 1 sent an incremental fetch response for session 1744320104 with 0 response partition(s), 1 implied partition(s)
16:54:13.664 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Added READ_UNCOMMITTED fetch request for partition TOPIC-DEV-ACH-1 at position FetchPosition{offset=1, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I), epoch=-1}} to node kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I)
16:54:13.665 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Built incremental fetch (sessionId=1744320104, epoch=3) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
16:54:13.665 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(TOPIC-DEV-ACH-1)) to broker kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I)
16:54:13.665 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v8 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1744320104,session_epoch=3,topics=[],forgotten_topics_data=[]} with correlation id 43 to node 1
16:54:13.921 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending Heartbeat request to coordinator kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 2147483646 rack: null)
16:54:13.922 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v2 to send HEARTBEAT {group_id=97527H,generation_id=2,member_id=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer-9fefdaea-868a-4b22-a1fd-ad3564c3b019} with correlation id 44 to node 2147483646
16:54:13.925 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Received successful Heartbeat response
16:54:14.054 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Node 2 sent an incremental fetch response for session 1486821637 with 0 response partition(s), 1 implied partition(s)
16:54:14.055 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Added READ_UNCOMMITTED fetch request for partition TOPIC-DEV-ACH-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G), epoch=-1}} to node kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G)
16:54:14.056 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Built incremental fetch (sessionId=1486821637, epoch=3) for node 2. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
16:54:14.056 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(TOPIC-DEV-ACH-0)) to broker kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G)
16:54:14.056 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v8 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1486821637,session_epoch=3,topics=[],forgotten_topics_data=[]} with correlation id 45 to node 2
16:54:14.167 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Node 1 sent an incremental fetch response for session 1744320104 with 0 response partition(s), 1 implied partition(s)
16:54:14.168 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Added READ_UNCOMMITTED fetch request for partition TOPIC-DEV-ACH-1 at position FetchPosition{offset=1, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I), epoch=-1}} to node kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I)
16:54:14.168 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Built incremental fetch (sessionId=1744320104, epoch=4) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
16:54:14.168 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(TOPIC-DEV-ACH-1)) to broker kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I)
16:54:14.168 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v8 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1744320104,session_epoch=4,topics=[],forgotten_topics_data=[]} with correlation id 46 to node 1
Best answer
This is the log4j configuration of your Java application; it is not specific to Kafka.
Add or edit a log4j.properties file in your application's src/main/resources folder, and set the root logger level to INFO instead of DEBUG:
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
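If you prefer to keep your own application logging at DEBUG and only quiet the chatty Kafka client internals, log4j 1.x also accepts a level per logger name. A minimal sketch, assuming the org.apache.kafka logger prefix seen in the output above (the Kafka clients log through SLF4J, so this only takes effect when the log4j binding is on the classpath):

# Keep the application at DEBUG on the console appender defined above
log4j.rootLogger=DEBUG, stdout
# Raise only the Kafka client/consumer loggers to INFO to silence heartbeat and fetch chatter
log4j.logger.org.apache.kafka=INFO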
Regarding "java - How to change the log level of a Kafka Streams application", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58629398/