
java - "This server is not the leader for that topic-partition" error when running Kafka performance traffic


Update, 15 August 2018: I ran strace to monitor the mprotect system call and found that it is indeed blocked for several seconds:

     strace -f -e trace=mprotect,mmap,munmap -T -t -p `pidof java` 2>&1 |tee mp1.txt

[pid 27007] 03:52:48 mprotect(0x7f9766226000, 4096, PROT_NONE) = 0 <3.631704>

But I have not identified the cause. (The <3.631704> at the end of the line is the time spent inside the call, reported by strace's -T option: about 3.6 seconds.)

Update, 14 August 2018: I found that this is a JVM stop-the-world (STW) event. I debugged the JVM with the following options:

    -XX:+PrintGCApplicationStoppedTime
    -XX:+PrintSafepointStatistics
    -XX:PrintSafepointStatisticsCount=1
    -XX:+SafepointTimeout
    -XX:SafepointTimeoutDelay=500
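
If the broker is started with the stock scripts, one way I would expect to attach these flags is via the KAFKA_OPTS environment variable, which kafka-run-class.sh appends to the java command line (adjust for your own startup method):

    export KAFKA_OPTS="-XX:+PrintGCApplicationStoppedTime -XX:+PrintSafepointStatistics \
        -XX:PrintSafepointStatisticsCount=1 -XX:+SafepointTimeout -XX:SafepointTimeoutDelay=500"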

Here are some of the resulting logs:

              vmop                    [threads: total initially_running wait_to_block]    [time: spin block sync cleanup vmop] page_trap_count
488.188: no vm operation [ 73 1 1 ] [ 1 0 3301 0 0 ] 1

2018-08-13T22:16:09.744-0400: 491.491: Total time for which application threads were stopped: 3.3021375 seconds, Stopping threads took: 3.3018193 seconds

The odd thing is that the spin and block times are zero while the sync time is 3301 ms. I built a JVM from OpenJDK 8 sources, added some debug logging, and found that it blocks in the following code:

    void SafepointSynchronize::begin() {
        ... ...

        if (UseCompilerSafepoints && DeferPollingPageLoopCount < 0) {
            // Make polling safepoint aware
            guarantee (PageArmed == 0, "invariant") ;
            PageArmed = 1 ;
            // On Linux this ends up calling ::mprotect() to make the
            // polling page PROT_NONE.
            os::make_polling_page_unreadable();
        }

        ... ...
    }

Inside os::make_polling_page_unreadable, the ::mprotect call depends on a kernel semaphore:

down_write(&current->mm->mmap_sem);

I suspect that contention on the mmap_sem semaphore causes this STW event, but I don't know which code path is holding it. Any help?


Original question

I am currently performance-testing Kafka. I created a topic with 36 partitions and a replication factor of 4 on a 6-node cluster; a single ZooKeeper node runs on a separate machine.

kafka-topics --create --topic kf.p36.r4 --zookeeper l2 --partitions 36 --replication-factor 4

[root@g9csf002-0-0-3 kafka]# kafka-topics --describe --zookeeper l2 --topic kf.p36.r4
Topic:kf.p36.r4 PartitionCount:36 ReplicationFactor:4 Configs:
Topic: kf.p36.r4 Partition: 0 Leader: 1 Replicas: 1,5,6,2 Isr: 5,2,6,1
Topic: kf.p36.r4 Partition: 1 Leader: 2 Replicas: 2,6,1,3 Isr: 1,3,6,2
Topic: kf.p36.r4 Partition: 2 Leader: 3 Replicas: 3,1,2,4 Isr: 3,4,2,1
Topic: kf.p36.r4 Partition: 3 Leader: 4 Replicas: 4,2,3,5 Isr: 3,2,4,5
Topic: kf.p36.r4 Partition: 4 Leader: 5 Replicas: 5,3,4,6 Isr: 3,6,4,5
Topic: kf.p36.r4 Partition: 5 Leader: 6 Replicas: 6,4,5,1 Isr: 4,5,6,1
Topic: kf.p36.r4 Partition: 6 Leader: 1 Replicas: 1,6,2,3 Isr: 3,6,2,1
Topic: kf.p36.r4 Partition: 7 Leader: 2 Replicas: 2,1,3,4 Isr: 3,4,2,1
Topic: kf.p36.r4 Partition: 8 Leader: 3 Replicas: 3,2,4,5 Isr: 3,2,4,5
Topic: kf.p36.r4 Partition: 9 Leader: 4 Replicas: 4,3,5,6 Isr: 3,6,4,5
Topic: kf.p36.r4 Partition: 10 Leader: 5 Replicas: 5,4,6,1 Isr: 4,5,6,1
Topic: kf.p36.r4 Partition: 11 Leader: 6 Replicas: 6,5,1,2 Isr: 5,2,6,1
Topic: kf.p36.r4 Partition: 12 Leader: 1 Replicas: 1,2,3,4 Isr: 3,4,2,1
Topic: kf.p36.r4 Partition: 13 Leader: 2 Replicas: 2,3,4,5 Isr: 3,2,4,5
Topic: kf.p36.r4 Partition: 14 Leader: 3 Replicas: 3,4,5,6 Isr: 3,6,4,5
Topic: kf.p36.r4 Partition: 15 Leader: 4 Replicas: 4,5,6,1 Isr: 4,5,6,1
Topic: kf.p36.r4 Partition: 16 Leader: 5 Replicas: 5,6,1,2 Isr: 5,2,6,1
Topic: kf.p36.r4 Partition: 17 Leader: 6 Replicas: 6,1,2,3 Isr: 3,2,6,1
Topic: kf.p36.r4 Partition: 18 Leader: 1 Replicas: 1,3,4,5 Isr: 3,4,5,1
Topic: kf.p36.r4 Partition: 19 Leader: 2 Replicas: 2,4,5,6 Isr: 6,2,4,5
Topic: kf.p36.r4 Partition: 20 Leader: 3 Replicas: 3,5,6,1 Isr: 3,5,6,1
Topic: kf.p36.r4 Partition: 21 Leader: 4 Replicas: 4,6,1,2 Isr: 4,2,6,1
Topic: kf.p36.r4 Partition: 22 Leader: 5 Replicas: 5,1,2,3 Isr: 3,5,2,1
Topic: kf.p36.r4 Partition: 23 Leader: 6 Replicas: 6,2,3,4 Isr: 3,6,2,4
Topic: kf.p36.r4 Partition: 24 Leader: 1 Replicas: 1,4,5,6 Isr: 4,5,6,1
Topic: kf.p36.r4 Partition: 25 Leader: 2 Replicas: 2,5,6,1 Isr: 1,6,2,5
Topic: kf.p36.r4 Partition: 26 Leader: 3 Replicas: 3,6,1,2 Isr: 3,2,6,1
Topic: kf.p36.r4 Partition: 27 Leader: 4 Replicas: 4,1,2,3 Isr: 3,4,2,1
Topic: kf.p36.r4 Partition: 28 Leader: 5 Replicas: 5,2,3,4 Isr: 3,2,4,5
Topic: kf.p36.r4 Partition: 29 Leader: 6 Replicas: 6,3,4,5 Isr: 3,6,4,5
Topic: kf.p36.r4 Partition: 30 Leader: 1 Replicas: 1,5,6,2 Isr: 5,2,6,1
Topic: kf.p36.r4 Partition: 31 Leader: 2 Replicas: 2,6,1,3 Isr: 1,3,6,2
Topic: kf.p36.r4 Partition: 32 Leader: 3 Replicas: 3,1,2,4 Isr: 3,4,2,1
Topic: kf.p36.r4 Partition: 33 Leader: 4 Replicas: 4,2,3,5 Isr: 3,2,4,5
Topic: kf.p36.r4 Partition: 34 Leader: 5 Replicas: 5,3,4,6 Isr: 3,6,4,5
Topic: kf.p36.r4 Partition: 35 Leader: 6 Replicas: 6,4,5,1 Isr: 4,5,6,1

I ran two instances of kafka-producer-perf-test:

kafka-producer-perf-test --topic kf.p36.r4  --num-records  600000000 --record-size 1024 --throughput 120000 --producer-props bootstrap.servers=b3:9092,b4:9092,b5:9092,b6:9092,b7:9092,b8:9092 acks=1 

The total traffic is 240k messages/s, with 1024 bytes per message. At 240k TPS everything works at first, but after a while some errors appear:

[root@g9csf002-0-0-1 ~]# kafka-producer-perf-test --topic kf.p36.r4  --num-records  600000000 --record-size 1024 --throughput 120000 --producer-props bootstrap.servers=b3:9092,b4:9092,b5:9092,b6:9092,b7:9092,b8:9092 acks=1 
599506 records sent, 119901.2 records/sec (117.09 MB/sec), 4.8 ms avg latency, 147.0 max latency.
600264 records sent, 120052.8 records/sec (117.24 MB/sec), 2.0 ms avg latency, 13.0 max latency.
599584 records sent, 119916.8 records/sec (117.11 MB/sec), 1.9 ms avg latency, 13.0 max latency.
600760 records sent, 120152.0 records/sec (117.34 MB/sec), 1.9 ms avg latency, 13.0 max latency.
599764 records sent, 119904.8 records/sec (117.09 MB/sec), 2.0 ms avg latency, 35.0 max latency.
276603 records sent, 21408.9 records/sec (20.91 MB/sec), 103.0 ms avg latency, 10743.0 max latency.
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
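
For reference, NotLeaderForPartitionException is one of the producer's retriable errors: with retries > 0 the client refreshes its metadata and resends instead of surfacing the exception, as long as delivery does not time out first. A minimal sketch, reusing the broker list and acks from the test above (the retries value is illustrative):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class RetryingProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Broker list and acks as in the perf test; retries=5 is illustrative.
            props.put("bootstrap.servers",
                    "b3:9092,b4:9092,b5:9092,b6:9092,b7:9092,b8:9092");
            props.put("acks", "1");
            props.put("retries", "5");
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.ByteArraySerializer");
            try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
                // A leadership change mid-send is retried internally; a permanent
                // failure would still come back through the returned Future.
                producer.send(new ProducerRecord<>("kf.p36.r4", new byte[1024]));
            }
        }
    }

That said, retries would only mask the symptom; the leadership changes themselves still need explaining.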

I looked into the Kafka broker logs and found problems in the communication between the broker and ZooKeeper:

  [2018-08-06 01:28:02,562] WARN Client session timed out, have not heard from server in 7768ms for sessionid 0x164f8ea86020062 (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,562] INFO Client session timed out, have not heard from server in 7768ms for sessionid 0x164f8ea86020062, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)

The ZooKeeper client is zookeeper-3.4.10.jar. I downloaded the source and added some logging to src/java/main/org/apache/zookeeper/ClientCnxn.java,

and found that the SendThread is sometimes blocked when accessing the state variable:

  [2018-08-06 01:27:54,793] INFO ROVER: start of loop. (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:27:54,793] INFO ROVER: state = CONNECTED (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:27:54,793] INFO ROVER: to = 4000 (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:27:54,793] INFO ROVER: timeToNextPing = 2000 (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:27:54,793] INFO ROVER: before clientCnxnSocket.doTransport, to = 2000 (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:27:56,795] INFO ROVER: after clientCnxnSocket.doTransport (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: state = CONNECTED (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: start of loop. (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: state = CONNECTED (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: to = 1998 (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: timeToNextPing = -1002 (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: sendPing has done. (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: before clientCnxnSocket.doTransport, to = 1998 (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: after clientCnxnSocket.doTransport (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: state = CONNECTED (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: start of loop. (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: state = CONNECTED (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: to = -3768 (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,562] WARN Client session timed out, have not heard from server in 7768ms for sessionid 0x164f8ea86020062 (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,562] INFO Client session timed out, have not heard from server in 7768ms for sessionid 0x164f8ea86020062, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)

You can see that between 2018-08-06 01:27:56 and 2018-08-06 01:28:02 the thread was blocked, doing nothing. The instrumented code looks like this:

    // If we are in read-only mode, seek for read/write server
    if (state == States.CONNECTEDREADONLY) {
        long now = System.currentTimeMillis();
        int idlePingRwServer = (int) (now - lastPingRwServer);
        if (idlePingRwServer >= pingRwTimeout) {
            lastPingRwServer = now;
            idlePingRwServer = 0;
            pingRwTimeout = Math.min(2 * pingRwTimeout, maxPingRwTimeout);
            pingRwServer();
        }
        to = Math.min(to, pingRwTimeout - idlePingRwServer);
    }

    LOG.info("ROVER: before clientCnxnSocket.doTransport, to = " + to);
    clientCnxnSocket.doTransport(to, pendingQueue, outgoingQueue, ClientCnxn.this);
    LOG.info("ROVER: after clientCnxnSocket.doTransport");
    LOG.info("ROVER: state = " + state);
    } catch (Throwable e) {

The installed Kafka is confluent-kafka-2.11, and the Java version is:

[root@g9csf0002-0-0-12 kafka]# java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)

At this point I don't know how to solve the problem. Can anyone shed some light on this?

Best Answer

I have run into this problem before. Sometimes the Kafka JVM pauses for a long garbage collection, or something strange happens in the internal network. I noticed that in our case the timeouts were all around 6 or 7 seconds (which looks similar to yours). The problem is that if Kafka cannot reach ZooKeeper within the configured timeout, the broker falls over: it starts reporting under-replicated partitions and every now and then took the whole cluster down. So, if I remember correctly, we increased the timeout to 15 seconds, after which everything ran fine with zero errors.

These are the corresponding Kafka broker settings:

zookeeper.session.timeout.ms    Default: 6000ms
zookeeper.connection.timeout.ms

IIRC we changed both, but you should try raising the session timeout first.
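
For reference, a sketch of the corresponding lines in the broker's server.properties, using the 15-second value mentioned above (illustrative, not a universal recommendation):

    # Illustrative values: 15000 ms matches the 15 s that worked in our case.
    zookeeper.session.timeout.ms=15000
    zookeeper.connection.timeout.ms=15000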

This question about the java - "This server is not the leader for that topic-partition" error when running Kafka performance traffic originally appeared on Stack Overflow: https://stackoverflow.com/questions/51705961/
