I am using Heroku Kafka, which runs 0.10.1.1 and uses SSL; they only support the most recent protocol.
Heroku Kafka uses SSL for authentication: it issues a client certificate and key, and provides a CA certificate. I saved these as client_cert.pem, client_key.pem, and trusted_cert.pem respectively, then ran the following commands to build the keystore and truststore:
openssl pkcs12 -export -in client_cert.pem -inkey client_key.pem -certfile client_cert.pem -out client.p12
keytool -importkeystore -srckeystore client.p12 -srcstoretype pkcs12 -destkeystore kafka.keystore.jks -deststoretype JKS
keytool -keystore kafka.truststore.jks -alias CARoot -import -file trusted_cert.pem
I then created client-ssl.properties with the following contents:
ssl.protocol=SSL
security.protocol=SSL
ssl.truststore.location=kafka.truststore.jks
ssl.truststore.type=JKS
ssl.truststore.password=xxxx
ssl.keystore.location=kafka.keystore.jks
ssl.keystore.type=JKS
ssl.keystore.password=xxxx
ssl.key.password=xxxx
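As an aside, the same file can be written from a small script so the password lives in one variable rather than being retyped for each property; a sketch, with `changeit` as a placeholder:

```shell
# Sketch: generate client-ssl.properties; PASS is a placeholder for the
# real keystore/truststore/key password.
PASS=changeit
cat > client-ssl.properties <<EOF
ssl.protocol=SSL
security.protocol=SSL
ssl.truststore.location=kafka.truststore.jks
ssl.truststore.type=JKS
ssl.truststore.password=$PASS
ssl.keystore.location=kafka.keystore.jks
ssl.keystore.type=JKS
ssl.keystore.password=$PASS
ssl.key.password=$PASS
EOF
```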
Then I ran kafka-console-producer (version 0.10.1.1) as follows:
kafka-console-producer --broker-list kafka+ssl://a.a.a.a:9096,kafka+ssl://b.b.b.b:9096,kafka+ssl://c.c.c.c:9096 --producer.config client-ssl.properties --topic robintest
(The robintest topic has already been created.)
[2017-01-31 10:06:50,385] INFO ProducerConfig values:
acks = 1
batch.size = 16384
block.on.buffer.full = false
bootstrap.servers = [kafka+ssl://a.a.a.a:9096, kafka+ssl://b.b.b.b:9096, kafka+ssl://c.c.c.c:9096]
buffer.memory = 33554432
client.id = console-producer
compression.type = none
connections.max.idle.ms = 540000
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 1000
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.fetch.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.ms = 50
request.timeout.ms = 1500
retries = 3
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = SSL
send.buffer.bytes = 102400
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = kafka.keystore.jks
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = SSL
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = kafka.truststore.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
timeout.ms = 30000
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
(org.apache.kafka.clients.producer.ProducerConfig)
[2017-01-31 10:06:50,390] INFO ProducerConfig values:
acks = 1
batch.size = 16384
block.on.buffer.full = false
bootstrap.servers = [kafka+ssl://a.a.a.a:9096, kafka+ssl://b.b.b.b:9096, kafka+ssl://c.c.c.c:9096]
buffer.memory = 33554432
client.id = console-producer
compression.type = none
connections.max.idle.ms = 540000
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 1000
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.fetch.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.ms = 50
request.timeout.ms = 1500
retries = 3
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = SSL
send.buffer.bytes = 102400
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = kafka.keystore.jks
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = SSL
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = kafka.truststore.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
timeout.ms = 30000
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
(org.apache.kafka.clients.producer.ProducerConfig)
[2017-01-31 10:06:50,396] DEBUG Added sensor with name bufferpool-wait-time (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:50,398] DEBUG Added sensor with name buffer-exhausted-records (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:50,399] DEBUG Updated cluster metadata version 1 to Cluster(id = null, nodes = [b.b.b.b:9096 (id: -2 rack: null), c.c.c.c:9096 (id: -3 rack: null), a.a.a.a:9096 (id: -1 rack: null)], partitions = []) (org.apache.kafka.clients.Metadata)
[2017-01-31 10:06:50,457] DEBUG Added sensor with name connections-closed: (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:50,457] DEBUG Added sensor with name connections-created: (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:50,457] DEBUG Added sensor with name bytes-sent-received: (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:50,457] DEBUG Added sensor with name bytes-sent: (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:50,458] DEBUG Added sensor with name bytes-received: (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:50,458] DEBUG Added sensor with name select-time: (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:50,459] DEBUG Added sensor with name io-time: (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:50,462] DEBUG Added sensor with name batch-size (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:50,462] DEBUG Added sensor with name compression-rate (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:50,462] DEBUG Added sensor with name queue-time (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:50,462] DEBUG Added sensor with name request-time (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:50,464] DEBUG Added sensor with name produce-throttle-time (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:50,465] DEBUG Added sensor with name records-per-request (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:50,465] DEBUG Added sensor with name record-retries (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:50,465] DEBUG Added sensor with name errors (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:50,465] DEBUG Added sensor with name record-size-max (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:50,467] DEBUG Starting Kafka producer I/O thread. (org.apache.kafka.clients.producer.internals.Sender)
[2017-01-31 10:06:50,468] INFO Kafka version : 0.10.1.1 (org.apache.kafka.common.utils.AppInfoParser)
[2017-01-31 10:06:50,468] INFO Kafka commitId : f10ef2720b03b247 (org.apache.kafka.common.utils.AppInfoParser)
[2017-01-31 10:06:50,468] DEBUG Kafka producer started (org.apache.kafka.clients.producer.KafkaProducer)
At this point, I typed a record and pressed Enter.
[2017-01-31 10:06:53,194] DEBUG Initialize connection to node -2 for sending metadata request (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:53,194] DEBUG Initiating connection to node -2 at b.b.b.b:9096. (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:53,457] DEBUG Added sensor with name node--2.bytes-sent (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:53,457] DEBUG Added sensor with name node--2.bytes-received (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:53,458] DEBUG Added sensor with name node--2.latency (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:53,460] DEBUG Created socket with SO_RCVBUF = 33304, SO_SNDBUF = 102808, SO_TIMEOUT = 0 to node -2 (org.apache.kafka.common.network.Selector)
[2017-01-31 10:06:53,463] DEBUG Completed connection to node -2 (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:53,692] DEBUG Sending metadata request {topics=[robintest]} to node -2 (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:53,724] DEBUG Connection with ec2-34-194-25-39.compute-1.amazonaws.com/b.b.b.b disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
at org.apache.kafka.common.network.SslTransportLayer.read(SslTransportLayer.java:488)
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:81)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:343)
at org.apache.kafka.common.network.Selector.poll(Selector.java:291)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:236)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:135)
at java.lang.Thread.run(Thread.java:745)
[2017-01-31 10:06:53,728] DEBUG Node -2 disconnected. (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:53,728] WARN Bootstrap broker b.b.b.b:9096 disconnected (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:53,729] DEBUG Initialize connection to node -1 for sending metadata request (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:53,729] DEBUG Initiating connection to node -1 at a.a.a.a:9096. (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:53,791] DEBUG Added sensor with name node--1.bytes-sent (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:53,792] DEBUG Added sensor with name node--1.bytes-received (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:53,792] DEBUG Added sensor with name node--1.latency (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:53,792] DEBUG Created socket with SO_RCVBUF = 33304, SO_SNDBUF = 102808, SO_TIMEOUT = 0 to node -1 (org.apache.kafka.common.network.Selector)
[2017-01-31 10:06:53,792] DEBUG Completed connection to node -1 (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:53,994] DEBUG Sending metadata request {topics=[robintest]} to node -1 (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:54,025] DEBUG Connection with ec2-34-194-39-35.compute-1.amazonaws.com/a.a.a.a disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
at org.apache.kafka.common.network.SslTransportLayer.read(SslTransportLayer.java:488)
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:81)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:343)
at org.apache.kafka.common.network.Selector.poll(Selector.java:291)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:236)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:135)
at java.lang.Thread.run(Thread.java:745)
[2017-01-31 10:06:54,026] DEBUG Node -1 disconnected. (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:54,026] WARN Bootstrap broker a.a.a.a:9096 disconnected (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:54,027] DEBUG Initialize connection to node -3 for sending metadata request (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:54,027] DEBUG Initiating connection to node -3 at c.c.c.c:9096. (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:54,102] DEBUG Added sensor with name node--3.bytes-sent (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:54,103] DEBUG Added sensor with name node--3.bytes-received (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:54,103] DEBUG Added sensor with name node--3.latency (org.apache.kafka.common.metrics.Metrics)
[2017-01-31 10:06:54,104] DEBUG Created socket with SO_RCVBUF = 33304, SO_SNDBUF = 102808, SO_TIMEOUT = 0 to node -3 (org.apache.kafka.common.network.Selector)
[2017-01-31 10:06:54,104] DEBUG Completed connection to node -3 (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:54,309] DEBUG Sending metadata request {topics=[robintest]} to node -3 (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:54,342] DEBUG Connection with ec2-34-194-45-119.compute-1.amazonaws.com/c.c.c.c disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
at org.apache.kafka.common.network.SslTransportLayer.read(SslTransportLayer.java:488)
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:81)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:343)
at org.apache.kafka.common.network.Selector.poll(Selector.java:291)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:236)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:135)
at java.lang.Thread.run(Thread.java:745)
[2017-01-31 10:06:54,342] DEBUG Node -3 disconnected. (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:54,343] WARN Bootstrap broker c.c.c.c:9096 disconnected (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:54,343] DEBUG Initialize connection to node -1 for sending metadata request (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:54,343] DEBUG Initiating connection to node -1 at a.a.a.a:9096. (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:54,348] DEBUG Initialize connection to node -2 for sending metadata request (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:54,348] DEBUG Initiating connection to node -2 at b.b.b.b:9096. (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:54,376] DEBUG Created socket with SO_RCVBUF = 33304, SO_SNDBUF = 102808, SO_TIMEOUT = 0 to node -2 (org.apache.kafka.common.network.Selector)
[2017-01-31 10:06:54,377] DEBUG Completed connection to node -2 (org.apache.kafka.clients.NetworkClient)
[2017-01-31 10:06:54,379] DEBUG Created socket with SO_RCVBUF = 33304, SO_SNDBUF = 102808, SO_TIMEOUT = 0 to node -1 (org.apache.kafka.common.network.Selector)
[2017-01-31 10:06:54,379] DEBUG Completed connection to node -1 (org.apache.kafka.clients.NetworkClient)
These entries keep repeating until I kill the process.
I have tried every configuration combination I can think of, including prefixing every property in the file with producer. , removing the configuration entirely (which made no apparent difference), and setting the passwords to incorrect values (which also made no apparent difference). I also tried connecting to a different provider (www.cloudkarafka.com) with their credentials, and got the same result. So this definitely looks like a configuration problem.
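One way to rule the TLS layer in or out is to test the handshake directly with openssl s_client, independently of Kafka; against a real broker that would be something like `openssl s_client -connect a.a.a.a:9096 -CAfile trusted_cert.pem -cert client_cert.pem -key client_key.pem`. Since the broker addresses here are placeholders, the sketch below rehearses the check against a throwaway local s_server:

```shell
# Rehearse the s_client handshake check against a throwaway local TLS
# server (a stand-in for a real broker; swap localhost:19096 for one of
# the kafka+ssl host:port pairs to run the check for real).
set -e
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=localhost"
openssl s_server -accept 19096 -cert cert.pem -key key.pem -quiet &
SERVER=$!
sleep 1
# A completed handshake prints a "Verify return code" summary line.
OUT=$(echo | openssl s_client -connect localhost:19096 -CAfile cert.pem 2>/dev/null)
kill "$SERVER"
echo "$OUT" | grep "Verify return code"
```

If the handshake succeeds here but the producer still disconnects (as in the logs above), the certificates are probably fine and the problem lies elsewhere, e.g. in a protocol or version mismatch.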
Best Answer
It turns out that my Kafka cluster (the Heroku add-on) was not actually running 0.10.1.1; it was running 0.10.0.1, and the two apparently have incompatible client APIs. (I have to say: this is exactly why semantic versioning exists.)
To upgrade Kafka running on Heroku, use: heroku kafka:upgrade --version 0.10
This upgrades you to the latest 0.10.x release. So if you are on 0.9 and want 0.10.0.1 specifically, good luck.
Regarding apache-kafka - configuring a Kafka client to connect with an issued SSL key/certificate, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/41961132/