I'm trying to consume a Kafka topic using Apache Flink streaming, but I keep running into the following problem:
2018-04-10 02:55:59,856|- ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [localhost:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 1
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
2018-04-10 02:56:00,052|- The configuration 'auto.create.topics.enable' was supplied but isn't a known config.
2018-04-10 02:56:00,064|- Kafka version : 1.0.0
2018-04-10 02:56:00,064|- Kafka commitId : aaa7af6d4a11b29d
2018-04-10 02:56:40,064|- ConsumerConfig values:
auto.commit.interval.ms = 1000
auto.offset.reset = latest
bootstrap.servers = [localhost:9092]
check.crcs = true
client.id =
connections.max.idle.ms = 540000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = engine-kafka-consumer
heartbeat.interval.ms = 3000
interceptor.classes = null
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 30000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
(the identical ConsumerConfig values block is logged three more times, at 02:56:40,073, 02:56:40,079 and 02:56:40,082, once per consumer instance; the repeats are trimmed here)
2018-04-10 02:56:40,783|- Kafka version : 1.0.0
2018-04-10 02:56:40,783|- Kafka commitId : aaa7af6d4a11b29d
2018-04-10 02:56:40,818|- Kafka version : 1.0.0
2018-04-10 02:56:40,819|- Kafka commitId : aaa7af6d4a11b29d
2018-04-10 02:56:40,819|- Kafka version : 1.0.0
2018-04-10 02:56:40,819|- Kafka commitId : aaa7af6d4a11b29d
2018-04-10 02:56:40,820|- Kafka version : 1.0.0
2018-04-10 02:56:40,820|- Kafka commitId : aaa7af6d4a11b29d
2018-04-10 02:56:41,906|- [Consumer clientId=consumer-2, groupId=engine-kafka-consumer] Connection to node -1 could not be established. Broker may not be available.
2018-04-10 02:56:41,925|- [Consumer clientId=consumer-4, groupId=engine-kafka-consumer] Connection to node -1 could not be established. Broker may not be available.
2018-04-10 02:56:41,931|- [Consumer clientId=consumer-1, groupId=engine-kafka-consumer] Connection to node -1 could not be established. Broker may not be available.
2018-04-10 02:56:41,948|- [Consumer clientId=consumer-3, groupId=engine-kafka-consumer] Connection to node -1 could not be established. Broker may not be available.
2018-04-10 02:56:42,013|- [Consumer clientId=consumer-2, groupId=engine-kafka-consumer] Connection to node -1 could not be established. Broker may not be available.
These are my sbt dependencies:

libraryDependencies ++= Seq(
  // kafka
  "org.apache.kafka" %% "kafka" % "1.0.0",
  "net.manub" %% "scalatest-embedded-kafka" % "0.14.0" % Test,
  // flink
  "org.apache.flink" %% "flink-scala" % "1.4.2",
  "org.apache.flink" %% "flink-streaming-scala" % "1.4.2",
  "org.apache.flink" %% "flink-connector-kafka-0.9" % "1.4.0",
  "org.apache.flink" %% "flink-connector-cassandra" % "1.4.2"
)
Best answer
I ran into the same problem. It happens when the consumer runs on a different machine than the Kafka server. To fix it, edit config/server.properties and set listeners=PLAINTEXT://serverip:9092.
Note: serverip must not be 127.0.0.1 or localhost; it has to be an IP address that the consumer can actually reach.
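
For example, the relevant broker settings would look like this (the IP below is a placeholder for whatever address your consumer host can reach):

# config/server.properties on the Kafka broker
listeners=PLAINTEXT://192.168.1.10:9092
# Alternatively, you can keep listeners unchanged and advertise the
# reachable address instead:
# advertised.listeners=PLAINTEXT://192.168.1.10:9092

Restart the broker after editing the file so the new listener takes effect.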
Regarding "apache-kafka - Kafka - Connection to node -1 could not be established", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49744897/