apache-kafka - Kafka SASL/SCRAM authentication failure

Reposted · Author: 行者123 · Updated: 2023-12-04 10:05:53

I am trying to add security to my Kafka cluster, and I followed the documentation:

  • https://kafka.apache.org/documentation/#security_sasl_scram
  • https://docs.confluent.io/current/kafka/authentication_sasl/authentication_sasl_scram.html#

  • I added a user with:
    kafka-configs.sh --zookeeper zookeeper1:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin

    Then I modified server.properties:
    broker.id=1
    listeners=SASL_PLAINTEXT://kafka1:9092
    advertised.listeners=SASL_PLAINTEXT://kafka1:9092
    sasl.enabled.mechanisms=SCRAM-SHA-256
    sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
    security.inter.broker.protocol=SASL_PLAINTEXT
    num.network.threads=3
    num.io.threads=8
    socket.send.buffer.bytes=102400
    socket.receive.buffer.bytes=102400
    socket.request.max.bytes=104857600
    default.replication.factor=3
    min.insync.replicas=2
    log.dirs=/var/lib/kafka
    num.partitions=3
    num.recovery.threads.per.data.dir=1
    offsets.topic.replication.factor=3
    transaction.state.log.replication.factor=1
    transaction.state.log.min.isr=1
    log.retention.hours=168
    log.segment.bytes=1073741824
    log.retention.check.interval.ms=300000
    zookeeper.connect=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181/kafka
    zookeeper.connection.timeout.ms=6000
    group.initial.rebalance.delay.ms=0

    I created the JAAS file:
    KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin-secret";
    };

    I created the file kafka_opts.sh in /etc/profile.d:
    export KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka_2.12-2.5.0/config/kafka_server_jaas.conf

    But when I start Kafka, it throws the following error:
    [2020-05-04 10:54:08,782] INFO [Controller id=1, targetBrokerId=1] Failed authentication with kafka1/kafka1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256) (org.apache.kafka.common.network.Selector)

    I am actually using each server's IP instead of kafka1, kafka2, kafka3, zookeeper1, zookeeper2 and zookeeper3. Can anyone help me solve my problem?

    Best Answer

    My main problem was this setting:

    zookeeper.connect=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181/kafka

    This setting in server.properties must match the chroot path under which ZooKeeper stores the Kafka metadata, and it also changes how the kafka-configs.sh command has to be invoked, so I will explain the steps I had to follow:
  • First, modify ZooKeeper.

  • I downloaded ZooKeeper from the official site: https://zookeeper.apache.org/releases.html

    I modified the zoo.cfg file and added the security settings:
    tickTime=2000
    dataDir=/var/lib/zookeeper/
    clientPort=2181
    initLimit=5
    syncLimit=2
    server.1=zookeeper1:2888:3888
    server.2=zookeeper2:2888:3888
    server.3=zookeeper3:2888:3888
    authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
    requireClientAuthScheme=sasl

    I created the JAAS file for ZooKeeper:
    Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_admin="admin_secret";
    };
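
    In the DigestLoginModule section, each option of the form user_<name>="<password>" declares one account, so this file defines a single user admin with password admin_secret; the brokers log in with those credentials from the Client section of their own JAAS file. As a sketch, a file that also allowed a hypothetical second account (the kafkaclient name and its password are made up here) would look like:

```
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_admin="admin_secret"
user_kafkaclient="client_secret";
};
```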

    I created the file java.env under conf/ and added the following:
    SERVER_JVMFLAGS="-Djava.security.auth.login.config=/opt/apache-zookeeper-3.6.0-bin/conf/zookeeper_jaas.conf"

    This file tells ZooKeeper to use the JAAS file so that Kafka can authenticate against ZooKeeper. To verify that ZooKeeper is actually picking it up, just run:
    zkServer.sh print-cmd

    It will print:
    /usr/bin/java
    ZooKeeper JMX enabled by default
    Using config: /opt/apache-zookeeper-3.6.0-bin/bin/../conf/zoo.cfg
    "java" -Dzookeeper.log.dir="/opt/apache-zookeeper-3.6.0-bin/bin/../logs" ........-Djava.security.auth.login.config=/opt/apache-zookeeper-3.6.0-bin/conf/zookeeper_jaas.conf....... "/opt/apache-zookeeper-3.6.0-bin/bin/../conf/zoo.cfg" > "/opt/apache-zookeeper-3.6.0-bin/bin/../logs/zookeeper.out" 2>&1 < /dev/null
  • Modify Kafka.

  • I downloaded Kafka from the official site: https://www.apache.org/dyn/closer.cgi?path=/kafka/2.5.0/kafka_2.12-2.5.0.tgz

    I modified/added the following settings in the server.properties file:
    listeners=SASL_PLAINTEXT://kafka1:9092
    advertised.listeners=SASL_PLAINTEXT://kafka1:9092
    sasl.enabled.mechanisms=SCRAM-SHA-256
    sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
    security.inter.broker.protocol=SASL_PLAINTEXT
    authorizer.class.name=kafka.security.authorizer.AclAuthorizer
    allow.everyone.if.no.acl.found=false
    super.users=User:admin
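
    Note that listeners and advertised.listeners are per broker: each broker's server.properties points at its own hostname. Assuming the kafka2 name from the question, broker 2 would differ only in these lines (a sketch):

```
broker.id=2
listeners=SASL_PLAINTEXT://kafka2:9092
advertised.listeners=SASL_PLAINTEXT://kafka2:9092
```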

    I created the JAAS file for Kafka:
    KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin_secret";
    };
    Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="admin"
    password="admin_secret";
    };
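
    Producers and consumers authenticate the same way as the brokers. A minimal client.properties sketch, assuming the admin user whose credentials are stored in ZooKeeper (any user with SCRAM credentials would work):

```
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin_secret";
```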

    One important thing to understand is that the Client section must use the same credentials as the JAAS file on the ZooKeeper side, while the KafkaServer section is used for inter-broker communication.

    I also needed to tell Kafka to use the JAAS file, which can be done by setting the KAFKA_OPTS variable:
    export KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka_2.12-2.5.0/config/kafka_server_jaas.conf
  • Create the admin user for the Kafka brokers.

  • Run the following command:
    kafka-configs.sh --zookeeper zookeeper:2181/kafka --alter --add-config 'SCRAM-SHA-256=[password=admin_secret]' --entity-type users --entity-name admin
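
    To confirm that the credentials were actually stored under the /kafka chroot, the same tool can describe the user (a sketch against a running ZooKeeper; note the /kafka suffix again):

```
kafka-configs.sh --zookeeper zookeeper:2181/kafka --describe --entity-type users --entity-name admin
```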

    As I mentioned earlier, my mistake was that I had not appended the /kafka chroot to the ZooKeeper address (note that everything that talks to ZooKeeper needs the /kafka suffix after the address). With that in place, start ZooKeeper and then Kafka and everything works fine.

    Regarding apache-kafka - Kafka SASL/SCRAM authentication failure, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/61594103/
