logging - Configuring Kafka Connect distributed connector logs (connectDistributed.out)


Currently, two kinds of Kafka Connect logs are being collected:

  • connect-rest.log.2018-07-01-21 , connect-rest.log.2018-07-01-22 ...
  • connectDistributed.out

The problem is that I don't know how to configure the connectDistributed.out file of Kafka Connect. Here is sample output from that file:
    [2018-07-11 08:42:40,798] INFO WorkerSinkTask{id=elasticsearch-sink-connector-0}
    Committing offsets asynchronously using sequence number 216:
    {test-1=OffsetAndMetadata{offset=476028, metadata=''},
    test-0=OffsetAndMetadata{offset=478923, metadata=''},
    test-2=OffsetAndMetadata{offset=477944, metadata=''}}
    (org.apache.kafka.connect.runtime.WorkerSinkTask:325)
    [2018-07-11 08:43:40,798] INFO WorkerSinkTask{id=elasticsearch-sink-connector-0}
    Committing offsets asynchronously using sequence number 217:
    {test-1=OffsetAndMetadata{offset=476404, metadata=''},
    test-0=OffsetAndMetadata{offset=479241, metadata=''},
    test-2=OffsetAndMetadata{offset=478316, metadata=''}}
    (org.apache.kafka.connect.runtime.WorkerSinkTask:325)

With no logging options configured for it, the file keeps growing over time. Today it reached 20GB and I had to empty it manually. So my question is: how do I configure this connectDistributed.out?
I am already configuring logging options for the other components (e.g. the Kafka broker logs).

Below are some of the Kafka-related logging configurations I am using under confluent-4.1.0/etc/kafka.

    log4j.properties
    log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
    log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

    log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
    log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
    log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

    log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
    log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
    log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

    log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
    log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
    log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

    # Change the two lines below to adjust ZK client logging
    log4j.logger.org.I0Itec.zkclient.ZkClient=INFO
    log4j.logger.org.apache.zookeeper=INFO

    # Change the two lines below to adjust the general broker logging level (output to server.log and stdout)
    log4j.logger.kafka=INFO
    log4j.logger.org.apache.kafka=INFO

    # Change to DEBUG or TRACE to enable request logging
    log4j.logger.kafka.request.logger=WARN, requestAppender
    log4j.additivity.kafka.request.logger=false

    # Uncomment the lines below and change log4j.logger.kafka.network.RequestChannel$ to TRACE for additional output
    # related to the handling of requests
    #log4j.logger.kafka.network.Processor=TRACE, requestAppender
    #log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
    #log4j.additivity.kafka.server.KafkaApis=false
    log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
    log4j.additivity.kafka.network.RequestChannel$=false

    log4j.logger.kafka.controller=TRACE, controllerAppender
    log4j.additivity.kafka.controller=false

    log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
    log4j.additivity.kafka.log.LogCleaner=false

    log4j.logger.state.change.logger=TRACE, stateChangeAppender
    log4j.additivity.state.change.logger=false

    # Access denials are logged at INFO level, change to DEBUG to also log allowed accesses
    log4j.logger.kafka.authorizer.logger=INFO, authorizerAppender
    log4j.additivity.kafka.authorizer.logger=false

    connect-log4j.properties
    log4j.rootLogger=INFO, stdout

    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
    log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c:%L)%n

    log4j.logger.org.apache.zookeeper=ERROR
    log4j.logger.org.I0Itec.zkclient=ERROR
    log4j.logger.org.reflections=ERROR


    log4j.appender.kafkaConnectRestAppender=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.kafkaConnectRestAppender.DatePattern='.'yyyy-MM-dd-HH
    log4j.appender.kafkaConnectRestAppender.File=/home/ec2-user/logs/connect-rest.log
    log4j.appender.kafkaConnectRestAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.kafkaConnectRestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

    log4j.logger.org.apache.kafka.connect.runtime.rest=INFO, kafkaConnectRestAppender
    log4j.additivity.org.apache.kafka.connect.runtime.rest=false

Best Answer

The connectDistributed.out file only exists when you run Kafka Connect in daemon mode, e.g.

    connect-distributed -daemon connect-distributed.properties

Reason: in the kafka-run-class script, CONSOLE_OUTPUT_FILE is set to connectDistributed.out:
    # Launch mode
    if [ "x$DAEMON_MODE" = "xtrue" ]; then
    nohup $JAVA $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp $CLASSPATH $KAFKA_OPTS "$@" > "$CONSOLE_OUTPUT_FILE" 2>&1 < /dev/null &
    else...
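
Since that file is just the redirected stdout of the daemon, written under the script's log directory, you can at least relocate it without changing anything else. A minimal sketch, assuming your version of kafka-run-class derives CONSOLE_OUTPUT_FILE from the LOG_DIR environment variable (check your script before relying on this):

    # Assumption: kafka-run-class builds CONSOLE_OUTPUT_FILE from LOG_DIR, so pointing
    # LOG_DIR at a volume with more space moves connectDistributed.out there too.
    export LOG_DIR=/data/kafka-connect-logs
    connect-distributed -daemon connect-distributed.properties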

Option 1: Load a custom log4j properties file

You can set the KAFKA_LOG4J_OPTS environment variable to point at any log4j properties file you want before starting Connect (see the example below):
    $ export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:///path/to/connect-log4j-new.properties"
    $ connect-distributed connect-distributed.properties

Note: -daemon is not used here.

If you no longer have a ConsoleAppender in your log4j properties, this command will print almost nothing and just appear to hang, so it is a good idea to nohup it.
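
For example, a minimal sketch of backgrounding it that way:

    # Assumes KAFKA_LOG4J_OPTS was exported as shown above; nohup keeps the worker
    # running after logout, and the now-minimal console output is discarded.
    nohup connect-distributed connect-distributed.properties > /dev/null 2>&1 &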

The default log4j config is named connect-log4j.properties; in the Confluent Platform it lives in the etc/kafka/ folder. This is what it looks like by default:
    log4j.rootLogger=INFO, stdout

    log4j.appender.stdout=org.apache.log4j.ConsoleAppender

To cap the log file size, you would need to change the root logger to go to a FileAppender rather than the ConsoleAppender, but I prefer to use a DailyRollingFileAppender.

    Here is an example
    log4j.rootLogger=INFO, stdout, FILE

    log4j.appender.FILE=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.FILE.DatePattern='.'yyyy-MM-dd
    log4j.appender.FILE.File=/var/log/kafka-connect/connect.log
    log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
    log4j.appender.FILE.layout.ConversionPattern=[%d] %p %m (%c)%n

    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
    log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c:%L)%n

    log4j.logger.org.apache.zookeeper=ERROR
    log4j.logger.org.I0Itec.zkclient=ERROR
    log4j.logger.org.reflections=ERROR
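
If you specifically want a hard size cap rather than daily rolling, log4j's size-based RollingFileAppender works too. A minimal sketch; the path, max size, and backup count are placeholders to adjust:

    log4j.rootLogger=INFO, FILE

    # Roll connect.log once it reaches 100MB, keeping at most 10 rolled files (~1GB on disk in total).
    log4j.appender.FILE=org.apache.log4j.RollingFileAppender
    log4j.appender.FILE.File=/var/log/kafka-connect/connect.log
    log4j.appender.FILE.MaxFileSize=100MB
    log4j.appender.FILE.MaxBackupIndex=10
    log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
    log4j.appender.FILE.layout.ConversionPattern=[%d] %p %m (%c)%n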

Regarding "logging - Configuring Kafka Connect distributed connector logs (connectDistributed.out)", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/51281484/
