
java - How to make org.apache.commons.logging.Log.info("message") write to a log file

Reposted. Author: 可可西里. Updated: 2023-11-01 16:08:30

I am working on open-source Hadoop development on the Java platform.

I added a class (in the YARN timeline server) that, in addition to logging messages, performs various other tasks.

I write the log messages using two classes from the commons-logging library:

import org.apache.commons.logging.Log;

import org.apache.commons.logging.LogFactory;

Example:

private static final Log LOG = LogFactory.getLog(IntermediateHistoryStore.class);
LOG.info("message");
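commons-logging is only a facade: where LOG.info(...) actually lands is decided by the configuration of the underlying implementation (log4j here). As a self-contained illustration of programmatically routing log output to a file, here is a sketch using the JDK's built-in java.util.logging instead; the class name and file name are illustrative, not from the original code:

```java
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class FileLoggingSketch {
    private static final Logger LOG = Logger.getLogger(FileLoggingSketch.class.getName());

    public static void main(String[] args) throws IOException {
        // Attach a FileHandler so records go to a file, not only to the console.
        // true = append to an existing file rather than truncating it.
        FileHandler handler = new FileHandler("yarn-timelineserver-sketch.log", true);
        handler.setFormatter(new SimpleFormatter()); // plain text instead of the default XML
        LOG.addHandler(handler);

        LOG.info("message"); // now written to both the console and the file
        handler.close();     // flush and release the file
    }
}
```

The same idea applies to log4j: the logging call in the code stays unchanged, and only the attached handler/appender decides the destination.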

To see my changes, I run the timeline service through Hadoop's cmd (or through Task Manager):

C:\hdp\hadoop-2.7.1.2.3.0.0-2557> C:\Java\jdk1.7.0_79\bin\java -Xmx1000m -Dhadoop.log.dir=c:\hadoop\logs\hadoop -Dyarn.log.dir=c:\hadoop\logs\hadoop -Dhadoop.log.file=yarn-timelineserver-B-YAIF-9020.log -Dyarn.log.file=yarn-timelineserver-B-YAIF-9020.log -Dyarn.home.dir=C:\hdp\hadoop-2.7.1.2.3.0.0-2557 -Dyarn.id.str= -Dhadoop.home.dir=C:\hdp\hadoop-2.7.1.2.3.0.0-2557 -Dhadoop.root.logger=INFO,DRFA -Dyarn.root.logger=INFO,DRFA -Djava.library.path=;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\bin -Dyarn.policy.file=hadoop-policy.xml -Djava.library.path=;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\bin -classpath C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\common\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\common\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\hdfs;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\hdfs\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\hdfs\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\mapreduce\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\mapreduce\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\share\hadoop\yarn\lib\*;C:\hdp\hadoop-2.7.1.2.3.0.0-2557\etc\hadoop\timelineserver-config\log4j.properties org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer

After that, I also need to run a Pig script through the Hadoop cmd.

Problem: everything I log is written straight to the console (cmd) and never reaches the log file (yarn-timelineserver.log).

Hadoop cmd output:

AI: INFO 17-11-2015 11:22, 1: Configuration file has been successfully found as resource
AI: WARN 17-11-2015 11:22, 1: 'MaxTelemetryBufferCapacity': null value is replaced with '500'
AI: WARN 17-11-2015 11:22, 1: 'FlushIntervalInSeconds': null value is replaced with '5'
AI: WARN 17-11-2015 11:22, 1: Found an old version of HttpClient jar, for best performance consider upgrading to version 4.3+
AI: INFO 17-11-2015 11:22, 1: Using Apache HttpClient 4.2
AI: TRACE 17-11-2015 11:22, 1: No back-off container defined, using the default 'EXPONENTIAL'
AI: WARN 17-11-2015 11:22, 1: 'Channel.MaxTransmissionStorageCapacityInMB': null value is replaced with '10'
AI: TRACE 17-11-2015 11:22, 1: C:\Users\b-yaif\AppData\Local\Temp\1\AISDK\native\1.0.2 folder exists
AI: TRACE 17-11-2015 11:22, 1: Java process name is set to 'java#1'
AI: TRACE 17-11-2015 11:22, 1: Successfully loaded library 'applicationinsights-core-native-win64.dll'
AI: TRACE 17-11-2015 11:22, 1: Registering PC 'JSDK_ProcessMemoryPerformanceCounter'
AI: TRACE 17-11-2015 11:22, 1: Registering PC 'JSDK_ProcessCpuPerformanceCounter'
AI: TRACE 17-11-2015 11:22, 1: Registering PC 'JSDK_WindowsPerformanceCounterAsPC'

[INFO] IntermediateHistoryStore - The variable ( telemetry ) is initialized successfully....!
[INFO] IntermediateHistoryStore - The variable ( originalStorage ) is initialized successfully....!

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/hdp/hadoop-2.7.1.2.3.0.0-2557/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/hdp/hadoop-2.7.1.2.3.0.0-2557/share/hadoop/yarn/SaveHistoryToFile-1.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

[INFO] MetricsConfig - loaded properties from hadoop-metrics2.properties

[INFO] MetricsSystemImpl - Scheduled snapshot period at 10 second(s).

[INFO] MetricsSystemImpl - ApplicationHistoryServer metrics system started

[INFO] LeveldbTimelineStore - Using leveldb path c:/hadoop/logs/hadoop/timeline/leveldb-timeline-store.ldb

[INFO] LeveldbTimelineStore - Loaded timeline store version info 1.0

[INFO] LeveldbTimelineStore - Starting deletion thread with ttl 604800000 and cycle interval 300000

[INFO] LeveldbTimelineStore - Deleted 2 entities of type MAPREDUCE_JOB

[INFO] LeveldbTimelineStore - Deleted 4 entities of type MAPREDUCE_TASK

[INFO] LeveldbTimelineStateStore - Loading the existing database at th path: c:/hadoop/logs/hadoop/timeline-state/timeline-state-store.ldb

[INFO] LeveldbTimelineStore - Discarded 6 entities for timestamp 1447147360471 and earlier in 0.031 seconds

[INFO] LeveldbTimelineStateStore - Loaded timeline state store version info 1.0

[INFO] LeveldbTimelineStateStore - Loading timeline service state from leveldb

[INFO] LeveldbTimelineStateStore - Loaded 138 master keys and 0 tokens from leveldb, and latest sequence number is 0
[INFO] TimelineDelegationTokenSecretManagerService$TimelineDelegationTokenSecretManager - Recovering TimelineDelegationTokenSecretManager
[INFO] AbstractDelegationTokenSecretManager - Updating the current master key for generating delegation tokens
[INFO] AbstractDelegationTokenSecretManager - Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
[INFO] AbstractDelegationTokenSecretManager - Updating the current master key for generating delegation tokens
[INFO] CallQueueManager - Using callQueue class java.util.concurrent.LinkedBlockingQueue
[INFO] Server - Starting Socket Reader #1 for port 10200

[INFO] Server - Starting Socket Reader #2 for port 10200

[INFO] Server - Starting Socket Reader #3 for port 10200

[INFO] Server - Starting Socket Reader #4 for port 10200

[INFO] Server - Starting Socket Reader #5 for port 10200

[INFO] RpcServerFactoryPBImpl - Adding protocol org.apache.hadoop.yarn.api.ApplicationHistoryProtocolPB to the server
[INFO] Server - IPC Server Responder: starting
[INFO] Server - IPC Server listener on 10200: starting
[INFO] ApplicationHistoryClientService - Instantiated ApplicationHistoryClientService at b-yaif-9020.middleeast.corp.microsoft.com/10.165.224.174:10200
[INFO] ApplicationHistoryServer - Instantiating AHSWebApp at b-yaif-9020.middleeast.corp.microsoft.com:8188
[WARN] HttpRequestLog - Jetty request log can only be enabled using Log4j
[INFO] HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
[INFO] HttpServer2 - Added global filter 'Timeline Authentication Filter' (class=org.apache.hadoop.yarn.server.timeline.security.TimelineAuthenticationFilter)
[INFO] HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context applicationhistory
[INFO] HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
[INFO] HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
[INFO] HttpServer2 - adding path spec: /applicationhistory/*
[INFO] HttpServer2 - adding path spec: /ws/*
[INFO] HttpServer2 - Jetty bound to port 8188
[INFO] AbstractDelegationTokenSecretManager - Updating the current master key for generating delegation tokens
[INFO] AbstractDelegationTokenSecretManager - Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)

I want every line starting with [INFO] to be written to the log file that the YARN timeline server writes (yarn-timelineserver.log).

Best Answer

I think you should use log4j rather than commons-logging. It is simple and the most widely used logging API, and it can log to the console as well as to a file.
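Since the launch command above already passes -Dhadoop.root.logger=INFO,DRFA, the missing piece is usually a log4j.properties that defines the DRFA appender and is visible to log4j. A sketch of such a file, modeled on Hadoop's stock configuration (the appender name DRFA and the hadoop.log.dir/hadoop.log.file properties follow Hadoop conventions; treat this as an assumed example, not the exact file shipped with your build):

```properties
# Root logger level/appender come from -Dhadoop.root.logger (e.g. INFO,DRFA).
log4j.rootLogger=${hadoop.root.logger}

# DRFA: a daily rolling file appender writing to ${hadoop.log.dir}/${hadoop.log.file},
# both of which the launch command sets via -Dhadoop.log.dir and -Dhadoop.log.file.
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```

One caveat: log4j 1.x discovers log4j.properties as a resource in a classpath *directory* or jar; a classpath entry that points at the log4j.properties file itself (as in the command above) may leave log4j unconfigured, which is a common reason output stays on the console.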

A similar question about java - How to make org.apache.commons.logging.Log.info("message") write to a log file was found on Stack Overflow: https://stackoverflow.com/questions/33755236/
