
hadoop - Flume not writing logs to HDFS


So I configured Flume to write my apache2 access logs to HDFS... and from Flume's own log I can see that all of the configuration is picked up correctly, but I don't know why it still isn't writing anything to HDFS. Here is my Flume configuration file:

#agent and component of agent
search.sources = so
search.sinks = si
search.channels = sc

# Configure a channel that buffers events in memory:
search.channels.sc.type = memory
search.channels.sc.capacity = 20000
search.channels.sc.transactionCapacity = 100


# Configure the source:
search.sources.so.channels = sc
search.sources.so.type = exec
search.sources.so.command = tail -F /var/log/apache2/access.log

# Describe the sink:
search.sinks.si.channel = sc
search.sinks.si.type = hdfs
search.sinks.si.hdfs.path = hdfs://localhost:9000/flumelogs/
search.sinks.si.hdfs.writeFormat = Text
search.sinks.si.hdfs.fileType = DataStream
search.sinks.si.hdfs.rollSize=0
search.sinks.si.hdfs.rollCount = 10000
search.sinks.si.hdfs.batchSize=1000
search.sinks.si.rollInterval=1

And here is my Flume log:

14/12/18 17:47:56 INFO node.AbstractConfigurationProvider: Creating channels
14/12/18 17:47:56 INFO channel.DefaultChannelFactory: Creating instance of channel sc type memory
14/12/18 17:47:56 INFO node.AbstractConfigurationProvider: Created channel sc
14/12/18 17:47:56 INFO source.DefaultSourceFactory: Creating instance of source so, type exec
14/12/18 17:47:56 INFO sink.DefaultSinkFactory: Creating instance of sink: si, type: hdfs
14/12/18 17:47:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/12/18 17:47:56 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
14/12/18 17:47:56 INFO node.AbstractConfigurationProvider: Channel sc connected to [so, si]
14/12/18 17:47:56 INFO node.Application: Starting new configuration:{ sourceRunners:{so=EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:so,state:IDLE} }} sinkRunners:{si=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@3de76481 counterGroup:{ name:null counters:{} } }} channels:{sc=org.apache.flume.channel.MemoryChannel{name: sc}} }
14/12/18 17:47:56 INFO node.Application: Starting Channel sc
14/12/18 17:47:56 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: sc: Successfully registered new MBean.
14/12/18 17:47:56 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: sc started
14/12/18 17:47:56 INFO node.Application: Starting Sink si
14/12/18 17:47:56 INFO node.Application: Starting Source so
14/12/18 17:47:56 INFO source.ExecSource: Exec source starting with command:tail -F /var/log/apache2/access.log
14/12/18 17:47:56 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: si: Successfully registered new MBean.
14/12/18 17:47:56 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: si started
14/12/18 17:47:56 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: so: Successfully registered new MBean.
14/12/18 17:47:56 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: so started

This is the command I use to start Flume:

flume-ng agent -n search -c conf -f ../conf/flume-conf-search 

And I have already created the path in HDFS:

       hadoop fs -mkdir hdfs://localhost:9000/flumelogs

But I don't know why it isn't writing to HDFS. I can see the apache2 access logs being generated, yet Flume is not sending them to the /flumelogs directory in HDFS... Please help!!

Best Answer

I don't think this is a permissions problem; if it were, you would see exceptions when Flume flushes to HDFS. There are two possible causes:

1) There is not enough data in the buffer, so Flume doesn't think it needs to flush yet. Your sink batch size is 1000 and your channel capacity is 20000. To verify this, CTRL-C your Flume process; that will force the process to flush to HDFS.
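If this first cause applies, a minimal sketch of sink settings that flush smaller batches and roll files on a timer can make events show up in HDFS much sooner while testing; the specific values below are illustrative assumptions, and note that every HDFS sink option carries the hdfs. prefix:

# Illustrative values: flush after only 10 events and roll a new file every 30 seconds
search.sinks.si.hdfs.batchSize = 10
search.sinks.si.hdfs.rollInterval = 30
search.sinks.si.hdfs.rollSize = 0
search.sinks.si.hdfs.rollCount = 0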

2) The more likely cause is that your exec source is not running properly. This could be a path problem with the tail command. Use the full path to tail in the command, e.g. /bin/tail -F /var/log/apache2/access.log or /usr/bin/tail -F /var/log/apache2/access.log (depending on your system). Check with

which tail 

to find the correct path.
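For reference, a minimal sketch of the corrected source line, assuming `which tail` reports /usr/bin/tail on your system:

# Use the absolute path reported by `which tail` so the exec source can start the command
search.sources.so.command = /usr/bin/tail -F /var/log/apache2/access.log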

Regarding "hadoop - Flume not writing logs to HDFS", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/27546621/
