I am trying to ingest CSV files from Filebeat into HDFS via Logstash.
Filebeat ships them to Logstash successfully: with stdout { codec => rubydebug } in the output I can see the events being parsed. The problem seems to start inside the webhdfs output plugin.
logstash-sample.conf
input {
  beats {
    port => 5044
  }
}
output {
  stdout {
    codec => rubydebug
  }
  webhdfs {
    host => "x.x.x.x"
    port => 50070
    path => "/user/logstash/df=%{+YYY-MM-dd}/logstash-%{+HH}.log"
    user => "root"
  }
}
Error
[WARN ] 2020-06-25 11:43:27.437 [[main]>worker0] webhdfs - webhdfs write caused an exception: {"RemoteException":{"exception":"RecoveryInProgressException","javaClassName":"org.apache.hadoop.hdfs.protocol.RecoveryInProgressException","message":"Failed to APPEND_FILE /user/logstash/df=2020-06-25/logstash-11.log for DFSClient_NONMAPREDUCE_412561106_43 on 10.64.2.236 because lease recovery is in progress. Try again later.\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2591)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:124)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2639)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:487)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)\n"}}. Maybe you should increase retry_interval or reduce number of workers. Retrying...
[WARN ] 2020-06-25 11:43:27.465 [[main]>worker0] webhdfs - webhdfs write caused an exception: {"RemoteException":{"exception":"RecoveryInProgressException","javaClassName":"org.apache.hadoop.hdfs.protocol.RecoveryInProgressException","message":"Failed to APPEND_FILE /user/logstash/df=2020-06-25/logstash-11.log for DFSClient_NONMAPREDUCE_-352502454_44 on 10.64.2.236 because another recovery is in progress by DFSClient_NONMAPREDUCE_-601230909_43 on 10.64.2.236\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2599)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:124)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2639)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:487)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)\n"}}. Maybe you should increase retry_interval or reduce number of workers. Retrying...
[WARN ] 2020-06-25 11:43:27.984 [[main]>worker0] webhdfs - webhdfs write caused an exception: {"RemoteException":{"exception":"RecoveryInProgressException","javaClassName":"org.apache.hadoop.hdfs.protocol.RecoveryInProgressException","message":"Failed to APPEND_FILE /user/logstash/df=2020-06-25/logstash-11.log for DFSClient_NONMAPREDUCE_557749444_45 on 10.64.2.236 because another recovery is in progress by DFSClient_NONMAPREDUCE_-601230909_43 on 10.64.2.236\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2599)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:124)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2639)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:487)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)\n"}}. Maybe you should increase retry_interval or reduce number of workers. Retrying...
[WARN ] 2020-06-25 11:43:29.010 [[main]>worker0] webhdfs - webhdfs write caused an exception: {"RemoteException":{"exception":"RecoveryInProgressException","javaClassName":"org.apache.hadoop.hdfs.protocol.RecoveryInProgressException","message":"Failed to APPEND_FILE /user/logstash/df=2020-06-25/logstash-11.log for DFSClient_NONMAPREDUCE_1871050100_46 on 10.64.2.236 because another recovery is in progress by DFSClient_NONMAPREDUCE_-601230909_43 on 10.64.2.236\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2599)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:124)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2639)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:487)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)\n"}}. Maybe you should increase retry_interval or reduce number of workers. Retrying...
[WARN ] 2020-06-25 11:43:30.547 [[main]>worker0] webhdfs - webhdfs write caused an exception: {"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[10.64.2.236:50010,DS-99ecf21e-ad4a-41f0-a3ae-7d430e2f5ea0,DISK]], original=[DatanodeInfoWithStorage[10.64.2.236:50010,DS-99ecf21e-ad4a-41f0-a3ae-7d430e2f5ea0,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration."}}. Maybe you should increase retry_interval or reduce number of workers. Retrying...
[ERROR] 2020-06-25 11:43:32.570 [[main]>worker0] webhdfs - Max write retries reached. Events will be discarded. Exception: {"RemoteException":{"exception":"AlreadyBeingCreatedException","javaClassName":"org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException","message":"Failed to APPEND_FILE /user/logstash/df=2020-06-25/logstash-11.log for DFSClient_NONMAPREDUCE_2073135911_40 on 10.64.2.236 because this file lease is currently owned by DFSClient_NONMAPREDUCE_1387502348_39 on 10.64.2.236\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2604)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:124)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2639)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:487)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)\n"}}
[WARN ] 2020-06-25 11:43:33.528 [Ruby-0-Thread-5: :1] webhdfs - webhdfs write caused an exception: {"RemoteException":{"exception":"AlreadyBeingCreatedException","javaClassName":"org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException","message":"Failed to APPEND_FILE /user/logstash/df=2020-06-25/logstash-11.log for DFSClient_NONMAPREDUCE_1033064335_41 on 10.64.2.236 because this file lease is currently owned by DFSClient_NONMAPREDUCE_1387502348_39 on 10.64.2.236\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2604)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:124)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2639)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:487)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)\n"}}. Maybe you should increase retry_interval or reduce number of workers. Retrying...
[WARN ] 2020-06-25 11:43:33.556 [Ruby-0-Thread-5: :1] webhdfs - webhdfs write caused an exception: {"RemoteException":{"exception":"AlreadyBeingCreatedException","javaClassName":"org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException","message":"Failed to APPEND_FILE /user/logstash/df=2020-06-25/logstash-11.log for DFSClient_NONMAPREDUCE_1192450546_42 on 10.64.2.236 because this file lease is currently owned by DFSClient_NONMAPREDUCE_1387502348_39 on 10.64.2.236\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2604)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:124)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2639)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:487)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)\n"}}. Maybe you should increase retry_interval or reduce number of workers. Retrying...
[WARN ] 2020-06-25 11:43:34.077 [Ruby-0-Thread-5: :1] webhdfs - webhdfs write caused an exception: {"RemoteException":{"exception":"AlreadyBeingCreatedException","javaClassName":"org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException","message":"Failed to APPEND_FILE /user/logstash/df=2020-06-25/logstash-11.log for DFSClient_NONMAPREDUCE_-405184820_43 on 10.64.2.236 because this file lease is currently owned by DFSClient_NONMAPREDUCE_1387502348_39 on 10.64.2.236\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2604)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:124)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2639)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:487)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)\n"}}. Maybe you should increase retry_interval or reduce number of workers. Retrying...
[WARN ] 2020-06-25 11:43:35.099 [Ruby-0-Thread-5: :1] webhdfs - webhdfs write caused an exception: {"RemoteException":{"exception":"AlreadyBeingCreatedException","javaClassName":"org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException","message":"Failed to APPEND_FILE /user/logstash/df=2020-06-25/logstash-11.log for DFSClient_NONMAPREDUCE_2036955244_44 on 10.64.2.236 because this file lease is currently owned by DFSClient_NONMAPREDUCE_1387502348_39 on 10.64.2.236\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2604)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:124)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2639)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:487)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)\n"}}. Maybe you should increase retry_interval or reduce number of workers. Retrying...
[WARN ] 2020-06-25 11:43:36.621 [Ruby-0-Thread-5: :1] webhdfs - webhdfs write caused an exception: {"RemoteException":{"exception":"AlreadyBeingCreatedException","javaClassName":"org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException","message":"Failed to APPEND_FILE /user/logstash/df=2020-06-25/logstash-11.log for DFSClient_NONMAPREDUCE_245206024_45 on 10.64.2.236 because this file lease is currently owned by DFSClient_NONMAPREDUCE_1387502348_39 on 10.64.2.236\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2604)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:124)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2639)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:487)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)\n"}}. Maybe you should increase retry_interval or reduce number of workers. Retrying...
[ERROR] 2020-06-25 11:43:38.653 [Ruby-0-Thread-5: :1] webhdfs - Max write retries reached. Events will be discarded. Exception: {"RemoteException":{"exception":"AlreadyBeingCreatedException","javaClassName":"org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException","message":"Failed to APPEND_FILE /user/logstash/df=2020-06-25/logstash-11.log for DFSClient_NONMAPREDUCE_331276113_46 on 10.64.2.236 because this file lease is currently owned by DFSClient_NONMAPREDUCE_1387502348_39 on 10.64.2.236\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2604)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:124)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2639)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:487)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)\n"}}
Edit
hadoop fs -chmod -R 777 /user/logstash
This only changes the existing folders; every time I run Logstash it creates new files that are again inaccessible.
root@ambari:/etc/logstash/conf.d# hdfs dfs -ls /user/logstash
Found 1 items
drwxrwxrwx - root hdfs 0 2020-06-25 13:23 /user/logstash/df=2020-06-25
root@ambari:/etc/logstash/conf.d# hdfs dfs -ls /user/logstash/df=2020-06-25
Found 2 items
-rwxrwxrwx 3 root hdfs 63 2020-06-25 12:43 /user/logstash/df=2020-06-25/logstash-11.log
-rw-r--r-- 3 root hdfs 63 2020-06-25 13:23 /user/logstash/df=2020-06-25/logstash-13.log
The latest one is 13.
root@ambari:/etc/logstash# sudo -u hdfs hdfs dfs -chown -R logstash /user/logstash
root@ambari:/etc/logstash# hdfs dfs -ls /user
Found 6 items
drwxrwx--- - ambari-qa hdfs 0 2020-05-28 03:08 /user/ambari-qa
drwxr-xr-x - hadoop hdfs 0 2020-06-25 11:21 /user/hadoop
drwxr-xr-x - hdfs hdfs 0 2020-06-02 05:08 /user/hdfs
drwxrwxrwx - logstash hdfs 0 2020-06-26 07:23 /user/logstash
drwxrwxrwx - root hdfs 0 2020-06-26 09:53 /user/root
drwxr-xr-x - hdfs hdfs 0 2020-06-02 04:48 /user/sample
But when I run it again, it still returns the same error log, even after the chown on /user/logstash. It keeps creating files that the current user cannot access. What should I do so that the user can append to / access the file? Thanks in advance.
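Since the errors above all point at a stuck lease on logstash-11.log rather than at permissions, a hedged first step (assuming the hdfs debug tooling of Hadoop 2.7+ is available on this cluster; the path comes from the errors above) is to release that lease manually:

# Force lease recovery on the file the webhdfs output keeps failing to append to
hdfs debug recoverLease -path /user/logstash/df=2020-06-25/logstash-11.log -retries 3

If the lease is released and the append still fails, the cause is more likely the threading/replication issue described in the answer below.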
Best answer

@于连森: your process is appending to the file across HDFS datanodes. I was able to find some related information by searching the web for "hdfs Max write retries reached. Events will be discarded."
Locked file leases are a known issue when appending with multiple threads to the same file via webhdfs.
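A minimal sketch of avoiding that on the Logstash side. single_file_per_thread, retry_times and retry_interval are options of the logstash-output-webhdfs plugin; the values and the thread-id suffix in the path are illustrative, not settings from the original question:

output {
  webhdfs {
    host => "x.x.x.x"
    port => 50070
    user => "root"
    # One file per output worker thread, so concurrent appends never
    # compete for the same HDFS file lease.
    single_file_per_thread => true
    # With single_file_per_thread enabled, the path must contain the
    # thread id, otherwise all threads would still share one file.
    path => "/user/logstash/df=%{+YYYY-MM-dd}/logstash-%{+HH}-%{[@metadata][thread_id]}.log"
    # Back off longer between retries instead of hammering the NameNode.
    retry_times => 10
    retry_interval => 5
  }
}

Alternatively, running the pipeline with a single worker (pipeline.workers: 1, or -w 1 on the command line) avoids concurrent appends altogether, at the cost of throughput.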
With my research and tryouts, it turned out that if the replication factor for HDFS is greater than the number of datanodes available, append would fail to write as it needs to take care of multiple copies. Try setting a low replica factor for the log file.
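A hedged sketch of that replication-side check and fix, assuming the cluster really has only one datanode as the logs suggest (only 10.64.2.236 ever appears in the write pipeline); the path comes from the question, and the target replication of 1 is an assumption to match that single datanode:

# Show the current replication factor of the file that keeps failing
hdfs dfs -stat %r /user/logstash/df=2020-06-25/logstash-11.log

# Lower the replication factor of everything already under /user/logstash
# to 1 and wait (-w) until the change has been applied
hdfs dfs -setrep -w 1 /user/logstash

Files created later still get the cluster default, so dfs.replication in hdfs-site.xml would also have to be lowered for new partition files.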
A similar question about "hadoop - logstash to webhdfs Failed to APPEND_FILE /user/..." can be found on Stack Overflow: https://stackoverflow.com/questions/62574615/