
Hadoop: Protocol message was too large. May be malicious. Use CodedInputStream.setSizeLimit() to increase the size limit


I'm seeing this in my datanode logs. It's probably because I'm copying 5 million files into HDFS:

java.lang.IllegalStateException: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the size limit.
at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:332)
at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:310)
at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder.getBlockListAsLongs(BlockListAsLongs.java:288)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:190)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.blockReport(BPServiceActor.java:507)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:738)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:874)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large. May be malicious. Use CodedInputStream.setSizeLimit() to increase the size limit.
at com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
at com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
at com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
at com.google.protobuf.CodedInputStream.readSInt64(CodedInputStream.java:363)
at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:326)
... 7 more

I'm simply copying the files into HDFS with hadoop fs -put .... Recently I've started getting messages like these on the client:

15/06/30 15:00:58 INFO hdfs.DFSClient: Could not complete /pdf-nxml/file1.nxml._COPYING_ retrying...
15/06/30 15:01:05 INFO hdfs.DFSClient: Could not complete /pdf-nxml/2014-full/file2.nxml._COPYING_ retrying...

I get a message like the above roughly 3 times per minute, but the exceptions occur much more frequently on the datanodes.

How can I fix this?

EDIT
I had to restart Hadoop, and now it won't start up properly; every datanode's log file contains these:

2015-07-01 06:20:35,748 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Unsuccessfully sent block report 0x2ac82e1cf6e64,  containing 1 storage report(s), of which we sent 0. The reports had 6342936 total blocks and used 0 RPC(s). This took 542 msec to generate and 240 msecs for RPC and NN processing. Got back no commands.
2015-07-01 06:20:35,748 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService for Block pool BP-1043486900-10.0.1.42-1434126972501 (Datanode Uuid d5dcf9a0-c82d-49d8-8162-af5910c3e3fe) service to cruncher02/10.0.1.42:8020
java.lang.IllegalStateException: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large. May be malicious. Use CodedInputStream.setSizeLimit() to increase the size limit.
at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:332)
at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:310)
at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder.getBlockListAsLongs(BlockListAsLongs.java:288)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:190)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.blockReport(BPServiceActor.java:507)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:738)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:874)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large. May be malicious. Use CodedInputStream.setSizeLimit() to increase the size limit.
at com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
at com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
at com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
at com.google.protobuf.CodedInputStream.readSInt64(CodedInputStream.java:363)
at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:326)
... 7 more

Best Answer

The answer to this question was provided in the comments:

My hadoop 2.7.0 cluster was not starting. I had to recompile protobuf-2.5.0, changing com.google.protobuf.CodedInputStream#DEFAULT_SIZE_LIMIT to 64 << 24. Then I modified hdfs-site.xml to include ipc.maximum.data.length 134217728 and now it seems back up.
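The comment describes two changes. The first is a source-level patch to the protobuf library itself; a rough sketch of that edit, assuming the protobuf-java 2.5.0 source tree (the constant name comes straight from the comment above):

// com/google/protobuf/CodedInputStream.java (protobuf-java 2.5.0 source)
// The stock default rejects any protocol message larger than 64 MB, which a
// block report covering ~6.3 million blocks (as in the log above) can exceed.
// Before: private static final int DEFAULT_SIZE_LIMIT = 64 << 20;  // 64 MB
// After, per the comment above, then recompile and swap in the rebuilt jar:
private static final int DEFAULT_SIZE_LIMIT = 64 << 24;             // 1 GB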

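The second change is on the HDFS configuration side. A minimal sketch of the property the comment refers to, added to hdfs-site.xml on the NameNode and DataNodes (134217728 bytes is 128 MB, twice Hadoop's shipped 64 MB IPC limit), with the daemons restarted afterwards:

<!-- hdfs-site.xml: allow larger IPC messages so oversized block reports are accepted -->
<property>
  <name>ipc.maximum.data.length</name>
  <value>134217728</value>
</property>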
A similar question on Stack Overflow: https://stackoverflow.com/questions/31140481/
