
hadoop - How to fix "File could only be replicated to 0 nodes instead of minReplication (=1)."?


I asked a similar question a while ago and thought I had solved the problem, but it turned out the error only went away because I was working with a smaller dataset.

Many people have asked this question, and I have gone through every internet post I could find, but I still haven't made any progress.

What I'm trying to do: I have an external table browserdata in Hive that references about 1 GB of data. I'm trying to shove that data into a partitioned table partbrowserdata, whose definition looks like this:

CREATE EXTERNAL TABLE IF NOT EXISTS partbrowserdata (                                                                                                                                                              
BidID string,
Timestamp_ string,
iPinYouID string,
UserAgent string,
IP string,
RegionID int,
AdExchange int,
Domain string,
URL string,
AnonymousURL string,
AdSlotID string,
AdSlotWidth int,
AdSlotHeight int,
AdSlotVisibility string,
AdSlotFormat string,
AdSlotFloorPrice decimal,
CreativeID string,
BiddingPrice decimal,
AdvertiserID string,
UserProfileIDs array<string>
)
PARTITIONED BY (CityID int)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '/user/maria_dev/data2';

using this query:

insert into table partbrowserdata partition(cityid) 
select BidID, Timestamp_, iPinYouID, UserAgent, IP, RegionID, AdExchange, Domain, URL, AnonymousURL, AdSlotID, AdSlotWidth, AdSlotHeight, AdSlotVisibility, AdSlotFormat, AdSlotFloorPrice, CreativeID, BiddingPrice, AdvertiserID, UserProfileIDs, CityID
from browserdata;

Every time, on every platform (I have tried both Hortonworks and Cloudera), I get this message:

Caused by: 

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/maria_dev/data2/.hive-staging_hive_2019-02-06_18-58-39_333_7627883726303986643-1/_task_tmp.-ext-10000/cityid=219/_tmp.000000_3 could only be replicated to 0 nodes instead of minReplication (=1). There are 4 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1720)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3389)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:683)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:214)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:495)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211)

at org.apache.hadoop.ipc.Client.call(Client.java:1504)
at org.apache.hadoop.ipc.Client.call(Client.java:1441)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:413)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1814)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1610)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:773)

What am I supposed to do? I don't understand why this is happening. It does seem like a memory issue, though, because I can insert a few rows, but for some reason not all of them. Note that I have plenty of space on HDFS, so an extra gig of data is pennies on the dollar; could it be a RAM issue?

Here is my dfs report output:

[screenshot of the dfs report output]

I have tried this with every execution engine: spark, tez, and mr.
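For reference, switching engines between attempts was just a session setting (the standard Hive property; shown here as a sketch):

SET hive.execution.engine=mr;   -- likewise tez and spark on the other attempts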

Please don't suggest solutions that involve formatting the namenode; they don't work, and they aren't solutions in any sense anyway.

Update:

After looking through the namenode's logs, I noticed this, in case it helps:

Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}

These logs suggest:

For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology

How do I do that?
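If I understand correctly, that would mean adding something along these lines to the NameNode's log4j.properties (this is just my guess at the usual Hadoop log4j approach, reusing the logger names from the message above; it would need a NameNode restart, or a hadoop daemonlog -setlevel call, to take effect):

log4j.logger.org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy=DEBUG
log4j.logger.org.apache.hadoop.net.NetworkTopology=DEBUG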

I also noticed a similar unresolved post here:

HDP 2.2@Linux/CentOS@OracleVM (Hortonworks) fails on remote submission from Eclipse@Windows

Update 2:

I just tried partitioning the data with Spark, and it works! So this must be a Hive bug...

Update 3:

I just tested this on MapR and it works, but MapR doesn't use HDFS. This is definitely some kind of HDFS + Hive combination bug.

Proof:

[screenshot of the insert completing successfully on MapR]

Best Answer

I ended up reaching out to the Cloudera forums, and they answered my question within minutes: http://community.cloudera.com/t5/Storage-Random-Access-HDFS/Why-can-t-I-partition-a-1-gigabyte-dataset-into-300/m-p/86554#M3981. I tried Harsh J's suggestion and it worked beautifully!

Here is what he said:

If you are dealing with unordered partitioning from a data source, you can end up creating a lot of files in parallel as the partitioning is attempted.

In HDFS, when a file (or more specifically, its block) is open, the DataNode performs a logical reservation of its target block size. So if your configured block size is 128 MiB, then every concurrently open block will deduct that value (logically) from the available remaining space the DataNode publishes to the NameNode.

This reservation is done to help manage space and guarantees of a full block write to a client, so that a client that's begun writing its file never runs into an out of space exception mid-way.

Note: When the file is closed, only the actual length is persisted, and the reservation calculation is adjusted to reflect the reality of used and available space. However, while the file block remains open, it's always considered to be holding a full block size.

The NameNode further will only select a DataNode for a write if it can guarantee full target block size. It will ignore any DataNodes it deems (based on its reported values and metrics) unfit for the requested write's parameters. Your error shows that the NameNode has stopped considering your only live DataNode when trying to allocate a new block request.

As an example, 70 GiB of available space will prove insufficient if there will be more than 560 concurrent, open files (70 GiB divided into 128 MiB block sizes). So the DataNode will 'appear full' at the point of ~560 open files, and will no longer serve as a valid target for further file requests.

It appears per your description of the insert that this is likely, as each of the 300 chunks of the dataset may still carry varied IDs, resulting in a lot of open files requested per parallel task, for insert into several different partitions.

You could 'hack' your way around this by reducing the request block size within the query (set dfs.blocksize to 8 MiB for ex.), influencing the reservation calculation. However, this may not be a good idea for larger datasets as you scale, since it will drive up the file:block count and increase memory costs for the NameNode.

A better way to approach this would be to perform a pre-partitioned insert (sort first by partition and then insert in a partitioned manner). Hive for example provides this as an option: hive.optimize.sort.dynamic.partition, and if you use plain Spark or MapReduce then their default strategy of partitioning does exactly this.
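For reference, the block-size "hack" he mentions would amount to something like this in the Hive session before running the insert (a sketch based on his 8 MiB example; 8 MiB = 8388608 bytes):

-- Shrink the per-file block reservation so each concurrently open file
-- "costs" less of a DataNode's advertised free space. Trade-off: more
-- blocks per file and more NameNode memory as the data grows.
SET dfs.blocksize=8388608;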

So, at the end of the day, I did set hive.optimize.sort.dynamic.partition=true; and everything started working. But I also did something else.

Here is a post from when I was investigating this issue earlier: Why do I get "File could only be replicated to 0 nodes" when writing to a partitioned table? I was running into a problem where Hive could not partition my dataset, because hive.exec.max.dynamic.partitions was set to 100, so I googled the issue, and somewhere on the Hortonworks forums I saw an answer saying I should just do this:

SET hive.exec.max.dynamic.partitions=100000; 
SET hive.exec.max.dynamic.partitions.pernode=100000;

That was a separate problem, though; perhaps Hive tries to open as many concurrent connections as whatever you set hive.exec.max.dynamic.partitions to, because my insert query did not start working until I lowered these values to 500.
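Putting it all together, the combination of session settings that the insert finally worked with looked roughly like this (a recap of the values described above, not a verbatim copy of my session):

SET hive.optimize.sort.dynamic.partition=true;
SET hive.exec.max.dynamic.partitions=500;
SET hive.exec.max.dynamic.partitions.pernode=500;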

About hadoop - How to fix "File could only be replicated to 0 nodes instead of minReplication (=1)."?: a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/54561086/
