
hadoop - HDFS block question


When I run the fsck command, it shows a total of 68 blocks (average block size 286572 B). Why do I have only 68 blocks?

I recently installed CDH 5 with Hadoop 2.6.0.

--

[hdfs@cluster1 ~]$ hdfs fsck /

Connecting to namenode via http://cluster1.abc:50070
FSCK started by hdfs (auth:SIMPLE) from /192.168.101.241 for path / at Fri Sep 25 09:51:56 EDT 2015
....................................................................Status: HEALTHY
Total size: 19486905 B
Total dirs: 569
Total files: 68
Total symlinks: 0
Total blocks (validated): 68 (avg. block size 286572 B)
Minimally replicated blocks: 68 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 1.9411764
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 3
Number of racks: 1
FSCK ended at Fri Sep 25 09:51:56 EDT 2015 in 41 milliseconds


The filesystem under path '/' is HEALTHY

--

Here is what I get when I run the hdfs dfsadmin -report command:

[hdfs@cluster1 ~]$ hdfs dfsadmin -report
Configured Capacity: 5715220577895 (5.20 TB)
Present Capacity: 5439327449088 (4.95 TB)
DFS Remaining: 5439303270400 (4.95 TB)
DFS Used: 24178688 (23.06 MB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 504

--

Also, my Hive queries fail to launch MapReduce jobs. Could that be related to the issue above?

Any suggestions?

Thanks!

Best Answer

A block is a chunk of data distributed across the nodes of the filesystem. So, for example, if you have a 200 MB file and the default 128 MB block size, it will actually be stored as 2 blocks: one of 128 MB and one of 72 MB.
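If you want to check which block size your cluster is actually configured with, you can ask HDFS directly. This is a minimal sketch assuming a standard Hadoop 2.x / CDH 5 setup, where dfs.blocksize defaults to 134217728 bytes (128 MB); a file's block count is its size divided by this value, rounded up, e.g. ceil(200 MB / 128 MB) = 2.

--

# Print the configured HDFS block size in bytes
# (Hadoop 2.x default: 134217728 = 128 MB; clusters may override this)
hdfs getconf -confKey dfs.blocksize

--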

So don't worry about the blocks; the framework takes care of them. As the fsck report shows, you have 68 files in HDFS, and each one is far smaller than the block size (the average block size is only 286572 B), so each file occupies a single block, hence 68 blocks.
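You can verify the file-to-block mapping yourself: fsck can print the block breakdown per file. A sketch using the same root path as the report above; -files and -blocks are standard fsck options.

--

# List every file under / together with the blocks it occupies;
# each of the 68 files here should map to exactly one block,
# since all of them are smaller than the block size.
hdfs fsck / -files -blocks

--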

For this hadoop - HDFS block question, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/32784010/
